Dataset columns: "Unnamed: 0" (int64, values 0 to 16k), "text_prompt" (string, lengths 110 to 62.1k characters), "code_prompt" (string, lengths 37 to 152k characters).
14,800
Given the following text description, write Python code to implement the functionality described below step by step Description: Basics * algoritmos * arquitectura * funciones basicas input, output, variables, while, for, if, def, vectores, diccionario, tupla y lista, graficacion https Step1: ''' Un número perfecto es un número natural que es igual a la suma de sus divisores propios positivos. Así, 6 es un número perfecto porque sus divisores propios son 1, 2 y 3; y 6 = 1 + 2 + 3. Los siguientes números perfectos son 28, 496 y 8128. ''' num=input("ingrese un numero Step2: Diferencias entre listas y tuplas Una lista puede ser alterada, no así una tupla. (cambiar sus valores internos) Una tupla puede ser utilizada como clave en un diccionario, no así una lista. Una tupla consume menos espacio que una lista Step3: tiempo https
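For reference, the perfect-number task stated above (a natural number that equals the sum of its proper positive divisors, so 6 = 1 + 2 + 3) can be sketched in a few lines of Python 3. This is only an illustration of the definition, not the notebook's own solution, which reads the number with Python 2 style input() and print:

def is_perfect(n):
    # Sum the proper divisors of n (every divisor strictly smaller than n).
    divisor_sum = sum(i for i in range(1, n) if n % i == 0)
    return n > 1 and divisor_sum == n

# The description lists 6, 28 and 496 (and then 8128) as perfect numbers.
print([n for n in range(2, 500) if is_perfect(n)])  # -> [6, 28, 496]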
Python Code: import sys print ('maximo float: ',sys.float_info.max) print ('minimo float: ',sys.float_info.min) print ('int: numero de bits por digito ',sys.int_info.bits_per_digit, 'esto es: ', 2**sys.int_info.bits_per_digit) import matplotlib.pyplot as plt import numpy as np plt.close("all") x=np.linspace(-10,10) cuadrado=plt.plot(x,x*x) plt.axhline(0,color='k') plt.axvline(0,color='k') plt.savefig("cuadrado_py.png") plt.show() import matplotlib.pyplot as plt from numpy import * plt.close("all") datos=loadtxt("experimento.dat") x=datos[:,0] y=datos[:,1] plt.grid(True) xlin=linspace(0.0,1.2) exp=plt.plot(x,y,'+',color='r',markersize=5) linea=plt.plot(xlin,9.8*xlin+0.5,'-',color='b') plt.text(0.6211,5.10204,"v(t)=9.8t + vo",fontsize=14) plt.xlabel("t (segundos)") plt.ylabel("v(m/s)") plt.title("Datos experimento de caida libre") plt.savefig("experimento_py.png") plt.show() #!/usr/bin/env python import math #recoleccion de informacion vo=input("ingrese la velocidad inicial del cuerpo: ") ang=input("ingrese el angulo con el cual es lanzado (formato deg): ") g=9.8; #calculos iniciales ymax=(((vo**2)*(math.sin(ang)**2)))/(2*g) xmax=(((vo*vo)*(math.sin(ang)*math.cos(ang)))/(g)) t=(((2*vo)*(math.sin(ang)))/g) #si el angulo es igual a 90 distancia en x es 0 despreciando la friccion if (ang==90): xmax=0 print"el alcance horizontal maximo es ",xmax print"el tiempo de vuelo es ",t print"la altura maxima es ",ymax #si el angulo esta entre 90 y 0 .... else: if(ang<90 and ang>=0): print"el alcance horizontal maximo es: ",xmax print"el tiempo de vuelo es ",t print"la altura maxima es: ",ymax #si el angulo supera 90 grados los calculos serian negativos... if (ang>90 and ang<360): ang=ang-90 ymax=(((vo**2)*(math.sin(ang)**2)))/(2*g) xmax=((((vo**2)*(math.sin(ang)*math.cos(ang)))/(g)))*(-1) t=((((2*vo)*(math.sin(ang)))/g))*(-1) print"atencion!! su disparo fue realizado hacia el lado contrario..." print"el alcance horizontal maximo es: ",xmax print"el tiempo de vuelo es: ",t print"la altura maxima es: ",ymax Explanation: Basics * algoritmos * arquitectura * funciones basicas input, output, variables, while, for, if, def, vectores, diccionario, tupla y lista, graficacion https://github.com/cosmolejo?tab=repositories * extra (tiempo) arquitectura: http://foobarnbaz.com/2012/07/08/understanding-python-variables/ (ayuda grafica) en python : no hay un valor fijo de memoria maximo para almacenar variables, el valor maximo, depende de la memoria disponible en el computador End of explanation def primo(f): i=2 sw=0 while i<=(f-1): comp=f%i if comp==0: sw=1 return 0 break i+=1 if sw==0: return 1 n=input("ingrese un numero: ") print"factores primos: \n" i=1 while i<=(n): comp=n%i if comp==0: p=primo(i) if (p==1): print" %d \n"%(i) i=i+1 %matplotlib inline from scipy import stats from scipy import constants as cons led=[1.6325,2.424,2.566,3.24050,3.7095] lamb=[1.10e6,1.60514e6,1.70648e6,1.76367e6,2.14133e6] slope, intercept, r_value, p_value, std_err = stats.linregress(lamb,led) x=np.linspace(lamb[0],lamb[-1],100) y=slope*x+intercept plt.plot(lamb,led,'o') plt.plot(x,y,'-') plt.show() h_planck=slope*cons.e/cons.c h=cons.h error=(h_planck-h)/h print ('r: ',r_value) print ('pendiente: ',slope) print ('error: ',std_err) print ('h_planck: ',h_planck) print ('h_real: ',h) print ('error_h: ',error) Explanation: ''' Un número perfecto es un número natural que es igual a la suma de sus divisores propios positivos. Así, 6 es un número perfecto porque sus divisores propios son 1, 2 y 3; y 6 = 1 + 2 + 3. 
Los siguientes números perfectos son 28, 496 y 8128. ''' num=input("ingrese un numero: ") i=1 suma=0 while i<=(num-1): cond=num%i if cond == 0: suma+=i i+=1 if suma==num: print"el numero %d es perfecto \n"%(num) else: print"el numero %d es no perfecto \n"%(num) End of explanation tupla = (1,2,3,4,5,6,7,8,9,10) lista = [1,2,3,4,5,6,7,8,9,10] print(tupla.__sizeof__()) # 52 bytes print(lista.__sizeof__()) # 60 bytes lista+[11,12,13,14] for i in range(len(lista)): lista[i]*=2 print lista lista1=lista lista1=lista*2 print lista1 tupla+(11,12,13,14) x = {'Name': 'Zara', 'Age': 7, 'Class': 'First'} print "x['Name']: ", x['Name'] print "x['Age']: ", x['Age'] dict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'} dict['Age'] = 8; # update existing entry dict['School'] = "DPS School"; # Add new entry print "dict['Age']: ", dict['Age'] print "dict['School']: ", dict['School'] dict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'} del dict['Name']; # remove entry with key 'Name' dict.clear(); # remove all entries in dict del dict ; # delete entire dictionary print "dict['Age']: ", dict['Age'] print "dict['School']: ", dict['School'] Explanation: Diferencias entre listas y tuplas Una lista puede ser alterada, no así una tupla. (cambiar sus valores internos) Una tupla puede ser utilizada como clave en un diccionario, no así una lista. Una tupla consume menos espacio que una lista End of explanation import time time.strptime("15 Nov 10", "%d %b %y") #diferencia de tiempo import random t=random.randint(1, 10) print t t1=time.time() time.sleep(t) t2=time.time() print 'pasaron: ',t2-t1,'segundos ' import numpy matriz=numpy.zeros((5,2,4,5)) print matriz Explanation: tiempo https://docs.python.org/2/library/time.html End of explanation
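The timing cell above measures an interval by differencing two time.time() calls around time.sleep(); in Python 3 the same pattern is often written with time.perf_counter(), which is intended for measuring elapsed intervals. A minimal sketch of that variant, added here only for illustration and not taken from the original notebook:

import time

start = time.perf_counter()                    # high-resolution interval timer
total = sum(i * i for i in range(1_000_000))   # some work to time
elapsed = time.perf_counter() - start
print('sum of squares =', total, 'computed in', round(elapsed, 4), 'seconds')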
14,801
Given the following text description, write Python code to implement the functionality described below step by step Description: Prediction using the top down (kmeans) method This notebook details the process of prediction from which homework a notebook came after featurizing the notebook using the top down method. This is done by gathering all templates in each notebook after running the algorithm, then using countvectorizer to featurize the notebooks, and finally using random forests to make the prediction Step1: Inter and Intra Similarities The first measure that we can use to determine if something reasonable is happening is to look at, for each homework, the average similarity of two notebooks both pulled from that homework, and the average similarity of a notebook pulled from that homework and any notebook in the corpus not pulled from that homework. These are printed below Step2: Actual Prediction While the above results are helpful, it is better to use a classifier that uses more information. The setup is as follows
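The similarity referred to above is the Jaccard similarity of two notebooks' template sets, the size of their intersection divided by the size of their union, which is how the pairwise scores are computed in the code that follows. A small standalone illustration of that measure, using toy template names rather than data from the study:

def jaccard(a, b):
    # Jaccard similarity: |A intersect B| / |A union B|, defined as 0.0 when both sets are empty.
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

templates_nb1 = ['t1', 't2', 't5', 't7']
templates_nb2 = ['t2', 't5', 't9']
print(jaccard(templates_nb1, templates_nb2))   # 2 shared of 5 distinct -> 0.4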
Python Code: import sys home_directory = '/dfs/scratch2/fcipollone' sys.path.append(home_directory) import numpy as np from nbminer.notebook_miner import NotebookMiner hw_filenames = np.load('../homework_names_jplag_combined_per_student.npy') hw_notebooks = [[NotebookMiner(filename) for filename in temp[:59]] for temp in hw_filenames] from nbminer.pipeline.pipeline import Pipeline from nbminer.features.features import Features from nbminer.preprocess.get_ast_features import GetASTFeatures from nbminer.preprocess.get_imports import GetImports from nbminer.preprocess.resample_by_node import ResampleByNode from nbminer.encoders.ast_graph.ast_graph import ASTGraphReducer from nbminer.preprocess.feature_encoding import FeatureEncoding from nbminer.encoders.cluster.kmeans_encoder import KmeansEncoder from nbminer.results.similarity.jaccard_similarity import NotebookJaccardSimilarity from nbminer.results.prediction.corpus_identifier import CorpusIdentifier a = Features(hw_notebooks[0], 'hw0') a.add_notebooks(hw_notebooks[1], 'hw1') a.add_notebooks(hw_notebooks[2], 'hw2') a.add_notebooks(hw_notebooks[3], 'hw3') a.add_notebooks(hw_notebooks[4], 'hw4') a.add_notebooks(hw_notebooks[5], 'hw5') gastf = GetASTFeatures() rbn = ResampleByNode() gi = GetImports() fe = FeatureEncoding() ke = KmeansEncoder(n_clusters = 70) ci = CorpusIdentifier() pipe = Pipeline([gastf, rbn, gi, fe, ke, ci]) a = pipe.transform(a) import tqdm X, y = ci.get_data_set() similarities = np.zeros((len(X), len(X))) for i in tqdm.tqdm(range(len(X))): for j in range(len(X)): if len(set.union(set(X[i]), set(X[j]))) == 0: continue similarities[i][j] = len(set.intersection(set(X[i]), set(X[j]))) / (len(set.union(set(X[i]), set(X[j])))) Explanation: Prediction using the top down (kmeans) method This notebook details the process of prediction from which homework a notebook came after featurizing the notebook using the top down method. 
This is done by gathering all templates in each notebook after running the algorithm, then using countvectorizer to featurize the notebooks, and finally using random forests to make the prediction End of explanation def get_avg_inter_intra_sims(X, y, val): inter_sims = [] intra_sims = [] for i in range(len(X)): for j in range(i+1, len(X)): if y[i] == y[j] and y[i] == val: intra_sims.append(similarities[i][j]) else: inter_sims.append(similarities[i][j]) return np.array(intra_sims), np.array(inter_sims) for i in np.unique(y): intra_sims, inter_sims = get_avg_inter_intra_sims(X, y, i) print('Mean intra similarity for hw',i,'is',np.mean(intra_sims),'with std',np.std(intra_sims)) print('Mean inter similarity for hw',i,'is',np.mean(inter_sims),'with std',np.std(inter_sims)) print('----') %matplotlib inline import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = 5, 10 def get_all_sims(X, y, val): sims = [] sims_actual = [] for i in range(len(X)): for j in range(i+1, len(X)): if y[i] == val or y[j] == val: sims.append(similarities[i][j]) if y[i] == val and y[j] == val: sims_actual.append(similarities[i][j]) return sims, sims_actual fig, axes = plt.subplots(6,2) for i in range(6): axes[i,0].hist(get_all_sims(X,y,i)[0], bins=30) axes[i,1].hist(get_all_sims(X,y,i)[1], bins=30) Explanation: Inter and Intra Similarities The first measure that we can use to determine if something reasonable is happening is to look at, for each homework, the average similarity of two notebooks both pulled from that homework, and the average similarity of a notebook pulled from that homework and any notebook in the corpus not pulled from that homework. These are printed below End of explanation import sklearn from sklearn.neural_network import MLPClassifier from sklearn.metrics import accuracy_score from sklearn.model_selection import cross_val_score X, y = ci.get_data_set() countvec = sklearn.feature_extraction.text.CountVectorizer() X_list = [" ".join(el) for el in X] countvec.fit(X_list) X = countvec.transform(X_list) p = np.random.permutation(len(X.todense())) X = X.todense()[p] y = np.array(y)[p] clf = sklearn.ensemble.RandomForestClassifier(n_estimators=400, max_depth=3) scores = cross_val_score(clf, X, y, cv=10) print(scores) print(np.mean(scores)) X.shape clf.fit(X,y) fnames= countvec.get_feature_names() clfi = clf.feature_importances_ sa = [] for i in range(len(clfi)): sa.append((clfi[i], fnames[i])) sra = [el for el in reversed(sorted(sa))] import astor for temp in sra: temp = temp[1] print(temp) for i in range(3): print ('\t',astor.to_source(ke.templates.get_random_example(temp))) Explanation: Actual Prediction While the above results are helpful, it is better to use a classifier that uses more information. The setup is as follows: Split the data into train and test Vectorize based on templates that exist Build a random forest classifier that uses this feature representation, and measure the performance End of explanation
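The prediction step above boils down to a CountVectorizer bag-of-templates representation fed to a cross-validated random forest. A stripped-down sketch of that same pattern is given below; the scikit-learn calls mirror the ones used above, but the documents and labels are invented purely for illustration:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each "document" stands in for the space-joined template list of one notebook.
docs = ['t1 t2 t5', 't1 t5', 't2 t5 t7', 't3 t4', 't3 t4 t6', 't4 t6'] * 5
labels = np.array([0, 0, 0, 1, 1, 1] * 5)        # hypothetical homework labels

X = CountVectorizer().fit_transform(docs)        # bag-of-templates features
clf = RandomForestClassifier(n_estimators=100, max_depth=3, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)   # 5-fold cross-validated accuracy
print(scores.mean())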
14,802
Given the following text description, write Python code to implement the functionality described below step by step Description: Collecting and Using Data in Python Laila A. Wahedi, PhD Massive Data Institute Postdoctoral Fellow <br>McCourt School of Public Policy<br> Follow along Step1: Save more than one variable Step2: Loading the data from a pickle open(<"path to file"><br> "rb") "Read Binary" Don't mix up rb and wb. wb will overwrite rb. Step3: Unpack the variables you saved on the fly Step4: Representing Data 1D Vectors of data in Step5: Arrays in Numpy Like arrays from Matlab Vectors and multi-dimensional arrays Numpy and scipy do math functions, and output in arrays Index like lists Step6: Series in Pandas Pandas is a package that creates labeled data frames Series are 1d Vectors Instantiate from list or array Built on Numpy Step7: Why Series Step8: Why Series Step9: Arrays Series and Lists Can Be Converted Step10: Two Dimensions List of lists Dictionary of lists Array Pandas Data Frame Lists of Lists (or tuples) Tuples are ordered collections like lists, but can't be changed once instantiated. Each item in list contains a row. Remember the position/order of your variables. Step11: Add a variable from another list You can only add to a list of lists, not tuples Must be the proper order and same length Step12: Keep Track of Variable Names With Dictionaries Curly Brackets Lots of memory, but search columns fast Easily add variables Index data with labels Step13: Use numpy to maintain a matrix shape Instantiate a 2d array with a list of lists or tuples Each variable is a column, each internal list/tuple a row Index each dimension like a list, separated by a comma. [row,column] Step14: Concatenate your matrices by stacking Axis = 0 Step15: Concatenate your matrices side by side Axis = 1 Step16: Do Matrix Operations Scalar multiplication Point-wise addition, subtraction, etc. Transpose Step17: Instantiate A Random Matrix For Simulations List of distributions here Step18: Index like a list with a comma between dimensions Step19: Sparse Matrices Save Memory When You Have Lots of Zeros Create a big empty array Create indexes to add values Add some values to each coordinate. e.g. place 4 in position (1,3,8) Step20: Sparse Matrices Save Memory When You Have Lots of Zeros Turn the matrix into a sparse matrix Use scipy package Will turn itself back if too big Different types good for different things. See Step21: Maintain Shape AND Labels with Pandas DataFrames like R Lots of built in functions Instantiate from a dictionary... Step22: Instantiate Your Data Frame... From a list of lists/tuples Step23: Instantiate Your Data Frame... From a matrix Name your rows too! Step24: Never Say No To Pandas Using Documentation Pandas website Try Step25: Look at your data with Matplotlib integration Matplotlib is like plotting in matlab Try ggplot package for ggplot2 in python See also Seaborn and Plotly Use some ipython magic to see plots inline Step26: One Variable At A Time Step27: Real Data Load Data from a text file Start by googling it Step28: Look at the data Also try .tail() Step29: Explore the data structure Step30: Rename things and adjust values Use dictionaries to rename and replace Step31: Set a useful index Step32: Save Your Changes Save it to a usable spreadsheet instead of an unreadable binary Step33: Slicing Get specific values from the dataframe. Pandas has several slice operators. iloc can be used to index the row by ordered integer. i.e. first row is 0, second row is 1, etc. 
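Steps 1 and 2 of the outline above are the pickle save/load round trip and the 'wb' versus 'rb' mode distinction. As a compact reference, here is that pattern written with with-blocks so the files are closed automatically; this is a slightly more defensive variant than the bare open() calls used in the workshop code below, not a replacement for it:

import pickle

mydata = list(range(1, 11))
more_data = list(range(10, 0, -1))

with open('so_much_data.p', 'wb') as f:          # 'wb' = write binary (overwrites the file)
    pickle.dump([mydata, more_data], f)

with open('so_much_data.p', 'rb') as f:          # 'rb' = read binary
    mydata_loaded, more_data_loaded = pickle.load(f)

print(mydata_loaded == mydata, more_data_loaded == more_data)   # True True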
Use this option sparingly. Better practice to use the index you have created. loc uses the named index and columns. Index using [row, columns] Put your column names in a list Use Step34: Slicing Using Conditionals Put conditionals in parentheses Stack multiple conditionals using Step35: Find a list of religious groups with territory Find a list of religious groups with territory Step36: Plot a histogram of organization age with 20 bins Plot a histogram of organization age with 20 bins Step37: Grouping By Variables Groupby() Step38: Making New Columns Assign values to a new column based on other columns Step39: Handle Missing Values First lets make some Default python type Step40: Handling Missing Values We could index by them Step41: Handling Missing Values We could fill them Step42: Handling Missing Values We could drop their rows or columns Step43: Reindexing Step44: Set a multi-index Order Matters. What happens when you reverse group and country? Step45: Did you get an error? Don't forget to reset the index first! Go ahead and change it back for the next step. Using the new index, make a new dataframe Note the new slicing operator for multi-index Step46: Warning Step47: What happened? copied_df changed when little_df changed. Let's fix that Step48: Merging and Concatenating Merges automatically if shared index Step49: Joins Same as SQL, inner and outer Step50: Concatenate Stack dataframes on top of one another Stack dataframes beside one another Step51: Some New Messy Data Step52: Look at those zip codes! Clean Zip Code We don't need the latitude and longitude Create two variables by splitting the zip code variable Step53: Rearrange The Data Step54: Lost Columns! Fips summed! Group by Step55: Rearrange The Data Step56: Rename Columns, Subset Data Step57: Save Your Data Save Your Data
Python Code: import pickle mydata = [1,2,3,4,5,6,7,8,9,10] pickle.dump(mydata, open('mydata.p','wb')) Explanation: Collecting and Using Data in Python Laila A. Wahedi, PhD Massive Data Institute Postdoctoral Fellow <br>McCourt School of Public Policy<br> Follow along: Slides: http://Wahedi.us, Tutorial Interactive Notebook: https://notebooks.azure.com/Laila/libraries/MDI-workshopFA18 Follow Along Go to https://notebooks.azure.com/Laila/libraries/MDI-workshopFA18 Clone the directory <img src='step1.png'> Follow Along Sign in with any Microsoft Account (Hotmail, Outlook, Azure, etc.) Create a folder to put it in, mark as private or public <img src='step2.png'> Follow Along Open a notebook Open this notebook to have the code to play with Open a blank notebook to follow along and try on your own. <img src='step4.png'> Do you get this error? HTTP Error 400. The size of the request headers is too long Clear your cookies then refresh the browser. Your Environment Jupyter Notebook Hosted in Azure Want to install it at home? Install the Anaconda distribution of Python https://www.anaconda.com/download/ Install Jupyter Notebooks http://jupyter.org/install Your Environment ctrl/apple+ enter runs a cell <img src='notebook.png'> Your Environment Persistent memory If you run a cell, results remain as long as the kernel ORDER MATTERS! <img src='persist.png'> Your Environment: Saving If your kernel dies, data are gone. Not R or Stata, you can't save your whole environment Data in memory more than spreadsheets Think carefully about what you want to save and how. Easy Saving (more later) dump to save the data to hard drive (out of memory) Contents of the command: variable to save, File to dump the variable into: open(<br> "name of file in quotes",<br> "wb") "Write Binary" Note: double and single quotes both work End of explanation more_data = [10,9,8,7,6,5,4,3,2,1] pickle.dump([mydata,more_data], open('so_much_data.p','wb')) Explanation: Save more than one variable: Put them in a list End of explanation mydata = pickle.load(open("mydata.p",'rb')) print(mydata) Explanation: Loading the data from a pickle open(<"path to file"><br> "rb") "Read Binary" Don't mix up rb and wb. wb will overwrite rb. 
End of explanation [mydata, more_data] = pickle.load(open('so_much_data.p','rb')) print(mydata) print(more_data) Explanation: Unpack the variables you saved on the fly End of explanation my_list = [1,3,2,4,7,'Sandwich'] print(len(my_list)) print(my_list[0:2]) print(my_list[-1]) print(my_list[0:4:2]) Explanation: Representing Data 1D Vectors of data in: Lists Arrays Series Lists Square brackets [] Can contain anything Ordered zero-indexing Slice with : Negatives to go backwards Third position to skip End of explanation import numpy as np my_array = np.random.poisson(lam=3,size=10) print(my_array) print(my_array.shape) Explanation: Arrays in Numpy Like arrays from Matlab Vectors and multi-dimensional arrays Numpy and scipy do math functions, and output in arrays Index like lists End of explanation import pandas as pd my_series = pd.Series(my_list) my_series.shape Explanation: Series in Pandas Pandas is a package that creates labeled data frames Series are 1d Vectors Instantiate from list or array Built on Numpy End of explanation my_series = pd.Series(my_array, index = [1,2,3,'cat','dog','10','n',8,7,6]) print(my_series) Explanation: Why Series: Label your data End of explanation print(my_series.mean()) my_series = pd.Series(['hello world','hello planet']) print(my_series.str.replace('hello','goodbye')) Explanation: Why Series: Suite of tools End of explanation new_list = list(my_array) print(new_list) Explanation: Arrays Series and Lists Can Be Converted End of explanation my_2d_list = [[1,4],[2,1],[8,10],[4,7],[9,2],[4,5]] my_3var_list = [(1,4,7),(2,1,0),(8,10,2),(4,7,4),(9,2,7),(4,5,3)] Explanation: Two Dimensions List of lists Dictionary of lists Array Pandas Data Frame Lists of Lists (or tuples) Tuples are ordered collections like lists, but can't be changed once instantiated. Each item in list contains a row. Remember the position/order of your variables. End of explanation for i,new_var in enumerate(my_list): my_2d_list[i].append(new_var) print(my_2d_list) Explanation: Add a variable from another list You can only add to a list of lists, not tuples Must be the proper order and same length End of explanation my_dict = { 'var1':[1,2,8,4,9,4], 'var2': [4,1,10,7,2,5] } my_dict['var3']=my_list print(my_dict['var3']) Explanation: Keep Track of Variable Names With Dictionaries Curly Brackets Lots of memory, but search columns fast Easily add variables Index data with labels End of explanation my_matrix = np.array(my_2d_list) my_other_matrix = np.array(my_3var_list) print(my_matrix) print(my_matrix[0,0:2]) Explanation: Use numpy to maintain a matrix shape Instantiate a 2d array with a list of lists or tuples Each variable is a column, each internal list/tuple a row Index each dimension like a list, separated by a comma. [row,column] End of explanation big_matrix = np.concatenate([my_matrix, my_other_matrix],axis=0) print(big_matrix) Explanation: Concatenate your matrices by stacking Axis = 0 End of explanation big_matrix = np.concatenate([my_matrix, my_other_matrix],axis=1) print(big_matrix) Explanation: Concatenate your matrices side by side Axis = 1 End of explanation print(my_matrix.T + my_other_matrix.T*5) Explanation: Do Matrix Operations Scalar multiplication Point-wise addition, subtraction, etc. 
Transpose End of explanation my_rand_matrix = np.random.randn(5,3) print(my_rand_matrix) Explanation: Instantiate A Random Matrix For Simulations List of distributions here: https://docs.scipy.org/doc/numpy-1.14.0/reference/routines.random.html End of explanation my_rand_matrix[:,0]=my_rand_matrix[:,0]*.5+5 my_rand_matrix[:,1]=my_rand_matrix[:,1]*.5-5 my_rand_matrix[:,2]=my_rand_matrix[:,2]*10+50 print(my_rand_matrix.T) Explanation: Index like a list with a comma between dimensions: [row,column] Each Column From A Different Normal Distribution: Multiply normal distribution by sigma, add mu End of explanation BIG_array = np.zeros((100,100)) rows = (1,6,29,40,43,50) columns = (3,6,90,58,34,88) BIG_array[(rows,columns)]=[4,6,14,1,3,22] Explanation: Sparse Matrices Save Memory When You Have Lots of Zeros Create a big empty array Create indexes to add values Add some values to each coordinate. e.g. place 4 in position (1,3,8) End of explanation import scipy as sp from scipy import sparse BIG_array = sparse.csc_matrix(BIG_array) print(BIG_array) Explanation: Sparse Matrices Save Memory When You Have Lots of Zeros Turn the matrix into a sparse matrix Use scipy package Will turn itself back if too big Different types good for different things. See: https://docs.scipy.org/doc/scipy/reference/sparse.html End of explanation df = pd.DataFrame(my_dict) df Explanation: Maintain Shape AND Labels with Pandas DataFrames like R Lots of built in functions Instantiate from a dictionary... End of explanation df = pd.DataFrame(my_2d_list, columns = ['var1','var2','var3']) df Explanation: Instantiate Your Data Frame... From a list of lists/tuples End of explanation df = pd.DataFrame(my_rand_matrix, columns = ['dist_1','dist_2','dist_3'], index = ['obs1','obs2','obs3','obs4','fred']) df Explanation: Instantiate Your Data Frame... From a matrix Name your rows too! 
End of explanation df.describe() Explanation: Never Say No To Pandas Using Documentation Pandas website Try: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html#pandas.DataFrame Stack Overflow Copy errors into google Look up syntax differences with R <img src="https://media.giphy.com/media/1hiVNxD34TpC0/giphy.gif"> Summarize Your Data End of explanation import matplotlib.pyplot as plt %matplotlib inline df.plot.density() Explanation: Look at your data with Matplotlib integration Matplotlib is like plotting in matlab Try ggplot package for ggplot2 in python See also Seaborn and Plotly Use some ipython magic to see plots inline End of explanation df.dist_1.plot.hist(bins=3) Explanation: One Variable At A Time: End of explanation baad_covars = pd.read_csv('BAAD_1_Lethality_Data.tab',sep='\t') Explanation: Real Data Load Data from a text file Start by googling it: http://lmgtfy.com/?q=pandas+load+csv Same method for comma (csv), tab (tab), |, and other separators Excel and R can both output spreadsheets to csv We will use the Big Allied and Dangerous Data from START https://dataverse.harvard.edu/file.xhtml?fileId=2298519&version=RELEASED&version=.0 End of explanation baad_covars.head(3) Explanation: Look at the data Also try .tail() End of explanation print(baad_covars.shape) baad_covars.columns Explanation: Explore the data structure End of explanation baad_covars.rename(columns = {'cowmastercountry':'country', 'masterccode':'ccode', 'mastertccode3606':'group_code', 'fatalities19982005':'fatalities'}, inplace = True) baad_covars.replace({'country':{'United States of America':'US'}}, inplace = True) print('Dimensions: ',baad_covars.shape) baad_covars.head() Explanation: Rename things and adjust values Use dictionaries to rename and replace End of explanation #Set the index baad_covars.set_index(['group_code'],inplace = True) baad_covars.head() Explanation: Set a useful index End of explanation baad_covars.to_csv('updated_baad.csv') Explanation: Save Your Changes Save it to a usable spreadsheet instead of an unreadable binary End of explanation baad_covars.loc[:, ['fatalities']].head() Explanation: Slicing Get specific values from the dataframe. Pandas has several slice operators. iloc can be used to index the row by ordered integer. i.e. first row is 0, second row is 1, etc. Use this option sparingly. Better practice to use the index you have created. loc uses the named index and columns. Index using [row, columns] Put your column names in a list Use : for all values Notice that the output keeps the index names. 
End of explanation baad_covars.loc[(baad_covars.fatalities>1) | (baad_covars.degree>=1), ['group','country']].head() Explanation: Slicing Using Conditionals Put conditionals in parentheses Stack multiple conditionals using: & when both conditions must always apply | when at least one condition must apply End of explanation baad_covars.loc[(baad_covars.ContainRelig==1)& (baad_covars.terrStrong==1),['group']] Explanation: Find a list of religious groups with territory Find a list of religious groups with territory End of explanation baad_covars.OrgAge.plot.hist(bins=10) Explanation: Plot a histogram of organization age with 20 bins Plot a histogram of organization age with 20 bins End of explanation state_level = baad_covars.loc[:,['country','OrgAge', 'ordsize','degree', 'fatalities'] ].groupby(['country']).sum() state_level.head() Explanation: Grouping By Variables Groupby(): List the variables to group by .function(): How to aggregate the rows Try: .count(), .mean(), .first(), .mode() End of explanation baad_covars['big'] = 0 baad_covars.loc[(baad_covars.fatalities>1) | (baad_covars.degree>=1), 'big']=1 baad_covars.big.head() Explanation: Making New Columns Assign values to a new column based on other columns: End of explanation print(type(np.nan)) baad_covars.loc[(baad_covars.fatalities>1) | (baad_covars.degree>=1), ['terrStrong']] = None baad_covars.loc[(baad_covars.fatalities>1) | (baad_covars.degree>=1), ['terrStrong']].head() Explanation: Handle Missing Values First lets make some Default python type: None Numpy datatype that can be treated like a number: np.nan Pandas turns None into an np.nan End of explanation baad_covars.loc[baad_covars.terrStrong.isnull(),'terrStrong'].head() Explanation: Handling Missing Values We could index by them End of explanation baad_covars['terrStrong'] = baad_covars.terrStrong.fillna(-77) baad_covars.terrStrong.head() Explanation: Handling Missing Values We could fill them: End of explanation baad_covars_dropped = baad_covars.dropna(axis='index', subset=['terrStrong'], inplace=False) Explanation: Handling Missing Values We could drop their rows or columns: Subset is optional: which columns to look in. inplace = True will drop rows in df without having to assign another variable End of explanation baad_covars.reset_index(inplace=True, drop = False) baad_covars.head() Explanation: Reindexing: Pop the index out without losing it End of explanation baad_covars.set_index(['group','country'],inplace = True) baad_covars.head() Explanation: Set a multi-index Order Matters. What happens when you reverse group and country? End of explanation indonesia_grps = baad_covars.xs('Indonesia',level = 'country',drop_level=False) indonesia_grps = indonesia_grps.loc[indonesia_grps.fatalities>=1,['degree','ContainRelig', 'ContainEthno','terrStrong', 'ordsize','OrgAge']] indonesia_grps.head() Explanation: Did you get an error? Don't forget to reset the index first! Go ahead and change it back for the next step. Using the new index, make a new dataframe Note the new slicing operator for multi-index End of explanation little_df = pd.DataFrame([1,2,3,4,5],columns = ['A']) little_df['B']=[0,1,0,1,1] copied_df = little_df print('before:') print(copied_df) little_df.loc[little_df.A == 3,'B'] = 'Sandwich' print('after') print(copied_df) Explanation: Warning: Making copies If you set a variable as equal to an object, Python creates a reference rather than copying the whole object. 
More efficient, unless you really want to make a copy End of explanation import copy little_df = pd.DataFrame([1,2,3,4,5],columns = ['A']) little_df['B']=[0,1,0,1,1] copied_df = little_df.copy() print('before:') print(copied_df) little_df.loc[little_df.A == 3,'B'] = 'Sandwich' print('after') print(copied_df) Explanation: What happened? copied_df changed when little_df changed. Let's fix that: import "copy" End of explanation C = pd.DataFrame(['apple','orange','grape','pear','banana'], columns = ['C'], index = [2,4,3,0,1]) little_df['C'] = C little_df Explanation: Merging and Concatenating Merges automatically if shared index End of explanation C = pd.DataFrame(['apple','orange','grape','apple'], columns = ['C'], index = [2,4,3,'a']) C['cuts']=['slices','wedges','whole','spirals'] print('C:') print(C) print('Inner: Intersection') print(little_df.merge(right=C, how='inner', on=None, left_index = True, right_index =True)) print('Outer: Keep all rows') print(little_df.merge(right=C, how='outer', on=None, left_index = True, right_index =True)) print('Left: Keep little_df') print(little_df.merge(right=C, how='left', on=None, left_index = True, right_index =True)) print('Right: Keep C') print(little_df.merge(right=C, how='right', on=None, left_index = True, right_index =True)) print('Outer, merging on column instead of index') print(little_df.merge(right=C, how='outer', on='C', left_index = False, right_index =False)) Explanation: Joins Same as SQL, inner and outer End of explanation add_df = pd.DataFrame({'A':[6],'B':[7],'C':'peach'},index= ['p']) little_df = pd.concat([little_df,add_df]) little_df Explanation: Concatenate Stack dataframes on top of one another Stack dataframes beside one another End of explanation asthma_data = pd.read_csv('asthma-emergency-department-visit-rates-by-zip-code.csv') asthma_data.head(2) Explanation: Some New Messy Data: Asthma by Zip Code From California Health and Human Services <br> https://data.chhs.ca.gov/dataset/asthma-emergency-department-visit-rates-by-zip-code Note: old version of data End of explanation asthma_data[['zip','coordinates']] = asthma_data.loc[:,'ZIP code'].str.split( pat='\n',expand=True) asthma_data.drop('ZIP code', axis=1,inplace=True) asthma_data.head(2) Explanation: Look at those zip codes! Clean Zip Code We don't need the latitude and longitude Create two variables by splitting the zip code variable: index the data frame to the zip code variable split it in two: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html assign it to another two variables Remember: can't run this cell twice without starting over End of explanation asthma_grouped = asthma_data.groupby(by=['Year','zip']).sum() asthma_grouped.head(4) Explanation: Rearrange The Data: Group By Make child and adult separate columns rather than rows. Must specify how to aggregate the columns <br> https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html End of explanation asthma_grouped.drop('County Fips code',axis=1,inplace=True) temp_grp = asthma_data.groupby(by=['Year','zip']).first() asthma_grouped[['fips', 'county', 'coordinates']]=temp_grp.loc[:,['County Fips code', 'County', 'coordinates']].copy() asthma_grouped.loc[:,'Number of Visits']=\ asthma_grouped.loc[:,'Number of Visits']/2 asthma_grouped.head(2) Explanation: Lost Columns! Fips summed! 
Group by: Cleaning Up Lost columns you can't sum took sum of fips Must add these back in Works because temp table has same index End of explanation asthma_unstacked = asthma_data.pivot_table(index = ['Year', 'zip', 'County', 'coordinates', 'County Fips code'], columns = 'Age Group', values = 'Number of Visits') asthma_unstacked.reset_index(drop=False,inplace=True) asthma_unstacked.head(2) Explanation: Rearrange The Data: Pivot Use pivot and melt to to move from row identifiers to column identifiers and back <br> https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-by-melt Tell computer what to do with every cell: Index: Stays the same Columns: The column containing the new column labels Values: The column containing values to insert <img src='pivot.png'> Rearrange The Data: Pivot Tell computer what to do with every cell: Index: Stays the same Columns: The column containing the new column labels Values: The column containing values to insert End of explanation asthma_unstacked.rename(columns={ 'zip':'Zip', 'coordinates':'Coordinates', 'County Fips code':'Fips', 'Adults (18+)':'Adults', 'All Ages':'Incidents', 'Children (0-17)': 'Children' }, inplace=True) asthma_2015 = asthma_unstacked.loc[asthma_unstacked.Year==2015,:] asthma_2015.head(2) Explanation: Rename Columns, Subset Data End of explanation asthma_2015.to_csv('asthma_2015.csv') Explanation: Save Your Data Save Your Data End of explanation
14,803
Given the following text description, write Python code to implement the functionality described below step by step Description: Image features exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook. Step1: Load data Similar to previous exercises, we will load CIFAR-10 data from disk. Step2: Extract Features For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The hog_feature and color_histogram_hsv functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image. Step3: Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels. Step4: Inline question 1
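The extract_features routine described above applies a list of per-image feature functions and stacks the concatenated results into one feature matrix. The assignment ships its own implementation in cs231n.features; purely to illustrate the bookkeeping, here is a generic NumPy sketch with made-up feature functions and one image per row (the real HOG and color-histogram features are more involved):

import numpy as np

def mean_intensity(img):
    # Toy feature: a single number per image.
    return np.array([img.mean()])

def value_histogram(img, nbin=10):
    # Toy feature: a histogram over all pixel values.
    hist, _ = np.histogram(img, bins=nbin, range=(0, 255))
    return hist.astype(float)

def extract_features_sketch(imgs, feature_fns):
    # One row per image; columns are the concatenated outputs of every feature function.
    rows = [np.concatenate([fn(img) for fn in feature_fns]) for img in imgs]
    return np.vstack(rows)

imgs = np.random.randint(0, 256, size=(5, 32, 32, 3))   # five fake RGB images
feats = extract_features_sketch(imgs, [mean_intensity, value_histogram])
print(feats.shape)   # (5, 11): 1 mean value + 10 histogram bins per image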
Python Code: import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt from __future__ import print_function %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading extenrnal modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 Explanation: Image features exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook. End of explanation from cs231n.features import color_histogram_hsv, hog_feature def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000): # Load the raw CIFAR-10 data cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # Subsample the data mask = list(range(num_training, num_training + num_validation)) X_val = X_train[mask] y_val = y_train[mask] mask = list(range(num_training)) X_train = X_train[mask] y_train = y_train[mask] mask = list(range(num_test)) X_test = X_test[mask] y_test = y_test[mask] return X_train, y_train, X_val, y_val, X_test, y_test X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data() Explanation: Load data Similar to previous exercises, we will load CIFAR-10 data from disk. End of explanation from cs231n.features import * num_color_bins = 10 # Number of bins in the color histogram feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)] X_train_feats = extract_features(X_train, feature_fns, verbose=True) X_val_feats = extract_features(X_val, feature_fns) X_test_feats = extract_features(X_test, feature_fns) # Preprocessing: Subtract the mean feature mean_feat = np.mean(X_train_feats, axis=0, keepdims=True) X_train_feats -= mean_feat X_val_feats -= mean_feat X_test_feats -= mean_feat # Preprocessing: Divide by standard deviation. This ensures that each feature # has roughly the same scale. std_feat = np.std(X_train_feats, axis=0, keepdims=True) X_train_feats /= std_feat X_val_feats /= std_feat X_test_feats /= std_feat # Preprocessing: Add a bias dimension X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))]) X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))]) X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))]) Explanation: Extract Features For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. 
As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The hog_feature and color_histogram_hsv functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image. End of explanation # Use the validation set to tune the learning rate and regularization strength from cs231n.classifiers.linear_classifier import LinearSVM learning_rates = [1e-9, 1e-8, 1e-7] regularization_strengths = [5e4, 5e5, 5e6] results = {} best_val = -1 best_svm = None pass ################################################################################ # TODO: # # Use the validation set to set the learning rate and regularization strength. # # This should be identical to the validation that you did for the SVM; save # # the best trained classifer in best_svm. You might also want to play # # with different numbers of bins in the color histogram. If you are careful # # you should be able to get accuracy of near 0.44 on the validation set. # ################################################################################ for lr in learning_rates: for reg in regularization_strengths: svm = LinearSVM() svm.train(X_train_feats, y_train, lr, reg, num_iters=2000) pred_train = svm.predict(X_train_feats) train_acc = np.mean(y_train == pred_train) pred_val = svm.predict(X_val_feats) val_acc = np.mean(y_val == pred_val) results[(lr, reg)] = (train_acc, val_acc) if val_acc > best_val: best_val = val_acc best_svm = svm ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out results. for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print('lr %e reg %e train accuracy: %f val accuracy: %f' % ( lr, reg, train_accuracy, val_accuracy)) print('best validation accuracy achieved during cross-validation: %f' % best_val) # Evaluate your trained SVM on the test set y_test_pred = best_svm.predict(X_test_feats) test_accuracy = np.mean(y_test == y_test_pred) print(test_accuracy) # An important way to gain intuition about how an algorithm works is to # visualize the mistakes that it makes. In this visualization, we show examples # of images that are misclassified by our current system. The first column # shows images that our system labeled as "plane" but whose true label is # something other than "plane". examples_per_class = 8 classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] for cls, cls_name in enumerate(classes): idxs = np.where((y_test != cls) & (y_test_pred == cls))[0] idxs = np.random.choice(idxs, examples_per_class, replace=False) for i, idx in enumerate(idxs): plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1) plt.imshow(X_test[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls_name) plt.show() Explanation: Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels. 
End of explanation print(X_train_feats.shape) from cs231n.classifiers.neural_net import TwoLayerNet input_dim = X_train_feats.shape[1] hidden_dim = 500 num_classes = 10 net = TwoLayerNet(input_dim, hidden_dim, num_classes) best_net = None ################################################################################ # TODO: Train a two-layer neural network on image features. You may want to # # cross-validate various parameters as in previous sections. Store your best # # model in the best_net variable. # ################################################################################ best_val = -1 best_stats = None learning_rates = np.logspace(-10, 0, 5) # np.logspace(-10, 10, 8) #-10, -9, -8, -7, -6, -5, -4 regularization_strengths = np.logspace(-3, 5, 5) # causes numeric issues: np.logspace(-5, 5, 8) #[-4, -3, -2, -1, 1, 2, 3, 4, 5, 6] results = {} iters = 2000 #100 for lr in learning_rates: for rs in regularization_strengths: net = TwoLayerNet(input_dim, hidden_dim, num_classes) # Train the network stats = net.train(X_train_feats, y_train, X_val_feats, y_val, num_iters=iters, batch_size=200, learning_rate=lr, learning_rate_decay=0.95, reg=rs) y_train_pred = net.predict(X_train_feats) acc_train = np.mean(y_train == y_train_pred) y_val_pred = net.predict(X_val_feats) acc_val = np.mean(y_val == y_val_pred) results[(lr, rs)] = (acc_train, acc_val) if best_val < acc_val: best_stats = stats best_val = acc_val best_net = net # Print out results. for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print('lr %e reg %e train accuracy: %f val accuracy: %f' % (lr, reg, train_accuracy, val_accuracy)) print('best validation accuracy achieved during cross-validation: %f' % best_val) ################################################################################ # END OF YOUR CODE # ################################################################################ # Run your neural net classifier on the test set. You should be able to # get more than 55% accuracy. test_acc = (net.predict(X_test_feats) == y_test).mean() print(test_acc) Explanation: Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image features Earlier in this assigment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy. End of explanation
14,804
Given the following text description, write Python code to implement the functionality described below step by step Description: Compute seed-based time-frequency connectivity in sensor space Computes the connectivity between a seed-gradiometer close to the visual cortex and all other gradiometers. The connectivity is computed in the time-frequency domain using Morlet wavelets and the debiased squared weighted phase lag index [1]_ is used as connectivity metric. .. [1] Vinck et al. "An improved index of phase-synchronization for electro- physiological data in the presence of volume-conduction, noise and sample-size bias" NeuroImage, vol. 55, no. 4, pp. 1548-1565, Apr. 2011. Step1: Set parameters
Python Code: # Author: Martin Luessi <[email protected]> # # License: BSD (3-clause) import numpy as np import mne from mne import io from mne.connectivity import spectral_connectivity, seed_target_indices from mne.datasets import sample from mne.time_frequency import AverageTFR print(__doc__) Explanation: Compute seed-based time-frequency connectivity in sensor space Computes the connectivity between a seed-gradiometer close to the visual cortex and all other gradiometers. The connectivity is computed in the time-frequency domain using Morlet wavelets and the debiased squared weighted phase lag index [1]_ is used as connectivity metric. .. [1] Vinck et al. "An improved index of phase-synchronization for electro- physiological data in the presence of volume-conduction, noise and sample-size bias" NeuroImage, vol. 55, no. 4, pp. 1548-1565, Apr. 2011. End of explanation data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' # Setup for reading the raw data raw = io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) # Add a bad channel raw.info['bads'] += ['MEG 2443'] # Pick MEG gradiometers picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True, exclude='bads') # Create epochs for left-visual condition event_id, tmin, tmax = 3, -0.2, 0.5 epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6), preload=True) # Use 'MEG 2343' as seed seed_ch = 'MEG 2343' picks_ch_names = [raw.ch_names[i] for i in picks] # Create seed-target indices for connectivity computation seed = picks_ch_names.index(seed_ch) targets = np.arange(len(picks)) indices = seed_target_indices(seed, targets) # Define wavelet frequencies and number of cycles cwt_freqs = np.arange(7, 30, 2) cwt_n_cycles = cwt_freqs / 7. # Run the connectivity analysis using 2 parallel jobs sfreq = raw.info['sfreq'] # the sampling frequency con, freqs, times, _, _ = spectral_connectivity( epochs, indices=indices, method='wpli2_debiased', mode='cwt_morlet', sfreq=sfreq, cwt_freqs=cwt_freqs, cwt_n_cycles=cwt_n_cycles, n_jobs=1) # Mark the seed channel with a value of 1.0, so we can see it in the plot con[np.where(indices[1] == seed)] = 1.0 # Show topography of connectivity from seed title = 'WPLI2 - Visual - Seed %s' % seed_ch layout = mne.find_layout(epochs.info, 'meg') # use full layout tfr = AverageTFR(epochs.info, con, times, freqs, len(epochs)) tfr.plot_topo(fig_facecolor='w', font_color='k', border='k') Explanation: Set parameters End of explanation
14,805
Given the following text description, write Python code to implement the functionality described below step by step Description: TensorFlow Tutorial #03-C Keras API by Magnus Erik Hvass Pedersen / GitHub / Videos on YouTube Introduction Tutorial #02 showed how to implement a Convolutional Neural Network in TensorFlow. We made a few helper-functions for creating the layers in the network. It is essential to have a good high-level API because it makes it much easier to implement complex models, and it lowers the risk of errors. There are several of these builder API's available for TensorFlow Step1: We need to import several things from Keras. Step2: This was developed using Python 3.6 (Anaconda) and TensorFlow version Step3: Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path. Step4: The MNIST data-set has now been loaded and consists of 70.000 images and class-numbers for the images. The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial. Step5: Copy some of the data-dimensions for convenience. Step6: Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image. Step7: Plot a few images to see if data is correct Step8: Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified. Step9: PrettyTensor API This is how the Convolutional Neural Network was implemented in Tutorial #03 using the PrettyTensor API. It is shown here for easy comparison to the Keras implementation below. Step10: Sequential Model The Keras API has two modes of constructing Neural Networks. The simplest is the Sequential Model which only allows for the layers to be added in sequence. Step11: Model Compilation The Neural Network has now been defined and must be finalized by adding a loss-function, optimizer and performance metrics. This is called model "compilation" in Keras. We can either define the optimizer using a string, or if we want more control of its parameters then we need to instantiate an object. For example, we can set the learning-rate. Step12: For a classification-problem such as MNIST which has 10 possible classes, we need to use the loss-function called categorical_crossentropy. The performance metric we are interested in is the classification accuracy. Step13: Training Now that the model has been fully defined with loss-function and optimizer, we can train it. This function takes numpy-arrays and performs the given number of training epochs using the given batch-size. An epoch is one full use of the entire training-set. So for 10 epochs we would iterate randomly over the entire training-set 10 times. Step14: Evaluation Now that the model has been trained we can test its performance on the test-set. This also uses numpy-arrays as input. Step15: We can print all the performance metrics for the test-set. Step16: Or we can just print the classification accuracy. Step17: Prediction We can also predict the classification for new images. We will just use some images from the test-set but you could load your own images into numpy arrays and use those instead. Step18: These are the true class-number for those images. This is only used when plotting the images. Step19: Get the predicted classes as One-Hot encoded arrays. Step20: Get the predicted classes as integers. 
Step21: Examples of Mis-Classified Images We can plot some examples of mis-classified images from the test-set. First we get the predicted classes for all the images in the test-set Step22: Then we convert the predicted class-numbers from One-Hot encoded arrays to integers. Step23: Plot some of the mis-classified images. Step24: Functional Model The Keras API can also be used to construct more complicated networks using the Functional Model. This may look a little confusing at first, because each call to the Keras API will create and return an instance that is itself callable. It is not clear whether it is a function or an object - but we can call it as if it is a function. This allows us to build computational graphs that are more complex than the Sequential Model allows. Step25: Model Compilation We have now defined the architecture of the model with its input and output. We now have to create a Keras model and compile it with a loss-function and optimizer, so it is ready for training. Step26: Create a new instance of the Keras Functional Model. We give it the inputs and outputs of the Convolutional Neural Network that we constructed above. Step27: Compile the Keras model using the RMSprop optimizer and with a loss-function for multiple categories. The only performance metric we are interested in is the classification accuracy, but you could use a list of metrics here. Step28: Training The model has now been defined and compiled so it can be trained using the same fit() function as used in the Sequential Model above. This also takes numpy-arrays as input. Step29: Evaluation Once the model has been trained we can evaluate its performance on the test-set. This is the same syntax as for the Sequential Model. Step30: The result is a list of values, containing the loss-value and all the metrics we defined when we compiled the model. Note that 'accuracy' is now called 'acc' which is a small inconsistency. Step31: We can also print the classification accuracy as a percentage Step32: Examples of Mis-Classified Images We can plot some examples of mis-classified images from the test-set. First we get the predicted classes for all the images in the test-set Step33: Then we convert the predicted class-numbers from One-Hot encoded arrays to integers. Step34: Plot some of the mis-classified images. Step35: Save & Load Model NOTE Step36: Saving a Keras model with the trained weights is then just a single function call, as it should be. Step37: Delete the model from memory so we are sure it is no longer used. Step38: We need to import this Keras function for loading the model. Step39: Loading the model is then just a single function-call, as it should be. Step40: We can then use the model again e.g. to make predictions. We get the first 9 images from the test-set and their true class-numbers. Step41: We then use the restored model to predict the class-numbers for those images. Step42: Get the class-numbers as integers. Step43: Plot the images with their true and predicted class-numbers. Step44: Visualization of Layer Weights and Outputs Helper-function for plotting convolutional weights Step45: Get Layers Keras has a simple way of listing the layers in the model. Step46: We count the indices to get the layers we want. The input-layer has index 0. Step47: The first convolutional layer has index 2. Step48: The second convolutional layer has index 4. Step49: Convolutional Weights Now that we have the layers we can easily get their weights. Step50: This gives us a 4-rank tensor. 
Step51: Plot the weights using the helper-function from above. Step52: We can also get the weights for the second convolutional layer and plot them. Step53: Helper-function for plotting the output of a convolutional layer Step54: Input Image Helper-function for plotting a single image. Step55: Plot an image from the test-set which will be used as an example below. Step56: Output of Convolutional Layer In order to show the output of a convolutional layer, we can create another Functional Model using the same input as the original model, but the output is now taken from the convolutional layer that we are interested in. Step57: This creates a new model-object where we can call the typical Keras functions. To get the output of the convolutional layer we call the predict() function with the input image. Step58: We can then plot the images for all 36 channels.
Python Code: %matplotlib inline import matplotlib.pyplot as plt import tensorflow as tf import numpy as np import math Explanation: TensorFlow Tutorial #03-C Keras API by Magnus Erik Hvass Pedersen / GitHub / Videos on YouTube Introduction Tutorial #02 showed how to implement a Convolutional Neural Network in TensorFlow. We made a few helper-functions for creating the layers in the network. It is essential to have a good high-level API because it makes it much easier to implement complex models, and it lowers the risk of errors. There are several of these builder API's available for TensorFlow: PrettyTensor (Tutorial #03), Layers API (Tutorial #03-B), and several others. But they were never really finished and now they seem to be more or less abandoned by their developers. This tutorial is about the Keras API which is already highly developed with very good documentation - and the development continues. It seems likely that Keras will be the standard API for TensorFlow in the future so it is recommended that you use it instead of the other APIs. The author of Keras has written a blog-post on his API design philosophy which you should read. Flowchart The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. See Tutorial #02 for a more detailed description of convolution. There are two convolutional layers, each followed by a down-sampling using max-pooling (not shown in this flowchart). Then there are two fully-connected layers ending in a softmax-classifier. Imports End of explanation from tensorflow.keras.models import Sequential from tensorflow.keras.layers import InputLayer, Input from tensorflow.keras.layers import Reshape, MaxPooling2D from tensorflow.keras.layers import Conv2D, Dense, Flatten Explanation: We need to import several things from Keras. End of explanation tf.__version__ Explanation: This was developed using Python 3.6 (Anaconda) and TensorFlow version: End of explanation from mnist import MNIST data = MNIST(data_dir="data/MNIST/") Explanation: Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path. End of explanation print("Size of:") print("- Training-set:\t\t{}".format(data.num_train)) print("- Validation-set:\t{}".format(data.num_val)) print("- Test-set:\t\t{}".format(data.num_test)) Explanation: The MNIST data-set has now been loaded and consists of 70.000 images and class-numbers for the images. The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial. End of explanation # The number of pixels in each dimension of an image. img_size = data.img_size # The images are stored in one-dimensional arrays of this length. img_size_flat = data.img_size_flat # Tuple with height and width of images used to reshape arrays. img_shape = data.img_shape # Tuple with height, width and depth used to reshape arrays. # This is used for reshaping in Keras. img_shape_full = data.img_shape_full # Number of classes, one class for each of 10 digits. num_classes = data.num_classes # Number of colour channels for the images: 1 channel for gray-scale. num_channels = data.num_channels Explanation: Copy some of the data-dimensions for convenience. End of explanation def plot_images(images, cls_true, cls_pred=None): assert len(images) == len(cls_true) == 9 # Create figure with 3x3 sub-plots. fig, axes = plt.subplots(3, 3) fig.subplots_adjust(hspace=0.3, wspace=0.3) for i, ax in enumerate(axes.flat): # Plot image. 
ax.imshow(images[i].reshape(img_shape), cmap='binary') # Show true and predicted classes. if cls_pred is None: xlabel = "True: {0}".format(cls_true[i]) else: xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i]) # Show the classes as the label on the x-axis. ax.set_xlabel(xlabel) # Remove ticks from the plot. ax.set_xticks([]) ax.set_yticks([]) # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show() Explanation: Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image. End of explanation # Get the first images from the test-set. images = data.x_test[0:9] # Get the true classes for those images. cls_true = data.y_test_cls[0:9] # Plot the images and labels using our helper-function above. plot_images(images=images, cls_true=cls_true) Explanation: Plot a few images to see if data is correct End of explanation def plot_example_errors(cls_pred): # cls_pred is an array of the predicted class-number for # all images in the test-set. # Boolean array whether the predicted class is incorrect. incorrect = (cls_pred != data.y_test_cls) # Get the images from the test-set that have been # incorrectly classified. images = data.x_test[incorrect] # Get the predicted classes for those images. cls_pred = cls_pred[incorrect] # Get the true classes for those images. cls_true = data.y_test_cls[incorrect] # Plot the first 9 images. plot_images(images=images[0:9], cls_true=cls_true[0:9], cls_pred=cls_pred[0:9]) Explanation: Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified. End of explanation if False: x_pretty = pt.wrap(x_image) with pt.defaults_scope(activation_fn=tf.nn.relu): y_pred, loss = x_pretty.\ conv2d(kernel=5, depth=16, name='layer_conv1').\ max_pool(kernel=2, stride=2).\ conv2d(kernel=5, depth=36, name='layer_conv2').\ max_pool(kernel=2, stride=2).\ flatten().\ fully_connected(size=128, name='layer_fc1').\ softmax_classifier(num_classes=num_classes, labels=y_true) Explanation: PrettyTensor API This is how the Convolutional Neural Network was implemented in Tutorial #03 using the PrettyTensor API. It is shown here for easy comparison to the Keras implementation below. End of explanation # Start construction of the Keras Sequential model. model = Sequential() # Add an input layer which is similar to a feed_dict in TensorFlow. # Note that the input-shape must be a tuple containing the image-size. model.add(InputLayer(input_shape=(img_size_flat,))) # The input is a flattened array with 784 elements, # but the convolutional layers expect images with shape (28, 28, 1) model.add(Reshape(img_shape_full)) # First convolutional layer with ReLU-activation and max-pooling. model.add(Conv2D(kernel_size=5, strides=1, filters=16, padding='same', activation='relu', name='layer_conv1')) model.add(MaxPooling2D(pool_size=2, strides=2)) # Second convolutional layer with ReLU-activation and max-pooling. model.add(Conv2D(kernel_size=5, strides=1, filters=36, padding='same', activation='relu', name='layer_conv2')) model.add(MaxPooling2D(pool_size=2, strides=2)) # Flatten the 4-rank output of the convolutional layers # to 2-rank that can be input to a fully-connected / dense layer. model.add(Flatten()) # First fully-connected / dense layer with ReLU-activation. model.add(Dense(128, activation='relu')) # Last fully-connected / dense layer with softmax-activation # for use in classification. 
model.add(Dense(num_classes, activation='softmax')) Explanation: Sequential Model The Keras API has two modes of constructing Neural Networks. The simplest is the Sequential Model which only allows for the layers to be added in sequence. End of explanation from tensorflow.keras.optimizers import Adam optimizer = Adam(lr=1e-3) Explanation: Model Compilation The Neural Network has now been defined and must be finalized by adding a loss-function, optimizer and performance metrics. This is called model "compilation" in Keras. We can either define the optimizer using a string, or if we want more control of its parameters then we need to instantiate an object. For example, we can set the learning-rate. End of explanation model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy']) Explanation: For a classification-problem such as MNIST which has 10 possible classes, we need to use the loss-function called categorical_crossentropy. The performance metric we are interested in is the classification accuracy. End of explanation model.fit(x=data.x_train, y=data.y_train, epochs=1, batch_size=128) Explanation: Training Now that the model has been fully defined with loss-function and optimizer, we can train it. This function takes numpy-arrays and performs the given number of training epochs using the given batch-size. An epoch is one full use of the entire training-set. So for 10 epochs we would iterate randomly over the entire training-set 10 times. End of explanation result = model.evaluate(x=data.x_test, y=data.y_test) Explanation: Evaluation Now that the model has been trained we can test its performance on the test-set. This also uses numpy-arrays as input. End of explanation for name, value in zip(model.metrics_names, result): print(name, value) Explanation: We can print all the performance metrics for the test-set. End of explanation print("{0}: {1:.2%}".format(model.metrics_names[1], result[1])) Explanation: Or we can just print the classification accuracy. End of explanation images = data.x_test[0:9] Explanation: Prediction We can also predict the classification for new images. We will just use some images from the test-set but you could load your own images into numpy arrays and use those instead. End of explanation cls_true = data.y_test_cls[0:9] Explanation: These are the true class-number for those images. This is only used when plotting the images. End of explanation y_pred = model.predict(x=images) Explanation: Get the predicted classes as One-Hot encoded arrays. End of explanation cls_pred = np.argmax(y_pred, axis=1) plot_images(images=images, cls_true=cls_true, cls_pred=cls_pred) Explanation: Get the predicted classes as integers. End of explanation y_pred = model.predict(x=data.x_test) Explanation: Examples of Mis-Classified Images We can plot some examples of mis-classified images from the test-set. First we get the predicted classes for all the images in the test-set: End of explanation cls_pred = np.argmax(y_pred, axis=1) Explanation: Then we convert the predicted class-numbers from One-Hot encoded arrays to integers. End of explanation plot_example_errors(cls_pred) Explanation: Plot some of the mis-classified images. End of explanation # Create an input layer which is similar to a feed_dict in TensorFlow. # Note that the input-shape must be a tuple containing the image-size. inputs = Input(shape=(img_size_flat,)) # Variable used for building the Neural Network. net = inputs # The input is an image as a flattened array with 784 elements. 
# But the convolutional layers expect images with shape (28, 28, 1) net = Reshape(img_shape_full)(net) # First convolutional layer with ReLU-activation and max-pooling. net = Conv2D(kernel_size=5, strides=1, filters=16, padding='same', activation='relu', name='layer_conv1')(net) net = MaxPooling2D(pool_size=2, strides=2)(net) # Second convolutional layer with ReLU-activation and max-pooling. net = Conv2D(kernel_size=5, strides=1, filters=36, padding='same', activation='relu', name='layer_conv2')(net) net = MaxPooling2D(pool_size=2, strides=2)(net) # Flatten the output of the conv-layer from 4-dim to 2-dim. net = Flatten()(net) # First fully-connected / dense layer with ReLU-activation. net = Dense(128, activation='relu')(net) # Last fully-connected / dense layer with softmax-activation # so it can be used for classification. net = Dense(num_classes, activation='softmax')(net) # Output of the Neural Network. outputs = net Explanation: Functional Model The Keras API can also be used to construct more complicated networks using the Functional Model. This may look a little confusing at first, because each call to the Keras API will create and return an instance that is itself callable. It is not clear whether it is a function or an object - but we can call it as if it is a function. This allows us to build computational graphs that are more complex than the Sequential Model allows. End of explanation from tensorflow.python.keras.models import Model Explanation: Model Compilation We have now defined the architecture of the model with its input and output. We now have to create a Keras model and compile it with a loss-function and optimizer, so it is ready for training. End of explanation model2 = Model(inputs=inputs, outputs=outputs) Explanation: Create a new instance of the Keras Functional Model. We give it the inputs and outputs of the Convolutional Neural Network that we constructed above. End of explanation model2.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) Explanation: Compile the Keras model using the RMSprop optimizer and with a loss-function for multiple categories. The only performance metric we are interested in is the classification accuracy, but you could use a list of metrics here. End of explanation model2.fit(x=data.x_train, y=data.y_train, epochs=1, batch_size=128) Explanation: Training The model has now been defined and compiled so it can be trained using the same fit() function as used in the Sequential Model above. This also takes numpy-arrays as input. End of explanation result = model2.evaluate(x=data.x_test, y=data.y_test) Explanation: Evaluation Once the model has been trained we can evaluate its performance on the test-set. This is the same syntax as for the Sequential Model. End of explanation for name, value in zip(model2.metrics_names, result): print(name, value) Explanation: The result is a list of values, containing the loss-value and all the metrics we defined when we compiled the model. Note that 'accuracy' is now called 'acc' which is a small inconsistency. End of explanation print("{0}: {1:.2%}".format(model2.metrics_names[1], result[1])) Explanation: We can also print the classification accuracy as a percentage: End of explanation y_pred = model2.predict(x=data.x_test) Explanation: Examples of Mis-Classified Images We can plot some examples of mis-classified images from the test-set. 
First we get the predicted classes for all the images in the test-set: End of explanation cls_pred = np.argmax(y_pred, axis=1) Explanation: Then we convert the predicted class-numbers from One-Hot encoded arrays to integers. End of explanation plot_example_errors(cls_pred) Explanation: Plot some of the mis-classified images. End of explanation path_model = 'model.keras' Explanation: Save & Load Model NOTE: You need to install h5py for this to work! Tutorial #04 was about saving and restoring the weights of a model using native TensorFlow code. It was an absolutely horrible API! Fortunately, Keras makes this very easy. This is the file-path where we want to save the Keras model. End of explanation model2.save(path_model) Explanation: Saving a Keras model with the trained weights is then just a single function call, as it should be. End of explanation del model2 Explanation: Delete the model from memory so we are sure it is no longer used. End of explanation from tensorflow.python.keras.models import load_model Explanation: We need to import this Keras function for loading the model. End of explanation model3 = load_model(path_model) Explanation: Loading the model is then just a single function-call, as it should be. End of explanation images = data.x_test[0:9] cls_true = data.y_test_cls[0:9] Explanation: We can then use the model again e.g. to make predictions. We get the first 9 images from the test-set and their true class-numbers. End of explanation y_pred = model3.predict(x=images) Explanation: We then use the restored model to predict the class-numbers for those images. End of explanation cls_pred = np.argmax(y_pred, axis=1) Explanation: Get the class-numbers as integers. End of explanation plot_images(images=images, cls_pred=cls_pred, cls_true=cls_true) Explanation: Plot the images with their true and predicted class-numbers. End of explanation def plot_conv_weights(weights, input_channel=0): # Get the lowest and highest values for the weights. # This is used to correct the colour intensity across # the images so they can be compared with each other. w_min = np.min(weights) w_max = np.max(weights) # Number of filters used in the conv. layer. num_filters = weights.shape[3] # Number of grids to plot. # Rounded-up, square-root of the number of filters. num_grids = math.ceil(math.sqrt(num_filters)) # Create figure with a grid of sub-plots. fig, axes = plt.subplots(num_grids, num_grids) # Plot all the filter-weights. for i, ax in enumerate(axes.flat): # Only plot the valid filter-weights. if i<num_filters: # Get the weights for the i'th filter of the input channel. # See new_conv_layer() for details on the format # of this 4-dim tensor. img = weights[:, :, input_channel, i] # Plot image. ax.imshow(img, vmin=w_min, vmax=w_max, interpolation='nearest', cmap='seismic') # Remove ticks from the plot. ax.set_xticks([]) ax.set_yticks([]) # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show() Explanation: Visualization of Layer Weights and Outputs Helper-function for plotting convolutional weights End of explanation model3.summary() Explanation: Get Layers Keras has a simple way of listing the layers in the model. End of explanation layer_input = model3.layers[0] Explanation: We count the indices to get the layers we want. The input-layer has index 0. End of explanation layer_conv1 = model3.layers[2] layer_conv1 Explanation: The first convolutional layer has index 2. 
End of explanation layer_conv2 = model3.layers[4] Explanation: The second convolutional layer has index 4. End of explanation weights_conv1 = layer_conv1.get_weights()[0] Explanation: Convolutional Weights Now that we have the layers we can easily get their weights. End of explanation weights_conv1.shape Explanation: This gives us a 4-rank tensor. End of explanation plot_conv_weights(weights=weights_conv1, input_channel=0) Explanation: Plot the weights using the helper-function from above. End of explanation weights_conv2 = layer_conv2.get_weights()[0] plot_conv_weights(weights=weights_conv2, input_channel=0) Explanation: We can also get the weights for the second convolutional layer and plot them. End of explanation def plot_conv_output(values): # Number of filters used in the conv. layer. num_filters = values.shape[3] # Number of grids to plot. # Rounded-up, square-root of the number of filters. num_grids = math.ceil(math.sqrt(num_filters)) # Create figure with a grid of sub-plots. fig, axes = plt.subplots(num_grids, num_grids) # Plot the output images of all the filters. for i, ax in enumerate(axes.flat): # Only plot the images for valid filters. if i<num_filters: # Get the output image of using the i'th filter. img = values[0, :, :, i] # Plot image. ax.imshow(img, interpolation='nearest', cmap='binary') # Remove ticks from the plot. ax.set_xticks([]) ax.set_yticks([]) # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show() Explanation: Helper-function for plotting the output of a convolutional layer End of explanation def plot_image(image): plt.imshow(image.reshape(img_shape), interpolation='nearest', cmap='binary') plt.show() Explanation: Input Image Helper-function for plotting a single image. End of explanation image1 = data.x_test[0] plot_image(image1) Explanation: Plot an image from the test-set which will be used as an example below. End of explanation output_conv2 = Model(inputs=layer_input.input, outputs=layer_conv2.output) Explanation: Output of Convolutional Layer In order to show the output of a convolutional layer, we can create another Functional Model using the same input as the original model, but the output is now taken from the convolutional layer that we are interested in. End of explanation layer_output2 = output_conv2.predict(np.array([image1])) layer_output2.shape Explanation: This creates a new model-object where we can call the typical Keras functions. To get the output of the convoloutional layer we call the predict() function with the input image. End of explanation plot_conv_output(values=layer_output2) Explanation: We can then plot the images for all 36 channels. End of explanation
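The notebook above repeatedly looks at mis-classified images; a small companion sketch for summarising those errors numerically with NumPy (toy labels are shown here; in the notebook one would pass cls_pred and data.y_test_cls):

```python
import numpy as np

def error_summary(cls_pred, cls_true, num_classes=10):
    # Error rate plus a simple confusion matrix (rows: true class, columns: predicted class).
    confusion = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(cls_true, cls_pred):
        confusion[t, p] += 1
    return np.mean(cls_pred != cls_true), confusion

err_rate, confusion = error_summary(np.array([1, 2, 3]), np.array([1, 2, 2]), num_classes=4)
print(err_rate)  # 0.333..., one of the three toy predictions is wrong
```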
14,806
Given the following text description, write Python code to implement the functionality described below step by step Description: WGAN-GP with R-GCN for the generation of small molecular graphs Author Step1: Dataset The dataset used in this tutorial is a quantum mechanics dataset (QM9), obtained from MoleculeNet. Although many feature and label columns come with the dataset, we'll only focus on the SMILES column. The QM9 dataset is a good first dataset to work with for generating graphs, as the maximum number of heavy (non-hydrogen) atoms found in a molecule is only nine. Step2: Define helper functions These helper functions will help convert SMILES to graphs and graphs to molecule objects. Representing a molecular graph. Molecules can naturally be expressed as undirected graphs G = (V, E), where V is a set of vertices (atoms), and E a set of edges (bonds). As for this implementation, each graph (molecule) will be represented as an adjacency tensor A, which encodes existence/non-existence of atom-pairs with their one-hot encoded bond types stretching an extra dimension, and a feature tensor H, which for each atom, one-hot encodes its atom type. Notice, as hydrogen atoms can be inferred by RDKit, hydrogen atoms are excluded from A and H for easier modeling. Step3: Generate training set To save training time, we'll only use a tenth of the QM9 dataset. Step4: Model The idea is to implement a generator network and a discriminator network via WGAN-GP, that will result in a generator network that can generate small novel molecules (small graphs). The generator network needs to be able to map (for each example in the batch) a vector z to a 3-D adjacency tensor (A) and 2-D feature tensor (H). For this, z will first be passed through a fully-connected network, for which the output will be further passed through two separate fully-connected networks. Each of these two fully-connected networks will then output (for each example in the batch) a tanh-activated vector followed by a reshape and softmax to match that of a multi-dimensional adjacency/feature tensor. As the discriminator network will recieves as input a graph (A, H) from either the genrator or from the training set, we'll need to implement graph convolutional layers, which allows us to operate on graphs. This means that input to the discriminator network will first pass through graph convolutional layers, then an average-pooling layer, and finally a few fully-connected layers. The final output should be a scalar (for each example in the batch) which indicates the "realness" of the associated input (in this case a "fake" or "real" molecule). Graph generator Step5: Graph discriminator Graph convolutional layer. The relational graph convolutional layers implements non-linearly transformed neighborhood aggregations. We can define these layers as follows Step6: WGAN-GP Step7: Train the model To save time (if run on a CPU), we'll only train the model for 10 epochs. Step8: Sample novel molecules with the generator
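One detail worth sketching before the code: the relational graph convolution described under the graph-discriminator step aggregates neighbour features through the adjacency tensor, and a per-bond-type degree normalisation D^{-1} can be applied to that aggregation. The tutorial below deliberately leaves the normalisation out, so the following TensorFlow snippet is only an illustrative stand-alone sketch of how it could look, not part of the tutorial's layer:

```python
import tensorflow as tf

def normalized_aggregate(adjacency, features):
    # adjacency: (batch, bond_types, n_atoms, n_atoms); features: (batch, n_atoms, atom_dim)
    degree = tf.reduce_sum(adjacency, axis=-1, keepdims=True)         # per-atom degree per bond type
    inv_degree = tf.math.divide_no_nan(tf.ones_like(degree), degree)  # D^-1, isolated atoms stay zero
    aggregated = tf.matmul(adjacency, features[:, None, :, :])        # sum of neighbour features
    return inv_degree * aggregated

adjacency = tf.random.uniform((2, 5, 9, 9))
features = tf.random.uniform((2, 9, 5))
print(normalized_aggregate(adjacency, features).shape)  # (2, 5, 9, 5)
```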
Python Code: from rdkit import Chem, RDLogger from rdkit.Chem.Draw import IPythonConsole, MolsToGridImage import numpy as np import tensorflow as tf from tensorflow import keras RDLogger.DisableLog("rdApp.*") Explanation: WGAN-GP with R-GCN for the generation of small molecular graphs Author: akensert<br> Date created: 2021/06/30<br> Last modified: 2021/06/30<br> Description: Complete implementation of WGAN-GP with R-GCN to generate novel molecules. Introduction In this tutorial, we implement a generative model for graphs and use it to generate novel molecules. Motivation: The development of new drugs (molecules) can be extremely time-consuming and costly. The use of deep learning models can alleviate the search for good candidate drugs, by predicting properties of known molecules (e.g., solubility, toxicity, affinity to target protein, etc.). As the number of possible molecules is astronomical, the space in which we search for/explore molecules is just a fraction of the entire space. Therefore, it's arguably desirable to implement generative models that can learn to generate novel molecules (which would otherwise have never been explored). References (implementation) The implementation in this tutorial is based on/inspired by the MolGAN paper and DeepChem's Basic MolGAN. Further reading (generative models) Recent implementations of generative models for molecular graphs also include Mol-CycleGAN, GraphVAE and JT-VAE. For more information on generative adverserial networks, see GAN, WGAN and WGAN-GP. Setup Install RDKit RDKit is a collection of cheminformatics and machine-learning software written in C++ and Python. In this tutorial, RDKit is used to conviently and efficiently transform SMILES to molecule objects, and then from those obtain sets of atoms and bonds. SMILES expresses the structure of a given molecule in the form of an ASCII string. The SMILES string is a compact encoding which, for smaller molecules, is relatively human-readable. Encoding molecules as a string both alleviates and facilitates database and/or web searching of a given molecule. RDKit uses algorithms to accurately transform a given SMILES to a molecule object, which can then be used to compute a great number of molecular properties/features. Notice, RDKit is commonly installed via Conda. However, thanks to rdkit_platform_wheels, rdkit can now (for the sake of this tutorial) be installed easily via pip, as follows: pip -q install rdkit-pypi And to allow easy visualization of a molecule objects, Pillow needs to be installed: pip -q install Pillow Import packages End of explanation csv_path = tf.keras.utils.get_file( "qm9.csv", "https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/qm9.csv" ) data = [] with open(csv_path, "r") as f: for line in f.readlines()[1:]: data.append(line.split(",")[1]) # Let's look at a molecule of the dataset smiles = data[1000] print("SMILES:", smiles) molecule = Chem.MolFromSmiles(smiles) print("Num heavy atoms:", molecule.GetNumHeavyAtoms()) molecule Explanation: Dataset The dataset used in this tutorial is a quantum mechanics dataset (QM9), obtained from MoleculeNet. Although many feature and label columns come with the dataset, we'll only focus on the SMILES column. The QM9 dataset is a good first dataset to work with for generating graphs, as the maximum number of heavy (non-hydrogen) atoms found in a molecule is only nine. 
End of explanation atom_mapping = { "C": 0, 0: "C", "N": 1, 1: "N", "O": 2, 2: "O", "F": 3, 3: "F", } bond_mapping = { "SINGLE": 0, 0: Chem.BondType.SINGLE, "DOUBLE": 1, 1: Chem.BondType.DOUBLE, "TRIPLE": 2, 2: Chem.BondType.TRIPLE, "AROMATIC": 3, 3: Chem.BondType.AROMATIC, } NUM_ATOMS = 9 # Maximum number of atoms ATOM_DIM = 4 + 1 # Number of atom types BOND_DIM = 4 + 1 # Number of bond types LATENT_DIM = 64 # Size of the latent space def smiles_to_graph(smiles): # Converts SMILES to molecule object molecule = Chem.MolFromSmiles(smiles) # Initialize adjacency and feature tensor adjacency = np.zeros((BOND_DIM, NUM_ATOMS, NUM_ATOMS), "float32") features = np.zeros((NUM_ATOMS, ATOM_DIM), "float32") # loop over each atom in molecule for atom in molecule.GetAtoms(): i = atom.GetIdx() atom_type = atom_mapping[atom.GetSymbol()] features[i] = np.eye(ATOM_DIM)[atom_type] # loop over one-hop neighbors for neighbor in atom.GetNeighbors(): j = neighbor.GetIdx() bond = molecule.GetBondBetweenAtoms(i, j) bond_type_idx = bond_mapping[bond.GetBondType().name] adjacency[bond_type_idx, [i, j], [j, i]] = 1 # Where no bond, add 1 to last channel (indicating "non-bond") # Notice: channels-first adjacency[-1, np.sum(adjacency, axis=0) == 0] = 1 # Where no atom, add 1 to last column (indicating "non-atom") features[np.where(np.sum(features, axis=1) == 0)[0], -1] = 1 return adjacency, features def graph_to_molecule(graph): # Unpack graph adjacency, features = graph # RWMol is a molecule object intended to be edited molecule = Chem.RWMol() # Remove "no atoms" & atoms with no bonds keep_idx = np.where( (np.argmax(features, axis=1) != ATOM_DIM - 1) & (np.sum(adjacency[:-1], axis=(0, 1)) != 0) )[0] features = features[keep_idx] adjacency = adjacency[:, keep_idx, :][:, :, keep_idx] # Add atoms to molecule for atom_type_idx in np.argmax(features, axis=1): atom = Chem.Atom(atom_mapping[atom_type_idx]) _ = molecule.AddAtom(atom) # Add bonds between atoms in molecule; based on the upper triangles # of the [symmetric] adjacency tensor (bonds_ij, atoms_i, atoms_j) = np.where(np.triu(adjacency) == 1) for (bond_ij, atom_i, atom_j) in zip(bonds_ij, atoms_i, atoms_j): if atom_i == atom_j or bond_ij == BOND_DIM - 1: continue bond_type = bond_mapping[bond_ij] molecule.AddBond(int(atom_i), int(atom_j), bond_type) # Sanitize the molecule; for more information on sanitization, see # https://www.rdkit.org/docs/RDKit_Book.html#molecular-sanitization flag = Chem.SanitizeMol(molecule, catchErrors=True) # Let's be strict. If sanitization fails, return None if flag != Chem.SanitizeFlags.SANITIZE_NONE: return None return molecule # Test helper functions graph_to_molecule(smiles_to_graph(smiles)) Explanation: Define helper functions These helper functions will help convert SMILES to graphs and graphs to molecule objects. Representing a molecular graph. Molecules can naturally be expressed as undirected graphs G = (V, E), where V is a set of vertices (atoms), and E a set of edges (bonds). As for this implementation, each graph (molecule) will be represented as an adjacency tensor A, which encodes existence/non-existence of atom-pairs with their one-hot encoded bond types stretching an extra dimension, and a feature tensor H, which for each atom, one-hot encodes its atom type. Notice, as hydrogen atoms can be inferred by RDKit, hydrogen atoms are excluded from A and H for easier modeling. 
End of explanation adjacency_tensor, feature_tensor = [], [] for smiles in data[::10]: adjacency, features = smiles_to_graph(smiles) adjacency_tensor.append(adjacency) feature_tensor.append(features) adjacency_tensor = np.array(adjacency_tensor) feature_tensor = np.array(feature_tensor) print("adjacency_tensor.shape =", adjacency_tensor.shape) print("feature_tensor.shape =", feature_tensor.shape) Explanation: Generate training set To save training time, we'll only use a tenth of the QM9 dataset. End of explanation def GraphGenerator( dense_units, dropout_rate, latent_dim, adjacency_shape, feature_shape, ): z = keras.layers.Input(shape=(LATENT_DIM,)) # Propagate through one or more densely connected layers x = z for units in dense_units: x = keras.layers.Dense(units, activation="tanh")(x) x = keras.layers.Dropout(dropout_rate)(x) # Map outputs of previous layer (x) to [continuous] adjacency tensors (x_adjacency) x_adjacency = keras.layers.Dense(tf.math.reduce_prod(adjacency_shape))(x) x_adjacency = keras.layers.Reshape(adjacency_shape)(x_adjacency) # Symmetrify tensors in the last two dimensions x_adjacency = (x_adjacency + tf.transpose(x_adjacency, (0, 1, 3, 2))) / 2 x_adjacency = keras.layers.Softmax(axis=1)(x_adjacency) # Map outputs of previous layer (x) to [continuous] feature tensors (x_features) x_features = keras.layers.Dense(tf.math.reduce_prod(feature_shape))(x) x_features = keras.layers.Reshape(feature_shape)(x_features) x_features = keras.layers.Softmax(axis=2)(x_features) return keras.Model(inputs=z, outputs=[x_adjacency, x_features], name="Generator") generator = GraphGenerator( dense_units=[128, 256, 512], dropout_rate=0.2, latent_dim=LATENT_DIM, adjacency_shape=(BOND_DIM, NUM_ATOMS, NUM_ATOMS), feature_shape=(NUM_ATOMS, ATOM_DIM), ) generator.summary() Explanation: Model The idea is to implement a generator network and a discriminator network via WGAN-GP, that will result in a generator network that can generate small novel molecules (small graphs). The generator network needs to be able to map (for each example in the batch) a vector z to a 3-D adjacency tensor (A) and 2-D feature tensor (H). For this, z will first be passed through a fully-connected network, for which the output will be further passed through two separate fully-connected networks. Each of these two fully-connected networks will then output (for each example in the batch) a tanh-activated vector followed by a reshape and softmax to match that of a multi-dimensional adjacency/feature tensor. As the discriminator network will recieves as input a graph (A, H) from either the genrator or from the training set, we'll need to implement graph convolutional layers, which allows us to operate on graphs. This means that input to the discriminator network will first pass through graph convolutional layers, then an average-pooling layer, and finally a few fully-connected layers. The final output should be a scalar (for each example in the batch) which indicates the "realness" of the associated input (in this case a "fake" or "real" molecule). 
Graph generator End of explanation class RelationalGraphConvLayer(keras.layers.Layer): def __init__( self, units=128, activation="relu", use_bias=False, kernel_initializer="glorot_uniform", bias_initializer="zeros", kernel_regularizer=None, bias_regularizer=None, **kwargs ): super().__init__(**kwargs) self.units = units self.activation = keras.activations.get(activation) self.use_bias = use_bias self.kernel_initializer = keras.initializers.get(kernel_initializer) self.bias_initializer = keras.initializers.get(bias_initializer) self.kernel_regularizer = keras.regularizers.get(kernel_regularizer) self.bias_regularizer = keras.regularizers.get(bias_regularizer) def build(self, input_shape): bond_dim = input_shape[0][1] atom_dim = input_shape[1][2] self.kernel = self.add_weight( shape=(bond_dim, atom_dim, self.units), initializer=self.kernel_initializer, regularizer=self.kernel_regularizer, trainable=True, name="W", dtype=tf.float32, ) if self.use_bias: self.bias = self.add_weight( shape=(bond_dim, 1, self.units), initializer=self.bias_initializer, regularizer=self.bias_regularizer, trainable=True, name="b", dtype=tf.float32, ) self.built = True def call(self, inputs, training=False): adjacency, features = inputs # Aggregate information from neighbors x = tf.matmul(adjacency, features[:, None, :, :]) # Apply linear transformation x = tf.matmul(x, self.kernel) if self.use_bias: x += self.bias # Reduce bond types dim x_reduced = tf.reduce_sum(x, axis=1) # Apply non-linear transformation return self.activation(x_reduced) def GraphDiscriminator( gconv_units, dense_units, dropout_rate, adjacency_shape, feature_shape ): adjacency = keras.layers.Input(shape=adjacency_shape) features = keras.layers.Input(shape=feature_shape) # Propagate through one or more graph convolutional layers features_transformed = features for units in gconv_units: features_transformed = RelationalGraphConvLayer(units)( [adjacency, features_transformed] ) # Reduce 2-D representation of molecule to 1-D x = keras.layers.GlobalAveragePooling1D()(features_transformed) # Propagate through one or more densely connected layers for units in dense_units: x = keras.layers.Dense(units, activation="relu")(x) x = keras.layers.Dropout(dropout_rate)(x) # For each molecule, output a single scalar value expressing the # "realness" of the inputted molecule x_out = keras.layers.Dense(1, dtype="float32")(x) return keras.Model(inputs=[adjacency, features], outputs=x_out) discriminator = GraphDiscriminator( gconv_units=[128, 128, 128, 128], dense_units=[512, 512], dropout_rate=0.2, adjacency_shape=(BOND_DIM, NUM_ATOMS, NUM_ATOMS), feature_shape=(NUM_ATOMS, ATOM_DIM), ) discriminator.summary() Explanation: Graph discriminator Graph convolutional layer. The relational graph convolutional layers implements non-linearly transformed neighborhood aggregations. We can define these layers as follows: H^{l+1} = σ(D^{-1} @ A @ H^{l+1} @ W^{l}) Where σ denotes the non-linear transformation (commonly a ReLU activation), A the adjacency tensor, H^{l} the feature tensor at the l:th layer, D^{-1} the inverse diagonal degree tensor of A, and W^{l} the trainable weight tensor at the l:th layer. Specifically, for each bond type (relation), the degree tensor expresses, in the diagonal, the number of bonds attached to each atom. 
Notice, in this tutorial D^{-1} is omitted, for two reasons: (1) it's not obvious how to apply this normalization on the continuous adjacency tensors (generated by the generator), and (2) the performance of the WGAN without normalization seems to work just fine. Furthermore, in contrast to the original paper, no self-loop is defined, as we don't want to train the generator to predict "self-bonding". End of explanation class GraphWGAN(keras.Model): def __init__( self, generator, discriminator, discriminator_steps=1, generator_steps=1, gp_weight=10, **kwargs ): super().__init__(**kwargs) self.generator = generator self.discriminator = discriminator self.discriminator_steps = discriminator_steps self.generator_steps = generator_steps self.gp_weight = gp_weight self.latent_dim = self.generator.input_shape[-1] def compile(self, optimizer_generator, optimizer_discriminator, **kwargs): super().compile(**kwargs) self.optimizer_generator = optimizer_generator self.optimizer_discriminator = optimizer_discriminator self.metric_generator = keras.metrics.Mean(name="loss_gen") self.metric_discriminator = keras.metrics.Mean(name="loss_dis") def train_step(self, inputs): if isinstance(inputs[0], tuple): inputs = inputs[0] graph_real = inputs self.batch_size = tf.shape(inputs[0])[0] # Train the discriminator for one or more steps for _ in range(self.discriminator_steps): z = tf.random.normal((self.batch_size, self.latent_dim)) with tf.GradientTape() as tape: graph_generated = self.generator(z, training=True) loss = self._loss_discriminator(graph_real, graph_generated) grads = tape.gradient(loss, self.discriminator.trainable_weights) self.optimizer_discriminator.apply_gradients( zip(grads, self.discriminator.trainable_weights) ) self.metric_discriminator.update_state(loss) # Train the generator for one or more steps for _ in range(self.generator_steps): z = tf.random.normal((self.batch_size, self.latent_dim)) with tf.GradientTape() as tape: graph_generated = self.generator(z, training=True) loss = self._loss_generator(graph_generated) grads = tape.gradient(loss, self.generator.trainable_weights) self.optimizer_generator.apply_gradients( zip(grads, self.generator.trainable_weights) ) self.metric_generator.update_state(loss) return {m.name: m.result() for m in self.metrics} def _loss_discriminator(self, graph_real, graph_generated): logits_real = self.discriminator(graph_real, training=True) logits_generated = self.discriminator(graph_generated, training=True) loss = tf.reduce_mean(logits_generated) - tf.reduce_mean(logits_real) loss_gp = self._gradient_penalty(graph_real, graph_generated) return loss + loss_gp * self.gp_weight def _loss_generator(self, graph_generated): logits_generated = self.discriminator(graph_generated, training=True) return -tf.reduce_mean(logits_generated) def _gradient_penalty(self, graph_real, graph_generated): # Unpack graphs adjacency_real, features_real = graph_real adjacency_generated, features_generated = graph_generated # Generate interpolated graphs (adjacency_interp and features_interp) alpha = tf.random.uniform([self.batch_size]) alpha = tf.reshape(alpha, (self.batch_size, 1, 1, 1)) adjacency_interp = (adjacency_real * alpha) + (1 - alpha) * adjacency_generated alpha = tf.reshape(alpha, (self.batch_size, 1, 1)) features_interp = (features_real * alpha) + (1 - alpha) * features_generated # Compute the logits of interpolated graphs with tf.GradientTape() as tape: tape.watch(adjacency_interp) tape.watch(features_interp) logits = self.discriminator( [adjacency_interp, 
features_interp], training=True ) # Compute the gradients with respect to the interpolated graphs grads = tape.gradient(logits, [adjacency_interp, features_interp]) # Compute the gradient penalty grads_adjacency_penalty = (1 - tf.norm(grads[0], axis=1)) ** 2 grads_features_penalty = (1 - tf.norm(grads[1], axis=2)) ** 2 return tf.reduce_mean( tf.reduce_mean(grads_adjacency_penalty, axis=(-2, -1)) + tf.reduce_mean(grads_features_penalty, axis=(-1)) ) Explanation: WGAN-GP End of explanation wgan = GraphWGAN(generator, discriminator, discriminator_steps=1) wgan.compile( optimizer_generator=keras.optimizers.Adam(5e-4), optimizer_discriminator=keras.optimizers.Adam(5e-4), ) wgan.fit([adjacency_tensor, feature_tensor], epochs=10, batch_size=16) Explanation: Train the model To save time (if run on a CPU), we'll only train the model for 10 epochs. End of explanation def sample(generator, batch_size): z = tf.random.normal((batch_size, LATENT_DIM)) graph = generator.predict(z) # obtain one-hot encoded adjacency tensor adjacency = tf.argmax(graph[0], axis=1) adjacency = tf.one_hot(adjacency, depth=BOND_DIM, axis=1) # Remove potential self-loops from adjacency adjacency = tf.linalg.set_diag(adjacency, tf.zeros(tf.shape(adjacency)[:-1])) # obtain one-hot encoded feature tensor features = tf.argmax(graph[1], axis=2) features = tf.one_hot(features, depth=ATOM_DIM, axis=2) return [ graph_to_molecule([adjacency[i].numpy(), features[i].numpy()]) for i in range(batch_size) ] molecules = sample(wgan.generator, batch_size=48) MolsToGridImage( [m for m in molecules if m is not None][:25], molsPerRow=5, subImgSize=(150, 150) ) Explanation: Sample novel molecules with the generator End of explanation
14,807
Given the following text description, write Python code to implement the functionality described below step by step Description: Running on 4 cores with 30 GB RAM Step1: Timing individual functions Step2: Testing pipeline as a whole
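A generic sketch of the kind of timing wrapper this notebook builds, using only the standard library (the analysis3 functions that get timed below are defined in the original project, not here):

```python
import time

def time_function(fun, *args, **kwargs):
    # Run fun, print the elapsed wall-clock time, and pass its result through.
    start = time.perf_counter()
    result = fun(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print('RUN TIME: %f s (%f m)' % (elapsed, elapsed / 60))
    return result

time_function(sum, range(1_000_000))  # example with an arbitrary callable
```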
Python Code: import analysis3 as a3 reload(a3) import time def time_function(fun, *args): start = time.time(); result = fun(*args); run_time = time.time() - start; minutes = run_time / 60; print('RUN TIME: %f s (%f m)' % (run_time, minutes)); return result; Explanation: Running on 4 cores with 30 gb RAM End of explanation token = 's275_to_ara3' cert_path = '../userToken.pem' time_function(a3.get_registered, token, cert_path); path = "img/" + token + "_regis.nii" im = time_function(a3.apply_clahe, path); output_ds = time_function(a3.downsample, im, 10000); time_function(a3.save_points, output_ds, "points/" + token + ".csv"); points_path = "points/" + token + ".csv"; time_function(a3.generate_pointcloud, points_path, "output/" + token + "_pointcloud.html"); time_function(a3.get_atlas_annotate, cert_path, True, None); time_function(a3.get_regions, points_path, "atlas/ara3_annotation.nii", "points/" + token + "_regions.csv"); points_region_path = "points/" + token + "_regions.csv"; g = time_function(a3.create_graph, points_region_path, 20, "graphml/" + token + "_graph.graphml"); time_function(a3.plot_graphml3d, g, False, "output/" + token + "_edgegraph.html"); time_function(a3.generate_region_graph, token, points_region_path, "output/" + token + "_regions.html"); time_function(a3.generate_density_graph, "graphml/" + token + "_graph.graphml", "output/" + token + "_density.html", "False-Color Density of " + token); print("Completed pipeline...!") Explanation: Timing individual functions End of explanation token = 's275_to_ara3' cert_path = '../userToken.pem' time_function(a3.run_pipeline, token, cert_path, 5); Explanation: Testing pipeline as a whole End of explanation
14,808
Given the following text description, write Python code to implement the functionality described below step by step Description: Ordinary Differential Equations Exercise 3 Imports Step1: Damped, driven nonlinear pendulum The equations of motion for a simple pendulum of mass $m$, length $l$ are Step4: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$. Step5: Simple pendulum Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy. Integrate the equations of motion. Plot $E/m$ versus time. Plot $\theta(t)$ and $\omega(t)$ versus time. Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant. Anytime you have a differential equation with a a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable. Step7: Damped pendulum Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$. Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$. Decrease your atol and rtol even futher and make sure your solutions have converged. Make a parametric plot of $[\theta(t),\omega(t)]$ versus time. Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\theta \in [-10,10]$ Label your axes and customize your plot to make it beautiful and effective. Step8: Here is an example of the output of your plot_pendulum function that should show a decaying spiral. Step9: Use interact to explore the plot_pendulum function with
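For clarity, the second-order pendulum equation above is integrated as a first-order system in the state vector $\vec{y}(t) = (\theta(t), \omega(t))$, which is the form scipy.integrate.odeint expects:

$$
\frac{d\theta}{dt} = \omega,
\qquad
\frac{d\omega}{dt} = -\frac{g}{\ell}\sin\theta - a\,\omega - b\,\sin(\omega_0 t)
$$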
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import seaborn as sns from scipy.integrate import odeint from IPython.html.widgets import interact, fixed Explanation: Ordinary Differential Equations Exercise 3 Imports End of explanation g = 9.81 # m/s^2 l = 0.5 # length of pendulum, in meters tmax = 50. # seconds t = np.linspace(0, tmax, int(100*tmax)) Explanation: Damped, driven nonlinear pendulum The equations of motion for a simple pendulum of mass $m$, length $l$ are: $$ \frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta $$ When a damping and periodic driving force are added the resulting system has much richer and interesting dynamics: $$ \frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta - a \omega - b \sin(\omega_0 t) $$ In this equation: $a$ governs the strength of the damping. $b$ governs the strength of the driving force. $\omega_0$ is the angular frequency of the driving force. When $a=0$ and $b=0$, the energy/mass is conserved: $$E/m =g\ell(1-\cos(\theta)) + \frac{1}{2}\ell^2\omega^2$$ Basic setup Here are the basic parameters we are going to use for this exercise: End of explanation def derivs(y, t, a, b, omega0): Compute the derivatives of the damped, driven pendulum. Parameters ---------- y : ndarray The solution vector at the current time t[i]: [theta[i],omega[i]]. t : float The current time t[i]. a, b, omega0: float The parameters in the differential equation. Returns ------- dy : ndarray The vector of derviatives at t[i]: [dtheta[i],domega[i]]. theta = y[0] dtheta = y[1] dw = -(g/l)*np.sin(theta) - a*dtheta - b*np.sin(omega0*t) return [dtheta,dw] assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.]) def energy(y): Compute the energy for the state array y. The state array y can have two forms: 1. It could be an ndim=1 array of np.array([theta,omega]) at a single time. 2. It could be an ndim=2 array where each row is the [theta,omega] at single time. Parameters ---------- y : ndarray, list, tuple A solution vector Returns ------- E/m : float (ndim=1) or ndarray (ndim=2) The energy per mass. theta = y[0] omega = y[1] if y.ndim == 1: theta = y[0] omega = y[1] EperM = g*l*(1-np.cos(theta))+.5*(l**2)*omega**2 return EperM if y.ndim == 2: theta = y[:,0] omega = y[:,1] EperM = g*l*(1-np.cos(theta))+.5*(l**2)*omega**2 return EperM assert np.allclose(energy(np.array([np.pi,0])),g) assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1]))) Explanation: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$. End of explanation y0 = [np.pi,0] a = 0 b = 0 omega0 = 0 soln = odeint(derivs,y0,t,args=(a,b,omega0),atol=1e-5, rtol=1e-4) theta = soln[:,0] omega = soln[:,1] plt.plot(t,energy(soln)); plt.title('Energy per Mass vs time'); plt.plot(t,theta,label='$\Theta(t)$'); plt.title('Theta and Omega vs Time'); plt.ylim((-np.pi,2*np.pi)); plt.plot(t,omega,label='$\omega(t)$'); plt.legend(); plt.xlabel('Time'); plt.ylabel('Omega,Theta'); assert True # leave this to grade the two plots and their tuning of atol, rtol. Explanation: Simple pendulum Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy. Integrate the equations of motion. Plot $E/m$ versus time. Plot $\theta(t)$ and $\omega(t)$ versus time. 
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant. Anytime you have a differential equation with a a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable. End of explanation def plot_pendulum(a=0.0, b=0.0, omega0=0.0): Integrate the damped, driven pendulum and make a phase plot of the solution. y0 = [-np.pi + .1,0] soln = odeint(derivs,y0,t,args=(a,b,omega0),atol=1e-5, rtol=1e-4) theta = soln[:,0] omega = soln[:,1] plt.figure(figsize=(10,6)) plt.plot(theta,omega) plt.title('Pendlum Motion') plt.xlabel('Theta') plt.ylabel('Omega') Explanation: Damped pendulum Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$. Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$. Decrease your atol and rtol even futher and make sure your solutions have converged. Make a parametric plot of $[\theta(t),\omega(t)]$ versus time. Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\theta \in [-10,10]$ Label your axes and customize your plot to make it beautiful and effective. End of explanation plot_pendulum(0.5, 0.0, 0.0) Explanation: Here is an example of the output of your plot_pendulum function that should show a decaying spiral. End of explanation interact(plot_pendulum,a=(0.0,1.0,.1),b=(0.0,10.0,.1),omega0=(0.0,10.0,.1)); Explanation: Use interact to explore the plot_pendulum function with: a: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$. b: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$. omega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$. End of explanation
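The "keep tightening atol and rtol until the solution stops changing" advice above can also be checked programmatically; a sketch, assuming derivs and the time grid t are defined as in the notebook:

```python
import numpy as np
from scipy.integrate import odeint

def converged(derivs, y0, t, args, atol, rtol):
    # Compare a solution against one computed with 10x tighter tolerances.
    loose = odeint(derivs, y0, t, args=args, atol=atol, rtol=rtol)
    tight = odeint(derivs, y0, t, args=args, atol=atol / 10, rtol=rtol / 10)
    return np.allclose(loose, tight, atol=1e-6)

# e.g. converged(derivs, [-np.pi + 0.1, 0.0], t, (0.5, 0.0, 0.0), atol=1e-5, rtol=1e-4)
```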
14,809
Given the following text description, write Python code to implement the functionality described below step by step Description: Example 1 - writing UDAFs the simple way This small example shows how simple it could be to write a UDAF in Spark with moderate additions to the existing API. It takes the example published in the Databricks blog to add an operator for the harmonic mean. Let's get done with some imports first Step1: Here is the definition of the harmonic mean, which is a simple function. Given a column containing floating point values, it is defined as such Step2: This is exactly how one would want to code it in numpy, pandas, and using basic Spark constructs. In fact, you can run this code straight inside Pandas Step3: This code has a number of problems if you want to use it in Spark however Step4: Something to immediately note is that the computation is lazy Step5: The compute function not only triggest the computation, but also provides more debugging information into what is happening. We are going to introspect the compiler passes to see how things get transformed. Step6: Here is the initial graph of computation, as we built it. Click on the nodes to have more detailed information. It is very clear that two computations are going to be run in parallel from the same dataset, and that caching will happen right before forking these computations. Step7: The important part to notice though is that after the count1 and the sum4 nodes, all the other nodes are observables (local values). They do not involve distributed datasets anymore, so they are very cheap to compute. The Karps compiler is going to optimize the distributed part to reduce the amount of computations, everything after that is not important for now. One of the first phases merges the inverse3 node into a single lineage, and then fuses the aggregations sum4 and count1 into a single joint aggregation. If you look at the graph below, the new nodes sum4 and count1 are in fact dummy projections that operate on local data. All the hard work is being done in a new node with a horrible name Step8: Now that we only perform a single aggregation, do we still need to cache the data? We don't! The next compiler phase is going to inspect the autocache nodes, and see how many times they get to be aggregated, and remove them if possible. In this case, it correctly infers that we do not need this autocache0 operator. Here is the final graph that gets executed Step9: More work could be done to simplify the local nodes, but this is outside the scope of this first project. As a conclusion, we wrote some minimalistic, poorly performing code in python. Karps turned it into high-performance operations that can then be optimized easily by the Spark SQL engine. In fact, this code in practice is faster than a UDAF because it can directly understood by Tungsten. In addition, this function can be reused inside aggregations with no change, as we will see. As a summary, karps lets you write the code you want to write, and turns it into a program that is Step10: Or in short if you do not want to see what is happening
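For reference, the quantity being built here is the ordinary harmonic mean, n / sum(1/x); it can be written directly in NumPy and cross-checked against scipy.stats.hmean, independently of the Karps API used below:

```python
import numpy as np
from scipy.stats import hmean

x = np.array([1.0, 2.0])
harmonic = len(x) / np.sum(1.0 / x)
print(harmonic, hmean(x))  # both print 1.3333...
```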
Python Code: # The main function import karps as ks # The standard library import karps.functions as f # Some tools to display the computation process: from karps.display import show_phase Explanation: Example 1 - writing UDAFs the simple way This small example shows how simple it could be to write a UDAF in Spark with moderate additions to the existing API. It takes the example published in the Databricks blog to add an operator for the harmonic mean. Let's get done with some imports first: End of explanation def harmonic_mean(col): count = f.as_double(f.count(col)) inv_sum = 1.0/f.sum(1.0/col) return inv_sum * count Explanation: Here is the definition of the harmonic mean, which is a simple function. Given a column containing floating point values, it is defined as such: End of explanation # Using Pandas to evaluate our function: import pandas as pd pandas_df = pd.DataFrame([1.0, 2.0]) harmonic_mean(pandas_df) Explanation: This is exactly how one would want to code it in numpy, pandas, and using basic Spark constructs. In fact, you can run this code straight inside Pandas: End of explanation # Create a HUGE dataframe df = ks.dataframe([1.0, 2.0], name="my_input") df # And apply our function: cached_df = f.autocache(df) hmean = harmonic_mean(cached_df) hmean Explanation: This code has a number of problems if you want to use it in Spark however: - reusability: this function works great on the column of a dataframe or of a column, but it cannot be reused with groupby for instance. - performance: most Spark tutorials will teach you that as it stands, this function has crappy performance. It will recompute the input twice, which may be very expensive in some cases. This is why if one wants to use it, it is immediately advised to use the cache function of Spark, which still requires all the data to stay materialized. Karps provides the convenient autocache operator which automatically decide if caching is appropriate. We are going to use it on this simple example: End of explanation # All computations happen within a session, which keeps track of the state in Spark. s = ks.session("demo1") Explanation: Something to immediately note is that the computation is lazy: nothing gets computed and all you get is an object called multiply6 of type double. Let's compute it. Thanks to lazy evaluation, the Karps compiler can rearrange the computations to make them run faster: End of explanation comp = s.compute(hmean) Explanation: The compute function not only triggest the computation, but also provides more debugging information into what is happening. We are going to introspect the compiler passes to see how things get transformed. End of explanation show_phase(comp, "initial") show_phase(comp, "MERGE_PREAGG_AGGREGATIONS") Explanation: Here is the initial graph of computation, as we built it. Click on the nodes to have more detailed information. It is very clear that two computations are going to be run in parallel from the same dataset, and that caching will happen right before forking these computations. End of explanation show_phase(comp, "MERGE_AGGREGATIONS") Explanation: The important part to notice though is that after the count1 and the sum4 nodes, all the other nodes are observables (local values). They do not involve distributed datasets anymore, so they are very cheap to compute. The Karps compiler is going to optimize the distributed part to reduce the amount of computations, everything after that is not important for now. 
One of the first phases merges the inverse3 node into a single lineage, and then fuses the aggregations sum4 and count1 into a single joint aggregation. If you look at the graph below, the new nodes sum4 and count1 are in fact dummy projections that operate on local data. All the hard work is being done in a new node with a horrible name: autocache0_ks_aggstruct.... End of explanation show_phase(comp, "final") Explanation: Now that we only perform a single aggregation, do we still need to cache the data? We don't! The next compiler phase is going to inspect the autocache nodes, and see how many times they get to be aggregated, and remove them if possible. In this case, it correctly infers that we do not need this autocache0 operator. Here is the final graph that gets executed: End of explanation comp.values() Explanation: More work could be done to simplify the local nodes, but this is outside the scope of this first project. As a conclusion, we wrote some minimalistic, poorly performing code in python. Karps turned it into high-performance operations that can then be optimized easily by the Spark SQL engine. In fact, this code in practice is faster than a UDAF because it can directly understood by Tungsten. In addition, this function can be reused inside aggregations with no change, as we will see. As a summary, karps lets you write the code you want to write, and turns it into a program that is: - faster (sometimes as fast or faster than manually crafted code) - reusable and easy to compose - easy to introspect thanks to tensorboard - easy to test independently And to get the actual values: End of explanation s.eval(hmean) show_phase(comp, "parsed") show_phase(comp, "physical") show_phase(comp, "rdd") comp.dump_profile("karps_trace_1.json") Explanation: Or in short if you do not want to see what is happening: End of explanation
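As a hedged illustration of what the optimized plan boils down to, here is the same computation written against plain PySpark (ordinary Spark SQL functions, not the Karps API): one distributed pass computes sum(1/x) and count together, and the division is a cheap local step, which is exactly the shape of the fused aggregation graph above. Assumes a local PySpark installation.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.master("local[1]").appName("hmean_sketch").getOrCreate()
df = spark.createDataFrame([(1.0,), (2.0,)], ["value"])

# One distributed pass computing both aggregates together,
# mirroring the fused aggregation node in the optimized graph.
row = df.agg(
    F.sum(F.lit(1.0) / F.col("value")).alias("inv_sum"),
    F.count(F.col("value")).alias("cnt"),
).first()

# The remaining work is a cheap local (observable) computation.
print(row["cnt"] / row["inv_sum"])  # 1.333... for [1.0, 2.0]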
14,810
Given the following text description, write Python code to implement the functionality described below step by step Description: Repetitive DNA elements ("repeats") are DNA sequences prevalent in genomes, especially of higher eukaryotes. Repeats make up about 50% of the human genome and over 80% of the maize genome. Repeats can be categorized as interspersed, where similar DNA sequences are spread throughout the genome, or tandem, where similar sequences are adjacent (see Treangen and Salzberg). Some interspersed repeats are long segmental duplications, but most are relatively short transposons and retrotransposons. Though repeats are sometimes referred to as “junk,” they are involved in processes of current scientific interest, including genome expansion, speciation, and epigenetic regulation (see Fedoroff). Some are still actively expressed and duplicated, including in the human genome (see Witherspoon et al, Tyekucheva et al). RepeatMasker RepeatMasker is both a tool for identifying repeats in a genome sequence, and a database of repeats that have been found. The database covers some well known model species, like human, chimpanzee, gorilla, rhesus, rat, mouse, horse, cow, cat, dog, chicken, zebrafish, bee, fruitfly and roundworm. People often use RepeatMasker to remove ("mask out") repetitive sequences from the genome so that they can be ignored (or otherwise treated specially) in later analyses, though that's not our goal here. It's intructive to click on some of the species listed in the database and examine the associated bar and pie charts describing their repeat content. For example, note the differences between the bar charts for human and mouse, especially for SINE/Alu and LINE/L1. Working with RepeatMasker databases Let's obtain and parse a RepeatMasker database. We'll start with roundworm because it's relatively small (only about 2.5 megabytes compressed). Step1: Above are the first several lines of the .out.gz file for the roundworm (C. elegans). The columns have headers, which are somewhat helpful. More detail is available in the RepeatMasker documentation under "How to read the results". (Note that in addition to the 14 fields descrived in the documentation, there's also a 15th ID field.) Here's an extremely simple class that parses a line from these files and stores the individual values in its fields Step2: We can parse a file into a list of Repeat objects Step3: Extracting repeats from the genome in FASTA format Now let's obtain the genome for the roundworm in FASTA format. For more information on FASTA, see the FASTA notebook. As seen above, the name of the genome assembly used by RepeatMasker is ce10. We can get it from the UCSC server. It's around 30 MB. Step4: Let's load chromosome I into a string so that we can see the sequences of the repeats. Step5: Note the combination of lowercase and uppercase. Actually, that relates to our discussion here. The lowercase stretches are repeats! The UCSC genome sequences use the lowercase/uppercase distinction to make it clear where the repeats are -- and they know this because they ran RepeatMasker on the genome beforehand. In this case, the two repeats you can see are both simple hexamer repeats. Also, note that their position in the genome corresponds to the first two rows of the RepeatMasker database that we printed above. We write a function that, given a Repeat and given a dictionary containing the sequences of all the chromosomes in the genome, outputs each repeat string. 
Step6: Let's specifically try to extract a repeat from the DNA/CMC-Chapaev family. Step7: How are repeats related? Look at the repeat family/class names for the first several repeats in the roundworm database
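Before the code, a small hedged sketch of the soft-masking convention mentioned in the description: in the UCSC FASTA files, lowercase bases mark RepeatMasker-annotated repeats, so the repeat content of a sequence can be estimated by counting lowercase characters (the string below is a toy example, not real ce10 sequence).
def masked_fraction(seq):
    # fraction of a soft-masked sequence that is lowercase, i.e. repeat-masked
    lower = sum(1 for c in seq if c.islower())
    return lower / max(1, len(seq))

print(masked_fraction('gcctaagcctAAGCCTAAGCCTAAgcctaag'))
# with the genome dictionary built below, masked_fraction(genome['chrI'])
# would give the masked fraction of chromosome I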
Python Code: import urllib.request rm_site = 'http://www.repeatmasker.org' fn = 'ce10.fa.out.gz' url = '%s/genomes/ce10/RepeatMasker-rm405-db20140131/%s' % (rm_site, fn) urllib.request.urlretrieve(url, fn) import gzip import itertools fh = gzip.open(fn, 'rt') for ln in itertools.islice(fh, 10): print(ln, end='') Explanation: Repetitive DNA elements ("repeats") are DNA sequences prevalent in genomes, especially of higher eukaryotes. Repeats make up about 50% of the human genome and over 80% of the maize genome. Repeats can be categorized as interspersed, where similar DNA sequences are spread throughout the genome, or tandem, where similar sequences are adjacent (see Treangen and Salzberg). Some interspersed repeats are long segmental duplications, but most are relatively short transposons and retrotransposons. Though repeats are sometimes referred to as “junk,” they are involved in processes of current scientific interest, including genome expansion, speciation, and epigenetic regulation (see Fedoroff). Some are still actively expressed and duplicated, including in the human genome (see Witherspoon et al, Tyekucheva et al). RepeatMasker RepeatMasker is both a tool for identifying repeats in a genome sequence, and a database of repeats that have been found. The database covers some well known model species, like human, chimpanzee, gorilla, rhesus, rat, mouse, horse, cow, cat, dog, chicken, zebrafish, bee, fruitfly and roundworm. People often use RepeatMasker to remove ("mask out") repetitive sequences from the genome so that they can be ignored (or otherwise treated specially) in later analyses, though that's not our goal here. It's intructive to click on some of the species listed in the database and examine the associated bar and pie charts describing their repeat content. For example, note the differences between the bar charts for human and mouse, especially for SINE/Alu and LINE/L1. Working with RepeatMasker databases Let's obtain and parse a RepeatMasker database. We'll start with roundworm because it's relatively small (only about 2.5 megabytes compressed). End of explanation class Repeat(object): def __init__(self, ln): # parse fields (self.swsc, self.pctdiv, self.pctdel, self.pctins, self.refid, self.ref_i, self.ref_f, self.ref_remain, self.orient, self.rep_nm, self.rep_cl, self.rep_prior, self.rep_i, self.rep_f, self.unk) = ln.split() # int-ize the reference coordinates self.ref_i, self.ref_f = int(self.ref_i), int(self.ref_f) Explanation: Above are the first several lines of the .out.gz file for the roundworm (C. elegans). The columns have headers, which are somewhat helpful. More detail is available in the RepeatMasker documentation under "How to read the results". (Note that in addition to the 14 fields descrived in the documentation, there's also a 15th ID field.) 
Here's an extremely simple class that parses a line from these files and stores the individual values in its fields: End of explanation def parse_repeat_masker_db(fn): reps = [] with gzip.open(fn) if fn.endswith('.gz') else open(fn) as fh: fh.readline() # skip header fh.readline() # skip header fh.readline() # skip header while True: ln = fh.readline() if len(ln) == 0: break reps.append(Repeat(ln.decode('UTF8'))) return reps reps = parse_repeat_masker_db('ce10.fa.out.gz') Explanation: We can parse a file into a list of Repeat objects: End of explanation ucsc_site = 'http://hgdownload.cse.ucsc.edu/goldenPath' fn = 'chromFa.tar.gz' urllib.request.urlretrieve("%s/ce10/bigZips/%s" % (ucsc_site, fn), fn) !tar zxvf chromFa.tar.gz Explanation: Extracting repeats from the genome in FASTA format Now let's obtain the genome for the roundworm in FASTA format. For more information on FASTA, see the FASTA notebook. As seen above, the name of the genome assembly used by RepeatMasker is ce10. We can get it from the UCSC server. It's around 30 MB. End of explanation from collections import defaultdict def parse_fasta(fns): ret = defaultdict(list) for fn in fns: with open(fn, 'rt') as fh: for ln in fh: if ln[0] == '>': name = ln[1:].rstrip() else: ret[name].append(ln.rstrip()) for k, v in ret.items(): ret[k] = ''.join(v) return ret genome = parse_fasta(['chrI.fa', 'chrII.fa', 'chrIII.fa', 'chrIV.fa', 'chrM.fa', 'chrV.fa', 'chrX.fa']) genome['chrI'][:1000] # printing just the first 1K nucleotides Explanation: Let's load chromosome I into a string so that we can see the sequences of the repeats. End of explanation def extract_repeat(rep, genome): assert rep.refid in genome return genome[rep.refid][rep.ref_i-1:rep.ref_f] extract_repeat(reps[0], genome) extract_repeat(reps[1], genome) extract_repeat(reps[2], genome) Explanation: Note the combination of lowercase and uppercase. Actually, that relates to our discussion here. The lowercase stretches are repeats! The UCSC genome sequences use the lowercase/uppercase distinction to make it clear where the repeats are -- and they know this because they ran RepeatMasker on the genome beforehand. In this case, the two repeats you can see are both simple hexamer repeats. Also, note that their position in the genome corresponds to the first two rows of the RepeatMasker database that we printed above. We write a function that, given a Repeat and given a dictionary containing the sequences of all the chromosomes in the genome, outputs each repeat string. End of explanation chapaevs = filter(lambda x: 'DNA/CMC-Chapaev' == x.rep_cl, reps) [extract_repeat(chapaev, genome) for chapaev in chapaevs] Explanation: Let's specifically try to extract a repeat from the DNA/CMC-Chapaev family. End of explanation from operator import attrgetter ' '.join(map(attrgetter('rep_cl'), reps[:60])) Explanation: How are repeats related? Look at the repeat family/class names for the first several repeats in the roundworm database: End of explanation
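A hedged usage sketch of the Repeat class (the sample line below is invented for illustration and only mimics the 15-column .out layout, it is not a real ce10 record):
sample = "463 1.3 0.6 1.7 chrI 1 467 (15072056) + (TTTTTTG)n Simple_repeat 1 471 (0) 1"
r = Repeat(sample)
print(r.refid, r.ref_i, r.ref_f, r.rep_nm, r.rep_cl)
print('repeat length on the reference:', r.ref_f - r.ref_i + 1)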
14,811
Given the following text description, write Python code to implement the functionality described below step by step Description: Parameters used Query profile size Step1: 2. Helper methods Step2: 2. Plot decay and noise
Python Code: import matplotlib.lines as mlines import os import pandas as pd import matplotlib.pyplot as plt import numpy as np import math import json %matplotlib inline Explanation: Parameters used Query profile size: 10 Number of query profiles: 5 Information Content: Annotation IC Profile aggregation: Best Pairs Directionality of similarity: Symmetric 1. Import required modules End of explanation def load_results(infile,quartile,scores,metric,granularity): next(infile) for line in infile: queryid,numreplaced,match,score=line.strip().split() numreplaced=int(numreplaced) if metric not in scores: scores[metric]=dict() if quartile not in scores[metric]: scores[metric][quartile]=dict() if granularity not in scores[metric][quartile]: scores[metric][quartile][granularity]=dict() if numreplaced not in scores[metric][quartile][granularity]: scores[metric][quartile][granularity][numreplaced]=[] scores[metric][quartile][granularity][numreplaced].append(float(score)) infile.close() return scores def error(scorelist): return 2*(np.std(scorelist)/math.sqrt(len(scorelist))) Explanation: 2. Helper methods End of explanation scores=dict() quartile=50 granularity='E' f, axarr = plt.subplots(3, 3) i=j=0 titledict={'BPSym__Jaccard':'Jaccard','BPSym_AIC_Resnik':'Resnik','BPSym_AIC_Lin':'Lin' ,'BPSym_AIC_Jiang':'Jiang','_AIC_simGIC':'simGIC','BPAsym_AIC_HRSS':'HRSS','Groupwise_Jaccard':'Groupwise_Jaccard'} lines=[] legend=[] for profilesize in [10]: for metric in ['BPSym_AIC_Resnik','BPSym_AIC_Lin','BPSym_AIC_Jiang','_AIC_simGIC','BPSym__Jaccard', 'Groupwise_Jaccard','BPAsym_AIC_HRSS']: # plotting annotation replacement infile=open("../../results/FullDistribution/AnnotationReplacement/E_Decay_Quartile50_ProfileSize"+str(profilesize)+"_"+ metric+"_Results.tsv") scores=load_results(infile,quartile,scores,metric,granularity) infile.close() signallist=[] errorlist=[] numreplacedlist=sorted(scores[metric][quartile][granularity].keys()) for numreplaced in numreplacedlist : signallist.append(np.mean(scores[metric][quartile][granularity][numreplaced])) errorlist.append(error(scores[metric][quartile][granularity][numreplaced])) line=axarr[i][j].errorbar(numreplacedlist,signallist,yerr=errorlist,color='blue',linewidth=3) if len(lines)==0: lines.append(line) legend.append("Annotation Replacement") axarr[i][j].set_title(titledict[metric]) axarr[i][j].set_ylim(0,1) # plotting Ancestral Replacement ancestralreplacementfile="../../results/FullDistribution/AncestralReplacement/E_Decay_Quartile50_ProfileSize"+str(profilesize)+"_"+ metric+"_Results.tsv" if os.path.isfile(ancestralreplacementfile): infile=open(ancestralreplacementfile) scores=load_results(infile,quartile,scores,metric,granularity) infile.close() signallist=[] errorlist=[] numreplacedlist=sorted(scores[metric][quartile][granularity].keys()) for numreplaced in numreplacedlist : signallist.append(np.mean(scores[metric][quartile][granularity][numreplaced])) errorlist.append(error(scores[metric][quartile][granularity][numreplaced])) line=axarr[i][j].errorbar(numreplacedlist,signallist,yerr=errorlist,color='green',linewidth=3) if len(lines)==1: lines.append(line) legend.append("Ancestral Replacement") # plotting noise decaytype="AnnotationReplacement" if "simGIC" in metric or "Groupwise_Jaccard" in metric: noisefile="../../results/FullDistribution/"+decaytype+"/Noise/Distributions/"+granularity+"_Noise_Quartile"+str(quartile)+"_ProfileSize"+str(profilesize)+"_"+metric+"_Results.tsv" else: 
noisefile="../../results/FullDistribution/"+decaytype+"/Noise/Distributions/"+granularity+"_NoiseDecay_Quartile"+str(quartile)+"_ProfileSize"+str(profilesize)+"_"+metric+"_Results.tsv" if os.path.isfile(noisefile): noisedist= json.load(open(noisefile)) line=axarr[i][j].axhline(y=np.percentile(noisedist,99.9),linestyle='--',color='black',label='_nolegend_') if len(lines)==2: lines.append(line) legend.append("99.9 percentile noise") if j==2: j=0 i+=1 else: j+=1 Explanation: 2. Plot decay and noise End of explanation
14,812
Given the following text description, write Python code to implement the functionality described below step by step Description: 练习 1:写函数,求n个随机整数均值的平方根,整数范围在m与k之间(n,m,k由用户输入)。 Step1: 写函数,共n个随机整数,整数范围在m与k之间,(n,m,k由用户输入)。求1:西格玛log(随机整数),2:西格玛1/log(随机整数) Step2: 写函数,求s=a+aa+aaa+aaaa+aa...a的值,其中a是[1,9]之间的随机整数。例如2+22+222+2222+22222(此时共有5个数相加),几个数相加由键盘输入。
Python Code: m=int(input('请输入数字下界,按回车键结束')) k=int(input('请输入数字上界,按回车键结束')) n=int(input('请输入数字个数')) i=0 import random while i<n: number=random.randint(m,k) i+=1 print(number) total=number+number+number print((total/n)**(1/2)) Explanation: 练习 1:写函数,求n个随机整数均值的平方根,整数范围在m与k之间(n,m,k由用户输入)。 End of explanation m=int(input('请输入数字下界,按回车键结束')) k=int(input('请输入数字上界,按回车键结束')) n=int(input('请输入数字个数')) i=0 import random import math while i<n: number=random.randint(m,k) i+=1 print(number) result_1=math.log(number) print(result_1, 1/(result_1)) Explanation: 写函数,共n个随机整数,整数范围在m与k之间,(n,m,k由用户输入)。求1:西格玛log(随机整数),2:西格玛1/log(随机整数) End of explanation m=int(input('请输入你想求和的数字个数')) def compute_sum(end): import random i=0 total_1=0 total_2=0 while i<m: i+=1 total_1=total_1+total_2+a*10**(i-1) total_2=total_1-total_2 print(total_1) a=random.randint(1,9) compute_sum(a) def win(): print('Win!') def lose(): print('Lose!') def game_over(): print('Game Over!') def menu(): print('''=====游戏菜单===== 1. 游戏说明 2. 开始游戏 3. 退出游戏 4. 制作团队 =====游戏菜单=====''') def guess_game(): n = int(input('请输入一个大于0的整数作为数字的上限,按回车结束。')) import random number=random.randint(1,n) print('标准数字为' , number) m=random.randint(1,n) print('计算机猜测数字为' , m) k=int(input('请判断猜测数字的大小,若等于请输入0,若大于标准数字请输入1,若小于标准数字请输入2')) if k==0: win() elif k==1: m_1=random.randint(1,m) else: m_1=random.randint(m,n) def win(): print('Win!') def lose(): print('Lose!') def game_over(): print('Game Over!') def show_team(): print('wow') def show_instruction(): print('ok') def menu(): print('''=====游戏菜单===== 1. 游戏说明 2. 开始游戏 3. 退出游戏 4. 制作团队 =====游戏菜单=====''') def guess_game(): n = int(input('请输入一个大于0的整数作为数字的上限,按回车结束。')) import random number=random.randint(1,n) print('标准数字为' , number) m=random.randint(1,n) print('计算机猜测数字为' , m) k=int(input('请判断猜测数字的大小,若等于请输入0,若大于标准数字请输入1,若小于标准数字请输入2')) if k==0: win() elif k==1: m_1=random.randint(1,m) else: m_1=random.randint(m,n) def main(): while True: menu() choice = int(input('请输入你的选择')) if choice == 1: show_instruction() elif choice == 2: guess_game() elif choice == 3: game_over() break else: show_team() if __name__ == '__main__': main() Explanation: 写函数,求s=a+aa+aaa+aaaa+aa...a的值,其中a是[1,9]之间的随机整数。例如2+22+222+2222+22222(此时共有5个数相加),几个数相加由键盘输入。 End of explanation
14,813
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. Step3: Explore the Data Play around with view_sentence_range to view different parts of the data. Step7: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing Step9: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. Step11: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step13: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU Step16: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below Step19: Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the begining of each batch. Step22: Encoding Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn(). Step25: Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. Step28: Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Step31: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create RNN cell for decoding using rnn_size and num_layers. Create the output fuction using lambda to transform it's input, logits, to class logits. Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note Step34: Build the Neural Network Apply the functions you implemented above to Step35: Neural Network Training Hyperparameters Tune the following parameters Step37: Build the Graph Build the graph using the neural network you implemented. Step40: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. 
Step42: Save Parameters Save the batch_size and save_path parameters for inference. Step44: Checkpoint Step47: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. Step49: Translate This will translate translate_sentence from English to French.
Python Code: DON'T MODIFY ANYTHING IN THIS CELL import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation view_sentence_range = (0, 10) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) # TODO: Implement Function x = [[source_vocab_to_int.get(word, 0) for word in sentence.split()] \ for sentence in source_text.split('\n')] y = [[target_vocab_to_int.get(word, 0) for word in sentence.split()] \ for sentence in target_text.split('\n')] source_id_text = [] target_id_text = [] found in a forum post. necessary? n1 = len(x[i]) n2 = len(y[i]) n = n1 if n1 < n2 else n2 if abs(n1 - n2) <= 0.3 * n: if n1 <= 17 and n2 <= 17: for i in range(len(x)): source_id_text.append(x[i]) target_id_text.append(y[i] + [target_vocab_to_int['<EOS>']]) return (source_id_text, target_id_text) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_text_to_ids(text_to_ids) Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL helper.preprocess_and_save_data(source_path, target_path, text_to_ids) Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation def model_inputs(): Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) # TODO: Implement Function input_text = tf.placeholder(tf.int32,[None, None], name="input") target_text = tf.placeholder(tf.int32,[None, None], name="targets") learning_rate = tf.placeholder(tf.float32, name="learning_rate") keep_prob = tf.placeholder(tf.float32, name="keep_prob") return input_text, target_text, learning_rate, keep_prob DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_model_inputs(model_inputs) Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoding_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. 
Return the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability) End of explanation def process_decoding_input(target_data, target_vocab_to_int, batch_size): Preprocess target data for dencoding :param target_data: Target Placehoder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data # TODO: Implement Function ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_process_decoding_input(process_decoding_input) Explanation: Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the begining of each batch. End of explanation def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state # TODO: Implement Function enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers) enc_cell_drop = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob) _, enc_state = tf.nn.dynamic_rnn(enc_cell_drop, rnn_inputs, dtype=tf.float32) return enc_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_encoding_layer(encoding_layer) Explanation: Encoding Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn(). End of explanation def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TenorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits # TODO: Implement Function train_dec_fm = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) train_logits_drop, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, train_dec_fm, \ dec_embed_input, sequence_length, scope=decoding_scope) train_logits = output_fn(train_logits_drop) #I'm missing the keep_prob! don't know where to put it return train_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_train(decoding_layer_train) Explanation: Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. 
End of explanation def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: Maximum length of :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits # TODO: Implement Function infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope) #Again, don't know where to put the keep_drop param return inference_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_infer(decoding_layer_infer) Explanation: Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). End of explanation def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) # TODO: Implement Function dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers) dec_cell_drop = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) # Output Layer output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size,\ None, scope=decoding_scope) with tf.variable_scope("decoding") as decoding_scope: train_logits = decoding_layer_train(encoder_state, dec_cell_drop, dec_embed_input,\ sequence_length, decoding_scope, output_fn, keep_prob) with tf.variable_scope("decoding", reuse=True) as decoding_scope: infer_logits = decoding_layer_infer(encoder_state, dec_cell_drop, dec_embeddings,\ target_vocab_to_int['<GO>'],target_vocab_to_int['<EOS>'], sequence_length,\ vocab_size, decoding_scope, output_fn, keep_prob) return train_logits, infer_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer(decoding_layer) Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create RNN cell for decoding using rnn_size and num_layers. Create the output fuction using lambda to transform it's input, logits, to class logits. Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. 
Note: You'll need to use tf.variable_scope to share variables between training and inference. End of explanation def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) # TODO: Implement Function embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) encoder_state = encoding_layer(embed_input, rnn_size, num_layers, keep_prob) processed_target_data = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, processed_target_data) train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size,\ sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return train_logits, infer_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_seq2seq_model(seq2seq_model) Explanation: Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. Apply embedding to the target data for the decoder. Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob). End of explanation # Number of Epochs epochs = 10 # Batch Size batch_size = 256 # RNN Size rnn_size = 200 # Number of Layers num_layers = 30 # Embedding Size encoding_embedding_size = 64 decoding_embedding_size = 64 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.8 Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. 
Set keep_probability to the Dropout keep probability End of explanation DON'T MODIFY ANYTHING IN THIS CELL save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) Explanation: Build the Graph Build the graph using the neural network you implemented. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import time def get_accuracy(target, logits): Calculate accuracy max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target_batch, [(0,0),(0,max_seq - target_batch.shape[1]), (0,0)], 'constant') if max_seq - batch_train_logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Save parameters for checkpoint helper.save_params(save_path) Explanation: Save Parameters Save the batch_size and save_path parameters for inference. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() Explanation: Checkpoint End of explanation def sentence_to_seq(sentence, vocab_to_int): Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids # TODO: Implement Function return None DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_sentence_to_seq(sentence_to_seq) Explanation: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. End of explanation translate_sentence = 'he saw a old yellow truck .' DON'T MODIFY ANYTHING IN THIS CELL translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) Explanation: Translate This will translate translate_sentence from English to French. End of explanation
14,814
Given the following text description, write Python code to implement the functionality described below step by step Description: Brainstorm auditory tutorial dataset Here we compute the evoked from raw for the auditory Brainstorm tutorial dataset. For comparison, see [1]_ and Step1: To reduce memory consumption and running time, some of the steps are precomputed. To run everything from scratch change this to False. With use_precomputed = False running time of this script can be several minutes even on a fast computer. Step2: The data was collected with a CTF 275 system at 2400 Hz and low-pass filtered at 600 Hz. Here the data and empty room data files are read to construct instances of Step3: In the memory saving mode we use preload=False and use the memory efficient IO which loads the data on demand. However, filtering and some other functions require the data to be preloaded in the memory. Step4: Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference sensors and 2 EEG electrodes (Cz and Pz). In addition Step5: For noise reduction, a set of bad segments have been identified and stored in csv files. The bad segments are later used to reject epochs that overlap with them. The file for the second run also contains some saccades. The saccades are removed by using SSP. We use pandas to read the data from the csv files. You can also view the files with your favorite text editor. Step6: Here we compute the saccade and EOG projectors for magnetometers and add them to the raw data. The projectors are added to both runs. Step7: Visually inspect the effects of projections. Click on 'proj' button at the bottom right corner to toggle the projectors on/off. EOG events can be plotted by adding the event list as a keyword argument. As the bad segments and saccades were added as annotations to the raw data, they are plotted as well. Step8: Typical preprocessing step is the removal of power line artifact (50 Hz or 60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the original 60 Hz artifact and the harmonics. The power spectra are plotted before and after the filtering to show the effect. The drop after 600 Hz appears because the data was filtered during the acquisition. In memory saving mode we do the filtering at evoked stage, which is not something you usually would do. Step9: We also lowpass filter the data at 100 Hz to remove the hf components. Step10: Epoching and averaging. First some parameters are defined and events extracted from the stimulus channel (UPPT001). The rejection thresholds are defined as peak-to-peak values and are in T / m for gradiometers, T for magnetometers and V for EOG and EEG channels. Step11: The event timing is adjusted by comparing the trigger times on detected sound onsets on channel UADC001-4408. Step12: We mark a set of bad channels that seem noisier than others. This can also be done interactively with raw.plot by clicking the channel name (or the line). The marked channels are added as bad when the browser window is closed. Step13: The epochs (trials) are created for MEG channels. First we find the picks for MEG and EOG channels. Then the epochs are constructed using these picks. The epochs overlapping with annotated bad segments are also rejected by default. To turn off rejection by bad segments (as was done earlier with saccades) you can use keyword reject_by_annotation=False. Step14: We only use first 40 good epochs from each run. 
Since we first drop the bad epochs, the indices of the epochs are no longer same as in the original epochs collection. Investigation of the event timings reveals that first epoch from the second run corresponds to index 182. Step15: The averages for each conditions are computed. Step16: Typical preprocessing step is the removal of power line artifact (50 Hz or 60 Hz). Here we lowpass filter the data at 40 Hz, which will remove all line artifacts (and high frequency information). Normally this would be done to raw data (with Step17: Here we plot the ERF of standard and deviant conditions. In both conditions we can see the P50 and N100 responses. The mismatch negativity is visible only in the deviant condition around 100-200 ms. P200 is also visible around 170 ms in both conditions but much stronger in the standard condition. P300 is visible in deviant condition only (decision making in preparation of the button press). You can view the topographies from a certain time span by painting an area with clicking and holding the left mouse button. Step18: Show activations as topography figures. Step19: We can see the MMN effect more clearly by looking at the difference between the two conditions. P50 and N100 are no longer visible, but MMN/P200 and P300 are emphasised. Step20: Source estimation. We compute the noise covariance matrix from the empty room measurement and use it for the other runs. Step21: The transformation is read from a file. More information about coregistering the data, see ch_interactive_analysis or Step22: To save time and memory, the forward solution is read from a file. Set use_precomputed=False in the beginning of this script to build the forward solution from scratch. The head surfaces for constructing a BEM solution are read from a file. Since the data only contains MEG channels, we only need the inner skull surface for making the forward solution. For more information Step23: The sources are computed using dSPM method and plotted on an inflated brain surface. For interactive controls over the image, use keyword time_viewer=True. Standard condition. Step24: Deviant condition. Step25: Difference.
Python Code: # Authors: Mainak Jas <[email protected]> # Eric Larson <[email protected]> # Jaakko Leppakangas <[email protected]> # # License: BSD (3-clause) import os.path as op import pandas as pd import numpy as np import mne from mne import combine_evoked from mne.minimum_norm import apply_inverse from mne.datasets.brainstorm import bst_auditory from mne.io import read_raw_ctf print(__doc__) Explanation: Brainstorm auditory tutorial dataset Here we compute the evoked from raw for the auditory Brainstorm tutorial dataset. For comparison, see [1]_ and: http://neuroimage.usc.edu/brainstorm/Tutorials/Auditory Experiment: - One subject, 2 acquisition runs 6 minutes each. - Each run contains 200 regular beeps and 40 easy deviant beeps. - Random ISI: between 0.7s and 1.7s seconds, uniformly distributed. - Button pressed when detecting a deviant with the right index finger. The specifications of this dataset were discussed initially on the FieldTrip bug tracker &lt;http://bugzilla.fcdonders.nl/show_bug.cgi?id=2300&gt;_. References .. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM. Brainstorm: A User-Friendly Application for MEG/EEG Analysis. Computational Intelligence and Neuroscience, vol. 2011, Article ID 879716, 13 pages, 2011. doi:10.1155/2011/879716 End of explanation use_precomputed = True Explanation: To reduce memory consumption and running time, some of the steps are precomputed. To run everything from scratch change this to False. With use_precomputed = False running time of this script can be several minutes even on a fast computer. End of explanation data_path = bst_auditory.data_path() subject = 'bst_auditory' subjects_dir = op.join(data_path, 'subjects') raw_fname1 = op.join(data_path, 'MEG', 'bst_auditory', 'S01_AEF_20131218_01.ds') raw_fname2 = op.join(data_path, 'MEG', 'bst_auditory', 'S01_AEF_20131218_02.ds') erm_fname = op.join(data_path, 'MEG', 'bst_auditory', 'S01_Noise_20131218_01.ds') Explanation: The data was collected with a CTF 275 system at 2400 Hz and low-pass filtered at 600 Hz. Here the data and empty room data files are read to construct instances of :class:mne.io.Raw. End of explanation preload = not use_precomputed raw = read_raw_ctf(raw_fname1, preload=preload) n_times_run1 = raw.n_times mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2, preload=preload)]) raw_erm = read_raw_ctf(erm_fname, preload=preload) Explanation: In the memory saving mode we use preload=False and use the memory efficient IO which loads the data on demand. However, filtering and some other functions require the data to be preloaded in the memory. End of explanation raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'}) if not use_precomputed: # Leave out the two EEG channels for easier computation of forward. raw.pick_types(meg=True, eeg=False, stim=True, misc=True, eog=True, ecg=True) Explanation: Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference sensors and 2 EEG electrodes (Cz and Pz). In addition: 1 stim channel for marking presentation times for the stimuli 1 audio channel for the sent signal 1 response channel for recording the button presses 1 ECG bipolar 2 EOG bipolar (vertical and horizontal) 12 head tracking channels 20 unused channels The head tracking channels and the unused channels are marked as misc channels. Here we define the EOG and ECG channels. 
End of explanation annotations_df = pd.DataFrame() offset = n_times_run1 for idx in [1, 2]: csv_fname = op.join(data_path, 'MEG', 'bst_auditory', 'events_bad_0%s.csv' % idx) df = pd.read_csv(csv_fname, header=None, names=['onset', 'duration', 'id', 'label']) print('Events from run {0}:'.format(idx)) print(df) df['onset'] += offset * (idx - 1) annotations_df = pd.concat([annotations_df, df], axis=0) saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int) # Conversion from samples to times: onsets = annotations_df['onset'].values / raw.info['sfreq'] durations = annotations_df['duration'].values / raw.info['sfreq'] descriptions = annotations_df['label'].values annotations = mne.Annotations(onsets, durations, descriptions) raw.annotations = annotations del onsets, durations, descriptions Explanation: For noise reduction, a set of bad segments have been identified and stored in csv files. The bad segments are later used to reject epochs that overlap with them. The file for the second run also contains some saccades. The saccades are removed by using SSP. We use pandas to read the data from the csv files. You can also view the files with your favorite text editor. End of explanation saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True, reject_by_annotation=False) projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0, desc_prefix='saccade') if use_precomputed: proj_fname = op.join(data_path, 'MEG', 'bst_auditory', 'bst_auditory-eog-proj.fif') projs_eog = mne.read_proj(proj_fname)[0] else: projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(), n_mag=1, n_eeg=0) raw.add_proj(projs_saccade) raw.add_proj(projs_eog) del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory Explanation: Here we compute the saccade and EOG projectors for magnetometers and add them to the raw data. The projectors are added to both runs. End of explanation raw.plot(block=True) Explanation: Visually inspect the effects of projections. Click on 'proj' button at the bottom right corner to toggle the projectors on/off. EOG events can be plotted by adding the event list as a keyword argument. As the bad segments and saccades were added as annotations to the raw data, they are plotted as well. End of explanation if not use_precomputed: meg_picks = mne.pick_types(raw.info, meg=True, eeg=False) raw.plot_psd(tmax=np.inf, picks=meg_picks) notches = np.arange(60, 181, 60) raw.notch_filter(notches, phase='zero-double', fir_design='firwin2') raw.plot_psd(tmax=np.inf, picks=meg_picks) Explanation: Typical preprocessing step is the removal of power line artifact (50 Hz or 60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the original 60 Hz artifact and the harmonics. The power spectra are plotted before and after the filtering to show the effect. The drop after 600 Hz appears because the data was filtered during the acquisition. In memory saving mode we do the filtering at evoked stage, which is not something you usually would do. End of explanation if not use_precomputed: raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s', phase='zero-double', fir_design='firwin2') Explanation: We also lowpass filter the data at 100 Hz to remove the hf components. End of explanation tmin, tmax = -0.1, 0.5 event_id = dict(standard=1, deviant=2) reject = dict(mag=4e-12, eog=250e-6) # find events events = mne.find_events(raw, stim_channel='UPPT001') Explanation: Epoching and averaging. 
First some parameters are defined and events extracted from the stimulus channel (UPPT001). The rejection thresholds are defined as peak-to-peak values and are in T / m for gradiometers, T for magnetometers and V for EOG and EEG channels. End of explanation sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0] onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0] min_diff = int(0.5 * raw.info['sfreq']) diffs = np.concatenate([[min_diff + 1], np.diff(onsets)]) onsets = onsets[diffs > min_diff] assert len(onsets) == len(events) diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq'] print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms' % (np.mean(diffs), np.std(diffs))) events[:, 0] = onsets del sound_data, diffs Explanation: The event timing is adjusted by comparing the trigger times on detected sound onsets on channel UADC001-4408. End of explanation raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408'] Explanation: We mark a set of bad channels that seem noisier than others. This can also be done interactively with raw.plot by clicking the channel name (or the line). The marked channels are added as bad when the browser window is closed. End of explanation picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True, exclude='bads') epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=reject, preload=False, proj=True) Explanation: The epochs (trials) are created for MEG channels. First we find the picks for MEG and EOG channels. Then the epochs are constructed using these picks. The epochs overlapping with annotated bad segments are also rejected by default. To turn off rejection by bad segments (as was done earlier with saccades) you can use keyword reject_by_annotation=False. End of explanation epochs.drop_bad() epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)], epochs['standard'][182:222]]) epochs_standard.load_data() # Resampling to save memory. epochs_standard.resample(600, npad='auto') epochs_deviant = epochs['deviant'].load_data() epochs_deviant.resample(600, npad='auto') del epochs, picks Explanation: We only use first 40 good epochs from each run. Since we first drop the bad epochs, the indices of the epochs are no longer same as in the original epochs collection. Investigation of the event timings reveals that first epoch from the second run corresponds to index 182. End of explanation evoked_std = epochs_standard.average() evoked_dev = epochs_deviant.average() del epochs_standard, epochs_deviant Explanation: The averages for each conditions are computed. End of explanation for evoked in (evoked_std, evoked_dev): evoked.filter(l_freq=None, h_freq=40., fir_design='firwin') Explanation: Typical preprocessing step is the removal of power line artifact (50 Hz or 60 Hz). Here we lowpass filter the data at 40 Hz, which will remove all line artifacts (and high frequency information). Normally this would be done to raw data (with :func:mne.io.Raw.filter), but to reduce memory consumption of this tutorial, we do it at evoked stage. (At the raw stage, you could alternatively notch filter with :func:mne.io.Raw.notch_filter.) End of explanation evoked_std.plot(window_title='Standard', gfp=True) evoked_dev.plot(window_title='Deviant', gfp=True) Explanation: Here we plot the ERF of standard and deviant conditions. In both conditions we can see the P50 and N100 responses. The mismatch negativity is visible only in the deviant condition around 100-200 ms. 
P200 is also visible around 170 ms in both conditions but much stronger in the standard condition. P300 is visible in deviant condition only (decision making in preparation of the button press). You can view the topographies from a certain time span by painting an area with clicking and holding the left mouse button. End of explanation times = np.arange(0.05, 0.301, 0.025) evoked_std.plot_topomap(times=times, title='Standard') evoked_dev.plot_topomap(times=times, title='Deviant') Explanation: Show activations as topography figures. End of explanation evoked_difference = combine_evoked([evoked_dev, -evoked_std], weights='equal') evoked_difference.plot(window_title='Difference', gfp=True) Explanation: We can see the MMN effect more clearly by looking at the difference between the two conditions. P50 and N100 are no longer visible, but MMN/P200 and P300 are emphasised. End of explanation reject = dict(mag=4e-12) cov = mne.compute_raw_covariance(raw_erm, reject=reject) cov.plot(raw_erm.info) del raw_erm Explanation: Source estimation. We compute the noise covariance matrix from the empty room measurement and use it for the other runs. End of explanation trans_fname = op.join(data_path, 'MEG', 'bst_auditory', 'bst_auditory-trans.fif') trans = mne.read_trans(trans_fname) Explanation: The transformation is read from a file. More information about coregistering the data, see ch_interactive_analysis or :func:mne.gui.coregistration. End of explanation if use_precomputed: fwd_fname = op.join(data_path, 'MEG', 'bst_auditory', 'bst_auditory-meg-oct-6-fwd.fif') fwd = mne.read_forward_solution(fwd_fname) else: src = mne.setup_source_space(subject, spacing='ico4', subjects_dir=subjects_dir, overwrite=True) model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3], subjects_dir=subjects_dir) bem = mne.make_bem_solution(model) fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src, bem=bem) inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov) snr = 3.0 lambda2 = 1.0 / snr ** 2 del fwd Explanation: To save time and memory, the forward solution is read from a file. Set use_precomputed=False in the beginning of this script to build the forward solution from scratch. The head surfaces for constructing a BEM solution are read from a file. Since the data only contains MEG channels, we only need the inner skull surface for making the forward solution. For more information: CHDBBCEJ, :func:mne.setup_source_space, create_bem_model, :func:mne.bem.make_watershed_bem. End of explanation stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM') brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject, surface='inflated', time_viewer=False, hemi='lh', initial_time=0.1, time_unit='s') del stc_standard, brain Explanation: The sources are computed using dSPM method and plotted on an inflated brain surface. For interactive controls over the image, use keyword time_viewer=True. Standard condition. End of explanation stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM') brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject, surface='inflated', time_viewer=False, hemi='lh', initial_time=0.1, time_unit='s') del stc_deviant, brain Explanation: Deviant condition. 
End of explanation stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM') brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject, surface='inflated', time_viewer=False, hemi='lh', initial_time=0.15, time_unit='s') Explanation: Difference. End of explanation
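As an optional extension (not part of the original tutorial), one could locate the latency at which the deviant-minus-standard difference peaks in the left hemisphere and re-plot the source estimate at that time. This is only a sketch: it assumes stc_difference, subject and subjects_dir from the cells above are still in memory, and the 0.1-0.25 s search window is an arbitrary choice around the expected MMN latency.
# hypothetical follow-up: find the peak vertex/latency of the difference wave
vertno_peak, time_peak = stc_difference.get_peak(hemi='lh', tmin=0.1, tmax=0.25)
print('Peak of the deviant-minus-standard difference at %0.3f s' % time_peak)
brain_peak = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,
                                 surface='inflated', hemi='lh',
                                 initial_time=time_peak, time_unit='s')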
14,815
Given the following text description, write Python code to implement the functionality described below step by step
Description:
 Plotting points and lines on a reprojected raster
Here's how to draw a line between Vancouver and Portland given the raster MODIS image of Vancouver produced by modis_to_h5.py
Note that this assumes channel 1 is named '1', not 'chan1' as in the older version of modis_to_h5.py
1. read in chan1
read in the basemap argument string and turn it into a dictionary of basemap arguments using json.loads
Step1: 2. Plot chan1 as in resample2.ipynb, and add Vancouver and Portland points with a line between them
Step2: What is the distance between Vancouver and Portland?
pyproj.Geod defines a geoid using the major and minor axes from our Basemap datum and calculates the azimuthal angles and distance between two points along a great circle route (i.e. the shortest distance along the surface of the WGS84 geoid)
Python Code: import h5py from a301utils.a301_readfile import download from mpl_toolkits.basemap import Basemap from matplotlib import pyplot as plt import json import numpy as np rad_file=' MYD021KM.A2016217.1915.006.2016218155919.h5' geom_file='MYD03.A2016217.1915.006.2016218154759.h5' download(rad_file) data_name='MYD021KM.A2016224.2100.006_new.reproject.h5' download(data_name) with h5py.File(data_name,'r') as h5_file: basemap_args=json.loads(h5_file.attrs['basemap_args']) chan1=h5_file['channels']['1'][...] print(basemap_args) Explanation: Ploting points and lines on a reproject raster Here's how to draw a line between seattle and portland given the raster modis image of Vancouver produced by modis_to_h5.py Note that this assumes channel 1 is named '1', not 'chan1' as in the older version of modis_to_h5.py 1. read in the chan1 read in the basemap argument string and turn it into a dictionary of basemap arguments using json.loads End of explanation %matplotlib inline from matplotlib import cm from matplotlib.colors import Normalize cmap=cm.autumn #see http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps cmap.set_over('w') cmap.set_under('b',alpha=0.2) cmap.set_bad('0.75') #75% grey plt.close('all') fig,ax = plt.subplots(1,1,figsize=(14,14)) # # set up the Basemap object # basemap_args['ax']=ax basemap_args['resolution']='c' bmap = Basemap(**basemap_args) num_meridians=180 num_parallels = 90 vmin=None; vmax=None col = bmap.imshow(chan1, origin='upper',cmap=cmap, vmin=0, vmax=0.4) lon_sep, lat_sep = 5,5 parallels = np.arange(-90, 90, lat_sep) meridians = np.arange(0, 360, lon_sep) bmap.drawparallels(parallels, labels=[1, 0, 0, 0], fontsize=10, latmax=90) bmap.drawmeridians(meridians, labels=[0, 0, 0, 1], fontsize=10, latmax=90) bmap.drawcoastlines() colorbar=fig.colorbar(col, shrink=0.5, pad=0.05,extend='both') colorbar.set_label('channel1 reflectivity',rotation=-90,verticalalignment='bottom') _=ax.set(title='vancouver') # # now use the basemap object to project the portland and vancouver # lon/lat coords into the xy lambert coordinate system # # remember what the asterisk * argument expansion does: # if I have a list A=[a,b] then fun(*A) is the same as fun(a,b) # # vancouver_lon_lat=[-123.1207,49.2827] portland_lon_lat=[-122.6765,45.5231] # # get the xy coords # van_xy = bmap(*vancouver_lon_lat) portland_xy = bmap(*portland_lon_lat) # # draw a blue circle for van and # a green circle for portland # bmap.plot(*van_xy,'bo',markersize=15) bmap.plot(*portland_xy,'go',markersize=15) # # connect them with a cyan line # xcoords=[van_xy[0],portland_xy[0]] ycoords=[van_xy[1],portland_xy[1]] _ = bmap.plot(xcoords,ycoords,'c-',linewidth=5) Explanation: 2. Plot chan1 as in resample2.ipynb, and add vancouver and portland points with a line between them End of explanation import pyproj great_circle=pyproj.Geod(a=bmap.rmajor,b=bmap.rminor) azi12,azi21,distance=great_circle.inv(vancouver_lon_lat[0],vancouver_lon_lat[1], portland_lon_lat[0],portland_lon_lat[1]) print('Vancouver to Portland -- great circle is: {:5.2f} km'.format(distance/1.e3)) Explanation: What is the distance between Vancouver and Portland? pyproj.Geod defines a geoid using the major and minor axes from our Basemap datum and calculates the azimuthal angles and distance between two points along a great circle rounte (i.e. shortest distance along the surface of the WGS84 geoid) End of explanation
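As a possible extension (not in the original notebook), the straight cyan segment drawn above in projected map coordinates can be compared with the true great-circle path. The sketch below assumes bmap, great_circle, vancouver_lon_lat and portland_lon_lat from the cells above are available, and these lines would belong in the same cell as the map-drawing code so the figure is still active; Geod.npts returns intermediate lon/lat points along the geodesic.
# sample intermediate points along the Vancouver-Portland geodesic and draw them
lonlats = great_circle.npts(vancouver_lon_lat[0], vancouver_lon_lat[1],
                            portland_lon_lat[0], portland_lon_lat[1], 15)
gc_x, gc_y = bmap(*zip(*lonlats))   # project geodesic lon/lat points to map x, y
bmap.plot(gc_x, gc_y, 'k--', linewidth=2)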
14,816
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href='http Step1: Create an array of 10 zeros Step2: Create an array of 10 ones Step3: Create an array of 10 fives Step4: Create an array of the integers from 10 to 50 Step5: Create an array of all the even integers from 10 to 50 Step6: Create a 3x3 matrix with values ranging from 0 to 8 Step7: Create a 3x3 identity matrix Step8: Use NumPy to generate a random number between 0 and 1 Step9: Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution Step10: Create the following matrix Step11: Create an array of 20 linearly spaced points between 0 and 1 Step12: Numpy Indexing and Selection Now you will be given a few matrices, and be asked to replicate the resulting matrix outputs Step13: Now do the following Get the sum of all the values in mat Step14: Get the standard deviation of the values in mat Step15: Get the sum of all the columns in mat Step16: Bonus Question We worked a lot with random data with numpy, but is there a way we can insure that we always get the same random numbers? Click Here for a Hint
Python Code: import numpy as np Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a> <center>Copyright Pierian Data 2017</center> <center>For more information, visit us at www.pieriandata.com</center> NumPy Exercises - Solutions Now that we've learned about NumPy let's test your knowledge. We'll start off with a few simple tasks and then you'll be asked some more complicated questions. IMPORTANT NOTE! Make sure you don't run the cells directly above the example output shown, otherwise you will end up writing over the example output! Import NumPy as np End of explanation # CODE HERE np.zeros(10) Explanation: Create an array of 10 zeros End of explanation # CODE HERE np.ones(10) Explanation: Create an array of 10 ones End of explanation # CODE HERE np.ones(10) * 5 Explanation: Create an array of 10 fives End of explanation # CODE HERE np.arange(10,51) Explanation: Create an array of the integers from 10 to 50 End of explanation # CODE HERE np.arange(10,51,2) Explanation: Create an array of all the even integers from 10 to 50 End of explanation # CODE HERE np.arange(9).reshape(3,3) Explanation: Create a 3x3 matrix with values ranging from 0 to 8 End of explanation # CODE HERE np.eye(3) Explanation: Create a 3x3 identity matrix End of explanation # CODE HERE np.random.rand(1) Explanation: Use NumPy to generate a random number between 0 and 1 End of explanation # CODE HERE np.random.randn(25) Explanation: Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution End of explanation np.arange(1,101).reshape(10,10) / 100 Explanation: Create the following matrix: End of explanation np.linspace(0,1,20) Explanation: Create an array of 20 linearly spaced points between 0 and 1: End of explanation # CODE HERE mat = np.arange(1,26).reshape(5,5) mat # WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW # BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T # BE ABLE TO SEE THE OUTPUT ANY MORE mat[2:,1:] # WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW # BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T # BE ABLE TO SEE THE OUTPUT ANY MORE mat[3,4] # WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW # BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T # BE ABLE TO SEE THE OUTPUT ANY MORE mat[:3,1:2] # WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW # BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T # BE ABLE TO SEE THE OUTPUT ANY MORE mat[4,:] # WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW # BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T # BE ABLE TO SEE THE OUTPUT ANY MORE mat[3:5,:] Explanation: Numpy Indexing and Selection Now you will be given a few matrices, and be asked to replicate the resulting matrix outputs: End of explanation # CODE HERE mat.sum() Explanation: Now do the following Get the sum of all the values in mat End of explanation # CODE HERE mat.std() Explanation: Get the standard deviation of the values in mat End of explanation # CODE HERE mat.sum(axis=0) Explanation: Get the sum of all the columns in mat End of explanation np.random.seed(101) Explanation: Bonus Question We worked a lot with random data with numpy, but is there a way we can insure that we always get the same random numbers? Click Here for a Hint End of explanation
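To make the effect of the seed concrete (this check is an addition, not part of the original exercise set): resetting the seed before each draw reproduces exactly the same "random" numbers.
np.random.seed(101)
first = np.random.rand(3)
np.random.seed(101)
second = np.random.rand(3)
print(first)
print(second)   # identical to `first`, because the generator was re-seeded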
14,817
Given the following text description, write Python code to implement the functionality described below step by step Description: <!--BOOK_INFORMATION--> <a href="https Step1: Preprocessing the data However, since we are working with OpenCV, this time, we want to make sure the input matrix is made up of 32-bit floating point numbers, otherwise the code will break Step2: Furthermore, we need to think back to Chapter 4, Representing Data and Engineering and Features, and remember how to represent categorical variables. We need to find a way to represent target labels, not as integers but with a one-hot encoding. The easiest way to achieve this is by using scikit-learn's preprocessing module Step3: Creating an MLP classifier in OpenCV The syntax to create an MLP in OpenCV is the same as for all the other classifiers Step4: However, now we need to specify how many layers we want in the network and how many neurons there are per layer. We do this with a list of integers, which specify the number of neurons in each layer. Since the data matrix X has two features, the first layer should also have two neurons in it (n_input). Since the output has two different values, the last layer should also have two neurons in it (n_output). In between these two layers, we can put as many hidden layers with as many neurons as we want. Let's choose a single hidden layer with an arbitrary number of eight neurons in it (n_hidden) Step5: Customizing the MLP classifier Before we move on to training the classifier, we can customize the MLP classifier via a number of optional settings Step6: If you are curious what this activation function looks like, we can take a short excursion with Matplotlib Step7: As mentioned in the preceding part, a training method can be set via mlp.setTrainMethod. The following methods are available Step8: Lastly, we can specify the criteria that must be met for training to end via mlp.setTermCriteria. This works the same for every classifier in OpenCV and is closely tied to the underlying C++ functionality. We first tell OpenCV which criteria we are going to specify (for example, the maximum number of iterations). Then we specify the value for this criterion. All values must be delivered in a tuple. Step9: Training and testing the MLP classifier This is the easy part. Training the MLP classifier is the same as with all other classifiers Step10: The same goes for predicting target labels Step11: The easiest way to measure accuracy is by using scikit-learn's helper function Step12: It looks like we were able to increase our performance from 81% with a single perceptron to 84% with an MLP consisting of ten hidden-layer neurons and two output neurons. In order to see what changed, we can look at the decision boundary one more time
Python Code: from sklearn.datasets.samples_generator import make_blobs X_raw, y_raw = make_blobs(n_samples=100, centers=2, cluster_std=5.2, random_state=42) Explanation: <!--BOOK_INFORMATION--> <a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a> This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler. The code is released under the MIT license, and is available on GitHub. Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations. If you find this content useful, please consider supporting the work by buying the book! <!--NAVIGATION--> < Understanding Perceptrons | Contents | Getting Acquainted with Deep Learning > Implementing a Multi-Layer Perceptron (MLP) in OpenCV In order to create nonlinear decision boundaries, we can combine multiple perceptrons to form a larger network. This is also known as a multilayer perceptron (MLP). MLPs usually consist of at least three layers, where the first layer has a node (or neuron) for every input feature of the dataset, and the last layer has a node for every class label. The layer in between is called the hidden layer. Loading and preprocessing the data Implementing an MLP in OpenCV uses the same syntax that we have seen at least a dozen times before. In order to see how an MLP compares to a single perceptron, we will operate on the same toy data as before: End of explanation import numpy as np X = X_raw.astype(np.float32) Explanation: Preprocessing the data However, since we are working with OpenCV, this time, we want to make sure the input matrix is made up of 32-bit floating point numbers, otherwise the code will break: End of explanation from sklearn.preprocessing import OneHotEncoder enc = OneHotEncoder(sparse=False, dtype=np.float32) y = enc.fit_transform(y_raw.reshape(-1, 1)) Explanation: Furthermore, we need to think back to Chapter 4, Representing Data and Engineering and Features, and remember how to represent categorical variables. We need to find a way to represent target labels, not as integers but with a one-hot encoding. The easiest way to achieve this is by using scikit-learn's preprocessing module: End of explanation import cv2 mlp = cv2.ml.ANN_MLP_create() Explanation: Creating an MLP classifier in OpenCV The syntax to create an MLP in OpenCV is the same as for all the other classifiers: End of explanation n_input = 2 n_hidden = 8 n_output = 2 mlp.setLayerSizes(np.array([n_input, n_hidden, n_output])) Explanation: However, now we need to specify how many layers we want in the network and how many neurons there are per layer. We do this with a list of integers, which specify the number of neurons in each layer. Since the data matrix X has two features, the first layer should also have two neurons in it (n_input). Since the output has two different values, the last layer should also have two neurons in it (n_output). In between these two layers, we can put as many hidden layers with as many neurons as we want. 
Let's choose a single hidden layer with an arbitrary number of eight neurons in it (n_hidden): End of explanation mlp.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM, 2.5, 1.0) Explanation: Customizing the MLP classifier Before we move on to training the classifier, we can customize the MLP classifier via a number of optional settings: - mlp.setActivationFunction: This defines the activation function to be used for every neuron in the network - mlp.setTrainMethod: This defines a suitable training method - mlp.setTermCriteria: This sets the termination criteria of the training phase Whereas our home-brewed perceptron classifier used a linear activation function, OpenCV provides two additional options: - cv2.ml.ANN_MLP_IDENTITY: This is the linear activation function, $f(x) = x$. - cv2.ml.ANN_MLP_SIGMOID_SYM: This is the symmetrical sigmoid function (also known as hyperbolic tangent), $f(x) = \beta (1 - \exp(-\alpha x)) / (1 + \exp(-\alpha x))$. Whereas $\alpha$ controls the slope of the function, $\beta$ defines the upper and lower bounds of the output. - cv2.ml.ANN_GAUSSIAN: This is the Gaussian function (also known as the bell curve), $f(x) = \beta \exp(-\alpha x^2)$. Whereas $α$ controls the slope of the function, $\beta$ defines the upper bound of the output. In this example, we will use a proper sigmoid function that squashes the input values into the range [0, 1]. We do this by choosing $\alpha = 2.5$ and $\beta = 1.0$: End of explanation import matplotlib.pyplot as plt %matplotlib inline plt.style.use('ggplot') alpha = 2.5 beta = 1.0 x_sig = np.linspace(-1.0, 1.0, 100) y_sig = beta * (1.0 - np.exp(-alpha * x_sig)) y_sig /= (1 + np.exp(-alpha * x_sig)) plt.figure(figsize=(10, 6)) plt.plot(x_sig, y_sig, linewidth=3) plt.xlabel('x') plt.ylabel('y') Explanation: If you are curious what this activation function looks like, we can take a short excursion with Matplotlib: End of explanation mlp.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP) Explanation: As mentioned in the preceding part, a training method can be set via mlp.setTrainMethod. The following methods are available: - cv2.ml.ANN_MLP_BACKPROP: This is the backpropagation algorithm we talked about previously. You can set additional scaling factors via mlp.setBackpropMomentumScale and mlp.setBackpropWeightScale. - cv2.ml.ANN_MLP_RPROP: This is the Rprop algorithm, which is short for resilient backpropagation. We won't have time to discuss this algorithm, but you can set additional parameters of this algorithm via mlp.setRpropDW0, mlp.setRpropDWMax, mlp.setRpropDWMin, mlp.setRpropDWMinus, and mlp.setRpropDWPlus. In this example, we will choose backpropagation: End of explanation term_mode = cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS term_max_iter = 300 term_eps = 0.01 mlp.setTermCriteria((term_mode, term_max_iter, term_eps)) Explanation: Lastly, we can specify the criteria that must be met for training to end via mlp.setTermCriteria. This works the same for every classifier in OpenCV and is closely tied to the underlying C++ functionality. We first tell OpenCV which criteria we are going to specify (for example, the maximum number of iterations). Then we specify the value for this criterion. All values must be delivered in a tuple. End of explanation mlp.train(X, cv2.ml.ROW_SAMPLE, y) Explanation: Training and testing the MLP classifier This is the easy part. 
Training the MLP classifier is the same as with all other classifiers: End of explanation _, y_hat = mlp.predict(X) Explanation: The same goes for predicting target labels: End of explanation from sklearn.metrics import accuracy_score accuracy_score(y_hat.round(), y) Explanation: The easiest way to measure accuracy is by using scikit-learn's helper function: End of explanation def plot_decision_boundary(classifier, X_test, y_test): # create a mesh to plot in h = 0.02 # step size in mesh x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1 y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) X_hypo = np.c_[xx.ravel().astype(np.float32), yy.ravel().astype(np.float32)] _, zz = classifier.predict(X_hypo) zz = np.argmax(zz, axis=1) zz = zz.reshape(xx.shape) plt.contourf(xx, yy, zz, cmap=plt.cm.coolwarm, alpha=0.8) plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=200) plt.figure(figsize=(10, 6)) plot_decision_boundary(mlp, X, y_raw) Explanation: It looks like we were able to increase our performance from 81% with a single perceptron to 84% with an MLP consisting of ten hidden-layer neurons and two output neurons. In order to see what changed, we can look at the decision boundary one more time: End of explanation
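Note that the 84% figure above is measured on the training data itself. As a sketch of a stricter check (not part of the original text), one could hold out a test set with scikit-learn and retrain the same network; the variable names below are illustrative.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=42)
mlp.train(X_train, cv2.ml.ROW_SAMPLE, y_train)   # re-train on the training split only
_, y_hat_test = mlp.predict(X_test)
accuracy_score(y_test, y_hat_test.round())       # accuracy on unseen points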
14,818
Given the following text description, write Python code to implement the functionality described below step by step Description: Brain-hacking 101 Author Step1: Numpy arrays (ndarrays) Creating a NumPy array is as simple as passing a sequence to np.array Step2: You can create arrays with special generating functions Step3: Exercise Step4: What would happen if A and B did not have the same shape? Arithmetic with scalars Step5: Arrays are addressed through indexing Python uses zero-based indexing Step6: numpy contains various functions for calculations on arrays Step7: Data in Nifti files is stored as an array In the tutorial directory, we have included a single run of an fMRI experiment that was included in the FIAC competition. The experiment is described in full in a paper by Dehaene-Lambertz et al. (2006), but for the purposes of what we do today, the exact details of the acquisition and the task are not particularly important. We can read out this array into the computer memory using the nibabel library Step8: Loading the file is simple Step9: But note that in order to save time and memory, nibabel is pretty lazy about reading data from file, until we really need this data. Meaning that at this point, we've only read information about the data, not the data itself. This thing is not the data array yet. What is it then? Step10: It's a Nifti1Image object! That means that it is a variable that holds various attributes of the data. For example, the 4 by 4 matrix that describes the spatial transformation between the world coordinates and the image coordinates Step11: This object also has functions. You can get the data, by calling a function of that object Step12: This is a 4-dimensional array! We happen to know that time is the last dimension, and there are 191 TRs recorded in this data. There are 30 slices in each TR/volume, with an inplane matrix of 64 by 64 in each slice. We can easily access different parts of the data. Here is the full time-series for the central voxel in the volume Step13: It's a one-dimensinal array! Here is the middle slice for the last time-point Step14: That's a 2D array. You get the picture, I hope. You can do all kinds of operations with the data using functions Step15: Using functions on parts of the data Many numpy functions have an axis optional argument. These arguments allow you to perform a reduction of the data along one of the dimensions of the array. For example, if you want to extract a 3D array with the mean/std in every one of the voxels Step16: You can save the resulting array into a new file
Python Code: import numpy as np # Numpy is a package. To see what's in a package, type the name, a period, then hit tab #np? #np. # Some examples of numpy functions and "things": print(np.sqrt(4)) print(np.pi) # Not a function, just a variable print(np.sin(np.pi)) # A function on a variable :) Explanation: Brain-hacking 101 Author: Ariel Rokem, The University of Washington eScience Institute Hack 1: Read your data into an array When you conduct a neuroimaging experiment, the computer that controls the scanner and receives the data from the scanner saves your data to a file. Neuroimaging data appears in many different file formats: NiFTI, Minc, Dicom, etc. These files all contain representations of the data that you collected in the form of an array. What is an array? It is a way of representing the data in the computer memory as a table, that is multi-dimensional and homogenous. What does this mean? table means that you will be able to read all or some of the numbers representing your data by addressing the variable that holds your array. It's like addressing a member of your lab to tell you the answer to a question you have, except here you are going to 'ask' a variable in your computer memory. Arrays are usually not as smart as your lab members, but they have very good memory. multi-dimensional means that you can represent different aspects of your data along different axes. For example, the three dimensions of space can be represented in different dimensions of the table: homogenous actually means two different things: The shape of the array is homogenous, so if there are three items in the first column, there have to be three items in all the columns. The data-type is homogenous. If the first item is an integer, all the other items will be integers as well. To demonstrate the properties of arrays, we will use the numpy library. This library contains implementations of many scientifically useful functions and objects. In particular, it contains an implementation of arrays that we will use throughout the folllowing examples. End of explanation arr1 = np.array([1, 2.3, 4]) print(type(arr1)) print(arr1.dtype) print(arr1.shape) print(arr1) Explanation: Numpy arrays (ndarrays) Creating a NumPy array is as simple as passing a sequence to np.array End of explanation arr4 = np.arange(2, 5) print(arr4) arr5 = np.arange(1, 5, 2) print(arr5) arr6 = np.arange(1, 10, 2) print(arr6) Explanation: You can create arrays with special generating functions: np.arange(start, stop, [step]) np.zeros(shape) np.ones(shape) End of explanation A = np.arange(5) B = np.arange(5, 10) print (A+B) print(B-A) print(A*B) Explanation: Exercise : Create an Array Create an array with values ranging from 0 to 10, in increments of 0.5. Reminder: get help by typing np.arange?, np.ndarray?, np.array?, etc. Arithmetic with arrays Since numpy exists to perform efficient numerical operations in Python, arrays have all the usual arithmetic operations available to them. These operations are performed element-wise (i.e. the same operation is performed independently on each element of the array). End of explanation A = np.arange(5) print(A+10) print(2*A) print(A**2) Explanation: What would happen if A and B did not have the same shape? Arithmetic with scalars: In addition, if one of the arguments is a scalar, that value will be applied to all the elements of the array. 
End of explanation print(A) print(A[0]) print(A[1]) print(A[2]) Explanation: Arrays are addressed through indexing Python uses zero-based indexing: The first item in the array is item 0 The second item is item 1, the third is item 2, etc. End of explanation # This gets the exponent, element-wise: print(np.exp(A)) # This is the average number in the entire array: print(np.mean(A)) Explanation: numpy contains various functions for calculations on arrays End of explanation import nibabel as nib Explanation: Data in Nifti files is stored as an array In the tutorial directory, we have included a single run of an fMRI experiment that was included in the FIAC competition. The experiment is described in full in a paper by Dehaene-Lambertz et al. (2006), but for the purposes of what we do today, the exact details of the acquisition and the task are not particularly important. We can read out this array into the computer memory using the nibabel library End of explanation img = nib.load('./data/run1.nii.gz') Explanation: Loading the file is simple: End of explanation type(img) Explanation: But note that in order to save time and memory, nibabel is pretty lazy about reading data from file, until we really need this data. Meaning that at this point, we've only read information about the data, not the data itself. This thing is not the data array yet. What is it then? End of explanation img.affine Explanation: It's a Nifti1Image object! That means that it is a variable that holds various attributes of the data. For example, the 4 by 4 matrix that describes the spatial transformation between the world coordinates and the image coordinates End of explanation hdr = img.get_header() print(hdr.get_zooms()) data = img.get_data() print(type(data)) print(data.shape) Explanation: This object also has functions. You can get the data, by calling a function of that object: There's a header in there that provides some additional information: End of explanation center_voxel_time_series = data[32, 32, 15, :] print(center_voxel_time_series) print(center_voxel_time_series.shape) Explanation: This is a 4-dimensional array! We happen to know that time is the last dimension, and there are 191 TRs recorded in this data. There are 30 slices in each TR/volume, with an inplane matrix of 64 by 64 in each slice. We can easily access different parts of the data. Here is the full time-series for the central voxel in the volume: End of explanation middle_slice_t0 = data[:, :, 15, -1] # Using negative numbers allows you to count *from the end* print(middle_slice_t0) print(middle_slice_t0.shape) Explanation: It's a one-dimensinal array! Here is the middle slice for the last time-point End of explanation print(np.mean(center_voxel_time_series)) print(np.std(center_voxel_time_series)) # TSNR is mean/std: print(np.mean(center_voxel_time_series)/np.std(center_voxel_time_series)) Explanation: That's a 2D array. You get the picture, I hope. You can do all kinds of operations with the data using functions: End of explanation mean_tseries = np.mean(data, axis=-1) # Select the last dimension std_tseries = np.std(data, axis=-1) tsnr = mean_tseries/std_tseries print(tsnr.shape) Explanation: Using functions on parts of the data Many numpy functions have an axis optional argument. These arguments allow you to perform a reduction of the data along one of the dimensions of the array. 
For example, if you want to extract a 3D array with the mean/std in every one of the voxels: End of explanation new_img = nib.Nifti1Image(tsnr, img.affine) new_img.to_filename('tsnr.nii.gz') Explanation: You can save the resulting array into a new file: End of explanation
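As a quick sanity check (an addition to the original material), the file written above can be read back and one slice of the tSNR map displayed; this assumes tsnr.nii.gz was just written to the working directory.
import matplotlib.pyplot as plt
tsnr_img = nib.load('tsnr.nii.gz')
tsnr_back = tsnr_img.get_data()
plt.matshow(tsnr_back[:, :, 15])   # middle slice of the tSNR volume
plt.colorbar()
plt.show()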
14,819
Given the following text description, write Python code to implement the functionality described below step by step Description: Impact of Terrorism on World Development To explore our project with a lot more interaction and read our data story, visit our website! Datasets description Global Terrorism Database This dataset contains information on more than 170,000 terrorit attacks. The Global Terrorism Database (GTD) is an open-source database including information on terrorist events around the world from 1970 through 2016 (with annual updates planned for the future). Unlike many other event databases, the GTD includes systematic data on domestic as well as international terrorist incidents that have occurred during this time period and now includes more than 170,000 cases. Learn more. For more precise information about important details like definitions, collection methodoloy and others plese see the GTD Codebook. Format Step1: Data exploration The first part of the database concerns the attacks. Let's see what it looks like. Step2: Analyzing this list along with the documentation of the dataset, we see that there is a lot of information which is a subclass of another similar class or which it depends from. Here are a few examples Step3: One can immediatly notice the September 11 attacks. Step5: We see that these few plots contain already a lot of information that can be very interesting to analyze. The attentive reader will notice that for the three first plots we see the Unknown type appearing each time. Although it mitght seem worhtless at first, we decided to keep it as it testifies the lack of information about the attacks. As we develop a website to offer the best possible experience for the user/reader, we set up an API to make requests and fetch the data. Each of the requests are SQL queries we are doing at the server database where stands all the information needed. Example of requests for the API (from ./api/api.py) Step6: Website We made quite an advancement on the website, especially the data visualization part. First, when the user lands on the page, a world map is displayed with the number of terrorist attacks in each of the country represented by a grayscale level. There is also a help available in the bottom-right corner, as can be seen in the interaction GIF hereafter. From there, the user can interact with the map, click on countries to zoom in and plot the coordinates of the terrorist attacks. The density of the attacks is shown as a function of the plasma colorscale Step7: How many entries are there ? Step8: Again, we have a lot of indicators and we need to choose the most relevant ones. The first criterion is the number of years for which the indicators are available, and the second is the number of countries for which the indicators are also available. Then, the information will be sufficiently and equally represented only if the two metrics just defined are high. Step9: Analyzing just the first few entries of the dataframe, we already see which indicators we should avoid. It is hard to draw strong conclusions from an indicator that is available only two years, and the more countries we have to compare the values of the indicators the better. Finally, the indicators we choose have to be pertinent. 
After several careful rereadings, we decided too consider the following development indicators Step10: We decided to represent the development status of a country based on different categories Step11: Brainstorming about implementation of the correlation comparison For the milestone 1, we proposed some ideas to tackle the problem of correlation comparison, and asked for recommendations from the TAs. We will quickly recall the propositions
Python Code: import json from urllib.request import urlopen from urllib.parse import quote_plus import pandas as pd import numpy as np from tqdm import tqdm_notebook from multiprocessing import Pool, cpu_count import matplotlib.pyplot as plt plt.style.use('ggplot') import pandas as pd import sqlite3 import warnings warnings.filterwarnings('ignore') # This function take the name of a place on Earth (countries in our case) and make a request # to Google Mas API in order to "normalize" the location and get the country code associated. # This country code will help to join the terrorist attacks with the country indicators. def getplace(place): # Google Maps API base URL url = "https://maps.googleapis.com/maps/api/geocode/json?" # Add the parameters to the base URL url += "address=%s&sensor=false&key=AIzaSyAk90LdWroCWnkWwOVEB_89kAzz1uPCwo0" % (quote_plus(place)) v = urlopen(url).read() j = json.loads(v) try: components = j['results'][0]['address_components'] long = short = None for c in components: if "country" in c['types']: long = c['long_name'] short = c['short_name'] return long, short except: # print('-------------', place) return None, None # Function that creates that normalize location in order to build a mapping which goes from # location to country codes. def mapping(n): return n, getplace(n)[1] # Data folder path data_path = '../data/' # Get the number of cores available for parallelization n_cores = cpu_count() attacks = pd.read_csv(data_path + './globalterrorismdb_0617dist.csv', encoding="ISO-8859-1") countries = pd.read_csv(data_path + './Country.csv') # Treat Congo names ambiguity attacks.loc[attacks.country_txt.str.contains('People\'s Republic of the Congo'), 'country_txt'] = 'Republic of the Congo' # Get all unique names all_names = attacks.country_txt.unique() # Create a mapping from country names to alpha2 country codes pool = Pool(n_cores) name_to_code = tqdm_notebook(pool.imap_unordered(mapping, all_names), total=all_names.size, desc='API calls') name_to_code = {k: v for k,v in name_to_code} # Add field with isocode attacks['iso_code'] = attacks.country_txt.apply(lambda x: name_to_code[x]) # Treat special country with no match for iso_code attacks.loc[attacks.country_txt == 'Ireland','iso_code'] = 'IE' attacks.loc[attacks.country_txt == 'Namibia','iso_code'] = 'NA' # Delete attacks where no country indicators are available countries.loc[countries.CountryCode=='NAM', 'Alpha2Code'] = 'NA' countries.loc[countries.CountryCode=='KSV', 'Alpha2Code'] = 'XK' attacks = attacks[attacks.iso_code.isin(countries.Alpha2Code)] # Build a mapping from alpha2 country codes to alpha3 country codes alpha2_to_alpha3 = countries[['Alpha2Code', 'CountryCode']] alpha2_to_alpha3 = dict(alpha2_to_alpha3.apply(lambda x: (x.Alpha2Code, x.CountryCode), axis=1).values) # Transform the iso_code field with the precedingly created mapping to get alpha3 code attacks.iso_code = attacks.iso_code.apply(lambda iso: alpha2_to_alpha3[iso]) # Save dataframe attacks.to_csv(data_path + 'attacks_cleaned.csv') Explanation: Impact of Terrorism on World Development To explore our project with a lot more interaction and read our data story, visit our website! Datasets description Global Terrorism Database This dataset contains information on more than 170,000 terrorit attacks. The Global Terrorism Database (GTD) is an open-source database including information on terrorist events around the world from 1970 through 2016 (with annual updates planned for the future). 
Unlike many other event databases, the GTD includes systematic data on domestic as well as international terrorist incidents that have occurred during this time period and now includes more than 170,000 cases. Learn more. For more precise information about important details like definitions, collection methodoloy and others plese see the GTD Codebook. Format : CSV &nbsp;&nbsp;&nbsp;&nbsp; Size : 29 MB World Development Indicators The World Development Indicators from the World Bank contain over a thousand annual indicators of economic development from hundreds of countries around the world. Here's a list of the available indicators along with a list of the available countries. Format : SQLITE &nbsp;&nbsp;&nbsp;&nbsp; Size : 261 MB Data cleaning and merging Main data cleaning and merging constraints Our project main purpose involves to get countries statisitcs from different indicators in relation to period where terrorist attacks occur. In order to do it, we have to our diposal two datasets: one with the different countries indicators which is in a really convenient SQLITE format, and the other which contains terrorist attacks and data related to it (like where it happened, which type of attack it is, ect.). <br /> The first important part to deal with is to clean the datasets and merge them. Indeed, we would like to perform join operations between indicators and terrorist attacks. The main problem to overcome is the lack of "agreed" convention to uniquely identify countries between the two datasets. In the first one, all indicators use alpha3 codes (3 letters codes) to denote the countries. We also have a Country table which have the alpha2 code (2 letters) for each country. In the other dataset, the countries are identified using different codes but there wasn't a simple way to use it for merging. Moreover, the name of the countries used are not exactly the same for the two datasets. Our solution The solution we found was to use the Google Maps API and to send requests to it with the name of the countries coming from the second dataset. This API is robust to country names spelling and can "normalize" them for us by returning the alpha2 code of the country given its name. <br /> Here is the detail of our process: 1. First, we got the Country table from the first dataset into CSV format and the attack dataset (which was already in CSV format) in order to process them easily with pandas. We only need the Country table from the frst dataset to perform the cleaning and merging step. 2. Before normalizing the country names using the Google Map API, we needed to tackle some name ambiguity by hand for the Republic of the Congo. Indeed, in the attack dataset, this country is denoted as People's Republic of the Congo and the API couln't understand this name. So we replaced occurences of People's Republic of the Congo with Republic of the Congo. The issue was due to the similarity between the two countries Congo and Republic of the Congo and so the API couldn't figure out which one was People's Republic of the Congo. 3. After resolving this ambiguity, we found each unique country name in the attacks dataset and we built a mapping going from country names to alpha2 code using the Google Map API. Then, we added a field called iso_code to the dataset with the alpha2 code using the mapping. 4. After that, we had some last issue with some alpha2 code not found by the API, it was for Ireland and Namibia so we added them by hand. 5. 
We deleted all attacks for which we had no country indicators: if a country of an attack was not present in the Country table, we deleted the row corresponding to this attack. 6. At this point we were able to join the two datasets using the Country table, but since all indicators denote the country by an alpha3 code, we built a mapping going from alpha2 code to alpha3 code and we transformed the iso_code field so that it contains alpha3 code instead. If we did not performed this operation, each join we would make in the future would need an extra join to get the alpha3 code of an attack through the Country table. Now, we can only have a single join using directly the alpha3 code of the indicators and joining on the iso_code of the attacks table. 7. To conclude, we saved the cleaned attacks dataset and added it to the SQLITE database so we can easily use SQL to query our data. End of explanation connection = sqlite3.connect("../api/data/database.sqlite") attacks = pd.read_sql_query("SELECT * FROM Attacks", connection) attacks.head() # List of all headers from the Attacks table list(attacks) Explanation: Data exploration The first part of the database concerns the attacks. Let's see what it looks like. End of explanation attack_types_USA = pd.read_sql_query('SELECT attacktype1_txt, num_attacks FROM (SELECT attacktype1_txt, COUNT(attacktype1_txt) num_attacks FROM Attacks WHERE iso_code="USA" GROUP BY attacktype1_txt) ORDER BY num_attacks DESC', connection) ax = attack_types_USA.set_index('attacktype1_txt').plot(kind='barh', figsize=(10,5), title='Type of attacks distribution \n', legend=False) ax.set(xlabel='\nNumber of attacks', ylabel='Type of attacks\n') plt.show() attack_targets_USA = pd.read_sql_query('SELECT targtype1_txt, num_attacks FROM (SELECT targtype1_txt, COUNT(targtype1_txt) num_attacks FROM Attacks WHERE iso_code="USA" GROUP BY targtype1_txt) ORDER BY num_attacks DESC LIMIT 10', connection) ax = attack_targets_USA.set_index('targtype1_txt').plot(kind='barh', figsize=(10,5), title='Type of targets distribution \n', legend=False) ax.set(xlabel='\nNumber of attacks', ylabel='Type of targets\n') plt.show() attack_perpetrators_USA = pd.read_sql_query('SELECT gname, num_attacks FROM (SELECT gname, COUNT(gname) num_attacks FROM Attacks WHERE iso_code="USA" GROUP BY gname) ORDER BY num_attacks DESC LIMIT 10', connection) ax = attack_perpetrators_USA.set_index('gname').plot(kind='barh', figsize=(10,5), title='Perpetrators attack distribution \n', legend=False) ax.set(xlabel='\nNumber of attacks', ylabel='Perpetrators\n') plt.show() num_victims_USA = pd.read_sql_query('SELECT iyear, SUM(nkill) FROM Attacks WHERE iso_code="USA" GROUP BY iyear', connection) ax = num_victims_USA.set_index('iyear').plot(kind='bar', figsize=(10,5), title='Number of attacks per year \n', legend=False) ax.set(xlabel='\nYear', ylabel='Number of victims\n') plt.show() Explanation: Analyzing this list along with the documentation of the dataset, we see that there is a lot of information which is a subclass of another similar class or which it depends from. Here are a few examples: targettype1_txt, targetsubtype1_txt targettype2_txt, targetsubtype2_txt attacktype1_txt, attacktype2_txt, attacktype3_txt gname, gsubname Hence, unless we are interested in specific features of an attack, we decided to ignore those subcategories as they give too much details. Therefore, we chose the features that might give the more relevant information to the lambda user, having in mind that we can't flood him with a lot of information. 
Here is the list of the information the user will have access to concerning the attacks: coordinates of the attacks by country num attacks by country and year num victims by country and year top target types by country top weapon types by country top perpetrator types by country These information will allow to understand the situation of a given country in a clear and simple way (more details about how we present this information will come later). Let's dive into an example. What kind of results one can expect for USA ? End of explanation num_attacks_USA = pd.read_sql_query('SELECT iyear, COUNT(*) FROM Attacks WHERE iso_code="USA" GROUP BY iyear', connection) ax = num_attacks_USA.set_index('iyear').plot(kind='bar', figsize=(10,5), title='Number of attacks per year \n', legend=False) ax.set(xlabel='\nYear', ylabel='Number of attacks\n') plt.show() Explanation: One can immediatly notice the September 11 attacks. End of explanation @app.route('/attacks/perpetrators/<string:country>') def attack_perpetrators_by_country(country): Returns the perpetrators list with the number of attacks corresponding to their attacks in descending order of the given country. cur = get_db().execute('SELECT gname, num_attacks FROM (SELECT gname, COUNT(gname) num_attacks FROM Attacks WHERE iso_code="{}" GROUP BY gname) ORDER BY num_attacks DESC'.format(country)) attack_perpetrators = cur.fetchall() cur.close() return jsonify(attack_perpetrators) Explanation: We see that these few plots contain already a lot of information that can be very interesting to analyze. The attentive reader will notice that for the three first plots we see the Unknown type appearing each time. Although it mitght seem worhtless at first, we decided to keep it as it testifies the lack of information about the attacks. As we develop a website to offer the best possible experience for the user/reader, we set up an API to make requests and fetch the data. Each of the requests are SQL queries we are doing at the server database where stands all the information needed. Example of requests for the API (from ./api/api.py): End of explanation dev_indicators = pd.read_sql_query("SELECT * FROM Indicators", connection) dev_indicators.head() Explanation: Website We made quite an advancement on the website, especially the data visualization part. First, when the user lands on the page, a world map is displayed with the number of terrorist attacks in each of the country represented by a grayscale level. There is also a help available in the bottom-right corner, as can be seen in the interaction GIF hereafter. From there, the user can interact with the map, click on countries to zoom in and plot the coordinates of the terrorist attacks. The density of the attacks is shown as a function of the plasma colorscale: Here is a GIF of the interaction with the map: When the user is zoomed-in on a country, he can further explore details about the number of attacks/victims for each year, and also details about the most active terrorist groups, most targeted victims and most common attack types for a country: Here is a static view of the detailed view for a country: Technologies For that we used d3.js, JavaScript ES2016 (Yarn) and Wepback. The design is made with http://www.materializecss.com/ for the design of the interfaces/menus/icons and https://gionkunz.github.io/chartist-js/ for the beautiful plots. We created an API server with Flask to fetch and serve the data from the SQLite database. 
Everything will be hosted on a website, but at the moment the code is in the ./api/ folder for the API, and ./website/. A script will be made to make it easy to run the website locally in the future of the project, if the public hosting is not enough. What's next? From there, we need to incorporate the correlation part between terrorist attacks and the second main part of the database: the development indicators. Let's have a look at it. End of explanation print('The dataframe contains {} rows.'.format(dev_indicators.shape[0])) Explanation: How many entries are there ? End of explanation dev_indicators_summarize = pd.read_sql_query("SELECT IndicatorCode, IndicatorName, COUNT(DISTINCT Year) as numYears, COUNT(DISTINCT CountryName) as numCountries FROM Indicators GROUP BY IndicatorCode", connection) dev_indicators_summarize[:15] Explanation: Again, we have a lot of indicators and we need to choose the most relevant ones. The first criterion is the number of years for which the indicators are available, and the second is the number of countries for which the indicators are also available. Then, the information will be sufficiently and equally represented only if the two metrics just defined are high. End of explanation choosen_indicators = ['EG.USE.ELEC.KH.PC', 'EG.FEC.RNEW.ZS', 'EN.ATM.CO2E.KT', 'EN.POP.DNST', 'FI.RES.XGLD.CD', 'IS.AIR.PSGR', 'IT.NET.USER.P2', 'MS.MIL.MPRT.KD', 'MS.MIL.XPRT.KD', 'NE.EXP.GNFS.CD', 'NE.EXP.GNFS.KD.ZG', 'NE.IMP.GNFS.CD', 'NE.IMP.GNFS.KD.ZG', 'NY.GDP.MKTP.CD', 'NY.GDP.PCAP.CD', 'NY.GNP.MKTP.CD', 'NY.GNP.PCAP.CD', 'SH.DYN.MORT', 'SH.MED.BEDS.ZS', 'SP.DYN.CBRT.IN', 'SP.DYN.CDRT.IN', 'SP.POP.0014.TO.ZS', 'SP.POP.1564.TO.ZS', 'SP.POP.65UP.TO.ZS', 'SP.POP.GROW', 'SP.POP.TOTL'] dev_indicators_summarize[dev_indicators_summarize['IndicatorCode'].isin(choosen_indicators)] Explanation: Analyzing just the first few entries of the dataframe, we already see which indicators we should avoid. It is hard to draw strong conclusions from an indicator that is available only two years, and the more countries we have to compare the values of the indicators the better. Finally, the indicators we choose have to be pertinent. 
After several careful rereadings, we decided too consider the following development indicators: End of explanation arms_exported_USA = pd.read_sql_query('SELECT Year, Value FROM Indicators WHERE CountryCode="USA" AND IndicatorCode="MS.MIL.MPRT.KD"', connection) ax = arms_exported_USA.set_index('Year').plot(kind='bar', figsize=(10,5), title='Arms exports \n', legend=False) ax.set(xlabel='\nYear', ylabel='Exported arms\n') plt.show() hospital_beds_USA = pd.read_sql_query('SELECT Year, Value FROM Indicators WHERE CountryCode="USA" AND IndicatorCode="SH.MED.BEDS.ZS"', connection) ax = hospital_beds_USA.set_index('Year').plot(kind='bar', figsize=(10,5), title='Hospital beds \n', legend=False) ax.set(xlabel='\nYear', ylabel='Number of hospital beds per 1,000\n') plt.show() pop_growth_USA = pd.read_sql_query('SELECT Year, Value FROM Indicators WHERE CountryCode="USA" AND IndicatorCode="SP.POP.GROW"', connection) ax = pop_growth_USA.set_index('Year').plot(kind='bar', figsize=(10,5), title='Population growth \n', legend=False) ax.set(xlabel='\nYear', ylabel='Population growth in %\n') plt.show() electric_consumption_USA = pd.read_sql_query('SELECT Year, Value FROM Indicators WHERE CountryCode="USA" AND IndicatorCode="EG.USE.ELEC.KH.PC"', connection) ax = electric_consumption_USA.set_index('Year').plot(kind='bar', figsize=(10,5), title='Electric consumption \n', legend=False) ax.set(xlabel='\nYear', ylabel='Electric consumption (kWh per capita)\n') plt.show() Explanation: We decided to represent the development status of a country based on different categories: Economy (exports/imports of good and services, GDP, GNI, ...) Social health (mortality rate, hospital beds, ...) Population (age classes ratio, total, population growth, ...) Wealth (internet users, air transport passengers, renewable energy consumption, ...) These were the most relevant and reliable indicators we found. Again, let's see the values of some indicators for USA. End of explanation raw_score_switzerland = pd.read_sql_query('SELECT iyear, (1*COUNT(*) + 3*SUM(nkill) + 0.5*SUM(nwound) + 2*SUM(case propextent when 1.0 then 1 else 0 end) + 2*SUM(case propextent when 2.0 then 1 else 0 end) + 2*SUM(case propextent when 3.0 then 1 else 0 end) + 2*SUM(case propextent when 4.0 then 1 else 0 end)) FROM Attacks WHERE iso_code="USA" GROUP BY iyear', connection) ax = raw_score_switzerland.set_index('iyear').plot(kind='bar', figsize=(10,5), title='GTI \n', legend=False) ax.set(xlabel='\nYear', ylabel='Raw score \n') plt.show() Explanation: Brainstorming about implementation of the correlation comparison For the milestone 1, we proposed some ideas to tackle the problem of correlation comparison, and asked for recommendations from the TAs. We will quickly recall the propositions: 1. Take a neighboring country (geographic wise) which did not get attacked, and compare it with the targeted country. - TA: Is sensible to outliers. 2. Pick multiple neighboring countries which were not attacked, and do a sort of averaging to compare them with the targeted country. - TA: What if a country has only attacked neighbor? 3. Try to find a country who has a high correlation with the targeted country for one or multiple indicators, during the 5 years prior to the attack. From there, compare the same indicator(s), the year of the attack and for 5 years afterwards. - TA: It's hard to predict what the correlation will give and how do you choose the indicators, bias? 4. Using a mathematical-based technique such as Pearson correlation coefficient. 
- TA: Seems better indeed. From these advices and remarks, we will try to use some mathematical-based technique (4), looking for correlation prior to 5 years (or maybe 3 years if "this is asking too much"). Pearson correlation coefficient We might change this pearson correlation coefficient for something more robust or subject to non-linear relationships: Several sets of (x, y) points, with the correlation coefficient of x and y for each set. Note that the correlation reflects the non-linearity and direction of a linear relationship (top row), but not the slope of that relationship (middle), nor many aspects of nonlinear relationships (bottom). N.B.: the figure in the center has a slope of 0 but in that case the correlation coefficient is undefined because the variance of Y is zero. From Wikipedia. Implementation of the correlation The delicate part is to see if a link can exist between a pair of indicators chosen from each one of these two databases. However, this way of proceeding has a severe weak point: comparing only two indicators is equivalent to reducing a causal link to an unique variable while it includes several. To overcome one part of this issue, we decided to aggregate a subset of the GTD indicators in order to use the Global Terrorism Index (GTI) scoring system. This score is used in the well-known report of the same name published annually by Institute for Economics and Peace (IEP) to account for the relative impact of terrorism for a given year. The score, that was determined by GPI expert panel, is computed using four factors and different weight associated with. A description of the formula can be found here. Let's look at the raw score of USA as an example. End of explanation
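To make the proposed Pearson-based comparison concrete, here is a minimal sketch (not a result from the project) that correlates the yearly raw GTI-style score computed above with one of the chosen development indicators, GDP per capita (NY.GDP.PCAP.CD), for the USA. It reuses connection and the raw_score_switzerland dataframe from the previous cells (which, despite its name, queries the USA), and the naive year-level join is illustrative only.
from scipy.stats import pearsonr
gdp_usa = pd.read_sql_query('SELECT Year, Value FROM Indicators '
                            'WHERE CountryCode="USA" AND IndicatorCode="NY.GDP.PCAP.CD"',
                            connection)
score_col = raw_score_switzerland.columns[1]
merged = raw_score_switzerland.merge(gdp_usa, left_on='iyear', right_on='Year').dropna()
r, p = pearsonr(merged[score_col], merged['Value'])
print('Pearson r = {:.3f} (p = {:.3f})'.format(r, p))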
14,820
Given the following text description, write Python code to implement the functionality described below step by step Description: Panel data sets Step1: Panel Demographics Panel demographic files have been standardized and are called ads demoN.csv, where N is the year number Step2: Store File Naming convention Step3: Product Attributes The improved file format, which incorporates further information, is prod_category.xls, 16 There are three sets of files. The first set of files are applicable to years 1-6 and are provided in a directory called “parsed stub files”. The second set of files are applicable to year 7 and are provided in a directory called “parsed stub files 2007”. The third set of files are applicable to years 8-11 are are provided in a directory called “parsed stub files 2008-2011” Step4: Panel Trips These files represent the trips made by panelists who purchased at least one item. These files have been standardized in format from the way they were originally constructed, and placed in the directory “parsed stub files”. The naming convention is 14 tripsN jul08.csv, where N is the year . Fields are listed below. These files contain the following fields Step5: Delivery Stores This is a flat file with information about the stores. This file also contains outlet, estimated acv, the market name so data can be aggregated by market, an open and close week, and finally a “chain” number representing a particular retailer. All the stores belonging to Chain8 are part of the same retailer that year. Step6: Join transactions on products The transactions table is more interesting if you join products on the item column Step7: Most popular Items by $ volume This query takes a long time
Python Code: panel = pd.read_csv("IRI_Data//Year1/External/saltsnck/saltsnck_PANEL_DR_1114_1165.dat", delimiter="\t") panel.head() Explanation: Panel data sets: Category_PANEL_outlet_startweek_endweek.dat Panel data is provided for two BehaviorScan markets, Eau Claire, Wisconsin and Pittsfield, Massachusetts. The naming convention for these is category name then “panel” then outlet then start week and then end week, all separated by underscores, with the extension DAT, so salted snacks drug data for the earliest year would be saltsnck_PANEL_DR_1114_1165 PANID panelist number within a market  UNITS Total number of units purchased by the Buying households. The sum of total units purchased by the households buying the Product.  OUTLET Channel to which the store/chain belongs MA=Mass GR=Grocery DR=drug DOLLARS  Total Paid dollars This is drawn from the store data, not entered by the panelist, in cases where IRI has store data. In cases where IRI does not receive store data, some panelists do IRI_KEY Masked store number  WEEK IRI WEEK   COLUPC (Collapsed UPC). This is the UPC which matches the internal form (e.g. private label collapsed). The information in COLUPC is the same as in the combination of SY, GE, VEND, ITE. This is the combination of a upc’s system (2 digits), generation (1 digit), vendor (5 digits) and item (5 digits) fields. See product description section for an explanation of these fields. No leading zeroes are shown. End of explanation demos1 = pd.read_csv("IRI_Data/demos trips external 1_11 may 13/ads demo1.csv") demos11 = pd.read_csv("IRI_Data/demos trips external 1_11 may 13/ads demos11.CSV") a=set(demos1.columns.values) b=set(demos11.columns.values) [col.replace(" ", "_") for col in demos11.columns] intr=a.intersection(b) [col.replace(" ", "_") for col in intr] #old columns removed... a-b #new columns added b-a Explanation: Panel Demographics Panel demographic files have been standardized and are called ads demoN.csv, where N is the year number: ads demo1.csv, ads demo2.csv ... ads demo5.csv. The panelists included are those who satisfied IRI’s standard 52 week reporting static. This means that (1) the panelists included reported all year, and (2) the panelists are different between years. For the initial set of data provided, the panelist demos reflect data current at that time. So, for the year 1, 2, and 3 (2001-2003) data, the panelist demos are from early 2007, not 2001. For this reason, there may be panelist records without demographics. For years 4 and 5 (2004-2005) the panelist demos are from later in 2007 and may be slightly different due to the demographic updates. Similarly for years 8-11: the demos reflect information pulled in summer, 2012. The field names and the first two panelist values are shown below. Due to the demographic updates, there are minor differences in the values for the two panelists. For example, the male head in household in 1100180 is now listed as “some college” rather than post-secondary “technical school”, and the male head occupation from laborer to machine operator. In these files, a missing value may appear as an empty field, a blank, a period, or a zero. 
End of explanation store = pd.read_fwf("../dse-iri-dataset/Year1/External/saltsnck/saltsnck_drug_1114_1165") store.head() Explanation: Store File Naming convention: The naming convention for these is category name then outlet then start week and then end week, all separated by underscores, with no extension, so salted snacks drug data for the earliest year would be saltsnck_drug_1114_1165. RI_KEY Masked Store number, keyed to delivery_stores file. WEEK IRI Week: see IRI Week Translation.xls file for calendar week translation SY UPC - System GE UPC – Generation VEND UPC - Vendor ITEM UPC - Item UNITS Total Unit sales DOLLARS Total Dollar sales F Feature: see table below D Display: (0=NO, 1=MINOR, 2=MAJOR. MAJOR includes codes lobby and end- aisle) PR Price Reduction flag: (1 if TPR is 5% or greater, 0 otherwise) End of explanation pd.read_ Explanation: Product Attributes The improved file format, which incorporates further information, is prod_category.xls, 16 There are three sets of files. The first set of files are applicable to years 1-6 and are provided in a directory called “parsed stub files”. The second set of files are applicable to year 7 and are provided in a directory called “parsed stub files 2007”. The third set of files are applicable to years 8-11 are are provided in a directory called “parsed stub files 2008-2011” End of explanation trips = pd.read_csv("../dse-iri-dataset/demos trips external 1_11 may13/trips1 jul08.csv") trips.head() Explanation: Panel Trips These files represent the trips made by panelists who purchased at least one item. These files have been standardized in format from the way they were originally constructed, and placed in the directory “parsed stub files”. The naming convention is 14 tripsN jul08.csv, where N is the year . Fields are listed below. These files contain the following fields: End of explanation stores = pd.read_fwf("../dse-iri-dataset/Year1/External/saltsnck/Delivery_Stores") stores.head() %load_ext sql %%sql postgresql://sharknado:[email protected]/sharknado select * from test names = _ names result = %sql select * from stores dataframe = result.DataFrame() dataframe.head() transactions = %sql select * from transactions limit 10 transactions.DataFrame().head() products_query = %sql select * from products limit 10 products_query.DataFrame().head() Explanation: Delivery Stores This is a flat file with information about the stores. This file also contains outlet, estimated acv, the market name so data can be aggregated by market, an open and close week, and finally a “chain” number representing a particular retailer. All the stores belonging to Chain8 are part of the same retailer that year. End of explanation %%sql select * from transactions join products on products.item = transactions.item limit 5 Explanation: Join transactions on products The transactions table is more interesting if you join products on the item column End of explanation %%sql SELECT products.product_type, sum(transactions.dollars) as dollar_sum FROM transactions JOIN products on products.item = transactions.item GROUP BY products.product_type ORDER BY dollar_sum desc LIMIT 10 Explanation: Most popular Items by $ volume This query takes a long time: either precompute a join table optimize the group by End of explanation
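Acting on the two suggestions above: a sketch for the same PostgreSQL database used in this notebook (the view name is made up for illustration). The join is materialized once, and the top-10 aggregation then reads from the precomputed table.
%%sql
CREATE MATERIALIZED VIEW transactions_by_product AS
SELECT products.product_type, transactions.dollars
FROM transactions
JOIN products ON products.item = transactions.item
%%sql
SELECT product_type, sum(dollars) AS dollar_sum
FROM transactions_by_product
GROUP BY product_type
ORDER BY dollar_sum DESC
LIMIT 10
Alternatively, plain indexes on transactions.item and products.item would speed up the original join without precomputing anything.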
14,821
Given the following text description, write Python code to implement the functionality described below step by step Description: Setup Jupyter Notebook Step1: cf. Examples for solve_ivp Step2: An example for unit tests cf. Runge-Kutta method from OK state Depending on one's notation, for "stage" $s=4$, or $m=4$, then for $\alpha_2 = \frac{1}{2}, \, \alpha_3 = \frac{1}{2}, \, \alpha_4 = 1 \equiv \ c_2 = \frac{1}{2}, \, c_3 = \frac{1}{2}, \, c_4 = 1$ $\beta_{21} = \frac{1}{2}, \, \beta_{31} = 0, \, \beta_{32} = \frac{1}{2}, \, \beta_{41} = \beta_{42} = 0, \, \beta_{43} = 1 \equiv \ a_{21} = \frac{1}{2}, \, a_{31} =0, \, a_{32} = \frac{1}{2}, \, a_{41} = a_{42} = 0, \, a_{43} = 1$ $c_1 = c_4 = \frac{1}{6}, \, c_2 = c_3 = \frac{1}{3} \equiv b_1 = b_4 = \frac{1}{6}, \, b_2 = b_3 = \frac{1}{3}$ (authors having different notation sucks). Step3: Consider example $\begin{cases} & y' =y -x^2 + 1\ & y(0) = 0.5 \end{cases}$ Its exact solution is $ y= y(x) = x^2 + 2x + 1 - \frac{1}{2} \exp{x}$ Step4: We first solve this problem using RK4 with $h = 0.5$ and from $t=0$ to $t=2$. With step size $h = 0.5$, it takes 4 steps Step5: Step 2 $\quad \, x_2 = 1$ Step6: Calculated values for unit tests Step7: Embedded Formulas of Order 5 cf. pp. 178, Table 5.2. Dormand-Prince 5(4) (DOPRI5) Ordinary Differential Equations, Vol. 1. Nonstiff-Problems Step8: Coefficients for dense output cf. https Step9: Hermite Interpolation Step10: cf. pp. 916 17.2.2 Dense Output, Ch. 17. Integration of Ordinary Differential Equations of Numerical Recipes, 3rd Ed., and
Python Code: from pathlib import Path import sys notebook_directory_parent = Path.cwd().resolve().parent.parent if str(notebook_directory_parent) not in sys.path: sys.path.append(str(notebook_directory_parent)) %matplotlib inline import numpy as np import scipy import sympy from numpy import linspace from scipy.integrate import solve_ivp import matplotlib.pyplot as plt from T1000.numerical.RungeKuttaMethod import RungeKuttaMethod from T1000.numerical.RKMethods.bCoefficients import bCoefficients from T1000.numerical.RKMethods.DOPRI5Coefficients import DOPRI5Coefficients Explanation: Setup Jupyter Notebook End of explanation def exponential_decay(t, y): return -0.5 * y sol = solve_ivp(exponential_decay, [0, 10], [2, 4, 8]) print("t\n", sol.t) print("y\n", sol.y) Explanation: cf. Examples for solve_ivp End of explanation cs_for_rk4 = [0.5, 0.5, 1] as_for_rk4 = [0.5, 0, 0.5, 0, 0, 1] bs_for_rk4 = [1/6., 1/3., 1/3., 1/6.] rk4 = RungeKuttaMethod(4, cs_for_rk4, as_for_rk4, bs_for_rk4) Explanation: An example for unit tests cf. Runge-Kutta method from OK state Depending on one's notation, for "stage" $s=4$, or $m=4$, then for $\alpha_2 = \frac{1}{2}, \, \alpha_3 = \frac{1}{2}, \, \alpha_4 = 1 \equiv \ c_2 = \frac{1}{2}, \, c_3 = \frac{1}{2}, \, c_4 = 1$ $\beta_{21} = \frac{1}{2}, \, \beta_{31} = 0, \, \beta_{32} = \frac{1}{2}, \, \beta_{41} = \beta_{42} = 0, \, \beta_{43} = 1 \equiv \ a_{21} = \frac{1}{2}, \, a_{31} =0, \, a_{32} = \frac{1}{2}, \, a_{41} = a_{42} = 0, \, a_{43} = 1$ $c_1 = c_4 = \frac{1}{6}, \, c_2 = c_3 = \frac{1}{3} \equiv b_1 = b_4 = \frac{1}{6}, \, b_2 = b_3 = \frac{1}{3}$ (authors having different notation sucks). End of explanation def derivative(x, y): return y - np.power(x, 2) + 1.0 def exact_y(x): return np.power(x, 2) + 2.0 * x + 1.0 - 0.5 * np.exp(x) Explanation: Consider example $\begin{cases} & y' =y -x^2 + 1\ & y(0) = 0.5 \end{cases}$ Its exact solution is $ y= y(x) = x^2 + 2x + 1 - \frac{1}{2} \exp{x}$ End of explanation ks0 = rk4._calculate_k_coefficients(0.5, 0.0, 0.5, derivative) print(ks0) y1 = rk4.calculate_next_step(0.5, 0.0, 0.5, derivative) print(y1) print(exact_y(0)) print(exact_y(0.5)) Explanation: We first solve this problem using RK4 with $h = 0.5$ and from $t=0$ to $t=2$. With step size $h = 0.5$, it takes 4 steps: $t_0 = 0, \, t_1 = 0.5, \, t_2 = 1, \, t_3 = 1.5, \, t_4 = 2$ Step 0 \quad \, $x_0 = 0, \, y_0 = 0.5$ Step 1 \quad \, $x_1 = 0.5, \, h = 0.5$ End of explanation ks1 = rk4._calculate_k_coefficients(y1, 0.5, 0.5, derivative) print(ks1) y2 = rk4.calculate_next_step(y1, 0.5, 0.5, derivative) print(y2) print(exact_y(1.)) Explanation: Step 2 $\quad \, x_2 = 1$ End of explanation alpha_for_rk4 = [0.5, 0.5, 1.] beta_for_rk4 = [0.5, 0., 0.5, 0., 0., 1.] c_for_rk4 = [1./6., 1./3., 1./3., 1./6.] 
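# Same classical RK4 tableau as cs_for_rk4 / as_for_rk4 / bs_for_rk4 above, renamed to
# the alpha (nodes c_i), beta (matrix entries a_ij) and c (weights b_i) convention that
# the RungeKuttaMethod constructor appears to take in this order.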
rk4 = RungeKuttaMethod(4, alpha_for_rk4, beta_for_rk4, c_for_rk4) m = 4 print(rk4._beta_coefficients) for i in range(2, m + 1): for j in range(1, i): print(i, " ", j, " ", rk4.get_beta_ij(i, j)) print(rk4._alpha_coefficients) for i in range(2, m + 1): print(i, " ", rk4.get_alpha_i(i)) print(rk4._c_coefficients) for i in range(1, m + 1): print(i, " ", rk4.get_c_i(i)) x_n = [np.array([2, 4, 8]), np.array([1.88836035, 3.7767207, 7.5534414])] t_n = [0., 0.11487653, 1.26364188] kresult = rk4._calculate_k_coefficients(x_n[0], t_n[0], t_n[1] - t_n[0], exponential_decay) print(kresult) result1 = rk4.calculate_next_step(x_n[0], t_n[0], t_n[1] - t_n[0], exponential_decay) print(result1) result2 = rk4.calculate_next_step(x_n[1], t_n[1], t_n[2] - t_n[1], exponential_decay) print(result2) [1, 2, 3] + [ 4, 5, 6] np.array([2]) Explanation: Calculated values for unit tests End of explanation # print(DOPRI5Coefficients.b_coefficients.get_ith_element(1)) deltas = [] for i in range(1, 8): delta = \ DOPRI5Coefficients.b_coefficients.get_ith_element(i) - \ DOPRI5Coefficients.bstar_coefficients.get_ith_element(i) deltas.append(delta) if (type(delta) != int): print(delta.simplify()) else: print(delta) Explanation: Embedded Formulas of Order 5 cf. pp. 178, Table 5.2. Dormand-Prince 5(4) (DOPRI5) Ordinary Differential Equations, Vol. 1. Nonstiff-Problems End of explanation bs = DOPRI5Coefficients.b_coefficients ds = DOPRI5Coefficients.dense_output_coefficients cstars = DOPRI5Coefficients.cstar_coefficients -2 * (1 + 4 * bs.get_ith_element(1) - 4 * cstars.get_ith_element(1)) Explanation: Coefficients for dense output cf. https://math.stackexchange.com/questions/2947231/how-can-i-derive-the-dense-output-of-ode45 End of explanation from sympy import Matrix, Symbol, symbols, pprint theta, y_n, y_np1, f_n, f_np1, h = symbols("theta y_n y_np1 f_n f_np1 h") Explanation: Hermite Interpolation End of explanation hermite_interpolation = (1 - theta) * y_n + theta * y_np1 + \ theta * (theta - 1) * ((1 - 2 * theta) * (y_np1 - y_n) + (theta - 1) * h *f_n + theta * h * f_np1) pprint(hermite_interpolation) pprint(hermite_interpolation.expand()) pprint(sympy.collect(hermite_interpolation.expand(), y_n, evaluate=False)[y_n]) pprint(sympy.collect(hermite_interpolation.expand(), y_np1, evaluate=False)[y_np1]) pprint(sympy.collect(hermite_interpolation.expand(), f_n, evaluate=False)[f_n]) pprint(sympy.collect(hermite_interpolation.expand(), f_np1, evaluate=False)[f_np1]) Explanation: cf. pp. 916 17.2.2 Dense Output, Ch. 17. Integration of Ordinary Differential Equations of Numerical Recipes, 3rd Ed., and End of explanation
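As a small numerical companion to the symbolic interpolant above, here is a sketch reusing the derivative and exact_y helpers defined earlier in this notebook; the endpoint values are just illustrative.
def hermite_dense_output(theta, y_n, y_np1, f_n, f_np1, h):
    # Numerical version of the cubic Hermite interpolant built symbolically above;
    # theta = 0 recovers y_n and theta = 1 recovers y_np1.
    return ((1 - theta) * y_n + theta * y_np1
            + theta * (theta - 1) * ((1 - 2 * theta) * (y_np1 - y_n)
                                     + (theta - 1) * h * f_n
                                     + theta * h * f_np1))

# Interpolate halfway across the step [0, 0.5] of the y' = y - x^2 + 1 example
# and compare with the exact solution at x = 0.25.
y0, y1, h = 0.5, exact_y(0.5), 0.5
print(hermite_dense_output(0.5, y0, y1, derivative(0.0, y0), derivative(0.5, y1), h))
print(exact_y(0.25))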
14,822
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Chemistry Scheme Scope Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Form Is Required Step9: 1.6. Number Of Tracers Is Required Step10: 1.7. Family Approach Is Required Step11: 1.8. Coupling With Chemical Reactivity Is Required Step12: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required Step13: 2.2. Code Version Is Required Step14: 2.3. Code Languages Is Required Step15: 3. Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required Step16: 3.2. Split Operator Advection Timestep Is Required Step17: 3.3. Split Operator Physical Timestep Is Required Step18: 3.4. Split Operator Chemistry Timestep Is Required Step19: 3.5. Split Operator Alternate Order Is Required Step20: 3.6. Integrated Timestep Is Required Step21: 3.7. Integrated Scheme Type Is Required Step22: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required Step23: 4.2. Convection Is Required Step24: 4.3. Precipitation Is Required Step25: 4.4. Emissions Is Required Step26: 4.5. Deposition Is Required Step27: 4.6. Gas Phase Chemistry Is Required Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required Step30: 4.9. Photo Chemistry Is Required Step31: 4.10. Aerosols Is Required Step32: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required Step33: 5.2. Global Mean Metrics Used Is Required Step34: 5.3. Regional Metrics Used Is Required Step35: 5.4. Trend Metrics Used Is Required Step36: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required Step37: 6.2. Matches Atmosphere Grid Is Required Step38: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required Step39: 7.2. Canonical Horizontal Resolution Is Required Step40: 7.3. Number Of Horizontal Gridpoints Is Required Step41: 7.4. Number Of Vertical Levels Is Required Step42: 7.5. Is Adaptive Grid Is Required Step43: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required Step44: 8.2. Use Atmospheric Transport Is Required Step45: 8.3. Transport Details Is Required Step46: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required Step47: 10. 
Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required Step48: 10.2. Method Is Required Step49: 10.3. Prescribed Climatology Emitted Species Is Required Step50: 10.4. Prescribed Spatially Uniform Emitted Species Is Required Step51: 10.5. Interactive Emitted Species Is Required Step52: 10.6. Other Emitted Species Is Required Step53: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required Step54: 11.2. Method Is Required Step55: 11.3. Prescribed Climatology Emitted Species Is Required Step56: 11.4. Prescribed Spatially Uniform Emitted Species Is Required Step57: 11.5. Interactive Emitted Species Is Required Step58: 11.6. Other Emitted Species Is Required Step59: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required Step60: 12.2. Prescribed Upper Boundary Is Required Step61: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required Step62: 13.2. Species Is Required Step63: 13.3. Number Of Bimolecular Reactions Is Required Step64: 13.4. Number Of Termolecular Reactions Is Required Step65: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required Step66: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required Step67: 13.7. Number Of Advected Species Is Required Step68: 13.8. Number Of Steady State Species Is Required Step69: 13.9. Interactive Dry Deposition Is Required Step70: 13.10. Wet Deposition Is Required Step71: 13.11. Wet Oxidation Is Required Step72: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required Step73: 14.2. Gas Phase Species Is Required Step74: 14.3. Aerosol Species Is Required Step75: 14.4. Number Of Steady State Species Is Required Step76: 14.5. Sedimentation Is Required Step77: 14.6. Coagulation Is Required Step78: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required Step79: 15.2. Gas Phase Species Is Required Step80: 15.3. Aerosol Species Is Required Step81: 15.4. Number Of Steady State Species Is Required Step82: 15.5. Interactive Dry Deposition Is Required Step83: 15.6. Coagulation Is Required Step84: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required Step85: 16.2. Number Of Reactions Is Required Step86: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required Step87: 17.2. Environmental Conditions Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mpi-m', 'sandbox-1', 'atmoschem') Explanation: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era: CMIP6 Institute: MPI-M Source ID: SANDBOX-1 Topic: Atmoschem Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. Properties: 84 (39 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:17 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric chemistry model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmospheric chemistry model code. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Chemistry Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. 
Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/mixing ratio for gas" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Form of prognostic variables in the atmospheric chemistry component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of advected tracers in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry calculations (not advection) generalized into families of species? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.8. Coupling With Chemical Reactivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Operator splitting" # "Integrated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. 
Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the evolution of a given variable End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemical species advection (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for physics (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Split Operator Chemistry Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemistry (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.5. Split Operator Alternate Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.6. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the atmospheric chemistry model (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3.7. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.2. Convection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Precipitation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.4. Emissions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.5. Deposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.6. Gas Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.9. Photo Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.10. Aerosols Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the atmopsheric chemistry grid End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 * Does the atmospheric chemistry grid match the atmosphere grid?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 7.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview of transport implementation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.2. Use Atmospheric Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is transport handled by the atmosphere, rather than within atmospheric cehmistry? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.transport_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Transport Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If transport is handled within the atmospheric chemistry scheme, describe it. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric chemistry emissions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Soil" # "Sea surface" # "Anthropogenic" # "Biomass burning" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via any other method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Aircraft" # "Biomass burning" # "Lightning" # "Volcanos" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. 
Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an &quot;other method&quot; End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview gas phase atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HOx" # "NOy" # "Ox" # "Cly" # "HSOx" # "Bry" # "VOCs" # "isoprene" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Species included in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.3. Number Of Bimolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of bi-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.4. Number Of Termolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of ter-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.7. Number Of Advected Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of advected species in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.8. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.9. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.10. Wet Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.11. Wet Oxidation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview stratospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Cly" # "Bry" # "NOy" # TODO - please enter value(s) Explanation: 14.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Gas phase species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule))" # TODO - please enter value(s) Explanation: 14.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.5. Sedimentation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview tropospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of gas phase species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon/soot" # "Polar stratospheric ice" # "Secondary organic aerosols" # "Particulate organic matter" # TODO - please enter value(s) Explanation: 15.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.5. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric photo chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 16.2. Number Of Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the photo-chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline (clear sky)" # "Offline (with clouds)" # "Online" # TODO - please enter value(s) Explanation: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Photolysis scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.2. Environmental Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.) End of explanation
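For illustration only, a completed cell follows the same DOC.set_value pattern as the templates above; the wording below is a hypothetical placeholder, not a real model description.
# Hypothetical filled-in free-text property (placeholder wording, illustration only)
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
DOC.set_value("Pressure- and temperature-dependent cross-sections and quantum "
              "yields are evaluated at the modelled atmospheric conditions.")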
14,823
Given the following text description, write Python code to implement the functionality described below step by step Description: Quadratic model The quadratic model, pytransit.QuadraticModel, implements a transit over a stellar disk with the stellar limb darkening described using the quadratic limb darkening model as described by Mandel & Agol (ApJ 580, 2002). The model is parallelised using numba, and the number of threads can be set using the NUMBA_NUM_THREADS environment variable. An OpenCL version for GPU computation is implemented by pytransit.QuadraticModelCL, and is discussed later in this notebook. Step1: Model initialization The quadratic model doesn't take any special initialization arguments, so the initialization is straightforward. Step2: Data setup Homogeneous time series The model needs to be set up by calling set_data() before it can be used. At its simplest, set_data takes the mid-exposure times of the time series to be modelled. Step3: Model use Evaluation The transit model can be evaluated using either a set of scalar parameters, a parameter vector (1D ndarray), or a parameter vector array (2D ndarray). The model flux is returned as a 1D ndarray in the first two cases, and a 2D ndarray in the last (one model per parameter vector). tm.evaluate_ps(k, ldc, t0, p, a, i, e=0, w=0) evaluates the model for a set of scalar parameters, where k is the radius ratio, ldc is the limb darkening coefficient vector, t0 the zero epoch, p the orbital period, a the semi-major axis divided by the stellar radius, i the inclination in radians, e the eccentricity, and w the argument of periastron. Eccentricity and argument of periastron are optional, and omitting them defaults to a circular orbit. tm.evaluate_pv(pv, ldc) evaluates the model for a 1D parameter vector, or 2D array of parameter vectors. In the first case, the parameter vector should be array-like with elements [k, t0, p, a, i, e, w]. In the second case, the parameter vectors should be stored in a 2d ndarray with shape (npv, 7) as [[k1, t01, p1, a1, i1, e1, w1], [k2, t02, p2, a2, i2, e2, w2], ... [kn, t0n, pn, an, in, en, wn]] The reason for the different options is that the model implementations may have optimisations that make the model evaluation for a set of parameter vectors much faster than if computing them separately. This is especially the case for the OpenCL models. Note Step4: Supersampling The transit model can be supersampled by setting the nsamples and exptimes arguments in set_data. Step5: Heterogeneous time series PyTransit allows for heterogeneous time series, that is, a single time series can contain several individual light curves (with, e.g., different time cadences and required supersampling rates) observed (possibly) in different passbands. If a time series contains several light curves, it also needs the light curve indices for each exposure. These are given through lcids argument, which should be an array of integers. If the time series contains light curves observed in different passbands, the passband indices need to be given through pbids argument as an integer array, one per light curve. Supersampling can also be defined on per-light curve basis by giving the nsamplesand exptimes as arrays with one value per light curve. 
For example, a set of three light curves, two observed in one passband and the third in another passband times_1 (lc = 0, pb = 0, sc) = [1, 2, 3, 4] times_2 (lc = 1, pb = 0, lc) = [3, 4] times_3 (lc = 2, pb = 1, sc) = [1, 5, 6] Would be set up as tm.set_data(time = [1, 2, 3, 4, 3, 4, 1, 5, 6], lcids = [0, 0, 0, 0, 1, 1, 2, 2, 2], pbids = [0, 0, 1], nsamples = [ 1, 10, 1], exptimes = [0.1, 1.0, 0.1]) Further, each passband requires two limb darkening coefficients, so the limb darkening coefficient array for a single parameter set should now be ldc = [u1, v1, u2, v2] where u and v are the passband-specific quadratic limb darkening model coefficients. Example Step6: OpenCL Usage The OpenCL version of the quadratic model, pytransit.QuadraticModelCL works identically to the Python version, except that the OpenCL context and queue can be given as arguments in the initialiser, and the model evaluation method can be told to not to copy the model from the GPU memory. If the context and queue are not given, the model creates a default context using cl.create_some_context(). Step7: GPU vs. CPU Performance The performance difference between the OpenCL and Python versions depends on the CPU, GPU, number of simultaneously evaluated models, amount of supersampling, and whether the model data is copied from the GPU memory. The performance difference grows in the favour of OpenCL model with the number of simultaneous models and amount of supersampling, but copying the data slows the OpenCL implementation down. For best performance, also the log likelihood computations should be done in the GPU. Step8: Short cadence data without heavy supersampling Step9: Long cadence data with supersampling Step10: Increasing the number of simultaneously evaluated models
Python Code: %pylab inline sys.path.append('..') from pytransit import QuadraticModel seed(0) times_sc = linspace(0.85, 1.15, 1000) # Short cadence time stamps times_lc = linspace(0.85, 1.15, 100) # Long cadence time stamps k, t0, p, a, i, e, w = 0.1, 1., 2.1, 3.2, 0.5*pi, 0.3, 0.4*pi pvp = tile([k, t0, p, a, i, e, w], (50,1)) pvp[1:,0] += normal(0.0, 0.005, size=pvp.shape[0]-1) pvp[1:,1] += normal(0.0, 0.02, size=pvp.shape[0]-1) ldc = stack([normal(0.3, 0.05, pvp.shape[0]), normal(0.1, 0.02, pvp.shape[0])], 1) Explanation: Quadratic model The quadratic model, pytransit.QuadraticModel, implements a transit over a stellar disk with the stellar limb darkening described using the quadratic limb darkening model as described by Mandel & Agol (ApJ 580, 2002). The model is parallelised using numba, and the number of threads can be set using the NUMBA_NUM_THREADS environment variable. An OpenCL version for GPU computation is implemented by pytransit.QuadraticModelCL, and is discussed later in this notebook. End of explanation tm = QuadraticModel() Explanation: Model initialization The quadratic model doesn't take any special initialization arguments, so the initialization is straightforward. End of explanation tm.set_data(times_sc) Explanation: Data setup Homogeneous time series The model needs to be set up by calling set_data() before it can be used. At its simplest, set_data takes the mid-exposure times of the time series to be modelled. End of explanation def plot_transits(tm, ldc, fmt='k'): fig, axs = subplots(1, 3, figsize = (13,3), constrained_layout=True, sharey=True) flux = tm.evaluate_ps(k, ldc[0], t0, p, a, i, e, w) axs[0].plot(tm.time, flux, fmt) axs[0].set_title('Individual parameters') flux = tm.evaluate_pv(pvp[0], ldc[0]) axs[1].plot(tm.time, flux, fmt) axs[1].set_title('Parameter vector') flux = tm.evaluate_pv(pvp, ldc) axs[2].plot(tm.time, flux.T, 'k', alpha=0.2); axs[2].set_title('Parameter vector array') setp(axs[0], ylabel='Normalised flux') setp(axs, xlabel='Time [days]', xlim=tm.time[[0,-1]]) tm.set_data(times_sc) plot_transits(tm, ldc) Explanation: Model use Evaluation The transit model can be evaluated using either a set of scalar parameters, a parameter vector (1D ndarray), or a parameter vector array (2D ndarray). The model flux is returned as a 1D ndarray in the first two cases, and a 2D ndarray in the last (one model per parameter vector). tm.evaluate_ps(k, ldc, t0, p, a, i, e=0, w=0) evaluates the model for a set of scalar parameters, where k is the radius ratio, ldc is the limb darkening coefficient vector, t0 the zero epoch, p the orbital period, a the semi-major axis divided by the stellar radius, i the inclination in radians, e the eccentricity, and w the argument of periastron. Eccentricity and argument of periastron are optional, and omitting them defaults to a circular orbit. tm.evaluate_pv(pv, ldc) evaluates the model for a 1D parameter vector, or 2D array of parameter vectors. In the first case, the parameter vector should be array-like with elements [k, t0, p, a, i, e, w]. In the second case, the parameter vectors should be stored in a 2d ndarray with shape (npv, 7) as [[k1, t01, p1, a1, i1, e1, w1], [k2, t02, p2, a2, i2, e2, w2], ... [kn, t0n, pn, an, in, en, wn]] The reason for the different options is that the model implementations may have optimisations that make the model evaluation for a set of parameter vectors much faster than if computing them separately. This is especially the case for the OpenCL models. 
Note: PyTransit uses always a 2D parameter vector array under the hood, and the scalar evaluation method just packs the parameters into an array before model evaluation. Limb darkening The quadratic limb darkening coefficients are given either as a 1D or 2D array, depending on whether the model is evaluated for a single set of parameters or an array of parameter vectors. In the first case, the coefficients can be given as [u, v], and in the second, as [[u1, v1], [u2, v2], ... [un, vn]]. In the case of a heterogeneous time series with multiple passbands (more details below), the coefficients are given for a single parameter set as a 1D array with a length $2n_{pb}$ ([u1, v1, u2, v2, ... un, vn], where the index now marks the passband), and for a parameter vector array as a 2D array with a shape (npv, 2*npb), as [[u11, v11, u12, v12, ... u1n, v1n], [u21, v21, u22, v22, ... u2n, v2n], ... [un1, vn1, un2, vn2, ... unn, vnn]] End of explanation tm.set_data(times_lc, nsamples=10, exptimes=0.01) plot_transits(tm, ldc) Explanation: Supersampling The transit model can be supersampled by setting the nsamples and exptimes arguments in set_data. End of explanation times_1 = linspace(0.85, 1.0, 500) times_2 = linspace(1.0, 1.15, 10) times = concatenate([times_1, times_2]) lcids = concatenate([full(times_1.size, 0, 'int'), full(times_2.size, 1, 'int')]) pbids = [0, 1] nsamples = [1, 10] exptimes = [0, 0.0167] ldc2 = tile(ldc, (1,2)) ldc2[:,2:] /= 2 tm.set_data(times, lcids, pbids, nsamples=nsamples, exptimes=exptimes) plot_transits(tm, ldc2, 'k.-') Explanation: Heterogeneous time series PyTransit allows for heterogeneous time series, that is, a single time series can contain several individual light curves (with, e.g., different time cadences and required supersampling rates) observed (possibly) in different passbands. If a time series contains several light curves, it also needs the light curve indices for each exposure. These are given through lcids argument, which should be an array of integers. If the time series contains light curves observed in different passbands, the passband indices need to be given through pbids argument as an integer array, one per light curve. Supersampling can also be defined on per-light curve basis by giving the nsamplesand exptimes as arrays with one value per light curve. For example, a set of three light curves, two observed in one passband and the third in another passband times_1 (lc = 0, pb = 0, sc) = [1, 2, 3, 4] times_2 (lc = 1, pb = 0, lc) = [3, 4] times_3 (lc = 2, pb = 1, sc) = [1, 5, 6] Would be set up as tm.set_data(time = [1, 2, 3, 4, 3, 4, 1, 5, 6], lcids = [0, 0, 0, 0, 1, 1, 2, 2, 2], pbids = [0, 0, 1], nsamples = [ 1, 10, 1], exptimes = [0.1, 1.0, 0.1]) Further, each passband requires two limb darkening coefficients, so the limb darkening coefficient array for a single parameter set should now be ldc = [u1, v1, u2, v2] where u and v are the passband-specific quadratic limb darkening model coefficients. 
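As a small sketch of this multi-passband layout (the coefficient values are invented for the example), an array for npv parameter vectors and two passbands can be assembled with numpy:
import numpy as np
npv = 50                                        # number of parameter vectors
u1, v1 = 0.35, 0.12                             # made-up coefficients for passband 1
u2, v2 = 0.28, 0.10                             # made-up coefficients for passband 2
ldc_2pb = np.tile([u1, v1, u2, v2], (npv, 1))   # shape (npv, 2*npb) = (50, 4)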
Example: two light curves with different cadences and passbands End of explanation import pyopencl as cl from pytransit import QuadraticModelCL devices = cl.get_platforms()[0].get_devices()[2:] ctx = cl.Context(devices) queue = cl.CommandQueue(ctx) tm_cl = QuadraticModelCL(cl_ctx=ctx, cl_queue=queue) tm_cl.set_data(times_sc) plot_transits(tm_cl, ldc) Explanation: OpenCL Usage The OpenCL version of the quadratic model, pytransit.QuadraticModelCL works identically to the Python version, except that the OpenCL context and queue can be given as arguments in the initialiser, and the model evaluation method can be told to not to copy the model from the GPU memory. If the context and queue are not given, the model creates a default context using cl.create_some_context(). End of explanation times_sc2 = tile(times_sc, 20) # 20000 short cadence datapoints times_lc2 = tile(times_lc, 50) # 5000 long cadence datapoints tm_py = QuadraticModel() tm_cl = QuadraticModelCL(cl_ctx=ctx, cl_queue=queue) Explanation: GPU vs. CPU Performance The performance difference between the OpenCL and Python versions depends on the CPU, GPU, number of simultaneously evaluated models, amount of supersampling, and whether the model data is copied from the GPU memory. The performance difference grows in the favour of OpenCL model with the number of simultaneous models and amount of supersampling, but copying the data slows the OpenCL implementation down. For best performance, also the log likelihood computations should be done in the GPU. End of explanation tm_py.set_data(times_sc2) tm_cl.set_data(times_sc2) %%timeit tm_py.evaluate_pv(pvp, ldc) %%timeit tm_cl.evaluate_pv(pvp, ldc, copy=True) Explanation: Short cadence data without heavy supersampling End of explanation tm_py.set_data(times_lc2, nsamples=10, exptimes=0.01) tm_cl.set_data(times_lc2, nsamples=10, exptimes=0.01) %%timeit tm_py.evaluate_pv(pvp, ldc) %%timeit tm_cl.evaluate_pv(pvp, ldc, copy=True) Explanation: Long cadence data with supersampling End of explanation pvp2 = tile(pvp, (3,1)) ldc2 = tile(ldc, (3,1)) %%timeit tm_py.evaluate_pv(pvp2, ldc2) %%timeit tm_cl.evaluate_pv(pvp2, ldc2, copy=False) Explanation: Increasing the number of simultaneously evaluated models End of explanation
14,824
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Session 4 Step2: <a name="part-1---pretrained-networks"></a> Part 1 - Pretrained Networks In the libs module, you'll see that I've included a few modules for loading some state of the art networks. These include Step3: Now we can load a pre-trained network's graph and any labels. Explore the different networks in your own time. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> Step4: Each network returns a dictionary with the following keys defined. Every network has a key for "labels" except for "i2v", since this is a feature only network, e.g. an unsupervised network, and does not have labels. Step5: <a name="preprocessdeprocessing"></a> Preprocess/Deprocessing Each network has a preprocessing/deprocessing function which we'll use before sending the input to the network. This preprocessing function is slightly different for each network. Recall from the previous sessions what preprocess we had done before sending an image to a network. We would often normalize the input by subtracting the mean and dividing by the standard deviation. We'd also crop/resize the input to a standard size. We'll need to do this for each network except for the Inception network, which is a true convolutional network and does not require us to do this (will be explained in more depth later). Whenever we preprocess the image, and want to visualize the result of adding back the gradient to the input image (when we use deep dream), we'll need to use the deprocess function stored in the dictionary. Let's explore how these work. We'll confirm this is performing the inverse operation, let's try to preprocess the image, then I'll have you try to deprocess it. Step6: Let's now try preprocessing this image. The function for preprocessing is inside the module we used to load it. For instance, for vgg16, we can find the preprocess function as vgg16.preprocess, or for inception, inception.preprocess, or for i2v, i2v.preprocess. Or, we can just use the key preprocess in our dictionary net, as this is just convenience for us to access the corresponding preprocess function. Step7: Let's undo the preprocessing. Recall that the net dictionary has the key deprocess which is the function we need to use on our processed image, img. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> Step8: <a name="tensorboard"></a> Tensorboard I've added a utility module called nb_utils which includes a function show_graph. This will use Tensorboard to draw the computational graph defined by the various Tensorflow functions. I didn't go over this during the lecture because there just wasn't enough time! But explore it in your own time if it interests you, as it is a really unique tool which allows you to monitor your network's training progress via a web interface. It even lets you monitor specific variables or processes within the network, e.g. the reconstruction of an autoencoder, without having to print to the console as we've been doing. We'll just be using it to draw the pretrained network's graphs using the utility function I've given you. Be sure to interact with the graph and click on the various modules. For instance, if you've loaded the inception v5 network, locate the "input" to the network. This is where we feed the image, the input placeholder (typically what we've been denoting as X in our own networks). From there, it goes to the "conv2d0" variable scope (i.e. 
this uses the code Step9: If you open up the "mixed3a" node above (double click on it), you'll see the first "inception" module. This network encompasses a few advanced concepts that we did not have time to discuss during the lecture, including residual connections, feature concatenation, parallel convolution streams, 1x1 convolutions, and including negative labels in the softmax layer. I'll expand on the 1x1 convolutions here, but please feel free to skip ahead if this isn't of interest to you. <a name="a-note-on-1x1-convolutions"></a> A Note on 1x1 Convolutions The 1x1 convolutions are setting the ksize parameter of the kernels to 1. This is effectively allowing you to change the number of dimensions. Remember that you need a 4-d tensor as input to a convolution. Let's say its dimensions are $\text{N}\ x\ \text{H}\ x\ \text{W}\ x\ \text{C}_I$, where $\text{C}_I$ represents the number of channels the image has. Let's say it is an RGB image, then $\text{C}_I$ would be 3. Or later in the network, if we have already convolved it, it might be 64 channels instead. Regardless, when you convolve it w/ a $\text{K}_H\ x\ \text{K}_W\ x\ \text{C}_I\ x\ \text{C}_O$ filter, where $\text{K}_H$ is 1 and $\text{K}_W$ is also 1, then the filters size is Step10: <a name="using-context-managers"></a> Using Context Managers Up until now, we've mostly used a single tf.Session within a notebook and didn't give it much thought. Now that we're using some bigger models, we're going to have to be more careful. Using a big model and being careless with our session can result in a lot of unexpected behavior, program crashes, and out of memory errors. The VGG network and the I2V networks are quite large. So we'll need to start being more careful with our sessions using context managers. Let's see how this works w/ VGG Step11: <a name="part-2---visualizing-gradients"></a> Part 2 - Visualizing Gradients Now that we know how to load a network and extract layers from it, let's grab only the pooling layers Step12: Let's also grab the input layer Step14: We'll now try to find the gradient activation that maximizes a layer with respect to the input layer x. Step15: Let's try this w/ an image now. We're going to use the plot_gradient function to help us. This is going to take our input image, run it through the network up to a layer, find the gradient of the mean of that layer's activation with respect to the input image, then backprop that gradient back to the input layer. We'll then visualize the gradient by normalizing its values using the utils.normalize function. Step16: <a name="part-3---basic-deep-dream"></a> Part 3 - Basic Deep Dream In the lecture we saw how Deep Dreaming takes the backpropagated gradient activations and simply adds it to the image, running the same process again and again in a loop. We also saw many tricks one can add to this idea, such as infinitely zooming into the image by cropping and scaling, adding jitter by randomly moving the image around, or adding constraints on the total activations. Have a look here for inspiration Step17: Let's now try running Deep Dream for every feature, each of our 5 pooling layers. We'll need to get the layer corresponding to our feature. Then find the gradient of this layer's mean activation with respect to our input, x. Then pass these to our dream function. This can take awhile (about 10 minutes using the CPU on my Macbook Pro). 
Step18: Instead of using an image, we can use an image of noise and see how it "hallucinates" the representations that the layer most responds to Step19: We'll do the same thing as before, now w/ our noise image Step20: <a name="part-4---deep-dream-extensions"></a> Part 4 - Deep Dream Extensions As we saw in the lecture, we can also use the final softmax layer of a network to use during deep dream. This allows us to be explicit about the object we want hallucinated in an image. <a name="using-the-softmax-layer"></a> Using the Softmax Layer Let's get another image to play with, preprocess it, and then make it 4-dimensional. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> Step21: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> Step22: Let's decide on some parameters of our deep dream. We'll need to decide how many iterations to run for. And we'll plot the result every few iterations, also saving it so that we can produce a GIF. And at every iteration, we need to decide how much to ascend our gradient. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> Step23: Now let's dream. We're going to define a context manager to create a session and use our existing graph, and make sure we use the CPU device, as there is no gain in using GPU, and we have much more CPU memory than GPU memory. Step24: <a name="fractal"></a> Fractal During the lecture we also saw a simple trick for creating an infinite fractal Step25: <a name="guided-hallucinations"></a> Guided Hallucinations Instead of following the gradient of an arbitrary mean or max of a particular layer's activation, or a particular object that we want to synthesize, we can also try to guide our image to look like another image. One way to try this is to take one image, the guide, and find the features at a particular layer or layers. Then, we take our synthesis image and find the gradient which makes its own layers activations look like the guide image. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> Step26: Preprocess both images Step27: Like w/ Style Net, we are going to measure how similar the features in the guide image are to the dream images. In order to do that, we'll calculate the dot product. Experiment with other measures such as l1 or l2 loss to see how this impacts the resulting Dream! <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> Step28: We'll now use another measure that we saw when developing Style Net during the lecture. This measure the pixel to pixel difference of neighboring pixels. What we're doing when we try to optimize a gradient that makes the mean differences small is saying, we want the difference to be low. This allows us to smooth our image in the same way that we did using the Gaussian to blur the image. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> Step29: Now we train just like before, except we'll need to combine our two loss terms, feature_loss and tv_loss by simply adding them! The one thing we have to keep in mind is that we want to minimize the tv_loss while maximizing the feature_loss. That means we'll need to use the negative tv_loss and the positive feature_loss. As an experiment, try just optimizing the tv_loss and removing the feature_loss from the tf.gradients call. What happens? <h3><font color='red'>TODO! 
COMPLETE THIS SECTION!</font></h3> Step30: <a name="further-explorations"></a> Further Explorations In the libs module, I've included a deepdream module which has two functions for performing Deep Dream and the Guided Deep Dream. Feel free to explore these to create your own deep dreams. <a name="part-5---style-net"></a> Part 5 - Style Net We'll now work on creating our own style net implementation. We've seen all the steps for how to do this during the lecture, and you can always refer to the Lecture Transcript if you need to. I want to you to explore using different networks and different layers in creating your content and style losses. This is completely unexplored territory so it can be frustrating to find things that work. Think of this as your empty canvas! If you are really stuck, you will find a stylenet implementation under the libs module that you can use instead. Have a look here for inspiration Step31: Let's now import the graph definition into our newly created Graph using a context manager and specifying that we want to use the CPU. Step32: Let's then grab the names of every operation in our network Step33: Now we need an image for our content image and another one for our style image. Step34: Let's see what the network classifies these images as just for fun Step35: <a name="content-features"></a> Content Features We're going to need to find the layer or layers we want to use to help us define our "content loss". Recall from the lecture when we used VGG, we used the 4th convolutional layer. Step36: Pick a layer for using for the content features. If you aren't using VGG remember to get rid of the dropout stuff! <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> Step37: <a name="style-features"></a> Style Features Let's do the same thing now for the style features. We'll use more than 1 layer though so we'll append all the features in a list. If you aren't using VGG remember to get rid of the dropout stuff! <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> Step38: Now we find the gram matrix which we'll use to optimize our features. Step39: <a name="remapping-the-input"></a> Remapping the Input We're almost done building our network. We just have to change the input to the network to become "trainable". Instead of a placeholder, we'll have a tf.Variable, which allows it to be trained. We could set this to the content image, another image entirely, or an image of noise. Experiment with all three options! <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> Step40: <a name="content-loss"></a> Content Loss In the lecture we saw that we'll simply find the l2 loss between our content layer features. Step41: <a name="style-loss"></a> Style Loss Instead of straight l2 loss on the raw feature activations, we're going to calculate the gram matrix and find the loss between these. Intuitively, this is finding what is common across all convolution filters, and trying to enforce the commonality between the synthesis and style image's gram matrix. Step42: <a name="total-variation-loss"></a> Total Variation Loss And just like w/ guided hallucinations, we'll try to enforce some smoothness using a total variation loss. Step43: <a name="training"></a> Training We're almost ready to train! Let's just combine our three loss measures and stick it in an optimizer. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> Step44: And now iterate! Feel free to play with the number of iterations or how often you save an image. 
If you use a different network to VGG, then you will not need to feed in the dropout parameters like I've done here. Step45: <a name="assignment-submission"></a> Assignment Submission After you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as
Python Code: # First check the Python version import sys if sys.version_info < (3,4): print('You are running an older version of Python!\n\n', 'You should consider updating to Python 3.4.0 or', 'higher as the libraries built for this course', 'have only been tested in Python 3.4 and higher.\n') print('Try installing the Python 3.5 version of anaconda' 'and then restart `jupyter notebook`:\n', 'https://www.continuum.io/downloads\n\n') # Now get necessary libraries try: import os import numpy as np import matplotlib.pyplot as plt from skimage.transform import resize from skimage import data from scipy.misc import imresize from scipy.ndimage.filters import gaussian_filter import IPython.display as ipyd import tensorflow as tf from libs import utils, gif, datasets, dataset_utils, vae, dft, vgg16, nb_utils except ImportError: print("Make sure you have started notebook in the same directory", "as the provided zip file which includes the 'libs' folder", "and the file 'utils.py' inside of it. You will NOT be able", "to complete this assignment unless you restart jupyter", "notebook inside the directory created by extracting", "the zip file or cloning the github repo. If you are still") # We'll tell matplotlib to inline any drawn figures like so: %matplotlib inline plt.style.use('ggplot') # Bit of formatting because I don't like the default inline code style: from IPython.core.display import HTML HTML(<style> .rendered_html code { padding: 2px 4px; color: #c7254e; background-color: #f9f2f4; border-radius: 4px; } </style>) Explanation: Session 4: Visualizing Representations Assignment: Deep Dream and Style Net <p class='lead'> Creative Applications of Deep Learning with Google's Tensorflow Parag K. Mital Kadenze, Inc. </p> Overview In this homework, we'll first walk through visualizing the gradients of a trained convolutional network. Recall from the last session that we had trained a variational convolutional autoencoder. We also trained a deep convolutional network. In both of these networks, we learned only a few tools for understanding how the model performs. These included measuring the loss of the network and visualizing the W weight matrices and/or convolutional filters of the network. During the lecture we saw how to visualize the gradients of Inception, Google's state of the art network for object recognition. This resulted in a much more powerful technique for understanding how a network's activations transform or accentuate the representations in the input space. We'll explore this more in Part 1. We also explored how to use the gradients of a particular layer or neuron within a network with respect to its input for performing "gradient ascent". This resulted in Deep Dream. We'll explore this more in Parts 2-4. We also saw how the gradients at different layers of a convolutional network could be optimized for another image, resulting in the separation of content and style losses, depending on the chosen layers. This allowed us to synthesize new images that shared another image's content and/or style, even if they came from separate images. We'll explore this more in Part 5. Finally, you'll packaged all the GIFs you create throughout this notebook and upload them to Kadenze. 
<a name="learning-goals"></a> Learning Goals Learn how to inspect deep networks by visualizing their gradients Learn how to "deep dream" with different objective functions and regularization techniques Learn how to "stylize" an image using content and style losses from different images Table of Contents <!-- MarkdownTOC autolink=true autoanchor=true bracket=round --> Part 1 - Pretrained Networks Graph Definition Preprocess/Deprocessing Tensorboard A Note on 1x1 Convolutions Network Labels Using Context Managers Part 2 - Visualizing Gradients Part 3 - Basic Deep Dream Part 4 - Deep Dream Extensions Using the Softmax Layer Fractal Guided Hallucinations Further Explorations Part 5 - Style Net Network Content Features Style Features Remapping the Input Content Loss Style Loss Total Variation Loss Training Assignment Submission <!-- /MarkdownTOC --> End of explanation from libs import vgg16, inception, i2v Explanation: <a name="part-1---pretrained-networks"></a> Part 1 - Pretrained Networks In the libs module, you'll see that I've included a few modules for loading some state of the art networks. These include: Inception v3 This network has been trained on ImageNet and its finaly output layer is a softmax layer denoting 1 of 1000 possible objects (+ 8 for unknown categories). This network is about only 50MB! Inception v5 This network has been trained on ImageNet and its finaly output layer is a softmax layer denoting 1 of 1000 possible objects (+ 8 for unknown categories). This network is about only 50MB! It presents a few extensions to v5 which are not documented anywhere that I've found, as of yet... Visual Group Geometry @ Oxford's 16 layer This network has been trained on ImageNet and its finaly output layer is a softmax layer denoting 1 of 1000 possible objects. This model is nearly half a gigabyte, about 10x larger in size than the inception network. The trade off is that it is very fast. Visual Group Geometry @ Oxford's Face Recognition This network has been trained on the VGG Face Dataset and its final output layer is a softmax layer denoting 1 of 2622 different possible people. Illustration2Vec This network has been trained on illustrations and manga and its final output layer is 4096 features. Illustration2Vec Tag Please do not use this network if you are under the age of 18 (seriously!) This network has been trained on manga and its final output layer is one of 1539 labels. When we use a pre-trained network, we load a network's definition and its weights which have already been trained. The network's definition includes a set of operations such as convolutions, and adding biases, but all of their values, i.e. the weights, have already been trained. <a name="graph-definition"></a> Graph Definition In the libs folder, you will see a few new modules for loading the above pre-trained networks. Each module is structured similarly to help you understand how they are loaded and include example code for using them. Each module includes a preprocess function for using before sending the image to the network. And when using deep dream techniques, we'll be using the deprocess function to undo the preprocess function's manipulations. Let's take a look at loading one of these. Every network except for i2v includes a key 'labels' denoting what labels the network has been trained on. If you are under the age of 18, please do not use the i2v_tag model, as its labels are unsuitable for minors. 
Let's load the libaries for the different pre-trained networks: End of explanation # Stick w/ Inception for now, and then after you see how # the next few sections work w/ this network, come back # and explore the other networks. net = inception.get_inception_model(version='v5') # net = inception.get_inception_model(version='v3') # net = vgg16.get_vgg_model() # net = vgg16.get_vgg_face_model() # net = i2v.get_i2v_model() # net = i2v.get_i2v_tag_model() Explanation: Now we can load a pre-trained network's graph and any labels. Explore the different networks in your own time. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation print(net.keys()) Explanation: Each network returns a dictionary with the following keys defined. Every network has a key for "labels" except for "i2v", since this is a feature only network, e.g. an unsupervised network, and does not have labels. End of explanation # First, let's get an image: og = plt.imread('clinton.png')[..., :3] plt.imshow(og) print(og.min(), og.max()) Explanation: <a name="preprocessdeprocessing"></a> Preprocess/Deprocessing Each network has a preprocessing/deprocessing function which we'll use before sending the input to the network. This preprocessing function is slightly different for each network. Recall from the previous sessions what preprocess we had done before sending an image to a network. We would often normalize the input by subtracting the mean and dividing by the standard deviation. We'd also crop/resize the input to a standard size. We'll need to do this for each network except for the Inception network, which is a true convolutional network and does not require us to do this (will be explained in more depth later). Whenever we preprocess the image, and want to visualize the result of adding back the gradient to the input image (when we use deep dream), we'll need to use the deprocess function stored in the dictionary. Let's explore how these work. We'll confirm this is performing the inverse operation, let's try to preprocess the image, then I'll have you try to deprocess it. End of explanation # Now call the preprocess function. This will preprocess our # image ready for being input to the network, except for changes # to the dimensions. I.e., we will still need to convert this # to a 4-dimensional Tensor once we input it to the network. # We'll see how that works later. img = net['preprocess'](og) print(img.min(), img.max()) Explanation: Let's now try preprocessing this image. The function for preprocessing is inside the module we used to load it. For instance, for vgg16, we can find the preprocess function as vgg16.preprocess, or for inception, inception.preprocess, or for i2v, i2v.preprocess. Or, we can just use the key preprocess in our dictionary net, as this is just convenience for us to access the corresponding preprocess function. End of explanation deprocessed = ... plt.imshow(deprocessed) plt.show() Explanation: Let's undo the preprocessing. Recall that the net dictionary has the key deprocess which is the function we need to use on our processed image, img. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation nb_utils.show_graph(net['graph_def']) Explanation: <a name="tensorboard"></a> Tensorboard I've added a utility module called nb_utils which includes a function show_graph. This will use Tensorboard to draw the computational graph defined by the various Tensorflow functions. I didn't go over this during the lecture because there just wasn't enough time! 
But explore it in your own time if it interests you, as it is a really unique tool which allows you to monitor your network's training progress via a web interface. It even lets you monitor specific variables or processes within the network, e.g. the reconstruction of an autoencoder, without having to print to the console as we've been doing. We'll just be using it to draw the pretrained network's graphs using the utility function I've given you. Be sure to interact with the graph and click on the various modules. For instance, if you've loaded the inception v5 network, locate the "input" to the network. This is where we feed the image, the input placeholder (typically what we've been denoting as X in our own networks). From there, it goes to the "conv2d0" variable scope (i.e. this uses the code: with tf.variable_scope("conv2d0") to create a set of operations with the prefix "conv2d0/". If you expand this scope, you'll see another scope, "pre_relu". This is created using another tf.variable_scope("pre_relu"), so that any new variables will have the prefix "conv2d0/pre_relu". Finally, inside here, you'll see the convolution operation (tf.nn.conv2d) and the 4d weight tensor, "w" (e.g. created using tf.get_variable), used for convolution (and so has the name, "conv2d0/pre_relu/w". Just after the convolution is the addition of the bias, b. And finally after exiting the "pre_relu" scope, you should be able to see the "conv2d0" operation which applies the relu nonlinearity. In summary, that region of the graph can be created in Tensorflow like so: python input = tf.placeholder(...) with tf.variable_scope('conv2d0'): with tf.variable_scope('pre_relu'): w = tf.get_variable(...) h = tf.nn.conv2d(input, h, ...) b = tf.get_variable(...) h = tf.nn.bias_add(h, b) h = tf.nn.relu(h) End of explanation net['labels'] label_i = 851 print(net['labels'][label_i]) Explanation: If you open up the "mixed3a" node above (double click on it), you'll see the first "inception" module. This network encompasses a few advanced concepts that we did not have time to discuss during the lecture, including residual connections, feature concatenation, parallel convolution streams, 1x1 convolutions, and including negative labels in the softmax layer. I'll expand on the 1x1 convolutions here, but please feel free to skip ahead if this isn't of interest to you. <a name="a-note-on-1x1-convolutions"></a> A Note on 1x1 Convolutions The 1x1 convolutions are setting the ksize parameter of the kernels to 1. This is effectively allowing you to change the number of dimensions. Remember that you need a 4-d tensor as input to a convolution. Let's say its dimensions are $\text{N}\ x\ \text{H}\ x\ \text{W}\ x\ \text{C}_I$, where $\text{C}_I$ represents the number of channels the image has. Let's say it is an RGB image, then $\text{C}_I$ would be 3. Or later in the network, if we have already convolved it, it might be 64 channels instead. Regardless, when you convolve it w/ a $\text{K}_H\ x\ \text{K}_W\ x\ \text{C}_I\ x\ \text{C}_O$ filter, where $\text{K}_H$ is 1 and $\text{K}_W$ is also 1, then the filters size is: $1\ x\ 1\ x\ \text{C}_I$ and this is perfomed for each output channel $\text{C}_O$. What this is doing is filtering the information only in the channels dimension, not the spatial dimensions. The output of this convolution will be a $\text{N}\ x\ \text{H}\ x\ \text{W}\ x\ \text{C}_O$ output tensor. The only thing that changes in the output is the number of output filters. 
The 1x1 convolution operation is essentially reducing the amount of information in the channels dimensions before performing a much more expensive operation, e.g. a 3x3 or 5x5 convolution. Effectively, it is a very clever trick for dimensionality reduction used in many state of the art convolutional networks. Another way to look at it is that it is preserving the spatial information, but at each location, there is a fully connected network taking all the information from every input channel, $\text{C}_I$, and reducing it down to $\text{C}_O$ channels (or could easily also be up, but that is not the typical use case for this). So it's not really a convolution, but we can use the convolution operation to perform it at every location in our image. If you are interested in reading more about this architecture, I highly encourage you to read Network in Network, Christian Szegedy's work on the Inception network, Highway Networks, Residual Networks, and Ladder Networks. In this course, we'll stick to focusing on the applications of these, while trying to delve as much into the code as possible. <a name="network-labels"></a> Network Labels Let's now look at the labels: End of explanation # Load the VGG network. Scroll back up to where we loaded the inception # network if you are unsure. It is inside the "vgg16" module... net = .. assert(net['labels'][0] == (0, 'n01440764 tench, Tinca tinca')) # Let's explicity use the CPU, since we don't gain anything using the GPU # when doing Deep Dream (it's only a single image, benefits come w/ many images). device = '/cpu:0' # We'll now explicitly create a graph g = tf.Graph() # And here is a context manager. We use the python "with" notation to create a context # and create a session that only exists within this indent, as soon as we leave it, # the session is automatically closed! We also tell the session which graph to use. # We can pass a second context after the comma, # which we'll use to be explicit about using the CPU instead of a GPU. with tf.Session(graph=g) as sess, g.device(device): # Now load the graph_def, which defines operations and their values into `g` tf.import_graph_def(net['graph_def'], name='net') # Now we can get all the operations that belong to the graph `g`: names = [op.name for op in g.get_operations()] print(names) Explanation: <a name="using-context-managers"></a> Using Context Managers Up until now, we've mostly used a single tf.Session within a notebook and didn't give it much thought. Now that we're using some bigger models, we're going to have to be more careful. Using a big model and being careless with our session can result in a lot of unexpected behavior, program crashes, and out of memory errors. The VGG network and the I2V networks are quite large. So we'll need to start being more careful with our sessions using context managers. Let's see how this works w/ VGG: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation # First find all the pooling layers in the network. You can # use list comprehension to iterate over all the "names" we just # created, finding whichever ones have the name "pool" in them. # Then be sure to append a ":0" to the names features = ... # Let's print them print(features) # This is what we want to have at the end. You could just copy this list # if you are stuck! 
assert(features == ['net/pool1:0', 'net/pool2:0', 'net/pool3:0', 'net/pool4:0', 'net/pool5:0']) Explanation: <a name="part-2---visualizing-gradients"></a> Part 2 - Visualizing Gradients Now that we know how to load a network and extract layers from it, let's grab only the pooling layers: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation # Use the function 'get_tensor_by_name' and the 'names' array to help you # get the first tensor in the network. Remember you have to add ":0" to the # name to get the output of an operation which is the tensor. x = ... assert(x.name == 'net/images:0') Explanation: Let's also grab the input layer: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation def plot_gradient(img, x, feature, g, device='/cpu:0'): Let's visualize the network's gradient activation when backpropagated to the original input image. This is effectively telling us which pixels contribute to the predicted layer, class, or given neuron with the layer # We'll be explicit about the graph and the device # by using a context manager: with tf.Session(graph=g) as sess, g.device(device): saliency = tf.gradients(tf.reduce_mean(feature), x) this_res = sess.run(saliency[0], feed_dict={x: img}) grad = this_res[0] / np.max(np.abs(this_res)) return grad Explanation: We'll now try to find the gradient activation that maximizes a layer with respect to the input layer x. End of explanation og = plt.imread('clinton.png')[..., :3] img = net['preprocess'](og)[np.newaxis] fig, axs = plt.subplots(1, len(features), figsize=(20, 10)) for i in range(len(features)): axs[i].set_title(features[i]) grad = plot_gradient(img, x, g.get_tensor_by_name(features[i]), g) axs[i].imshow(utils.normalize(grad)) Explanation: Let's try this w/ an image now. We're going to use the plot_gradient function to help us. This is going to take our input image, run it through the network up to a layer, find the gradient of the mean of that layer's activation with respect to the input image, then backprop that gradient back to the input layer. We'll then visualize the gradient by normalizing its values using the utils.normalize function. End of explanation def dream(img, gradient, step, net, x, n_iterations=50, plot_step=10): # Copy the input image as we'll add the gradient to it in a loop img_copy = img.copy() fig, axs = plt.subplots(1, n_iterations // plot_step, figsize=(20, 10)) with tf.Session(graph=g) as sess, g.device(device): for it_i in range(n_iterations): # This will calculate the gradient of the layer we chose with respect to the input image. this_res = sess.run(gradient[0], feed_dict={x: img_copy})[0] # Let's normalize it by the maximum activation this_res /= (np.max(np.abs(this_res) + 1e-8)) # Or alternatively, we can normalize by standard deviation # this_res /= (np.std(this_res) + 1e-8) # Or we could use the `utils.normalize function: # this_res = utils.normalize(this_res) # Experiment with all of the above options. They will drastically # effect the resulting dream, and really depend on the network # you use, and the way the network handles normalization of the # input image, and the step size you choose! Lots to explore! # Then add the gradient back to the input image # Think about what this gradient represents? 
# It says what direction we should move our input # in order to meet our objective stored in "gradient" img_copy += this_res * step # Plot the image if (it_i + 1) % plot_step == 0: m = net['deprocess'](img_copy[0]) axs[it_i // plot_step].imshow(m) # We'll run it for 3 iterations n_iterations = 3 # Think of this as our learning rate. This is how much of # the gradient we'll add back to the input image step = 1.0 # Every 1 iterations, we'll plot the current deep dream plot_step = 1 Explanation: <a name="part-3---basic-deep-dream"></a> Part 3 - Basic Deep Dream In the lecture we saw how Deep Dreaming takes the backpropagated gradient activations and simply adds it to the image, running the same process again and again in a loop. We also saw many tricks one can add to this idea, such as infinitely zooming into the image by cropping and scaling, adding jitter by randomly moving the image around, or adding constraints on the total activations. Have a look here for inspiration: https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB https://mtyka.github.io/deepdream/2016/02/05/bilateral-class-vis.html Let's stick the necessary bits in a function and try exploring how deep dream amplifies the representations of the chosen layers: End of explanation for feature_i in range(len(features)): with tf.Session(graph=g) as sess, g.device(device): # Get a feature layer layer = g.get_tensor_by_name(features[feature_i]) # Find the gradient of this layer's mean activation # with respect to the input image gradient = tf.gradients(tf.reduce_mean(layer), x) # Dream w/ our image dream(img, gradient, step, net, x, n_iterations=n_iterations, plot_step=plot_step) Explanation: Let's now try running Deep Dream for every feature, each of our 5 pooling layers. We'll need to get the layer corresponding to our feature. Then find the gradient of this layer's mean activation with respect to our input, x. Then pass these to our dream function. This can take awhile (about 10 minutes using the CPU on my Macbook Pro). End of explanation noise = net['preprocess']( np.random.rand(256, 256, 3) * 0.1 + 0.45)[np.newaxis] Explanation: Instead of using an image, we can use an image of noise and see how it "hallucinates" the representations that the layer most responds to: End of explanation for feature_i in range(len(features)): with tf.Session(graph=g) as sess, g.device(device): # Get a feature layer layer = ... # Find the gradient of this layer's mean activation # with respect to the input image gradient = ... # Dream w/ the noise image. Complete this! dream(...) Explanation: We'll do the same thing as before, now w/ our noise image: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation # Load your own image here og = ... plt.imshow(og) # Preprocess the image and make sure it is 4-dimensional by adding a new axis to the 0th dimension: img = ... assert(img.ndim == 4) # Let's get the softmax layer print(names[-2]) layer = g.get_tensor_by_name(names[-2] + ":0") # And find its shape with tf.Session(graph=g) as sess, g.device(device): layer_shape = tf.shape(layer).eval(feed_dict={x:img}) # We can find out how many neurons it has by feeding it an image and # calculating the shape. The number of output channels is the last dimension. n_els = layer_shape[-1] # Let's pick a label. 
First let's print out every label and then find one we like: print(net['labels']) Explanation: <a name="part-4---deep-dream-extensions"></a> Part 4 - Deep Dream Extensions As we saw in the lecture, we can also use the final softmax layer of a network to use during deep dream. This allows us to be explicit about the object we want hallucinated in an image. <a name="using-the-softmax-layer"></a> Using the Softmax Layer Let's get another image to play with, preprocess it, and then make it 4-dimensional. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation # Pick a neuron. Or pick a random one. This should be 0-n_els neuron_i = ... print(net['labels'][neuron_i]) assert(neuron_i >= 0 and neuron_i < n_els) # And we'll create an activation of this layer which is very close to 0 layer_vec = np.ones(layer_shape) / 100.0 # Except for the randomly chosen neuron which will be very close to 1 layer_vec[..., neuron_i] = 0.99 Explanation: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation # Explore different parameters for this section. n_iterations = 51 plot_step = 5 # If you use a different network, you will definitely need to experiment # with the step size, as each network normalizes the input image differently. step = 0.2 Explanation: Let's decide on some parameters of our deep dream. We'll need to decide how many iterations to run for. And we'll plot the result every few iterations, also saving it so that we can produce a GIF. And at every iteration, we need to decide how much to ascend our gradient. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation imgs = [] with tf.Session(graph=g) as sess, g.device(device): gradient = tf.gradients(tf.reduce_max(layer), x) # Copy the input image as we'll add the gradient to it in a loop img_copy = img.copy() with tf.Session(graph=g) as sess, g.device(device): for it_i in range(n_iterations): # This will calculate the gradient of the layer we chose with respect to the input image. this_res = sess.run(gradient[0], feed_dict={ x: img_copy, layer: layer_vec})[0] # Let's normalize it by the maximum activation this_res /= (np.max(np.abs(this_res) + 1e-8)) # Or alternatively, we can normalize by standard deviation # this_res /= (np.std(this_res) + 1e-8) # Then add the gradient back to the input image # Think about what this gradient represents? # It says what direction we should move our input # in order to meet our objective stored in "gradient" img_copy += this_res * step # Plot the image if (it_i + 1) % plot_step == 0: m = net['deprocess'](img_copy[0]) plt.figure(figsize=(5, 5)) plt.grid('off') plt.imshow(m) plt.show() imgs.append(m) # Save the gif gif.build_gif(imgs, saveto='softmax.gif') ipyd.Image(url='softmax.gif?i={}'.format( np.random.rand()), height=300, width=300) Explanation: Now let's dream. We're going to define a context manager to create a session and use our existing graph, and make sure we use the CPU device, as there is no gain in using GPU, and we have much more CPU memory than GPU memory. End of explanation n_iterations = 101 plot_step = 10 step = 0.1 crop = 1 imgs = [] n_imgs, height, width, *ch = img.shape with tf.Session(graph=g) as sess, g.device(device): # Explore changing the gradient here from max to mean # or even try using different concepts we learned about # when creating style net, such as using a total variational # loss on `x`. 
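# The objective below is the maximum activation of the softmax layer; feeding
# layer_vec in the run call keeps only the neuron we picked near 1, so the
# gradient with respect to x nudges the image toward that chosen label.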
gradient = tf.gradients(tf.reduce_max(layer), x) # Copy the input image as we'll add the gradient to it in a loop img_copy = img.copy() with tf.Session(graph=g) as sess, g.device(device): for it_i in range(n_iterations): # This will calculate the gradient of the layer # we chose with respect to the input image. this_res = sess.run(gradient[0], feed_dict={ x: img_copy, layer: layer_vec})[0] # This is just one way we could normalize the # gradient. It helps to look at the range of your image's # values, e.g. if it is 0 - 1, or -115 to +115, # and then consider the best way to normalize the gradient. # For some networks, it might not even be necessary # to perform this normalization, especially if you # leave the dream to run for enough iterations. # this_res = this_res / (np.std(this_res) + 1e-10) this_res = this_res / (np.max(np.abs(this_res)) + 1e-10) # Then add the gradient back to the input image # Think about what this gradient represents? # It says what direction we should move our input # in order to meet our objective stored in "gradient" img_copy += this_res * step # Optionally, we could apply any number of regularization # techniques... Try exploring different ways of regularizing # gradient. ascent process. If you are adventurous, you can # also explore changing the gradient above using a # total variational loss, as we used in the style net # implementation during the lecture. I leave that to you # as an exercise! # Crop a 1 pixel border from height and width img_copy = img_copy[:, crop:-crop, crop:-crop, :] # Resize (Note: in the lecture, we used scipy's resize which # could not resize images outside of 0-1 range, and so we had # to store the image ranges. This is a much simpler resize # method that allows us to `preserve_range`.) img_copy = resize(img_copy[0], (height, width), order=3, clip=False, preserve_range=True )[np.newaxis].astype(np.float32) # Plot the image if (it_i + 1) % plot_step == 0: m = net['deprocess'](img_copy[0]) plt.grid('off') plt.imshow(m) plt.show() imgs.append(m) # Create a GIF gif.build_gif(imgs, saveto='fractal.gif') ipyd.Image(url='fractal.gif?i=2', height=300, width=300) Explanation: <a name="fractal"></a> Fractal During the lecture we also saw a simple trick for creating an infinite fractal: crop the image and then resize it. This can produce some lovely aesthetics and really show some strong object hallucinations if left long enough and with the right parameters for step size/normalization/regularization. Feel free to experiment with the code below, adding your own regularizations as shown in the lecture to produce different results! <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation # Replace these with your own images! guide_og = plt.imread(...)[..., :3] dream_og = plt.imread(...)[..., :3] assert(guide_og.ndim == 3 and guide_og.shape[-1] == 3) assert(dream_og.ndim == 3 and dream_og.shape[-1] == 3) Explanation: <a name="guided-hallucinations"></a> Guided Hallucinations Instead of following the gradient of an arbitrary mean or max of a particular layer's activation, or a particular object that we want to synthesize, we can also try to guide our image to look like another image. One way to try this is to take one image, the guide, and find the features at a particular layer or layers. Then, we take our synthesis image and find the gradient which makes its own layers activations look like the guide image. <h3><font color='red'>TODO! 
COMPLETE THIS SECTION!</font></h3> End of explanation guide_img = net['preprocess'](guide_og)[np.newaxis] dream_img = net['preprocess'](dream_og)[np.newaxis] fig, axs = plt.subplots(1, 2, figsize=(7, 4)) axs[0].imshow(guide_og) axs[1].imshow(dream_og) Explanation: Preprocess both images: End of explanation x = g.get_tensor_by_name(names[0] + ":0") # Experiment with the weighting feature_loss_weight = 1.0 with tf.Session(graph=g) as sess, g.device(device): feature_loss = tf.Variable(0.0) # Explore different layers/subsets of layers. This is just an example. for feature_i in features[3:5]: # Get the activation of the feature layer = g.get_tensor_by_name(feature_i) # Do the same for our guide image guide_layer = sess.run(layer, feed_dict={x: guide_img}) # Now we need to measure how similar they are! # We'll use the dot product, which requires us to first reshape both # features to a 2D vector. But you should experiment with other ways # of measuring similarity such as l1 or l2 loss. # Reshape each layer to 2D vector layer = tf.reshape(layer, [-1, 1]) guide_layer = guide_layer.reshape(-1, 1) # Now calculate their dot product correlation = tf.matmul(guide_layer.T, layer) # And weight the loss by a factor so we can control its influence feature_loss += feature_loss_weight * correlation Explanation: Like w/ Style Net, we are going to measure how similar the features in the guide image are to the dream images. In order to do that, we'll calculate the dot product. Experiment with other measures such as l1 or l2 loss to see how this impacts the resulting Dream! <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation n_img, height, width, ch = dream_img.shape # We'll weight the overall contribution of the total variational loss # Experiment with this weighting tv_loss_weight = 1.0 with tf.Session(graph=g) as sess, g.device(device): # Penalize variations in neighboring pixels, enforcing smoothness dx = tf.square(x[:, :height - 1, :width - 1, :] - x[:, :height - 1, 1:, :]) dy = tf.square(x[:, :height - 1, :width - 1, :] - x[:, 1:, :width - 1, :]) # We will calculate their difference raised to a power to push smaller # differences closer to 0 and larger differences higher. # Experiment w/ the power you raise this to to see how it effects the result tv_loss = tv_loss_weight * tf.reduce_mean(tf.pow(dx + dy, 1.2)) Explanation: We'll now use another measure that we saw when developing Style Net during the lecture. This measure the pixel to pixel difference of neighboring pixels. What we're doing when we try to optimize a gradient that makes the mean differences small is saying, we want the difference to be low. This allows us to smooth our image in the same way that we did using the Gaussian to blur the image. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation # Experiment with the step size! step = 0.1 imgs = [] with tf.Session(graph=g) as sess, g.device(device): # Experiment with just optimizing the tv_loss or negative tv_loss to understand what it is doing! gradient = tf.gradients(-tv_loss + feature_loss, x) # Copy the input image as we'll add the gradient to it in a loop img_copy = dream_img.copy() with tf.Session(graph=g) as sess, g.device(device): sess.run(tf.global_variables_initializer()) for it_i in range(n_iterations): # This will calculate the gradient of the layer we chose with respect to the input image. 
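        # (Added clarification) Here `gradient` was built from the combined
        # guided objective (-tv_loss + feature_loss) defined above, so only the
        # input image is fed; there is no layer_vec feed as in the earlier
        # softmax dream loop.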
this_res = sess.run(gradient[0], feed_dict={x: img_copy})[0] # Let's normalize it by the maximum activation this_res /= (np.max(np.abs(this_res) + 1e-8)) # Or alternatively, we can normalize by standard deviation # this_res /= (np.std(this_res) + 1e-8) # Then add the gradient back to the input image # Think about what this gradient represents? # It says what direction we should move our input # in order to meet our objective stored in "gradient" img_copy += this_res * step # Plot the image if (it_i + 1) % plot_step == 0: m = net['deprocess'](img_copy[0]) plt.figure(figsize=(5, 5)) plt.grid('off') plt.imshow(m) plt.show() imgs.append(m) gif.build_gif(imgs, saveto='guided.gif') ipyd.Image(url='guided.gif?i=0', height=300, width=300) Explanation: Now we train just like before, except we'll need to combine our two loss terms, feature_loss and tv_loss by simply adding them! The one thing we have to keep in mind is that we want to minimize the tv_loss while maximizing the feature_loss. That means we'll need to use the negative tv_loss and the positive feature_loss. As an experiment, try just optimizing the tv_loss and removing the feature_loss from the tf.gradients call. What happens? <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation sess.close() tf.reset_default_graph() # Stick w/ VGG for now, and then after you see how # the next few sections work w/ this network, come back # and explore the other networks. net = vgg16.get_vgg_model() # net = vgg16.get_vgg_face_model() # net = inception.get_inception_model(version='v5') # net = inception.get_inception_model(version='v3') # net = i2v.get_i2v_model() # net = i2v.get_i2v_tag_model() # Let's explicity use the CPU, since we don't gain anything using the GPU # when doing Deep Dream (it's only a single image, benefits come w/ many images). device = '/cpu:0' # We'll now explicitly create a graph g = tf.Graph() Explanation: <a name="further-explorations"></a> Further Explorations In the libs module, I've included a deepdream module which has two functions for performing Deep Dream and the Guided Deep Dream. Feel free to explore these to create your own deep dreams. <a name="part-5---style-net"></a> Part 5 - Style Net We'll now work on creating our own style net implementation. We've seen all the steps for how to do this during the lecture, and you can always refer to the Lecture Transcript if you need to. I want to you to explore using different networks and different layers in creating your content and style losses. This is completely unexplored territory so it can be frustrating to find things that work. Think of this as your empty canvas! If you are really stuck, you will find a stylenet implementation under the libs module that you can use instead. Have a look here for inspiration: https://mtyka.github.io/code/2015/10/02/experiments-with-style-transfer.html http://kylemcdonald.net/stylestudies/ <a name="network"></a> Network Let's reset the graph and load up a network. I'll include code here for loading up any of our pretrained networks so you can explore each of them! <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation # And here is a context manager. We use the python "with" notation to create a context # and create a session that only exists within this indent, as soon as we leave it, # the session is automatically closed! We also tel the session which graph to use. # We can pass a second context after the comma, # which we'll use to be explicit about using the CPU instead of a GPU. 
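# (Added note) Because the graph is imported with name='net' below, every
# imported operation is prefixed with 'net/', which is why later tensor lookups
# use names such as 'net/conv3_2/conv3_2:0'.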
with tf.Session(graph=g) as sess, g.device(device): # Now load the graph_def, which defines operations and their values into `g` tf.import_graph_def(net['graph_def'], name='net') Explanation: Let's now import the graph definition into our newly created Graph using a context manager and specifying that we want to use the CPU. End of explanation names = [op.name for op in g.get_operations()] Explanation: Let's then grab the names of every operation in our network: End of explanation content_og = plt.imread('arles.png')[..., :3] style_og = plt.imread('clinton.png')[..., :3] fig, axs = plt.subplots(1, 2) axs[0].imshow(content_og) axs[0].set_title('Content Image') axs[0].grid('off') axs[1].imshow(style_og) axs[1].set_title('Style Image') axs[1].grid('off') # We'll save these with a specific name to include in your submission plt.imsave(arr=content_og, fname='content.png') plt.imsave(arr=style_og, fname='style.png') content_img = net['preprocess'](content_og)[np.newaxis] style_img = net['preprocess'](style_og)[np.newaxis] Explanation: Now we need an image for our content image and another one for our style image. End of explanation # Grab the tensor defining the input to the network x = ... # And grab the tensor defining the softmax layer of the network softmax = ... for img in [content_img, style_img]: with tf.Session(graph=g) as sess, g.device('/cpu:0'): # Remember from the lecture that we have to set the dropout # "keep probability" to 1.0. res = softmax.eval(feed_dict={x: img, 'net/dropout_1/random_uniform:0': np.ones( g.get_tensor_by_name( 'net/dropout_1/random_uniform:0' ).get_shape().as_list()), 'net/dropout/random_uniform:0': np.ones( g.get_tensor_by_name( 'net/dropout/random_uniform:0' ).get_shape().as_list())})[0] print([(res[idx], net['labels'][idx]) for idx in res.argsort()[-5:][::-1]]) Explanation: Let's see what the network classifies these images as just for fun: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation print(names) Explanation: <a name="content-features"></a> Content Features We're going to need to find the layer or layers we want to use to help us define our "content loss". Recall from the lecture when we used VGG, we used the 4th convolutional layer. End of explanation # Experiment w/ different layers here. You'll need to change this if you # use another network! content_layer = 'net/conv3_2/conv3_2:0' with tf.Session(graph=g) as sess, g.device('/cpu:0'): content_features = g.get_tensor_by_name(content_layer).eval( session=sess, feed_dict={x: content_img, 'net/dropout_1/random_uniform:0': np.ones( g.get_tensor_by_name( 'net/dropout_1/random_uniform:0' ).get_shape().as_list()), 'net/dropout/random_uniform:0': np.ones( g.get_tensor_by_name( 'net/dropout/random_uniform:0' ).get_shape().as_list())}) Explanation: Pick a layer for using for the content features. If you aren't using VGG remember to get rid of the dropout stuff! <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation # Experiment with different layers and layer subsets. You'll need to change these # if you use a different network! 
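# (Added note) A quick way to see which layer names are available in the
# current graph is to filter the `names` list gathered earlier, for example:
#     print([n for n in names if '/conv' in n])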
style_layers = ['net/conv1_1/conv1_1:0', 'net/conv2_1/conv2_1:0', 'net/conv3_1/conv3_1:0', 'net/conv4_1/conv4_1:0', 'net/conv5_1/conv5_1:0'] style_activations = [] with tf.Session(graph=g) as sess, g.device('/cpu:0'): for style_i in style_layers: style_activation_i = g.get_tensor_by_name(style_i).eval( feed_dict={x: style_img, 'net/dropout_1/random_uniform:0': np.ones( g.get_tensor_by_name( 'net/dropout_1/random_uniform:0' ).get_shape().as_list()), 'net/dropout/random_uniform:0': np.ones( g.get_tensor_by_name( 'net/dropout/random_uniform:0' ).get_shape().as_list())}) style_activations.append(style_activation_i) Explanation: <a name="style-features"></a> Style Features Let's do the same thing now for the style features. We'll use more than 1 layer though so we'll append all the features in a list. If you aren't using VGG remember to get rid of the dropout stuff! <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation style_features = [] for style_activation_i in style_activations: s_i = np.reshape(style_activation_i, [-1, style_activation_i.shape[-1]]) gram_matrix = np.matmul(s_i.T, s_i) / s_i.size style_features.append(gram_matrix.astype(np.float32)) Explanation: Now we find the gram matrix which we'll use to optimize our features. End of explanation tf.reset_default_graph() g = tf.Graph() # Get the network again net = vgg16.get_vgg_model() # Load up a session which we'll use to import the graph into. with tf.Session(graph=g) as sess, g.device('/cpu:0'): # We can set the `net_input` to our content image # or perhaps another image # or an image of noise # net_input = tf.Variable(content_img / 255.0) net_input = tf.get_variable( name='input', shape=content_img.shape, dtype=tf.float32, initializer=tf.random_normal_initializer( mean=np.mean(content_img), stddev=np.std(content_img))) # Now we load the network again, but this time replacing our placeholder # with the trainable tf.Variable tf.import_graph_def( net['graph_def'], name='net', input_map={'images:0': net_input}) Explanation: <a name="remapping-the-input"></a> Remapping the Input We're almost done building our network. We just have to change the input to the network to become "trainable". Instead of a placeholder, we'll have a tf.Variable, which allows it to be trained. We could set this to the content image, another image entirely, or an image of noise. Experiment with all three options! <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation with tf.Session(graph=g) as sess, g.device('/cpu:0'): content_loss = tf.nn.l2_loss((g.get_tensor_by_name(content_layer) - content_features) / content_features.size) Explanation: <a name="content-loss"></a> Content Loss In the lecture we saw that we'll simply find the l2 loss between our content layer features. End of explanation with tf.Session(graph=g) as sess, g.device('/cpu:0'): style_loss = np.float32(0.0) for style_layer_i, style_gram_i in zip(style_layers, style_features): layer_i = g.get_tensor_by_name(style_layer_i) layer_shape = layer_i.get_shape().as_list() layer_size = layer_shape[1] * layer_shape[2] * layer_shape[3] layer_flat = tf.reshape(layer_i, [-1, layer_shape[3]]) gram_matrix = tf.matmul(tf.transpose(layer_flat), layer_flat) / layer_size style_loss = tf.add(style_loss, tf.nn.l2_loss((gram_matrix - style_gram_i) / np.float32(style_gram_i.size))) Explanation: <a name="style-loss"></a> Style Loss Instead of straight l2 loss on the raw feature activations, we're going to calculate the gram matrix and find the loss between these. 
Intuitively, this is finding what is common across all convolution filters, and trying to enforce the commonality between the synthesis and style image's gram matrix. End of explanation def total_variation_loss(x): h, w = x.get_shape().as_list()[1], x.get_shape().as_list()[1] dx = tf.square(x[:, :h-1, :w-1, :] - x[:, :h-1, 1:, :]) dy = tf.square(x[:, :h-1, :w-1, :] - x[:, 1:, :w-1, :]) return tf.reduce_sum(tf.pow(dx + dy, 1.25)) with tf.Session(graph=g) as sess, g.device('/cpu:0'): tv_loss = total_variation_loss(net_input) Explanation: <a name="total-variation-loss"></a> Total Variation Loss And just like w/ guided hallucinations, we'll try to enforce some smoothness using a total variation loss. End of explanation with tf.Session(graph=g) as sess, g.device('/cpu:0'): # Experiment w/ the weighting of these! They produce WILDLY different # results. loss = 5.0 * content_loss + 1.0 * style_loss + 0.001 * tv_loss optimizer = tf.train.AdamOptimizer(0.05).minimize(loss) Explanation: <a name="training"></a> Training We're almost ready to train! Let's just combine our three loss measures and stick it in an optimizer. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation imgs = [] n_iterations = 100 with tf.Session(graph=g) as sess, g.device('/cpu:0'): sess.run(tf.global_variables_initializer()) # map input to noise og_img = net_input.eval() for it_i in range(n_iterations): _, this_loss, synth = sess.run([optimizer, loss, net_input], feed_dict={ 'net/dropout_1/random_uniform:0': np.ones( g.get_tensor_by_name( 'net/dropout_1/random_uniform:0' ).get_shape().as_list()), 'net/dropout/random_uniform:0': np.ones( g.get_tensor_by_name( 'net/dropout/random_uniform:0' ).get_shape().as_list()) }) print("%d: %f, (%f - %f)" % (it_i, this_loss, np.min(synth), np.max(synth))) if it_i % 5 == 0: m = vgg16.deprocess(synth[0]) imgs.append(m) plt.imshow(m) plt.show() gif.build_gif(imgs, saveto='stylenet.gif') ipyd.Image(url='stylenet.gif?i=0', height=300, width=300) Explanation: And now iterate! Feel free to play with the number of iterations or how often you save an image. If you use a different network to VGG, then you will not need to feed in the dropout parameters like I've done here. End of explanation utils.build_submission('session-4.zip', ('softmax.gif', 'fractal.gif', 'guided.gif', 'content.png', 'style.png', 'stylenet.gif', 'session-4.ipynb')) Explanation: <a name="assignment-submission"></a> Assignment Submission After you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as: <pre> session-4/ session-4.ipynb softmax.gif fractal.gif guided.gif content.png style.png stylenet.gif </pre> You'll then submit this zip file for your third assignment on Kadenze for "Assignment 4: Deep Dream and Style Net"! Remember to complete the rest of the assignment, gallery commenting on your peers work, to receive full credit! If you have any questions, remember to reach out on the forums and connect with your peers or with me. To get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the #CADL community to see what your peers are doing! 
https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info Also, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the #CADL hashtag so that other students can find your work! End of explanation
14,825
Given the following text description, write Python code to implement the functionality described below step by step Description: Time Series Visualization with Altair Author Step1: Other libraries Import other libraries used in this notebook. pandas Step4: Region reduction function Reduction of pixels intersecting the region of interest to a statistic will be performed multiple times. Define a reusable function that can perform the task for each dataset. The function accepts arguments such as scale and reduction method to parameterize the operation for each particular analysis. Note Step5: Formatting The result of the region reduction function above applied to an ee.ImageCollection produces an ee.FeatureCollection. This data needs to be transferred to the Python kernel, but serialized feature collections are large and awkward to deal with. This step defines a function to convert the feature collection to an ee.Dictionary where the keys are feature property names and values are corresponding lists of property values, which pandas can deal with handily. Extract the property values from the ee.FeatureCollection as a list of lists stored in an ee.Dictionary using reduceColumns(). Extract the list of lists from the dictionary. Add names to each list by converting to an ee.Dictionary where keys are property names and values are the corresponding value lists. The returned ee.Dictionary is essentially a table, where keys define columns and list elements define rows. Step6: Drought severity In this section we'll look at a time series of drought severity as a calendar heat map and a bar chart. Import data Load the gridded Palmer Drought Severity Index (PDSI) data as an ee.ImageCollection. Load the EPA Level-3 ecoregion boundaries as an ee.FeatureCollection and filter it to include only the Sierra Nevada region, which defines the area of interest (AOI). Step7: Note Step8: STOP Step9: Import the asset after the export completes Step10: * Remove comments (#) to run the above cells. CONTINUE Step11: The result is a Python dictionary. Print a small part to see how it is formatted. Step12: Convert the Python dictionary to a pandas DataFrame. Step13: Preview the DataFrame and check the column data types. Step14: Add date columns Add date columns derived from the milliseconds from Unix epoch column. The pandas library provides functions and objects for timestamps and the DataFrame object allows for easy mutation. Define a function to add date variables to the DataFrame Step15: Note Step16: Rename and drop columns Often it is desirable to rename columns and/or remove unnecessary columns. Do both here and preview the DataFrame. Step17: Check the data type of each column. Step18: At this point the DataFrame is in good shape for charting with Altair. Calendar heatmap Chart PDSI data as a calendar heatmap. Set observation year as the x-axis variable, month as y-axis, and PDSI value as color. Note that Altair features a convenient method for aggregating values within groups while encoding the chart (i.e., no need to create a new DataFrame). The mean aggregate transform is applied here because each month has three PDSI observations (year and month are the grouping factors). Also note that a tooltip has been added to the chart; hovering over cells reveals the values of the selected variables. Step19: The calendar heat map is good for interpretation of relative intra- and inter-annual differences in PDSI. 
However, since the PDSI variable is represented by color, estimating absolute values and magnitude of difference is difficult. Bar chart Chart PDSI time series as a bar chart to more easily interpret absolute values and compare them over time. Here, the observation timestamp is represented on the x-axis and PDSI is represented by both the y-axis and color. Since each PDSI observation has a unique timestamp that can be plotted to the x-axis, there is no need to aggregate PDSI values as in the above chart. A tooltip is added to the chart; hover over the bars to reveal the values for each variable. Step20: This temporal bar chart makes it easier to interpret and compare absolute values of PDSI over time, but relative intra- and inter-annual variability are arguably harder to interpret because the division of year and month is not as distinct as in the calendar heatmap above. Take note of the extended and severe period of drought from 2012 through 2016. In the next section, we'll look for a vegetation response to this event. Vegetation productivity NDVI is a proxy measure of photosynthetic capacity and is used in this tutorial to investigate vegetation response to the 2012-2016 drought identified in the PDSI bar chart above. MODIS provides an analysis-ready 16-day NDVI composite that is well suited for regional investigation of temporal vegetation dynamics. The following steps reduce and prepare this data for charting in the same manner as the PDSI data above; please refer to previous sections to review details. Import and reduce Load the MODIS NDVI data as an ee.ImageCollection. Create a region reduction function. Apply the function to all images in the time series. Filter out features with null computed values. Step21: STOP Step22: Remove the NDVI scaling. Add date attribute columns. Preview the DataFrame. Step23: These NDVI time series data are now ready for plotting. DOY line chart Make a day of year (DOY) line chart where each line represents a year of observations. This chart makes it possible to compare the same observation date among years. Use it to compare NDVI values for years during the drought and not. Day of year is represented on the x-axis and NDVI on the y-axis. Each line represents a year and is distinguished by color. Note that this plot includes a tooltip and has been made interactive so that the axes can be zoomed and panned. Step24: The first thing to note is that winter dates (when there is snow in the Sierra Nevada ecoregion) exhibit highly variable inter-annual NDVI, but spring, summer, and fall dates are more consistent. With regard to drought effects on vegetation, summer and fall dates are the most sensitive time. Zooming into observations for the summer/fall days (224-272), you'll notice that many years have a u-shaped pattern where NDVI values decrease and then rise. Another way to view these data is to plot the distribution of NDVI by DOY represented as an interquartile range envelope and median line. Here, these two charts are defined and then combined in the following snippet. Define a base chart. Define a line chart for median NDVI (note the use of aggregate median transform grouping by DOY). Define a band chart using 'iqr' (interquartile range) to represent NDVI distribution grouping on DOY. Combine the line and band charts. 
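Before moving on, if it helps to see that drought window at a glance, the bar chart above can be layered with a shaded 2012-2016 band. The short sketch below is an optional aside rather than part of the original workflow; it reuses the pdsi_df built earlier, and the drought_window, band, and bars names are introduced here only for illustration.
# Optional illustration: shade the 2012-2016 drought period behind the PDSI bars.
# Assumes `pdsi_df` (with 'Timestamp' and 'PDSI' columns) exists as created above.
drought_window = pd.DataFrame({'start': [pd.to_datetime('2012-01-01')],
                               'end': [pd.to_datetime('2016-12-31')]})

band = alt.Chart(drought_window).mark_rect(opacity=0.15, color='red').encode(
    x='start:T',
    x2='end:T')

bars = alt.Chart(pdsi_df).mark_bar(size=1).encode(
    x='Timestamp:T',
    y='PDSI:Q',
    color=alt.Color('PDSI:Q', scale=alt.Scale(scheme='redblue', domain=(-5, 5))))

(band + bars).properties(width=600, height=300)
Layering the band behind the bars keeps the PDSI values readable while still making the 2012-2016 interval easy to spot.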
Step25: The summary statistics for the summer/fall days (224-272) certainly show an NDVI reduction, but there is also variability; some years exhibit greater NDVI reduction than others as suggested by the wide interquartile range during the middle of the summer. Assuming that NDVI reduction is due to water and heat limiting photosynthesis, we can hypothesize that during years of drought, photosynthesis (NDVI) will be lower than non-drought years. We can investigate the relationship between photosynthesis (NDVI) and drought (PDSI) using a scatter plot and linear regression. Dought and productivity relationship A scatterplot is a good way to visualize the relationship between two variables. Here, PDSI (drought indicator) will be plotted on the x-axis and NDVI (vegetation productivity) on the y-axis. To achieve this, both variables must exist in the same DataFrame. Each row will be an observation in time and columns will correspond to PDSI and NDVI values. Currently, PDSI and NDVI are in two different DataFrames and need to be merged. Prepare DataFrames Before they can be merged, each variable must be reduced to a common temporal observation unit to define correspondence. There are a number of ways to do this and each will define the relationship between PDSI and NDVI differently. Here, our temporal unit will be an annual observation set where NDVI is reduced to the intra-annual minimum from DOY 224 to 272 and PDSI will be the mean from DOY 1 to 272. We are proposing that average drought severity for the first three quarters of a year are related to minimum summer NDVI for a given year. Filter the NDVI DataFrame to observations that occur between DOY 224 and 272. Reduce the DOY-filtered subset to intra-annual minimum NDVI. Step26: Note Step27: Note Step28: NDVI and PDSI are now included in the same DataFrame linked by Year. This format is suitable for determining a linear relationship and drawing a line of best fit through the data. Including a line of best fit can be a helpful visual aid. Here, a 1D polynomial is fit through the xy point cloud defined by corresponding NDVI and PDSI observations. The resulting fit is added to the DataFrame as a new column 'Fit'. Add a line of best fit between PDSI and NDVI by determining the linear relationship and predicting NDVI based on PDSI for each year. Step29: Scatter plot The DataFrame is ready for plotting. Since this chart is to include points and a line of best fit, two charts need to be created, one for the points and one for the line. The results are combined into the final plot. Step30: As you can see, there seems to be some degree of positive correlation between PDSI and NDVI (i.e., as wetness increases, vegetation productivity increases; as wetness decreases, vegetation productivity decreases). Note that some of the greatest outliers are 2016, 2017, 2018 - the three years following recovery from the long drought. It is also important to note that there are many other factors that may influence the NDVI signal that are not being considered here. Patch-level vegetation mortality At a regional scale there appears to be a relationship between drought and vegetation productivity. This section will look more closely at effects of drought on vegetation at a patch level, with a specific focus on mortality. Here, a Landsat time series collection is created for the period 1984-present to provide greater temporal context for change at a relatively precise spatial resolution. 
Find a point of interest Use aerial imagery from the National Agriculture Imagery Program (NAIP) in an interactive Folium map to identify a location in the Sierra Nevada ecoregion that appears to have patches of dead trees. Run the following code block to render an interactive Folium map for a selected NAIP image. Zoom and pan around the image to identify a region of recently dead trees (standing silver snags with no fine branches or brown/grey snags with fine branches). Click the map to list the latitude and longitude for a patch of interest. Record these values for use in the following section (the example location used in the following section is presented as a yellow point). Step31: Prepare Landsat collection Landsat surface reflectance data need to be prepared before being reduced. The steps below will organize data from multiple sensors into congruent collections where band names are consistent, cloud and cloud shadows have been masked out, and the normalized burn ratio (NBR) transformation is calculated and returned as the image representative (NBR is a good indicator of forest disturbance). Finally, all sensor collections will be merged into a single collection and annual composites calculated based on mean annual NBR using a join. Define Landsat observation date window inputs based on NDVI curve plotted previously and set latitude and longitude variables from the map above. Step32: Note Step33: The result of the above code block is an image collection with as many images as there are years present in the merged Landsat collection. Each image represents the annual mean NBR constrained to observations within the given date window. Prepare DataFrame Create a region reduction function; use ee.Reducer.first() as the reducer since no spatial aggregation is needed (we are interested in the single pixel that intersects the point). Set the region as the geometry defined by the lat. and long. coordinates identified in the above map. Apply the function to all images in the time series. Filter out features with null computed values. Step34: Transfer data from the server to the client.<br> Note Step35: Add date attribute columns. Preview the DataFrame. Step36: Line chart Display the Landsat NBR time series for the point of interest as a line plot. Step37: As you can see from the above time series of NBR observations, a dramatic decrease in NBR began in 2015, shortly after the severe and extended drought began. The decline continued through 2017, when a minor recovery began. Within the context of the entire time series, it is apparent that the decline is outside of normal inter-annual variability and that the reduction in NBR for this site is quite severe. The lack of major recovery response in NBR in 2017-19 (time of writing) indicates that the event was not ephemeral; the loss of vegetation will have a lasting impact on this site. The corresponding onset of drought and reduction in NBR provides further evidence that there is a relationship between drought and vegetation response in the Sierra Nevada ecoregion. Past and future climate The previous data visualizations suggest there is a relationship between drought and vegetation stress and mortality in the Sierra Nevada ecoregion. This section will look at how climate is projected to change in the future, which can give us a sense for what to expect with regard to drought conditions and speculate about its impact on vegetation. We'll look at historical and projected temperature and precipitation. 
Projected data are represented by NEX-DCP30, and historical observations by PRISM. Future climate NEX-DCP30 data contain 33 climate models projected to the year 2100 using several scenarios of greenhouse gas concentration pathways (RCP). Here, we'll use the median of all models for RCP 8.5 (the worst case scenario) to look at potential future temperature and precipitation. Import and prepare collection Filter the collection by date and scenario. Calculate 'mean' temperature from median min and max among 33 models. Step38: Prepare DataFrame Create a region reduction function. Apply the function to all images in the time series. Filter out features with null computed values. Step39: Transfer data from the server to the client. Note Step40: Add date attribute columns. Preview the DataFrame. Step41: Convert precipitation rate to mm. Convert Kelvin to celsius. Add the model name as a column. Remove the 'Precip-rate' column. Step42: Past climate PRISM data are climate datasets for the conterminous United States. Grid cells are interpolated based on station data assimilated from many networks across the country. The datasets used here are monthly averages for precipitation and temperature. They provide a record of historical climate. Reduce collection and prepare DataFrame Import the collection and filter by date. Reduce the collection images by region and filter null computed values. Convert the feature collection to a dictionary and transfer it client-side.<br> Note Step43: Add date attribute columns. Add model name. Rename columns to be consistent with the NEX-DCP30 DataFrame. Preview the DataFrame. Step44: Combine DataFrames At this point the PRISM and NEX-DCP30 DataFrames have the same columns, the same units, and are distinguished by unique entries in the 'Model' column. Use the concat function to concatenate these DataFrames into a single DataFrame for plotting together in the same chart. Step45: Charts Chart the past and future precipitation and temperature together to get a sense for where climate has been and where it is projected to go under RCP 8.5. Precipitation Step46: Temperature
Python Code: import ee ee.Authenticate() ee.Initialize() Explanation: Time Series Visualization with Altair Author: jdbcode This tutorial provides methods for generating time series data in Earth Engine and visualizing it with the Altair library using drought and vegetation response as an example. Topics include: Time series region reduction in Earth Engine Formatting a table in Earth Engine Transferring an Earth Engine table to a Colab Python kernel Converting an Earth Engine table to a pandas DataFrame Data representation with various Altair chart types Note that this tutorial uses the Earth Engine Python API in a Colab notebook. Context At the heart of this tutorial is the notion of data reduction and the need to transform data into insights to help inform our understanding of Earth processes and human's role in them. It combines a series of technologies, each best suited to a particular task in the data reduction process. Earth Engine is used to access, clean, and reduce large amounts of spatiotemporal data, pandas is used to analyze and organize the results, and Altair is used to visualize the results. Note: This notebook demonstrates an analysis template and interactive workflow that is appropriate for a certain size of dataset, but there are limitations to interactive computation time and server-to-client data transfer size imposed by Colab and Earth Engine. To analyze even larger datasets, you may need to modify the workflow to export FeatureCollection results from Earth Engine as static assets and then use the static assets to perform the subsequent steps involving Earth Engine table formatting, conversion to pandas DataFrame, and charting with Altair. Materials Datasets Climate Drought severity (PDSI) Historical climate (PRISM) Projected climate (NEX-DCP30) Vegetation proxies NDVI (MODIS) NBR (Landsat) Region of interest The region of interest for these examples is the Sierra Nevada ecoregion of California. The vegetation grades from mostly ponderosa pine and Douglas-fir at low elevations on the western side, to pines and Sierra juniper on the eastern side, and to fir and other conifers at higher elevations. General workflow Preparation of every dataset for visualization follows the same basic steps: Filter the dataset (server-side Earth Engine) Reduce the data region by a statistic (server-side Earth Engine) Format the region reduction into a table (server-side Earth Engine) Convert the Earth Engine table to a DataFrame (server-side Earth Engine > client-side Python kernel) Alter the DataFrame (client-side pandas) Plot the DataFrame (client-side Altair) The first dataset will walk through each step in detail. Following examples will provide less description, unless there is variation that merits note. Python setup Earth Engine API Import the Earth Engine library. Authenticate access (registration verification and Google account access). Initialize the API. End of explanation import pandas as pd import altair as alt import numpy as np import folium Explanation: Other libraries Import other libraries used in this notebook. pandas: data analysis (including the DataFrame data structure) altair: declarative visualization library (used for charting) numpy: array-processing package (used for linear regression) folium: interactive web map End of explanation def create_reduce_region_function(geometry, reducer=ee.Reducer.mean(), scale=1000, crs='EPSG:4326', bestEffort=True, maxPixels=1e13, tileScale=4): Creates a region reduction function. 
Creates a region reduction function intended to be used as the input function to ee.ImageCollection.map() for reducing pixels intersecting a provided region to a statistic for each image in a collection. See ee.Image.reduceRegion() documentation for more details. Args: geometry: An ee.Geometry that defines the region over which to reduce data. reducer: Optional; An ee.Reducer that defines the reduction method. scale: Optional; A number that defines the nominal scale in meters of the projection to work in. crs: Optional; An ee.Projection or EPSG string ('EPSG:5070') that defines the projection to work in. bestEffort: Optional; A Boolean indicator for whether to use a larger scale if the geometry contains too many pixels at the given scale for the operation to succeed. maxPixels: Optional; A number specifying the maximum number of pixels to reduce. tileScale: Optional; A number representing the scaling factor used to reduce aggregation tile size; using a larger tileScale (e.g. 2 or 4) may enable computations that run out of memory with the default. Returns: A function that accepts an ee.Image and reduces it by region, according to the provided arguments. def reduce_region_function(img): Applies the ee.Image.reduceRegion() method. Args: img: An ee.Image to reduce to a statistic by region. Returns: An ee.Feature that contains properties representing the image region reduction results per band and the image timestamp formatted as milliseconds from Unix epoch (included to enable time series plotting). stat = img.reduceRegion( reducer=reducer, geometry=geometry, scale=scale, crs=crs, bestEffort=bestEffort, maxPixels=maxPixels, tileScale=tileScale) return ee.Feature(geometry, stat).set({'millis': img.date().millis()}) return reduce_region_function Explanation: Region reduction function Reduction of pixels intersecting the region of interest to a statistic will be performed multiple times. Define a reusable function that can perform the task for each dataset. The function accepts arguments such as scale and reduction method to parameterize the operation for each particular analysis. Note: most of the reduction operations in this tutorial use a large pixel scale so that operations complete quickly. In your own application, set the scale and other parameter arguments as you wish. End of explanation # Define a function to transfer feature properties to a dictionary. def fc_to_dict(fc): prop_names = fc.first().propertyNames() prop_lists = fc.reduceColumns( reducer=ee.Reducer.toList().repeat(prop_names.size()), selectors=prop_names).get('list') return ee.Dictionary.fromLists(prop_names, prop_lists) Explanation: Formatting The result of the region reduction function above applied to an ee.ImageCollection produces an ee.FeatureCollection. This data needs to be transferred to the Python kernel, but serialized feature collections are large and awkward to deal with. This step defines a function to convert the feature collection to an ee.Dictionary where the keys are feature property names and values are corresponding lists of property values, which pandas can deal with handily. Extract the property values from the ee.FeatureCollection as a list of lists stored in an ee.Dictionary using reduceColumns(). Extract the list of lists from the dictionary. Add names to each list by converting to an ee.Dictionary where keys are property names and values are the corresponding value lists. The returned ee.Dictionary is essentially a table, where keys define columns and list elements define rows. 
End of explanation today = ee.Date(pd.to_datetime('today')) date_range = ee.DateRange(today.advance(-20, 'years'), today) pdsi = ee.ImageCollection('GRIDMET/DROUGHT').filterDate(date_range).select('pdsi') aoi = ee.FeatureCollection('EPA/Ecoregions/2013/L3').filter( ee.Filter.eq('na_l3name', 'Sierra Nevada')).geometry() Explanation: Drought severity In this section we'll look at a time series of drought severity as a calendar heat map and a bar chart. Import data Load the gridded Palmer Drought Severity Index (PDSI) data as an ee.ImageCollection. Load the EPA Level-3 ecoregion boundaries as an ee.FeatureCollection and filter it to include only the Sierra Nevada region, which defines the area of interest (AOI). End of explanation reduce_pdsi = create_reduce_region_function( geometry=aoi, reducer=ee.Reducer.mean(), scale=5000, crs='EPSG:3310') pdsi_stat_fc = ee.FeatureCollection(pdsi.map(reduce_pdsi)).filter( ee.Filter.notNull(pdsi.first().bandNames())) Explanation: Note: the aoi defined above will be used throughout this tutorial. In your own application, redefine it for your own area of interest. Reduce data Create a region reduction function. Map the function over the pdsi image collection to reduce each image. Filter out any resulting features that have null computed values (occurs when all pixels in an AOI are masked). End of explanation task = ee.batch.Export.table.toAsset( collection=pdsi_stat_fc, description='pdsi_stat_fc export', assetId='users/YOUR_USER_NAME/pdsi_stat_fc_ts_vis_with_altair') # task.start() Explanation: STOP: Optional export If your process is long-running, you'll want to export the pdsi_stat_fc variable as an asset using a batch task. Wait until the task finishes, import the asset, and continue on. Please see the Developer Guide section on exporting with the Python API. Export to asset: End of explanation # pdsi_stat_fc = ee.FeatureCollection('users/YOUR_USER_NAME/pdsi_stat_fc_ts_vis_with_altair') Explanation: Import the asset after the export completes: End of explanation pdsi_dict = fc_to_dict(pdsi_stat_fc).getInfo() Explanation: * Remove comments (#) to run the above cells. CONTINUE: Server to client transfer The ee.FeatureCollection needs to be converted to a dictionary and transferred to the Python kernel. Apply the fc_to_dict function to convert from ee.FeatureCollection to ee.Dictionary. Call getInfo() on the ee.Dictionary to transfer the data client-side. End of explanation print(type(pdsi_dict), '\n') for prop in pdsi_dict.keys(): print(prop + ':', pdsi_dict[prop][0:3] + ['...']) Explanation: The result is a Python dictionary. Print a small part to see how it is formatted. End of explanation pdsi_df = pd.DataFrame(pdsi_dict) Explanation: Convert the Python dictionary to a pandas DataFrame. End of explanation display(pdsi_df) print(pdsi_df.dtypes) Explanation: Preview the DataFrame and check the column data types. End of explanation # Function to add date variables to DataFrame. def add_date_info(df): df['Timestamp'] = pd.to_datetime(df['millis'], unit='ms') df['Year'] = pd.DatetimeIndex(df['Timestamp']).year df['Month'] = pd.DatetimeIndex(df['Timestamp']).month df['Day'] = pd.DatetimeIndex(df['Timestamp']).day df['DOY'] = pd.DatetimeIndex(df['Timestamp']).dayofyear return df Explanation: Add date columns Add date columns derived from the milliseconds from Unix epoch column. The pandas library provides functions and objects for timestamps and the DataFrame object allows for easy mutation. 
Define a function to add date variables to the DataFrame: year, month, day, and day of year (DOY). End of explanation pdsi_df = add_date_info(pdsi_df) pdsi_df.head(5) Explanation: Note: the above function for adding date information to a DataFrame will be used throughout this tutorial. Apply the add_date_info function to the PDSI DataFrame to add date attribute columns, preview the results. End of explanation pdsi_df = pdsi_df.rename(columns={ 'pdsi': 'PDSI' }).drop(columns=['millis', 'system:index']) pdsi_df.head(5) Explanation: Rename and drop columns Often it is desirable to rename columns and/or remove unnecessary columns. Do both here and preview the DataFrame. End of explanation pdsi_df.dtypes Explanation: Check the data type of each column. End of explanation alt.Chart(pdsi_df).mark_rect().encode( x='Year:O', y='Month:O', color=alt.Color( 'mean(PDSI):Q', scale=alt.Scale(scheme='redblue', domain=(-5, 5))), tooltip=[ alt.Tooltip('Year:O', title='Year'), alt.Tooltip('Month:O', title='Month'), alt.Tooltip('mean(PDSI):Q', title='PDSI') ]).properties(width=600, height=300) Explanation: At this point the DataFrame is in good shape for charting with Altair. Calendar heatmap Chart PDSI data as a calendar heatmap. Set observation year as the x-axis variable, month as y-axis, and PDSI value as color. Note that Altair features a convenient method for aggregating values within groups while encoding the chart (i.e., no need to create a new DataFrame). The mean aggregate transform is applied here because each month has three PDSI observations (year and month are the grouping factors). Also note that a tooltip has been added to the chart; hovering over cells reveals the values of the selected variables. End of explanation alt.Chart(pdsi_df).mark_bar(size=1).encode( x='Timestamp:T', y='PDSI:Q', color=alt.Color( 'PDSI:Q', scale=alt.Scale(scheme='redblue', domain=(-5, 5))), tooltip=[ alt.Tooltip('Timestamp:T', title='Date'), alt.Tooltip('PDSI:Q', title='PDSI') ]).properties(width=600, height=300) Explanation: The calendar heat map is good for interpretation of relative intra- and inter-annual differences in PDSI. However, since the PDSI variable is represented by color, estimating absolute values and magnitude of difference is difficult. Bar chart Chart PDSI time series as a bar chart to more easily interpret absolute values and compare them over time. Here, the observation timestamp is represented on the x-axis and PDSI is represented by both the y-axis and color. Since each PDSI observation has a unique timestamp that can be plotted to the x-axis, there is no need to aggregate PDSI values as in the above chart. A tooltip is added to the chart; hover over the bars to reveal the values for each variable. End of explanation ndvi = ee.ImageCollection('MODIS/006/MOD13A2').filterDate(date_range).select('NDVI') reduce_ndvi = create_reduce_region_function( geometry=aoi, reducer=ee.Reducer.mean(), scale=1000, crs='EPSG:3310') ndvi_stat_fc = ee.FeatureCollection(ndvi.map(reduce_ndvi)).filter( ee.Filter.notNull(ndvi.first().bandNames())) Explanation: This temporal bar chart makes it easier to interpret and compare absolute values of PDSI over time, but relative intra- and inter-annual variability are arguably harder to interpret because the division of year and month is not as distinct as in the calendar heatmap above. Take note of the extended and severe period of drought from 2012 through 2016. In the next section, we'll look for a vegetation response to this event. 
Vegetation productivity NDVI is a proxy measure of photosynthetic capacity and is used in this tutorial to investigate vegetation response to the 2012-2016 drought identified in the PDSI bar chart above. MODIS provides an analysis-ready 16-day NDVI composite that is well suited for regional investigation of temporal vegetation dynamics. The following steps reduce and prepare this data for charting in the same manner as the PDSI data above; please refer to previous sections to review details. Import and reduce Load the MODIS NDVI data as an ee.ImageCollection. Create a region reduction function. Apply the function to all images in the time series. Filter out features with null computed values. End of explanation ndvi_dict = fc_to_dict(ndvi_stat_fc).getInfo() ndvi_df = pd.DataFrame(ndvi_dict) display(ndvi_df) print(ndvi_df.dtypes) Explanation: STOP: If your process is long-running, you'll want to export the ndvi_stat_fc variable as an asset using a batch task. Wait until the task finishes, import the asset, and continue on. Please see the above Optional export section for more details. CONTINUE: Prepare DataFrame Transfer data from the server to the client. Convert the Python dictionary to a pandas DataFrame. Preview the DataFrame and check data types. End of explanation ndvi_df['NDVI'] = ndvi_df['NDVI'] / 10000 ndvi_df = add_date_info(ndvi_df) ndvi_df.head(5) Explanation: Remove the NDVI scaling. Add date attribute columns. Preview the DataFrame. End of explanation highlight = alt.selection( type='single', on='mouseover', fields=['Year'], nearest=True) base = alt.Chart(ndvi_df).encode( x=alt.X('DOY:Q', scale=alt.Scale(domain=[0, 353], clamp=True)), y=alt.Y('NDVI:Q', scale=alt.Scale(domain=[0.1, 0.6])), color=alt.Color('Year:O', scale=alt.Scale(scheme='magma'))) points = base.mark_circle().encode( opacity=alt.value(0), tooltip=[ alt.Tooltip('Year:O', title='Year'), alt.Tooltip('DOY:Q', title='DOY'), alt.Tooltip('NDVI:Q', title='NDVI') ]).add_selection(highlight) lines = base.mark_line().encode( size=alt.condition(~highlight, alt.value(1), alt.value(3))) (points + lines).properties(width=600, height=350).interactive() Explanation: These NDVI time series data are now ready for plotting. DOY line chart Make a day of year (DOY) line chart where each line represents a year of observations. This chart makes it possible to compare the same observation date among years. Use it to compare NDVI values for years during the drought and not. Day of year is represented on the x-axis and NDVI on the y-axis. Each line represents a year and is distinguished by color. Note that this plot includes a tooltip and has been made interactive so that the axes can be zoomed and panned. End of explanation base = alt.Chart(ndvi_df).encode( x=alt.X('DOY:Q', scale=alt.Scale(domain=(150, 340)))) line = base.mark_line().encode( y=alt.Y('median(NDVI):Q', scale=alt.Scale(domain=(0.47, 0.53)))) band = base.mark_errorband(extent='iqr').encode( y='NDVI:Q') (line + band).properties(width=600, height=300).interactive() Explanation: The first thing to note is that winter dates (when there is snow in the Sierra Nevada ecoregion) exhibit highly variable inter-annual NDVI, but spring, summer, and fall dates are more consistent. With regard to drought effects on vegetation, summer and fall dates are the most sensitive time. Zooming into observations for the summer/fall days (224-272), you'll notice that many years have a u-shaped pattern where NDVI values decrease and then rise. 
Another way to view these data is to plot the distribution of NDVI by DOY represented as an interquartile range envelope and median line. Here, these two charts are defined and then combined in the following snippet. Define a base chart. Define a line chart for median NDVI (note the use of aggregate median transform grouping by DOY). Define a band chart using 'iqr' (interquartile range) to represent NDVI distribution grouping on DOY. Combine the line and band charts. End of explanation ndvi_doy_range = [224, 272] ndvi_df_sub = ndvi_df[(ndvi_df['DOY'] >= ndvi_doy_range[0]) & (ndvi_df['DOY'] <= ndvi_doy_range[1])] ndvi_df_sub = ndvi_df_sub.groupby('Year').agg('min') Explanation: The summary statistics for the summer/fall days (224-272) certainly show an NDVI reduction, but there is also variability; some years exhibit greater NDVI reduction than others as suggested by the wide interquartile range during the middle of the summer. Assuming that NDVI reduction is due to water and heat limiting photosynthesis, we can hypothesize that during years of drought, photosynthesis (NDVI) will be lower than non-drought years. We can investigate the relationship between photosynthesis (NDVI) and drought (PDSI) using a scatter plot and linear regression. Dought and productivity relationship A scatterplot is a good way to visualize the relationship between two variables. Here, PDSI (drought indicator) will be plotted on the x-axis and NDVI (vegetation productivity) on the y-axis. To achieve this, both variables must exist in the same DataFrame. Each row will be an observation in time and columns will correspond to PDSI and NDVI values. Currently, PDSI and NDVI are in two different DataFrames and need to be merged. Prepare DataFrames Before they can be merged, each variable must be reduced to a common temporal observation unit to define correspondence. There are a number of ways to do this and each will define the relationship between PDSI and NDVI differently. Here, our temporal unit will be an annual observation set where NDVI is reduced to the intra-annual minimum from DOY 224 to 272 and PDSI will be the mean from DOY 1 to 272. We are proposing that average drought severity for the first three quarters of a year are related to minimum summer NDVI for a given year. Filter the NDVI DataFrame to observations that occur between DOY 224 and 272. Reduce the DOY-filtered subset to intra-annual minimum NDVI. End of explanation pdsi_doy_range = [1, 272] pdsi_df_sub = pdsi_df[(pdsi_df['DOY'] >= pdsi_doy_range[0]) & (pdsi_df['DOY'] <= pdsi_doy_range[1])] pdsi_df_sub = pdsi_df_sub.groupby('Year').agg('mean') Explanation: Note: in your own application you may find that a different DOY range is more suitable, change the ndvi_doy_range as needed. Filter the PDSI DataFrame to observations that occur between DOY 1 and 272. Reduce the values within a given year to the mean of the observations. End of explanation ndvi_pdsi_df = pd.merge( ndvi_df_sub, pdsi_df_sub, how='left', on='Year').reset_index() ndvi_pdsi_df = ndvi_pdsi_df[['Year', 'NDVI', 'PDSI']] ndvi_pdsi_df.head(5) Explanation: Note: in your own application you may find that a different DOY range is more suitable, change the pdsi_doy_range as needed. Perform a join on 'Year' to combine the two reduced DataFrames. Select only the columns of interest: 'Year', 'NDVI', 'PDSI'. Preview the DataFrame. 
End of explanation ndvi_pdsi_df['Fit'] = np.poly1d( np.polyfit(ndvi_pdsi_df['PDSI'], ndvi_pdsi_df['NDVI'], 1))( ndvi_pdsi_df['PDSI']) ndvi_pdsi_df.head(5) Explanation: NDVI and PDSI are now included in the same DataFrame linked by Year. This format is suitable for determining a linear relationship and drawing a line of best fit through the data. Including a line of best fit can be a helpful visual aid. Here, a 1D polynomial is fit through the xy point cloud defined by corresponding NDVI and PDSI observations. The resulting fit is added to the DataFrame as a new column 'Fit'. Add a line of best fit between PDSI and NDVI by determining the linear relationship and predicting NDVI based on PDSI for each year. End of explanation base = alt.Chart(ndvi_pdsi_df).encode( x=alt.X('PDSI:Q', scale=alt.Scale(domain=(-5, 5)))) points = base.mark_circle(size=60).encode( y=alt.Y('NDVI:Q', scale=alt.Scale(domain=(0.4, 0.6))), color=alt.Color('Year:O', scale=alt.Scale(scheme='magma')), tooltip=[ alt.Tooltip('Year:O', title='Year'), alt.Tooltip('PDSI:Q', title='PDSI'), alt.Tooltip('NDVI:Q', title='NDVI') ]) fit = base.mark_line().encode( y=alt.Y('Fit:Q'), color=alt.value('#808080')) (points + fit).properties(width=600, height=300).interactive() Explanation: Scatter plot The DataFrame is ready for plotting. Since this chart is to include points and a line of best fit, two charts need to be created, one for the points and one for the line. The results are combined into the final plot. End of explanation # Define a method for displaying Earth Engine image tiles to folium map. def add_ee_layer(self, ee_image_object, vis_params, name): map_id_dict = ee.Image(ee_image_object).getMapId(vis_params) folium.raster_layers.TileLayer( tiles=map_id_dict['tile_fetcher'].url_format, attr='Map Data &copy; <a href="https://earthengine.google.com/">Google Earth Engine, USDA National Agriculture Imagery Program</a>', name=name, overlay=True, control=True).add_to(self) # Add an Earth Engine layer drawing method to folium. folium.Map.add_ee_layer = add_ee_layer # Import a NAIP image for the area and date of interest. naip_img = ee.ImageCollection('USDA/NAIP/DOQQ').filterDate( '2016-01-01', '2017-01-01').filterBounds(ee.Geometry.Point([-118.6407, 35.9665])).first() # Display the NAIP image to the folium map. m = folium.Map(location=[35.9665, -118.6407], tiles='Stamen Terrain', zoom_start=16) m.add_ee_layer(naip_img, None, 'NAIP image, 2016') # Add the point of interest to the map. folium.Circle( radius=15, location=[35.9665, -118.6407], color='yellow', fill=False, ).add_to(m) # Add the AOI to the map. folium.GeoJson( aoi.getInfo(), name='geojson', style_function=lambda x: {'fillColor': '#00000000', 'color': '#000000'}, ).add_to(m) # Add a lat lon popup. folium.LatLngPopup().add_to(m) # Display the map. display(m) Explanation: As you can see, there seems to be some degree of positive correlation between PDSI and NDVI (i.e., as wetness increases, vegetation productivity increases; as wetness decreases, vegetation productivity decreases). Note that some of the greatest outliers are 2016, 2017, 2018 - the three years following recovery from the long drought. It is also important to note that there are many other factors that may influence the NDVI signal that are not being considered here. Patch-level vegetation mortality At a regional scale there appears to be a relationship between drought and vegetation productivity. 
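Before zooming in, the strength of that regional relationship can be quantified rather than judged from the scatter plot alone; a minimal sketch using the ndvi_pdsi_df DataFrame built above (scipy is assumed to be available in this environment):
from scipy import stats
# Pearson correlation and linear fit between annual PDSI and minimum summer NDVI
slope, intercept, r_value, p_value, std_err = stats.linregress(
    ndvi_pdsi_df['PDSI'], ndvi_pdsi_df['NDVI'])
print('r = {:.2f}, r^2 = {:.2f}, p = {:.3f}'.format(r_value, r_value ** 2, p_value))
A modest positive r with a small p-value would support the visual interpretation; the exact values depend on the AOI and the DOY ranges chosen earlier.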
This section will look more closely at effects of drought on vegetation at a patch level, with a specific focus on mortality. Here, a Landsat time series collection is created for the period 1984-present to provide greater temporal context for change at a relatively precise spatial resolution. Find a point of interest Use aerial imagery from the National Agriculture Imagery Program (NAIP) in an interactive Folium map to identify a location in the Sierra Nevada ecoregion that appears to have patches of dead trees. Run the following code block to render an interactive Folium map for a selected NAIP image. Zoom and pan around the image to identify a region of recently dead trees (standing silver snags with no fine branches or brown/grey snags with fine branches). Click the map to list the latitude and longitude for a patch of interest. Record these values for use in the following section (the example location used in the following section is presented as a yellow point). End of explanation start_day = 224 end_day = 272 latitude = 35.9665 longitude = -118.6407 Explanation: Prepare Landsat collection Landsat surface reflectance data need to be prepared before being reduced. The steps below will organize data from multiple sensors into congruent collections where band names are consistent, cloud and cloud shadows have been masked out, and the normalized burn ratio (NBR) transformation is calculated and returned as the image representative (NBR is a good indicator of forest disturbance). Finally, all sensor collections will be merged into a single collection and annual composites calculated based on mean annual NBR using a join. Define Landsat observation date window inputs based on NDVI curve plotted previously and set latitude and longitude variables from the map above. End of explanation # Make lat. and long. vars an `ee.Geometry.Point`. point = ee.Geometry.Point([longitude, latitude]) # Define a function to get and rename bands of interest from OLI. def rename_oli(img): return (img.select( ee.List(['B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'pixel_qa']), ee.List(['Blue', 'Green', 'Red', 'NIR', 'SWIR1', 'SWIR2', 'pixel_qa']))) # Define a function to get and rename bands of interest from ETM+. def rename_etm(img): return (img.select( ee.List(['B1', 'B2', 'B3', 'B4', 'B5', 'B7', 'pixel_qa']), ee.List(['Blue', 'Green', 'Red', 'NIR', 'SWIR1', 'SWIR2', 'pixel_qa']))) # Define a function to mask out clouds and cloud shadows. def cfmask(img): cloud_shadow_bi_mask = 1 << 3 cloud_bit_mask = 1 << 5 qa = img.select('pixel_qa') mask = qa.bitwiseAnd(cloud_shadow_bi_mask).eq(0).And( qa.bitwiseAnd(cloud_bit_mask).eq(0)) return img.updateMask(mask) # Define a function to add year as an image property. def set_year(img): year = ee.Image(img).date().get('year') return img.set('Year', year) # Define a function to calculate NBR. def calc_nbr(img): return img.normalizedDifference(ee.List(['NIR', 'SWIR2'])).rename('NBR') # Define a function to prepare OLI images. def prep_oli(img): orig = img img = rename_oli(img) img = cfmask(img) img = calc_nbr(img) img = img.copyProperties(orig, orig.propertyNames()) return set_year(img) # Define a function to prepare TM/ETM+ images. def prep_etm(img): orig = img img = rename_etm(img) img = cfmask(img) img = calc_nbr(img) img = img.copyProperties(orig, orig.propertyNames()) return set_year(img) # Import image collections for each Landsat sensor (surface reflectance). 
tm_col = ee.ImageCollection('LANDSAT/LT05/C01/T1_SR') etm_col = ee.ImageCollection('LANDSAT/LE07/C01/T1_SR') oli_col = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR') # Filter collections and prepare them for merging. oli_col = oli_col.filterBounds(point).filter( ee.Filter.calendarRange(start_day, end_day, 'day_of_year')).map(prep_oli) etm_col = etm_col.filterBounds(point).filter( ee.Filter.calendarRange(start_day, end_day, 'day_of_year')).map(prep_etm) tm_col = tm_col.filterBounds(point).filter( ee.Filter.calendarRange(start_day, end_day, 'day_of_year')).map(prep_etm) # Merge the collections. landsat_col = oli_col.merge(etm_col).merge(tm_col) # Get a distinct year collection. distinct_year_col = landsat_col.distinct('Year') # Define a filter that identifies which images from the complete collection # match the year from the distinct year collection. join_filter = ee.Filter.equals(leftField='Year', rightField='Year') # Define a join. join = ee.Join.saveAll('year_matches') # Apply the join and convert the resulting FeatureCollection to an # ImageCollection. join_col = ee.ImageCollection( join.apply(distinct_year_col, landsat_col, join_filter)) # Define a function to apply mean reduction among matching year collections. def reduce_by_join(img): year_col = ee.ImageCollection.fromImages(ee.Image(img).get('year_matches')) return year_col.reduce(ee.Reducer.mean()).rename('NBR').set( 'system:time_start', ee.Image(img).date().update(month=8, day=1).millis()) # Apply the `reduce_by_join` function to the list of annual images in the # properties of the join collection. landsat_col = join_col.map(reduce_by_join) Explanation: Note: in your own application it may be necessary to change these values. Prepare a Landsat surface reflectance collection 1984-present. Those unfamiliar with Landsat might find the following acronym definitions and links helpful. OLI (Landsat's Operational Land Imager sensor) ETM+ (Landsat's Enhanced Thematic Mapper Plus sensor) TM (Landsat's Thematic Mapper sensor) CFMask (Landsat USGS surface reflectance mask based on the CFMask algorithm) NBR. (Normalized Burn Ratio: a spectral vegetation index) Understanding Earth Engine joins End of explanation reduce_landsat = create_reduce_region_function( geometry=point, reducer=ee.Reducer.first(), scale=30, crs='EPSG:3310') nbr_stat_fc = ee.FeatureCollection(landsat_col.map(reduce_landsat)).filter( ee.Filter.notNull(landsat_col.first().bandNames())) Explanation: The result of the above code block is an image collection with as many images as there are years present in the merged Landsat collection. Each image represents the annual mean NBR constrained to observations within the given date window. Prepare DataFrame Create a region reduction function; use ee.Reducer.first() as the reducer since no spatial aggregation is needed (we are interested in the single pixel that intersects the point). Set the region as the geometry defined by the lat. and long. coordinates identified in the above map. Apply the function to all images in the time series. Filter out features with null computed values. End of explanation nbr_dict = fc_to_dict(nbr_stat_fc).getInfo() nbr_df = pd.DataFrame(nbr_dict) display(nbr_df.head()) print(nbr_df.dtypes) Explanation: Transfer data from the server to the client.<br> Note: if the process times out, you'll need to export/import the nbr_stat_fc feature collection as described in the Optional export section. Convert the Python dictionary to a pandas DataFrame. Preview the DataFrame and check data types. 
End of explanation nbr_df = add_date_info(nbr_df) nbr_df.head(5) Explanation: Add date attribute columns. Preview the DataFrame. End of explanation alt.Chart(nbr_df).mark_line().encode( x=alt.X('Timestamp:T', title='Date'), y='NBR:Q', tooltip=[ alt.Tooltip('Timestamp:T', title='Date'), alt.Tooltip('NBR:Q') ]).properties(width=600, height=300).interactive() Explanation: Line chart Display the Landsat NBR time series for the point of interest as a line plot. End of explanation dcp_col = (ee.ImageCollection('NASA/NEX-DCP30_ENSEMBLE_STATS') .select(['tasmax_median', 'tasmin_median', 'pr_median']) .filter( ee.Filter.And(ee.Filter.eq('scenario', 'rcp85'), ee.Filter.date('2019-01-01', '2070-01-01')))) def calc_mean_temp(img): return (img.select('tasmax_median') .add(img.select('tasmin_median')) .divide(ee.Image.constant(2.0)) .addBands(img.select('pr_median')) .rename(['Temp-mean', 'Precip-rate']) .copyProperties(img, img.propertyNames())) dcp_col = dcp_col.map(calc_mean_temp) Explanation: As you can see from the above time series of NBR observations, a dramatic decrease in NBR began in 2015, shortly after the severe and extended drought began. The decline continued through 2017, when a minor recovery began. Within the context of the entire time series, it is apparent that the decline is outside of normal inter-annual variability and that the reduction in NBR for this site is quite severe. The lack of major recovery response in NBR in 2017-19 (time of writing) indicates that the event was not ephemeral; the loss of vegetation will have a lasting impact on this site. The corresponding onset of drought and reduction in NBR provides further evidence that there is a relationship between drought and vegetation response in the Sierra Nevada ecoregion. Past and future climate The previous data visualizations suggest there is a relationship between drought and vegetation stress and mortality in the Sierra Nevada ecoregion. This section will look at how climate is projected to change in the future, which can give us a sense for what to expect with regard to drought conditions and speculate about its impact on vegetation. We'll look at historical and projected temperature and precipitation. Projected data are represented by NEX-DCP30, and historical observations by PRISM. Future climate NEX-DCP30 data contain 33 climate models projected to the year 2100 using several scenarios of greenhouse gas concentration pathways (RCP). Here, we'll use the median of all models for RCP 8.5 (the worst case scenario) to look at potential future temperature and precipitation. Import and prepare collection Filter the collection by date and scenario. Calculate 'mean' temperature from median min and max among 33 models. End of explanation reduce_dcp30 = create_reduce_region_function( geometry=point, reducer=ee.Reducer.first(), scale=5000, crs='EPSG:3310') dcp_stat_fc = ee.FeatureCollection(dcp_col.map(reduce_dcp30)).filter( ee.Filter.notNull(dcp_col.first().bandNames())) Explanation: Prepare DataFrame Create a region reduction function. Apply the function to all images in the time series. Filter out features with null computed values. End of explanation dcp_dict = fc_to_dict(dcp_stat_fc).getInfo() dcp_df = pd.DataFrame(dcp_dict) display(dcp_df) print(dcp_df.dtypes) Explanation: Transfer data from the server to the client. Note: if the process times out, you'll need to export/import the dcp_stat_fc feature collection as described in the Optional export section. Convert the Python dictionary to a pandas DataFrame. 
Preview the DataFrame and check the data types. End of explanation dcp_df = add_date_info(dcp_df) dcp_df.head(5) Explanation: Add date attribute columns. Preview the DataFrame. End of explanation dcp_df['Precip-mm'] = dcp_df['Precip-rate'] * 86400 * 30 dcp_df['Temp-mean'] = dcp_df['Temp-mean'] - 273.15 dcp_df['Model'] = 'NEX-DCP30' dcp_df = dcp_df.drop('Precip-rate', 1) dcp_df.head(5) Explanation: Convert precipitation rate to mm. Convert Kelvin to celsius. Add the model name as a column. Remove the 'Precip-rate' column. End of explanation prism_col = (ee.ImageCollection('OREGONSTATE/PRISM/AN81m') .select(['ppt', 'tmean']) .filter(ee.Filter.date('1979-01-01', '2019-12-31'))) reduce_prism = create_reduce_region_function( geometry=point, reducer=ee.Reducer.first(), scale=5000, crs='EPSG:3310') prism_stat_fc = (ee.FeatureCollection(prism_col.map(reduce_prism)) .filter(ee.Filter.notNull(prism_col.first().bandNames()))) prism_dict = fc_to_dict(prism_stat_fc).getInfo() prism_df = pd.DataFrame(prism_dict) display(prism_df) print(prism_df.dtypes) Explanation: Past climate PRISM data are climate datasets for the conterminous United States. Grid cells are interpolated based on station data assimilated from many networks across the country. The datasets used here are monthly averages for precipitation and temperature. They provide a record of historical climate. Reduce collection and prepare DataFrame Import the collection and filter by date. Reduce the collection images by region and filter null computed values. Convert the feature collection to a dictionary and transfer it client-side.<br> Note: if the process times out, you'll need to export/import the prism_stat_fc feature collection as described in the Optional export section. Convert the dictionary to a DataFrame. Preview the DataFrame. End of explanation prism_df = add_date_info(prism_df) prism_df['Model'] = 'PRISM' prism_df = prism_df.rename(columns={'ppt': 'Precip-mm', 'tmean': 'Temp-mean'}) prism_df.head(5) Explanation: Add date attribute columns. Add model name. Rename columns to be consistent with the NEX-DCP30 DataFrame. Preview the DataFrame. End of explanation climate_df = pd.concat([prism_df, dcp_df], sort=True) climate_df Explanation: Combine DataFrames At this point the PRISM and NEX-DCP30 DataFrames have the same columns, the same units, and are distinguished by unique entries in the 'Model' column. Use the concat function to concatenate these DataFrames into a single DataFrame for plotting together in the same chart. End of explanation base = alt.Chart(climate_df).encode( x='Year:O', color='Model') line = base.mark_line().encode( y=alt.Y('median(Precip-mm):Q', title='Precipitation (mm/month)')) band = base.mark_errorband(extent='iqr').encode( y=alt.Y('Precip-mm:Q', title='Precipitation (mm/month)')) (band + line).properties(width=600, height=300) Explanation: Charts Chart the past and future precipitation and temperature together to get a sense for where climate has been and where it is projected to go under RCP 8.5. Precipitation End of explanation line = alt.Chart(climate_df).mark_line().encode( x='Year:O', y='median(Temp-mean):Q', color='Model') band = alt.Chart(climate_df).mark_errorband(extent='iqr').encode( x='Year:O', y=alt.Y('Temp-mean:Q', title='Temperature (°C)'), color='Model') (band + line).properties(width=600, height=300) Explanation: Temperature End of explanation
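The same combined DataFrame can also be reduced to a single headline number, for example the projected change in mean temperature between the recent past and mid-century; a minimal sketch (the period bounds are arbitrary choices, adjust as needed):
# mean observed temperature over a recent PRISM baseline period
recent = climate_df[(climate_df['Model'] == 'PRISM')
                    & (climate_df['Year'].between(1990, 2019))]['Temp-mean'].mean()
# mean projected temperature over a mid-century NEX-DCP30 window (RCP 8.5)
future = climate_df[(climate_df['Model'] == 'NEX-DCP30')
                    & (climate_df['Year'].between(2040, 2069))]['Temp-mean'].mean()
print('Projected change in mean temperature: {:.1f} °C'.format(future - recent))
This is a coarse comparison (monthly means pooled across seasons), but it summarizes the warming trend that the charts above display.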
14,826
Given the following text description, write Python code to implement the functionality described below step by step Description: pyphysio tutorial 2. Algorithms In this second tutorial we will see how to use the class Algorithm to create signal processing pipelines. A signal processing step is a computational function $F$ that operates on input data (a signal) to produce a result. It is characterized by a set of parameters p which regulate its behavior. Figure 1 Step1: 2.1 Filters Filters return a signal which has the same signal_nature as the input signal. The name Filters recalls the aim of these algorithms, which is in general to increase the Signal/Noise ratio by filtering out the unwanted components in a signal (e.g. high frequency noise). Step2: 2.2 Estimators Estimators are algorithms which aim at extracting the information of interest from the input signal, thus returning a new signal which has a different signal_nature. The name Estimators recalls the fact that the information extraction depends on the value of the algorithm parameters, which might not be known a priori. Thus the result should be considered as an estimate of the real content of information of the input signal. Step3: 2.3 Indicators Indicators are algorithms which extract a metric (scalar value) from the input signal, for instance a statistic (average). Three types of indicators are provided in pyphysio Step4: 2.4 Tools This is a collection of useful algorithms that can be used for signal processing. These algorithms might return scalar values or numpy arrays.
Python Code: # import packages from __future__ import division import numpy as np import matplotlib.pyplot as plt %matplotlib inline # import data from included examples from pyphysio.tests import TestData from pyphysio import EvenlySignal ecg_data = TestData.ecg() eda_data = TestData.eda() # create two signals fsamp = 2048 tstart_ecg = 15 tstart_eda = 5 ecg = EvenlySignal(values = ecg_data, sampling_freq = fsamp, signal_type = 'ecg', start_time = tstart_ecg) eda = EvenlySignal(values = eda_data, sampling_freq = fsamp, signal_type = 'eda', start_time = tstart_eda) Explanation: pyphysio tutorial 2. Algorithms In this second tutorial we will see how to use the class Algorithm to create signal processing pipelines. A signal processing step is a computational function $F$ that operates on input data (a signal) to produce a result. It is characterized by a set of parameters p which regulate its behavior. Figure 1: Abstract representation of a processing step. In pyphysio each processing step is represented by an instance of a class derived from the generic class Algorithm. The type of function or algorithm is given by the class name (e.g. BeatFromECG extracts the heartbeats from an ECG signal, PeakDetection detects the peaks in the input signal). The parameters of the function/algorithm are the attributes of the created instance. Therefore, a processing step is defined by creating a new instance of the Class, which is initialized with the given parameters: processing_step = ph.BeatFromECG(parameters) To execute the processing step we need to give as input an instance of the class Signal: output = processing_step(input) Algorithms in pyphysio are grouped in four categories (see also the tutorial '3-pipelines'): Filters : deterministic algorithms that modify the values of the input signal without changing its nature; Estimators : algorithms that aim at extracting information from the input signal which is given in output as a signal with a different nature; Indicators : algorithms that operate on the signal to provide a scalar value (or metrics) Tools : algorithms that can be useful for the signal processing and return as output one or more numpy arrays or scalars. End of explanation # create a Filter import pyphysio.filters.Filters as flt lowpass_50 = flt.IIRFilter(fp=50, fs=75, ftype='ellip') # help inline #?flt.IIRFilter # check parameters print(lowpass_50) # OR print(lowpass_50.get()) # apply a Filter ecg_filtered = lowpass_50(ecg) #plot ecg.plot() ecg_filtered.plot() # check output type ecg.get_signal_type() Explanation: 2.1 Filters Filters return a signal which has the same signal_nature of the input signal. The name Filters recalls the aim of this algorithms which is in general to increase the Signal/Noise ratio by filtering out the unwanted components in a signal (e.g high frequency noise). End of explanation # create an Estimator import pyphysio.estimators.Estimators as est ibi_ecg = est.BeatFromECG() # check parameters ibi_ecg # apply an Estimator ibi = ibi_ecg(ecg_filtered) # plot ax1 = plt.subplot(211) ecg.plot() plt.subplot(212, sharex=ax1) ibi.plot() # check output type ibi.get_signal_type() Explanation: 2.2 Estimators Estimators are algorithms which aim at extracting the information of interest from the input signal, thus returning a new signal which has a different signal_nature. The name Estimators recalls the fact that the information extraction depends on the value of the algorithm parameters which might not be known a-priori. 
Thus the result should be considered as an estimate of the real content of information of the input signal. End of explanation # create an Indicator import pyphysio.indicators.TimeDomain as td_ind import pyphysio.indicators.FrequencyDomain as fd_ind rmssd = td_ind.RMSSD() HF = fd_ind.PowerInBand(interp_freq=4, freq_max=0.4, freq_min=0.15, method = 'ar') # check parameters print(rmssd) print(HF) # apply an Indicator rmssd_ = rmssd(ibi) HF_ = HF(ibi.resample(4)) #resampling is needed to compute the Power Spectrum Density print(rmssd_) print(HF_) # check output type print(type(rmssd_)) print(type(HF_)) Explanation: 2.3 Indicators Indicators are algorithm which extract a metrics (scalar value) from the input signal, for instance a statistic (average). Three types of indicators are provided in pyphysio: * Time domain indicators: comprising simple statistical indicators and other metrics that can be computed on the signal values; * Frequency domain indicators: metrics that are computed on the Power Spectrum Density (PSD) of the signal; * Non-linear indicators: complex indicators that are computed on the signal values (e.g. Entropy). End of explanation # create a Tool import pyphysio.tools.Tools as tll compute_psd = tll.PSD(method='ar', interp_freq = 4) # check parameters compute_psd # apply a Tool frequencies, power = compute_psd(ibi.resample(4)) plt.plot(frequencies, power) plt.show() Explanation: 2.4 Tools This is a collection of useful algorithms that can be used for signal processing. These algorithms might return scalar values or numpy arrays. End of explanation
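The pieces shown above can also be chained into a single processing flow, which is the pattern the pipelines tutorial builds on; a minimal sketch reusing only the objects and parameters already defined in this tutorial:
# filter -> estimator -> indicator, applied in sequence on the raw ECG
ibi_from_raw = est.BeatFromECG()(flt.IIRFilter(fp=50, fs=75, ftype='ellip')(ecg))
print(td_ind.RMSSD()(ibi_from_raw))
Each step returns a Signal (or a scalar, in the case of the indicator), so the output of one algorithm can be passed directly as the input of the next.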
14,827
Given the following text description, write Python code to implement the functionality described below step by step Description: 02 - Data from the Web Deadline Wednesday October 25, 2017 at 11 Step1: www.topuniversities.com Step2: Best universities in term of Step3: Comments Step4: Comments Step5: Comments Step6: Comments Step7: Comments Step8: Best universities in term of Step9: Comments Step10: Comments Step11: Comments Step12: Comments Step13: Comments Step14: Insights Here we first proceed by creating the correlation matrix (since it's a symetric matrix we only kept the lower triangle). We then plot it using a heatmap to see correlation between columns of the dataframe. We also made another heatmap with only correlation whose absolute value is greater than 0.5. Finally we averaged the features when they were available for the two websites (except rankings). Correlations analysis Some correlations bring interesting information Step15: Best university First we have to transform ranking in some score. Here we assume a linear relation for the score given the ranking, so we gave a score of 1 for the best ranking and 0 for the worst ranking with linear mapping between these two. We did it for each of the ranking (the two websites). Also we don't really know if a website is most trustworthy than the other, so a good merging for the ranking would be to take the average of the two scores with equal weights for each score. Finally we also took into account the ratio of staff member per students
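The rank-to-score mapping described above can be stated compactly: for a ranking whose best and worst observed ranks are r_best and r_worst, each university gets score_i = (r_worst - r_i) / (r_worst - r_best), and the final score is the unweighted mean of the two website scores and the staff-per-student ratio. A minimal sketch of that idea (illustrative only; the actual implementation appears later in the notebook):
def rank_to_score(ranks):
    # linear mapping: 1 for the best observed rank, 0 for the worst
    best, worst = ranks.min(), ranks.max()
    return (worst - ranks) / (worst - best)
This is only a restatement of the method described above, not an alternative to it.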
Python Code: import requests, re, html import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from bs4 import BeautifulSoup from tqdm import tqdm_notebook import warnings warnings.filterwarnings('ignore') NUM_OBS = 200 Explanation: 02 - Data from the Web Deadline Wednesday October 25, 2017 at 11:59PM Important Notes Make sure you push on GitHub your Notebook with all the cells already evaluated (i.e., you don't want your colleagues to generate unnecessary Web traffic during the peer review) Don't forget to add a textual description of your thought process, the assumptions you made, and the solution you plan to implement! Please write all your comments in English, and use meaningful variable names in your code. Background In this homework we will extract interesting information from www.toptop_universities.com and www.timeshighereducation.com, two platforms that maintain a global ranking of worldwide top_universities. This ranking is not offered as a downloadable dataset, so you will have to find a way to scrape the information we need! You are not allowed to download manually the entire ranking -- rather you have to understand how the server loads it in your browser. For this task, Postman with the Interceptor extension can help you greatly. We recommend that you watch this brief tutorial to understand quickly how to use it. Assignment Obtain the 200 top-ranking top_universities in www.toptop_universities.com (ranking 2018). In particular, extract the following fields for each university: name, rank, country and region, number of faculty members (international and total) and number of students (international and total). Some information is not available in the main list and you have to find them in the details page. Store the resulting dataset in a pandas DataFrame and answer the following questions: Which are the best top_universities in term of: (a) ratio between faculty members and students, (b) ratio of international students? Answer the previous question aggregating the data by (c) country and (d) region. Plot your data using bar charts and describe briefly what you observed. Obtain the 200 top-ranking top_universities in www.timeshighereducation.com (ranking 2018). Repeat the analysis of the previous point and discuss briefly what you observed. Merge the two DataFrames created in questions 1 and 2 using university names. Match top_universities' names as well as you can, and explain your strategy. Keep track of the original position in both rankings. Find useful insights in the data by performing an exploratory analysis. Can you find a strong correlation between any pair of variables in the dataset you just created? Example: when a university is strong in its international dimension, can you observe a consistency both for students and faculty members? Can you find the best university taking in consideration both rankings? Explain your approach. Hints: - Keep your Notebook clean and don't print the verbose output of the requests if this does not add useful information for the reader. - In case of tie, use the order defined in the webpage. 
End of explanation root_url_1 = 'https://www.topuniversities.com' # we use the link to the API from where the website fetches its data instead of BeautifulSoup # much much cleaner list_url_1 = root_url_1 + '/sites/default/files/qs-rankings-data/357051_indicators.txt' r = requests.get(list_url_1) top_universities = pd.DataFrame() top_universities = top_universities.from_dict(r.json()['data'])[['uni', 'overall_rank', 'location', 'region']] # get the university name and details URL with a regex top_universities['name'] = top_universities['uni'].apply(lambda name: html.unescape(re.findall('<a[^>]+href=\"(.*?)\"[^>]*>(.*)?</a>', name)[0][1])) top_universities['url'] = top_universities['uni'].apply(lambda name: html.unescape(re.findall('<a[^>]+href=\"(.*?)\"[^>]*>(.*)?</a>', name)[0][0])) top_universities.drop('uni', axis=1, inplace=True) top_universities['overall_rank'] = top_universities['overall_rank'].astype(int) # selects the top N rows based on the colum_name of the dataframe df def select_top_N(df, column_name, N): df = df.sort_values(by=column_name) df = df[df[column_name] <= N] return df # get only the first top-200 universities by overall rank top_universities = select_top_N(top_universities, 'overall_rank', NUM_OBS) top_universities.head() students_total = [] students_inter = [] faculty_total = [] faculty_inter = [] def get_num(soup, selector): scraped = soup.select(selector) # Some top_universities don't have stats, return NaN for these case if scraped: return int(scraped[0].contents[0].replace(',', '')) else: return np.NaN for details_url in tqdm_notebook(top_universities['url']): soup = BeautifulSoup(requests.get(root_url_1 + details_url).text, 'html.parser') students_total.append(get_num(soup, 'div.total.student div.number')) students_inter.append(get_num(soup, 'div.total.inter div.number')) faculty_total.append(get_num(soup, 'div.total.faculty div.number')) faculty_inter.append(get_num(soup, 'div.inter.faculty div.number')) top_universities['students_total'] = students_total top_universities['students_international'] = students_inter top_universities['students_national'] = top_universities['students_total'] - top_universities['students_international'] top_universities['faculty_total'] = faculty_total top_universities['faculty_international'] = faculty_inter top_universities['faculty_national'] = top_universities['faculty_total'] - top_universities['faculty_international'] top_universities.head() #defining colors for each type of plot colors_1 = ['#FF9F9A', '#D0BBFF'] colors_2 = ['#92C6FF', '#97F0AA'] plt.style.use('ggplot') Explanation: www.topuniversities.com End of explanation top = 10 top_universities_ratio = select_top_N(top_universities, 'overall_rank', top) top_universities_ratio_sf = top_universities_ratio[['name', 'students_total', 'faculty_total']] top_universities_ratio_sf = top_universities_ratio_sf.set_index(['name']) top_universities_ratio_sf.index.name = None fig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True) top_universities_ratio_sf.plot.bar(stacked=True, color=colors_1, ax=axes) axes.set_title('Top 10 ratio\'s between students and faculty members among universities') axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) fig.autofmt_xdate() plt.show() Explanation: Best universities in term of: We selected the top 10 universities in point (a) and (b). For point (c) and (d), the top 200 universities were used in order to have more data. 
(a) ratio between faculty members and students End of explanation # normalize the data to be able to make a good comparison top_universities_ratio_normed = top_universities_ratio_sf.div(top_universities_ratio_sf.sum(1), axis=0).sort_values(by='faculty_total', ascending=False) top_universities_ratio_normed.index.name = None fig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True) top_universities_ratio_normed.plot.bar(stacked=True, color=colors_1, ax=axes) axes.set_title('Top 10 ratio\'s between students and faculty members among universities') axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) # we can restrict the range on the y axis to avoid displaying unnecessary content axes.set_ylim([0.7,1]) fig.autofmt_xdate() plt.show() Explanation: Comments: We see that it is rather difficult to compare the ratios of the different universities. This is due to the different sizes of the population. In order to draw more precise information about it, we need to normalize the data with repect to each university. End of explanation top_universities_ratio_s = top_universities_ratio[['name', 'students_international', 'students_national']] top_universities_ratio_s = top_universities_ratio_s.set_index(['name']) top_universities_ratio_s_normed = top_universities_ratio_s.div(top_universities_ratio_s.sum(1), axis=0).sort_values(by='students_international', ascending=False) top_universities_ratio_s_normed.index.name = None fig, axes = plt.subplots(1, 1, figsize=(10, 5)) top_universities_ratio_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes) axes.set_title('Top 10 ratio\'s of international and national students among universities') axes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) axes.set_ylim([0, 0.6]) fig.autofmt_xdate() plt.show() Explanation: Comments: You noticed that the y-axis ranges from 0.7 to 1. We limited the visualization to this interval because the complete interval does not add meaningful insight about the data. Analyzing the results, we see that the Caltech university is the university in the top 10 offering more faculty members to its students. ETHZ is in the last position. (b) ratio of international students End of explanation ratio_country_sf = top_universities.groupby(['location'])['students_total', 'faculty_total'].sum() ratio_country_sf_normed = ratio_country_sf.div(ratio_country_sf.sum(1), axis=0).sort_values(by='faculty_total', ascending=False) ratio_country_sf_normed.index.name = None fig, axes = plt.subplots(1, 1, figsize=(15, 5)) ratio_country_sf_normed.plot.bar(stacked=True, color=colors_1, ax=axes) axes.set_title('Ratio of students and faculty members by country') axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) axes.set_ylim([0.8,1]) fig.autofmt_xdate() plt.show() ratio_country_s = top_universities.groupby(['location'])['students_international', 'students_national'].sum() ratio_country_s_normed = ratio_country_s.div(ratio_country_s.sum(1), axis=0).sort_values(by='students_international', ascending=False) ratio_country_s_normed.index.name = None fig, axes = plt.subplots(1, 1, figsize=(15, 5)) ratio_country_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes) axes.set_title('Ratio of international and national students by country') axes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) 
axes.set_ylim([0, 0.4]) fig.autofmt_xdate() plt.show() Explanation: Comments: The most international university, by its students, among the top 10 universities is the Imperial College London. Notice that ETHZ is in the third position. (c) same comparisons by country End of explanation ratio_region_s = top_universities.groupby(['region'])['students_total', 'faculty_total'].sum() ratio_region_s_normed = ratio_region_s.div(ratio_region_s.sum(1), axis=0).sort_values(by='faculty_total', ascending=False) ratio_region_s_normed.index.name = None fig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True) ratio_region_s_normed.plot.bar(stacked=True, color=colors_1, ax=axes) axes.set_title('Ratio of students and faculty members by region') axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) axes.set_ylim([0.8,1]) axes.yaxis.grid(True) fig.autofmt_xdate() plt.show() ratio_region_s = top_universities.groupby(['region'])['students_international', 'students_national'].sum() ratio_region_s_normed = ratio_region_s.div(ratio_region_s.sum(1), axis=0).sort_values(by='students_international', ascending=False) ratio_region_s_normed.index.name = None fig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True) ratio_region_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes) axes.set_title('Ratio of international and national students by region') axes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) axes.set_ylim([0,0.4]) axes.yaxis.grid(True) fig.autofmt_xdate() plt.show() Explanation: Comments: Aggregating the data by country, we see that Russia is the country offering more faculty members for its student, followed by Danemark and Saudi Arabia. The most international university in terms of students is Australia, followed by United Kingdom and Hong Kong. Switzerland is in the fifth position and India is the country with the lowest ratio of international students. 
(d) same comparisons by region End of explanation # we repeat the same procedure as for www.topuniversities.com root_url_2 = 'https://www.timeshighereducation.com' list_url_2 = root_url_2 + '/sites/default/files/the_data_rankings/world_university_rankings_2018_limit0_369a9045a203e176392b9fb8f8c1cb2a.json' r = requests.get(list_url_2) times_higher_education = pd.DataFrame() times_higher_education = times_higher_education.from_dict(r.json()['data'])[['rank', 'location', 'location', 'name', 'url', 'stats_number_students', 'stats_pc_intl_students', 'stats_student_staff_ratio']] # rename columns as is the first dataframe times_higher_education.columns = ['overall_rank', 'location', 'region', 'name', 'url', 'students_total', 'ratio_inter', 'student_staff_ratio'] # as the ranks have different represetation we had to delete the '=' in front of universities that have the same rank, # rewrite the rank when it is represented as an interval (ex: 201-250) and finally delete the '+' in the end for the last ones times_higher_education['overall_rank'] = times_higher_education['overall_rank'].apply(lambda rank: re.sub('[=]', '', rank)) times_higher_education['overall_rank'] = times_higher_education['overall_rank'].apply(lambda rank: rank.split('–')[0]) times_higher_education['overall_rank'] = times_higher_education['overall_rank'].apply(lambda rank: re.sub('[+]', '', rank)).astype(int) # remaps a ranking in order to make selection by ranking easier # ex: 1,2,3,3,5,6,7 -> 1,2,3,3,4,5,6 def remap_ranking(rank): last=0 for i in range(len(rank)): if last == rank[i]-1: #no problem last = rank[i] elif last != rank[i]: last = last+1 rank[i] = last rank[(i+1):] = rank[(i+1):]-1 return rank times_higher_education['overall_rank'] = remap_ranking(times_higher_education['overall_rank'].copy()) # in the following lines we make the necessary transformation in order to get the right type or numbers for each column times_higher_education['students_total'] = times_higher_education['students_total'].apply(lambda x: re.sub('[^0-9]','', x)).astype(int) times_higher_education['ratio_inter'] = times_higher_education['ratio_inter'].apply(lambda x: re.sub('[^0-9]','', x)).astype(float) times_higher_education['student_staff_ratio'] = times_higher_education['student_staff_ratio'].astype(float) times_higher_education['students_international'] = (times_higher_education['students_total'] * (times_higher_education['ratio_inter']/100)).astype(int) times_higher_education['students_national'] = times_higher_education['students_total'] - times_higher_education['students_international'] times_higher_education['faculty_total'] = (times_higher_education['students_total'] / times_higher_education['student_staff_ratio']).astype(int) times_higher_education['faculty_international'] = np.NaN times_higher_education['faculty_national'] = np.NaN times_higher_education['region'] = np.NaN # resolve ties times_higher_education['overall_rank'] = np.arange(1, times_higher_education.shape[0]+1) # resolve N/A region loc_to_reg = top_universities[['location', 'region']] loc_to_reg = set(loc_to_reg.apply(lambda x: '{}_{}'.format(x['location'], x['region']), axis=1).values) loc_to_reg = {x.split('_')[0]: x.split('_')[1] for x in loc_to_reg} from collections import defaultdict loc_to_reg = defaultdict(lambda: 'N/A', loc_to_reg) def resolve_uni(x): x['region'] = loc_to_reg[x['location']] return x times_higher_education = times_higher_education.apply(resolve_uni, axis=1) del times_higher_education['ratio_inter'] del times_higher_education['student_staff_ratio'] # 
get only the first top-200 universities by overall rank times_higher_education = select_top_N(times_higher_education, 'overall_rank', NUM_OBS) times_higher_education.head() Explanation: Comments: Asia is the region offering more faculty members to its students. It is followed by North America and Europe. The most international university in terms of students is Oceania. Europe is second. Analysis of the two methods We get consistent results comparing the results obtained by region or by country about the ratio of international students. By country, we get Australia and by region, Oceania. This makes sense as Australia owns nine of the eleven top_universities of Oceania. www.timeshighereducation.com End of explanation top = 10 times_higher_education_ratio = select_top_N(times_higher_education, 'overall_rank', top) times_higher_education_ratio_sf = times_higher_education_ratio[['name', 'students_total', 'faculty_total']] times_higher_education_ratio_sf = times_higher_education_ratio_sf.set_index(['name']) times_higher_education_ratio_normed = times_higher_education_ratio_sf.div(times_higher_education_ratio_sf.sum(1), axis=0).sort_values(by='faculty_total', ascending=False) times_higher_education_ratio_normed.index.name = None fig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True) times_higher_education_ratio_normed.plot.bar(stacked=True, color=colors_1, ax=axes) axes.set_title('Top 10 ratio\'s between students and faculty members among universities') axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) axes.set_ylim([0.8,1]) fig.autofmt_xdate() plt.show() Explanation: Best universities in term of: We selected the top 10 universities in point (a) and (b). For point (c) and (d), the top 200 universities were used in order to have more data. (a) ratio between faculty members and students End of explanation times_higher_education_ratio_s = times_higher_education_ratio[['name', 'students_international', 'students_national']] times_higher_education_ratio_s = times_higher_education_ratio_s.set_index(['name']) times_higher_education_ratio_s_normed = times_higher_education_ratio_s.div(times_higher_education_ratio_s.sum(1), axis=0).sort_values(by='students_international', ascending=False) times_higher_education_ratio_s_normed.index.name = None fig, axes = plt.subplots(1, 1, figsize=(10, 5)) times_higher_education_ratio_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes) axes.set_title('Top 10 ratio\'s of international and national students among universities') axes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) axes.set_ylim([0.2, 0.6]) fig.autofmt_xdate() plt.show() Explanation: Comments: The university of Chicago is the faculty with the more faculty members by students. It is closely followed by the California Institute of Technology. (b) ratio of international students End of explanation ratio_country_sf = times_higher_education.groupby(['location'])['students_total', 'faculty_total'].sum() ratio_country_sf_normed = ratio_country_sf.div(ratio_country_sf.sum(1), axis=0).sort_values(by='faculty_total', ascending=False) ratio_country_sf_normed.index.name = None fig, axes = plt.subplots(1, 1, figsize=(15, 5)) ratio_country_sf_normed.plot.bar(stacked=True, color=colors_1, ax=axes) axes.set_title('Ratio of students and faculty members by country') axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) 
axes.set_ylim([0.8,1]) fig.autofmt_xdate() plt.show() Explanation: Comments: The Imperial College Longon university has a strong lead in the internationalization of its student. Oxford and ETHZ are following bunched together. (c) same comparisons by country End of explanation ratio_country_s = times_higher_education.groupby(['location'])['students_international', 'students_national'].sum() ratio_country_s_normed = ratio_country_s.div(ratio_country_s.sum(1), axis=0).sort_values(by='students_international', ascending=False) ratio_country_s_normed.index.name = None fig, axes = plt.subplots(1, 1, figsize=(15, 5)) ratio_country_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes) axes.set_title('Ratio of international and national students by country') axes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) axes.set_ylim([0, 0.6]) fig.autofmt_xdate() plt.show() Explanation: Comments: Denmark is in the first position. We find the Russian Federation in the second place. This is the same result obtained with the top universities website the other way around. This shows that either the universities of each country have different ranking in each website or each website has different information about each university. End of explanation # Some countries have their field 'region' filled with 'N/A': this is due to the technique we used to write the # correct region for each university. In the sample we are considering, let's see how many universities are concerned: times_higher_education[times_higher_education['region'] == 'N/A'] # As there is only two universities concerned, we can rapidly write it by hand. Of course we should have develop a # more generalized manner to do it, if we had a much larger sample. times_higher_education.set_value(178, 'region', 'Europe') times_higher_education.set_value(193, 'region', 'Europe') ratio_region_s = times_higher_education.groupby(['region'])['students_total', 'faculty_total'].sum() ratio_region_s_normed = ratio_region_s.div(ratio_region_s.sum(1), axis=0).sort_values(by='faculty_total', ascending=False) ratio_region_s_normed.index.name = None fig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True) ratio_region_s_normed.plot.bar(stacked=True, color=colors_1, ax=axes) axes.set_title('Ratio of students and faculty members by region') axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) axes.set_ylim([0.8,1]) axes.yaxis.grid(True) fig.autofmt_xdate() plt.show() ratio_region_s = times_higher_education.groupby(['region'])['students_international', 'students_national'].sum() ratio_region_s_normed = ratio_region_s.div(ratio_region_s.sum(1), axis=0).sort_values(by='students_international', ascending=False) ratio_region_s_normed.index.name = None fig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True) ratio_region_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes) axes.set_title('Ratio of international and national students by region') axes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) axes.set_ylim([0,0.4]) axes.yaxis.grid(True) fig.autofmt_xdate() plt.show() Explanation: Comments: Luxembourg has more international than national students which allows it to be in first position without difficulty. Switzerland is in the sixth position (versus fifth for top university website). 
(d) same comparisons by region End of explanation # Detects same universities with different names in the two dataframes before merging # using Jaccard similarity and same location rule (seems to keep matching entry) def t(x): # Compute Jaccard score (intersection over union) def jaccard(a, b): u = set(a.split(' ')) v = set(b.split(' ')) return len(u.intersection(v)) / len(u.union(v)) names = top_universities['name'].tolist() locations = top_universities['location'].tolist() scores = np.array([jaccard(x['name'], n) for n in names]) m = scores.max() i = scores.argmax() # Jaccard score for similarity and location match to filter out name with different locations if m > 0.5 and x['location'] == locations[i]: x['name'] = names[i] return x # Match universities name in both dataframes times_higher_education = times_higher_education.apply(t, axis=1) # Intersection on the name column of the two datasets merged = pd.merge(top_universities, times_higher_education, on='name', how='inner') merged.head() Explanation: Comments: In the first plot, we see that Africa is the region where there is more faculty members by students. The two following regions are very close to each other. In the second plot, Oceania is the more internationalized school in terms of its students and Europe is second. We had similar results by the other website concerning this last outcome. End of explanation merged_num = merged.select_dtypes(include=[np.number]) merged_num.dropna(how='all', axis=1) merged_num.dropna(how='any', axis=0) def avg_feature(x): cols = set([x for x in x.index if 'overall' not in x]) cols_common = set([x[0:-2] for x in cols]) for cc in cols_common: cc_x = '{}_x'.format(cc) cc_y = '{}_y'.format(cc) if cc_y in cols: x['{}_avg'.format(cc)] = (x[cc_x] + x[cc_y]) / 2 else: x['{}_avg'.format(cc)] = x[cc_x] / 2 for c in cols: del x[c] return x merged_num_avg = merged_num.apply(avg_feature, axis=1) merged_num.head() corr = merged_num.corr() mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True fig, ax = plt.subplots(figsize=(10,10)) sns.heatmap(corr, ax=ax, mask=mask, annot=True, square=True) plt.show() # Keep only correlation with and absolute value superior to 0.5 corr[(corr < 0.5) & (corr > -0.5)] = 0 fig, ax = plt.subplots(figsize=(10,10)) sns.heatmap(corr, ax=ax, mask=mask, annot=True, square=True) plt.show() # Keep only correlation with and absolute value superior to 0.5 for averaged features corr = merged_num_avg.corr() mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True corr[(corr < 0.5) & (corr > -0.5)] = 0 fig, ax = plt.subplots(figsize=(10,10)) sns.heatmap(corr, ax=ax, mask=mask, annot=True, square=True) plt.show() Explanation: Insights Here we first proceed by creating the correlation matrix (since it's a symetric matrix we only kept the lower triangle). We then plot it using a heatmap to see correlation between columns of the dataframe. We also made another heatmap with only correlation whose absolute value is greater than 0.5. Finally we averaged the features when they were available for the two websites (except rankings). Correlations analysis Some correlations bring interesting information: - $Corr(overall_rank_x, overall_rank_y) = 0.7$ <br /> We get a strong correlation between the ranking of the first website and the second one. It shows us that the two website ranking methods lead on similar results (since the correlation is positive). 
It's insightfull since even if the features are approximately the same for the two websites, their methodology to attribute a rank could be really different. This important positive correlation reveals that the methodologies doesn't differ so much between the two websites. - $Corr(students_international_avg, faculty_international_avg) = 0.59$ <br /> Here we have an interesting correlation between the number of international students and the number of international staff members. - We have strong correlation between same features but coming from different websites. It's not really interesting since difference in same features from the two websites are likely to be small. Also we have important correlation between "total" features and their sub-categories like "international" and "national". These are not interesting too because they follow a simple relation: when the total is higher, the sub-categories are likely to be higher numbers too (i.e. if we have more students, we are likely to have more national or international students). End of explanation r = merged[['name', 'overall_rank_x', 'overall_rank_y']] r.head() def lin(df): best_rank = df.min() worst_rank = df.max() a = 1 / (best_rank - worst_rank) b = 1 - a*best_rank return df.apply(lambda x: a*x + b) r['stud_staff_ratio'] = merged[['faculty_international_x', 'faculty_international_y']].mean(axis=1) / \ merged[['students_total_x', 'students_total_y']].mean(axis=1) r['score_x'] = lin(r['overall_rank_x']) r['score_y'] = lin(r['overall_rank_y']) r['overall_score'] = r[['score_x', 'score_y', 'stud_staff_ratio']].mean(axis=1) r = r.dropna() r[r['overall_score'] == r['overall_score'].max()] Explanation: Best university First we have to transform ranking in some score. Here we assume a linear relation for the score given the ranking, so we gave a score of 1 for the best ranking and 0 for the worst ranking with linear mapping between these two. We did it for each of the ranking (the two websites). Also we don't really know if a website is most trustworthy than the other, so a good merging for the ranking would be to take the average of the two scores with equal weights for each score. Finally we also took into account the ratio of staff member per students: $finalScore = mean(score1, score2, staff per studiants)$ After computing these values, we found that Caltech is the best university (according to our assumptions). Per Website ranking: Caltech: top_universities -> 4 | times_higher_education = -> 3 | staff per student ratio -> 0.15 | => final score: 0.71 End of explanation
14,828
Given the following text description, write Python code to implement the functionality described below step by step Description: Figure Step1: 1. What is the voxelwise threshold? Step2: 2. Definition of alternative Detect 1 region We define a 'success' as a situation in which the maximum in the active field exceeds the threshold. Step3: 3. How large statistic in a field be to exceed the threshold with power 0.80? We quantify this by computing the expected local maximum in the field (which is a null field elevated by value D). We use the distribution of local maxima of Cheng&Schwartzman to compute the power/effect size. Step4: 5. From the required voxel statistic to Cohen's D for a given sample size Step5: The figure per List (Tal or David) Step6: Print median sample size and power for Neurosynth data Step7: Compute median of sample sizes over last 5 years, for use in correlation simulation notebook.
Python Code: % matplotlib inline from __future__ import division import os import nibabel as nib import numpy as np from neuropower import peakdistribution import scipy.integrate as integrate import pandas as pd import matplotlib.pyplot as plt import palettable.colorbrewer as cb if not 'FSLDIR' in os.environ.keys(): raise Exception('This notebook requires that FSL is installed and the FSLDIR environment variable is set') Explanation: Figure: How large should effect sizes be in neuroimaging to have sufficient power? Specification of alternative In a brain map in an MNI template, with smoothness of 3 times the voxelsize, there is one active region with voxelwise effect size D. The (spatial) size of the region is relatively small (<200 voxels). We want to know how large D should be in order to have 80% power to detect the region using voxelwise FWE thresholding using Random Field Theory. Detect the region means that the maximum in the activated area exceeds the significance threshold. Strategy Compute the voxelwise threshold for the specified smoothness and volume FweThres = 5.12 Define the alternative hypothesis, so that the omnibus power is 80% How large should the maximum statistic in a (small) region be to exceed the voxelwise threshold with 0.8 power? muMax = 4.00 How does this voxel statistic translate to Cohen's D for a given sample size? See Figure End of explanation # From smoothness + mask to ReselCount FWHM = 3 ReselSize = FWHM**3 MNI_mask = nib.load(os.path.join(os.getenv('FSLDIR'),'data/standard/MNI152_T1_2mm_brain_mask.nii.gz')).get_data() Volume = np.sum(MNI_mask) ReselCount = Volume/ReselSize print("ReselSize: "+str(ReselSize)) print("Volume: "+str(Volume)) print("ReselCount: "+str(ReselCount)) print("------------") # From ReselCount to FWE treshold FweThres_cmd = 'ptoz 0.05 -g %s' %ReselCount FweThres = os.popen(FweThres_cmd).read() print("FWE voxelwise GRF threshold: "+str(FweThres)) Explanation: 1. What is the voxelwise threshold? End of explanation Power = 0.8 Explanation: 2. Definition of alternative Detect 1 region We define a 'success' as a situation in which the maximum in the active field exceeds the threshold. End of explanation muRange = np.arange(1.8,5,0.01) muSingle = [] for muMax in muRange: # what is the power to detect a maximum power = 1-integrate.quad(lambda x:peakdistribution.peakdens3D(x,1),-20,float(FweThres)-muMax)[0] if power>Power: muSingle.append(muMax) break print("The power is sufficient for one region if mu equals: "+str(muSingle[0])) Explanation: 3. How large statistic in a field be to exceed the threshold with power 0.80? We quantify this by computing the expected local maximum in the field (which is a null field elevated by value D). We use the distribution of local maxima of Cheng&Schwartzman to compute the power/effect size. 
End of explanation # Read in data Data = pd.read_csv("../SampleSize/neurosynth_sampsizedata.txt",sep=" ",header=None,names=['year','n']) Data['source']='Tal' Data=Data[Data.year!=1997] #remove year with 1 entry David = pd.read_csv("../SampleSize/david_sampsizedata.txt",sep=" ",header=None,names=['year','n']) David['source']='David' Data=Data.append(David) # add detectable effect Data['deltaSingle']=muSingle[0]/np.sqrt(Data['n']) # add jitter for figure stdev = 0.01*(max(Data.year)-min(Data.year)) Data['year_jitter'] = Data.year+np.random.randn(len(Data))*stdev # Compute medians per year (for smoother) Medians = pd.DataFrame({'year': np.arange(start=np.min(Data.year),stop=np.max(Data.year)+1), 'TalMdSS':'nan', 'DavidMdSS':'nan', 'TalMdDSingle':'nan', 'DavidMdDSingle':'nan', 'MdSS':'nan', 'DSingle':'nan' }) for yearInd in (range(len(Medians))): # Compute medians for Tal's data yearBoolTal = np.array([a and b for a,b in zip(Data.source=="Tal",Data.year==Medians.year[yearInd])]) Medians.TalMdSS[yearInd] = np.median(Data.n[yearBoolTal]) Medians.TalMdDSingle[yearInd] = np.median(Data.deltaSingle[yearBoolTal]) # Compute medians for David's data yearBoolDavid = np.array([a and b for a,b in zip(Data.source=="David",Data.year==Medians.year[yearInd])]) Medians.DavidMdSS[yearInd] = np.median(Data.n[yearBoolDavid]) Medians.DavidMdDSingle[yearInd] = np.median(Data.deltaSingle[yearBoolDavid]) # Compute medians for all data yearBool = np.array(Data.year==Medians.year[yearInd]) Medians.MdSS[yearInd] = np.median(Data.n[yearBool]) Medians.DSingle[yearInd] = np.median(Data.deltaSingle[yearBool]) Medians[0:5] # add logscale Medians['MdSSLog'] = [np.log(x) for x in Medians.MdSS] Medians['TalMdSSLog'] = [np.log(x) for x in Medians.TalMdSS] Medians['DavidMdSSLog'] = [np.log(x) for x in Medians.DavidMdSS] Data['nLog']= [np.log(x) for x in Data.n] Explanation: 5. 
From the required voxel statistic to Cohen's D for a given sample size End of explanation twocol = cb.qualitative.Paired_12.mpl_colors fig,axs = plt.subplots(1,2,figsize=(12,5)) fig.subplots_adjust(hspace=.5,wspace=.3) axs=axs.ravel() axs[0].plot(Data.year_jitter[Data.source=="Tal"],Data['nLog'][Data.source=="Tal"],"r.",color=twocol[0],alpha=0.5,label="") axs[0].plot(Data.year_jitter[Data.source=="David"],Data['nLog'][Data.source=="David"],"r.",color=twocol[2],alpha=0.5,label="") axs[0].plot(Medians.year,Medians.TalMdSSLog,color=twocol[1],lw=3,label="Neurosynth") axs[0].plot(Medians.year,Medians.DavidMdSSLog,color=twocol[3],lw=3,label="David et al.") axs[0].set_xlim([1993,2016]) axs[0].set_ylim([0,8]) axs[0].set_xlabel("Year") axs[0].set_ylabel("Median Sample Size") axs[0].legend(loc="upper left",frameon=False) #labels=[1,5,10,20,50,150,500,1000,3000] labels=[1,4,16,64,256,1024,3000] axs[0].set_yticks(np.log(labels)) axs[0].set_yticklabels(labels) axs[1].plot(Data.year_jitter[Data.source=="Tal"],Data.deltaSingle[Data.source=="Tal"],"r.",color=twocol[0],alpha=0.5,label="") axs[1].plot(Data.year_jitter[Data.source=="David"],Data.deltaSingle[Data.source=="David"],"r.",color=twocol[2],alpha=0.5,label="") axs[1].plot(Medians.year,Medians.TalMdDSingle,color=twocol[1],lw=3,label="Neurosynth") axs[1].plot(Medians.year,Medians.DavidMdDSingle,color=twocol[3],lw=3,label="David et al.") axs[1].set_xlim([1993,2016]) axs[1].set_ylim([0,3]) axs[1].set_xlabel("Year") axs[1].set_ylabel("Effect Size with 80% power") axs[1].legend(loc="upper right",frameon=False) plt.savefig('Figure1.svg',dpi=600) plt.show() Explanation: The figure per List (Tal or David) End of explanation Medians.loc[:, lambda df: ['year', 'TalMdSS', 'TalMdDSingle']] Explanation: Print median sample size and power for Neurosynth data End of explanation yearBoolTal = np.array([a and b for a,b in zip(Data.source=="Tal",Data.year>2010)]) print('Median sample size (2011-2015):',np.median(Data.n[yearBoolTal])) Explanation: Compute median of sample sizes over last 5 years, for use in correlation simulation notebook. End of explanation
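A short addendum (not part of the original notebook): the relation D = mu / sqrt(n) used above can be inverted to give the sample size needed to reach 80% power for a target effect size, n = (mu / D)^2. Assuming the same mu of about 4.00:

import numpy as np

mu_required = 4.00                      # required voxel statistic (approximate value from above)
target_D = np.array([0.2, 0.5, 0.8])    # conventional small / medium / large effect sizes

needed_n = np.ceil((mu_required / target_D) ** 2).astype(int)   # n = (mu / D)**2
for d, n in zip(target_D, needed_n):
    print("D = %.1f  ->  n >= %d" % (d, n))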
14,829
Given the following text description, write Python code to implement the functionality described below step by step Description: test netcdf+ This is a more extensive integration test, if all the features of netcdf+ work as expected. Step1: Open new storage try to create a new storage Step2: Create some stores Step3: And the default store. The last store for a particular object is used as the default if no specific store is specified. Step4: Initialize the store Step5: Reopen empty storage Step6: set caching of the new stores Step7: Check if the stores were correctly loaded Step8: Create variables types Get a list of all possible variable types Step9: Make a dimension on length 2 to simplify dimension nameing. Now we construct for each type a corresponding variable of dimensions 2x2x2. Step10: Bool Step11: Float Step12: Index Index is special in the sense that it supports only integers that are non-negative. Negative values will be interpreted as None Step13: Int Step14: JSON The variable type JSON encode the given object as a JSON string in the shortest possible way. This includes using referenes to storable objects. Step15: All object types registered as being Storable by subclassing from openpathsampling.base.StorableObject. JSONObj A JSON serializable object. This can be normal very simple python objects, plus numpy arrays, and objects that implement to_dict and from_dict. This is almost the same as JSON except if the object to be serialized is a storable object itself, it will not be referenced but the object itself will be turned into a JSON representation. Step16: Numpy Step17: Obj You can store objects of a type which you have previously added. For loading you need to make sure that the class (and the store if set manually) is present when you load from the store. Step18: lazy Lazy loading will reconstruct an object using proxies. These proxies behave almost like the loaded object, but will delay loading of the object until it is accessed. Saving for lazy objects is the same as for regular objects. Only loading return a proxy object. Step19: The type of the returned object is LoaderProxy while the class is the actual class is the baseclass loaded by the store to not trigger loading when the __class__ attribute is accessed. The actual object can be accessed by __subject__ and doing so will trigger loading the object. All regular attributes will be delegated to __subject__.attribute and also trigger loading. Step20: Load/Save objects Note that there are now 6 Node objects. Step21: Saving without specifying should use store nodes which was defined last. Step22: Get the index of the obj in the storage Step23: And test the different ways to access the contained json 1. direct json using variables in the store Step24: 2. direct json using variables in the full storage Step25: 3. indirect json and reconstruct using vars in the store Step26: 4. using the store accessor __getitem__ in the store Step27: One importance difference is that a store like nodes has a cache (which we set to 10 before). Using vars will not use a store and hence create a new object! ObjectStores ObjectStores are resposible to save and load objects. There are now 6 types available. ObjectStore The basic store which we have used before NamedObjectStore Supports to give objects names Step28: NamedObjects have a .name property, which has a default. Step29: and can be set. Step30: Once the object is saved, the name cannot be changed anymore. Step31: usually names are not unique (see next store). 
So we can have more than one object with the same name. Step32: See the list of named objects Step33: UniqueNamedObjectStore The forces names to be unique Step34: Note here that an object can be store more than once in a storage, but only if more than one store supports the file type. Step35: As said before this can only happen if you have more than one store for the same object type. Step36: some more tests. First saving onnamed objects. This is okay. Only given names should be unique. Step37: This works since it does a rename before saving. Step38: DictStore A dictstore works a like a dictionary on disk. The content is returned using dict() Step39: ImmutableDictStore This adds the check that already used names cannot be used again Step40: VariableStore Store a node with an int as we defined for our VariableStore Step41: clear the cache Step42: And try loading Step43: Try storing non int() parseable value Step44: Test fallback Step45: Try saving object in fallback
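Before the full integration test, a compressed sketch of the basic save/load round trip it exercises. It only uses calls that appear in the test itself; the Point class and the file name are made up for illustration:

from openpathsampling.netcdfplus import NetCDFPlus, ObjectStore, StorableObject

class Point(StorableObject):
    # minimal storable object, mirroring the Node class defined in the test below
    def __init__(self, value):
        super(Point, self).__init__()
        self.value = value

st = NetCDFPlus('sketch_roundtrip.nc', mode='w')   # create a new storage file
st.create_store('points', ObjectStore(Point))      # register a store for Point objects
st.finalize_stores()                               # build the underlying netCDF variables

st.points.save(Point(42))                          # save an object ...
print(st.points[0].value)                          # ... and load it back by index
st.close()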
Python Code: import openpathsampling as paths from openpathsampling.netcdfplus import ( NetCDFPlus, ObjectStore, StorableObject, NamedObjectStore, UniqueNamedObjectStore, DictStore, ImmutableDictStore, VariableStore, StorableNamedObject ) import numpy as np from __future__ import print_function class Node(StorableObject): def __init__(self, value): super(Node, self).__init__() self.value = value def __repr__(self): return 'Node(%s)' % self.value class NamedNode(StorableNamedObject): def __init__(self, value): super(NamedNode, self).__init__() self.value = value def __repr__(self): return 'Node(%s)' % self.value Explanation: test netcdf+ This is a more extensive integration test, if all the features of netcdf+ work as expected. End of explanation st = NetCDFPlus('test_netcdfplus.nc', mode='w') Explanation: Open new storage try to create a new storage End of explanation class NodeIntStore(VariableStore): def __init__(self): super(NodeIntStore, self).__init__(Node, ['value']) def initialize(self): super(VariableStore, self).initialize() # Add here the stores to be imported self.create_variable('value', 'int') st.create_store('nodesnamed', NamedObjectStore(NamedNode)) st.create_store('nodesunique', UniqueNamedObjectStore(NamedNode)) st.create_store('dict', DictStore()) st.create_store('dictimmutable', ImmutableDictStore()) st.create_store('varstore', NodeIntStore()) Explanation: Create some stores End of explanation st.create_store('nodes', ObjectStore(Node)) print(st.find_store(Node)) Explanation: And the default store. The last store for a particular object is used as the default if no specific store is specified. End of explanation st.finalize_stores() v = st.variables['nodes_uuid'] v.chunking() print(st.find_store(Node)) st.nodes.save(Node(10)); st.close() Explanation: Initialize the store End of explanation st = NetCDFPlus('test_netcdfplus.nc', mode='a') Explanation: Reopen empty storage End of explanation for store in st.stores: store.set_caching(10) Explanation: set caching of the new stores End of explanation assert('nodes' in st.objects) assert('stores' in st.objects) assert(len(st.nodes) == 1) assert(len(st.stores) == 7) for store in st.stores: print('{:40} {:30}'.format(str(store), str(store.cache))) Explanation: Check if the stores were correctly loaded End of explanation print(sorted(st.get_var_types())) Explanation: Create variables types Get a list of all possible variable types End of explanation st.create_dimension('pair', 2) for var_type in st.get_var_types(): st.create_variable(var_type, var_type, dimensions=('pair', 'pair', 'pair')) st.update_delegates() for var_name, var in sorted(st.variables.items()): print(var_name, var.dimensions) for var in sorted(st.vars): print(var) Explanation: Make a dimension on length 2 to simplify dimension nameing. Now we construct for each type a corresponding variable of dimensions 2x2x2. End of explanation st.vars['bool'][:] = True print(st.vars['bool'][:]) Explanation: Bool End of explanation st.vars['float'][1,1] = 1.0 print(st.vars['float'][:]) Explanation: Float End of explanation st.vars['index'][0,1,0] = 10 st.vars['index'][0,1,1] = -1 st.vars['index'][0,0] = None print(st.vars['index'][0,1]) print(st.vars['index'][0,0]) Explanation: Index Index is special in the sense that it supports only integers that are non-negative. 
Negative values will be interpreted as None End of explanation st.vars['int'][0,1,0] = 10 st.vars['int'][0,1,1] = -1 print(st.vars['int'][:]) Explanation: Int End of explanation st.vars['json'][0,1,1] = {'Hallo': 2, 'Test': 3} print(st.vars['json'][0,1,1]) st.vars['json'][0,1,0] = Node(10) #! lazy print(st.variables['json'][0,1,:]) Explanation: JSON The variable type JSON encode the given object as a JSON string in the shortest possible way. This includes using referenes to storable objects. End of explanation nn = Node(10) st.vars['jsonobj'][1,0,0] = nn print(st.variables['jsonobj'][1,0,0]) st.vars['jsonobj'][1,0,0] Explanation: All object types registered as being Storable by subclassing from openpathsampling.base.StorableObject. JSONObj A JSON serializable object. This can be normal very simple python objects, plus numpy arrays, and objects that implement to_dict and from_dict. This is almost the same as JSON except if the object to be serialized is a storable object itself, it will not be referenced but the object itself will be turned into a JSON representation. End of explanation st.vars['numpy.float32'][:] = np.ones((2,2,2)) * 3.0 st.vars['numpy.float32'][0] = np.ones((2,2)) * 7.0 print(st.vars['numpy.float32'][:]) Explanation: Numpy End of explanation st.vars['obj.nodes'][0,0,0] = Node(1) st.vars['obj.nodes'][0,1,0] = Node('Second') st.vars['obj.nodes'][0,0,1] = Node('Third') # st.vars['obj.nodes'][1] = Node(20) print(st.variables['obj.nodes'][:]) print(st.variables['nodes_json'][:]) print(st.vars['obj.nodes'][0,0,0]) print(type(st.vars['obj.nodes'][0,0,0])) Explanation: Obj You can store objects of a type which you have previously added. For loading you need to make sure that the class (and the store if set manually) is present when you load from the store. End of explanation st.vars['lazyobj.nodes'][0,0,0] = Node('First') Explanation: lazy Lazy loading will reconstruct an object using proxies. These proxies behave almost like the loaded object, but will delay loading of the object until it is accessed. Saving for lazy objects is the same as for regular objects. Only loading return a proxy object. End of explanation #! lazy proxy = st.vars['lazyobj.nodes'][0,0,0] print('Type: ', type(proxy)) print('Class: ', proxy.__class__) print('Content:', proxy.__subject__.__dict__) print('Access: ', proxy.value) Explanation: The type of the returned object is LoaderProxy while the class is the actual class is the baseclass loaded by the store to not trigger loading when the __class__ attribute is accessed. The actual object can be accessed by __subject__ and doing so will trigger loading the object. All regular attributes will be delegated to __subject__.attribute and also trigger loading. End of explanation print(st.nodes[:]) obj = Node('BlaBla') st.nodes.save(obj); Explanation: Load/Save objects Note that there are now 6 Node objects. End of explanation print(len(st.nodes)) obj = Node('BlaBlaBla') st.save(obj) print(len(st.nodes)) Explanation: Saving without specifying should use store nodes which was defined last. End of explanation print(st.idx(obj)) Explanation: Get the index of the obj in the storage End of explanation print(st.nodes.variables['json'][st.idx(obj)]) Explanation: And test the different ways to access the contained json 1. direct json using variables in the store End of explanation print(st.variables['nodes_json'][st.idx(obj)]) Explanation: 2. 
direct json using variables in the full storage End of explanation print(st.nodes.vars['json'][st.idx(obj)]) print(st.nodes.vars['json'][st.idx(obj)] is obj) Explanation: 3. indirect json and reconstruct using vars in the store End of explanation print(st.nodes[st.idx(obj)]) print(st.nodes[st.idx(obj)] is obj) Explanation: 4. using the store accessor __getitem__ in the store End of explanation n = NamedNode(3) Explanation: One importance difference is that a store like nodes has a cache (which we set to 10 before). Using vars will not use a store and hence create a new object! ObjectStores ObjectStores are resposible to save and load objects. There are now 6 types available. ObjectStore The basic store which we have used before NamedObjectStore Supports to give objects names End of explanation print(n.name) Explanation: NamedObjects have a .name property, which has a default. End of explanation n.name = 'OneNode' print(n.name) n.name = 'MyNode' print(n.name) Explanation: and can be set. End of explanation st.nodesnamed.save(n); try: n.name = 'NewName' except ValueError as e: print('# We had an exception') print(e) else: raise RuntimeWarning('This should have produced an error') Explanation: Once the object is saved, the name cannot be changed anymore. End of explanation n2 = NamedNode(9) n2.name = 'MyNode' st.nodesnamed.save(n2); Explanation: usually names are not unique (see next store). So we can have more than one object with the same name. End of explanation print(st.nodesnamed.name_idx) Explanation: See the list of named objects End of explanation st.nodesunique.save(n); Explanation: UniqueNamedObjectStore The forces names to be unique End of explanation try: st.nodesunique.save(n2) except RuntimeWarning as e: print('# We had an exception') print(e) else: raise RuntimeWarning('This should have produced an error') Explanation: Note here that an object can be store more than once in a storage, but only if more than one store supports the file type. End of explanation print(st.nodesunique.name_idx) Explanation: As said before this can only happen if you have more than one store for the same object type. End of explanation n3 = NamedNode(10) n4 = NamedNode(12) st.nodesunique.save(n3); st.nodesunique.save(n4); n5 = NamedNode(1) n5.name = 'MyNode' try: st.nodesunique.save(n5) except RuntimeWarning as e: print('# We had an exception') print(e) else: raise RuntimeWarning('This should have produced an error') Explanation: some more tests. First saving onnamed objects. This is okay. Only given names should be unique. End of explanation st.nodesunique.save(n5, 'NextNode'); n6 = NamedNode(1) n6.name = 'SecondNode' try: st.nodesunique.save(n6, 'MyNode') except RuntimeWarning as e: print('# We had an exception') print(e) else: raise RuntimeWarning('This should have produced an error') Explanation: This works since it does a rename before saving. 
End of explanation print(dict(st.dict)) print(st.dict.name_idx) n1 = NamedNode(1) n2 = NamedNode(2) n3 = NamedNode(3) st.dict['Number1'] = n1 for key in sorted(st.dict): obj = st.dict[key] idxs = sorted(st.dict.name_idx[key]) print(key, ':', str(obj), idxs) st.dict['Number2'] = n2 for key in sorted(st.dict): obj = st.dict[key] idxs = sorted(st.dict.name_idx[key]) print(key, ':', str(obj), idxs) st.dict['Number1'] = n3 for key in sorted(st.dict): obj = st.dict[key] idxs = sorted(st.dict.name_idx[key]) print(key, ':', str(obj), idxs) print(st.dict['Number1']) print(st.dict.find('Number1')) print('[', ', '.join(st.dict.variables['json'][:]), ']') for key in sorted(st.dict): obj = st.dict[key] idxs = sorted(st.dict.name_idx[key]) print(key, ':', str(obj), idxs) Explanation: DictStore A dictstore works a like a dictionary on disk. The content is returned using dict() End of explanation try: st.dictimmutable['Number1'] = n1 st.dictimmutable['Number1'] = n2 except RuntimeWarning as e: print('# We had an exception') print(e) else: raise RuntimeWarning('This should have produced an error') Explanation: ImmutableDictStore This adds the check that already used names cannot be used again End of explanation a = Node(30) st.varstore.save(a); Explanation: VariableStore Store a node with an int as we defined for our VariableStore End of explanation st.varstore.clear_cache() Explanation: clear the cache End of explanation assert(st.varstore[0].value == 30) Explanation: And try loading End of explanation try: a = Node('test') print(st.varstore.save(a)) except ValueError as e: print('# We had an exception') print(e) else: raise RuntimeWarning('This should have produced an error') Explanation: Try storing non int() parseable value End of explanation st_uuid = NetCDFPlus('test_netcdfplus_uuid.nc', mode='w') st_uuid.create_store('nodes', ObjectStore(Node)) st_uuid.finalize_stores() st_uuid.save(st.nodes[0]) st_uuid.close() st.close() st_fb = NetCDFPlus('test_netcdfplus_fb.nc', mode='w', fallback=NetCDFPlus('test_netcdfplus_uuid.nc')) st_fb.create_store('nodes', ObjectStore(Node)) st_fb.finalize_stores() st_fb.exclude_from_fallback assert(st_fb.fallback.nodes[0] in st_fb.fallback) assert(st_fb.fallback.nodes[0] in st_fb) assert(st.nodes[0] in st_fb) assert(st.nodes[0] in st_fb.fallback) Explanation: Test fallback End of explanation print(hex(st_fb.nodes.save(st_fb.fallback.nodes[0]))) assert(len(st_fb.nodes) == 0) assert(st_fb.fallback.nodes[0] in st_fb) assert(st_fb.fallback.nodes[0] in st_fb.fallback) assert(st.nodes[0] in st_fb) assert(st.nodes[0] in st_fb.fallback) st_fb.fallback.close() st_fb.close() Explanation: Try saving object in fallback End of explanation
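As a compact summary of the DictStore behaviour exercised above, a self-contained sketch; the Tag class and the file name are invented for illustration, and only calls demonstrated in the test are used:

from openpathsampling.netcdfplus import NetCDFPlus, DictStore, StorableNamedObject

class Tag(StorableNamedObject):
    # minimal named storable object for this sketch
    def __init__(self, value):
        super(Tag, self).__init__()
        self.value = value

st = NetCDFPlus('sketch_dictstore.nc', mode='w')
st.create_store('tags', DictStore())
st.finalize_stores()

st.tags['first'] = Tag(1)        # store an object under a string key ...
print(st.tags['first'].value)    # ... and fetch it back like a dictionary entry
st.close()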
14,830
Given the following text description, write Python code to implement the functionality described below step by step Description: Note that the sequence size Nzc is lower then the number of subcarriers that will have elements of the Zadoff-Chu sequence. That is $Nzc \leq 300/2 = 150$. Therefore, we will append new elements (creating a cyclic sequence). Step1: Create shifted sequences for 3 users First we arbitrarely choose some cyclic shift indexes and then we call zadoffchu.getShiftedZF to get the shifted sequence. Step2: Generate channels from users to the BS Now it's time to transmit the shifted sequences. We need to create the fading channels from two users to some BS. Step3: Perform the transmission First we need to prepare the input data from our shifted Zadoff-Chu sequences. To makes things clear, let's start transmiting a single sequence and we won't include the white noise. Since we use a comb to transmit the SRS sequence, we will use Nsc/2 subcarriers from the Nsc subcarriers from a comb like pattern. Step4: According to the paper, ... the received frequency-domain sequence Y is element-wise multiplied with the complex conjugate of the expected root sequence X before the IDFT. This provides in one shot the concatenated CIRs of all UEs multiplexed on the same root sequence. Now let's get the plot of the signal considering that all users transmitted. Notice how the part due to user 1 in the plot is the same channel when only user 1 transmitted. This indicates that Zadoff-chu 0 cross correlation is indeed working. Step5: Estimate the channels Since we get a concatenation of the impulse response of the different users, we need to know for each users we need to know the first and the last sample index corresponding to the particular user's impulse response. Since we have Nsc subcarriers, from which we will use $Nsc/2$, and we have 3 users, we can imagine that each user can have up to $Nsc/(2*3)$ samples, which for $Nsc=300$ corresponds to 50 subcarriers. Now let's estimate the channel of the first user. First let's check again what is the shift used by the first user. Step8: For an index equal to 1 the starting sample of the first user will be 101 and the ending sample will be 101+50-1=150. Step9: Now we will compute the squared error in each subcarrier. Step10: Estimated the channels from corrupted (white noise) signal Now we will add some white noise to Y
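For reference, the property the whole scheme relies on can be checked directly with NumPy: a Zadoff-Chu root sequence has constant amplitude and an impulse-like circular autocorrelation, which is what lets cyclically shifted copies from different users be separated. The length 139 and root index 25 below are illustrative assumptions (a prime length not exceeding 150, as discussed above); the notebook itself builds its sequences with the RootSequence/SrsUeSequence classes used in the code that follows.

import numpy as np

Nzc = 139   # assumed prime sequence length (<= 150); the notebook sets its own value
u = 25      # assumed root index, coprime with Nzc
n = np.arange(Nzc)

# Root Zadoff-Chu sequence for odd Nzc: x_u[n] = exp(-j*pi*u*n*(n+1)/Nzc)
x_u = np.exp(-1j * np.pi * u * n * (n + 1) / Nzc)

# CAZAC properties: constant amplitude and an impulse-like circular autocorrelation
circ_acorr = np.fft.ifft(np.fft.fft(x_u) * np.conj(np.fft.fft(x_u)))
print(np.allclose(np.abs(x_u), 1.0))                               # constant amplitude
print(np.argmax(np.abs(circ_acorr)), np.abs(circ_acorr[0]) / Nzc)  # peak only at lag 0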
Python Code: # Create root sequence objects a_u1 = RootSequence(u1, size=Nsc//2, Nzc=Nzc) a_u2 = RootSequence(u1, size=Nsc//2, Nzc=Nzc) a_u3 = RootSequence(u1, size=Nsc//2, Nzc=Nzc) Explanation: Note that the sequence size Nzc is lower then the number of subcarriers that will have elements of the Zadoff-Chu sequence. That is $Nzc \leq 300/2 = 150$. Therefore, we will append new elements (creating a cyclic sequence). End of explanation m_u1 = 1 # Cyclic shift index m_u2 = 4 m_u3 = 7 r1 = SrsUeSequence(a_u1, m_u1) r2 = SrsUeSequence(a_u2, m_u2) r3 = SrsUeSequence(a_u3, m_u3) Explanation: Create shifted sequences for 3 users First we arbitrarely choose some cyclic shift indexes and then we call zadoffchu.getShiftedZF to get the shifted sequence. End of explanation speedTerminal = 3/3.6 # Speed in m/s fcDbl = 2.6e9 # Central carrier frequency (in Hz) timeTTIDbl = 1e-3 # Time of a single TTI subcarrierBandDbl = 15e3 # Subcarrier bandwidth (in Hz) numOfSubcarriersPRBInt = 12 # Number of subcarriers in each PRB # xxxxxxxxxx Dependent parametersxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx lambdaDbl = 3e8/fcDbl # Carrier wave length Fd = speedTerminal / lambdaDbl Ts = 1./(Nsc * subcarrierBandDbl) # xxxxxxxxxx Channel parameters xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx L = 16 # The number of rays for the Jakes model. # Create the MuSisoChannel jakes = JakesSampleGenerator(Fd, Ts, L) musisochannel = MuChannel(N=(1, 3), fading_generator=jakes, channel_profile=COST259_TUx) Explanation: Generate channels from users to the BS Now it's time to transmit the shifted sequences. We need to create the fading channels from two users to some BS. End of explanation comb_indexes = np.arange(0, Nsc, 2) data = np.vstack([r1.seq_array(),r2.seq_array(),r3.seq_array()]) Y = musisochannel.corrupt_data_in_freq_domain(data, Nsc, comb_indexes) Y = Y[0] # We only have one receiver impulse_response0 = musisochannel.get_last_impulse_response(0, 0) impulse_response1 = musisochannel.get_last_impulse_response(0, 1) impulse_response2 = musisochannel.get_last_impulse_response(0, 2) H1 = impulse_response0.get_freq_response(Nsc)[:, 0] H2 = impulse_response1.get_freq_response(Nsc)[:, 0] H3 = impulse_response2.get_freq_response(Nsc)[:, 0] h1 = np.fft.ifft(H1) h2 = np.fft.ifft(H2) h3 = np.fft.ifft(H3) Explanation: Perform the transmission First we need to prepare the input data from our shifted Zadoff-Chu sequences. To makes things clear, let's start transmiting a single sequence and we won't include the white noise. Since we use a comb to transmit the SRS sequence, we will use Nsc/2 subcarriers from the Nsc subcarriers from a comb like pattern. End of explanation y = np.fft.ifft(np.conj(a_u1) * Y, 150) plt.figure(figsize=(12,6)) plt.stem(np.abs(y), use_line_collection=True) plt.show() Explanation: According to the paper, ... the received frequency-domain sequence Y is element-wise multiplied with the complex conjugate of the expected root sequence X before the IDFT. This provides in one shot the concatenated CIRs of all UEs multiplexed on the same root sequence. Now let's get the plot of the signal considering that all users transmitted. Notice how the part due to user 1 in the plot is the same channel when only user 1 transmitted. This indicates that Zadoff-chu 0 cross correlation is indeed working. 
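Spelled out (up to scaling and the comb-to-full-band details handled by the estimator classes): if user k transmits the root sequence X with a cyclic shift that appears as a phase ramp with delay offset d_k, the received comb samples are

Y[n] \;=\; \sum_{k} H_k[n]\, X[n]\, e^{\,j 2\pi n d_k / N}
\qquad\Rightarrow\qquad
\mathrm{IDFT}\{X^{*} \odot Y\}[l] \;=\; \sum_{k} h_k\big[(l - d_k) \bmod N\big],

because |X[n]| = 1, so multiplying by the conjugate root sequence removes it exactly and leaves each user's impulse response in its own delay window.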
End of explanation m_u1 estimator1 = CazacBasedChannelEstimator(r1) estimator2 = CazacBasedChannelEstimator(r2) estimator3 = CazacBasedChannelEstimator(r3) Explanation: Estimate the channels Since we get a concatenation of the impulse response of the different users, we need to know for each users we need to know the first and the last sample index corresponding to the particular user's impulse response. Since we have Nsc subcarriers, from which we will use $Nsc/2$, and we have 3 users, we can imagine that each user can have up to $Nsc/(2*3)$ samples, which for $Nsc=300$ corresponds to 50 subcarriers. Now let's estimate the channel of the first user. First let's check again what is the shift used by the first user. End of explanation def plot_channel_responses(h, tilde_h): Plot the estimated and true channel responses Parameters ---------- h : numpy complex array The true channel impulse response tilde_h : numpy complex array The estimated channel impulse response H = np.fft.fft(h) tilde_H = np.fft.fft(tilde_h, Nsc) plt.figure(figsize=(16,12)) # Plot estimated impulse response ax1 = plt.subplot2grid((3,2), (0,0)) ax1.stem(np.abs(tilde_h[0:20]), use_line_collection=True) plt.xlabel("Time sample") plt.ylabel("Amplitude (abs)") plt.title("Estimated Impulse Response") plt.grid() # Plot TRUE impulse response ax2 = plt.subplot2grid((3,2), (0,1)) ax2.stem(np.abs(h[0:20]),linefmt='g', use_line_collection=True) plt.xlabel("Time sample") plt.ylabel("Amplitude (abs)") plt.xlabel("Time sample") plt.title("True Impulse Response") plt.grid() # Plot estimated frequency response (absolute value) ax3 = plt.subplot2grid((3,2), (1,0), colspan=2) plt.plot(np.abs(tilde_H)) #plt.xlabel("Subcarrier") plt.ylabel("Amplitude (abs)") plt.title("Frequency Response (abs)") # Plot TRUE frequency response (absolute value) #plt.subplot(3,2,4) ax3.plot(np.abs(H), 'g') plt.grid() plt.legend(["Estimated Value", "True Value"], loc='upper left') # Plot estimated frequency response (angle) ax4 = plt.subplot2grid((3,2), (2,0), colspan=2) ax4.plot(np.angle(tilde_H)) plt.xlabel("Subcarrier") plt.ylabel("Angle (phase)") plt.title("Frequency Response (phase)") # Plot TRUE frequency response (angle) ax4.plot(np.angle(H),'g') plt.grid() plt.legend(["Estimated Value", "True Value"], loc='upper left') # Show the plots plt.show() def plot_normalized_squared_error(H, tilde_H): Plot the normalized squared error (in dB). Parameters ---------- H : numpy complex array The true channel frequency response tilde_H : numpy complex array The estimated channel frequency response plt.figure(figsize=(12,8)) error = np.abs(tilde_H - H)**2 / (np.abs(H)**2) plt.plot(linear2dB(error)) plt.title("Normalized Squared Error") plt.xlabel("Subcarrier") plt.ylabel("Normalized Squared Error (in dB)") plt.grid() plt.show() # y = np.fft.ifft(np.conj(r1) * Y, 150) # tilde_h1 = y[0:20] # tilde_H1 = np.fft.fft(tilde_h1, Nsc) # tilde_Y1 = tilde_H1[comb_indexes] * r1 tilde_H1 = estimator1.estimate_channel_freq_domain(Y, num_taps_to_keep=20) tilde_h1 = np.fft.ifft(tilde_H1)[0:20] plot_channel_responses(h1, tilde_h1) Explanation: For an index equal to 1 the starting sample of the first user will be 101 and the ending sample will be 101+50-1=150. 
End of explanation tilde_H1 = np.fft.fft(tilde_h1, Nsc) plot_normalized_squared_error(H1, tilde_H1) # y = np.fft.ifft(np.conj(r2) * (Y), 150) # tilde_h2 = y[0:20] # tilde_H2 = np.fft.fft(tilde_h2, Nsc) # tilde_Y2 = tilde_H2[comb_indexes] * r2 tilde_H2 = estimator2.estimate_channel_freq_domain(Y, num_taps_to_keep=20) tilde_h2 = np.fft.ifft(tilde_H2)[0:20] plot_channel_responses(h2, tilde_h2) tilde_H2 = np.fft.fft(tilde_h2, Nsc) plot_normalized_squared_error(H2, tilde_H2) # y = np.fft.ifft(np.conj(r3) * (Y), 150) # tilde_h3 = y[0:11] # tilde_H3 = np.fft.fft(tilde_h3, Nsc) # tilde_Y3 = tilde_H3[comb_indexes] * r3 tilde_H3 = estimator3.estimate_channel_freq_domain(Y, num_taps_to_keep=20) tilde_h3 = np.fft.ifft(tilde_H3)[0:20] plot_channel_responses(h3, tilde_h3) tilde_H3 = np.fft.fft(tilde_h3, Nsc) plot_normalized_squared_error(H3, tilde_H3) Explanation: Now we will compute the squared error in each subcarrier. End of explanation # Add white noise noise_var = 1e-2 Y_noised = Y + np.sqrt(noise_var/2.) * (np.random.randn(Nsc//2) + 1j * np.random.randn(Nsc//2)) # y_noised = np.fft.ifft(np.conj(r2) * (Y_noised), 150) # tilde_h2_noised = y_noised[0:20] tilde_H2_noised = estimator2.estimate_channel_freq_domain(Y_noised, num_taps_to_keep=20) tilde_h2_noised = np.fft.ifft(tilde_H2_noised)[0:20] plot_channel_responses(h2, tilde_h2_noised) tilde_H2_noised = np.fft.fft(tilde_h2_noised, Nsc) plot_normalized_squared_error(H2, tilde_H2_noised) Explanation: Estimated the channels from corrupted (white noise) signal Now we will add some white noise to Y End of explanation
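A small helper, not in the original notebook, that condenses the per-subcarrier normalized squared error plotted above into a single dB number, which makes the clean and noisy estimates easier to compare side by side; H and H_est are assumed to be the true and estimated frequency responses as in the code above:

import numpy as np

def normalized_mse_db(H, H_est):
    # mean over subcarriers of |H_est - H|^2 / |H|^2, expressed in dB
    err = np.abs(H_est - H) ** 2 / np.abs(H) ** 2
    return 10.0 * np.log10(np.mean(err))

# usage with the arrays defined in the notebook, e.g.:
# print(normalized_mse_db(H2, tilde_H2), normalized_mse_db(H2, tilde_H2_noised))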
14,831
Given the following text description, write Python code to implement the functionality described below step by step Description: <img src="http Step1: Market Environment and Portfolio Object We start by instantiating a market environment object which in particular contains a list of ticker symbols in which we are interested in. Step2: Using pandas under the hood, the class retrieves historial stock price data from either Yahoo! Finance of Google. Step3: Basic Statistics Since no portfolio weights have been provided, the class defaults to equal weights. Step4: Given these weights you can calculate the portfolio return via the method get_portfolio_return. Step5: Analogously, you can call get_portfolio_variance to get the historical portfolio variance. Step6: The class also has a neatly printable string representation. Step7: Setting Weights Via the method set_weights the weights of the single portfolio components can be adjusted. Step8: You cal also easily check results for different weights with changing the attribute values of an object. Step9: Let us implement a Monte Carlo simulation over potential portfolio weights. Step10: And the simulation results visualized. Step11: Optimizing Portfolio Composition One of the major application areas of the mean-variance portfolio theory and therewith of this DX Analytics class it the optimization of the portfolio composition. Different target functions can be used to this end. Return The first target function might be the portfolio return. Step12: Instead of maximizing the portfolio return without any constraints, you can also set a (sensible/possible) maximum target volatility level as a constraint. Both, in an exact sense ("equality constraint") ... Step13: ... or just a an upper bound ("inequality constraint"). Step14: Risk The class also allows you to minimize portfolio risk. Step15: And, as before, to set constraints (in this case) for the target return level. Step16: Sharpe Ratio Often, the target of the portfolio optimization efforts is the so called Sharpe ratio. The mean_variance_portfolio class of DX Analytics assumes a risk-free rate of zero in this context. Step17: Efficient Frontier Another application area is to derive the efficient frontier in the mean-variance space. These are all these portfolios for which there is no portfolio with both lower risk and higher return. The method get_efficient_frontier yields the desired results. Step18: The plot with the random and efficient portfolios. Step19: Capital Market Line The capital market line is another key element of the mean-variance portfolio approach representing all those risk-return combinations (in mean-variance space) that are possible to form from a risk-less money market account and the market portfolio (or another appropriate substitute efficient portfolio). Step20: The following plot illustrates that the capital market line has an ordinate value equal to the risk-free rate (the safe return of the money market account) and is tangent to the efficient frontier. Step21: Portfolio return and risk of the efficient portfolio used are Step22: The portfolio composition can be derived as follows. Step23: Or also in this way. Step24: More Assets As a larger, more realistic example, consider a larger set of assets. Step25: Data retrieval in this case takes a bit. Step26: Given the larger data set now used, efficient frontier ... Step27: ... and capital market line derivations take also longer.
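For orientation, the quantities reported throughout are the standard mean-variance ones: with w the weight vector, mu the vector of mean returns and Sigma the covariance matrix,

\mu_p = w^{\top}\mu, \qquad \sigma_p^{2} = w^{\top}\Sigma\, w, \qquad \mathrm{Sharpe} = \frac{\mu_p}{\sigma_p}, \qquad \sum_i w_i = 1,

where the zero risk-free rate in the Sharpe ratio matches the convention stated for the class below.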
Python Code: from dx import * from pylab import plt plt.style.use('seaborn') Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4"> Mean-Variance Portfolio Class Without doubt, the Markowitz (1952) mean-variance portfolio theory is a cornerstone of modern financial theory. This section illustrates the use of the mean_variance_portfolio class to implement this approach. End of explanation ma = market_environment('ma', dt.date(2010, 1, 1)) ma.add_list('symbols', ['AAPL.O', 'INTC.O', 'MSFT.O', 'GS.N']) ma.add_constant('source', 'google') ma.add_constant('final date', dt.date(2014, 3, 1)) Explanation: Market Environment and Portfolio Object We start by instantiating a market environment object which in particular contains a list of ticker symbols in which we are interested in. End of explanation %%time port = mean_variance_portfolio('am_tech_stocks', ma) # instantiates the portfolio class # and retrieves all the time series data needed port.get_available_symbols() Explanation: Using pandas under the hood, the class retrieves historial stock price data from either Yahoo! Finance of Google. End of explanation port.get_weights() # defaults to equal weights Explanation: Basic Statistics Since no portfolio weights have been provided, the class defaults to equal weights. End of explanation port.get_portfolio_return() # expected (= historical mean) return Explanation: Given these weights you can calculate the portfolio return via the method get_portfolio_return. End of explanation port.get_portfolio_variance() # expected (= historical) variance Explanation: Analogously, you can call get_portfolio_variance to get the historical portfolio variance. End of explanation print(port) # ret. con. is "return contribution" # given the mean return and the weight # of the security Explanation: The class also has a neatly printable string representation. End of explanation port.set_weights([0.6, 0.2, 0.1, 0.1]) print(port) Explanation: Setting Weights Via the method set_weights the weights of the single portfolio components can be adjusted. End of explanation port.test_weights([0.6, 0.2, 0.1, 0.1]) # returns av. return + vol + Sharp ratio # without setting new weights Explanation: You cal also easily check results for different weights with changing the attribute values of an object. End of explanation # Monte Carlo simulation of portfolio compositions rets = [] vols = [] for w in range(500): weights = np.random.random(4) weights /= sum(weights) r, v, sr = port.test_weights(weights) rets.append(r) vols.append(v) rets = np.array(rets) vols = np.array(vols) Explanation: Let us implement a Monte Carlo simulation over potential portfolio weights. End of explanation import matplotlib.pyplot as plt %matplotlib inline plt.figure(figsize=(10, 6)) plt.scatter(vols, rets, c=rets / vols, marker='o', cmap='coolwarm') plt.xlabel('expected volatility') plt.ylabel('expected return') plt.colorbar(label='Sharpe ratio'); Explanation: And the simulation results visualized. End of explanation port.optimize('Return') # maximizes expected return of portfolio # no volatility constraint print(port) Explanation: Optimizing Portfolio Composition One of the major application areas of the mean-variance portfolio theory and therewith of this DX Analytics class it the optimization of the portfolio composition. Different target functions can be used to this end. Return The first target function might be the portfolio return. 
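The class keeps its optimizer internal; as an independent illustration (not the DX Analytics implementation), the same kind of constrained problem can be sketched with scipy: maximize the expected portfolio return subject to full investment, no short sales and an optional volatility cap. All numbers below are made up.

import numpy as np
from scipy.optimize import minimize

np.random.seed(1)
rets = np.random.randn(250, 4) * 0.01 + 0.0005   # synthetic daily returns for 4 assets
mu = rets.mean(axis=0) * 252                     # annualized mean returns
Sigma = np.cov(rets.T) * 252                     # annualized covariance matrix

def neg_port_return(w):
    return -np.dot(w, mu)

cons = [{'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},                               # fully invested
        {'type': 'ineq', 'fun': lambda w: 0.2 - np.sqrt(np.dot(w, np.dot(Sigma, w)))}]  # vol <= 0.2 (arbitrary cap)
bnds = [(0.0, 1.0)] * 4                                                                 # no short sales
res = minimize(neg_port_return, np.ones(4) / 4, method='SLSQP', bounds=bnds, constraints=cons)
print(res.x.round(3), -res.fun)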
End of explanation port.optimize('Return', constraint=0.225, constraint_type='Exact') # interpretes volatility constraint as equality print(port) Explanation: Instead of maximizing the portfolio return without any constraints, you can also set a (sensible/possible) maximum target volatility level as a constraint. Both, in an exact sense ("equality constraint") ... End of explanation port.optimize('Return', constraint=0.4, constraint_type='Bound') # interpretes volatility constraint as inequality (upper bound) print(port) Explanation: ... or just a an upper bound ("inequality constraint"). End of explanation port.optimize('Vol') # minimizes expected volatility of portfolio # no return constraint print(port) Explanation: Risk The class also allows you to minimize portfolio risk. End of explanation port.optimize('Vol', constraint=0.175, constraint_type='Exact') # interpretes return constraint as equality print(port) port.optimize('Vol', constraint=0.20, constraint_type='Bound') # interpretes return constraint as inequality (upper bound) print(port) Explanation: And, as before, to set constraints (in this case) for the target return level. End of explanation port.optimize('Sharpe') # maximize Sharpe ratio print(port) Explanation: Sharpe Ratio Often, the target of the portfolio optimization efforts is the so called Sharpe ratio. The mean_variance_portfolio class of DX Analytics assumes a risk-free rate of zero in this context. End of explanation %%time evols, erets = port.get_efficient_frontier(100) # 100 points of the effient frontier Explanation: Efficient Frontier Another application area is to derive the efficient frontier in the mean-variance space. These are all these portfolios for which there is no portfolio with both lower risk and higher return. The method get_efficient_frontier yields the desired results. End of explanation plt.figure(figsize=(10, 6)) plt.scatter(vols, rets, c=rets / vols, marker='o') plt.scatter(evols, erets, c=erets / evols, marker='o', cmap='coolwarm') plt.xlabel('expected volatility') plt.ylabel('expected return') plt.colorbar(label='Sharpe ratio') Explanation: The plot with the random and efficient portfolios. End of explanation %%time cml, optv, optr = port.get_capital_market_line(riskless_asset=0.05) # capital market line for effiecient frontier and risk-less short rate cml # lambda function for capital market line Explanation: Capital Market Line The capital market line is another key element of the mean-variance portfolio approach representing all those risk-return combinations (in mean-variance space) that are possible to form from a risk-less money market account and the market portfolio (or another appropriate substitute efficient portfolio). End of explanation plt.figure(figsize=(10, 6)) plt.plot(evols, erets, lw=2.0, label='efficient frontier') plt.plot((0, 0.4), (cml(0), cml(0.4)), lw=2.0, label='capital market line') plt.plot(optv, optr, 'r*', markersize=10, label='optimal portfolio') plt.legend(loc=0) plt.ylim(0) plt.xlabel('expected volatility') plt.ylabel('expected return') Explanation: The following plot illustrates that the capital market line has an ordinate value equal to the risk-free rate (the safe return of the money market account) and is tangent to the efficient frontier. End of explanation optr optv Explanation: Portfolio return and risk of the efficient portfolio used are: End of explanation port.optimize('Vol', constraint=optr, constraint_type='Exact') print(port) Explanation: The portfolio composition can be derived as follows. 
End of explanation port.optimize('Return', constraint=optv, constraint_type='Exact') print(port) Explanation: Or also in this way. End of explanation symbols = list(port.get_available_symbols())[:7] symbols ma = market_environment('ma', dt.date(2010, 1, 1)) ma.add_list('symbols', symbols) ma.add_constant('source', 'google') ma.add_constant('final date', dt.date(2014, 3, 1)) Explanation: More Assets As a larger, more realistic example, consider a larger set of assets. End of explanation %%time djia = mean_variance_portfolio('djia', ma) # defining the portfolio and retrieving the data %%time djia.optimize('Vol') print(djia.variance, djia.variance ** 0.5) # minimium variance & volatility in decimals Explanation: Data retrieval in this case takes a bit. End of explanation %%time evols, erets = djia.get_efficient_frontier(25) # efficient frontier of DJIA Explanation: Given the larger data set now used, efficient frontier ... End of explanation %%time cml, optv, optr = djia.get_capital_market_line(riskless_asset=0.01) # capital market line and optimal (tangent) portfolio plt.figure(figsize=(10, 6)) plt.plot(evols, erets, lw=2.0, label='efficient frontier') plt.plot((0, 0.4), (cml(0), cml(0.4)), lw=2.0, label='capital market line') plt.plot(optv, optr, 'r*', markersize=10, label='optimal portfolio') plt.legend(loc=0) plt.ylim(0) plt.xlabel('expected volatility') plt.ylabel('expected return') Explanation: ... and capital market line derivations take also longer. End of explanation
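In closed form, the capital market line returned by get_capital_market_line is the usual tangency relation, consistent with the plots above: with r_f the riskless rate passed in and (sigma_M, mu_M) = (optv, optr) the tangency portfolio's volatility and return,

R(\sigma) \;=\; r_f \;+\; \frac{\mu_M - r_f}{\sigma_M}\,\sigma,

so the intercept is the riskless rate and the line touches the efficient frontier exactly at the optimal portfolio.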
14,832
Given the following text description, write Python code to implement the functionality described below step by step Description: Problem 1 Write a function that takes a list of 0s and 1s and produces the corresponding integer. The equation for converting a list $L = [l_1, l_2, ..., l_n]$ of 0's and 1's to binary is $\sum_i l_i*2^i$. What is the integer representation of [1, 0, 0, 0, 1, 1, 0, 1]? Step1: One note - there are actually 2 possible solutions to this problem, depending on which value of [1, 0, 0, 0, 1, 1, 0, 1] is treated as the least-significant bit (LSB). The solution above treats the left-most bit as the LSB (i.e. the bit that gets multiplied by $2^0=1$). How would you rewrite the function to treat the right-most bit as the LSB? Problem 2 Read data/alice_in_wonderland.txt into memory. How many characters does it contain? How does this compare to its size on disk? Print out the unique non-ASCII characters in Alice in Wonderland (hint Step2: So this tells us that there are non-ASCII characters (characters that use more than 1 byte) in the file Step3: Problem 3 Iterating over good_movies, print the name of the movies that Ben Affleck stars in. Find the total number of Oscar nominations for 2016 movies in the dataset. Step4: Problem 4 Create a NumPy array with 100,000 random integers between 1 and 100. Then, write two functions (in pure Python, not using built-in NumPy functions) Step5: A weight vector needs to sum to 1. So we'll create a vector of random numbers between 0 and 1 and normalize it (divide by its sum) so that it sums to 1.
Python Code: def to_binary(x): the_sum = 0 # enumerate returns pairs of values from `x` # as well as the index of each value for index, value in enumerate(x): the_sum += value * 2**index return the_sum my_list = [1, 1] to_binary(my_list) my_list = [1, 0, 0, 0, 1, 1, 0, 1] to_binary(my_list) Explanation: Problem 1 Write a function that takes a list of 0s and 1s and produces the corresponding integer. The equation for converting a list $L = [l_1, l_2, ..., l_n]$ of 0's and 1's to binary is $\sum_i l_i*2^i$. What is the integer representation of [1, 0, 0, 0, 1, 1, 0, 1]? End of explanation import os with open('data/alice_in_wonderland.txt', 'r') as file: alice = file.read() # how many characters are in Alice? print('number of characters is {}'.format(len(alice))) # how large is the file on disk? print('number of bytes on disk is {}'.format(os.path.getsize('data/alice_in_wonderland.txt'))) Explanation: One note - there are actually 2 possible solutions to this problem, depending on which value of [1, 0, 0, 0, 1, 1, 0, 1] is treated as the least-significant bit (LSB). The solution above treats the left-most bit as the LSB (i.e. the bit that gets multiplied by $2^0=1$). How would you rewrite the function to treat the right-most bit as the LSB? Problem 2 Read data/alice_in_wonderland.txt into memory. How many characters does it contain? How does this compare to its size on disk? Print out the unique non-ASCII characters in Alice in Wonderland (hint: non-ASCII means that the number of bytes used is greater than 1). Write the first 10,000 characters of Alice in Wonderland as text and as a pickle. What are the sizes of each file on disk? End of explanation # non-ASCI characters are characters that use more # than 1 byte to represent the character non_ascii = [] for character in alice: # convert character to Unicode bytes and check how many bytes there are if len(bytes(character, 'UTF-8')) > 1: non_ascii.append(character) # convert list to set to get only the unique characters print('unique non-ASCII characters:', set(non_ascii)) import pickle # open a file in write mode ('w') to write plain text with open('data/alice_partial.txt', 'w') as file: file.write(alice[:10000]) # open a file in write-binary ('wb') mode to write pickle protocol with open('data/alice_partial.pickle', 'wb') as file: pickle.dump(alice[:10000], file) print('size of plain text file: {}'.format(os.path.getsize('data/alice_partial.txt'))) print('size of pickled file: {}'.format(os.path.getsize('data/alice_partial.pickle'))) Explanation: So this tells us that there are non-ASCII characters (characters that use more than 1 byte) in the file End of explanation import json # use the `json` library to read json-structured plain text into Python objects with open('data/good_movies.json', 'r') as file: good_movies = json.loads(file.read()) # iterate over the movies, checking the list of stars for each for movie in good_movies: if 'Ben Affleck' in movie['stars']: print(movie['title']) # iterate over the movies, tallying the Oscars for movies in 2016 nominations_2016 = 0 for movie in good_movies: if movie['year'] == 2016: nominations_2016 += movie['oscar_nominations'] print(nominations_2016) Explanation: Problem 3 Iterating over good_movies, print the name of the movies that Ben Affleck stars in. Find the total number of Oscar nominations for 2016 movies in the dataset. 
End of explanation import numpy as np rand_array = np.random.randint(1, high=100, size=100000) def my_average(x): the_sum = 0 for el in x: the_sum += el return the_sum / len(x) def my_stdev(x): the_sum = 0 the_avg = my_average(x) for xi in x: the_sum += (xi - the_avg) ** 2 return np.sqrt(the_sum / len(x)) def my_weighted_average(x, weights): the_sum = 0 for el, weight in zip(x, weights): the_sum += el * weight return the_sum print('average:', my_average(rand_array)) print('standard deviation:', my_stdev(rand_array)) Explanation: Problem 4 Create a NumPy array with 100,000 random integers between 1 and 100. Then, write two functions (in pure Python, not using built-in NumPy functions): Compute the average Compute the standard deviation Create weight vector of 100,000 elements (the sum of the elements is 1). Compute the weighted average of your first vector with these weights. End of explanation rand_weights = np.random.random(size=100000) rand_weights /= np.sum(rand_weights) print('weighted average:', my_weighted_average(rand_array, rand_weights)) Explanation: A weight vector needs to sum to 1. So we'll create a vector of random numbers between 0 and 1 and normalize it (divide by its sum) so that it sums to 1. End of explanation
14,833
Given the following text description, write Python code to implement the functionality described below step by step Description: readwrite module pgmpy pgmpy is a python library for creation, manipulation and implementation of Probabilistic graph models. There are various standard file formats for representing PGM data. PGM data basically consists of graph, a distribution assoicated to each node and a few other attributes of a graph. pgmpy has a functionality to read networks from and write networks to these standard file formats. Currently pgmpy supports 5 file formats ProbModelXML, PomDPX, XMLBIF, XMLBeliefNetwork and UAI file formats. Using these modules, models can be specified in a uniform file format and readily converted to bayesian or markov model objects. Now, Let's read a ProbModel XML File and get the corresponding model instance of the probmodel. Step1: Now to get the corresponding model instance we need get_model Step2: Now we can query this model accoring to our requirements. It is an instance of BayesianModel or MarkovModel depending on the type of the model which is given. Suppose we want to know all the nodes in the given model, we can do Step3: To get all the edges we can use model.edges method. Step4: To get all the cpds of the given model we can use model.get_cpds and to get the corresponding values we can iterate over each cpd and call the corresponding get_cpd method. Step5: pgmpy not only allows us to read from the specific file format but also helps us to write the given model into the specific file format. Let's write a sample model into Probmodel XML file. For that first define our data for the model. Step6: Now let's create a BayesianModel for this data. Step7: To get the data which we need to give to the ProbModelXMLWriter to get the corresponding fileformat we need to use the method get_probmodel_data. This method is only specific to ProbModelXML file, for other file formats we would directly pass the model to the given Writer Class. Step8: To write the xml data into the file we can use the method write_file of the given Writer class. Step9: General WorkFlow of the readwrite module pgmpy.readwrite.[fileformat]Reader is base class for reading the given file format. Replace file format with the desired fileforamt from which you want to read the file. In this base class there are different methods defined to parse the given file. For example for XMLBelief Network various methods which are defined are as follows Step10: get_model Step11: pgmpy.readwrite.[fileformat]Writer is base class for writing the model into the given file format. It takes a model as an argument which can be an instance of BayesianModel, MarkovModel. Replace file fomat with the desired fileforamt from which you want to read the file. In this base class there are different methods defined to set the contents of the new file to be created from the given model. For example for XMLBelief Network various methods such as set_analysisnotebook, etc are defined which helps to set up the network data.
Python Code: from pgmpy.readwrite import ProbModelXMLReader reader_string = ProbModelXMLReader('../files/example.pgmx') Explanation: readwrite module pgmpy pgmpy is a python library for creation, manipulation and implementation of Probabilistic graph models. There are various standard file formats for representing PGM data. PGM data basically consists of graph, a distribution assoicated to each node and a few other attributes of a graph. pgmpy has a functionality to read networks from and write networks to these standard file formats. Currently pgmpy supports 5 file formats ProbModelXML, PomDPX, XMLBIF, XMLBeliefNetwork and UAI file formats. Using these modules, models can be specified in a uniform file format and readily converted to bayesian or markov model objects. Now, Let's read a ProbModel XML File and get the corresponding model instance of the probmodel. End of explanation model = reader_string.get_model() Explanation: Now to get the corresponding model instance we need get_model End of explanation print(model.nodes()) Explanation: Now we can query this model accoring to our requirements. It is an instance of BayesianModel or MarkovModel depending on the type of the model which is given. Suppose we want to know all the nodes in the given model, we can do: End of explanation model.edges() Explanation: To get all the edges we can use model.edges method. End of explanation cpds = model.get_cpds() for cpd in cpds: print(cpd.get_cpd()) Explanation: To get all the cpds of the given model we can use model.get_cpds and to get the corresponding values we can iterate over each cpd and call the corresponding get_cpd method. End of explanation import numpy as np edges_list = [('VisitToAsia', 'Tuberculosis'), ('LungCancer', 'TuberculosisOrCancer'), ('Smoker', 'LungCancer'), ('Smoker', 'Bronchitis'), ('Tuberculosis', 'TuberculosisOrCancer'), ('Bronchitis', 'Dyspnea'), ('TuberculosisOrCancer', 'Dyspnea'), ('TuberculosisOrCancer', 'X-ray')] nodes = {'Smoker': {'States': {'no': {}, 'yes': {}}, 'role': 'chance', 'type': 'finiteStates', 'Coordinates': {'y': '52', 'x': '568'}, 'AdditionalProperties': {'Title': 'S', 'Relevance': '7.0'}}, 'Bronchitis': {'States': {'no': {}, 'yes': {}}, 'role': 'chance', 'type': 'finiteStates', 'Coordinates': {'y': '181', 'x': '698'}, 'AdditionalProperties': {'Title': 'B', 'Relevance': '7.0'}}, 'VisitToAsia': {'States': {'no': {}, 'yes': {}}, 'role': 'chance', 'type': 'finiteStates', 'Coordinates': {'y': '58', 'x': '290'}, 'AdditionalProperties': {'Title': 'A', 'Relevance': '7.0'}}, 'Tuberculosis': {'States': {'no': {}, 'yes': {}}, 'role': 'chance', 'type': 'finiteStates', 'Coordinates': {'y': '150', 'x': '201'}, 'AdditionalProperties': {'Title': 'T', 'Relevance': '7.0'}}, 'X-ray': {'States': {'no': {}, 'yes': {}}, 'role': 'chance', 'AdditionalProperties': {'Title': 'X', 'Relevance': '7.0'}, 'Coordinates': {'y': '322', 'x': '252'}, 'Comment': 'Indica si el test de rayos X ha sido positivo', 'type': 'finiteStates'}, 'Dyspnea': {'States': {'no': {}, 'yes': {}}, 'role': 'chance', 'type': 'finiteStates', 'Coordinates': {'y': '321', 'x': '533'}, 'AdditionalProperties': {'Title': 'D', 'Relevance': '7.0'}}, 'TuberculosisOrCancer': {'States': {'no': {}, 'yes': {}}, 'role': 'chance', 'type': 'finiteStates', 'Coordinates': {'y': '238', 'x': '336'}, 'AdditionalProperties': {'Title': 'E', 'Relevance': '7.0'}}, 'LungCancer': {'States': {'no': {}, 'yes': {}}, 'role': 'chance', 'type': 'finiteStates', 'Coordinates': {'y': '152', 'x': '421'}, 'AdditionalProperties': {'Title': 'L', 
'Relevance': '7.0'}}} edges = {'LungCancer': {'TuberculosisOrCancer': {'directed': 'true'}}, 'Smoker': {'LungCancer': {'directed': 'true'}, 'Bronchitis': {'directed': 'true'}}, 'Dyspnea': {}, 'X-ray': {}, 'VisitToAsia': {'Tuberculosis': {'directed': 'true'}}, 'TuberculosisOrCancer': {'X-ray': {'directed': 'true'}, 'Dyspnea': {'directed': 'true'}}, 'Bronchitis': {'Dyspnea': {'directed': 'true'}}, 'Tuberculosis': {'TuberculosisOrCancer': {'directed': 'true'}}} cpds = [{'Values': np.array([[0.95, 0.05], [0.02, 0.98]]), 'Variables': {'X-ray': ['TuberculosisOrCancer']}}, {'Values': np.array([[0.7, 0.3], [0.4, 0.6]]), 'Variables': {'Bronchitis': ['Smoker']}}, {'Values': np.array([[0.9, 0.1, 0.3, 0.7], [0.2, 0.8, 0.1, 0.9]]), 'Variables': {'Dyspnea': ['TuberculosisOrCancer', 'Bronchitis']}}, {'Values': np.array([[0.99], [0.01]]), 'Variables': {'VisitToAsia': []}}, {'Values': np.array([[0.5], [0.5]]), 'Variables': {'Smoker': []}}, {'Values': np.array([[0.99, 0.01], [0.9, 0.1]]), 'Variables': {'LungCancer': ['Smoker']}}, {'Values': np.array([[0.99, 0.01], [0.95, 0.05]]), 'Variables': {'Tuberculosis': ['VisitToAsia']}}, {'Values': np.array([[1, 0, 0, 1], [0, 1, 0, 1]]), 'Variables': {'TuberculosisOrCancer': ['LungCancer', 'Tuberculosis']}}] Explanation: pgmpy not only allows us to read from the specific file format but also helps us to write the given model into the specific file format. Let's write a sample model into Probmodel XML file. For that first define our data for the model. End of explanation from pgmpy.models import BayesianModel from pgmpy.factors import TabularCPD model = BayesianModel(edges_list) for node in nodes: model.node[node] = nodes[node] for edge in edges: model.edge[edge] = edges[edge] tabular_cpds = [] for cpd in cpds: var = list(cpd['Variables'].keys())[0] evidence = cpd['Variables'][var] values = cpd['Values'] states = len(nodes[var]['States']) evidence_card = [len(nodes[evidence_var]['States']) for evidence_var in evidence] tabular_cpds.append( TabularCPD(var, states, values, evidence, evidence_card)) model.add_cpds(*tabular_cpds) from pgmpy.readwrite import ProbModelXMLWriter, get_probmodel_data Explanation: Now let's create a BayesianModel for this data. End of explanation model_data = get_probmodel_data(model) writer = ProbModelXMLWriter(model_data=model_data) print(writer) Explanation: To get the data which we need to give to the ProbModelXMLWriter to get the corresponding fileformat we need to use the method get_probmodel_data. This method is only specific to ProbModelXML file, for other file formats we would directly pass the model to the given Writer Class. End of explanation writer.write_file('probmodelxml.pgmx') Explanation: To write the xml data into the file we can use the method write_file of the given Writer class. End of explanation from pgmpy.readwrite.XMLBeliefNetwork import XBNReader reader = XBNReader('../files/xmlbelief.xml') Explanation: General WorkFlow of the readwrite module pgmpy.readwrite.[fileformat]Reader is base class for reading the given file format. Replace file format with the desired fileforamt from which you want to read the file. In this base class there are different methods defined to parse the given file. For example for XMLBelief Network various methods which are defined are as follows: End of explanation model = reader.get_model() print(model.nodes()) print(model.edges()) Explanation: get_model: It returns an instance of the given model, for ex, BayesianModel in cases of XMLBelief format. 
End of explanation from pgmpy.models import BayesianModel from pgmpy.factors import TabularCPD import numpy as np nodes = {'c': {'STATES': ['Present', 'Absent'], 'DESCRIPTION': '(c) Brain Tumor', 'YPOS': '11935', 'XPOS': '15250', 'TYPE': 'discrete'}, 'a': {'STATES': ['Present', 'Absent'], 'DESCRIPTION': '(a) Metastatic Cancer', 'YPOS': '10465', 'XPOS': '13495', 'TYPE': 'discrete'}, 'b': {'STATES': ['Present', 'Absent'], 'DESCRIPTION': '(b) Serum Calcium Increase', 'YPOS': '11965', 'XPOS': '11290', 'TYPE': 'discrete'}, 'e': {'STATES': ['Present', 'Absent'], 'DESCRIPTION': '(e) Papilledema', 'YPOS': '13240', 'XPOS': '17305', 'TYPE': 'discrete'}, 'd': {'STATES': ['Present', 'Absent'], 'DESCRIPTION': '(d) Coma', 'YPOS': '12985', 'XPOS': '13960', 'TYPE': 'discrete'}} model = BayesianModel([('b', 'd'), ('a', 'b'), ('a', 'c'), ('c', 'd'), ('c', 'e')]) cpd_distribution = {'a': {'TYPE': 'discrete', 'DPIS': np.array([[0.2, 0.8]])}, 'e': {'TYPE': 'discrete', 'DPIS': np.array([[0.8, 0.2], [0.6, 0.4]]), 'CONDSET': ['c'], 'CARDINALITY': [2]}, 'b': {'TYPE': 'discrete', 'DPIS': np.array([[0.8, 0.2], [0.2, 0.8]]), 'CONDSET': ['a'], 'CARDINALITY': [2]}, 'c': {'TYPE': 'discrete', 'DPIS': np.array([[0.2, 0.8], [0.05, 0.95]]), 'CONDSET': ['a'], 'CARDINALITY': [2]}, 'd': {'TYPE': 'discrete', 'DPIS': np.array([[0.8, 0.2], [0.9, 0.1], [0.7, 0.3], [0.05, 0.95]]), 'CONDSET': ['b', 'c'], 'CARDINALITY': [2, 2]}} tabular_cpds = [] for var, values in cpd_distribution.items(): evidence = values['CONDSET'] if 'CONDSET' in values else [] cpd = values['DPIS'] evidence_card = values['CARDINALITY'] if 'CARDINALITY' in values else [] states = nodes[var]['STATES'] cpd = TabularCPD(var, len(states), cpd, evidence=evidence, evidence_card=evidence_card) tabular_cpds.append(cpd) model.add_cpds(*tabular_cpds) for var, properties in nodes.items(): model.node[var] = properties from pgmpy.readwrite.XMLBeliefNetwork import XBNWriter writer = XBNWriter(model = model) Explanation: pgmpy.readwrite.[fileformat]Writer is base class for writing the model into the given file format. It takes a model as an argument which can be an instance of BayesianModel, MarkovModel. Replace file fomat with the desired fileforamt from which you want to read the file. In this base class there are different methods defined to set the contents of the new file to be created from the given model. For example for XMLBelief Network various methods such as set_analysisnotebook, etc are defined which helps to set up the network data. End of explanation
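The read-then-write calls above can be chained into a small round-trip helper. The following is only a sketch assembled from the ProbModelXML reader and writer calls already shown in this record; the file paths are placeholders to adjust.

from pgmpy.readwrite import ProbModelXMLReader, ProbModelXMLWriter, get_probmodel_data

def copy_probmodel(src_path, dst_path):
    # Parse the source .pgmx file and build the corresponding model instance
    model = ProbModelXMLReader(src_path).get_model()
    # get_probmodel_data is specific to the ProbModelXML format, as noted above
    data = get_probmodel_data(model)
    # Serialize the model data back out to a new .pgmx file
    ProbModelXMLWriter(model_data=data).write_file(dst_path)
    return model

# model = copy_probmodel('../files/example.pgmx', 'example_copy.pgmx')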
14,834
Given the following text description, write Python code to implement the functionality described below step by step Description: Think Bayes Step1: Warm-up exercises Exercise Step2: Exercise Step3: Exercise Step4: Exercise Step8: The Boston Bruins problem The Hockey suite contains hypotheses about the goal scoring rate for one team against the other. The prior is Gaussian, with mean and variance based on previous games in the league. The Likelihood function takes as data the number of goals scored in a game. Step9: Now we can initialize a suite for each team Step10: Here's what the priors look like Step11: And we can update each suite with the scores from the first 4 games. Step13: To predict the number of goals scored in the next game we can compute, for each hypothetical value of $\lambda$, a Poisson distribution of goals scored, then make a weighted mixture of Poissons Step14: Here's what the results look like. Step15: Now we can compute the probability that the Bruins win, lose, or tie in regulation time. Step17: If the game goes into overtime, we have to compute the distribution of t, the time until the first goal, for each team. For each hypothetical value of $\lambda$, the distribution of t is exponential, so the predictive distribution is a mixture of exponentials. Step18: Here's what the predictive distributions for t look like. Step19: In overtime the first team to score wins, so the probability of winning is the probability of generating a smaller value of t Step20: Finally, we can compute the overall chance that the Bruins win, either in regulation or overtime. Step21: Exercises Exercise Step22: Exercise
Python Code: from __future__ import print_function, division % matplotlib inline import warnings warnings.filterwarnings('ignore') import math import numpy as np from thinkbayes2 import Pmf, Cdf, Suite, Joint import thinkplot Explanation: Think Bayes: Chapter 7 This notebook presents code and exercises from Think Bayes, second edition. Copyright 2016 Allen B. Downey MIT License: https://opensource.org/licenses/MIT End of explanation # Solution from scipy.stats import poisson poisson.pmf(3, 2.9) # Solution from thinkbayes2 import EvalPoissonPmf EvalPoissonPmf(3, 2.9) # Solution from thinkbayes2 import MakePoissonPmf pmf = MakePoissonPmf(2.9, high=10) thinkplot.Hist(pmf) thinkplot.Config(xlabel='Number of goals', ylabel='PMF', xlim=[-0.5, 10.5]) Explanation: Warm-up exercises Exercise: Suppose that goal scoring in hockey is well modeled by a Poisson process, and that the long-run goal-scoring rate of the Boston Bruins against the Vancouver Canucks is 2.9 goals per game. In their next game, what is the probability that the Bruins score exactly 3 goals? Plot the PMF of k, the number of goals they score in a game. End of explanation # Solution pmf = MakePoissonPmf(2.9, high=30) total = pmf + pmf + pmf thinkplot.Hist(total) thinkplot.Config(xlabel='Number of goals', ylabel='PMF', xlim=[-0.5, 22.5]) total[9] # Solution EvalPoissonPmf(9, 3 * 2.9) Explanation: Exercise: Assuming again that the goal scoring rate is 2.9, what is the probability of scoring a total of 9 goals in three games? Answer this question two ways: Compute the distribution of goals scored in one game and then add it to itself twice to find the distribution of goals scored in 3 games. Use the Poisson PMF with parameter $\lambda t$, where $\lambda$ is the rate in goals per game and $t$ is the duration in games. End of explanation # Solution from thinkbayes2 import MakeExponentialPmf pmf = MakeExponentialPmf(lam=2.6, high=2.5) thinkplot.Pdf(pmf) thinkplot.Config(xlabel='Time between goals', ylabel='PMF') # Solution from scipy.stats import expon expon.cdf(1/3, scale=1/2.6) # Solution from thinkbayes2 import EvalExponentialCdf EvalExponentialCdf(1/3, 2.6) Explanation: Exercise: Suppose that the long-run goal-scoring rate of the Canucks against the Bruins is 2.6 goals per game. Plot the distribution of t, the time until the Canucks score their first goal. In their next game, what is the probability that the Canucks score during the first period (that is, the first third of the game)? Hint: thinkbayes2 provides MakeExponentialPmf and EvalExponentialCdf. End of explanation # Solution 1 - EvalExponentialCdf(1, 2.6) # Solution EvalPoissonPmf(0, 2.6) Explanation: Exercise: Assuming again that the goal scoring rate is 2.8, what is the probability that the Canucks get shut out (that is, don't score for an entire game)? Answer this question two ways, using the CDF of the exponential distribution and the PMF of the Poisson distribution. End of explanation from thinkbayes2 import MakeNormalPmf from thinkbayes2 import EvalPoissonPmf class Hockey(Suite): Represents hypotheses about the scoring rate for a team. def __init__(self, label=None): Initializes the Hockey object. label: string mu = 2.8 sigma = 0.3 pmf = MakeNormalPmf(mu, sigma, num_sigmas=4, n=101) Suite.__init__(self, pmf, label=label) def Likelihood(self, data, hypo): Computes the likelihood of the data under the hypothesis. Evaluates the Poisson PMF for lambda and k. 
hypo: goal scoring rate in goals per game data: goals scored in one game lam = hypo k = data like = EvalPoissonPmf(k, lam) return like Explanation: The Boston Bruins problem The Hockey suite contains hypotheses about the goal scoring rate for one team against the other. The prior is Gaussian, with mean and variance based on previous games in the league. The Likelihood function takes as data the number of goals scored in a game. End of explanation suite1 = Hockey('bruins') suite2 = Hockey('canucks') Explanation: Now we can initialize a suite for each team: End of explanation thinkplot.PrePlot(num=2) thinkplot.Pdf(suite1) thinkplot.Pdf(suite2) thinkplot.Config(xlabel='Goals per game', ylabel='Probability') Explanation: Here's what the priors look like: End of explanation suite1.UpdateSet([0, 2, 8, 4]) suite2.UpdateSet([1, 3, 1, 0]) thinkplot.PrePlot(num=2) thinkplot.Pdf(suite1) thinkplot.Pdf(suite2) thinkplot.Config(xlabel='Goals per game', ylabel='Probability') suite1.Mean(), suite2.Mean() Explanation: And we can update each suite with the scores from the first 4 games. End of explanation from thinkbayes2 import MakeMixture from thinkbayes2 import MakePoissonPmf def MakeGoalPmf(suite, high=10): Makes the distribution of goals scored, given distribution of lam. suite: distribution of goal-scoring rate high: upper bound returns: Pmf of goals per game metapmf = Pmf() for lam, prob in suite.Items(): pmf = MakePoissonPmf(lam, high) metapmf.Set(pmf, prob) mix = MakeMixture(metapmf, label=suite.label) return mix Explanation: To predict the number of goals scored in the next game we can compute, for each hypothetical value of $\lambda$, a Poisson distribution of goals scored, then make a weighted mixture of Poissons: End of explanation goal_dist1 = MakeGoalPmf(suite1) goal_dist2 = MakeGoalPmf(suite2) thinkplot.PrePlot(num=2) thinkplot.Pmf(goal_dist1) thinkplot.Pmf(goal_dist2) thinkplot.Config(xlabel='Goals', ylabel='Probability', xlim=[-0.7, 11.5]) goal_dist1.Mean(), goal_dist2.Mean() Explanation: Here's what the results look like. End of explanation diff = goal_dist1 - goal_dist2 p_win = diff.ProbGreater(0) p_loss = diff.ProbLess(0) p_tie = diff.Prob(0) print('Prob win, loss, tie:', p_win, p_loss, p_tie) Explanation: Now we can compute the probability that the Bruins win, lose, or tie in regulation time. End of explanation from thinkbayes2 import MakeExponentialPmf def MakeGoalTimePmf(suite): Makes the distribution of time til first goal. suite: distribution of goal-scoring rate returns: Pmf of goals per game metapmf = Pmf() for lam, prob in suite.Items(): pmf = MakeExponentialPmf(lam, high=2.5, n=1001) metapmf.Set(pmf, prob) mix = MakeMixture(metapmf, label=suite.label) return mix Explanation: If the game goes into overtime, we have to compute the distribution of t, the time until the first goal, for each team. For each hypothetical value of $\lambda$, the distribution of t is exponential, so the predictive distribution is a mixture of exponentials. End of explanation time_dist1 = MakeGoalTimePmf(suite1) time_dist2 = MakeGoalTimePmf(suite2) thinkplot.PrePlot(num=2) thinkplot.Pmf(time_dist1) thinkplot.Pmf(time_dist2) thinkplot.Config(xlabel='Games until goal', ylabel='Probability') time_dist1.Mean(), time_dist2.Mean() Explanation: Here's what the predictive distributions for t look like. 
End of explanation p_win_in_overtime = time_dist1.ProbLess(time_dist2) p_adjust = time_dist1.ProbEqual(time_dist2) p_win_in_overtime += p_adjust / 2 print('p_win_in_overtime', p_win_in_overtime) Explanation: In overtime the first team to score wins, so the probability of winning is the probability of generating a smaller value of t: End of explanation p_win_overall = p_win + p_tie * p_win_in_overtime print('p_win_overall', p_win_overall) Explanation: Finally, we can compute the overall chance that the Bruins win, either in regulation or overtime. End of explanation # Solution suite1.Update(0) suite2.Update(0) time_dist1 = MakeGoalTimePmf(suite1) time_dist2 = MakeGoalTimePmf(suite2) p_win_in_overtime = time_dist1.ProbLess(time_dist2) p_adjust = time_dist1.ProbEqual(time_dist2) p_win_in_overtime += p_adjust / 2 print('p_win_in_overtime', p_win_in_overtime) p_win_overall = p_win + p_tie * p_win_in_overtime print('p_win_overall', p_win_overall) Explanation: Exercises Exercise: To make the model of overtime more correct, we could update both suites with 0 goals in one game, before computing the predictive distribution of t. Make this change and see what effect it has on the results. End of explanation from thinkbayes2 import MakeGammaPmf xs = np.linspace(0, 8, 101) pmf = MakeGammaPmf(xs, 1.3) thinkplot.Pdf(pmf) thinkplot.Config(xlabel='Goals per game') pmf.Mean() Explanation: Exercise: In the final match of the 2014 FIFA World Cup, Germany defeated Argentina 1-0. What is the probability that Germany had the better team? What is the probability that Germany would win a rematch? For a prior distribution on the goal-scoring rate for each team, use a gamma distribution with parameter 1.3. End of explanation
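For the World Cup exercise at the end, one way to cross-check a thinkbayes2 solution is a plain NumPy/SciPy grid computation: put the gamma(1.3) prior on each team's scoring rate, update with a Poisson likelihood for the observed 1-0 score, and sum the joint posterior. This is only a sketch, not the book's solution, and the grid bounds are arbitrary choices.

import numpy as np
from scipy import stats

lams = np.linspace(0.01, 12, 1001)              # grid of goal-scoring rates
prior = stats.gamma(a=1.3).pdf(lams)

def grid_posterior(goals):
    # Bayes update on the grid: prior times Poisson likelihood, then normalize
    post = prior * stats.poisson.pmf(goals, lams)
    return post / post.sum()

post_germany = grid_posterior(1)                 # Germany scored 1
post_argentina = grid_posterior(0)               # Argentina scored 0

# P(Germany's rate > Argentina's rate): sum the joint grid where lam_g > lam_a
joint = np.outer(post_germany, post_argentina)
p_better = joint[np.greater.outer(lams, lams)].sum()
print(p_better)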
14,835
Given the following text description, write Python code to implement the functionality described below step by step Description: IST256 Lesson 10 HTTP and Network Programming Assigned Readings From https Step1: A. str B. int C. dict D. list Vote Now Step2: A. str B. int C. dict D. list Vote Now Step3: A. 2 B. 3 C. KeyError D. IndexError Vote Now Step4: A. 2 B. 3 C. KeyError D. IndexError Vote Now Step5: A. 2 B. 3 C. KeyError D. IndexError Vote Now Step6: A. 2 B. 3 C. KeyError D. IndexError Vote Now
Python Code: x = { 'a' : [1,2,3,4], 'b' : 'rta', 'c': { 'r' : 3, 't' : 2} } print( type(x['a']) ) Explanation: IST256 Lesson 10 HTTP and Network Programming Assigned Readings From https://ist256.github.io/spring2021/readings/Web-APIs-In-Python.html Links Participation: https://poll.ist256.com In-Class Questions: ZOOM CHAT! Agenda Homework How the Web Works Making HTTP requests using the Python requests module Parsing json responses into Python objects Procedure for calling API's How to read API documentation Project (49 out of 250 points) http://ist256.com/syllabus/#project-p1-p4 No grade until the end. only feedback. Project documents will be released after the 3rd exam. FEQT (Future Exam Questions Training) 1 What is the output of the following code? End of explanation x = { 'a' : [1,2,3,4], 'b' : 'rta', 'c': { 'r' : 3, 't' : 2} } print( type(x['b'][1]) ) Explanation: A. str B. int C. dict D. list Vote Now: https://poll.ist256.com FEQT (Future Exam Questions Training) 2 What is the output of the following code? End of explanation x = { 'a' : [1,2,3,4], 'b' : 'rta', 'c': { 'r' : 3, 't' : 2} } print( x['a'][2]) Explanation: A. str B. int C. dict D. list Vote Now: https://poll.ist256.com FEQT (Future Exam Questions Training) 3 What is the output of the following code? End of explanation x = { 'a' : [1,2,3,4], 'b' : 'rta', 'c': { 'r' : 3, 't' : 2} } print( x['b'][4] ) Explanation: A. 2 B. 3 C. KeyError D. IndexError Vote Now: https://poll.ist256.com FEQT (Future Exam Questions Training) 4 What is the output of the following code? End of explanation x = { 'a' : [1,7,3,4], 'b' : 'rta', 'c': { 'r' : 3, 't' : 2} } print( x['c'] ) Explanation: A. 2 B. 3 C. KeyError D. IndexError Vote Now: https://poll.ist256.com FEQT (Future Exam Questions Training) 5 What is the output of the following code? End of explanation x = { 'a' : [1,2,3,4], 'b' : 'rta', 'c': { 'r' : 3, 't' : 2} } print( x['c']['r']) Explanation: A. 2 B. 3 C. KeyError D. IndexError Vote Now: https://poll.ist256.com FEQT (Future Exam Questions Training) 6 What is the output of the following code? End of explanation import requests params = { 'a' : 1, 'b' : 2 } headers = { 'c' : '3'} url = "https://httpbin.org/get" response = requests.get(url, params = params, headers = headers) print(response.url) Explanation: A. 2 B. 3 C. KeyError D. IndexError Vote Now: https://poll.ist256.com Connect Activity Question: The Python module to consume Web API's is called: A. api B. requests C. http D. urllibrary Vote Now: https://poll.ist256.com # What is the Big Picture Here? First learned how to call functions built into python like, input() and int() Then we larned how to import a module of functions then use them, like math or json, or ipywidgets Then learned how to find new code on http://pypi.org, install with pip and then import to use it, like gtts or emoji Then we learned built-in functions of variables of type str list and dict such as str.find() and dict.keys() Now we will learn how to call functions over the internet, executing code remotely!!! HTTP: The Protocol of The Web When you type a URL into your browser you’re making a request. The site processing your request sends a response. Part of the response is the status code. This indicates “what happened” The other part of the response is content (this is usually HTML) which is rendered by the browser. HTTP is a text based protocol. It is stateless meaning each request in independent of the other. 
HTTP Request Verbs HTTP Request Verbs: - GET - used to get resources - POST - used to send large data payloads as input - PUT - used for updates - DELETE - used to delete a resource HTTP Response Status codes The HTTP response has a payload of data and a status code. HTTP Status Codes: - 1xx Informational - 2xx Success - 3xx Redirection - 4xx Client Error - 5xx Server Error Watch Me Code 1 A Non-Python Demo of HTTP - What happens when you request a site? Like http://www.syr.edu ? - Chrome Developer tools - Now using requests. - Status codes and request verbs. - de-serializing json output. Check Yourself: Response Codes The HTTP Response code for success is A. 404 B. 501 C. 200 D. 301 Vote Now: https://poll.ist256.com 4 Ways to Send Data over HTTP In the URL GET http://www.someapi.com/user/45 On the Query String - a set of key-value pairs on the URL GET http://www.someapi.com?user=45 In the request header - a set of key-value pairs in the HTTP header header = { 'user' : 45 } GET http://www.someapi.com In the body of an HTTP post - any format Body: user=45 POST http://www.someapi.com Which approach do you use? Depends on the service you are using! Watch Me Code 2 Examples of the many ways send data over HTTP using the https://httpbin.org/ website (Wait, scratch that, using https://api.ist256.com) !!! HTTP GET in the url HTTP GET in the query string and url generation HTTP GET in the header HTTP POST Combinations Check Yourself : HTTP Methods What is the URL printed on the last line? End of explanation
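To tie the status-code list to the requests calls above, here is a small example of checking the response status and de-serializing the JSON body; it reuses httpbin.org, which simply echoes the request back.

import requests

response = requests.get("https://httpbin.org/get", params={"a": 1, "b": 2})
print(response.status_code)      # a 2xx code means success
if response.ok:                  # True for any status code below 400
    data = response.json()       # de-serialize the JSON payload into a dict
    print(data["args"])          # httpbin echoes the query string: {'a': '1', 'b': '2'}
else:
    print("request failed with status", response.status_code)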
14,836
Given the following text description, write Python code to implement the functionality described below step by step Description: Inference with GPs The dataset needed for this worksheet can be downloaded. Once you have downloaded s9_gp_dat.tar.gz, and moved it to this folder, execute the following cell Step4: Here are the functions we wrote in the previous tutorial to compute and draw from a GP Step6: The Marginal Likelihood In the previous notebook, we learned how to construct and sample from a simple GP. This is useful for making predictions, i.e., interpolating or extrapolating based on the data you measured. But the true power of GPs comes from their application to regression and inference Step7: <div style="background-color Step8: <div style="background-color
Python Code: !tar -zxvf s9_gp_dat.tar.gz !mv *.txt data/ Explanation: Inference with GPs The dataset needed for this worksheet can be downloaded. Once you have downloaded s9_gp_dat.tar.gz, and moved it to this folder, execute the following cell: End of explanation import numpy as np from scipy.linalg import cho_factor def ExpSquaredKernel(t1, t2=None, A=1.0, l=1.0): Return the ``N x M`` exponential squared covariance matrix between time vectors `t1` and `t2`. The kernel has amplitude `A` and lengthscale `l`. if t2 is None: t2 = t1 T2, T1 = np.meshgrid(t2, t1) return A ** 2 * np.exp(-0.5 * (T1 - T2) ** 2 / l ** 2) def draw_from_gaussian(mu, S, ndraws=1, eps=1e-12): Generate samples from a multivariate gaussian specified by covariance ``S`` and mean ``mu``. (We derived these equations in Day 1, Notebook 01, Exercise 7.) npts = S.shape[0] L, _ = cho_factor(S + eps * np.eye(npts), lower=True) L = np.tril(L) u = np.random.randn(npts, ndraws) x = np.dot(L, u) + mu[:, None] return x.T def compute_gp(t_train, y_train, t_test, sigma=0, A=1.0, l=1.0): Compute the mean vector and covariance matrix of a GP at times `t_test` given training points `y_train(t_train)`. The training points have uncertainty `sigma` and the kernel is assumed to be an Exponential Squared Kernel with amplitude `A` and lengthscale `l`. # Compute the required matrices kernel = ExpSquaredKernel Stt = kernel(t_train, A=1.0, l=1.0) Stt += sigma ** 2 * np.eye(Stt.shape[0]) Spp = kernel(t_test, A=1.0, l=1.0) Spt = kernel(t_test, t_train, A=1.0, l=1.0) # Compute the mean and covariance of the GP mu = np.dot(Spt, np.linalg.solve(Stt, y_train)) S = Spp - np.dot(Spt, np.linalg.solve(Stt, Spt.T)) return mu, S Explanation: Here are the functions we wrote in the previous tutorial to compute and draw from a GP: End of explanation def ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0): # do stuff in here pass Explanation: The Marginal Likelihood In the previous notebook, we learned how to construct and sample from a simple GP. This is useful for making predictions, i.e., interpolating or extrapolating based on the data you measured. But the true power of GPs comes from their application to regression and inference: given a dataset $D$ and a model $M(\theta)$, what are the values of the model parameters $\theta$ that are consistent with $D$? The parameters $\theta$ can be the hyperparameters of the GP (the amplitude and time scale), the parameters of some parametric model, or all of the above. A very common use of GPs is to model things you don't have an explicit physical model for, so quite often they are used to model "nuisances" in the dataset. But just because you don't care about these nuisances doesn't mean they don't affect your inference: in fact, unmodelled correlated noise can often lead to strong biases in the parameter values you infer. In this notebook, we'll learn how to compute likelihoods of Gaussian Processes so that we can marginalize over the nuisance parameters (given suitable priors) and obtain unbiased estimates for the physical parameters we care about. 
Given a set of measurements $y$ distributed according to $$ \begin{align} y \sim \mathcal{N}(\mathbf{\mu}(\theta), \mathbf{\Sigma}(\alpha)) \end{align} $$ where $\theta$ are the parameters of the mean model $\mu$ and $\alpha$ are the hyperparameters of the covariance model $\mathbf{\Sigma}$, the marginal likelihood of $y$ is $$ \begin{align} \ln P(y | \theta, \alpha) = -\frac{1}{2}(y-\mu)^\top \mathbf{\Sigma}^{-1} (y-\mu) - \frac{1}{2}\ln |\mathbf{\Sigma}| - \frac{N}{2} \ln 2\pi \end{align} $$ where $||$ denotes the determinant and $N$ is the number of measurements. The term marginal refers to the fact that this expression implicitly integrates over all possible values of the Gaussian Process; this is not the likelihood of the data given one particular draw from the GP, but given the ensemble of all possible draws from $\mathbf{\Sigma}$. <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> <h1 style="line-height:2.5em; margin-left:1em;">Exercise 1</h1> </div> Define a function ln_gp_likelihood(t, y, sigma, A=1, l=1) that returns the log-likelihood defined above for a vector of measurements y at a set of times t with uncertainty sigma. As before, A and l should get passed direcetly to the kernel function. Note that you're going to want to use np.linalg.slogdet to compute the log-determinant of the covariance instead of np.log(np.linalg.det). (Why?) End of explanation import matplotlib.pyplot as plt t, y, sigma = np.loadtxt("data/sample_data.txt", unpack=True) plt.plot(t, y, "k.", alpha=0.5, ms=3) plt.xlabel("time") plt.ylabel("data"); Explanation: <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> <h1 style="line-height:2.5em; margin-left:1em;">Exercise 2</h1> </div> The following dataset was generated from a zero-mean Gaussian Process with a Squared Exponential Kernel of unity amplitude and unknown timescale. Compute the marginal log likelihood of the data over a range of reasonable values of $l$ and find the maximum. Plot the likelihood (not log likelihood) versus $l$; it should be pretty Gaussian. How well are you able to constrain the timescale of the GP? End of explanation t, y, sigma = np.loadtxt("data/sample_data_line.txt", unpack=True) m_true, b_true, A_true, l_true = np.loadtxt("data/sample_data_line_truths.txt", unpack=True) plt.errorbar(t, y, yerr=sigma, fmt="k.", label="observed") plt.plot(t, m_true * t + b_true, color="C0", label="truth") plt.legend(fontsize=12) plt.xlabel("time") plt.ylabel("data"); Explanation: <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> <h1 style="line-height:2.5em; margin-left:1em;">Exercise 3a</h1> </div> The timeseries below was generated by a linear function of time, $y(t)= mt + b$. In addition to observational uncertainty $\sigma$ (white noise), there is a fair bit of correlated (red) noise, which we will assume is well described by the squared exponential covariance with a certain (unknown) amplitude $A$ and timescale $l$. Your task is to estimate the values of $m$ and $b$, the slope and intercept of the line, respectively. In this part of the exercise, assume there is no correlated noise. 
Your model for the $n^\mathrm{th}$ datapoint is thus $$ \begin{align} y_n \sim \mathcal{N}(m t_n + b, \sigma_n\mathbf{I}) \end{align} $$ and the probability of the data given the model can be computed by calling your GP likelihood function: python def lnprob(params): m, b = params model = m * t + b return ln_gp_likelihood(t, y - model, sigma, A=0, l=1) Note, importantly, that we are passing the residual vector, $y - (mt + b)$, to the GP, since above we coded up a zero-mean Gaussian process. We are therefore using the GP to model the residuals of the data after applying our physical model (the equation of the line). To estimate the values of $m$ and $b$ we could generate a fine grid in those two parameters and compute the likelihood at every point. But since we'll soon be fitting for four parameters (in the next part), we might as well upgrade our inference scheme and use the emcee package to do Markov Chain Monte Carlo (MCMC). If you haven't used emcee before, check out the first few tutorials on the documentation page. The basic setup for the problem is this: ```python import emcee sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob) initial = [4.0, 15.0] p0 = initial + 1e-3 * np.random.randn(nwalkers, ndim) print("Running burn-in...") p0, _, _ = sampler.run_mcmc(p0, nburn) # nburn = 500 should do sampler.reset() print("Running production...") sampler.run_mcmc(p0, nsteps); # nsteps = 1000 should do ``` where nwalkers is the number of walkers (something like 20 or 30 is fine), ndim is the number of dimensions (2 in this case), and lnprob is the log-probability function for the data given the model. Finally, p0 is a list of starting positions for each of the walkers. Above we picked some fiducial/eyeballed value for $m$ and $b$, then added a small random number to each to generate different initial positions for each walker. This will initialize all walkers in a ball centered on some point, and as the chain progresses they'll diffuse out and begin to explore the posterior. Once you have sampled the posterior, plot several draws from it on top of the data. You can access a random draw from the posterior by doing python m, b = sampler.flatchain[np.random.randint(len(sampler.flatchain))] Also plot the true line that generated the dataset (given by the variables m_true and b_true below). Do they agree, or is there bias in your inferred values? Use the corner package to plot the joint posterior. How many standard deviations away from the truth are your inferred values? End of explanation
14,837
Given the following text description, write Python code to implement the functionality described below step by step Description: Geospatial Analysis One of the most popular extensions to PostgreSQL is PostGIS, which adds support for storing geospatial geometries, as well as functionality for reasoning about and performing operations on those geometries. This is a demo showing how to assemble ibis expressions for a PostGIS-enabled database. We will be using a database that has been loaded with an Open Street Map extract for Southern California. This extract can be found here, and loaded into PostGIS using a tool like ogr2ogr. Preparation We first need to set up a demonstration database and load it with the sample data. If you have Docker installed, you can download and launch a PostGIS database with the following Step1: Next, we download our OSM extract (about 400 MB) Step2: Finally, we load it into the database using ogr2ogr (this may take some time) Step3: Connecting to the database We first make the relevant imports, and connect to the PostGIS database Step4: Let's look at the tables available in the database Step5: As you can see, this Open Street Map extract stores its data according to the geometry type. Let's grab references to the polygon and line tables Step6: Querying the data We query the polygons table for shapes with an administrative level of 8, which corresponds to municipalities. We also reproject some of the column names so we don't have a name collision later. Step7: We can assemble a specific query for the city of Los Angeles, and execute it to get the geometry of the city. This will be useful later when reasoning about other geospatial relationships in the LA area Step8: Let's also extract freeways from the lines table, which are indicated by the value 'motorway' in the highway column Step9: Making a spatial join Let's test a spatial join by selecting all the highways that intersect the city of Los Angeles, or one if its neighbors. We begin by assembling an expression for Los Angeles and its neighbors. We consider a city to be a neighbor if it has any point of intersection (by this critereon we also get Los Angeles itself). We can pass in the city geometry that we selected above when making our query by marking it as a literal value in ibis Step10: Now we join the neighbors expression with the freeways expression, on the condition that the highways intersect any of the city geometries Step11: Combining the results Now that we have made a number of queries and joins, let's combine them into a single plot. To make the plot a bit nicer, we can also load some shapefiles for the coast and land
Python Code: # Launch the postgis container. # This may take a bit of time if it needs to download the image. !docker run -d -p 5432:5432 --name postgis-db -e POSTGRES_PASSWORD=supersecret mdillon/postgis:9.6-alpine Explanation: Geospatial Analysis One of the most popular extensions to PostgreSQL is PostGIS, which adds support for storing geospatial geometries, as well as functionality for reasoning about and performing operations on those geometries. This is a demo showing how to assemble ibis expressions for a PostGIS-enabled database. We will be using a database that has been loaded with an Open Street Map extract for Southern California. This extract can be found here, and loaded into PostGIS using a tool like ogr2ogr. Preparation We first need to set up a demonstration database and load it with the sample data. If you have Docker installed, you can download and launch a PostGIS database with the following: End of explanation !wget https://download.geofabrik.de/north-america/us/california/socal-latest.osm.pbf Explanation: Next, we download our OSM extract (about 400 MB): End of explanation !ogr2ogr -f PostgreSQL PG:"dbname='postgres' user='postgres' password='supersecret' port=5432 host='localhost'" -lco OVERWRITE=yes --config PG_USE_COPY YES socal-latest.osm.pbf Explanation: Finally, we load it into the database using ogr2ogr (this may take some time): End of explanation import os import geopandas import ibis %matplotlib inline client = ibis.postgres.connect( url='postgres://postgres:supersecret@localhost:5432/postgres' ) Explanation: Connecting to the database We first make the relevant imports, and connect to the PostGIS database: End of explanation client.list_tables() Explanation: Let's look at the tables available in the database: End of explanation polygons = client.table('multipolygons') lines = client.table('lines') Explanation: As you can see, this Open Street Map extract stores its data according to the geometry type. Let's grab references to the polygon and line tables: End of explanation cities = polygons[polygons.admin_level == '8'] cities = cities[ cities.name.name('city_name'), cities.wkb_geometry.name('city_geometry') ] Explanation: Querying the data We query the polygons table for shapes with an administrative level of 8, which corresponds to municipalities. We also reproject some of the column names so we don't have a name collision later. End of explanation los_angeles = cities[cities.city_name == 'Los Angeles'] la_city = los_angeles.execute() la_city_geom = la_city.iloc[0].city_geometry la_city_geom Explanation: We can assemble a specific query for the city of Los Angeles, and execute it to get the geometry of the city. This will be useful later when reasoning about other geospatial relationships in the LA area: End of explanation highways = lines[(lines.highway == 'motorway')] highways = highways[ highways.name.name('highway_name'), highways.wkb_geometry.name('highway_geometry'), ] Explanation: Let's also extract freeways from the lines table, which are indicated by the value 'motorway' in the highway column: End of explanation la_neighbors_expr = cities[ cities.city_geometry.intersects( ibis.literal(la_city_geom, type='multipolygon;4326:geometry') ) ] la_neighbors = la_neighbors_expr.execute().dropna() la_neighbors Explanation: Making a spatial join Let's test a spatial join by selecting all the highways that intersect the city of Los Angeles, or one if its neighbors. We begin by assembling an expression for Los Angeles and its neighbors. 
We consider a city to be a neighbor if it has any point of intersection (by this criterion we also get Los Angeles itself). We can pass in the city geometry that we selected above when making our query by marking it as a literal value in ibis: End of explanation la_highways_expr = highways.inner_join( la_neighbors_expr, highways.highway_geometry.intersects(la_neighbors_expr.city_geometry), ) la_highways = la_highways_expr.execute() la_highways.plot() Explanation: Now we join the neighbors expression with the freeways expression, on the condition that the highways intersect any of the city geometries: End of explanation ocean = geopandas.read_file( 'https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/physical/ne_10m_ocean.zip' ) land = geopandas.read_file( 'https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/physical/ne_10m_land.zip' ) ax = la_neighbors.dropna().plot(figsize=(16, 16), cmap='rainbow', alpha=0.9) ax.set_autoscale_on(False) ax.set_axis_off() land.plot(ax=ax, color='tan', alpha=0.4) ax = ocean.plot(ax=ax, color='navy') la_highways.plot(ax=ax, color='maroon') Explanation: Combining the results Now that we have made a number of queries and joins, let's combine them into a single plot. To make the plot a bit nicer, we can also load some shapefiles for the coast and land: End of explanation
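The neighbor query above can be wrapped into a reusable helper. This is a sketch that assumes the cities expression defined earlier in this notebook is still in scope; the geometry type string is the same one used above.

import ibis

def neighbors_of(city_name):
    # Look up the city's geometry, then find every city polygon that intersects it
    geom = cities[cities.city_name == city_name].execute().iloc[0].city_geometry
    lit = ibis.literal(geom, type='multipolygon;4326:geometry')
    return cities[cities.city_geometry.intersects(lit)].execute()

# la_neighbors = neighbors_of('Los Angeles')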
14,838
Given the following text description, write Python code to implement the functionality described below step by step Description: from SciPy, scipy.linalg.svd - Singular Value Decomposition using SciPy cf. scip.linalg.svd Factorizes the matrix a into 2 unitary matrices U and Vh, and a 1-d. array s of singular values (real, non-negative) such that a == U*S*Vh, where S is a suitably shaped matrix of zeros with main diagonal s. Step1: full_matrices Step2: compute_uv Step3: More Simple examples "Gold standard" example for cuSOLVER in CUDA Toolkit Documentation Step4: cf. 6.241J Course Notes, Ch. 4 Step5: cf. https Step6: Simple Examples, Test cases, of applying SVD successfully to obtain a Matrix Product State (MPS) Let $d=2$ (dimension of the state space, for each, say, spin system, $L=2$ (number of sites; should be an even number) Step7: Calculate US and reshape Step8: Calculate the right normalized matrices $B$'s which are only columns of Vh in this case Step9: Examples with complex numbers Step10: $d=2$, $L=4$ Step11: Examples with fixed values Step12: $d=2,L=4$ Step13: Calculate the dimensions $L=2$ case
Python Code: import numpy as np from scipy import linalg # Create an array of the given shape and populate it with # random samples from a uniform distribution # over ``[0, 1)``. a = np.random.randn(9,6) + 1.j * np.random.randn(9,6) a U, s, Vh = linalg.svd(a) U.shape, Vh.shape, s.shape U Vh s Explanation: from SciPy, scipy.linalg.svd - Singular Value Decomposition using SciPy cf. scip.linalg.svd Factorizes the matrix a into 2 unitary matrices U and Vh, and a 1-d. array s of singular values (real, non-negative) such that a == U*S*Vh, where S is a suitably shaped matrix of zeros with main diagonal s. End of explanation U,s,Vh=linalg.svd(a,full_matrices=False) U.shape, Vh.shape, s.shape S=linalg.diagsvd(s,6,6) # Construct the Sigma matrix S, given the vector s, of a == U*S*Vh S np.allclose(a,np.dot(U,np.dot(S,Vh))) Explanation: full_matrices : bool, optional - if True, U and Vh are of shape (M,M), (N,N). If False, the shapes are (M,K) and (K,N) where K = min(M,N) End of explanation s2 = linalg.svd(a,compute_uv=False) np.allclose(s,s2) Explanation: compute_uv : bool, optional Whether to compute also U and Vh in addition to s. Default is True. End of explanation A_gold = np.array( [[1.0,2.0], [4.0,5.0],[2.,1.0]]) UsVh_gold = linalg.svd(A_gold) UsVh_gold[0] linalg.diagsvd(UsVh_gold[1],2,2) UsVh_gold[2] Explanation: More Simple examples "Gold standard" example for cuSOLVER in CUDA Toolkit Documentation End of explanation A_6_241_04 = np.array([[100,100],[100.2,100]]) A_6_241_04 UsVh_6_241_04 = linalg.svd(A_6_241_04) UsVh_6_241_04[0] linalg.diagsvd( UsVh_6_241_04[1], 2,2) UsVh_6_241_04[2] Explanation: cf. 6.241J Course Notes, Ch. 4: Matrix norms and singular value decomposition - MIT6_241JS11_chap04 End of explanation A33 = np.array([[-149 ,-50, -154 ],[537 ,180, 546 ],[-27 ,-9, -25 ]]) UsVh_33 = linalg.svd(A33) UsVh_33[0] linalg.diagsvd( UsVh_33[1], 3,3) UsVh_33[2] Explanation: cf. 
https://www.mathworks.com/content/dam/mathworks/mathworks-dot-com/moler/eigs.pdf End of explanation d=2 L=2 Psi_d2_L2 = np.random.randn(d**(L-1),d) + 1.j * np.random.randn(d**(L-1),d) print(Psi_d2_L2.shape) Psi_d2_L2 UsVh_d2_L2 = linalg.svd(Psi_d2_L2) print(UsVh_d2_L2[0].shape) UsVh_d2_L2[0] UsVh_d2_L2[1] linalg.diagsvd(UsVh_d2_L2[1],2,2) print(UsVh_d2_L2[2].shape) UsVh_d2_L2[2] Explanation: Simple Examples, Test cases, of applying SVD successfully to obtain a Matrix Product State (MPS) Let $d=2$ (dimension of the state space, for each, say, spin system, $L=2$ (number of sites; should be an even number) End of explanation US_d2_L2 = np.dot( UsVh_d2_L2[0] , linalg.diagsvd(UsVh_d2_L2[1],2,2) ) print(US_d2_L2.shape) US_d2_L2 # new matrix after 1 iteration, l=1 Psi_d2_L2_l1 = US_d2_L2.reshape(d**(L-(1+1)),d*2) print(Psi_d2_L2_l1.shape) Psi_d2_L2_l1 Explanation: Calculate US and reshape End of explanation B0_d2_L2=UsVh_d2_L2[2][:,0] B1_d2_L2=UsVh_d2_L2[2][:,1] print(B0_d2_L2) print(B1_d2_L2) Explanation: Calculate the right normalized matrices $B$'s which are only columns of Vh in this case: End of explanation M=4 N=2 ind_RR = 1 ind_CC = 0.1 A_CC=[] for row in range(M): A_row =[] for col in range(N): A_val = ind_RR * (row+1 + M*col) + ind_CC*(row+1 + M*col)*1j A_row.append(A_val) A_CC.append(A_row) A_CC = np.array(A_CC) A_CC UsVh_CC = linalg.svd(A_CC) print(UsVh_CC[0].shape) UsVh_CC[0] print(UsVh_CC[1].shape) linalg.diagsvd(UsVh_CC[1],2,2) print(UsVh_CC[2].shape) UsVh_CC[2] Explanation: Examples with complex numbers End of explanation d=2 L=4 Psi_d2_L4 = np.random.randn(d**(L-1),d) + 1.j * np.random.randn(d**(L-1),d) print(Psi_d2_L4.shape) Psi_d2_L4 UsVh_d2_L4 = linalg.svd(Psi_d2_L4) print(UsVh_d2_L4[0].shape) UsVh_d2_L4[0] UsVh_d2_L4[1] print(UsVh_d2_L4[2].shape) UsVh_d2_L4[2] US_d2_L4 = np.dot( UsVh_d2_L4[0] , linalg.diagsvd(UsVh_d2_L4[1], UsVh_d2_L4[0].shape[0],d) ) print(US_d2_L4.shape) US_d2_L4 # after iteration l = 1, we obtain a new matrix \Psi to apply SVD on l=1 Psi_d2_L4_l1 = US_d2_L4.reshape(d**(L-(l+1)), d*2 ) print(Psi_d2_L4_l1.shape ) UsVh_d2_L4_l2 = linalg.svd(Psi_d2_L4_l1) print(UsVh_d2_L4_l2[0].shape) UsVh_d2_L4_l2[0] print(UsVh_d2_L4_l2[1].shape) UsVh_d2_L4[1] print(UsVh_d2_L4_l2[2].shape) UsVh_d2_L4_l2[2] US_d2_L4_l2 = np.dot( UsVh_d2_L4_l2[0] , linalg.diagsvd(UsVh_d2_L4_l2[1], UsVh_d2_L4_l2[0].shape[1],UsVh_d2_L4_l2[2].shape[0]) ) print(US_d2_L4_l2.shape) US_d2_L4_l2 # after iteration l = 2, we obtain a new matrix \Psi to apply SVD on l=2 Psi_d2_L4_l2 = US_d2_L4_l2.reshape(d**(L-(l+1)), d* US_d2_L4_l2.shape[1] ) print(Psi_d2_L4_l2.shape ) UsVh_d2_L4_l3 = linalg.svd(Psi_d2_L4_l2) print(UsVh_d2_L4_l3[0].shape) UsVh_d2_L4_l3[0] L-(l+1) Explanation: $d=2$, $L=4$ End of explanation d=2 L=2 def create_fixed_CC_mat(d,L): totalsysstates = d**(L-1) A = [] for i in range(totalsysstates): ithstate = [] f = i*(0.9/totalsysstates)+0.1 theta_f = 2.*np.arccos(-1.)*f d0 = f*(np.cos( theta_f) + np.sin(theta_f)*1j) d1 = (1.-f)*(np.sin( theta_f) + np.cos(theta_f)*1j) ithstate=[d0,d1] A.append(ithstate) return np.array(A) A_CC_d2L2=create_fixed_CC_mat(d,L) print(A_CC_d2L2.shape) print(A_CC_d2L2) UsVh_CC_d2L2 = linalg.svd(A_CC_d2L2) print(UsVh_CC_d2L2[0].shape) print(UsVh_CC_d2L2[0]) print(UsVh_CC_d2L2[1].shape) print(linalg.diagsvd(UsVh_CC_d2L2[1],2,2 ) ) print(UsVh_CC_d2L2[2].shape) print(UsVh_CC_d2L2[2]) Psi_new_CC_d2L2 = np.dot( UsVh_CC_d2L2[0], linalg.diagsvd(UsVh_CC_d2L2[1],2,2 ) ) print(Psi_new_CC_d2L2) Psi_new_CC_d2L2 = Psi_new_CC_d2L2.reshape(1,4) print(Psi_new_CC_d2L2) Explanation: 
Examples with fixed values End of explanation d=2 L=4 A_CC_d2L4=create_fixed_CC_mat(d,L) print(A_CC_d2L4.shape) print(A_CC_d2L4) UsVh_CC_d2L4 = linalg.svd(A_CC_d2L4) print(UsVh_CC_d2L4[0].shape) print(UsVh_CC_d2L4[0]) print(UsVh_CC_d2L4[1].shape) print(linalg.diagsvd(UsVh_CC_d2L4[1],2,2 ) ) print(UsVh_CC_d2L4[2].shape) print(UsVh_CC_d2L4[2]) Psi_new_CC_d2L4 = np.dot( UsVh_CC_d2L4[0], linalg.diagsvd(UsVh_CC_d2L4[1],8,2 ) ) print(Psi_new_CC_d2L4) Psi_new_CC_d2L4 = Psi_new_CC_d2L4.reshape( (2**(L-2),d*d),order='F') print(Psi_new_CC_d2L4) UsVh_CC_d2L4l01 = linalg.svd(Psi_new_CC_d2L4) print(UsVh_CC_d2L4l01[0].shape) print(UsVh_CC_d2L4l01[0]) print(UsVh_CC_d2L4l01[1].shape) print(linalg.diagsvd(UsVh_CC_d2L4l01[1],4,4 ) ) print(UsVh_CC_d2L4l01[2].shape) print(UsVh_CC_d2L4l01[2]) np.dot( UsVh_CC_d2L4l01[0], linalg.diagsvd(UsVh_CC_d2L4l01[1],4,4 ) ) Psi_new_CC_d2L4l01 = np.dot( UsVh_CC_d2L4l01[0], linalg.diagsvd(UsVh_CC_d2L4l01[1],4,4 ) ) print(Psi_new_CC_d2L4l01) Psi_new_CC_d2L4l01 = Psi_new_CC_d2L4l01.reshape( (2**(L-3),d*d*d),order='F') print(Psi_new_CC_d2L4l01) Explanation: $d=2,L=4$ End of explanation d=2 L=2 d**L d**(L-1) d**(L/2) L/2 [l+1 for l in range(L/2)] range(1,2) def calculate_dims(d,L): results = [] result_1 = [] result_1.append( (d**(L-1),d) ) r_1 = min( result_1[0][0], result_1[0][1]) newPsidim = ( result_1[0][0]/d, d*r_1) result_1.append( newPsidim ) newBdim = ( r_1, 1) result_1.append( newBdim ) results.append( result_1 ) for l in range(2,L+1): result_l = [] r_previous = results[l-2][2][0] result_l.append( (d**(L-l) , d*r_previous)) r_l = min( result_l[0][0], result_l[0][1]) newPsidim = ( result_l[0][0]/d , d*r_l) result_l.append( newPsidim ) newBdim = ( r_l, r_previous) result_l.append( newBdim) results.append( result_l ) return results results_d2_L2 = calculate_dims(2,2) results_d2_L2 results_d2_L4 = calculate_dims(2,4) results_d2_L4 d**L Explanation: Calculate the dimensions $L=2$ case End of explanation
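A quick way to sanity-check any of the decompositions in this record is to rebuild the matrix from its factors, mirroring the a == U*S*Vh identity quoted at the top. A minimal, self-contained check:

import numpy as np
from scipy import linalg

A = np.random.randn(4, 2) + 1j * np.random.randn(4, 2)
U, s, Vh = linalg.svd(A)                 # full_matrices=True by default
S = linalg.diagsvd(s, *A.shape)          # embed the singular values in a 4 x 2 matrix
print(np.allclose(A, U @ S @ Vh))        # True: the factorization reproduces A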
14,839
Given the following text description, write Python code to implement the functionality described below step by step Description: &larr; Back to Index Jupyter Audio Basics Audio Libraries We will mainly use two libraries for audio acquisition and playback Step1: Visit https Step2: If you receive an error with librosa.load, you may need to install ffmpeg. Display the length of the audio array and sample rate Step3: Visualizing Audio In order to display plots inside the Jupyter notebook, run the following commands, preferably at the top of your notebook Step4: Plot the audio array using librosa.display.waveplot Step5: Display a spectrogram using librosa.display.specshow Step6: Playing Audio IPython.display.Audio Using IPython.display.Audio, you can play an audio file Step7: Audio can also accept a NumPy array. Let's synthesize a pure tone at 440 Hz Step8: Listen to the audio array Step9: Writing Audio librosa.output.write_wav saves a NumPy array to a WAV file.
Python Code: ls audio Explanation: &larr; Back to Index Jupyter Audio Basics Audio Libraries We will mainly use two libraries for audio acquisition and playback: 1. librosa librosa is a Python package for music and audio processing by Brian McFee. A large portion was ported from Dan Ellis's Matlab audio processing examples. 2. IPython.display.Audio IPython.display.Audio lets you play audio directly in an IPython notebook. Included Audio Data This GitHub repository includes many short audio excerpts for your convenience. Here are the files currently in the audio directory: End of explanation import librosa x, sr = librosa.load('audio/simple_loop.wav') Explanation: Visit https://ccrma.stanford.edu/workshops/mir2014/audio/ for more audio files. Reading Audio Use librosa.load to load an audio file into an audio array. Return both the audio array as well as the sample rate: End of explanation print(x.shape) print(sr) Explanation: If you receive an error with librosa.load, you may need to install ffmpeg. Display the length of the audio array and sample rate: End of explanation %matplotlib inline import matplotlib.pyplot as plt import librosa.display Explanation: Visualizing Audio In order to display plots inside the Jupyter notebook, run the following commands, preferably at the top of your notebook: End of explanation plt.figure(figsize=(14, 5)) librosa.display.waveplot(x, sr=sr) Explanation: Plot the audio array using librosa.display.waveplot: End of explanation X = librosa.stft(x) Xdb = librosa.amplitude_to_db(abs(X)) plt.figure(figsize=(14, 5)) librosa.display.specshow(Xdb, sr=sr, x_axis='time', y_axis='hz') Explanation: Display a spectrogram using librosa.display.specshow: End of explanation import IPython.display as ipd ipd.Audio('audio/conga_groove.wav') # load a local WAV file Explanation: Playing Audio IPython.display.Audio Using IPython.display.Audio, you can play an audio file: End of explanation import numpy sr = 22050 # sample rate T = 2.0 # seconds t = numpy.linspace(0, T, int(T*sr), endpoint=False) # time variable x = 0.5*numpy.sin(2*numpy.pi*440*t) # pure sine wave at 440 Hz Explanation: Audio can also accept a NumPy array. Let's synthesize a pure tone at 440 Hz: End of explanation ipd.Audio(x, rate=sr) # load a NumPy array Explanation: Listen to the audio array: End of explanation librosa.output.write_wav('audio/tone_440.wav', x, sr) Explanation: Writing Audio librosa.output.write_wav saves a NumPy array to a WAV file. End of explanation
14,840
Given the following text description, write Python code to implement the functionality described below step by step Description: Right now, each data point consists of two strings and an integer label. Computers don't like dealing with strings directly very much, so we need to convert these strings to lists of integers. The way we do this is Step1: Now we will convert the raw_train_lines, which are string representations, to integers. Step2: If you compare the output of the first indexed example with the first raw example, you will see that each word has been assigned a unique index and words that are the same across sentences have the same index. Now, we'll repackage the lists into a slightly more digestible format for the model. We will have one list of lists (note that each "question" now is a list of integers) for all of the question_1's, and one list of lists for all of the question_2's. Then, we'll have a list of labels. These lists should correspond index-wise, so that label[i] should correspond to the correct label of the data point with indexed_question_1s[i] and indexed_question_2s[i]. Step3: Looks like everything matches up! We'll pickle these indexed instances for use when actually training the model.
Python Code: padding_token = "@@PADDING@@" oov_token = "@@UNKOWN@@" word_indices = {padding_token: 0, oov_token: 1} for train_instance in tqdm(raw_train_lines): # unpack the tuple into 3 variables question_1, question_2, label = train_instance # iterate over the tokens in each question, and add them to the word # indices if they aren't in there already for word in question_1: if word not in word_indices: # by taking the current length of the dictionary # to be the index, we can guarantee that each unique word # will get a unique index. index = len(word_indices) word_indices[word] = index for word in question_2: if word not in word_indices: # by taking the current length of the dictionary # to be the index, we can guarantee that each unique word # will get a unique index. index = len(word_indices) word_indices[word] = index # The number of unique tokens in our corpus len(word_indices) Explanation: Right now, each data point consists of two strings and an integer label. Computers don't like dealing with strings directly very much, so we need to convert these strings to lists of integers. The way we do this is: we will assign each string a unique integer ID, and then replace all occurences of the string with that integer ID. In this way, we can encode to the model what the various input strings are. This is called "indexing" the data. End of explanation indexed_train_lines = [] for train_instance in tqdm(raw_train_lines): # unpack the tuple into 3 variables question_1, question_2, label = train_instance # for each token in question_1 and question_2, replace it with its index indexed_question_1 = [word_indices[word] for word in question_1] indexed_question_2 = [word_indices[word] for word in question_2] indexed_train_lines.append((indexed_question_1, indexed_question_2, label)) # Print the first indexed example, which is the indexed version of # the raw example we printed above. indexed_train_lines[0] Explanation: Now we will convert the raw_train_lines, which are string representations, to integers. End of explanation indexed_question_1s = [] indexed_question_2s = [] labels = [] for indexed_train_line in tqdm(indexed_train_lines): # Unpack the tuple into 3 variables indexed_question_1, indexed_question_2, label = indexed_train_line # Now add each of the individual elements of one train instance to their # separate lists. indexed_question_1s.append(indexed_question_1) indexed_question_2s.append(indexed_question_2) labels.append(label) # Print the first element from each of the lists, it should be the same as the # first element of the combined dataset above. print("First indexed_question_1s: {}".format(indexed_question_1s[0])) print("First indexed_question_2s: {}".format(indexed_question_2s[0])) print("First label: {}".format(labels[0])) Explanation: If you compare the output of the first indexed example with the first raw example, you will see that each word has been assigned a unique index and words that are the same across sentences have the same index. Now, we'll repackage the lists into a slightly more digestible format for the model. We will have one list of lists (note that each "question" now is a list of integers) for all of the question_1's, and one list of lists for all of the question_2's. Then, we'll have a list of labels. These lists should correspond index-wise, so that label[i] should correspond to the correct label of the data point with indexed_question_1s[i] and indexed_question_2s[i]. End of explanation # Pickle the data lists. 
pickle.dump(indexed_question_1s, open("./data/processed/02.indexed_question_1s_train.pkl", "wb")) pickle.dump(indexed_question_2s, open("./data/processed/02.indexed_question_2s_train.pkl", "wb")) pickle.dump(labels, open("./data/processed/02.labels_train.pkl", "wb")) # Also pickle the word indices pickle.dump(word_indices, open("./data/processed/02.word_indices.pkl", "wb")) Explanation: Looks like everything matches up! We'll pickle these indexed instances for use when actually training the model. End of explanation
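A companion snippet: inverting word_indices makes it easy to decode an indexed question back into tokens for spot checks before training. It assumes the word_indices dict and the indexed lists built above are still in scope.

index_to_word = {index: word for word, index in word_indices.items()}

def decode(indexed_question):
    # Map each integer ID back to its token
    return [index_to_word[i] for i in indexed_question]

# Should print the tokens of the first question_1, matching the raw example above
print(decode(indexed_question_1s[0]))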
14,841
Given the following text description, write Python code to implement the functionality described below step by step Description: Poster popularity by country This notebook loads data of poster viewership at the SfN 2016 annual meeting, organized by the countries that were affiliated with each poster. We find that the poster popularity across countries is not significant compare to what is expected by chance. Import libraries and load data Step1: 1. Summarize data by country Step2: 2. Poster popularity vs. prevalence Across states in the United States, we found a positive correlation between the number of posters from a state and the popularity of those posters. We debatably see this again across countries to a trending level of significance (1-tailed p-value = 0.06) Step3: 3. Permutation tests
Python Code: %config InlineBackend.figure_format = 'retina' %matplotlib inline import numpy as np import scipy as sp import matplotlib.pyplot as plt import seaborn as sns sns.set_style('white') import pandas as pd # Load data df = pd.DataFrame.from_csv('./posterviewers_by_country.csv') key_N = 'Number of people' Explanation: Poster popularity by country This notebook loads data of poster viewership at the SfN 2016 annual meeting, organized by the countries that were affiliated with each poster. We find that the poster popularity across countries is not significant compare to what is expected by chance. Import libraries and load data End of explanation # 0. Count number of posters from each state # Calculate mean poster popularity states = df['Country'].unique() dict_state_counts = {'Country':states,'count':np.zeros(len(states),dtype=int),'popularity':np.zeros(len(states))} for i, s in enumerate(states): dict_state_counts['count'][i] = int(sum(df['Country']==s)) dict_state_counts['popularity'][i] = np.round(np.mean(df[df['Country']==s][key_N]),3) df_counts = pd.DataFrame.from_dict(dict_state_counts) # Visualize dataframe # count = total number of posters counted affiliated with that country # popularity = average number of viewers at a poster affiliated with that country df_counts.head() df_counts.tail() !pip install https://github.com/ipython-contrib/jupyter_contrib_nbextensions/tarball/master !pip install jupyter_nbextensions_configurator !jupyter contrib nbextension install --user !jupyter nbextensions_configurator enable --user Explanation: 1. Summarize data by country End of explanation print sp.stats.spearmanr(np.log10(df_counts['count']),df_counts['popularity']) plt.figure(figsize=(3,3)) plt.semilogx(df_counts['count'],df_counts['popularity'],'k.') plt.xlabel('Number of posters\nin the state') plt.ylabel('Average number of viewers per poster') plt.ylim((-.1,3.6)) plt.xlim((.9,1000)) Explanation: 2. Poster popularity vs. prevalence Across states in the United States, we found a positive correlation between the number of posters from a state and the popularity of those posters. We debatably see this again across countries to a trending level of significance (1-tailed p-value = 0.06) End of explanation # Simulate randomized data Nperm = 100 N_posters = len(df) rand_statepop = np.zeros((Nperm,len(states)),dtype=np.ndarray) rand_statepopmean = np.zeros((Nperm,len(states))) for i in range(Nperm): # Random permutation of posters, organized by state randperm_viewers = np.random.permutation(df[key_N].values) for j, s in enumerate(states): rand_statepop[i,j] = randperm_viewers[np.where(df['Country']==s)[0]] rand_statepopmean[i,j] = np.mean(randperm_viewers[np.where(df['Country']==s)[0]]) # True data: Calculate all p-values for the difference between 1 state's popularity and the rest min_N_posters = 10 states_big = states[np.where(df_counts['count']>=min_N_posters)[0]] N_big = len(states_big) t_true_all = np.zeros(N_big) p_true_all = np.zeros(N_big) for i, state in enumerate(states_big): t_true_all[i], _ = sp.stats.ttest_ind(df[df['Country']==state][key_N],df[df['Country']!=state][key_N]) _, p_true_all[i] = sp.stats.mannwhitneyu(df[df['Country']==state][key_N],df[df['Country']!=state][key_N]) pmin_pop = np.min(p_true_all[np.where(t_true_all>0)[0]]) pmin_unpop = np.min(p_true_all[np.where(t_true_all<0)[0]]) print 'Most popular country: ', states_big[np.argmax(t_true_all)], '. p=', str(pmin_pop) print 'Least popular country: ', states_big[np.argmin(t_true_all)], '. 
p=', str(pmin_unpop) # Calculate minimum p-values for each permutation # Calculate all p and t values t_rand_all = np.zeros((Nperm,N_big)) p_rand_all = np.zeros((Nperm,N_big)) pmin_pop_rand = np.zeros(Nperm) pmin_unpop_rand = np.zeros(Nperm) for i in range(Nperm): for j, state in enumerate(states_big): idx_use = range(len(states_big)) idx_use.pop(j) t_rand_all[i,j], _ = sp.stats.ttest_ind(rand_statepop[i,j],np.hstack(rand_statepop[i,idx_use])) _, p_rand_all[i,j] = sp.stats.mannwhitneyu(rand_statepop[i,j],np.hstack(rand_statepop[i,idx_use])) # Identify the greatest significance of a state being more popular than the rest pmin_pop_rand[i] = np.min(p_rand_all[i][np.where(t_rand_all[i]>0)[0]]) # Identify the greatest significance of a state being less popular than the rest pmin_unpop_rand[i] = np.min(p_rand_all[i][np.where(t_rand_all[i]<0)[0]]) # Test if most popular and least popular countries are outside of expectation print 'Chance of a state being more distinctly popular than Canada: ' print sum(i < pmin_pop for i in pmin_pop_rand) / float(len(pmin_pop_rand)) print 'Chance of a state being less distinctly popular than US: ' print sum(i < pmin_unpop for i in pmin_unpop_rand) / float(len(pmin_unpop_rand)) Explanation: 3. Permutation tests: difference in popularity across countries In this code, we test whether the relative popularity / unpopularity observed for any country is outside what is expected by chance. Here, the most popular and least popular countries are defined by a nonparametric statistical test between the number of viewers at posters from their country, compared to posters from all other countries. End of explanation
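One detail worth making explicit: the description quotes a 1-tailed p-value of 0.06, while sp.stats.spearmanr reports a two-tailed p-value. A minimal sketch of the usual conversion, assuming the directional hypothesis is a positive correlation (which matches the observed sign), follows; it reuses df_counts, sp and np from the cells above.

# One-tailed p-value for the Spearman correlation reported earlier.
# Assumption: the hypothesis is rho > 0, so the two-tailed p is halved
# when the observed correlation is positive.
rho, p_two_tailed = sp.stats.spearmanr(np.log10(df_counts['count']), df_counts['popularity'])
p_one_tailed = p_two_tailed / 2.0 if rho > 0 else 1.0 - p_two_tailed / 2.0
print('rho = %.3f, one-tailed p = %.3f' % (rho, p_one_tailed))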
14,842
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Chemistry Scheme Scope Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Form Is Required Step9: 1.6. Number Of Tracers Is Required Step10: 1.7. Family Approach Is Required Step11: 1.8. Coupling With Chemical Reactivity Is Required Step12: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required Step13: 2.2. Code Version Is Required Step14: 2.3. Code Languages Is Required Step15: 3. Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required Step16: 3.2. Split Operator Advection Timestep Is Required Step17: 3.3. Split Operator Physical Timestep Is Required Step18: 3.4. Split Operator Chemistry Timestep Is Required Step19: 3.5. Split Operator Alternate Order Is Required Step20: 3.6. Integrated Timestep Is Required Step21: 3.7. Integrated Scheme Type Is Required Step22: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required Step23: 4.2. Convection Is Required Step24: 4.3. Precipitation Is Required Step25: 4.4. Emissions Is Required Step26: 4.5. Deposition Is Required Step27: 4.6. Gas Phase Chemistry Is Required Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required Step30: 4.9. Photo Chemistry Is Required Step31: 4.10. Aerosols Is Required Step32: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required Step33: 5.2. Global Mean Metrics Used Is Required Step34: 5.3. Regional Metrics Used Is Required Step35: 5.4. Trend Metrics Used Is Required Step36: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required Step37: 6.2. Matches Atmosphere Grid Is Required Step38: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required Step39: 7.2. Canonical Horizontal Resolution Is Required Step40: 7.3. Number Of Horizontal Gridpoints Is Required Step41: 7.4. Number Of Vertical Levels Is Required Step42: 7.5. Is Adaptive Grid Is Required Step43: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required Step44: 8.2. Use Atmospheric Transport Is Required Step45: 8.3. Transport Details Is Required Step46: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required Step47: 10. 
Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required Step48: 10.2. Method Is Required Step49: 10.3. Prescribed Climatology Emitted Species Is Required Step50: 10.4. Prescribed Spatially Uniform Emitted Species Is Required Step51: 10.5. Interactive Emitted Species Is Required Step52: 10.6. Other Emitted Species Is Required Step53: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required Step54: 11.2. Method Is Required Step55: 11.3. Prescribed Climatology Emitted Species Is Required Step56: 11.4. Prescribed Spatially Uniform Emitted Species Is Required Step57: 11.5. Interactive Emitted Species Is Required Step58: 11.6. Other Emitted Species Is Required Step59: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required Step60: 12.2. Prescribed Upper Boundary Is Required Step61: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required Step62: 13.2. Species Is Required Step63: 13.3. Number Of Bimolecular Reactions Is Required Step64: 13.4. Number Of Termolecular Reactions Is Required Step65: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required Step66: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required Step67: 13.7. Number Of Advected Species Is Required Step68: 13.8. Number Of Steady State Species Is Required Step69: 13.9. Interactive Dry Deposition Is Required Step70: 13.10. Wet Deposition Is Required Step71: 13.11. Wet Oxidation Is Required Step72: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required Step73: 14.2. Gas Phase Species Is Required Step74: 14.3. Aerosol Species Is Required Step75: 14.4. Number Of Steady State Species Is Required Step76: 14.5. Sedimentation Is Required Step77: 14.6. Coagulation Is Required Step78: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required Step79: 15.2. Gas Phase Species Is Required Step80: 15.3. Aerosol Species Is Required Step81: 15.4. Number Of Steady State Species Is Required Step82: 15.5. Interactive Dry Deposition Is Required Step83: 15.6. Coagulation Is Required Step84: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required Step85: 16.2. Number Of Reactions Is Required Step86: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required Step87: 17.2. Environmental Conditions Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mri', 'mri-esm2-0', 'atmoschem') Explanation: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era: CMIP6 Institute: MRI Source ID: MRI-ESM2-0 Topic: Atmoschem Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. Properties: 84 (39 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:19 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric chemistry model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmospheric chemistry model code. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Chemistry Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. 
Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/mixing ratio for gas" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Form of prognostic variables in the atmospheric chemistry component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of advected tracers in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry calculations (not advection) generalized into families of species? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.8. Coupling With Chemical Reactivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Operator splitting" # "Integrated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. 
Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the evolution of a given variable End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemical species advection (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for physics (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Split Operator Chemistry Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemistry (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.5. Split Operator Alternate Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.6. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the atmospheric chemistry model (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3.7. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.2. Convection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Precipitation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.4. Emissions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.5. Deposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.6. Gas Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.9. Photo Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.10. Aerosols Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the atmopsheric chemistry grid End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 * Does the atmospheric chemistry grid match the atmosphere grid?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 7.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview of transport implementation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.2. Use Atmospheric Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is transport handled by the atmosphere, rather than within atmospheric cehmistry? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.transport_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Transport Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If transport is handled within the atmospheric chemistry scheme, describe it. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric chemistry emissions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Soil" # "Sea surface" # "Anthropogenic" # "Biomass burning" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via any other method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Aircraft" # "Biomass burning" # "Lightning" # "Volcanos" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. 
Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an &quot;other method&quot; End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview gas phase atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HOx" # "NOy" # "Ox" # "Cly" # "HSOx" # "Bry" # "VOCs" # "isoprene" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Species included in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.3. Number Of Bimolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of bi-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.4. Number Of Termolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of ter-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.7. Number Of Advected Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of advected species in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.8. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.9. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.10. Wet Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.11. Wet Oxidation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview stratospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Cly" # "Bry" # "NOy" # TODO - please enter value(s) Explanation: 14.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Gas phase species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule))" # TODO - please enter value(s) Explanation: 14.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.5. Sedimentation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview tropospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of gas phase species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon/soot" # "Polar stratospheric ice" # "Secondary organic aerosols" # "Particulate organic matter" # TODO - please enter value(s) Explanation: 15.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.5. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric photo chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 16.2. Number Of Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the photo-chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline (clear sky)" # "Offline (with clouds)" # "Online" # TODO - please enter value(s) Explanation: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Photolysis scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.2. Environmental Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.) End of explanation
14,843
Given the following text description, write Python code to implement the functionality described below step by step Description: Quickstart This is a short introduction and quickstart for the PySpark DataFrame API. PySpark DataFrames are lazily evaluated. They are implemented on top of RDDs. When Spark transforms data, it does not immediately compute the transformation but plans how to compute later. When actions such as collect() are explicitly called, the computation starts. This notebook shows the basic usages of the DataFrame, geared mainly for new users. You can run the latest version of these examples by yourself on a live notebook here. There is also other useful information in Apache Spark documentation site, see the latest version of Spark SQL and DataFrames, RDD Programming Guide, Structured Streaming Programming Guide, Spark Streaming Programming Guide and Machine Learning Library (MLlib) Guide. PySpark applications start with initializing SparkSession which is the entry point of PySpark as below. In case of running it in PySpark shell via <code>pyspark</code> executable, the shell automatically creates the session in the variable <code>spark</code> for users. Step1: DataFrame Creation A PySpark DataFrame can be created via pyspark.sql.SparkSession.createDataFrame typically by passing a list of lists, tuples, dictionaries and pyspark.sql.Rows, a pandas DataFrame and an RDD consisting of such a list. pyspark.sql.SparkSession.createDataFrame takes the schema argument to specify the schema of the DataFrame. When it is omitted, PySpark infers the corresponding schema by taking a sample from the data. Firstly, you can create a PySpark DataFrame from a list of rows Step2: Create a PySpark DataFrame with an explicit schema. Step3: Create a PySpark DataFrame from a pandas DataFrame Step4: Create a PySpark DataFrame from an RDD consisting of a list of tuples. Step5: The DataFrames created above all have the same results and schema. Step6: Viewing Data The top rows of a DataFrame can be displayed using DataFrame.show(). Step7: Alternatively, you can enable spark.sql.repl.eagerEval.enabled configuration for the eager evaluation of PySpark DataFrame in notebooks such as Jupyter. The number of rows to show can be controlled via spark.sql.repl.eagerEval.maxNumRows configuration. Step8: The rows can also be shown vertically. This is useful when rows are too long to show horizontally. Step9: You can see the DataFrame's schema and column names as follows Step10: Show the summary of the DataFrame Step11: DataFrame.collect() collects the distributed data to the driver side as the local data in Python. Note that this can throw an out-of-memory error when the dataset is too large to fit in the driver side because it collects all the data from executors to the driver side. Step12: In order to avoid throwing an out-of-memory exception, use DataFrame.take() or DataFrame.tail(). Step13: PySpark DataFrame also provides the conversion back to a pandas DataFrame to leverage pandas APIs. Note that toPandas also collects all data into the driver side that can easily cause an out-of-memory-error when the data is too large to fit into the driver side. Step14: Selecting and Accessing Data PySpark DataFrame is lazily evaluated and simply selecting a column does not trigger the computation but it returns a Column instance. Step15: In fact, most of column-wise operations return Columns. Step16: These Columns can be used to select the columns from a DataFrame. 
For example, DataFrame.select() takes Column instances and returns another DataFrame. Step17: Assign a new Column instance. Step18: To select a subset of rows, use DataFrame.filter(). Step19: Applying a Function PySpark supports various UDFs and APIs to allow users to execute Python native functions. See also the latest Pandas UDFs and Pandas Function APIs. For instance, the example below allows users to directly use the APIs in a pandas Series within a Python native function. Step20: Another example is DataFrame.mapInPandas, which allows users to directly use the APIs in a pandas DataFrame without any restrictions such as the result length. Step21: Grouping Data PySpark DataFrame also provides a way of handling grouped data by using the common split-apply-combine strategy. It groups the data by a certain condition, applies a function to each group, and then combines the results back into a DataFrame. Step22: Grouping and then applying the avg() function to the resulting groups. Step23: You can also apply a Python native function against each group by using pandas APIs. Step24: Co-grouping and applying a function. Step25: Getting Data in/out CSV is straightforward and easy to use. Parquet and ORC are efficient and compact file formats that are faster to read and write. There are many other data sources available in PySpark such as JDBC, text, binaryFile, Avro, etc. See also the latest Spark SQL, DataFrames and Datasets Guide in the Apache Spark documentation. CSV Step26: Parquet Step27: ORC Step28: Working with SQL DataFrame and Spark SQL share the same execution engine, so they can be used interchangeably and seamlessly. For example, you can register the DataFrame as a table and easily run SQL on it, as below Step29: In addition, UDFs can be registered and invoked in SQL out of the box Step30: These SQL expressions can directly be mixed and used as PySpark columns.
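As an optional aside, not part of the original quickstart: several of the steps above (toPandas, pandas_udf, mapInPandas, applyInPandas) move data between the JVM and pandas, and Spark can use Apache Arrow for those transfers. The sketch below shows the relevant session setting; it assumes pyarrow is installed, and Spark falls back to the non-Arrow path otherwise.

# Optional: enable Arrow-based columnar transfers for the pandas conversions
# used below (off by default; requires pyarrow on the driver and executors).
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")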
Python Code: from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate() Explanation: Quickstart This is a short introduction and quickstart for the PySpark DataFrame API. PySpark DataFrames are lazily evaluated. They are implemented on top of RDDs. When Spark transforms data, it does not immediately compute the transformation but plans how to compute later. When actions such as collect() are explicitly called, the computation starts. This notebook shows the basic usages of the DataFrame, geared mainly for new users. You can run the latest version of these examples by yourself on a live notebook here. There is also other useful information in Apache Spark documentation site, see the latest version of Spark SQL and DataFrames, RDD Programming Guide, Structured Streaming Programming Guide, Spark Streaming Programming Guide and Machine Learning Library (MLlib) Guide. PySpark applications start with initializing SparkSession which is the entry point of PySpark as below. In case of running it in PySpark shell via <code>pyspark</code> executable, the shell automatically creates the session in the variable <code>spark</code> for users. End of explanation from datetime import datetime, date import pandas as pd from pyspark.sql import Row df = spark.createDataFrame([ Row(a=1, b=2., c='string1', d=date(2000, 1, 1), e=datetime(2000, 1, 1, 12, 0)), Row(a=2, b=3., c='string2', d=date(2000, 2, 1), e=datetime(2000, 1, 2, 12, 0)), Row(a=4, b=5., c='string3', d=date(2000, 3, 1), e=datetime(2000, 1, 3, 12, 0)) ]) df Explanation: DataFrame Creation A PySpark DataFrame can be created via pyspark.sql.SparkSession.createDataFrame typically by passing a list of lists, tuples, dictionaries and pyspark.sql.Rows, a pandas DataFrame and an RDD consisting of such a list. pyspark.sql.SparkSession.createDataFrame takes the schema argument to specify the schema of the DataFrame. When it is omitted, PySpark infers the corresponding schema by taking a sample from the data. Firstly, you can create a PySpark DataFrame from a list of rows End of explanation df = spark.createDataFrame([ (1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)), (2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)), (3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0)) ], schema='a long, b double, c string, d date, e timestamp') df Explanation: Create a PySpark DataFrame with an explicit schema. End of explanation pandas_df = pd.DataFrame({ 'a': [1, 2, 3], 'b': [2., 3., 4.], 'c': ['string1', 'string2', 'string3'], 'd': [date(2000, 1, 1), date(2000, 2, 1), date(2000, 3, 1)], 'e': [datetime(2000, 1, 1, 12, 0), datetime(2000, 1, 2, 12, 0), datetime(2000, 1, 3, 12, 0)] }) df = spark.createDataFrame(pandas_df) df Explanation: Create a PySpark DataFrame from a pandas DataFrame End of explanation rdd = spark.sparkContext.parallelize([ (1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)), (2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)), (3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0)) ]) df = spark.createDataFrame(rdd, schema=['a', 'b', 'c', 'd', 'e']) df Explanation: Create a PySpark DataFrame from an RDD consisting of a list of tuples. End of explanation # All DataFrames above result same. df.show() df.printSchema() Explanation: The DataFrames created above all have the same results and schema. End of explanation df.show(1) Explanation: Viewing Data The top rows of a DataFrame can be displayed using DataFrame.show(). 
End of explanation spark.conf.set('spark.sql.repl.eagerEval.enabled', True) df Explanation: Alternatively, you can enable spark.sql.repl.eagerEval.enabled configuration for the eager evaluation of PySpark DataFrame in notebooks such as Jupyter. The number of rows to show can be controlled via spark.sql.repl.eagerEval.maxNumRows configuration. End of explanation df.show(1, vertical=True) Explanation: The rows can also be shown vertically. This is useful when rows are too long to show horizontally. End of explanation df.columns df.printSchema() Explanation: You can see the DataFrame's schema and column names as follows: End of explanation df.select("a", "b", "c").describe().show() Explanation: Show the summary of the DataFrame End of explanation df.collect() Explanation: DataFrame.collect() collects the distributed data to the driver side as the local data in Python. Note that this can throw an out-of-memory error when the dataset is too large to fit in the driver side because it collects all the data from executors to the driver side. End of explanation df.take(1) Explanation: In order to avoid throwing an out-of-memory exception, use DataFrame.take() or DataFrame.tail(). End of explanation df.toPandas() Explanation: PySpark DataFrame also provides the conversion back to a pandas DataFrame to leverage pandas APIs. Note that toPandas also collects all data into the driver side that can easily cause an out-of-memory-error when the data is too large to fit into the driver side. End of explanation df.a Explanation: Selecting and Accessing Data PySpark DataFrame is lazily evaluated and simply selecting a column does not trigger the computation but it returns a Column instance. End of explanation from pyspark.sql import Column from pyspark.sql.functions import upper type(df.c) == type(upper(df.c)) == type(df.c.isNull()) Explanation: In fact, most of column-wise operations return Columns. End of explanation df.select(df.c).show() Explanation: These Columns can be used to select the columns from a DataFrame. For example, DataFrame.select() takes the Column instances that returns another DataFrame. End of explanation df.withColumn('upper_c', upper(df.c)).show() Explanation: Assign new Column instance. End of explanation df.filter(df.a == 1).show() Explanation: To select a subset of rows, use DataFrame.filter(). End of explanation import pandas from pyspark.sql.functions import pandas_udf @pandas_udf('long') def pandas_plus_one(series: pd.Series) -> pd.Series: # Simply plus one by using pandas Series. return series + 1 df.select(pandas_plus_one(df.a)).show() Explanation: Applying a Function PySpark supports various UDFs and APIs to allow users to execute Python native functions. See also the latest Pandas UDFs and Pandas Function APIs. For instance, the example below allows users to directly use the APIs in a pandas Series within Python native function. End of explanation def pandas_filter_func(iterator): for pandas_df in iterator: yield pandas_df[pandas_df.a == 1] df.mapInPandas(pandas_filter_func, schema=df.schema).show() Explanation: Another example is DataFrame.mapInPandas which allows users directly use the APIs in a pandas DataFrame without any restrictions such as the result length. 
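Because the output length is not tied to the input length, a mapInPandas function may also return more rows than it receives. A small hedged sketch (the function name here is my own, not from the original guide):
def pandas_duplicate_func(iterator):
    for pandas_df in iterator:
        # yield twice as many rows as were received; the result length is unrestricted
        yield pd.concat([pandas_df, pandas_df])

df.mapInPandas(pandas_duplicate_func, schema=df.schema).show()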
End of explanation df = spark.createDataFrame([ ['red', 'banana', 1, 10], ['blue', 'banana', 2, 20], ['red', 'carrot', 3, 30], ['blue', 'grape', 4, 40], ['red', 'carrot', 5, 50], ['black', 'carrot', 6, 60], ['red', 'banana', 7, 70], ['red', 'grape', 8, 80]], schema=['color', 'fruit', 'v1', 'v2']) df.show() Explanation: Grouping Data PySpark DataFrame also provides a way of handling grouped data by using the common approach, split-apply-combine strategy. It groups the data by a certain condition applies a function to each group and then combines them back to the DataFrame. End of explanation df.groupby('color').avg().show() Explanation: Grouping and then applying the avg() function to the resulting groups. End of explanation def plus_mean(pandas_df): return pandas_df.assign(v1=pandas_df.v1 - pandas_df.v1.mean()) df.groupby('color').applyInPandas(plus_mean, schema=df.schema).show() Explanation: You can also apply a Python native function against each group by using pandas APIs. End of explanation df1 = spark.createDataFrame( [(20000101, 1, 1.0), (20000101, 2, 2.0), (20000102, 1, 3.0), (20000102, 2, 4.0)], ('time', 'id', 'v1')) df2 = spark.createDataFrame( [(20000101, 1, 'x'), (20000101, 2, 'y')], ('time', 'id', 'v2')) def asof_join(l, r): return pd.merge_asof(l, r, on='time', by='id') df1.groupby('id').cogroup(df2.groupby('id')).applyInPandas( asof_join, schema='time int, id int, v1 double, v2 string').show() Explanation: Co-grouping and applying a function. End of explanation df.write.csv('foo.csv', header=True) spark.read.csv('foo.csv', header=True).show() Explanation: Getting Data in/out CSV is straightforward and easy to use. Parquet and ORC are efficient and compact file formats to read and write faster. There are many other data sources available in PySpark such as JDBC, text, binaryFile, Avro, etc. See also the latest Spark SQL, DataFrames and Datasets Guide in Apache Spark documentation. CSV End of explanation df.write.parquet('bar.parquet') spark.read.parquet('bar.parquet').show() Explanation: Parquet End of explanation df.write.orc('zoo.orc') spark.read.orc('zoo.orc').show() Explanation: ORC End of explanation df.createOrReplaceTempView("tableA") spark.sql("SELECT count(*) from tableA").show() Explanation: Working with SQL DataFrame and Spark SQL share the same execution engine so they can be interchangeably used seamlessly. For example, you can register the DataFrame as a table and run a SQL easily as below: End of explanation @pandas_udf("integer") def add_one(s: pd.Series) -> pd.Series: return s + 1 spark.udf.register("add_one", add_one) spark.sql("SELECT add_one(v1) FROM tableA").show() Explanation: In addition, UDFs can be registered and invoked in SQL out of the box: End of explanation from pyspark.sql.functions import expr df.selectExpr('add_one(v1)').show() df.select(expr('count(*)') > 0).show() Explanation: These SQL expressions can directly be mixed and used as PySpark columns. End of explanation
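As a hedged addition that is not part of the original quickstart: the grouped-data examples above can also be written with explicit aggregate functions from pyspark.sql.functions, and it is good practice to stop the session once you are finished.
from pyspark.sql import functions as F

df.groupby('color').agg(F.avg('v1'), F.max('v2')).show()
# spark.stop()  # uncomment to release the session's resources when done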
14,844
Given the following text description, write Python code to implement the functionality described below step by step
Description:
26 Maximum Flow
each directed edge in a flow network is like a conduit for the material
Step1: 26.2 The Ford-Fulkerson method
The Ford-Fulkerson method depends on three important ideas
Step2: Augmenting paths
an augmenting path $p$ is a simple path from $s$ to $t$ in the residual network $G_f$.
$$c_f(p) = \min \{ c_f(u, v) : \text{$(u, v)$ is on $p$} \}$$
Step3: The Edmonds-Karp algorithm
We can improve the bound of FORD-FULKERSON by finding the augmenting path $p$ in line 3 with a breadth-first search.
Step4: 26.3 Maximum bipartite matching
a maximum matching in a bipartite graph $G$ corresponds to a maximum flow in its corresponding flow network $G'$, and we can therefore compute a maximum matching in $G$ by running a maximum-flow algorithm on $G'$.
Python Code: plt.imshow(plt.imread('./res/fig26_1.png'))
# Exercise
Explanation: 26 Maximum Flow
each directed edge in a flow network is like a conduit for the material: each conduit has a stated capacity, and vertices are conduit junctions.
In the maximum-flow problem, we wish to compute the greatest rate at which we can ship material from the source to the sink without violating any capacity constraints.
26.1 Flow networks
Let $G = (V, E)$ be a flow network with a capacity function $c$. Let $s$ be the source of the network, and let $t$ be the sink. A flow in $G$ is a real-valued function $f : V \times V \to \mathbb{R}$ that satisfies the following two properties:
Capacity constraint: $0 \leq f(u, v) \leq c(u, v) \quad \forall u, v \in V$
Flow conservation: $$\displaystyle \sum_{v \in V} f(v, u) = \sum_{v \in V} f(u, v)$$
Real-world flow problems may violate our assumptions:
modeling problems with antiparallel edges.
networks with multiple sources and sinks: add a supersource and a supersink.
End of explanation
show_image('fig26_4.png', figsize=(8,12))
Explanation: 26.2 The Ford-Fulkerson method
The Ford-Fulkerson method depends on three important ideas: residual networks, augmenting paths, and cuts.
```c
FORD-FULKERSON-METHOD(G, s, t)
  initialize flow f to 0
  while there exists an augmenting path p in the residual network G_f
      augment flow f along p
  return f
```
Residual networks
The residual network $G_f$ consists of edges with capacities that represent how we can change the flow on edges of $G$. We define the residual capacity $c_f(u, v)$ by
\begin{equation} c_f(u, v) = \begin{cases} c(u, v) - f(u, v) \, & \text{ if } (u, v) \in E \\ f(v, u) \, & \text{ if } (v, u) \in E \\ 0 \, & \text{ otherwise} \end{cases} \end{equation}
The residual network of $G$ induced by $f$ is $G_f = (V, E_f)$, where
$$E_f = \{ (u, v) \in V \times V : c_f(u, v) > 0 \}$$
If $f$ is a flow in $G$ and $f'$ is a flow in the corresponding residual network $G_f$, we define $f \uparrow f'$, the augmentation of flow $f$ by $f'$:
\begin{equation} (f \uparrow f')(u, v) = \begin{cases} f(u, v) + f'(u, v) - f'(v, u) \, & \text{ if } (u, v) \in E \\ 0 \, & \text{ otherwise} \end{cases} \end{equation}
End of explanation
show_image('ford.png')
show_image('fig26_6.png')
Explanation: Augmenting paths
An augmenting path $p$ is a simple path from $s$ to $t$ in the residual network $G_f$.
$$c_f(p) = \min \{ c_f(u, v) : \text{$(u, v)$ is on $p$} \}$$
Cuts of flow networks
The Ford-Fulkerson method repeatedly augments the flow along augmenting paths until it has found a maximum flow. The max-flow min-cut theorem tells us that a flow is maximum if and only if its residual network contains no augmenting path.
The basic Ford-Fulkerson algorithm
End of explanation
# Exercise
Explanation: The Edmonds-Karp algorithm
We can improve the bound of FORD-FULKERSON by finding the augmenting path $p$ in line 3 with a breadth-first search.
End of explanation
show_image('fig26_8.png')
# Exercise
Explanation: 26.3 Maximum bipartite matching
A maximum matching in a bipartite graph $G$ corresponds to a maximum flow in its corresponding flow network $G'$, and we can therefore compute a maximum matching in $G$ by running a maximum-flow algorithm on $G'$.
End of explanation
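The code cells above are placeholders (# Exercise) that only display figures, so here is one possible sketch of the Edmonds-Karp variant described in the text. The function names, the dictionary-of-dictionaries graph representation and the small test network are my own illustrative choices, not part of the original notes.
from collections import deque

def bfs_augmenting_path(capacity, flow, s, t):
    # breadth-first search in the residual network; returns a parent map if t is reached
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in capacity[u]:
            if v not in parent and capacity[u][v] - flow[u][v] > 0:
                parent[v] = u
                if v == t:
                    return parent
                queue.append(v)
    return None

def edmonds_karp(capacity, s, t):
    # capacity: dict of dicts, capacity[u][v] = c(u, v); missing edges are treated as 0
    flow = {u: {v: 0 for v in capacity[u]} for u in capacity}
    # make sure reverse entries exist so residual capacities can be tracked
    for u in list(capacity):
        for v in list(capacity[u]):
            capacity.setdefault(v, {}).setdefault(u, 0)
            flow.setdefault(v, {}).setdefault(u, 0)
    max_flow = 0
    while True:
        parent = bfs_augmenting_path(capacity, flow, s, t)
        if parent is None:
            return max_flow
        # residual capacity c_f(p) of the augmenting path found by the BFS
        path_residual = float('inf')
        v = t
        while parent[v] is not None:
            u = parent[v]
            path_residual = min(path_residual, capacity[u][v] - flow[u][v])
            v = u
        # augment the flow along the path
        v = t
        while parent[v] is not None:
            u = parent[v]
            flow[u][v] += path_residual
            flow[v][u] -= path_residual
            v = u
        max_flow += path_residual

# a small example network; the maximum flow from 's' to 't' here is 23
graph = {'s': {'v1': 16, 'v2': 13}, 'v1': {'v3': 12}, 'v2': {'v1': 4, 'v4': 14},
         'v3': {'v2': 9, 't': 20}, 'v4': {'v3': 7, 't': 4}, 't': {}}
print(edmonds_karp(graph, 's', 't'))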
14,845
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Thanks to unicode, we may use Greek letters directly in our code. In this Jupyter Notebook, let's use θ (theta), π (pi) and τ (tau) with τ = 2 * π. Then we'll plot the graph of Euler's Formula, e to the power iθ, over the range 0 to τ. In Python we signify i, the square root of -1, as 1j for readability, so let's bind i to 1j as well.
Step1: Below we import some industrial-grade tools used for plotting with Python. The same Greek letter names remain active and guide the construction of a domain t and range s. Then we label the graph and generate a picture. plt.show(), if used, produces a plot in its own window.
Python Code: from math import e, pi as π τ = 2 * π i = 1j result = e ** (i * τ) print ("{:1.5f}".format(result.real)) Explanation: Thanks to unicode, we may use Greek letters directly in our code. In this Jupyter Notebook, lets use θ (theta), π (pi) and τ (tau) with τ = 2 * π. Then we'll plot the graph of Euler's Formula, e to the i θ over the range 0 to τ. In Python we signify i, the root of -1, as 1j for readability, so lets bind i to 1j as well. End of explanation import matplotlib.pyplot as plt import numpy as np t = np.arange(0.0, τ, 0.01) s = np.array([(np.e ** (i * θ)).real for θ in t]) plt.plot(t, s) plt.xlabel('radians') plt.ylabel('real part') plt.title('Euler\'s Formula from 0 to tau') plt.grid(True) plt.savefig("euler_test.png") # uploaded to Flickr for display below Explanation: Below we import some industrial grade tools used for plotting with Python. The same greek letter names remain active and guide the construction of a domain t and range s. Then we label the graph and generate a picture. plt.show(), if used, produces a plot in its own window. End of explanation
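A hedged extension of the plot above, not in the original notebook: numpy can evaluate the complex exponential directly with np.exp, which avoids the list comprehension and gives the imaginary part for free.
t = np.arange(0.0, τ, 0.01)
z = np.exp(i * t)                      # vectorised e**(iθ) over the whole domain
plt.plot(t, z.real, label='real part (cos θ)')
plt.plot(t, z.imag, label='imaginary part (sin θ)')
plt.xlabel('radians')
plt.legend()
plt.grid(True)
plt.savefig("euler_real_imag.png")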
14,846
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a binary array, say, a = np.random.binomial(n=1, p=1/2, size=(9, 9)). I perform median filtering on it using a 3 x 3 kernel, for example b = nd.median_filter(a, 3). I would expect this to perform a median filter based on each pixel and its eight neighbours. However, I am not sure about the placement of the kernel. The documentation says,
Problem:

import numpy as np
import scipy.ndimage
a = np.zeros((5, 5))
a[1:4, 1:4] = np.arange(3 * 3).reshape((3, 3))
b = scipy.ndimage.median_filter(a, size=(3, 3), origin=(0, 1))
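A hedged way to see what the origin shift does, which is not part of the original answer, is to compare the default kernel placement with the shifted one on the same input:
b_default = scipy.ndimage.median_filter(a, size=(3, 3))              # kernel centred on each pixel
b_shifted = scipy.ndimage.median_filter(a, size=(3, 3), origin=(0, 1))
print(b_default)
print(b_shifted)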
14,847
Given the following text description, write Python code to implement the functionality described below step by step Description: Core Techniques used in our ETL Generators Partial function application Batching / Chunking Caching Step1: Generators python generators allow you to concisely create iterators. They are a highlighted technique in this workshop because they provide Step2: Chaining Generators are first-class objects in python. So you can pass them as arguments (iterables) to other generators to change operations. Step3: Simplistic ETL This code sample shows a very simple ETL which leverages generators and chaining. This is somewhat contrived as it doesn't use a database. It uses a list as "source data" and a dictionary as a "destination" for inserting results. The main point is to show the separation of the 3 areas and how they can be chained together as generators. Step4: Partial functions You can create partial function objects using functools.partial(). This allows you to "freeze" function arguments (args) or keyword (kwargs). This is a quick method to implement encapsulation (bundling data with methods). Step5: For modules with single operations, you can quickly implement parameterization using partial functions. Step6: Batching This is also known as "chunking". This is easy using more_itertools.chunked(). This consumes any iterable, but outputs its iterated items into batched lists of a maximum size. This greatly reduces complexity of your code because you need not worry about how many items your input iterator produces. You also don't need any edge case logic to handle 'remainder' items. Step9: Depth First Search We're specifically using depth-first search with pre-order traversal Recursive Step10: Caching Some computations are time consuming. You can store pre-computed results in memory via a cache. Python comes with a built-in caching function Step11: However, using this cache requires a bit of care. The documentation briefly mentions that Step12: We can easily get around this using partial functions. Here is an example which implements a contrived (but simple Session) which explodes if you try and hash it.
Python Code: import collections import functools import more_itertools import json Explanation: Core Techniques used in our ETL Generators Partial function application Batching / Chunking Caching End of explanation # start with a function that produces a list of squared numbers def squares_as_list(max_n): accum = [] x = 1 while x <= max_n: accum.append(x * x) x = x + 1 return accum # output the result result = squares_as_list(10) print('Type is: ' + str(type(result))) for i in result: print(i) # here is a similar function, but implemented as a generator def squares_as_generator(max_n): x = 1 while x < max_n: yield x * x x = x + 1 result = squares_as_generator(10) print('Type is: ' + str(type(result))) # # loop directly as an iterable print('All 10 using a loop') for s in result: print(s) # print('Just 5 iterations to demonstrate deferred evaluation...') another_gen = squares_as_generator(10) print(next(another_gen)) print(next(another_gen)) print(next(another_gen)) print(next(another_gen)) print(next(another_gen)) Explanation: Generators python generators allow you to concisely create iterators. They are a highlighted technique in this workshop because they provide: * Concise code * Deferred evaluation * Easy chaining for composing a tranformation process End of explanation # # Generator Chaining example # def f_A(n): x = 1 while x < n: yield x * x x = x + 1 def f_B(iter_a): for y in iter_a: yield y + 10000 def f_C(iter_b): for z in iter_b: yield "'myprefix " + str(z) + "'" # chain the first two gen_a = f_A(10) gen_b = f_B(gen_a) print('First two chained') for r in gen_b: print(r) # print('\nAll 3 chained') gen_a = f_A(10) gen_b = f_B(gen_a) gen_c = f_C(gen_b) for r in gen_c: print(r) Explanation: Chaining Generators are first-class objects in python. So you can pass them as arguments (iterables) to other generators to change operations. End of explanation # source: assume this list are the database rows SOURCE_DATA = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14] DESTINATION_DB = collections.OrderedDict() def extractor(source_data): for item in source_data: yield item def transformer(iter_extractor): for item in iter_extractor: # transform it into a tuple of (n, n^2) transformed_item = (item, item * item) yield transformed_item def loader(iter_transformer, db): for item in iter_transformer: # insert each tuple as an item into the storage dictionary k = str(item[0]) v = item[1] db[k] = v # here is a simple example of chaining generators extracted_gen = extractor(SOURCE_DATA) transformed_gen = transformer(extracted_gen) loader(transformed_gen, DESTINATION_DB) # output the loaded results print(json.dumps(DESTINATION_DB, indent=2)) Explanation: Simplistic ETL This code sample shows a very simple ETL which leverages generators and chaining. This is somewhat contrived as it doesn't use a database. It uses a list as "source data" and a dictionary as a "destination" for inserting results. The main point is to show the separation of the 3 areas and how they can be chained together as generators. End of explanation def add(x, y): return x + y # print('Simple addition') # print('1 + 2 = %d' % add(1, 2)) # print('2 + 3 = %d' % add(2, 3)) print('partial add_1 function') # NOTE: order of args matters! 
add_1 = functools.partial(add, 1) print('add_1(1) = %d' % add_1(1)) print('add_1(2) = %d' % add_1(2)) print('partial add_2 function') add_2 = functools.partial(add, 2) print('add_2(1) = %d' % add_2(1)) print('add_2(2) = %d' % add_2(2)) import functools # similarly, you can freeze kwargs to avoid ordering constraints def pow(x, n=1): return x ** n print('regular') print( pow(2, n=3) ) print('partial with n=2') pow_2 = functools.partial(pow, n=2) print(type(pow_2)) print( pow_2(2) ) print('partial with n=3') pow_3 = functools.partial(pow, n=3) print( pow_3(2) ) pow_easy = functools.partial(pow, 5, n=2) print( pow_easy() ) Explanation: Partial functions You can create partial function objects using functools.partial(). This allows you to "freeze" function arguments (args) or keyword (kwargs). This is a quick method to implement encapsulation (bundling data with methods). End of explanation # example: this tranformer generator has multiple kwargs which serve # parameters indicating its behavior def tranform_func_with_config( iter_extractor, translate=0, scale=1, cast_func=int ): for x in iter_extractor: t = x + translate t = scale * t t = cast_func(t) yield (x, t) # now we can create multiple transformer configurations via partial functions # these configurations can be read from a JSON file config_1 = {'translate': 1, 'scale': 2} config_2 = {'scale': -1, 'cast_func': str} # create partial functions quickly by unpacking the configuration to freeze the kwargs transform_1 = functools.partial(tranform_func_with_config, **config_1) transform_2 = functools.partial(tranform_func_with_config, **config_2) # let's output one of them extracted_gen = extractor(SOURCE_DATA) tranform_1_gen = transform_1(extracted_gen) for t in tranform_1_gen: print(t) # any questions? # the real power is that the partial function _encapsulates_ the confirmation so that # other functions (like this simple process method) need not be concerned with it def process(f_extractor, f_transformer, f_loader): # run the process extractor_gen = f_extractor(SOURCE_DATA) transformer_gen = f_transformer(extractor_gen) f_loader(transformer_gen, DESTINATION_DB) DESTINATION_DB.clear() print('configuration 1') process(extractor, transform_1, loader) print(json.dumps(DESTINATION_DB, indent=2)) DESTINATION_DB.clear() print('\nconfiguration 2') process(extractor, transform_2, loader) print(json.dumps(DESTINATION_DB, indent=2)) Explanation: For modules with single operations, you can quickly implement parameterization using partial functions. End of explanation # range() is a python built-in. since python 3, it is a generator! source_gen = range(20) print('normal consumption') for item in source_gen: print(item) print('\nbatched consumption') source_gen = range(20) chunk_size = 3 batched_gen = more_itertools.chunked(source_gen, chunk_size) for item in batched_gen: print('{} of size {}: {}'.format(type(item), len(item), item)) Explanation: Batching This is also known as "chunking". This is easy using more_itertools.chunked(). This consumes any iterable, but outputs its iterated items into batched lists of a maximum size. This greatly reduces complexity of your code because you need not worry about how many items your input iterator produces. You also don't need any edge case logic to handle 'remainder' items. 
End of explanation # node - current node in the tree # path - list of strings representing 'path components' down the JSON tree # f_gen_items - produces 'transformed' items for a node # f_gen_children - produces child nodes to search def _recursive_map_nested(node:dict, path: list, f_gen_items, f_gen_children): if not node: # empty node return gen_items = f_gen_items(node, path) yield from gen_items gen_children = f_gen_children(node, path) for child_path, child_node in gen_children: yield from _recursive_map_nested( child_node, child_path, gen_items, gen_children) def my_gen_items(node: dict, path: list): converts scalar dictionary items to response event arguments reflecting answers for k, v in node.items(): if not isinstance(v, (dict, list,)): path_str = '.'.join(path + [k]) node_info = ... if node_info: str_value= ... yield path_str, { 'answer_type': node_info.answer_type, 'value': str_value } def my_gen_children(): locates and generates child nodes node_slug = node.get('slug') children = node.get('children') if node_slug and children: for child in children: child_slug = child.get('slug') if child_slug: yield path + [child_slug], child # initial call would be root = { ... } transformed_items = _recursive_map_nested(root, [], my_gen_items, my_gen_children) # pass 'transformed_items' (another generator) to the loader Explanation: Depth First Search We're specifically using depth-first search with pre-order traversal Recursive: The same function is called on nested subtrees Pre-order traversal: Inspect a node's items first before considering its children Depth first search: When you recurse, explore as far down a child's tree before backtracking End of explanation @functools.lru_cache(maxsize=4) def cached_pow(x, n): print("-- Oh be careful... I'm expensive!") return x ** n # this will run the actual method but cache the results print('Populate cache with 2 different items') print( cached_pow(2, 3) ) print( cached_pow(2, 4) ) # this will use cached results (notice the absence of the warning) print('\nRe-run same requests so that it retrieves from the cache') print( cached_pow(2, 3) ) print( cached_pow(2, 3) ) print( cached_pow(2, 4) ) print( cached_pow(2, 4) ) print( cached_pow(2, 4) ) # this will force an eviction (2+3 > 4 max items) of the first pow(2,3) result print('\n3 more different items') print( cached_pow(2, 5) ) print( cached_pow(2, 6) ) print( cached_pow(2, 7) ) # run the very last one along with (2,3) again to re-evaluate print('\n(2,3) should have been evicted, will require an evaluation') print( cached_pow(2, 7) ) print( cached_pow(2, 3) ) print('cache metrics') cache_info = cached_pow.cache_info() print(cache_info) Explanation: Caching Some computations are time consuming. You can store pre-computed results in memory via a cache. Python comes with a built-in caching function: functools.lru_cache(). You can easily wrap an "expensive" function so that it will cache a maximum number of results. This cache uses a Least Recently Used cache replacement policy. This just means that if you need to add a new item to a cache that is full, review your existing items and evict the least recently used one before inserting a new item. This is most easily implemented with a hash table (for quick lookup) along with a doubly-linked list (for quickly locating the least recently used item to evict). Other data structures exist with some tradeoffs (e.g. data structures with age bits). 
End of explanation # a contrived session class which uses our contrived database class CrankySession(object): def __init__(self, db): self.db = db def query(self, idx: int): print("-- fine fine... I'll check the database") return self.db[idx] def __hash__(self): raise RuntimeError("WATCH IT BUDDY! I'm not hashable!") # let's use the lru_cache decorator disregarding the documentation regarding hashable arguments @functools.lru_cache(maxsize=4) def broken_session_lookup(session: CrankySession, idx: int): return session.query(idx) # now try running it session = CrankySession(DESTINATION_DB) broken_session_lookup(session, "1") Explanation: However, using this cache requires a bit of care. The documentation briefly mentions that: ...the positional and keyword arguments to the function must be hashable... This is actually quite critical when working with the SQLAlchemy ORM. This is because the Session object should not be considered hasheable. It is a class instance that likely has a lot of internal state that is dynamically changing under the hood. End of explanation # start with an unwrapped function def raw_session_lookup(session: CrankySession, idx: int): return session.query(idx) # create a new partial function to "freeze" the session argument partial_session_lookup = functools.partial(raw_session_lookup, session) # now you can safely wrap the partial function with the lru_cache method # NOTE: you need to call the wrapper directly rather than using a decorator syntax cache_wrapper = functools.lru_cache(maxsize=4) cached_session_lookup = cache_wrapper(partial_session_lookup) # now call it to your heart's content print(cached_session_lookup("1")) print(cached_session_lookup("2")) print(cached_session_lookup("2")) print(cached_session_lookup("1")) print(cached_session_lookup("1")) cache_info = cached_session_lookup.cache_info() print(cache_info) Explanation: We can easily get around this using partial functions. Here is an example which implements a contrived (but simple Session) which explodes if you try and hash it. End of explanation
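# (Added sketch, not part of the original workshop code.) One way to tie the batching
# technique back to the simplistic ETL above: a loader that consumes the transformer
# generator in fixed-size chunks, which is how bulk inserts against a real database
# are usually done. The function name and batch size are illustrative choices.
def batched_loader(iter_transformer, db, batch_size=100):
    for batch in more_itertools.chunked(iter_transformer, batch_size):
        # with a real database this inner loop would become a single bulk insert / executemany
        for key, value in batch:
            db[str(key)] = value

DESTINATION_DB.clear()
batched_loader(transformer(extractor(SOURCE_DATA)), DESTINATION_DB, batch_size=4)
print(json.dumps(DESTINATION_DB, indent=2))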
14,848
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Bubble sort In pseudo-code the bubble sort algorithm can be written as Step2: We can see the essential features of Python used Step4: Note Step6: This gets rid of the need for a temporary variable. Exercise Here is a pseudo-code for the counting sort algorithm Step7: Simplex Method For the linear programming problem $$ \begin{aligned} \max x_1 + x_2 &= z \ 2 x_1 + x_2 & \le 4 \ x_1 + 2 x_2 & \le 3 \end{aligned} $$ where $x_1, x_2 \ge 0$, one standard approach is the simplex method. Introducing slack variables $s_1, s_2 \ge 0$ the standard tableau form becomes $$ \begin{pmatrix} 1 & -1 & -1 & 0 & 0 \ 0 & 2 & 1 & 1 & 0 \ 0 & 1 & 2 & 0 & 1 \end{pmatrix} \begin{pmatrix} z & x_1 & x_2 & s_1 & s_2 \end{pmatrix}^T = \begin{pmatrix} 0 \ 4 \ 3 \end{pmatrix}. $$ The simplex method performs row operations to remove all negative numbers from the top row, at each stage choosing the smallest (in magnitude) pivot. Assume the tableau is given in this standard form. We can use numpy to implement the problem. Step8: To access an entry we use square brackets as with lists Step9: To access a complete row or column, we use slicing notation Step10: To apply the simplex method, we have to remove the negative entries in row 0. These appear in columns 1 and 2. For column 1 the pivot in row 1 has magnitude $|-1/2| = 1/2$ and the pivot in row 2 has magnitude $|-1/1|=1$. So we choose row 1. To perform the row operation we want to eliminate all entries in column 1 except for the diagonal, which is set to $1$ Step11: Now we repeat this on column 2, noting that we can only pivot on row 2 Step12: We read off the solution (noting that floating point representations mean we need care interpreting the results) Step14: Let's turn that into a function. Step15: Building the tableau Once the problem is phrased in the tableau form the short simplex function solves it without problem. However, for large problems, we don't want to type in the matrix by hand. Instead we want a way of keeping track of the objective function to maximize, and the constraints, and make the computer do all the work. To do that we'll introduce classes. In VBA a class is a special module, and you access its variables and methods using dot notation. For example, if Student is a class, which has a variable Name, and s1 is a Student object, then s1.Name is the name associated with that particular instance of student. The same approach is used in Python Step16: See how this compares to VBA. The class keyword is used to start the definition of the class. The name of the class (Student) is given. It follows similar rules and conventions to variables, but typically is capitalized. The name in brackets (object) is what the class inherits from. Here we use the default (object). The colon and indentation denotes the class definition, in the same way as we've seen for functions and loops. Functions defined inside the class are methods. The first argument will always be an instance of the class, and by convention is called self. Methods are called using &lt;instance&gt;.&lt;method&gt;. When an instance is created (eg, by s1 = Student(...)) the __init__ method is called if it exists. We can use this to set up the instance. There are a number of special methods that can be defined that work with Python operations. For example, suppose we printed the instances above Step17: This isn't very informative. 
However, we can define the string representation of our class using the __repr__ method Step18: We can also define what it means to add two instances of our class Step19: Going back to the simplex method, we want to define a class that contains the objective function and the constraints, a method to solve the problem, and a representation of the problem and solution. Step20: Using libraries - pulp The main advantage of using Python is the range of libraries there are that you can use to more efficiently solve your problems. For linear programming there is the pulp library, which is a Python wrapper to efficient low level libraries such as GLPK. It's worth noting that pulp provides high-level access to leading proprietary libraries like CPLEX, but doesn't provide the binaries or the licences. By default pulp uses CBC which is considerably slower Step21: This gives a "meaningful" title to the problem and says if we're going to maximize or minimize. Step22: Defining the variables again gives them "meaningful" names, and specifies their lower and upper bounds, and whether the variable type is continuous or integer. We could ignore the latter two definitions as they take their default values. The first thing to do now is to define the objective function by "adding" it to the problem Step23: Again we have given a "meaningful" name to the objective function we're maximizing. Next we can create constraints and add them to the problem. Step24: If you want to save the problem at this stage, you can use problem.writeLP(&lt;filename&gt;), where the .lp extension is normally used. To solve the problem, we just call Step25: The 1 just means it did it Step26: As it's found a solution, we can print the objective function and the variables Step27: Using pulp is far easier and robust than coding our own, and will cover a much wider range of problems. Exercise Try using pulp to implement the following optimisation problem. Whiskas want to make their cat food out of just two ingredients Step28: We will assume that the bus capacity is $85$ people, that $250$ people want to travel, that they are distributed at the $10$ stops following a discrete random distribution, and each wants to travel a number of stops that also follows a discrete random distribution (distributed between $1$ and the maximum number of stops they could travel). There are smarter ways of doing it than this, I'm sure Step30: And now that we know how to do it once, we can do it many times Step31: We see that, as expected, it's the stops in the middle that fare worst. We can easily plot this Step32: Exercise Take a look at the discrete distributions in scipy.stats and work out how to improve this model. Machine Learning If you want to get a computer to classify a large dataset for you, or to "learn", then packages for Machine Learning, or Neural Networks, or Deep Learning, are the place to go. This field is very much a moving target, but the Python scikit-learn library has a lot of very useful tools that can be used as a starting point. In this example we'll focus on classification Step33: A quick reminder of what the dataset contains Step34: There are different types of iris, classified by the Name. Each individual flower observed has four measurements, given by the data. We want to use some of the data (the Sepal Length and Width, and the Petal Length and Width) to construct a model. The model will take an observation - these four numbers - and predict which flower type we have. 
We'll use the rest of the data to check how accurate our model is. First let's look at how many different types of flower there are Step35: So we're trying to choose one of three types. The range of values can be summarized Step36: There's 150 observations, with a reasonable range of values. So, let's split the dataframe into its data and its labels. What we're wanting to do here is predict the label (the type, or Name, of the Iris observed) from the data (the measurements of the sepal and petal). Step37: We then want to split our data, and associated labels, into a training set (where we tell the classifier what the answer is) and a testing set (to check the accuracy of the model) Step38: Here we have split the data set in two Step39: We now have a model Step40: We see from the first couple of entries that it's done ok, but that there are errors. As estimating the accuracy of a classification by comparing to test data is so standard, there's a function for that Step41: So the result is very accurate on this simple dataset. Exercises Vary the size of the training / testing split to see how it affects the accuracy. Try a different classifier - the KNeighborsClassifier for example. Try a different dataset Step42: The accuracy of this classifier is not great. As the train_test_split function randomly selects its training and test data, the accuracy will change every time you run it, but it tends to be 60-70%. Let's try excluding the pop data.
Python Code: def bubblesort(unsorted): Sorts an array using bubble sort algorithm Paramters --------- unsorted : list The unsorted list Returns sorted : list The sorted list (in place) last = len(unsorted) # All Python lists start from 0 for i in range(last): for j in range(i+1, last): if unsorted[i] > unsorted[j]: temp = unsorted[j] unsorted[j] = unsorted[i] unsorted[i] = temp return unsorted unsorted = [2, 4, 6, 0, 1, 3, 5] print(bubblesort(unsorted)) Explanation: Bubble sort In pseudo-code the bubble sort algorithm can be written as: Start with an unsorted list list of length n. For each element i in the list from the first to the last: For each element j in the list from element i+1 to the last: If element i is bigger than element j then swap them After this loop, the list list is now sorted. We can do a direct translation of this into Python: End of explanation unsorted = [2, 4, 6, 0, 1, 3, 5] print(sorted(unsorted)) Explanation: We can see the essential features of Python used: Python does not declare the type of the variables; There is nothing special about lists or arrays as variables when passed as arguments; To define functions the keyword is def; To define the start of a block (the body of a function, or a loop, or a conditional) a colon : is used; To define the block itself, indentation is used. The block ends when the code indentation ends; Comments are either enclosed in quotes " as for the docstring, or using #; The return value(s) from a function use the keyword return; Accessing arrays uses square brackets; The function range produces a range of integers, usually used to loop over. Note: there are in-built Python functions to sort lists which should be used in general: End of explanation def bubblesort(unsorted): Sorts an array using bubble sort algorithm Paramters --------- unsorted : list The unsorted list Returns sorted : list The sorted list (in place) last = len(unsorted) # All Python lists start from 0 for i in range(last): for j in range(i+1, last): if unsorted[i] > unsorted[j]: unsorted[j], unsorted[i] = unsorted[i], unsorted[j] return unsorted unsorted = [2, 4, 6, 0, 1, 3, 5] print(bubblesort(unsorted)) Explanation: Note: there is a "more Pythonic" way of writing the bubble sort function, taking advantage of the feature that Python can assign to multiple things at once. Compare the internals of the loop: End of explanation def countingsort(unsorted): Sorts an array using counting sort algorithm Paramters --------- unsorted : list The unsorted list Returns sorted : list The sorted list (in place) # Allocate the counts array min_value = min(unsorted) max_value = max(unsorted) # This creates a list of the right length, but the entries are not zero, so reset counts = list(range(min_value, max_value+1)) for i in range(len(counts)): counts[i] = 0 # Count the values last = len(unsorted) for i in range(last): counts[unsorted[i]] += 1 # Write the items back into the list array next_index = 0 for i in range(min_value, max_value+1): for j in range(counts[i]): unsorted[next_index] = i next_index += 1 return unsorted unsorted = [2, 4, 6, 0, 1, 3, 5] print(countingsort(unsorted)) Explanation: This gets rid of the need for a temporary variable. Exercise Here is a pseudo-code for the counting sort algorithm: Start with an unsorted list list of length n. Find the minimum value min_value and maximum value max_value of the list. 
Create a list counts that will count the number of entries in the list with value between min_value and max_value inclusive, and set its entries to zero For each element i in list from the first to the last: Add one to the counts list entry whose index matches the value of this element For each element i in the counts list from the first to the last: Set the next j entries of list equal to i After this loop, the list list is now sorted. Translate this into Python. Note that the in-built Python min and max functions can be used on lists. To create a list of the correct size you can use python counts = list(range(min_value, max_value+1)) but this list will not contain zeros so must be reset. End of explanation import numpy tableau = numpy.array([ [1, -1, -1, 0, 0, 0], [0, 2, 1, 1, 0, 4], [0, 1, 2, 0, 1, 3] ], dtype=numpy.float64) print(tableau) Explanation: Simplex Method For the linear programming problem $$ \begin{aligned} \max x_1 + x_2 &= z \ 2 x_1 + x_2 & \le 4 \ x_1 + 2 x_2 & \le 3 \end{aligned} $$ where $x_1, x_2 \ge 0$, one standard approach is the simplex method. Introducing slack variables $s_1, s_2 \ge 0$ the standard tableau form becomes $$ \begin{pmatrix} 1 & -1 & -1 & 0 & 0 \ 0 & 2 & 1 & 1 & 0 \ 0 & 1 & 2 & 0 & 1 \end{pmatrix} \begin{pmatrix} z & x_1 & x_2 & s_1 & s_2 \end{pmatrix}^T = \begin{pmatrix} 0 \ 4 \ 3 \end{pmatrix}. $$ The simplex method performs row operations to remove all negative numbers from the top row, at each stage choosing the smallest (in magnitude) pivot. Assume the tableau is given in this standard form. We can use numpy to implement the problem. End of explanation print(tableau[0, 0]) print(tableau[1, 2]) row = 2 column = 5 print(tableau[row, column]) Explanation: To access an entry we use square brackets as with lists: End of explanation print(tableau[row, :]) print(tableau[:, column]) Explanation: To access a complete row or column, we use slicing notation: End of explanation column = 1 pivot_row = 1 # Rescale pivot row tableau[pivot_row, :] /= tableau[pivot_row, column] # Remove all entries in columns except the pivot pivot0 = tableau[0, column] / tableau[pivot_row, column] tableau[0, :] -= pivot0 * tableau[pivot_row, :] pivot2 = tableau[2, column] / tableau[pivot_row, column] tableau[2, :] -= pivot2 * tableau[pivot_row, :] print(tableau) Explanation: To apply the simplex method, we have to remove the negative entries in row 0. These appear in columns 1 and 2. For column 1 the pivot in row 1 has magnitude $|-1/2| = 1/2$ and the pivot in row 2 has magnitude $|-1/1|=1$. So we choose row 1. 
To perform the row operation we want to eliminate all entries in column 1 except for the diagonal, which is set to $1$: End of explanation column = 2 pivot_row = 2 # Rescale pivot row tableau[pivot_row, :] /= tableau[pivot_row, column] # Remove all entries in columns except the pivot pivot0 = tableau[0, column] / tableau[pivot_row, column] tableau[0, :] -= pivot0 * tableau[pivot_row, :] pivot1 = tableau[1, column] / tableau[pivot_row, column] tableau[1, :] -= pivot1 * tableau[pivot_row, :] print(tableau) Explanation: Now we repeat this on column 2, noting that we can only pivot on row 2: End of explanation print("z =", tableau[0, -1]) print("x_1 =", tableau[1, -1]) print("x_2 =", tableau[2, -1]) Explanation: We read off the solution (noting that floating point representations mean we need care interpreting the results): $z = 7/3$ when $x_1 = 5/3$ and $x_2 = 2/3$: End of explanation def simplex(tableau): Assuming a standard form tableau, find the solution nvars = tableau.shape[1] - tableau.shape[0] - 1 for column in range(1, nvars+2): if tableau[0, column] < 0: pivot_row = numpy.argmin(numpy.abs(tableau[0, column] / tableau[1:, column])) + 1 # Rescale pivot row tableau[pivot_row, :] /= tableau[pivot_row, column] # Remove all entries in columns except the pivot for row in range(0, pivot_row): pivot = tableau[row, column] / tableau[pivot_row, column] tableau[row, :] -= pivot * tableau[pivot_row, :] for row in range(pivot_row+1, tableau.shape[0]): pivot = tableau[row, column] / tableau[pivot_row, column] tableau[row, :] -= pivot * tableau[pivot_row, :] z = tableau[0, -1] x = tableau[1:nvars+1, -1] return z, x tableau = numpy.array([ [1, -1, -1, 0, 0, 0], [0, 2, 1, 1, 0, 4], [0, 1, 2, 0, 1, 3] ], dtype=numpy.float64) z, x = simplex(tableau) print("z =", z) print("x =", x) Explanation: Let's turn that into a function. End of explanation class Student(object): def __init__(self, name): self.name = name def print_name(self): print("Hello", self.name) s1 = Student("Christine Carpenter") print(s1.name) s2 = Student("Jörg Fliege") s2.print_name() Explanation: Building the tableau Once the problem is phrased in the tableau form the short simplex function solves it without problem. However, for large problems, we don't want to type in the matrix by hand. Instead we want a way of keeping track of the objective function to maximize, and the constraints, and make the computer do all the work. To do that we'll introduce classes. In VBA a class is a special module, and you access its variables and methods using dot notation. For example, if Student is a class, which has a variable Name, and s1 is a Student object, then s1.Name is the name associated with that particular instance of student. The same approach is used in Python: End of explanation print(s1) print(s2) Explanation: See how this compares to VBA. The class keyword is used to start the definition of the class. The name of the class (Student) is given. It follows similar rules and conventions to variables, but typically is capitalized. The name in brackets (object) is what the class inherits from. Here we use the default (object). The colon and indentation denotes the class definition, in the same way as we've seen for functions and loops. Functions defined inside the class are methods. The first argument will always be an instance of the class, and by convention is called self. Methods are called using &lt;instance&gt;.&lt;method&gt;. When an instance is created (eg, by s1 = Student(...)) the __init__ method is called if it exists. 
We can use this to set up the instance. There are a number of special methods that can be defined that work with Python operations. For example, suppose we printed the instances above: End of explanation class Student(object): def __init__(self, name): self.name = name def __repr__(self): return self.name s1 = Student("Christine Carpenter") s2 = Student("Jörg Fliege") print(s1) print(s2) Explanation: This isn't very informative. However, we can define the string representation of our class using the __repr__ method: End of explanation class Student(object): def __init__(self, name): self.name = name def __repr__(self): return self.name def __add__(self, other): return Student(self.name + " and " + other.name) s1 = Student("Christine Carpenter") s2 = Student("Jörg Fliege") print(s1 + s2) Explanation: We can also define what it means to add two instances of our class: End of explanation class Constraint(object): def __init__(self, coefficients, value): self.coefficients = numpy.array(coefficients) self.value = value def __repr__(self): string = "" for i in range(len(self.coefficients)-1): string += str(self.coefficients[i]) + " x_{}".format(i+1) + " + " string += str(self.coefficients[-1]) + " x_{}".format(len(self.coefficients)) string += " \le " string += str(self.value) return string c1 = Constraint([2, 1], 4) c2 = Constraint([1, 2], 3) print(c1) print(c2) class Linearprog(object): def __init__(self, objective, constraints): self.objective = numpy.array(objective) self.nvars = len(self.objective) self.constraints = constraints self.nconstraints = len(self.constraints) self.tableau = numpy.zeros((1+self.nconstraints, 2+self.nvars+self.nconstraints)) self.tableau[0, 0] = 1.0 self.tableau[0, 1:1+self.nvars] = -self.objective for nc, c in enumerate(self.constraints): self.tableau[1+nc, 1:1+self.nvars] = c.coefficients self.tableau[1+nc, 1+self.nvars+nc] = 1.0 self.tableau[1+nc, -1] = c.value self.z, self.x = self.simplex() def simplex(self): for column in range(1, self.nvars+2): if self.tableau[0, column] < 0: pivot_row = numpy.argmin(numpy.abs(self.tableau[0, column] / self.tableau[1:, column])) + 1 # Rescale pivot row self.tableau[pivot_row, :] /= self.tableau[pivot_row, column] # Remove all entries in columns except the pivot for row in range(0, pivot_row): pivot = self.tableau[row, column] / self.tableau[pivot_row, column] self.tableau[row, :] -= pivot * self.tableau[pivot_row, :] for row in range(pivot_row+1, self.tableau.shape[0]): pivot = self.tableau[row, column] / self.tableau[pivot_row, column] self.tableau[row, :] -= pivot * self.tableau[pivot_row, :] z = self.tableau[0, -1] x = self.tableau[1:self.nvars+1, -1] return z, x def __repr__(self): string = "max " for i in range(len(self.objective)-1): string += str(self.objective[i]) + " x_{}".format(i+1) + " + " string += str(self.objective[-1]) + " x_{}".format(len(self.objective)) string += "\n\nwith constraints\n" for c in self.constraints: string += "\n" string += c.__repr__() string += "\n\n" string += "Solution has objective function maximum of " + str(self.z) string += "\n\n" string += "at location x = " + str(self.x) return string problem = Linearprog([1, 1], [c1, c2]) print(problem) Explanation: Going back to the simplex method, we want to define a class that contains the objective function and the constraints, a method to solve the problem, and a representation of the problem and solution. 
End of explanation import pulp problem = pulp.LpProblem("Simple problem", pulp.LpMaximize) Explanation: Using libraries - pulp The main advantage of using Python is the range of libraries there are that you can use to more efficiently solve your problems. For linear programming there is the pulp library, which is a Python wrapper to efficient low level libraries such as GLPK. It's worth noting that pulp provides high-level access to leading proprietary libraries like CPLEX, but doesn't provide the binaries or the licences. By default pulp uses CBC which is considerably slower: consult your supervisor as to what's suitable when for your work. There are a range of examples that you can look at, but we'll quickly revisit the example above. The approach is to use a lot of pulp defined classes, which are hopefully fairly transparent: End of explanation x1 = pulp.LpVariable("x_1", lowBound=0, upBound=None, cat='continuous') x2 = pulp.LpVariable("x_2", lowBound=0, upBound=None, cat='continuous') Explanation: This gives a "meaningful" title to the problem and says if we're going to maximize or minimize. End of explanation objective = x1 + x2, "Objective function to maximize" problem += objective Explanation: Defining the variables again gives them "meaningful" names, and specifies their lower and upper bounds, and whether the variable type is continuous or integer. We could ignore the latter two definitions as they take their default values. The first thing to do now is to define the objective function by "adding" it to the problem: End of explanation c1 = 2 * x1 + x2 <= 4, "First constraint" c2 = x1 + 2 * x2 <= 3, "Second constraint" problem += c1 problem += c2 Explanation: Again we have given a "meaningful" name to the objective function we're maximizing. Next we can create constraints and add them to the problem. End of explanation problem.solve() Explanation: If you want to save the problem at this stage, you can use problem.writeLP(&lt;filename&gt;), where the .lp extension is normally used. To solve the problem, we just call End of explanation print("Status:", pulp.LpStatus[problem.status]) Explanation: The 1 just means it did it: it does not say whether it succeeded! We need to print the status: End of explanation print("Maximized objective function = ", pulp.value(problem.objective)) for v in problem.variables(): print(v.name, "=", v.varValue) Explanation: As it's found a solution, we can print the objective function and the variables: End of explanation bus_stops = ["Airport Parkway Station", "Wessex Lane", "Highfield Interchange", "Portswood Broadway", "The Avenue Archers Road", "Civic Centre", "Central Station", "West Quay", "Town Quay", "NOCS"] Explanation: Using pulp is far easier and robust than coding our own, and will cover a much wider range of problems. Exercise Try using pulp to implement the following optimisation problem. Whiskas want to make their cat food out of just two ingredients: chicken and beef. These ingredients must be blended such that they meet the nutritional requirements for the food whilst minimising costs. 
The costs of chicken and beef are \$0.013 and \$0.008 per gram, and their nutritional contributions per gram are: Stuff | Protein | Fat | Fibre | Salt :-- | :-: | --: Chicken | 0.100 | 0.080 | 0.001 | 0.002 Beef | 0.200 | 0.100 | 0.005 | 0.005 Let's define our decision variables: $$x_1 = \text{percentage of chicken in can of cat food} $$ $$x_2 = \text{percentage of beef in can of cat food}$$ As these are percentages, we know that both must be $0 \leq x \leq 100$ and that they must sum to 100. The objective function to minimise costs is $$\min 0.013 x_1 + 0.008 x_2$$ The constraints (that the variables must sum to 100 and that the nutritional requirements are met) are: $$1.000 x_1 + 1.000 x_2 = 100.0$$ $$0.100 x_1 + 0.200 x_2 \ge 8.0$$ $$0.080 x_1 + 0.100 x_2 \ge 6.0$$ $$0.001 x_1 + 0.005 x_2 \le 2.0$$ $$0.002 x_1 + 0.005 x_2 \le 0.4$$ This problem was taken from the pulp documentation - you can find the solution there. Further reading There's a number of projects using pulp out there - one for people interested in scheduling is Conference Scheduler which works out when to put talks on, given constraints. Monte Carlo One type of optimization problem deals with queues. As an example problem we'll take the Unilink bus service U1C from the Airport into the centre and ask: at busy times, how many people will not be able to get on the bus, and at what stops? If we use a fixed set of customers and want to simulate the events in time, this is an example of a discrete event model. An example Python Discrete Event simulator is ciw, which has a detailed set of tutorials. If we want to provide a random set of customers, to see what range of problems we may have, this is an example of Monte Carlo simulation. This can be done using standard Python random number generators, built into (for example) numpy and scipy. We will only consider the main stops: End of explanation import numpy capacity = 85 n_people = 250 total_stops = len(bus_stops) initial_stops = numpy.random.randint(0, total_stops-1, n_people) n_stops = numpy.zeros_like(initial_stops) n_onboard = numpy.zeros((total_stops,), dtype=numpy.int) n_left_behind = numpy.zeros_like(n_onboard) for i in range(total_stops): if i == total_stops - 1: # Can only take one stop n_stops[initial_stops == i] = 1 else: n_people_at_stop = len(initial_stops[initial_stops == i]) n_stops[initial_stops == i] = numpy.random.randint(1, total_stops-i, n_people_at_stop) for i in range(total_stops): n_people_at_stop = len(initial_stops[initial_stops == i]) n_people_getting_on = max([0, min([n_people_at_stop, capacity - n_onboard[i]])]) n_left_behind[i] = max([n_people_at_stop - n_people_getting_on, 0]) for fill_stops in n_stops[initial_stops == i][:n_people_getting_on]: n_onboard[i:i+fill_stops] += 1 print(n_left_behind) print(n_onboard) Explanation: We will assume that the bus capacity is $85$ people, that $250$ people want to travel, that they are distributed at the $10$ stops following a discrete random distribution, and each wants to travel a number of stops that also follows a discrete random distribution (distributed between $1$ and the maximum number of stops they could travel). There are smarter ways of doing it than this, I'm sure: End of explanation def mc_unilink(n_people, n_runs = 10000): Given n_people wanting to ride the U1, use Monte Carlo to see how many are left behind on average at each stop. 
Parameters ---------- n_people : int Total number of people wanting to use the bus n_runs : int Number of realizations Returns ------- n_left_behind_average : array of float Average number of people left behind at each stop bus_stops = ["Airport Parkway Station", "Wessex Lane", "Highfield Interchange", "Portswood Broadway", "The Avenue Archers Road", "Civic Centre", "Central Station", "West Quay", "Town Quay", "NOCS"] total_stops = len(bus_stops) capacity = 85 n_left_behind = numpy.zeros((total_stops, n_runs), dtype = numpy.int) for run in range(n_runs): initial_stops = numpy.random.randint(0, total_stops-1, n_people) n_stops = numpy.zeros_like(initial_stops) n_onboard = numpy.zeros((total_stops,), dtype=numpy.int) for i in range(total_stops): if i == total_stops - 1: # Can only take one stop n_stops[initial_stops == i] = 1 else: n_people_at_stop = len(initial_stops[initial_stops == i]) n_stops[initial_stops == i] = numpy.random.randint(1, total_stops-i, n_people_at_stop) for i in range(total_stops): n_people_at_stop = len(initial_stops[initial_stops == i]) n_people_getting_on = max([0, min([n_people_at_stop, capacity - n_onboard[i]])]) n_left_behind[i, run] = max([n_people_at_stop - n_people_getting_on, 0]) for fill_stops in n_stops[initial_stops == i][:n_people_getting_on]: n_onboard[i:i+fill_stops] += 1 return numpy.mean(n_left_behind, axis=1) n_left_behind_average = mc_unilink(250, 10000) n_left_behind_average Explanation: And now that we know how to do it once, we can do it many times: End of explanation %matplotlib inline from matplotlib import pyplot x = list(range(len(n_left_behind_average))) pyplot.bar(x, n_left_behind_average) pyplot.xticks(x, bus_stops, rotation='vertical') pyplot.ylabel("Average # passengers unable to board") pyplot.show() Explanation: We see that, as expected, it's the stops in the middle that fare worst. We can easily plot this: End of explanation import numpy import pandas import sklearn iris = pandas.read_csv('https://raw.githubusercontent.com/pandas-dev/pandas/master/pandas/tests/data/iris.csv') Explanation: Exercise Take a look at the discrete distributions in scipy.stats and work out how to improve this model. Machine Learning If you want to get a computer to classify a large dataset for you, or to "learn", then packages for Machine Learning, or Neural Networks, or Deep Learning, are the place to go. This field is very much a moving target, but the Python scikit-learn library has a lot of very useful tools that can be used as a starting point. In this example we'll focus on classification: given a dataset that's known to fall into fixed groups, develop a model that predicts from the data what group a new data point falls within. As a concrete example we'll use the standard Iris data set again, which we can get from GitHub. We used this with pandas, and we can use that route to get the data in. End of explanation iris.head() Explanation: A quick reminder of what the dataset contains: End of explanation iris['Name'].unique() Explanation: There are different types of iris, classified by the Name. Each individual flower observed has four measurements, given by the data. We want to use some of the data (the Sepal Length and Width, and the Petal Length and Width) to construct a model. The model will take an observation - these four numbers - and predict which flower type we have. We'll use the rest of the data to check how accurate our model is. 
First let's look at how many different types of flower there are: End of explanation iris.describe() Explanation: So we're trying to choose one of three types. The range of values can be summarized: End of explanation labels = iris['Name'] data = iris.drop('Name', axis=1) Explanation: There's 150 observations, with a reasonable range of values. So, let's split the dataframe into its data and its labels. What we're wanting to do here is predict the label (the type, or Name, of the Iris observed) from the data (the measurements of the sepal and petal). End of explanation from sklearn.model_selection import train_test_split data_train, data_test, labels_train, labels_test = train_test_split(data, labels, test_size = 0.5) Explanation: We then want to split our data, and associated labels, into a training set (where we tell the classifier what the answer is) and a testing set (to check the accuracy of the model): End of explanation from sklearn import tree classifier = tree.DecisionTreeClassifier() classifier.fit(data_train, labels_train) Explanation: Here we have split the data set in two: 50% is in the training set, and 50% in the testing set. We can now use a classification algorithm. To start, we will use a decision tree algorithm: End of explanation print(labels_test) print(classifier.predict(data_test)) Explanation: We now have a model: given data, it will return its prediction for the label. We use the testing data to check the model: End of explanation from sklearn.metrics import accuracy_score accuracy = accuracy_score(labels_test, classifier.predict(data_test)) print("Decision Tree Accuracy with 50/50: {}".format(accuracy)) Explanation: We see from the first couple of entries that it's done ok, but that there are errors. As estimating the accuracy of a classification by comparing to test data is so standard, there's a function for that: End of explanation from sklearn.model_selection import train_test_split from sklearn import tree from sklearn.metrics import accuracy_score dfs = {'indie': pandas.read_csv('spotify_data/indie.csv'), 'pop': pandas.read_csv('spotify_data/pop.csv'), 'country': pandas.read_csv('spotify_data/country.csv'), 'metal': pandas.read_csv('spotify_data/metal.csv'), 'house': pandas.read_csv('spotify_data/house.csv'), 'rap': pandas.read_csv('spotify_data/rap.csv')} for genre, df in dfs.items(): df['genre'] = genre dat = pandas.concat(dfs.values()) # define a list of the fields we want to use to train our classifier columns = ['duration_ms', 'explicit', 'popularity', 'acousticness', 'danceability', 'energy', 'instrumentalness', 'key', 'liveness', 'loudness', 'mode', 'speechiness', 'tempo', 'time_signature', 'valence', 'genre'] # define data as all columns but the genre column data = dat[columns].drop('genre', axis=1) # define labels as the genre column labels = dat[columns].genre # split the data into a training set and a testing set data_train, data_test, labels_train, labels_test = train_test_split(data, labels, test_size = 0.3) # create the classifier classifier = tree.DecisionTreeClassifier() # train the classifier using the training data classifier.fit(data_train, labels_train) # calculate the accuracy of the classifier using the testing data accuracy = accuracy_score(labels_test, classifier.predict(data_test)) print("Decision Tree Accuracy with 50/50: {}".format(accuracy)) Explanation: So the result is very accurate on this simple dataset. Exercises Vary the size of the training / testing split to see how it affects the accuracy. 
Try a different classifier - the KNeighborsClassifier for example (a short sketch of this is included at the end of this section). Try a different dataset: have a go at creating a classifier for the music data we used in the data handling session. See how the accuracy changes when you exclude the pop dataset - can you think why this may be so? Does the same thing happen when we exclude other genres? Look at sklearn.datasets for possibilities, such as the digits dataset, which does handwriting recognition. Here's a worked solution for the music data classifier. We'll start by importing the libraries and data we need. End of explanation
nopop_dat = dat[dat.genre != 'pop']

# define data as all columns but the genre column
data = nopop_dat[columns].drop('genre', axis=1)
# define labels as the genre column
labels = nopop_dat[columns].genre

data_train, data_test, labels_train, labels_test = train_test_split(data, labels, test_size = 0.1)

classifier = tree.DecisionTreeClassifier()
classifier.fit(data_train, labels_train)

accuracy = accuracy_score(labels_test, classifier.predict(data_test))
print("Decision Tree Accuracy without pop (90/10 split): {}".format(accuracy))
Explanation: The accuracy of this classifier is not great. As the train_test_split function randomly selects its training and test data, the accuracy will change every time you run it, but it tends to be 60-70%. Let's try excluding the pop data. End of explanation
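As a follow-up to the "try a different classifier" exercise above, here is a minimal sketch using KNeighborsClassifier on the same genre data. It reuses the dat, columns, train_test_split and accuracy_score names defined in the worked solution above; the StandardScaler step and n_neighbors=5 are illustrative assumptions (distance-based classifiers usually benefit from scaling), not tuned choices.

from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Reuse the full genre dataset built above (all six genres included).
data = dat[columns].drop('genre', axis=1)
labels = dat[columns].genre
data_train, data_test, labels_train, labels_test = train_test_split(data, labels, test_size=0.3)

# Scale the features, then classify by the five nearest neighbours.
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(data_train, labels_train)
knn_accuracy = accuracy_score(labels_test, knn.predict(data_test))
print("KNeighborsClassifier accuracy (70/30 split): {}".format(knn_accuracy))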
14,849
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction In this notebook, we will demonstrate how Google Sheets can be used as a simple medium for managing, updating, and evaluating Intents and Training Phrases in Dialogflow CX. Specifically, we will show how to update Existing Intents and Training Phrases in Dialogflow CX using Google Sheets as a Source Prerequisites Ensure you have a GCP Service Account key with the Dialogflow API Admin privileges assigned to it Step1: Imports Step2: User Inputs In the next section, we will collect runtime variables needed to execute this notebook. This should be the only cell of the notebook you need to edit in order for this notebook to run. For this notebook, we'll need the following inputs Step3: CX to Sheets - Filtered by Intents in Scope of a Flow Here, we will demonstrate how to extract all of the Intents and Training Phrases associated with a specific Flow inside of a Dialogflow CX Agent. In our previous notebook example, we extracted ALL of the Intents and Training Phrases associated with the Agent. But in some cases, you may only be interested in Intents that are currently in use with Flow A or Flow B. The following code allows you to easily extract that information and move it to a Google Sheet for review. Prerequisites In order for the DataframeFunctions class to interact with Google Sheets, you must share your Google Sheet with your Service Account email address.
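Before installing and running anything, it can save a failed run to sanity-check the runtime inputs listed above. The helper below is an optional sketch using only the standard library; the agent-ID pattern assumes the usual fully qualified Dialogflow CX resource path (projects/<project>/locations/<location>/agents/<id>) and should be adjusted if your IDs are formatted differently.

import os
import re

def validate_inputs(creds_path, agent_id, google_sheet_name):
    # Return a list of human-readable problems; an empty list means the basic checks passed.
    problems = []
    if not os.path.isfile(creds_path):
        problems.append('Service Account key file not found: {}'.format(creds_path))
    if not re.match(r'^projects/[^/]+/locations/[^/]+/agents/[^/]+$', agent_id):
        problems.append('agent_id does not look like a CX resource path: {}'.format(agent_id))
    if not google_sheet_name.strip():
        problems.append('google_sheet_name is empty')
    return problems

# Example usage once the variables below have been set:
# for problem in validate_inputs(creds_path, agent_id, google_sheet_name):
#     print(problem)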
Python Code: #If you haven't already, make sure you install the `dfcx-scrapi` library !pip install dfcx-scrapi Explanation: Introduction In this notebook, we will demonstrate how Google Sheets can be used as a simple medium for managing, updating, and evaluating Intents and Training Phrases in Dialogflow CX. Specifically, we will show how to update Existing Intents and Training Phrases in Dialogflow CX using Google Sheets as a Source Prerequisites Ensure you have a GCP Service Account key with the Dialogflow API Admin privileges assigned to it End of explanation import pandas as pd from dfcx_scrapi.tools.copy_util import CopyUtil from dfcx_scrapi.tools.dataframe_functions import DataframeFunctions Explanation: Imports End of explanation creds_path = '<YOUR_CREDS_PATH_HERE>' agent_id = '<YOUR_AGENT_ID_HERE>' flow = '<YOUR_FLOW_DISPLAY_NAME>' google_sheet_name = 'My Google Sheet Name' google_sheet_tab_write = 'Write To My Tab Name' Explanation: User Inputs In the next section, we will collect runtime variables needed to execute this notebook. This should be the only cell of the notebook you need to edit in order for this notebook to run. For this notebook, we'll need the following inputs: creds_path: Your local path to your GCP Service Account Credentials agent_id: Your Dialogflow CX Agent ID in String format google_sheet_name: The name of your Google Sheet google_sheet_tab_read: The name of the tab in your Google Sheet to read the data from End of explanation cu = CopyUtil(creds_path=creds_path, agent_id=agent_id) dffx = DataframeFunctions(creds_path) flow_map = cu.flows.get_flows_map(reverse=True) pages = cu.pages.list_pages(flow_map[flow]) resources = cu.get_page_dependencies(pages) for key in resources.keys(): if key == 'intents': intent_list = list(resources[key]) all_intents = cu.intents.list_intents() final_intents = [] for intent in all_intents: if intent.name in intent_list: final_intents.append(intent) df = pd.DataFrame() for intent in final_intents: df = df.append(cu.intents.intent_proto_to_dataframe(intent)) # Push DataFrame to Google Sheets dffx.dataframe_to_sheets(google_sheet_name, google_sheet_tab_write, df) print('Total # of Intents = {}'.format(df.intent.nunique())) print('Total # of Training Phrases = {}'.format(df.tp.nunique())) Explanation: CX to Sheets - Filtered by Intents in Scope of a Flow Here, we will demonstrate how to extract all of the Intents and Training Phrases associated with a specific Flow inside of a Dialogflow CX Agent. In our previous notebook example, we extracted ALL of the Intents and Training Phrases associated with the Agent. But in some cases, you may only be interested in Intents that are currently in use with Flow A or Flow B. The following code allows you to easily extract that information and move it to a Google Sheet for review. Prerequisites In order for the DataframeFunctions class to interact with Google Sheets, you must share your Google Sheet with your Service Account email address. End of explanation
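As an optional final step before handing the sheet over for review, the snippet below summarizes training-phrase coverage per intent so sparse intents stand out. It assumes the df, dffx and google_sheet_name objects created above, along with the 'intent' and 'tp' columns produced by intent_proto_to_dataframe; the 'Intent Summary' tab name is a placeholder and the tab must exist in (or be added to) your shared sheet.

# Count distinct training phrases per intent, sparsest intents first.
summary = (df.groupby('intent', as_index=False)
             .agg(num_training_phrases=('tp', 'nunique'))
             .sort_values('num_training_phrases'))
print(summary.head(10))

# Push the summary to its own tab using the same helper as above.
dffx.dataframe_to_sheets(google_sheet_name, 'Intent Summary', summary)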
14,850
Given the following text description, write Python code to implement the functionality described below step by step Description: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small> Challenge Notebook Problem Step1: Unit Test
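The challenge below assumes a stack implementation is already available (it is pulled in with %run ../stack/stack.py). For reference, here is a minimal sketch of the interface such a class typically provides (push, pop, peek, is_empty); the real stack.py in the repository may differ in detail and should be preferred when it is available.

# Minimal stand-in for the Stack class loaded from ../stack/stack.py (sketch only).
class Stack(object):

    def __init__(self):
        self.items = []

    def push(self, data):
        self.items.append(data)

    def pop(self):
        # Return None rather than raising when the stack is empty.
        return self.items.pop() if self.items else None

    def peek(self):
        return self.items[-1] if self.items else None

    def is_empty(self):
        return not self.items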
Python Code: %run ../stack/stack.py %load ../stack/stack.py class QueueFromStacks(object): def __init__(self): # TODO: Implement me pass def shift_stacks(self, source, destination): # TODO: Implement me pass def enqueue(self, data): # TODO: Implement me pass def dequeue(self): # TODO: Implement me pass Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small> Challenge Notebook Problem: Implement a queue using two stacks. Constraints Test Cases Algorithm Code Unit Test Solution Notebook Constraints Do you expect the methods to be enqueue and dequeue? Yes Can we assume we already have a stack class that can be used for this problem? Yes Test Cases Enqueue and dequeue on empty stack Enqueue and dequeue on non-empty stack Multiple enqueue in a row Multiple dequeue in a row Enqueue after a dequeue Dequeue after an enqueue Algorithm Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code End of explanation # %load test_queue_from_stacks.py from nose.tools import assert_equal class TestQueueFromStacks(object): def test_queue_from_stacks(self): print('Test: Dequeue on empty stack') queue = QueueFromStacks() assert_equal(queue.dequeue(), None) print('Test: Enqueue on empty stack') print('Test: Enqueue on non-empty stack') print('Test: Multiple enqueue in a row') num_items = 3 for i in range(0, num_items): queue.enqueue(i) print('Test: Dequeue on non-empty stack') print('Test: Dequeue after an enqueue') assert_equal(queue.dequeue(), 0) print('Test: Multiple dequeue in a row') assert_equal(queue.dequeue(), 1) assert_equal(queue.dequeue(), 2) print('Test: Enqueue after a dequeue') queue.enqueue(5) assert_equal(queue.dequeue(), 5) print('Success: test_queue_from_stacks') def main(): test = TestQueueFromStacks() test.test_queue_from_stacks() if __name__ == '__main__': main() Explanation: Unit Test End of explanation
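One way to fill in the stub above (the official answer lives in the linked solution notebook) is to keep an "in" stack for enqueues and shift elements onto an "out" stack only when a dequeue finds that stack empty. The sketch below assumes the push/pop/is_empty Stack interface described earlier and uses a different class name so it does not shadow the challenge stub.

class QueueFromStacksSketch(object):

    def __init__(self):
        self.in_stack = Stack()   # newest element on top
        self.out_stack = Stack()  # oldest element on top

    def shift_stacks(self, source, destination):
        # Pop everything off source and push it onto destination, reversing the order.
        while not source.is_empty():
            destination.push(source.pop())

    def enqueue(self, data):
        self.in_stack.push(data)

    def dequeue(self):
        if self.out_stack.is_empty():
            self.shift_stacks(self.in_stack, self.out_stack)
        return self.out_stack.pop()  # None when both stacks are empty

# Quick usage check: items come back out in FIFO order.
queue = QueueFromStacksSketch()
for item in (0, 1, 2):
    queue.enqueue(item)
print(queue.dequeue(), queue.dequeue(), queue.dequeue())  # 0 1 2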
14,851
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Python Steering The following is a tour of the basic layout of CRPropa 3, showing how to setup and run a 1D simulation of the extragalactic propagation of UHECR protons from a Python shell. Simulation setup We start with a ModuleList, which is a container for simulation modules, and represents the simulation. The first module in a simulation should be a propagation module, which will move the cosmic rays. In a 1D simulation magnetic deflections of charged particles are not considered, thus we can use the SimplePropagation module for rectalinear propagation. Next we add modules for photo-pion and electron-pair production with the cosmic microwave background and a module for neutron and nuclear decay. Finally we add a minimum energy requirement Step1: Propagating a single particle The simulation can now be used to propagate a cosmic ray, which is called candidate. We create a 100 EeV proton and propagate it using the simulation. The propagation stops when the energy drops below the minimum energy requirement that was specified. The possible propagation distances are rather long since we are neglecting cosmology in this example. Step2: Defining an observer To define an observer within the simulation we create a Observer object. The convention of 1D simulations is that cosmic rays, starting from positive coordinates, propagate in the negative direction until the reach the observer at 0. Only the x-coordinate is used in the three-vectors that represent position and momentum. Step3: Defining the output file We want to save the propagated cosmic rays to an output file. Plain text output is provided by the TextOutput module. For the type of information being stored we can use one of five presets Step4: If in the example above output1 is added to the module list, it is called on every propagation step to write out the cosmic ray information. To save only cosmic rays that reach our observer, we add an output to the observer that we previously defined. This time we are satisfied with the output type Event1D. Step5: Similary, the output could be linked to the MinimumEnergy module to save those cosmic rays that fall below the minimum energy, and so on. Note Step6: Running the simulation Finally we run the simulation to inject and propagate 10000 cosmic rays. An optional progress bar can show the progress of the simulation. Step7: (Optional) Plotting This is not part of CRPropa, but since we're at it we can plot the energy spectrum of detected particles to observe the GZK suppression. The plotting is done here using matplotlib, but of course you can use whatever plotting tool you prefer.
Python Code: from crpropa import * # simulation: a sequence of simulation modules sim = ModuleList() # add propagator for rectalinear propagation sim.add(SimplePropagation()) # add interaction modules sim.add(PhotoPionProduction(CMB())) sim.add(ElectronPairProduction(CMB())) sim.add(NuclearDecay()) sim.add(MinimumEnergy(1 * EeV)) Explanation: Introduction to Python Steering The following is a tour of the basic layout of CRPropa 3, showing how to setup and run a 1D simulation of the extragalactic propagation of UHECR protons from a Python shell. Simulation setup We start with a ModuleList, which is a container for simulation modules, and represents the simulation. The first module in a simulation should be a propagation module, which will move the cosmic rays. In a 1D simulation magnetic deflections of charged particles are not considered, thus we can use the SimplePropagation module for rectalinear propagation. Next we add modules for photo-pion and electron-pair production with the cosmic microwave background and a module for neutron and nuclear decay. Finally we add a minimum energy requirement: Cosmic rays are stopped once they reach the minimum energy. In general the order of modules doesn't matter much for sufficiently small propagation steps. For good practice, we recommend the order: Propagator --> Interactions -> Break conditions -> Observer / Output. Please note that all input, output and internal calculations are done using SI-units to enforce expressive statements such as E = 1 * EeV or D = 100 * Mpc. End of explanation cosmicray = Candidate(nucleusId(1, 1), 200 * EeV, Vector3d(100 * Mpc, 0, 0)) sim.run(cosmicray) print(cosmicray) print('Propagated distance', cosmicray.getTrajectoryLength() / Mpc, 'Mpc') Explanation: Propagating a single particle The simulation can now be used to propagate a cosmic ray, which is called candidate. We create a 100 EeV proton and propagate it using the simulation. The propagation stops when the energy drops below the minimum energy requirement that was specified. The possible propagation distances are rather long since we are neglecting cosmology in this example. End of explanation # add an observer obs = Observer() obs.add(ObserverPoint()) # observer at x = 0 sim.add(obs) print(obs) Explanation: Defining an observer To define an observer within the simulation we create a Observer object. The convention of 1D simulations is that cosmic rays, starting from positive coordinates, propagate in the negative direction until the reach the observer at 0. Only the x-coordinate is used in the three-vectors that represent position and momentum. End of explanation # trajectory output output1 = TextOutput('trajectories.txt', Output.Trajectory1D) #sim.add(output1) # generates a lot of output #output1.disable(Output.RedshiftColumn) # don't save the current redshift #output1.disableAll() # disable everything to start from scratch #output1.enable(Output.CurrentEnergyColumn) # current energy #output1.enable(Output.CurrentIdColumn) # current particle type # ... Explanation: Defining the output file We want to save the propagated cosmic rays to an output file. Plain text output is provided by the TextOutput module. For the type of information being stored we can use one of five presets: Event1D, Event3D, Trajectory1D, Trajectory3D and Everything. 
We can also fine tune with enable(XXXColumn) and disable(XXXColumn) End of explanation # event output output2 = TextOutput('events.txt', Output.Event1D) obs.onDetection(output2) #sim.run(cosmicray) #output2.close() Explanation: If in the example above output1 is added to the module list, it is called on every propagation step to write out the cosmic ray information. To save only cosmic rays that reach our observer, we add an output to the observer that we previously defined. This time we are satisfied with the output type Event1D. End of explanation # cosmic ray source source = Source() source.add(SourcePosition(100 * Mpc)) source.add(SourceParticleType(nucleusId(1, 1))) source.add(SourcePowerLawSpectrum(1 * EeV, 200 * EeV, -1)) print(source) Explanation: Similary, the output could be linked to the MinimumEnergy module to save those cosmic rays that fall below the minimum energy, and so on. Note: If we want to use the CRPropa output file from within the same script that runs the simulation, the output module should be explicitly closed after the simulation run in order to get all events flushed to the file. Defining the source To avoid setting each individual cosmic ray by hand we defince a cosmic ray source. The source is located at a distance of 100 Mpc and accelerates protons with a power law spectrum and energies between 1 - 200 EeV. End of explanation sim.setShowProgress(True) # switch on the progress bar sim.run(source, 10000) Explanation: Running the simulation Finally we run the simulation to inject and propagate 10000 cosmic rays. An optional progress bar can show the progress of the simulation. End of explanation %matplotlib inline import matplotlib.pyplot as plt import numpy as np output2.close() # close output file before loading data = np.genfromtxt('events.txt', names=True) print('Number of events', len(data)) logE0 = np.log10(data['E0']) + 18 logE = np.log10(data['E']) + 18 plt.figure(figsize=(10, 7)) h1 = plt.hist(logE0, bins=25, range=(18, 20.5), histtype='stepfilled', alpha=0.5, label='At source') h2 = plt.hist(logE, bins=25, range=(18, 20.5), histtype='stepfilled', alpha=0.5, label='Observed') plt.xlabel('log(E/eV)') plt.ylabel('N(E)') plt.legend(loc = 'upper left', fontsize=20) Explanation: (Optional) Plotting This is not part of CRPropa, but since we're at it we can plot the energy spectrum of detected particles to observe the GZK suppression. The plotting is done here using matplotlib, but of course you can use whatever plotting tool you prefer. End of explanation
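As an optional follow-up to the plot, the sketch below quantifies the suppression using the data array already loaded from events.txt (energies in this output are in EeV, as implied by the log10(E) + 18 conversion above). The 60 EeV threshold is only an illustrative choice.

import numpy as np

# Mean fraction of energy lost between injection (E0) and observation (E).
frac_loss = 1.0 - data['E'] / data['E0']
print('Mean fractional energy loss: {:.2f}'.format(np.mean(frac_loss)))

# Compare how many events start above the threshold with how many arrive above it.
threshold = 60.0  # EeV
n_injected = np.sum(data['E0'] > threshold)
n_observed = np.sum(data['E'] > threshold)
print('Events above {:.0f} EeV: {} injected vs {} observed'.format(threshold, n_injected, n_observed))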
14,852
Given the following text description, write Python code to implement the functionality described below step by step Description: Tutorial Part 23 Step1: Make The Datasets Let's begin by loading some molecules to work with. We load Tox21, specifying splitter=None so everything will be returned as a single dataset. Step3: Because ScScore is trained on relative complexities, we want the X tensor in our dataset to have 3 dimensions (sample_id, molecule_id, features). The molecule_id dimension has size 2 because a sample is a pair of molecules. The label is 1 if the first molecule is more complex than the second molecule. The function create_dataset we introduce below pulls random pairs of SMILES strings out of a given list and ranks them according to this complexity measure. In the real world you could use purchase cost, or number of reaction steps required as your complexity score. Step4: With our complexity ranker in place we can now construct our dataset. Let's start by randomly splitting the list of molecules into training and test sets. Step5: We'll featurize all our molecules with the ECFP fingerprint with chirality (matching the source paper), and will then construct our pairwise dataset using the function defined above. Step6: Now that we have our dataset created, let's train a ScScoreModel on this dataset. Step7: Model Performance Lets evaluate how well the model does on our holdout molecules. The SaScores should track the length of SMILES strings from never before seen molecules. Step8: Let's now plot the length of the smiles string of the molecule against the SaScore using matplotlib.
Python Code: !curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import conda_installer conda_installer.install() !/root/miniconda/bin/conda info -e !pip install --pre deepchem import deepchem deepchem.__version__ Explanation: Tutorial Part 23: Synthetic Feasibility Synthetic feasibility is a problem when running large scale enumerations. Often molecules that are enumerated are very difficult to make and thus not worth inspection, even if their other chemical properties are good in silico. This tutorial goes through how to train the ScScore model [1]. The idea of the model is to train on pairs of molecules where one molecule is "more complex" than the other. The neural network then can make scores which attempt to keep this pairwise ordering of molecules. The final result is a model which can give a relative complexity of a molecule. The paper trains on every reaction in reaxys, declaring products more complex than reactions. Since this training set is prohibitively expensive we will instead train on arbitrary molecules declaring one more complex if its SMILES string is longer. In the real world you can use whatever measure of complexity makes sense for the project. In this tutorial, we'll use the Tox21 dataset to train our simple synthetic feasibility model. Colab This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. Setup To run DeepChem within Colab, you'll need to run the following installation commands. This will take about 5 minutes to run to completion and install your environment. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install Anaconda on your local machine. End of explanation import deepchem as dc tasks, datasets, transformers = dc.molnet.load_tox21(featurizer='Raw', splitter=None) molecules = datasets[0].X Explanation: Make The Datasets Let's begin by loading some molecules to work with. We load Tox21, specifying splitter=None so everything will be returned as a single dataset. End of explanation from rdkit import Chem import random from deepchem.feat import CircularFingerprint import numpy as np def create_dataset(fingerprints, smiles_lens, ds_size=100000): m1: list of np.Array fingerprints for molecules m2: list of int length of a molecules SMILES string returns: dc.data.Dataset for input into ScScore Model Dataset.X shape is (sample_id, molecule_id, features) Dataset.y shape is (sample_id,) values is 1 if the 0th index molecule is more complex 0 if the 1st index molecule is more complex X, y = [], [] all_data = list(zip(fingerprints, smiles_lens)) while len(y) < ds_size: i1 = random.randrange(0, len(smiles_lens)) i2 = random.randrange(0, len(smiles_lens)) m1 = all_data[i1] m2 = all_data[i2] if m1[1] == m2[1]: continue if m1[1] > m2[1]: y.append(1.0) else: y.append(0.0) X.append([m1[0], m2[0]]) return dc.data.NumpyDataset(np.array(X), np.expand_dims(np.array(y), axis=1)) Explanation: Because ScScore is trained on relative complexities, we want the X tensor in our dataset to have 3 dimensions (sample_id, molecule_id, features). The molecule_id dimension has size 2 because a sample is a pair of molecules. The label is 1 if the first molecule is more complex than the second molecule. The function create_dataset we introduce below pulls random pairs of SMILES strings out of a given list and ranks them according to this complexity measure. 
In the real world you could use purchase cost, or number of reaction steps required as your complexity score. End of explanation molecule_ds = dc.data.NumpyDataset(np.array(molecules)) splitter = dc.splits.RandomSplitter() train_mols, test_mols = splitter.train_test_split(molecule_ds) Explanation: With our complexity ranker in place we can now construct our dataset. Let's start by randomly splitting the list of molecules into training and test sets. End of explanation n_features = 1024 featurizer = dc.feat.CircularFingerprint(size=n_features, radius=2, chiral=True) train_features = featurizer.featurize(train_mols.X) train_smiles_len = [len(Chem.MolToSmiles(x)) for x in train_mols.X] train_dataset = create_dataset(train_features, train_smiles_len) Explanation: We'll featurize all our molecules with the ECFP fingerprint with chirality (matching the source paper), and will then construct our pairwise dataset using the function defined above. End of explanation model = dc.models.ScScoreModel(n_features=n_features) model.fit(train_dataset, nb_epoch=20) Explanation: Now that we have our dataset created, let's train a ScScoreModel on this dataset. End of explanation import matplotlib.pyplot as plt %matplotlib inline mol_scores = model.predict_mols(test_mols.X) smiles_lengths = [len(Chem.MolToSmiles(x)) for x in test_mols.X] Explanation: Model Performance Lets evaluate how well the model does on our holdout molecules. The SaScores should track the length of SMILES strings from never before seen molecules. End of explanation plt.figure(figsize=(20,16)) plt.scatter(smiles_lengths, mol_scores) plt.xlim(0,80) plt.xlabel("SMILES length") plt.ylabel("ScScore") plt.show() Explanation: Let's now plot the length of the smiles string of the molecule against the SaScore using matplotlib. End of explanation
14,853
Given the following text description, write Python code to implement the functionality described below step by step Description: Example plot for LFPy Step1: Function declaration Step2: Parameters etc. Step3: Main simulation procedure Step4: Plot
Python Code: # importing some modules, setting some matplotlib values for pl.plot. import LFPy import numpy as np import matplotlib.pyplot as plt plt.rcParams.update({'font.size' : 12, 'figure.facecolor' : '1', 'figure.subplot.wspace' : 0.5, 'figure.subplot.hspace' : 0.5}) #seed for random generation np.random.seed(1234) Explanation: Example plot for LFPy: Passive cell model adapted from Mainen and Sejnokwski (1996) This is an example scripts using LFPy with a passive cell model adapted from Mainen and Sejnowski, Nature 1996, for the original files, see http://senselab.med.yale.edu/modeldb/ShowModel.asp?model=2488 Here, excitatory and inhibitory neurons are distributed on different parts of the morphology, with stochastic spike times produced by the NEURON's NetStim objects associated with each individual synapse. Otherwise similar to LFPy-example-8.ipynb Copyright (C) 2017 Computational Neuroscience Group, NMBU. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. End of explanation def insert_synapses(synparams, section, n, netstimParameters): '''find n compartments to insert synapses onto''' idx = cell.get_rand_idx_area_norm(section=section, nidx=n) #Insert synapses in an iterative fashion for i in idx: synparams.update({'idx' : int(i)}) # Create synapse(s) and setting times using the Synapse class in LFPy s = LFPy.Synapse(cell, **synparams) s.set_spike_times_w_netstim(**netstimParameters) Explanation: Function declaration: End of explanation # Define cell parameters used as input to cell-class cellParameters = { 'morphology' : 'morphologies/L5_Mainen96_wAxon_LFPy.hoc', 'cm' : 1.0, # membrane capacitance 'Ra' : 150, # axial resistance 'v_init' : -65, # initial crossmembrane potential 'passive' : True, # switch on passive mechs 'passive_parameters' : {'g_pas' : 1./30000, 'e_pas' : -65}, # passive params 'nsegs_method' : 'lambda_f',# method for setting number of segments, 'lambda_f' : 100, # segments are isopotential at this frequency 'dt' : 2**-4, # dt of LFP and NEURON simulation. 
'tstart' : -100, #start time, recorders start at t=0 'tstop' : 200, #stop time of simulation #'custom_code' : ['active_declarations_example3.hoc'], # will run this file } # Synaptic parameters taken from Hendrickson et al 2011 # Excitatory synapse parameters: synapseParameters_AMPA = { 'e' : 0, #reversal potential 'syntype' : 'Exp2Syn', #conductance based exponential synapse 'tau1' : 1., #Time constant, rise 'tau2' : 3., #Time constant, decay 'weight' : 0.005, #Synaptic weight 'record_current' : True, #record synaptic currents } # Excitatory synapse parameters synapseParameters_NMDA = { 'e' : 0, 'syntype' : 'Exp2Syn', 'tau1' : 10., 'tau2' : 30., 'weight' : 0.005, 'record_current' : True, } # Inhibitory synapse parameters synapseParameters_GABA_A = { 'e' : -80, 'syntype' : 'Exp2Syn', 'tau1' : 1., 'tau2' : 12., 'weight' : 0.005, 'record_current' : True } # where to insert, how many, and which input statistics insert_synapses_AMPA_args = { 'section' : 'apic', 'n' : 100, 'netstimParameters': { 'number' : 1000, 'start' : 0, 'noise' : 1, 'interval' : 20, } } insert_synapses_NMDA_args = { 'section' : ['dend', 'apic'], 'n' : 15, 'netstimParameters': { 'number' : 1000, 'start' : 0, 'noise' : 1, 'interval' : 90, } } insert_synapses_GABA_A_args = { 'section' : 'dend', 'n' : 100, 'netstimParameters': { 'number' : 1000, 'start' : 0, 'noise' : 1, 'interval' : 20, } } # Define electrode geometry corresponding to a laminar electrode, where contact # points have a radius r, surface normal vectors N, and LFP calculated as the # average LFP in n random points on each contact: N = np.empty((16, 3)) for i in range(N.shape[0]): N[i,] = [1, 0, 0] #normal unit vec. to contacts # put parameters in dictionary electrodeParameters = { 'sigma' : 0.3, # Extracellular potential 'x' : np.zeros(16) + 25, # x,y,z-coordinates of electrode contacts 'y' : np.zeros(16), 'z' : np.linspace(-500, 1000, 16), 'n' : 20, 'r' : 10, 'N' : N, } # Parameters for the cell.simulate() call, recording membrane- and syn.-currents simulationParameters = { 'rec_imem' : True, # Record Membrane currents during simulation } Explanation: Parameters etc.: Define parameters, using dictionaries. It is possible to set a few more parameters for each class or functions, but we chose to show only the most important ones here. End of explanation # Initialize cell instance, using the LFPy.Cell class cell = LFPy.Cell(**cellParameters) # Align apical dendrite with z-axis cell.set_rotation(x=4.98919, y=-4.33261, z=0.) # Insert synapses using the function defined earlier insert_synapses(synapseParameters_AMPA, **insert_synapses_AMPA_args) insert_synapses(synapseParameters_NMDA, **insert_synapses_NMDA_args) insert_synapses(synapseParameters_GABA_A, **insert_synapses_GABA_A_args) # Perform NEURON simulation, results saved as attributes in the cell instance cell.simulate(**simulationParameters) # Initialize electrode geometry, then calculate the LFP, using the # LFPy.RecExtElectrode class. Note that now cell is given as input to electrode # and created after the NEURON simulations are finished electrode = LFPy.RecExtElectrode(cell, **electrodeParameters) electrode.data = electrode.get_transformation_matrix() @ cell.imem Explanation: Main simulation procedure: End of explanation #plotting some variables and geometry, saving output to .pdf. from example_suppl import plot_ex3 fig = plot_ex3(cell, electrode) #fig.savefig('LFPy-example-09.pdf', dpi=300) Explanation: Plot: End of explanation
14,854
Given the following text description, write Python code to implement the functionality described below step by step Description: Patrick provided a pair of images from AuxTel. Let's look at how those images work with our cwfs code load the modules Step1: Define the image objects. Input arguments Step2: Define the instrument. Input arguments Step3: Define the algorithm being used. Input arguments Step4: Run it Step5: Print the Zernikes Zn (n>=4) Step6: plot the Zernikes Zn (n>=4) Step7: We check that the optical parameters provided are consistent with the image diameter. Otherwise the numerical solutions themselves do not make much sense. Step8: Patrick asked the question Step9: Now we do the forward raytrace using our wavefront solutions The code is simply borrowed from existing cwfs code. We first set up the pupil grid. Oversample means how many ray to trace from each grid point on the pupil. Step10: We now trace the rays to the image plane. Lutxp and Lutyp are image coordinates for each (oversampled) ray. showProjection() makes the intensity image. Then, to down sample the image back to original resolution, we want to use the function downResolution() which is defined for the image class. Step11: Now do the same thing for extra focal image
Python Code: from lsst.cwfs.instrument import Instrument from lsst.cwfs.algorithm import Algorithm from lsst.cwfs.image import Image, readFile, aperture2image, showProjection import lsst.cwfs.plots as plots import numpy as np import matplotlib.pyplot as plt %matplotlib inline Explanation: Patrick provided a pair of images from AuxTel. Let's look at how those images work with our cwfs code load the modules End of explanation fieldXY = [0,0] I1 = Image(readFile('../tests/testImages/AuxTel/I1_intra_20190912_HD21161_z05.fits'), fieldXY, Image.INTRA) I2 = Image(readFile('../tests/testImages/AuxTel/I2_extra_20190912_HD21161_z05.fits'), fieldXY, Image.EXTRA) plots.plotImage(I1.image,'intra') plots.plotImage(I2.image,'extra') Explanation: Define the image objects. Input arguments: file name, field coordinates in deg, image type The colorbar() below may produce a warning message if your matplotlib version is older than 1.5.0 ( https://github.com/matplotlib/matplotlib/issues/5209 ) End of explanation inst=Instrument('AuxTel',I1.sizeinPix) Explanation: Define the instrument. Input arguments: instrument name, size of image stamps End of explanation algo=Algorithm('exp',inst,0) Explanation: Define the algorithm being used. Input arguments: baseline algorithm, instrument, debug level End of explanation algo.runIt(inst,I1,I2,'paraxial') Explanation: Run it End of explanation print(algo.zer4UpNm) Explanation: Print the Zernikes Zn (n>=4) End of explanation plots.plotZer(algo.zer4UpNm,'nm') Explanation: plot the Zernikes Zn (n>=4) End of explanation print("Expected image diameter in pixels = %.0f"%(inst.offset/inst.fno/inst.pixelSize)) plots.plotImage(I1.image0,'original intra', mask=algo.pMask) plots.plotImage(I2.image0,'original extra', mask=algo.pMask) Explanation: We check that the optical parameters provided are consistent with the image diameter. Otherwise the numerical solutions themselves do not make much sense. End of explanation nanMask = np.ones(I1.image.shape) nanMask[I1.pMask==0] = np.nan fig, ax = plt.subplots(1,2, figsize=[10,4]) img = ax[0].imshow(algo.Wconverge*nanMask, origin='lower') ax[0].set_title('Final WF = estimated + residual') fig.colorbar(img, ax=ax[0]) img = ax[1].imshow(algo.West*nanMask, origin='lower') ax[1].set_title('residual wavefront') fig.colorbar(img, ax=ax[1]) fig, ax = plt.subplots(1,2, figsize=[10,4]) img = ax[0].imshow(I1.image, origin='lower') ax[0].set_title('Intra residual image') fig.colorbar(img, ax=ax[0]) img = ax[1].imshow(I2.image, origin='lower') ax[1].set_title('Extra residual image') fig.colorbar(img, ax=ax[1]) Explanation: Patrick asked the question: can we show the results of the fit in intensity space, and also the residual? Great question. The short answer is no. The long answer: the current approach implemented is the so-called inversion approach, i.e., to inversely solve the Transport of Intensity Equation with boundary conditions. It is not a forward fit. If you think of the unperturbed image as I0, and the real image as I, we iteratively map I back toward I0 using the estimated wavefront. Upon convergence, our "residual images" should have intensity distributions that are nearly uniform. We always have an estimated wavefront, and a residual wavefront. The residual wavefront is obtained from the two residual images. However, using tools availabe in the cwfs package, we can easily make forward prediction of the images using the wavefront solution. 
This is basically to take the slope of the wavefront at any pupil position, and raytrace to the image plane. We will demostrate these below. End of explanation oversample = 10 projSamples = I1.image0.shape[0]*oversample luty, lutx = np.mgrid[ -(projSamples / 2 - 0.5):(projSamples / 2 + 0.5), -(projSamples / 2 - 0.5):(projSamples / 2 + 0.5)] lutx = lutx / (projSamples / 2 / inst.sensorFactor) luty = luty / (projSamples / 2 / inst.sensorFactor) Explanation: Now we do the forward raytrace using our wavefront solutions The code is simply borrowed from existing cwfs code. We first set up the pupil grid. Oversample means how many ray to trace from each grid point on the pupil. End of explanation lutxp, lutyp, J = aperture2image(I1, inst, algo, algo.converge[:,-1], lutx, luty, projSamples, 'paraxial') show_lutxyp = showProjection(lutxp, lutyp, inst.sensorFactor, projSamples, 1) I1fit = Image(show_lutxyp, fieldXY, Image.INTRA) I1fit.downResolution(oversample, I1.image0.shape[0], I1.image0.shape[1]) Explanation: We now trace the rays to the image plane. Lutxp and Lutyp are image coordinates for each (oversampled) ray. showProjection() makes the intensity image. Then, to down sample the image back to original resolution, we want to use the function downResolution() which is defined for the image class. End of explanation luty, lutx = np.mgrid[ -(projSamples / 2 - 0.5):(projSamples / 2 + 0.5), -(projSamples / 2 - 0.5):(projSamples / 2 + 0.5)] lutx = lutx / (projSamples / 2 / inst.sensorFactor) luty = luty / (projSamples / 2 / inst.sensorFactor) lutxp, lutyp, J = aperture2image(I2, inst, algo, algo.converge[:,-1], lutx, luty, projSamples, 'paraxial') show_lutxyp = showProjection(lutxp, lutyp, inst.sensorFactor, projSamples, 1) I2fit = Image(show_lutxyp, fieldXY, Image.EXTRA) I2fit.downResolution(oversample, I2.image0.shape[0], I2.image0.shape[1]) #The atmosphere used here is just a random Gaussian smearing. We do not care much about the size at this point from scipy.ndimage import gaussian_filter atmSigma = .6/3600/180*3.14159*21.6/1.44e-5 I1fit.image[np.isnan(I1fit.image)]=0 a = gaussian_filter(I1fit.image, sigma=atmSigma) fig, ax = plt.subplots(1,3, figsize=[15,4]) img = ax[0].imshow(I1fit.image, origin='lower') ax[0].set_title('Forward prediction (no atm) Intra') fig.colorbar(img, ax=ax[0]) img = ax[1].imshow(a, origin='lower') ax[1].set_title('Forward prediction (w atm) Intra') fig.colorbar(img, ax=ax[1]) img = ax[2].imshow(I1.image0, origin='lower') ax[2].set_title('Real Image, Intra') fig.colorbar(img, ax=ax[2]) I2fit.image[np.isnan(I2fit.image)]=0 b = gaussian_filter(I2fit.image, sigma=atmSigma) fig, ax = plt.subplots(1,3, figsize=[15,4]) img = ax[0].imshow(I2fit.image, origin='lower') ax[0].set_title('Forward prediction (no atm) Extra') fig.colorbar(img, ax=ax[0]) img = ax[1].imshow(b, origin='lower') ax[1].set_title('Forward prediction (w atm) Extra') fig.colorbar(img, ax=ax[1]) img = ax[2].imshow(I2.image0, origin='lower') ax[2].set_title('Real Image, Extra') fig.colorbar(img, ax=ax[2]) Explanation: Now do the same thing for extra focal image End of explanation
14,855
Given the following text description, write Python code to implement the functionality described below step by step Description: Access TTree in Python using PyROOT and fill a histogram <hr style="border-top-width Step1: Optional Step2: Open a file which is located on the web. No type is to be specified for "f". Step3: Loop over the TTree called "events" in the file. It is accessed with the dot operator. Same holds for the access to the branches
Python Code: import ROOT Explanation: Access TTree in Python using PyROOT and fill a histogram <hr style="border-top-width: 4px; border-top-color: #34609b;"> First import the ROOT Python module. End of explanation %jsroot on Explanation: Optional: activate the JavaScript visualisation to produce interactive plots. End of explanation f = ROOT.TFile.Open("https://root.cern.ch/files/summer_student_tutorial_tracks.root"); Explanation: Open a file which is located on the web. No type is to be specified for "f". End of explanation h = ROOT.TH1F("TracksPt","Tracks;Pt [GeV/c];#",128,0,64) for event in f.events: for track in event.tracks: h.Fill(track.Pt()) c = ROOT.TCanvas() h.Draw() c.Draw() Explanation: Loop over the TTree called "events" in the file. It is accessed with the dot operator. Same holds for the access to the branches: no need to set them up - they are just accessed by name, again with the dot operator. End of explanation
14,856
Given the following text description, write Python code to implement the functionality described below step by step Description: Aufgabe 4 Step1: a) We load the breast cancer data set. Step2: b) We split the data into features X and labels y. After that we transform the binary labels to numerical values. Step3: c) Next we split the data into a training and a validation set. Step4: d) Now we set up and train a pipeline which contains a scaler, dimensionality reduction and a classificator. Step5: e) Now we evaluate the score of our pipeline. Step6: f) Now we use RFE instead of PCA for feature selection. Step7: And look at our findings.
Python Code: # imports import pandas import matplotlib.pyplot as plt from sklearn.cross_validation import train_test_split from sklearn.preprocessing import LabelEncoder, StandardScaler from sklearn.decomposition import PCA from sklearn.linear_model import LogisticRegression from sklearn.pipeline import Pipeline from sklearn.feature_selection import RFECV Explanation: Aufgabe 4: Preprocessing and Pipelines In this task we build a pipeline which performs the typical data preprocessing combined with a classification. End of explanation url = "https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data" dataset = pandas.read_csv(url) Explanation: a) We load the breast cancer data set. End of explanation array = dataset.values X = array[:,[0] + list(range(2,32))] # transform binary labels to numerical values # benign -> 0, malignant -> 1 le = LabelEncoder() le.fit(["M", "B"]) y = le.transform(array[:,1]) Explanation: b) We split the data into features X and labels y. After that we transform the binary labels to numerical values. End of explanation random_state = 1 test_size = 0.20 train_size = 0.80 X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, train_size=train_size, random_state=random_state) Explanation: c) Next we split the data into a training and a validation set. End of explanation scaler = StandardScaler() pca = PCA(n_components=2) logistic = LogisticRegression(random_state=1) pipeline = Pipeline(steps=[('StandardScaler', scaler), ('PCA', pca), ('LogisticRegression', logistic)]) pipeline.fit(X_train, y_train) Explanation: d) Now we set up and train a pipeline which contains a scaler, dimensionality reduction and a classificator. End of explanation accuracy = pipeline.score(X_test, y_test) print("Pipelines reaches with PCA an accuracy of:", accuracy) Explanation: e) Now we evaluate the score of our pipeline. End of explanation # set up and train pipeline with RFE scaler = StandardScaler() logistic = LogisticRegression(random_state=1) rfe = RFECV(logistic, scoring='accuracy') pipeline = Pipeline(steps=[('StandardScaler', scaler), ('rfe', rfe), ('LogisticRegression', logistic)]) pipeline.fit(X_train, y_train) Explanation: f) Now we use RFE instead of PCA for feature selection. End of explanation plt.plot(range(1, len(rfe.grid_scores_) + 1), rfe.grid_scores_, "ro") plt.xlabel("selected features") plt.ylabel("accuracy") plt.show() print("Highest accuracy is achieved with:", rfe.n_features_, "features") print() print("From the given 31 features numbered from 0 to 30 these are:") i = 0 while i < len(rfe.support_): if rfe.support_[i]: print(i) i += 1 print() accuracy = pipeline.score(X_test, y_test) print("The pipeline reaches with RFE a maximum accuracy of:", accuracy) Explanation: And look at our findings. End of explanation
14,857
Given the following text description, write Python code to implement the functionality described below step by step Description: Empirically Matching OM10 Lens Galaxies to SL2S Phil Marshall & Bryce Kalmbach, September 2016 Last Updated Step1: Set the CosmoDC2 catalog you want to use here and this will ensure consistent naming throughout all the output files created. Step2: Empirical Strong Lens Data We'll use the SL2S sample of galaxy-scale lenses to model the properties of OM10 lenses. The redshifts and velocity dispersions should cover most (but not all) of the LSST lensed quasar sample in OM10. The data is in Table 3 of Sonnenfeld et al (2013), and stored as a csv format file in the Twinkles data folder. Step3: Component Test We could fit our lens galaxy dataset directly, but one problem is that we don't know the optimal number of components (Gaussians) to use in the fit. Knowing the optimal number of components to fit allows us to obtain a good fit in the smallest amount of time without overfitting the data. BIC One way to determine the optimal number of Gaussian components to use in the model is by fitting the model with different numbers of components and calculating the Bayesian information criterion (BIC) for each model. The BIC incorporates the number of components in the model, the sample size, and the likelihood of the data under the model, and the model with the lowest score is the optimal model to use. We can test for the model with the lowest BIC score for a given dataset using the component_test function, which will compute the BIC for a given dataset and range of n_components and return an array containing all the BIC scores as well as the optimal number of components and corresponding BIC value. The code below will read in all the SN and host parameters we want to use from our data files (using the get_data function) and use these data to test the performance of the model with n_components ranging from 1 to 8. (Larger numbers of components tend to run into errors occurring because too few observations map to a given Gaussian component. With a bigger dataset, this range could be increased.) <Note that due to the multiple fits and the large dataset, the BIC test will likely take a while to run, depending on your system.> Step4: Based on the results of the above test, the model with 1 component has the lowest BIC score and is the optimal choice. Fitting a Model Once we know how many components to use in the model, we can fit a model using that number of components using the fit_model function. Before fitting for this demo, we are going to split our lens galaxy dataset 65-35, with the larger subsample being used to fit the model and the smaller subsample providing a test sample that we can use to predict stellar masses and compare with our predicted sample. Step5: Note how the fit_model function also saves the fit to a file for later re-use. If a model has already been fit, it can be read into an existing Empiricist worker, or a new Empiricist can be instantiated using the model, like this Step7: Predicting stellar masses of lens galaxies Our goal in this notebook is to predict the stellar masses of OM10 lens galaxies. Here we use the XDGMM model we just fit on the test data we separated off from the full dataset. We will use the model to predict stellar masses of lens galaxies based upon their redshift, velocity dispersion and radial size. 
<The "test" sample generated above gives us a set of 482 host properties that we can use to fit supernovae, and a set of supernova properties to compare with our model fits.> First, we adapt the "get_logR" function from empiriciSN to "get_log_m" changing the references and restrictions of that method (it only allows certain columns due to structure of SN dataset it uses) to suit our strong lensing dataset. Step8: With that ready to go, we now use it to get estimates on the stellar mass of our test data from the model we have trained above. Step9: Now we have a set of test masses in units of log(M/M_sun) and a set of masses sampled from the model. These should have the same distribution when plotted. Step10: It seems that while we can predict the stellar masses to a reasonable degree the estimates for Radius are very poor and we should get those values from cosmoDC2 while matching galaxies on just stellar mass, redshift and ellipticity. Furthermore, below we will drop the radius from the model. The dataset has a very small amount of data and to prevent overfitting against an even smaller sample we choose to use the full dataset in training the model going forward. We want to make sure that the model gives reasonable values when generating masses, redshift and velocity dispersions. Therefore, we will sample the GMM for 1, 2 and 3 component models and take a look at the results with the training data. Step11: Here we try fitting our model with 2 components in the GMM. Step12: And here we use 3 components. Step13: Estimating Stellar Masses for OM10 systems We have decided to move forward with the 1-parameter GMM model and will now use the available data in OM10 sytems to find a stellar mass for OM10 systems based upon redshift and velocity dispersion. Since our attempts to predict radius seem to be inaccurate we will get radius estimate from cosmoDC2 galaxies as well and thus only will be predicting stellar masses for OM10 lenses here. Step14: Below we compare the distributions of stellar mass given to the OM10 lens galaxies by the 1, 2 and 3 component models. Step15: Now we connect to the cosmoDC2 database and use redshift, stellar_mass and ellipticity of our OM10 galaxies to find associated radial sizes for our galaxies. The query looks for any galaxies within 10% in dex of redshift and 10% of stellar mass and ellipticity. For our lens galaxies we don't want disks. However, limiting ourselves in cosmoDC2 to only galaxies with stellar_mass_disk == 0.0 was too restrictive and we instead take the bulge properties for galaxies where the stellar mass of the bulge is over 99% of the total stellar mass. If no matches are found then it will skip on to the next system and we will leave that OM10 system out of the catalog available to the sprinkler. Step16: We are able to match over 95% of the systems, but it seems that this disproportionately leaves out very high masses. Let's see if the 3-component model can do better. Step17: The matching is now available in over 96% of the OM10 catalog but overall doesn't seem to make much of a difference at the high mass end. Since the distribution for the 3 component model seems to be unrealistic and doesn't add much improvement let's see if we can get a reasonable improvement on the 1 component results using a 2 component model. Step18: This does the worst of the 3 models. Therefore, it looks like we should use the 1-component model for the final match to the catalog. 
Matching to SEDs using sims_GCRCatSimInterface The other thing we want to add into the lensing catalogs are SEDs for the lens galaxies. Here we get the top hat filters out of cosmoDC2 and use the code in sims_GCRCatSimInterface to match these values to a CATSIM SED file in the same way the galaxies are matched for Instance Catalog production in DC2. We also use the code to calculate the magnitude normalization for PhoSim. Step19: Check to see that our bins are now in order when we call them. Step20: Before saving our new information we want to check that the SEDs we are matching are in fact old, metal-poor templates. So we take the metallicity and age from all the templates and check them out. Step21: It seems that we are indeed getting templates for older galaxies and mostly less than solar metallicity. Adding new info to Twinkles OM10 data We will take all the columns currently in the twinkles om10 data and add in our new reff values, SED filenames and SED magnitude normalizations. Step22: Great! Now that we have saved our new lens catalog we can open it up and make sure the data is where we want it.
Python Code: import numpy as np import matplotlib as mpl mpl.rcParams['text.usetex'] = False from matplotlib import pyplot as plt import corner import urllib import os from sklearn.cross_validation import train_test_split from astroML.plotting import setup_text_plots from lsst.sims.photUtils import Sed, Bandpass, BandpassDict, getImsimFluxNorm import empiriciSN from MatchingLensGalaxies_utilities import * %matplotlib inline Explanation: Empirically Matching OM10 Lens Galaxies to SL2S Phil Marshall & Bryce Kalmbach, September 2016 Last Updated: Bryce Kalmbach, December 2018 We need to be able to assign a stellar mass and size to each of our OM10 lens galaxies, so that we can, in turn, associate a suitable cosmoDC2 galaxy with that object. To do this, we will follow Tom Holoien's "empiriciSN" approach, and model the intrinsic distribution of lens galaxy size, stellar mass, redshift and velocity dispersion with the "extreme deconvolution" algorithm. SEDs will need to be matched in a separate code since gcr-catalogs does not have SEDs in the available as the old CATSIM galaxies did. Requirements You will need to have installed Tom Holoien's XDGMM and empiriciSN packages, as well as their dependencies. By default, in empiricSN all the model fitting is done with the AstroML XDGMM algorithm rather than the Bovy et al. (2011) algorithm - for this demo you do not need to have the Bovy et al. algorithm installed to run the code. However, we note that the Bovy et al. algorithm is, in general, significantly (i.e., several times) faster. We recommend you try each method on your dataset when using this class. End of explanation catalog_version = 'cosmoDC2_v1.1.4' Explanation: Set the CosmoDC2 catalog you want to use here and this will ensure consistent naming throughout all the output files created. End of explanation def get_sl2s_data(): filename = '../../data/SonnenfeldEtal2013_Table3.csv' ! wc -l $filename z = np.array([]) z_err = np.array([]) v_disp = np.array([]) v_disp_err = np.array([]) r_eff = np.array([]) r_eff_err = np.array([]) log_m = np.array([]) log_m_err = np.array([]) infile = open(filename, 'r') inlines = infile.readlines() for line1 in inlines: if line1[0] == '#': continue line = line1.split(',') #Params z = np.append(z, float(line[1])) v_disp = np.append(v_disp, float(line[2])) r_eff = np.append(r_eff, float(line[3])) log_m = np.append(log_m, float(line[4])) #Errors z_err = np.append(z_err, float(line[5])) v_disp_err = np.append(v_disp_err, float(line[6])) r_eff_err = np.append(r_eff_err, float(line[7])) log_m_err = np.append(log_m_err, float(line[8])) #Build final arrays X = np.vstack([z, v_disp, r_eff, log_m]).T Xerr = np.zeros(X.shape + X.shape[-1:]) diag = np.arange(X.shape[-1]) Xerr[:, diag, diag] = np.vstack([z_err**2, v_disp_err**2, r_eff_err**2, log_m_err**2]).T return X, Xerr # Here's what we did to get the csv file: # ! echo "ID, zlens, vdisp, Reff, Mstar, zlens_err, vdisp_err, Reff_err, Mstar_err" > SonnenfeldEtal2013_Table3.csv # ! cat gammaptable.tex | sed s%'&'%%g | sed s%'\$'%%g | sed s%'\\'%%g | sed s%'pm'%' '%g | sed s%'disky'%%g | awk '{print $1", "$2", "$5", "$3", "$7", 0.001, "$6", 0.01, "$8}' >> SonnenfeldEtal2013_Table3.csv Explanation: Empirical Strong Lens Data We'll use the SL2S sample of galaxy-scale lenses to model the properties of OM10 lenses. The redshifts and velocity dispersions should cover most (but not all) of the LSST lensed quasar sample in OM10. 
The data is in Table 3 of Sonnenfeld et al (2013), and stored as a csv format file in the Twinkles data folder. End of explanation # Instantiate an empiriciSN worker object: empiricist = empiriciSN.Empiricist() # Define the range of component numbers and read in the dataset: component_range = np.array([1,2,3,4,5,6,7,8]) X, Xerr = get_sl2s_data() %%capture --no-stdout # Loop over component numbers, fitting XDGMM model and computing the BIC. bics, optimal_n_comp, lowest_bic = empiricist.component_test(X, Xerr, component_range) plot_bic(component_range, bics, optimal_n_comp) Explanation: Component Test We could fit our lens galaxy dataset directly, but one problem is that we don't know the optimal number of components (Gaussians) to use in the fit. Knowing the optimal number of components to fit allows us to obtain a good fit in the smallest amount of time without overfitting the data. BIC One way to determine the optimal number of Gaussian components to use in the model is by fitting the model with different numbers of components and calculating the Bayesian information criterion (BIC) for each model. The BIC incorporates the number of components in the model, the sample size, and the likelihood of the data under the model, and the model with the lowest score is the optimal model to use. We can test for the model with the lowest BIC score for a given dataset using the component_test function, which will compute the BIC for a given dataset and range of n_components and return an array containing all the BIC scores as well as the optimal number of components and corresponding BIC value. The code below will read in all the SN and host parameters we want to use from our data files (using the get_data function) and use these data to test the performance of the model with n_components ranging from 1 to 8. (Larger numbers of components tend to run into errors occurring because too few observations map to a given Gaussian component. With a bigger dataset, this range could be increased.) <Note that due to the multiple fits and the large dataset, the BIC test will likely take a while to run, depending on your system.> End of explanation %%capture --no-stdout # Split the dataset 65/35: X_train, X_test, Xerr_train, Xerr_test = \ train_test_split(X, Xerr, test_size=0.35, random_state=17) # Fit the model: empiricist.fit_model(X_train, Xerr_train, filename = 'demo_model.fit', n_components=1) #empiricist.read_model('demo_model.fit') Explanation: Based on the results of the above test, the model with 1 component has the lowest BIC score and is the optimal choice. Fitting a Model Once we know how many components to use in the model, we can fit a model using that number of components using the fit_model function. Before fitting for this demo, we are going to split our lens galaxy dataset 65-35, with the larger subsample being used to fit the model and the smaller subsample providing a test sample that we can use to predict stellar masses and compare with our predicted sample. End of explanation alternative = empiriciSN.Empiricist(model_file='demo_model.fit') # Print the weights array for each object---they should be the same... print(empiricist.XDGMM.weights) print(alternative.XDGMM.weights) Explanation: Note how the fit_model function also saves the fit to a file for later re-use. 
If a model has already been fit, it can be read into an existing Empiricist worker, or a new Empiricist can be instantiated using the model, like this: End of explanation #Write new conditioning function def get_log_m(cond_indices, m_index, X, model_file, Xerr=None): Uses a subset of parameters in the given data to condition the model and return a sample value for log(M/M_sun). Parameters ---------- cond_indices: array_like Array of indices indicating which parameters to use to condition the model. m_index: int Index of log(M/M_sun) in the list of parameters that were used to fit the model. X: array_like, shape = (n < n_features,) Input data. Xerr: array_like, shape = (X.shape,) (optional) Error on input data. If none, no error used to condition. Returns ------- log_m: float Sample value of log(M/M_sun) taken from the conditioned model. Notes ----- The fit_params array specifies a list of indices to use to condition the model. The model will be conditioned and then a mass will be drawn from the conditioned model. This is so that the mass can be used to find cosmoDC2 galaxies to act as hosts for OM10 systems. This does not make assumptions about what parameters are being used in the model, but does assume that the model has been fit already. if m_index in cond_indices: raise ValueError("Cannot condition model on log(M/M_sun).") cond_data = np.array([]) if Xerr is not None: cond_err = np.array([]) m_cond_idx = m_index n_features = empiricist.XDGMM.mu.shape[1] j = 0 for i in range(n_features): if i in cond_indices: cond_data = np.append(cond_data,X[j]) if Xerr is not None: cond_err = np.append(cond_err, Xerr[j]) j += 1 if i < m_index: m_cond_idx -= 1 else: cond_data = np.append(cond_data,np.nan) if Xerr is not None: cond_err = np.append(cond_err, 0.0) if Xerr is not None: cond_XDGMM = empiricist.XDGMM.condition(cond_data, cond_err) else: cond_XDGMM = empiricist.XDGMM.condition(cond_data) sample = cond_XDGMM.sample() log_m = sample[0][m_cond_idx] return log_m Explanation: Predicting stellar masses of lens galaxies Our goal in this notebook is to predict the stellar masses of OM10 lens galaxies. Here we use the XDGMM model we just fit on the test data we separated off from the full dataset. We will use the model to predict stellar masses of lens galaxies based upon their redshift, velocity dispersion and radial size. <The "test" sample generated above gives us a set of 482 host properties that we can use to fit supernovae, and a set of supernova properties to compare with our model fits.> First, we adapt the "get_logR" function from empiriciSN to "get_log_m" changing the references and restrictions of that method (it only allows certain columns due to structure of SN dataset it uses) to suit our strong lensing dataset. 
End of explanation %%capture --no-stdout # Get actual masses from dataset, for comparison: log_m_test = X_test[:,3] r_test = X_test[:,2] # Predict a mass for each galaxy: np.random.seed(0) cond_indices = np.array([0,1]) sample_log_m = np.array([]) sample_r = np.array([]) model_file='demo_model.fit' for x, xerr in zip(X_test, Xerr_test): log_m = get_log_m(cond_indices, 3, x[cond_indices], model_file)#, Xerr=xerr) sample_log_m = np.append(sample_log_m,log_m) print(x[3], log_m) for x, xerr in zip(X_test, Xerr_test): r_cond = get_log_m(cond_indices, 2, x[cond_indices], model_file)#, Xerr=xerr) sample_r = np.append(sample_r,r_cond) print(x[2], r_cond) Explanation: With that ready to go, we now use it to get estimates on the stellar mass of our test data from the model we have trained above. End of explanation fig = plt.figure(figsize=(12,6)) fig.add_subplot(121) plt.hist(log_m_test, 10, range=(10.5, 12.5), histtype='step', lw=3) plt.hist(sample_log_m, 10, range=(10.5, 12.5), color ='r', histtype='step', lw=3) plt.xlabel('Log(M/M_{sun})') plt.ylabel('Counts') fig.add_subplot(122) plt.hist(r_test, 10, range=(0, 15), histtype='step', lw=3) plt.hist(sample_r, 10, range=(0, 15), color ='r', histtype='step', lw=3) plt.xlabel('radius') plt.ylabel('Counts') plt.legend(('Test Data', 'Sample')) plt.show() fig = plt.figure(figsize=(12,6)) fig.add_subplot(121) plt.hist(100*(log_m_test-sample_log_m)/log_m_test, 10, histtype='step', lw=3) plt.xlabel('Percent Residual Error in Log(M/M_{sun})') plt.ylabel('Counts') fig.add_subplot(122) plt.hist(100*(r_test-sample_r)/r_test, 10, histtype='step', lw=3) plt.xlabel('Percent Residual Error in Radius') plt.ylabel('Counts') plt.tight_layout() Explanation: Now we have a set of test masses in units of log(M/M_sun) and a set of masses sampled from the model. These should have the same distribution when plotted. End of explanation # Drop the radius data and fit only to predict stellar mass X = X[:,[0,1,3]] Xerr = Xerr[:,:,[0,1,3]] Xerr = Xerr[:,[0,1,3], :] %%capture --no-stdout empiricist.fit_model(X, Xerr, filename = 'demo_model.fit', n_components=1) test_sample = empiricist.XDGMM.sample(size=10000) setup_text_plots(fontsize=16, usetex=False) mpl.rcParams['text.usetex'] = False figure = corner.corner(test_sample[:,:], labels=['z', 'v_disp', 'm'], range = [(0.0, 1.0), (160, 360), (10.5, 12.5)], hist_kwargs = {'normed': True}, no_fill_contours=True, plot_density=False) corner.corner(X[:, :], labels=['z', 'v_disp', 'm'], color='red', range = [(0.0, 1.0), (160, 360), (10.5, 12.5)], hist_kwargs = {'normed':True}, plot_contours=False, plot_density=False, plot_datapoints=True, data_kwargs={'marker':'o', 'alpha':0.4, 'markersize':10}, fig=figure) plt.show() Explanation: It seems that while we can predict the stellar masses to a reasonable degree the estimates for Radius are very poor and we should get those values from cosmoDC2 while matching galaxies on just stellar mass, redshift and ellipticity. Furthermore, below we will drop the radius from the model. The dataset has a very small amount of data and to prevent overfitting against an even smaller sample we choose to use the full dataset in training the model going forward. We want to make sure that the model gives reasonable values when generating masses, redshift and velocity dispersions. Therefore, we will sample the GMM for 1, 2 and 3 component models and take a look at the results with the training data. 
End of explanation %%capture --no-stdout empiricist.fit_model(X, Xerr, filename = 'demo_model.fit', n_components=2) test_sample = empiricist.XDGMM.sample(size=10000) setup_text_plots(fontsize=16, usetex=False) mpl.rcParams['text.usetex'] = False figure = corner.corner(test_sample[:,:], labels=['z', 'v_disp', 'm'], range = [(0.0, 1.0), (160, 360), (10.5, 12.5)], hist_kwargs = {'normed': True}, no_fill_contours=True, plot_density=False) corner.corner(X[:, :], labels=['z', 'v_disp', 'm'], color='red', range = [(0.0, 1.0), (160, 360), (10.5, 12.5)], hist_kwargs = {'normed':True}, plot_contours=False, plot_density=False, plot_datapoints=True, data_kwargs={'marker':'o', 'alpha':0.4, 'markersize':10}, fig=figure) plt.show() Explanation: Here we try fitting our model with 2 components in the GMM. End of explanation %%capture --no-stdout empiricist.fit_model(X, Xerr, filename = 'demo_model.fit', n_components=3) test_sample = empiricist.XDGMM.sample(size=10000) setup_text_plots(fontsize=16, usetex=False) mpl.rcParams['text.usetex'] = False figure = corner.corner(test_sample[:,:], labels=['z', 'v_disp', 'm'], range = [(0.0, 1.0), (160, 360), (10.5, 12.5)], hist_kwargs = {'normed': True}, no_fill_contours=True, plot_density=False) corner.corner(X[:, :], labels=['z', 'v_disp', 'm'], color='red', range = [(0.0, 1.0), (160, 360), (10.5, 12.5)], hist_kwargs = {'normed':True}, plot_contours=False, plot_density=False, plot_datapoints=True, data_kwargs={'marker':'o', 'alpha':0.4, 'markersize':10}, fig=figure) plt.show() Explanation: And here we use 3 components. End of explanation # First load in OM10 lenses we are using in Twinkles from astropy.io import fits hdulist = fits.open('../../data/om10_qso_mock.fits') twinkles_lenses = hdulist[1].data %%capture --no-stdout # Predict a mass for each galaxy: np.random.seed(0) cond_indices = np.array([0,1]) twinkles_log_m_1comp = np.array([]) twinkles_log_m_2comp = np.array([]) twinkles_log_m_3comp = np.array([]) model_file='demo_model.fit' empiricist.fit_model(X, Xerr, filename = 'demo_model.fit', n_components=1) twinkles_data = np.array([twinkles_lenses['ZLENS'], twinkles_lenses['VELDISP']]).T for x in twinkles_data: log_m = get_log_m(cond_indices, 2, x[cond_indices], model_file) twinkles_log_m_1comp = np.append(twinkles_log_m_1comp,log_m) np.random.seed(0) empiricist.fit_model(X, Xerr, filename = 'demo_model.fit', n_components=2) twinkles_data = np.array([twinkles_lenses['ZLENS'], twinkles_lenses['VELDISP']]).T for x in twinkles_data: log_m = get_log_m(cond_indices, 2, x[cond_indices], model_file) twinkles_log_m_2comp = np.append(twinkles_log_m_2comp,log_m) np.random.seed(0) empiricist.fit_model(X, Xerr, filename = 'demo_model.fit', n_components=3) twinkles_data = np.array([twinkles_lenses['ZLENS'], twinkles_lenses['VELDISP']]).T for x in twinkles_data: log_m = get_log_m(cond_indices, 2, x[cond_indices], model_file) twinkles_log_m_3comp = np.append(twinkles_log_m_3comp,log_m) Explanation: Estimating Stellar Masses for OM10 systems We have decided to move forward with the 1-parameter GMM model and will now use the available data in OM10 sytems to find a stellar mass for OM10 systems based upon redshift and velocity dispersion. Since our attempts to predict radius seem to be inaccurate we will get radius estimate from cosmoDC2 galaxies as well and thus only will be predicting stellar masses for OM10 lenses here. 
End of explanation fig = plt.figure(figsize=(8,8)) mpl.rcParams['text.usetex'] = False n, bins, _ = plt.hist(twinkles_log_m_1comp, histtype='step', label='1 component', range=(10, 14), lw=4, bins=16) plt.hist(twinkles_log_m_2comp, histtype='step', label='2 component', bins=bins, lw=4) plt.hist(twinkles_log_m_3comp, histtype='step', label='3 component', bins=bins, lw=4) plt.xlabel('Estimated Log(Stellar Mass)') plt.title('Predicting Masses for OM10 Lenses') plt.legend() Explanation: Below we compare the distributions of stellar mass given to the OM10 lens galaxies by the 1, 2 and 3 component models. End of explanation import GCRCatalogs import pandas as pd from GCR import GCRQuery # _small is a representative sample catalog = GCRCatalogs.load_catalog(str(catalog_version + '_small')) # Predict a mass for each galaxy: np.random.seed(0) cond_indices = np.array([0,1]) twinkles_log_m_1comp = np.array([]) model_file='demo_model.fit' empiricist.fit_model(X, Xerr, filename = 'demo_model.fit', n_components=1) twinkles_data = np.array([twinkles_lenses['ZLENS'], twinkles_lenses['VELDISP']]).T for x in twinkles_data: log_m = get_log_m(cond_indices, 2, x[cond_indices], model_file) twinkles_log_m_1comp = np.append(twinkles_log_m_1comp,log_m) %%time gcr_om10_match = [] err = 0 np.random.seed(10) i = 0 z_cat_min = np.power(10, np.log10(np.min(twinkles_lenses['ZLENS'])) - .1) z_cat_max = np.power(10, np.log10(np.max(twinkles_lenses['ZLENS'])) + .1) stellar_mass_cat_min = np.min(np.power(10, twinkles_log_m_1comp))*0.9 stellar_mass_cat_max = np.max(np.power(10, twinkles_log_m_1comp))*1.1 data = catalog.get_quantities(['galaxy_id', 'redshift_true', 'stellar_mass', 'ellipticity_true', 'size_true', 'size_minor_true', 'stellar_mass_bulge', 'stellar_mass_disk', 'size_bulge_true', 'size_minor_bulge_true'], filters=['stellar_mass > %f' % stellar_mass_cat_min, 'stellar_mass < %f' % stellar_mass_cat_max, 'redshift_true > %f' % z_cat_min, 'redshift_true < %f' % z_cat_max, 'stellar_mass_bulge/stellar_mass > 0.99']) #### Important Note # Twinkles issue #310 (https://github.com/LSSTDESC/Twinkles/issues/310) says OM10 defines ellipticity as 1 - b/a but # gcr_catalogs defines ellipticity as (1-b/a)/(1+b/a) (https://github.com/LSSTDESC/gcr-catalogs/blob/master/GCRCatalogs/SCHEMA.md) data['om10_ellipticity'] = (1-(data['size_minor_bulge_true']/data['size_bulge_true'])) data_df = pd.DataFrame(data) print(data_df.head(10)) len(data_df) %%time row_num = -1 keep_rows = [] for zsrc, m_star, ellip in zip(twinkles_lenses['ZLENS'], np.power(10, twinkles_log_m_1comp), twinkles_lenses['ELLIP']): row_num += 1 #print(zsrc, m_star, ellip) if row_num % 1000 == 0: print(row_num) z_min, z_max = np.power(10, np.log10(zsrc) - .1), np.power(10, np.log10(zsrc) + .1) m_star_min, m_star_max = m_star*.9, m_star*1.1 ellip_min, ellip_max = ellip*.9, ellip*1.1 data_subset = data_df.query('redshift_true > %f and redshift_true < %f and stellar_mass > %f and stellar_mass < %f and om10_ellipticity > %f and om10_ellipticity < %f' % (z_min, z_max, m_star_min, m_star_max, ellip_min, ellip_max)) #data = catalog.get_quantities(['redshift_true', 'stellar_mass', 'ellipticity_true']) #data_subset = (query).filter(data) #print(data_subset) num_matches = len(data_subset['redshift_true']) if num_matches == 0: err += 1 continue elif num_matches == 1: gcr_data = [data_subset['redshift_true'].values[0], data_subset['stellar_mass_bulge'].values[0], data_subset['om10_ellipticity'].values[0], data_subset['size_bulge_true'].values[0], 
data_subset['size_minor_bulge_true'].values[0], data_subset['galaxy_id'].values[0]] gcr_om10_match.append(gcr_data) keep_rows.append(row_num) elif num_matches > 1: use_idx = np.random.choice(num_matches) gcr_data = [data_subset['redshift_true'].values[use_idx], data_subset['stellar_mass_bulge'].values[use_idx], data_subset['om10_ellipticity'].values[use_idx], data_subset['size_bulge_true'].values[use_idx], data_subset['size_minor_bulge_true'].values[use_idx], data_subset['galaxy_id'].values[use_idx]] gcr_om10_match.append(gcr_data) keep_rows.append(row_num) print("Total Match Failures: ", err, " Percentage Match Failures: ", np.float(err)/len(twinkles_log_m_1comp)) gcr_z_1comp = [] gcr_m_star_1comp = [] gcr_r_eff_1comp = [] gcr_gal_id_1comp = [] for row in gcr_om10_match: gcr_z_1comp.append(row[0]) gcr_m_star_1comp.append(row[1]) gcr_r_eff_1comp.append(np.sqrt(row[3]*row[4])) gcr_gal_id_1comp.append(row[5]) np.savetxt('keep_rows_agn.dat', np.array(keep_rows)) #Let's take a look at a couple results n, bins, p = plt.hist(twinkles_lenses['ZLENS'], alpha=0.5, bins=15) plt.hist(gcr_z_1comp, alpha=0.5, bins=bins) plt.xlabel('Lens Galaxy Redshift') plt.ylabel('Counts') #Let's take a look at a couple results n, bins, p = plt.hist(twinkles_log_m_1comp, alpha=0.5, bins=15)#, range=(0,100)) plt.hist(np.log10(gcr_m_star_1comp), alpha=0.5, bins=bins) plt.xlabel('Log10(Lens Galaxy Stellar Mass) (Solar Masses)') plt.ylabel('Counts') Explanation: Now we connect to the cosmoDC2 database and use redshift, stellar_mass and ellipticity of our OM10 galaxies to find associated radial sizes for our galaxies. The query looks for any galaxies within 10% in dex of redshift and 10% of stellar mass and ellipticity. For our lens galaxies we don't want disks. However, limiting ourselves in cosmoDC2 to only galaxies with stellar_mass_disk == 0.0 was too restrictive and we instead take the bulge properties for galaxies where the stellar mass of the bulge is over 99% of the total stellar mass. If no matches are found then it will skip on to the next system and we will leave that OM10 system out of the catalog available to the sprinkler. 
End of explanation # Predict a mass for each galaxy: np.random.seed(0) cond_indices = np.array([0,1]) twinkles_log_m = np.array([]) twinkles_reff = np.array([]) model_file='demo_model.fit' empiricist.fit_model(X, Xerr, filename = 'demo_model.fit', n_components=3) twinkles_data = np.array([twinkles_lenses['ZLENS'], twinkles_lenses['VELDISP']]).T for x in twinkles_data: log_m = get_log_m(cond_indices, 2, x[cond_indices], model_file) twinkles_log_m = np.append(twinkles_log_m,log_m) %%time gcr_om10_match = [] err = 0 np.random.seed(10) i = 0 z_cat_min = np.power(10, np.log10(np.min(twinkles_lenses['ZLENS'])) - .1) z_cat_max = np.power(10, np.log10(np.max(twinkles_lenses['ZLENS'])) + .1) stellar_mass_cat_min = np.min(np.power(10, twinkles_log_m))*0.9 stellar_mass_cat_max = np.max(np.power(10, twinkles_log_m))*1.1 data = catalog.get_quantities(['galaxy_id', 'redshift_true', 'stellar_mass', 'ellipticity_true', 'size_true', 'size_minor_true', 'stellar_mass_bulge', 'stellar_mass_disk', 'size_bulge_true', 'size_minor_bulge_true'], filters=['stellar_mass > %f' % stellar_mass_cat_min, 'stellar_mass < %f' % stellar_mass_cat_max, 'redshift_true > %f' % z_cat_min, 'redshift_true < %f' % z_cat_max, 'stellar_mass_bulge/stellar_mass > 0.99']) #### Important Note # Twinkles issue #310 (https://github.com/LSSTDESC/Twinkles/issues/310) says OM10 defines ellipticity as 1 - b/a but # gcr_catalogs defines ellipticity as (1-b/a)/(1+b/a) (https://github.com/LSSTDESC/gcr-catalogs/blob/master/GCRCatalogs/SCHEMA.md) data['om10_ellipticity'] = (1-(data['size_minor_bulge_true']/data['size_bulge_true'])) data_df = pd.DataFrame(data) print(data_df.head(10)) row_num = -1 keep_rows = [] for zsrc, m_star, ellip in zip(twinkles_lenses['ZLENS'], np.power(10, twinkles_log_m), twinkles_lenses['ELLIP']): row_num += 1 #print(zsrc, m_star, ellip) if row_num % 1000 == 0: print(row_num) z_min, z_max = np.power(10, np.log10(zsrc) - .1), np.power(10, np.log10(zsrc) + .1) m_star_min, m_star_max = m_star*.9, m_star*1.1 ellip_min, ellip_max = ellip*.9, ellip*1.1 data_subset = data_df.query('redshift_true > %f and redshift_true < %f and stellar_mass > %f and stellar_mass < %f and om10_ellipticity > %f and om10_ellipticity < %f' % (z_min, z_max, m_star_min, m_star_max, ellip_min, ellip_max)) #data = catalog.get_quantities(['redshift_true', 'stellar_mass', 'ellipticity_true']) #data_subset = (query).filter(data) #print(data_subset) num_matches = len(data_subset['redshift_true']) if num_matches == 0: err += 1 continue elif num_matches == 1: gcr_data = [data_subset['redshift_true'].values[0], data_subset['stellar_mass_bulge'].values[0], data_subset['om10_ellipticity'].values[0], data_subset['size_bulge_true'].values[0], data_subset['size_minor_bulge_true'].values[0], data_subset['galaxy_id'].values[0]] gcr_om10_match.append(gcr_data) keep_rows.append(row_num) elif num_matches > 1: use_idx = np.random.choice(num_matches) gcr_data = [data_subset['redshift_true'].values[use_idx], data_subset['stellar_mass_bulge'].values[use_idx], data_subset['om10_ellipticity'].values[use_idx], data_subset['size_bulge_true'].values[use_idx], data_subset['size_minor_bulge_true'].values[use_idx], data_subset['galaxy_id'].values[use_idx]] gcr_om10_match.append(gcr_data) keep_rows.append(row_num) print("Total Match Failures: ", err, " Percentage Match Failures: ", np.float(err)/len(twinkles_log_m)) gcr_z = [] gcr_m_star = [] for row in gcr_om10_match: gcr_z.append(row[0]) gcr_m_star.append(row[1]) #Let's take a look at a couple results n, bins, p = 
plt.hist(twinkles_lenses['ZLENS'], alpha=0.5, bins=15) plt.hist(gcr_z, alpha=0.5, bins=bins) plt.xlabel('Lens Galaxy Redshift') plt.ylabel('Counts') #Let's take a look at a couple results n, bins, p = plt.hist(twinkles_log_m, alpha=0.5, bins=15)#, range=(0,100)) plt.hist(np.log10(gcr_m_star), alpha=0.5, bins=bins) plt.xlabel('Log10(Lens Galaxy Stellar Mass) (Solar Masses)') plt.ylabel('Counts') Explanation: We are able to match over 95% of the systems, but it seems that this disproportionately leaves out very high masses. Let's see if the 3-component model can do better. End of explanation # Predict a mass for each galaxy: np.random.seed(0) cond_indices = np.array([0,1]) twinkles_log_m = np.array([]) twinkles_reff = np.array([]) model_file='demo_model.fit' empiricist.fit_model(X, Xerr, filename = 'demo_model.fit', n_components=2) twinkles_data = np.array([twinkles_lenses['ZLENS'], twinkles_lenses['VELDISP']]).T for x in twinkles_data: log_m = get_log_m(cond_indices, 2, x[cond_indices], model_file) twinkles_log_m = np.append(twinkles_log_m,log_m) %%time gcr_om10_match = [] err = 0 np.random.seed(10) i = 0 z_cat_min = np.power(10, np.log10(np.min(twinkles_lenses['ZLENS'])) - .1) z_cat_max = np.power(10, np.log10(np.max(twinkles_lenses['ZLENS'])) + .1) stellar_mass_cat_min = np.min(np.power(10, twinkles_log_m))*0.9 stellar_mass_cat_max = np.max(np.power(10, twinkles_log_m))*1.1 data = catalog.get_quantities(['galaxy_id', 'redshift_true', 'stellar_mass', 'ellipticity_true', 'size_true', 'size_minor_true', 'stellar_mass_bulge', 'stellar_mass_disk', 'size_bulge_true', 'size_minor_bulge_true'], filters=['stellar_mass > %f' % stellar_mass_cat_min, 'stellar_mass < %f' % stellar_mass_cat_max, 'redshift_true > %f' % z_cat_min, 'redshift_true < %f' % z_cat_max, 'stellar_mass_bulge/stellar_mass > 0.99']) #### Important Note # Twinkles issue #310 (https://github.com/LSSTDESC/Twinkles/issues/310) says OM10 defines ellipticity as 1 - b/a but # gcr_catalogs defines ellipticity as (1-b/a)/(1+b/a) (https://github.com/LSSTDESC/gcr-catalogs/blob/master/GCRCatalogs/SCHEMA.md) data['om10_ellipticity'] = (1-(data['size_minor_bulge_true']/data['size_bulge_true'])) data_df = pd.DataFrame(data) print(data_df.head(10)) row_num = -1 keep_rows = [] for zsrc, m_star, ellip in zip(twinkles_lenses['ZLENS'], np.power(10, twinkles_log_m), twinkles_lenses['ELLIP']): row_num += 1 #print(zsrc, m_star, ellip) if row_num % 1000 == 0: print(row_num) z_min, z_max = np.power(10, np.log10(zsrc) - .1), np.power(10, np.log10(zsrc) + .1) m_star_min, m_star_max = m_star*.9, m_star*1.1 ellip_min, ellip_max = ellip*.9, ellip*1.1 data_subset = data_df.query('redshift_true > %f and redshift_true < %f and stellar_mass > %f and stellar_mass < %f and om10_ellipticity > %f and om10_ellipticity < %f' % (z_min, z_max, m_star_min, m_star_max, ellip_min, ellip_max)) #data = catalog.get_quantities(['redshift_true', 'stellar_mass', 'ellipticity_true']) #data_subset = (query).filter(data) #print(data_subset) num_matches = len(data_subset['redshift_true']) if num_matches == 0: err += 1 continue elif num_matches == 1: gcr_data = [data_subset['redshift_true'].values[0], data_subset['stellar_mass_bulge'].values[0], data_subset['om10_ellipticity'].values[0], data_subset['size_bulge_true'].values[0], data_subset['size_minor_bulge_true'].values[0], data_subset['galaxy_id'].values[0]] gcr_om10_match.append(gcr_data) keep_rows.append(row_num) elif num_matches > 1: use_idx = np.random.choice(num_matches) gcr_data = [data_subset['redshift_true'].values[use_idx], 
data_subset['stellar_mass_bulge'].values[use_idx], data_subset['om10_ellipticity'].values[use_idx], data_subset['size_bulge_true'].values[use_idx], data_subset['size_minor_bulge_true'].values[use_idx], data_subset['galaxy_id'].values[use_idx]] gcr_om10_match.append(gcr_data) keep_rows.append(row_num) print("Total Match Failures: ", err, " Percentage Match Failures: ", np.float(err)/len(twinkles_log_m)) gcr_z = [] gcr_m_star = [] gcr_r_eff = [] gcr_gal_id = [] for row in gcr_om10_match: gcr_z.append(row[0]) gcr_m_star.append(row[1]) gcr_r_eff.append(np.sqrt(row[3]*row[4])) gcr_gal_id.append(row[5]) #Let's take a look at a couple results n, bins, p = plt.hist(twinkles_lenses['ZLENS'], alpha=0.5, bins=15) plt.hist(gcr_z, alpha=0.5, bins=bins) plt.xlabel('Lens Galaxy Redshift') plt.ylabel('Counts') #Let's take a look at a couple results n, bins, p = plt.hist(twinkles_log_m, alpha=0.5, bins=15)#, range=(0,100)) plt.hist(np.log10(gcr_m_star), alpha=0.5, bins=bins) plt.xlabel('Log10(Lens Galaxy Stellar Mass) (Solar Masses)') plt.ylabel('Counts') Explanation: The matching is now available in over 96% of the OM10 catalog but overall doesn't seem to make much of a difference at the high mass end. Since the distribution for the 3 component model seems to be unrealistic and doesn't add much improvement let's see if we can get a reasonable improvement on the 1 component results using a 2 component model. End of explanation import sys sys.path.append('/global/homes/b/brycek/DC2/sims_GCRCatSimInterface/workspace/sed_cache/') from SedFitter import sed_from_galacticus_mags H0 = catalog.cosmology.H0.value Om0 = catalog.cosmology.Om0 sed_label = [] sed_min_wave = [] sed_wave_width = [] for quant_label in sorted(catalog.list_all_quantities()): if (quant_label.startswith('sed') and quant_label.endswith('bulge')): sed_label.append(quant_label) label_split = quant_label.split('_') sed_min_wave.append(int(label_split[1])/10) sed_wave_width.append(int(label_split[2])/10) bin_order = np.argsort(sed_min_wave) sed_label = np.array(sed_label)[bin_order] sed_min_wave = np.array(sed_min_wave)[bin_order] sed_wave_width = np.array(sed_wave_width)[bin_order] Explanation: This does the worst of the 3 models. Therefore, it looks like we should use the 1-component model for the final match to the catalog. Matching to SEDs using sims_GCRCatSimInterface The other thing we want to add into the lensing catalogs are SEDs for the lens galaxies. Here we get the top hat filters out of cosmoDC2 and use the code in sims_GCRCatSimInterface to match these values to a CATSIM SED file in the same way the galaxies are matched for Instance Catalog production in DC2. We also use the code to calculate the magnitude normalization for PhoSim. 
End of explanation for i in zip(sed_label, sed_min_wave, sed_wave_width): print(i) del(data) del(data_df) keep_rows_1comp = np.genfromtxt('keep_rows_agn.dat') keep_rows_1comp = np.array(keep_rows_1comp, dtype=int) print(keep_rows_1comp) columns = ['galaxy_id', 'redshift_true', 'mag_u_lsst', 'mag_g_lsst', 'mag_r_lsst', 'mag_i_lsst', 'mag_z_lsst', 'mag_y_lsst'] for sed_bin in sed_label: columns.append(sed_bin) data = catalog.get_quantities(columns, filters=['stellar_mass > %f' % stellar_mass_cat_min, 'stellar_mass < %f' % stellar_mass_cat_max, 'redshift_true > %f' % z_cat_min, 'redshift_true < %f' % z_cat_max, 'stellar_mass_bulge/stellar_mass > 0.99']) data_df = pd.DataFrame(data) %%time sed_name_list = [] magNorm_list = [] lsst_mags = [] mag_30_list = [] redshift_list = [] i = 0 # Using 1-component model results for gal_id, gal_z in zip(gcr_gal_id_1comp, gcr_z_1comp): if i % 1000 == 0: print(i) i+=1 data_subset = data_df.query(str('galaxy_id == %i' % gal_id)) mag_array = [] lsst_mag_array = [data_subset['mag_%s_lsst' % band_name].values[0] for band_name in ['u', 'g', 'r', 'i', 'z', 'y']] for sed_bin in sed_label: mag_array.append(-2.5*np.log10(data_subset[sed_bin].values[0])) mag_array = np.array(mag_array) lsst_mag_array = np.array(lsst_mag_array) lsst_mags.append(lsst_mag_array) mag_30_list.append(mag_array) redshift_list.append(gal_z) print(len(sed_name_list), len(keep_rows_1comp)) mag_30_list = np.array(mag_30_list).T lsst_mags = np.array(lsst_mags).T redshift_list = np.array(redshift_list) sed_name, magNorm, av, rv = sed_from_galacticus_mags(mag_30_list, redshift_list, H0, Om0, sed_min_wave, sed_wave_width, lsst_mags) sed_name_list = sed_name magNorm_list = magNorm print(len(sed_name_list), len(keep_rows_1comp)) sed_name_array = np.array(sed_name_list) magNorm_array = np.array(magNorm_list) av_array = np.array(av) rv_array = np.array(rv) np.shape(av_array), np.shape(rv_array) Explanation: Check to see that our bins are now in order when we call them. End of explanation sed_metals = [] sed_ages = [] sed_metals_dict = {'0005Z':'.005', '002Z':'.02', '02Z':'.2', '04Z':'.4', '1Z':'1.0', '25Z':'2.5'} for sed_template in sed_name_array: sed_info = sed_template.split('/')[1].split('.') sed_age_info = sed_info[1].split('E') sed_ages.append(np.power(10, int(sed_age_info[1]))*int(sed_age_info[0])) sed_metals.append(sed_metals_dict[sed_info[2]]) fig = plt.figure(figsize=(12, 6)) mpl.rcParams['text.usetex'] = False fig.add_subplot(1,2,1) plt.hist(np.log10(sed_ages)) plt.xlabel('Log10(Galaxy Age (years))') plt.ylabel('Counts') fig.add_subplot(1,2,2) names, counts = np.unique(sed_metals, return_counts=True) x = np.arange(len(names), dtype=int) plt.bar(x, counts) plt.xticks(x, names) plt.xlabel('Metallicity (Z_sun)') plt.ylabel('Counts') plt.tight_layout() plt.subplots_adjust(top=0.9) plt.suptitle('Age and Metallicity of matched SED templates') Explanation: Before saving our new information we want to check that the SEDs we are matching are in fact old, metal-poor templates. So we take the metallicity and age from all the templates and check them out. End of explanation test_bandpassDict = BandpassDict.loadTotalBandpassesFromFiles() imsimband = Bandpass() imsimband.imsimBandpass() mag_norm_om10 = [] test_sed = Sed() test_sed.readSED_flambda(os.path.join(str(os.environ['SIMS_SED_LIBRARY_DIR']), sed_name_array[0])) a, b = test_sed.setupCCM_ab() # We need to adjust the magNorms of the galaxies so that, # in the i-band, they match the OM10 APMAG_I. 
# We do this be calculating the i-magnitudes of the galaxies as they # will be simulated, finding the difference between that magnitude # and APMAG_I, and adding that difference to *all* of the magNorms # of the galaxy (this way, the cosmoDC2 validated colors of the # galaxies are preserved) for i, idx in list(enumerate(keep_rows_1comp)): if i % 10000 == 0: print(i, idx) test_sed = Sed() test_sed.readSED_flambda(os.path.join(str(os.environ['SIMS_SED_LIBRARY_DIR']), sed_name_array[i])) fnorm = getImsimFluxNorm(test_sed, magNorm_array[3,i]) test_sed.multiplyFluxNorm(fnorm) test_sed.addDust(a, b, A_v=av_array[i], R_v=rv_array[i]) test_sed.redshiftSED(twinkles_lenses['ZLENS'][idx], dimming=True) i_mag = test_sed.calcMag(test_bandpassDict['i']) d_mag = twinkles_lenses['APMAG_I'][idx]-i_mag mag_norm_om10.append(magNorm_array[:,i] + d_mag) col_list = [] for col in twinkles_lenses.columns: if col.name != 'REFF': col_list.append(fits.Column(name=col.name, format=col.format, array=twinkles_lenses[col.name][keep_rows_1comp])) else: col_list.append(fits.Column(name=col.name, format=col.format, array=gcr_r_eff_1comp)) col_list.append(fits.Column(name='lens_sed', format='40A', array=sed_name_array)) col_list.append(fits.Column(name='sed_magNorm', format='6D', array=mag_norm_om10)) col_list.append(fits.Column(name='lens_av', format='D', array=av_array)) col_list.append(fits.Column(name='lens_rv', format='D', array=rv_array)) cols = fits.ColDefs(col_list) tbhdu = fits.BinTableHDU.from_columns(cols) tbhdu.writeto('../../data/twinkles_lenses_%s.fits' % catalog_version) Explanation: It seems that we are indeed getting templates for older galaxies and mostly less than solar metallicity. Adding new info to Twinkles OM10 data We will take all the columns currently in the twinkles om10 data and add in our new reff values, SED filenames and SED magnitude normalizations. End of explanation hdulist_2 = fits.open('../../data/twinkles_lenses_%s.fits' % catalog_version) print(hdulist_2[1].data[0]) print(hdulist_2[1].data['REFF'][0], hdulist_2[1].data['lens_sed'][0], hdulist_2[1].data['sed_magNorm'][0], hdulist_2[1].data['lens_av'][0], hdulist_2[1].data['lens_rv'][0]) print(gcr_r_eff_1comp[0], sed_name_list[0], mag_norm_om10[0], av_array[0], rv_array[0]) Explanation: Great! Now that we have saved our new lens catalog we can open it up and make sure the data is where we want it. End of explanation
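Editor's sketch (an addition, not part of the original record above): the i-band re-normalization loop can be summarized as a single helper that shifts every band's magNorm by the offset between OM10's APMAG_I and the synthetic i-band magnitude, which preserves the cosmoDC2 colors while fixing the overall brightness. All arguments below are placeholders, the magNorm order is assumed to be ugrizy (so index 3 is the i band, matching the notebook's band list), and the lsst.sims.photUtils calls simply mirror the ones used above.

import os
import numpy as np
from lsst.sims.photUtils import Sed, BandpassDict, getImsimFluxNorm

bp_dict = BandpassDict.loadTotalBandpassesFromFiles()

def renormalize_to_apmag_i(sed_file, mag_norms, av, rv, z_lens, apmag_i):
    # Rebuild the observed-frame SED the same way the loop above does:
    # normalize to the i-band magNorm, apply CCM dust, then redshift with dimming.
    sed = Sed()
    sed.readSED_flambda(os.path.join(os.environ['SIMS_SED_LIBRARY_DIR'], sed_file))
    a, b = sed.setupCCM_ab()
    sed.multiplyFluxNorm(getImsimFluxNorm(sed, mag_norms[3]))
    sed.addDust(a, b, A_v=av, R_v=rv)
    sed.redshiftSED(z_lens, dimming=True)
    # Offset that brings the synthetic i magnitude onto OM10's APMAG_I,
    # applied to all six bands so the colors are unchanged.
    d_mag = apmag_i - sed.calcMag(bp_dict['i'])
    return np.asarray(mag_norms) + d_mag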
14,858
Given the following text description, write Python code to implement the functionality described. Description: Minimum number of increment / decrement operations such that array contains all elements from 1 to N Function to find the minimum operations ; Sort the given array ; Count operations by assigning a [ i ] = i + 1 ; Driver Code
Python Code: 
def minimumMoves(a, n):
    # Sort the array so the i-th smallest value is matched with target i + 1
    operations = 0
    a.sort()
    for i in range(n):
        operations += abs(a[i] - (i + 1))
    return operations

if __name__ == '__main__':
    arr = [5, 3, 2]
    n = len(arr)
    print(minimumMoves(arr, n))
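A quick sanity check of the routine above; the arrays and expected totals here are illustrative values chosen by the editor, not part of the original prompt. After sorting, pairing the i-th smallest element with target i + 1 minimizes the total number of increments and decrements, with the sort dominating at O(n log n).

assert minimumMoves([5, 3, 2], 3) == 4      # sorted [2, 3, 5] vs targets [1, 2, 3]: 1 + 1 + 2
assert minimumMoves([1, 2, 3, 4], 4) == 0   # already contains exactly 1..N
assert minimumMoves([10, 10, 10], 3) == 24  # 9 + 8 + 7
print("all checks passed")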
14,859
Given the following text description, write Python code to implement the functionality described below step by step Description: NOAA Weather Analysis Frequency of Daily High and Low Record Temperatures Analysis Goal Given historical data for a weather station in the US, what is the frequency for new high or low temperature records? If there is scientific evidence of extreme fluctuations in our weather patterns due to human impact to the environment, then we should be able to identify significant factual examples of increases in the frequency in extreme temperature changes within the weather station data. There has been a great deal of discussion around climate change and global warming. Since NOAA has made their data public, let us explore the data ourselves and see what insights we can discover. General Analytical Questions For each of the possible 365 days of the year that a specific US weather station has gathered data, can we identify the frequency at which daily High and Low temperature records are broken. Does the historical frequency of daily temperature records (High or Low) in the US provide statistical evidence of dramatic climate change? For a given weather station, what is the longest duration of daily temperature record (High or Low) in the US? Approach This analysis is based on a <font color="green">15-March-2015</font> snapshot of the Global Historical Climatology Network (GHCN) dataset. This analysis leverages Historical Daily Summary weather station information that was generated using data derived from reproducible research. This summary data captures information about a given day throughout history at a specific weather station in the US. This dataset contains 365 rows where each row depicts the aggregated low and high record temperatures for a specific day throughout the history of the weather station. Each US weather station is associated with a single CSV file that contains historical daily summary data. All temperatures reported in Fahrenheit. Environment Setup This noteboook leverages the several Jupyter Incubation Extensions (urth_components) Step1: Load urth components Step2: Declare Globals Step3: Prepare Filesystem Data Preparation Options Use the NOAA data Munging project to generate CSV files for the latest NOAA data. Use the sample March 16, 2015 snapshot provided in this repo and do one of the following Step4: Data Munging In this section of the notebook we will define the necessary data extraction, transformation and loading functions for the desired interactive dashboard. Step5: Exploratory Analysis In this section of the notebook we will define the necessary computational functions for the desired interactive dashboard. Step6: Visualization In this section of the notebook we will define the widgets and supporting functions for the construction of an interactive dashboard. See Polymer Data Bindings for more details. Narration Widget Provide some introductory content for the user. Step7: Weather Channel Widget Display the current USA national weather map. Step8: Preferences Widget This composite widget allows the user to control several visualization switches Step9: Dashboard Control Widget This composite widget allows the user to control several visualization switches Step10: Channel Monitor Widget This widget provides status information pertaining to properties of the dashboard. Step11: Station Detail Widget This composite widget allows the user view station details for the selected state. Tabluar and map viewing options are available. 
Step12: HACK Step13: Station Summary Widget This widget provides the user with a glimpse into the historic hi/low record data for the selected station. Step14: Temperature Record Analysis for Selected Station This widget provides the user with insights for selected station.
Python Code: %matplotlib inline import os import struct import glob import pandas as pd import numpy as np import datetime as dt import matplotlib.pyplot as plt import seaborn as sns import folium from IPython.display import HTML from IPython.display import Javascript, display Explanation: NOAA Weather Analysis Frequency of Daily High and Low Record Temperatures Analysis Goal Given historical data for a weather station in the US, what is the frequency for new high or low temperature records? If there is scientific evidence of extreme fluctuations in our weather patterns due to human impact to the environment, then we should be able to identify significant factual examples of increases in the frequency in extreme temperature changes within the weather station data. There has been a great deal of discussion around climate change and global warming. Since NOAA has made their data public, let us explore the data ourselves and see what insights we can discover. General Analytical Questions For each of the possible 365 days of the year that a specific US weather station has gathered data, can we identify the frequency at which daily High and Low temperature records are broken. Does the historical frequency of daily temperature records (High or Low) in the US provide statistical evidence of dramatic climate change? For a given weather station, what is the longest duration of daily temperature record (High or Low) in the US? Approach This analysis is based on a <font color="green">15-March-2015</font> snapshot of the Global Historical Climatology Network (GHCN) dataset. This analysis leverages Historical Daily Summary weather station information that was generated using data derived from reproducible research. This summary data captures information about a given day throughout history at a specific weather station in the US. This dataset contains 365 rows where each row depicts the aggregated low and high record temperatures for a specific day throughout the history of the weather station. Each US weather station is associated with a single CSV file that contains historical daily summary data. All temperatures reported in Fahrenheit. 
Environment Setup This noteboook leverages the several Jupyter Incubation Extensions (urth_components): Declarative Widgets Dynamic Dashboards It also depends on a custom polymer widget: urth-raw-html.html Import Python Dependencies Depending on the state of your IPython environment, you may need to pre-instal a few dependencies: $ pip install seaborn folium End of explanation %%html <link rel="import" href="urth_components/paper-dropdown-menu/paper-dropdown-menu.html" is='urth-core-import' package='PolymerElements/paper-dropdown-menu'> <link rel="import" href="urth_components/paper-menu/paper-menu.html" is='urth-core-import' package='PolymerElements/paper-menu'> <link rel="import" href="urth_components/paper-item/paper-item.html" is='urth-core-import' package='PolymerElements/paper-item'> <link rel="import" href="urth_components/paper-button/paper-button.html" is='urth-core-import' package='PolymerElements/paper-button'> <link rel="import" href="urth_components/paper-card/paper-card.html" is='urth-core-import' package='PolymerElements/paper-card'> <link rel="import" href="urth_components/paper-slider/paper-slider.html" is='urth-core-import' package='PolymerElements/paper-slider'> <link rel="import" href="urth_components/google-map/google-map.html" is='urth-core-import' package='GoogleWebComponents/google-map'> <link rel="import" href="urth_components/google-map/google-map-marker.html" is='urth-core-import' package='GoogleWebComponents/google-map'> <link rel="import" href="urth_components/urth-viz-table/urth-viz-table.html" is='urth-core-import'> <link rel="import" href="urth_components/urth-viz-chart/urth-viz-chart.html" is='urth-core-import'> <!-- Add custom Polymer Widget for injecting raw HTML into a urth-core widget --> <link rel="import" href="./urth-raw-html.html"> <!-- HACK: Use Property Watch patch for v0.1.0 of declarativewidgets; This can be removed for v0.1.1 --> <link rel="import" href="./urth-core-watch.html"> Explanation: Load urth components End of explanation DATA_STATE_STATION_LIST = None DATA_STATION_DETAIL_RESULTS = None DATA_FREQUENCY_RESULTS = None Explanation: Declare Globals End of explanation IMAGE_DIRECTORY = "plotit" def image_cleanup(dirname): if not os.path.exists(dirname): os.makedirs(dirname) else: for filePath in glob.glob(dirname+"/*.png"): if os.path.isfile(filePath): os.remove(filePath) #image_cleanup(IMAGE_DIRECTORY) Explanation: Prepare Filesystem Data Preparation Options Use the NOAA data Munging project to generate CSV files for the latest NOAA data. Use the sample March 16, 2015 snapshot provided in this repo and do one of the following: Open a terminal session and run these commands: cd /home/main/notebooks/noaa/hdtadash/data/ tar -xvf station_summaries.tar Enable, execute and then disable the following bash cell Plot Storage Earlier versions of this notebook stored chart images to disk. We used a specific directory to store plot images (*.png files). However, this approach does not work if the notebook user would like to deploy as a local application. End of explanation # Use this global variable to specify the path for station summary files. 
NOAA_STATION_SUMMARY_PATH = "/home/main/notebooks/noaa/hdtadash/data/" # Use this global variable to specify the path for the GHCND Station Directory STATION_DETAIL_FILE = '/home/main/notebooks/noaa/hdtadash/data/ghcnd-stations.txt' # Station detail structures for building station lists station_detail_colnames = ['StationID','State','Name', 'Latitude','Longitude','QueryTag'] station_detail_rec_template = {'StationID': "", 'State': "", 'Name': "", 'Latitude': "", 'Longitude': "", 'QueryTag': "" } # ----------------------------------- # Station Detail Processing # ----------------------------------- def get_filename(pathname): '''Fetch filename portion of pathname.''' plist = pathname.split('/') fname, fext = os.path.splitext(plist[len(plist)-1]) return fname def fetch_station_list(): '''Return list of available stations given collection of summary files on disk.''' station_list = [] raw_files = os.path.join(NOAA_STATION_SUMMARY_PATH,'','*_sum.csv') for index, fname in enumerate(glob.glob(raw_files)): f = get_filename(fname).split('_')[0] station_list.append(str(f)) return station_list USA_STATION_LIST = fetch_station_list() def gather_states(fname,stations): '''Return a list of unique State abbreviations. Weather station data exists for these states.''' state_list = [] with open(fname, 'r', encoding='utf-8') as f: lines = f.readlines() f.close() for line in lines: r = noaa_gather_station_detail(line,stations) state_list += r df_unique_states = pd.DataFrame(state_list,columns=station_detail_colnames).sort('State').State.unique() return df_unique_states.tolist() def noaa_gather_station_detail(line,slist): '''Build a list of station tuples for stations in the USA.''' station_tuple_list = [] station_id_key = line[0:3] if station_id_key == 'USC' or station_id_key == 'USW': fields = struct.unpack('12s9s10s7s2s30s', line[0:70].encode()) if fields[0].decode().strip() in slist: station_tuple = dict(station_detail_rec_template) station_tuple['StationID'] = fields[0].decode().strip() station_tuple['State'] = fields[4].decode().strip() station_tuple['Name'] = fields[5].decode().strip() station_tuple['Latitude'] = fields[1].decode().strip() station_tuple['Longitude'] = fields[2].decode().strip() qt = "{0} at {1} in {2}".format(fields[0].decode().strip(),fields[5].decode().strip(),fields[4].decode().strip()) station_tuple['QueryTag'] = qt station_tuple_list.append(station_tuple) return station_tuple_list USA_STATES_WITH_STATIONS = gather_states(STATION_DETAIL_FILE,USA_STATION_LIST) def process_station_detail_for_state(fname,stations,statecode): '''Return dataframe of station detail for specified state.''' station_list = [] with open(fname, 'r', encoding='utf-8') as f: lines = f.readlines() f.close() for line in lines: r = noaa_build_station_detail_for_state(line,stations,statecode) station_list += r return pd.DataFrame(station_list,columns=station_detail_colnames) def noaa_build_station_detail_for_state(line,slist,statecode): '''Build a list of station tuples for the specified state in the USA.''' station_tuple_list = [] station_id_key = line[0:3] if station_id_key == 'USC' or station_id_key == 'USW': fields = struct.unpack('12s9s10s7s2s30s', line[0:70].encode()) if ((fields[0].decode().strip() in slist) and (fields[4].decode().strip() == statecode)): station_tuple = dict(station_detail_rec_template) station_tuple['StationID'] = fields[0].decode().strip() station_tuple['State'] = fields[4].decode().strip() station_tuple['Name'] = fields[5].decode().strip() station_tuple['Latitude'] = 
fields[1].decode().strip() station_tuple['Longitude'] = fields[2].decode().strip() qt = "Station {0} in {1} at {2}".format(fields[0].decode().strip(),fields[4].decode().strip(),fields[5].decode().strip()) station_tuple['QueryTag'] = qt station_tuple_list.append(station_tuple) return station_tuple_list # We can examine derived station detail data. #process_station_detail_for_state(STATION_DETAIL_FILE,USA_STATION_LIST,"NE") Explanation: Data Munging In this section of the notebook we will define the necessary data extraction, transformation and loading functions for the desired interactive dashboard. End of explanation # ----------------------------------- # Station Computation Methods # ----------------------------------- month_abbrev = { 1: 'Jan', 2: 'Feb', 3: 'Mar', 4: 'Apr', 5: 'May', 6: 'Jun', 7: 'Jul', 8: 'Aug', 9: 'Sep', 10: 'Oct', 11: 'Nov', 12: 'Dec' } def compute_years_of_station_data(df): '''Compute years of service for the station.''' yrs = dt.date.today().year-min(df['FirstYearOfRecord']) return yrs def compute_tmax_record_quantity(df,freq): '''Compute number of days where maximum temperature records were greater than frequency factor.''' threshold = int(freq) df_result = df.query('(TMaxRecordCount > @threshold)', engine='python') return df_result def compute_tmin_record_quantity(df,freq): '''Compute number of days where minimum temperature records were greater than frequency factor.''' threshold = int(freq) df_result = df.query('(TMinRecordCount > @threshold)', engine='python') return df_result def fetch_station_data(stationid): '''Return dataframe for station summary file.''' fname = os.path.join(NOAA_STATION_SUMMARY_PATH,'',stationid+'_sum.csv') return pd.DataFrame.from_csv(fname) def create_day_identifier(month,day): '''Return dd-mmm string.''' return str(day)+'-'+month_abbrev[int(month)] def create_date_list(mlist,dlist): '''Return list of formated date strings.''' mv = list(mlist.values()) dv = list(dlist.values()) new_list = [] for index, value in enumerate(mv): new_list.append(create_day_identifier(value,dv[index])) return new_list def create_record_date_list(mlist,dlist,ylist): '''Return list of dates for max/min record events.''' mv = list(mlist.values()) dv = list(dlist.values()) yv = list(ylist.values()) new_list = [] for index, value in enumerate(mv): new_list.append(dt.date(yv[index],value,dv[index])) return new_list # Use the Polymer Channel API to establish two-way binding between elements and data. from urth.widgets.widget_channels import channel channel("noaaquery").set("states", USA_STATES_WITH_STATIONS) channel("noaaquery").set("recordTypeOptions", ["Low","High"]) channel("noaaquery").set("recordOccuranceOptions", list(range(4, 16))) channel("noaaquery").set("stationList",USA_STATION_LIST) channel("noaaquery").set("stationDetail",STATION_DETAIL_FILE) channel("noaaquery").set("narrationToggleOptions", ["Yes","No"]) channel("noaaquery").set("cleanupToggleOptions", ["Yes","No"]) channel("noaaquery").set("cleanupPreference", "No") channel("noaaquery").set("displayTypeOptions", ["Data","Map"]) def reset_settings(): channel("noaaquery").set("isNarration", True) channel("noaaquery").set("isMap", True) channel("noaaquery").set("isNewQuery", True) channel("noaaquery").set("stationResultsReady", "") reset_settings() Explanation: Exploratory Analysis In this section of the notebook we will define the necessary computational functions for the desired interactive dashboard. 
End of explanation %%html <a name="narrationdata"></a> <template id="narrationContent" is="urth-core-bind" channel="noaaquery"> <template is="dom-if" if="{{isNarration}}"> <p>This application allows the user to explore historical NOAA data to observer the actual frequency at which weather stations in the USA have actually experienced new high and low temperature records.</p> <blockquote>Are you able to identify a significant number of temperature changes within the weather station data?</blockquote> <blockquote>Would you consider these results representative of extreme weather changes?</blockquote> </paper-card> </template> Explanation: Visualization In this section of the notebook we will define the widgets and supporting functions for the construction of an interactive dashboard. See Polymer Data Bindings for more details. Narration Widget Provide some introductory content for the user. End of explanation %%html <template id="weatherchannel_currentusamap" is="urth-core-bind" channel="noaaquery"> <div id="wc_curmap"> <center><embed src="http://i.imwx.com/images/maps/current/curwx_600x405.jpg" width="500" height="300"></center> <div id="wc_map"> </template> Explanation: Weather Channel Widget Display the current USA national weather map. End of explanation def process_preferences(narrativepref,viewpref): if narrativepref == "Yes": channel("noaaquery").set("isNarration", True) else: channel("noaaquery").set("isNarration","") if viewpref == "Map": channel("noaaquery").set("isMap", True) else: channel("noaaquery").set("isMap", "") return %%html <a name="prefsettings"></a> <template id="setPreferences" is="urth-core-bind" channel="noaaquery"> <urth-core-function id="applySettingFunc" ref="process_preferences" arg-narrativepref="{{narrationPreference}}" arg-viewpref="{{displayPreference}}" auto> </urth-core-function> <paper-card heading="Preferences" elevation="1"> <div class="card-content"> <p class="widget">Select a narration preference to toggle informative content. <paper-dropdown-menu label="Show Narration" selected-item-label="{{narrationPreference}}" noink> <paper-menu class="dropdown-content" selected="[[narrationPreference]]" attr-for-selected="label"> <template is="dom-repeat" items="[[narrationToggleOptions]]"> <paper-item label="[[item]]">[[item]]</paper-item> </template> </paper-menu> </paper-dropdown-menu></p> <p class="widget">Would you like a geospacial view of a selected weather station? <paper-dropdown-menu label="Select Display Type" selected-item-label="{{displayPreference}}" noink> <paper-menu class="dropdown-content" selected="[[displayPreference]]" attr-for-selected="label"> <template is="dom-repeat" items="[[displayTypeOptions]]"> <paper-item label="[[item]]">[[item]]</paper-item> </template> </paper-menu> </paper-dropdown-menu></p> <p class="widget">Would you like to purge disk storage more frequently? <paper-dropdown-menu label="Manage Storage" selected-item-label="{{cleanupPreference}}" noink> <paper-menu class="dropdown-content" selected="[[cleanupPreference]]" attr-for-selected="label"> <template is="dom-repeat" items="[[cleanupToggleOptions]]"> <paper-item label="[[item]]">[[item]]</paper-item> </template> </paper-menu> </paper-dropdown-menu></p> </div> </paper-card> </template> Explanation: Preferences Widget This composite widget allows the user to control several visualization switches: Narration: This dropdown menu allows the user to hide/show narrative content within the dashboard. 
Display Type: This dropdown menu allows the user to toggle between geospacial and raw data visualizations. Storage Management: This dropdown menu allows the user to toggle the frequency of storage cleanup. End of explanation def process_query(fname,stations,statecode,cleanuppref): global DATA_STATE_STATION_LIST if cleanuppref == "Yes": image_cleanup(IMAGE_DIRECTORY) reset_settings() DATA_STATE_STATION_LIST = process_station_detail_for_state(fname,stations,statecode) channel("noaaquery").set("stationResultsReady", True) return DATA_STATE_STATION_LIST # We can examine stations per state data. #process_query(STATION_DETAIL_FILE,USA_STATION_LIST,"NE","No") %%html <a name="loaddata"></a> <template id="loadCard" is="urth-core-bind" channel="noaaquery"> <urth-core-function id="loadDataFunc" ref="process_query" arg-fname="{{stationDetail}}" arg-stations="{{stationList}}" arg-statecode="{{stateAbbrev}}" arg-cleanuppref="{{cleanupPreference}}" result="{{stationQueryResult}}" is-ready="{{isloadready}}"> </urth-core-function> <paper-card heading="Query Preferences" elevation="1"> <div class="card-content"> <div> <p class="widget">Which region of weather stations in the USA do you wish to examine?.</p> <paper-dropdown-menu label="Select State" selected-item-label="{{stateAbbrev}}" noink> <paper-menu class="dropdown-content" selected="{{stateAbbrev}}" attr-for-selected="label"> <template is="dom-repeat" items="[[states]]"> <paper-item label="[[item]]">[[item]]</paper-item> </template> </paper-menu> </paper-dropdown-menu> </div> <div> <p class="widget">Are you interested in daily minimum or maximum temperature records per station?.</p> <paper-dropdown-menu label="Select Record Type" selected-item-label="{{recType}}" noink> <paper-menu class="dropdown-content" selected="[[recType]]" attr-for-selected="label"> <template is="dom-repeat" items="[[recordTypeOptions]]"> <paper-item label="[[item]]">[[item]]</paper-item> </template> </paper-menu> </paper-dropdown-menu> </div> <div> <p class="widget">Each weather station has observed more than one new minimum or maximum temperature record event. How many new record occurrences would you consider significant enough to raise concerns about extreme weather fluctuations?.</p> <paper-dropdown-menu label="Select Occurrence Factor" selected-item-label="{{occurrenceFactor}}" noink> <paper-menu class="dropdown-content" selected="[[occurrenceFactor]]" attr-for-selected="label"> <template is="dom-repeat" items="[[recordOccuranceOptions]]"> <paper-item label="[[item]]">[[item]]</paper-item> </template> </paper-menu> </paper-dropdown-menu> </div> </div> <div class="card-actions"> <paper-button tabindex="0" disabled="{{!isloadready}}" onClick="loadDataFunc.invoke()">Apply</paper-button> </div> </paper-card> </template Explanation: Dashboard Control Widget This composite widget allows the user to control several visualization switches: State Selector: This dropdown menu allows the user to select a state for analysis. Only the data associated with the selected state will be loaded. Record Type: This dropdown menu allows the user focus the analysis on either High or Low records. Occurance Factor: This dropdown menu allows the user to specify the minimum number of new record events for a given calendar day. The widget uses a control method to manage interactive events. 
End of explanation %%html <template id="channelMonitorWidget" is="urth-core-bind" channel="noaaquery"> <h2 class="widget">Channel Monitor</h2> <p class="widget"><b>Query Selections:</b></p> <table border="1" align="center"> <tr> <th>Setting</th> <th>Value</th> </tr> <tr> <td>State</td> <td>{{stateAbbrev}}</td> </tr> <tr> <td>Record Type</td> <td>{{recType}}</td> </tr> <tr> <td>Occurance Factor</td> <td>{{occurrenceFactor}}</td> </tr> <tr> <td>Station ID</td> <td>{{station.0}}</td> </tr> <tr> <td>Narration</td> <td>{{isNarration}}</td> </tr> <tr> <td>Map View</td> <td>{{isMap}}</td> </tr> </table> <p class="widget">{{recType}} temperature record analysis using historical NOAA data from weather {{station.5}}.</p> </template> Explanation: Channel Monitor Widget This widget provides status information pertaining to properties of the dashboard. End of explanation # Use Python to generate a Folium Map with Markers for each weather station in the selected state. def display_map(m, height=500): '''Takes a folium instance and embed HTML.''' m._build_map() srcdoc = m.HTML.replace('"', '&quot;') embed = '<iframe srcdoc="{0}" style="width: 100%; height: {1}px; border: none"></iframe>'.format(srcdoc, height) return embed def render_map(height=500): '''Generate a map based on a dateframe of station detail.''' df = DATA_STATE_STATION_LIST centerpoint_latitude = np.mean(df.Latitude.astype(float)) centerpoint_longitude = np.mean(df.Longitude.astype(float)) map_obj = folium.Map(location=[centerpoint_latitude, centerpoint_longitude],zoom_start=6) for index, row in df.iterrows(): map_obj.simple_marker([row.Latitude, row.Longitude], popup=row.QueryTag) return display_map(map_obj) # We can examine the generated HTML for the dynamic map #render_map() Explanation: Station Detail Widget This composite widget allows the user view station details for the selected state. Tabluar and map viewing options are available. End of explanation %%html <template id="station_detail_combo_func" is="urth-core-bind" channel="noaaquery"> <urth-core-watch value="{{stationResultsReady}}"> <urth-core-function id="renderFoliumMapFunc" ref="render_map" result="{{foliumMap}}" auto> </urth-core-function> </urth-core-watch> </template> %%html <template id="station_detail_combo_widget" is="urth-core-bind" channel="noaaquery"> <paper-card style="width: 100%;" heading="{{stateAbbrev}} Weather Stations" elevation="1"> <p>These are the weather stations monitoring local conditions. Select a station to explore historical record temperatures.</p> <urth-viz-table datarows="{{ stationQueryResult.data }}" selection="{{station}}" columns="{{ stationQueryResult.columns }}" rows-visible=20> </urth-viz-table> </paper-card> <template is="dom-if" if="{{isNewQuery}}"> <template is="dom-if" if="{{isMap}}"> <div> <urth-raw-html html="{{foliumMap}}"/> </div> </template> </template> </template> Explanation: HACK: urth-core-watch seems to misbehave when combined with output elements. The workaround is to split the widget into two. 
End of explanation def explore_station_data(station): global DATA_STATION_DETAIL_RESULTS df_station_detail = fetch_station_data(station) channel("noaaquery").set("yearsOfService", compute_years_of_station_data(df_station_detail)) DATA_STATION_DETAIL_RESULTS = df_station_detail #display(Javascript("stationRecordFreqFunc.invoke()")) return df_station_detail %%html <template id="station_summary_widget" is="urth-core-bind" channel="noaaquery"> <urth-core-function id="exploreStationDataFunc" ref="explore_station_data" arg-station="[[station.0]]" result="{{stationSummaryResult}}" auto> </urth-core-function> <paper-card style="width: 100%;" heading="Station Summary" elevation="1"> <template is="dom-if" if="{{stationSummaryResult}}"> <p>{{recType}} temperature record analysis using historical NOAA data from weather {{station.5}}.</p> <p>This weather station has been in service and collecting data for {{yearsOfService}} years.</p> <urth-viz-table datarows="{{ stationSummaryResult.data }}" selection="{{dayAtStation}}" columns="{{ stationSummaryResult.columns }}" rows-visible=20> </urth-viz-table> </template> </paper-card> </template> Explanation: Station Summary Widget This widget provides the user with a glimpse into the historic hi/low record data for the selected station. End of explanation def plot_record_results(rectype,fname=None): df = DATA_FREQUENCY_RESULTS plt.figure(figsize = (9,9), dpi = 72) if rectype == "High": dates = create_record_date_list(df.Month.to_dict(), df.Day.to_dict(), df.TMaxRecordYear.to_dict() ) temperatureRecordsPerDate = {'RecordDate' : pd.Series(dates,index=df.index), 'RecordHighTemp' : pd.Series(df.TMax.to_dict(),index=df.index) } df_new = pd.DataFrame(temperatureRecordsPerDate) sns_plot = sns.factorplot(x="RecordDate", y="RecordHighTemp", kind="bar", data=df_new, size=6, aspect=1.5) sns_plot.set_xticklabels(rotation=30) else: dates = create_record_date_list(df.Month.to_dict(), df.Day.to_dict(), df.TMinRecordYear.to_dict() ) temperatureRecordsPerDate = {'RecordDate' : pd.Series(dates,index=df.index), 'RecordLowTemp' : pd.Series(df.TMin.to_dict(),index=df.index) } df_new = pd.DataFrame(temperatureRecordsPerDate) sns_plot = sns.factorplot(x="RecordDate", y="RecordLowTemp", kind="bar", data=df_new, size=6, aspect=1.5) sns_plot.set_xticklabels(rotation=30) if fname is not None: if os.path.isfile(fname): os.remove(fname) sns_plot.savefig(fname) return sns_plot.fig def compute_record_durations(df,rectype): '''Return dataframe of max/min temperature record durations for each day.''' dates = create_date_list(df.Month.to_dict(),df.Day.to_dict()) s_dates = pd.Series(dates) if rectype == "High": s_values = pd.Series(df.MaxDurTMaxRecord.to_dict(),index=df.index) else: s_values = pd.Series(df.MaxDurTMinRecord.to_dict(),index=df.index) temperatureDurationsPerDate = {'RecordDate' : pd.Series(dates,index=df.index), 'RecordLowTemp' : s_values } df_new = pd.DataFrame(temperatureDurationsPerDate) return df_new def plot_duration_results(rectype,fname=None): df_durations = compute_record_durations(DATA_FREQUENCY_RESULTS,rectype) fig = plt.figure(figsize = (9,9), dpi = 72) plt.xlabel('Day') plt.ylabel('Record Duration in Years') if rectype == "High": plt.title('Maximum Duration for TMax Records') else: plt.title('Maximum Duration for TMin Records') ax = plt.gca() colors= ['r', 'b'] df_durations.plot(kind='bar',color=colors, alpha=0.75, ax=ax) ax.xaxis.set_ticklabels( ['%s' % i for i in df_durations.RecordDate.values] ) plt.grid(b=True, which='major', linewidth=1.0) plt.grid(b=True, 
which='minor') if fname is not None: if os.path.isfile(fname): os.remove(fname) plt.savefig(fname) return fig def explore_record_temperature_frequency(rectype,recfreqfactor): global DATA_FREQUENCY_RESULTS channel("noaaquery").set("isAboveFreqFactor", True) channel("noaaquery").set("numberRecordDays", 0) if rectype == "High": df_record_days = compute_tmax_record_quantity(DATA_STATION_DETAIL_RESULTS,recfreqfactor) else: df_record_days = compute_tmin_record_quantity(DATA_STATION_DETAIL_RESULTS,recfreqfactor) if not df_record_days.empty: channel("noaaquery").set("numberRecordDays", len(df_record_days)) DATA_FREQUENCY_RESULTS = df_record_days else: channel("noaaquery").set("isAboveFreqFactor", "") #display(Javascript("stationRecordFreqFunc.invoke()")) return df_record_days %%html <template id="station_synopsis_data_widget" is="urth-core-bind" channel="noaaquery"> <urth-core-watch value="{{station.0}}"> <urth-core-function id="stationRecordFreqFunc" ref="explore_record_temperature_frequency" arg-rectype="[[recType]]" arg-recfreqfactor="[[occurrenceFactor]]" result="{{stationFreqRecordsResult}}" auto> </urth-core-function> </urth-core-watch> </template> %%html <template id="station_synopsis_chart_widget" is="urth-core-bind" channel="noaaquery"> <template is="dom-if" if="{{stationFreqRecordsResult}}"> <paper-card style="width: 100%;" heading="Temperature Record Analysis" elevation="1"> <p>This station has experienced {{numberRecordDays}} days of new {{recType}} records where a new record has been set more than {{occurrenceFactor}} times throughout the operation of the station.</p> <urth-viz-table datarows="{{ stationFreqRecordsResult.data }}" selection="{{dayAtStation}}" columns="{{ stationFreqRecordsResult.columns }}" rows-visible=20> </urth-viz-table> </paper-card> <template is="dom-if" if="{{isAboveFreqFactor}}"> <urth-core-function id="stationRecordsFunc" ref="plot_record_results" arg-rectype="[[recType]]" result="{{stationRecordsPlot}}" auto> </urth-core-function> <urth-core-function id="stationDurationsFunc" ref="plot_duration_results" arg-rectype="[[recType]]" result="{{stationDurationsPlot}}" auto> </urth-core-function> <paper-card heading="Station {{station.0}} Records Per Day" elevation="0"> <p>The current {{recType}} temperature record for each day that has experienced more than {{occurrenceFactor}} new record events since the station has come online.</p> <img src="{{stationRecordsPlot}}"/><br/> </paper-card> <paper-card heading="Duration of Station {{station.0}} Records Per Day" elevation="0"> <p>For each day that has experienced more than {{occurrenceFactor}} {{recType}} temperature records, some days have had records stand for a large portion of the life of the station.</p> <img src="{{stationDurationsPlot}}"/> </paper-card> </template> <template is="dom-if" if="{{!isAboveFreqFactor}}"> <p>This weather station has not experienced any days with greater than {{occurrenceFactor}} new {{recType}} records.</p> </template> </template> </template> Explanation: Temperature Record Analysis for Selected Station This widget provides the user with insights for selected station. End of explanation
14,860
Given the following text description, write Python code to implement the functionality described below step by step Description: Beautiful JavaScript Charts in Jupyter Notebooks Jupyter Notebooks tell stories by blending explanations, visualizations, and the code producing them. In my opinion, the most compelling charts are interactive, Javascript based. Here's how you can blend beautiful Javascript charts into Jupyter Notebooks to tell your story. For simple, plots iPlotter brings the latest D3.js and canvas charting libraries to Jupyter Notebooks using native python data structures. iPlotter integrates with C3.js, plotly.js, Chart.js, Chartist.js, and Google Charts. To get started Step2: Before we dive in to the charts, let's adjust the iframe style jupyter notebooks use so charts render more cleanly. Step3: To use iPlotter, select your JavaScript charting library of choice. Then, pass a python data structure in a format corresponding to the json the library expects. C3.js is a charting library based on D3.js that makes it easy to build and reuse beautiful charts. Step4: How about line charts? Step5: Chart.js Chart.js requires a slightly different input format, but works similarly. Chart.js is a canvas charting library so it can handle many points! Step6: Google Charts License is Creative Commons Attribution 3.0 License, which requires attribution, but allows for free commercial/personal use. No data is not sent to a Google server. It's rendered in the browser. Step7: When Rendering Fails sometimes, iPlotter just doesn't quite render... I couldn't get the Bubble Chart or Polar Area Chart documented on Chart.js to render or the pie chart in Google Charts. Step10: In those cases, call in the JavaScript Step11: The above is Javascript in a string with the data converted from Python dictionaries into strings. Step12: For Google Charts, simply use Jupyter magic methods to turn the cell into html! Step14: To pass Python Functions, copy the HTML into a python string to embed the python data Step15: You can also use the newer string.format() in Python to create the html_code string.
Python Code: import iplotter from IPython.core.display import HTML Explanation: Beautiful JavaScript Charts in Jupyter Notebooks Jupyter Notebooks tell stories by blending explanations, visualizations, and the code producing them. In my opinion, the most compelling charts are interactive, Javascript based. Here's how you can blend beautiful Javascript charts into Jupyter Notebooks to tell your story. For simple, plots iPlotter brings the latest D3.js and canvas charting libraries to Jupyter Notebooks using native python data structures. iPlotter integrates with C3.js, plotly.js, Chart.js, Chartist.js, and Google Charts. To get started: bash $ pip install iplotter When this fails, you can directly render JavaScript by passing Python data structures either as strings or through dictionaries as json. End of explanation # remove iFrame border for cleaner chart rendering # increase size of text explanations HTML( <style> iframe {border:0;} </style> ) Explanation: Before we dive in to the charts, let's adjust the iframe style jupyter notebooks use so charts render more cleanly. End of explanation # define chart + data chart = { "data": { "columns": [ ["setosa_x", 3.5, 3.0, 3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7, 3.4, 3.0, 3.0, 4.0, 4.4, 3.9, 3.5, 3.8, 3.8, 3.4, 3.7, 3.6, 3.3, 3.4, 3.0, 3.4, 3.5, 3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5, 3.6, 3.0, 3.4, 3.5, 2.3, 3.2, 3.5, 3.8, 3.0, 3.8, 3.2, 3.7, 3.3], ["versicolor_x", 3.2, 3.2, 3.1, 2.3, 2.8, 2.8, 3.3, 2.4, 2.9, 2.7, 2.0, 3.0, 2.2, 2.9, 2.9, 3.1, 3.0, 2.7, 2.2, 2.5, 3.2, 2.8, 2.5, 2.8, 2.9, 3.0, 2.8, 3.0, 2.9, 2.6, 2.4, 2.4, 2.7, 2.7, 3.0, 3.4, 3.1, 2.3, 3.0, 2.5, 2.6, 3.0, 2.6, 2.3, 2.7, 3.0, 2.9, 2.9, 2.5, 2.8], ["setosa", 0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.4, 0.4, 0.3, 0.3, 0.3, 0.2, 0.4, 0.2, 0.5, 0.2, 0.2, 0.4, 0.2, 0.2, 0.2, 0.2, 0.4, 0.1, 0.2, 0.2, 0.2, 0.2, 0.1, 0.2, 0.2, 0.3, 0.3, 0.2, 0.6, 0.4, 0.3, 0.2, 0.2, 0.2, 0.2], ["versicolor", 1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6, 1.0, 1.3, 1.4, 1.0, 1.5, 1.0, 1.4, 1.3, 1.4, 1.5, 1.0, 1.5, 1.1, 1.8, 1.3, 1.5, 1.2, 1.3, 1.4, 1.4, 1.7, 1.5, 1.0, 1.1, 1.0, 1.2, 1.6, 1.5, 1.6, 1.5, 1.3, 1.3, 1.3, 1.2, 1.4, 1.2, 1.0, 1.3, 1.2, 1.3, 1.3, 1.1, 1.3] ], "type": 'scatter' }, "axis": { "x": { "label": 'Sepal.Width', "tick": { "fit": "false" } }, "y": { "label": 'Petal.Width' } } } c3_plotter = iplotter.C3Plotter() c3_plotter.plot(chart) Explanation: To use iPlotter, select your JavaScript charting library of choice. Then, pass a python data structure in a format corresponding to the json the library expects. C3.js is a charting library based on D3.js that makes it easy to build and reuse beautiful charts. End of explanation chart = { "data": { "columns": [ ['dogs', 300, 350, 300, 0, 0, 120], ['cats', 130, 100, 140, 200, 150, 50], ['people', 180, 75, 265, 100, 50, 100] ], "types": { "dogs": 'area-spline', "cats": 'area-spline', "people": 'area-spline' }, "groups": [['dogs', 'cats', 'people']] } } c3_plotter = iplotter.C3Plotter() c3_plotter.plot(chart) Explanation: How about line charts? 
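As a minimal sketch of that mapping (the values are made up, and it assumes the same C3Plotter API used in the scatter example above), a bare "data" block with a single column is already enough to render a chart:

# Minimal C3 payload (made-up values): one named column and a chart type
minimal_chart = {
    "data": {
        "columns": [
            ["sample", 30, 200, 100, 400, 150, 250]
        ],
        "type": "line"
    }
}
iplotter.C3Plotter().plot(minimal_chart)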
End of explanation labels = ["Red", "Blue", "Yellow", "Green", "Purple", "Orange"] values = [{"x":20, "y": 30, "r":15}, {"x":40, "y":10, "r":10}] data = { "labels": labels, "datasets": [ { "label": "My First dataset", "backgroundColor": "rgba(179,181,198,0.2)", "borderColor": "rgba(179,181,198,1)", "pointBackgroundColor": "rgba(179,181,198,1)", "pointBorderColor": "#fff", "pointHoverBackgroundColor": "#fff", "pointHoverBorderColor": "rgba(179,181,198,1)", "data": [65, 59, 90, 81, 56, 55, 40] }, { "label": "My Second dataset", "backgroundColor": "rgba(255,99,132,0.2)", "borderColor": "rgba(255,99,132,1)", "pointBackgroundColor": "rgba(255,99,132,1)", "pointBorderColor": "#fff", "pointHoverBackgroundColor": "#fff", "pointHoverBorderColor": "rgba(255,99,132,1)", "data": [28, 48, 40, 19, 96, 27, 100] } ] } chart_js = iplotter.ChartJSPlotter() chart_js.plot(data, chart_type="radar", w=600, h=600) Explanation: Chart.js Chart.js requires a slightly different input format, but works similarly. Chart.js is a canvas charting library so it can handle many points! End of explanation data = [ ['Genre', 'Fantasy & Sci Fi', 'Romance', 'Mystery/Crime', 'General', 'Western', 'Literature', {"role": 'annotation'}], ['2010', 10, 24, 20, 32, 18, 5, ''], ['2020', 16, 22, 23, 30, 16, 9, ''], ['2030', 28, 19, 29, 30, 12, 13, ''] ] options = { "width": 600, "height": 400, "legend": {"position": 'top', "maxLines": 3}, "bar": {"groupWidth": '75%'}, "isStacked": "true", } gc_plotter = iplotter.GCPlotter() gc_plotter.plot(data, chart_type="ColumnChart",chart_package='corechart', options=options) Explanation: Google Charts License is Creative Commons Attribution 3.0 License, which requires attribution, but allows for free commercial/personal use. No data is not sent to a Google server. It's rendered in the browser. End of explanation data = [ ['Task', 'Hours per Day'], ['Work', 11], ['Eat', 2], ['Commute', 2], ['Watch TV', 2], ['Sleep', 7] ] options = { "title": "hi" } gc_plotter = iplotter.GCPlotter() gc_plotter.plot(data, chart_type="piechart", options=options) # sad face :< Explanation: When Rendering Fails sometimes, iPlotter just doesn't quite render... I couldn't get the Bubble Chart or Polar Area Chart documented on Chart.js to render or the pie chart in Google Charts. End of explanation import json from IPython.display import display, Javascript def chartjs(chartType, data, options={}, width="500px", height="400px"): Custom iphython extension allowing chartjs visualizations Usage: chartjs(chartType, data, options, width=1000, height=400) Args: chartType: one of the supported chart type options (line, bar, radar, polarArea, pie, doughnut) data: a python dictionary with datasets to be rapresented and related visualization settings, as expected by chart js (see data parameter in http://www.chartjs.org/docs/) options: defaults {}; a python dictionary with additional graph options, as expected by chart js (see options parameter in http://www.chartjs.org/docs/) width: default 700px height: default 400px NB. 
data and options structure depends on the chartType display( Javascript( require(['https://cdnjs.cloudflare.com/ajax/libs/Chart.js/1.0.2/Chart.min.js'], function(chartjs){ var chartType="%s"; var data=%s; var options=%s; var width="%s"; var height="%s"; element.append('<canvas width="' + width + '" height="' + height + '">s</canvas>'); var ctx = element.children()[0].getContext("2d"); switch(chartType.toLowerCase()) { case "line": var myChart = new Chart(ctx).Line(data, options); break; case "bar": var myChart = new Chart(ctx).Bar(data, options); break; case "radar": var myChart = new Chart(ctx).Radar(data, options); break; case "polarArea": var myChart = new Chart(ctx).PolarArea(data, options); break; case "pie": var myChart = new Chart(ctx).Pie(data, options); break; case "doughnut": var myChart = new Chart(ctx).Doughnut(data, options); break; } }); % (chartType, json.dumps(data), json.dumps(options), width, height) ) ) Explanation: In those cases, call in the JavaScript: (from this gist) End of explanation # to run data = { "labels": [1,2,3,4,5,6], "datasets": [ { "label": "Sample dataset", "fillColor": "#ffce56", "strokeColor": "rgba(151,187,205,1)", "pointColor": "rgba(151,187,205,1)", "pointStrokeColor": "#fff", "pointHighlightFill": "#fff", "pointHighlightStroke": "rgba(151,187,205,1)", "data": [1, 10, 3, 2, 7, 8] } ]} chartjs("Line", data, width=600) Explanation: The above is Javascript in a string with the data converted from Python dictionaries into strings. End of explanation %%html <html> <head> <!--Load the AJAX API--> <script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script> <script type="text/javascript"> // Load the Visualization API and the corechart package. google.charts.load('current', {'packages':['corechart']}); // Set a callback to run when the Google Visualization API is loaded. google.charts.setOnLoadCallback(drawChart); // Callback that creates and populates a data table, // instantiates the pie chart, passes in the data and // draws it. function drawChart() { // Create the data table. var data = new google.visualization.DataTable(); data.addColumn('string', 'Topping'); data.addColumn('number', 'Slices'); data.addRows([ ['Mushrooms', 3], ['Onions', 1], ['Olives', 1], ['Zucchini', 1], ['Pepperoni', 2] ]); // Set chart options var options = {'title':'How Much Pizza I Ate Last Night', 'width':600, 'height':400}; // Instantiate and draw our chart, passing in some options. var chart = new google.visualization.PieChart(document.getElementById('chart_div')); chart.draw(data, options); } </script> </head> <body> <!--Div that will hold the pie chart--> <div id="chart_div" style="width: 700px; height: 410px;"></div> </body> </html> Explanation: For Google Charts, simply use Jupyter magic methods to turn the cell into html! 
End of explanation # python list data = [ ['Task', 'Hours per Day'], ['Work', 11], ['Eat', 2], ['Commute', 2], ['Watch TV', 2], ['Sleep', 7] ] # note the double escape to ensure apostrophe is rendered correctly title = "Mark\\'s Daily Activities" html_code = <html> <head> <script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script> <script type="text/javascript"> google.charts.load("current", {packages:["corechart"]}); google.charts.setOnLoadCallback(drawChart); function drawChart() { var data = google.visualization.arrayToDataTable(%s); var options = { title: '%s', pieHole: 0.4, }; var chart = new google.visualization.PieChart(document.getElementById('donutchart')); chart.draw(data, options); } </script> </head> <body> <div id="donutchart" style="width: 900px; height: 520px;"></div> </body> </html> % (data, title) Explanation: To pass Python Functions, copy the HTML into a python string to embed the python data: End of explanation # render use jupyter's html function HTML(html_code) Explanation: You can also use the newer string.format() in Python to create the html_code string. End of explanation
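As a brief illustration of that last point, here is a sketch only, reusing the data and title variables defined above: str.format() builds the same kind of string, with the one catch that literal JavaScript braces must be doubled.

# Sketch: with str.format(), literal JS braces are escaped by doubling them ({{ and }})
js_options = "var options = {{ title: '{title}', pieHole: 0.4 }};".format(title=title)
js_data = "var data = google.visualization.arrayToDataTable({rows});".format(rows=data)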
14,861
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction In these exercises, you'll explore the operations a couple of popular convnet architectures use for feature extraction, learn about how convnets can capture large-scale visual features through stacking layers, and finally see how convolution can be used on one-dimensional data, in this case, a time series. Run the cell below to set everything up. Step1: (Optional) Experimenting with Feature Extraction This exercise is meant to give you an opportunity to explore the sliding window computations and how their parameters affect feature extraction. There aren't any right or wrong answers -- it's just a chance to experiment! We've provided you with some images and kernels you can use. Run this cell to see them. Step2: To choose one to experiment with, just enter it's name in the appropriate place below. Then, set the parameters for the window computation. Try out some different combinations and see what they do! Step3: The Receptive Field Trace back all the connections from some neuron and eventually you reach the input image. All of the input pixels a neuron is connected to is that neuron's receptive field. The receptive field just tells you which parts of the input image a neuron receives information from. As we've seen, if your first layer is a convolution with $3 \times 3$ kernels, then each neuron in that layer gets input from a $3 \times 3$ patch of pixels (except maybe at the border). What happens if you add another convolutional layer with $3 \times 3$ kernels? Consider this next illustration Step4: So why stack layers like this? Three (3, 3) kernels have 27 parameters, while one (7, 7) kernel has 49, though they both create the same receptive field. This stacking-layers trick is one of the ways convnets are able to create large receptive fields without increasing the number of parameters too much. You'll see how to do this yourself in the next lesson! (Optional) One-Dimensional Convolution Convolutional networks turn out to be useful not only (two-dimensional) images, but also on things like time-series (one-dimensional) and video (three-dimensional). We've seen how convolutional networks can learn to extract features from (two-dimensional) images. It turns out that convnets can also learn to extract features from things like time-series (one-dimensional) and video (three-dimensional). In this (optional) exercise, we'll see what convolution looks like on a time-series. The time series we'll use is from Google Trends. It measures the popularity of the search term "machine learning" for weeks from January 25, 2015 to January 15, 2020. Step5: What about the kernels? Images are two-dimensional and so our kernels were 2D arrays. A time-series is one-dimensional, so what should the kernel be? A 1D array! Here are some kernels sometimes used on time-series data Step6: Convolution on a sequence works just like convolution on an image. The difference is just that a sliding window on a sequence only has one direction to travel -- left to right -- instead of the two directions on an image. And just like before, the features picked out depend on the pattern on numbers in the kernel. Can you guess what kind of features these kernels extract? Uncomment one of the kernels below and run the cell to see!
Python Code: # Setup feedback system from learntools.core import binder binder.bind(globals()) from learntools.computer_vision.ex4 import * import tensorflow as tf import matplotlib.pyplot as plt import learntools.computer_vision.visiontools as visiontools plt.rc('figure', autolayout=True) plt.rc('axes', labelweight='bold', labelsize='large', titleweight='bold', titlesize=18, titlepad=10) plt.rc('image', cmap='magma') Explanation: Introduction In these exercises, you'll explore the operations a couple of popular convnet architectures use for feature extraction, learn about how convnets can capture large-scale visual features through stacking layers, and finally see how convolution can be used on one-dimensional data, in this case, a time series. Run the cell below to set everything up. End of explanation from learntools.computer_vision.visiontools import edge, blur, bottom_sobel, emboss, sharpen, circle image_dir = '../input/computer-vision-resources/' circle_64 = tf.expand_dims(circle([64, 64], val=1.0, r_shrink=4), axis=-1) kaggle_k = visiontools.read_image(image_dir + str('k.jpg'), channels=1) car = visiontools.read_image(image_dir + str('car_illus.jpg'), channels=1) car = tf.image.resize(car, size=[200, 200]) images = [(circle_64, "circle_64"), (kaggle_k, "kaggle_k"), (car, "car")] plt.figure(figsize=(14, 4)) for i, (img, title) in enumerate(images): plt.subplot(1, len(images), i+1) plt.imshow(tf.squeeze(img)) plt.axis('off') plt.title(title) plt.show(); kernels = [(edge, "edge"), (blur, "blur"), (bottom_sobel, "bottom_sobel"), (emboss, "emboss"), (sharpen, "sharpen")] plt.figure(figsize=(14, 4)) for i, (krn, title) in enumerate(kernels): plt.subplot(1, len(kernels), i+1) visiontools.show_kernel(krn, digits=2, text_size=20) plt.title(title) plt.show() Explanation: (Optional) Experimenting with Feature Extraction This exercise is meant to give you an opportunity to explore the sliding window computations and how their parameters affect feature extraction. There aren't any right or wrong answers -- it's just a chance to experiment! We've provided you with some images and kernels you can use. Run this cell to see them. End of explanation # YOUR CODE HERE: choose an image image = circle_64 # YOUR CODE HERE: choose a kernel kernel = bottom_sobel visiontools.show_extraction( image, kernel, # YOUR CODE HERE: set parameters conv_stride=1, conv_padding='valid', pool_size=2, pool_stride=2, pool_padding='same', subplot_shape=(1, 4), figsize=(14, 6), ) Explanation: To choose one to experiment with, just enter it's name in the appropriate place below. Then, set the parameters for the window computation. Try out some different combinations and see what they do! End of explanation # View the solution (Run this code cell to receive credit!) q_1.check() # Lines below will give you a hint #_COMMENT_IF(PROD)_ q_1.hint() Explanation: The Receptive Field Trace back all the connections from some neuron and eventually you reach the input image. All of the input pixels a neuron is connected to is that neuron's receptive field. The receptive field just tells you which parts of the input image a neuron receives information from. As we've seen, if your first layer is a convolution with $3 \times 3$ kernels, then each neuron in that layer gets input from a $3 \times 3$ patch of pixels (except maybe at the border). What happens if you add another convolutional layer with $3 \times 3$ kernels? 
Consider this next illustration: <figure> <img src="https://i.imgur.com/HmwQm2S.png" alt="Illustration of the receptive field of two stacked convolutions." width=250> </figure> Now trace back the connections from the neuron at top and you can see that it's connected to a $5 \times 5$ patch of pixels in the input (the bottom layer): each neuron in the $3 \times 3$ patch in the middle layer is connected to a $3 \times 3$ input patch, but they overlap in a $5 \times 5$ patch. So that neuron at top has a $5 \times 5$ receptive field. 1) Growing the Receptive Field Now, if you added a third convolutional layer with a (3, 3) kernel, what receptive field would its neurons have? Run the cell below for an answer. (Or see a hint first!) End of explanation import pandas as pd # Load the time series as a Pandas dataframe machinelearning = pd.read_csv( '../input/computer-vision-resources/machinelearning.csv', parse_dates=['Week'], index_col='Week', ) machinelearning.plot(); Explanation: So why stack layers like this? Three (3, 3) kernels have 27 parameters, while one (7, 7) kernel has 49, though they both create the same receptive field. This stacking-layers trick is one of the ways convnets are able to create large receptive fields without increasing the number of parameters too much. You'll see how to do this yourself in the next lesson! (Optional) One-Dimensional Convolution Convolutional networks turn out to be useful not only (two-dimensional) images, but also on things like time-series (one-dimensional) and video (three-dimensional). We've seen how convolutional networks can learn to extract features from (two-dimensional) images. It turns out that convnets can also learn to extract features from things like time-series (one-dimensional) and video (three-dimensional). In this (optional) exercise, we'll see what convolution looks like on a time-series. The time series we'll use is from Google Trends. It measures the popularity of the search term "machine learning" for weeks from January 25, 2015 to January 15, 2020. End of explanation detrend = tf.constant([-1, 1], dtype=tf.float32) average = tf.constant([0.2, 0.2, 0.2, 0.2, 0.2], dtype=tf.float32) spencer = tf.constant([-3, -6, -5, 3, 21, 46, 67, 74, 67, 46, 32, 3, -5, -6, -3], dtype=tf.float32) / 320 Explanation: What about the kernels? Images are two-dimensional and so our kernels were 2D arrays. A time-series is one-dimensional, so what should the kernel be? A 1D array! 
Here are some kernels sometimes used on time-series data: End of explanation # UNCOMMENT ONE kernel = detrend # kernel = average # kernel = spencer # Reformat for TensorFlow ts_data = machinelearning.to_numpy() ts_data = tf.expand_dims(ts_data, axis=0) ts_data = tf.cast(ts_data, dtype=tf.float32) kern = tf.reshape(kernel, shape=(*kernel.shape, 1, 1)) ts_filter = tf.nn.conv1d( input=ts_data, filters=kern, stride=1, padding='VALID', ) # Format as Pandas Series machinelearning_filtered = pd.Series(tf.squeeze(ts_filter).numpy()) machinelearning_filtered.plot(); #%%RM_IF(PROD)%% # UNCOMMENT ONE kernel = detrend # kernel = average # kernel = spencer # Reformat for TensorFlow ts_data = machinelearning.to_numpy() ts_data = tf.expand_dims(ts_data, axis=0) ts_data = tf.cast(ts_data, dtype=tf.float32) kern = tf.reshape(kernel, shape=(*kernel.shape, 1, 1)) ts_filter = tf.nn.conv1d( input=ts_data, filters=kern, stride=1, padding='VALID', ) # Format as Pandas Series machinelearning_filtered = pd.Series(tf.squeeze(ts_filter).numpy()) machinelearning_filtered.plot(); #%%RM_IF(PROD)%% # UNCOMMENT ONE # kernel = detrend kernel = average # kernel = spencer # Reformat for TensorFlow ts_data = machinelearning.to_numpy() ts_data = tf.expand_dims(ts_data, axis=0) ts_data = tf.cast(ts_data, dtype=tf.float32) kern = tf.reshape(kernel, shape=(*kernel.shape, 1, 1)) ts_filter = tf.nn.conv1d( input=ts_data, filters=kern, stride=1, padding='VALID', ) # Format as Pandas Series machinelearning_filtered = pd.Series(tf.squeeze(ts_filter).numpy()) machinelearning_filtered.plot(); #%%RM_IF(PROD)%% # UNCOMMENT ONE # kernel = detrend # kernel = average kernel = spencer # Reformat for TensorFlow ts_data = machinelearning.to_numpy() ts_data = tf.expand_dims(ts_data, axis=0) ts_data = tf.cast(ts_data, dtype=tf.float32) kern = tf.reshape(kernel, shape=(*kernel.shape, 1, 1)) ts_filter = tf.nn.conv1d( input=ts_data, filters=kern, stride=1, padding='VALID', ) # Format as Pandas Series machinelearning_filtered = pd.Series(tf.squeeze(ts_filter).numpy()) machinelearning_filtered.plot(); Explanation: Convolution on a sequence works just like convolution on an image. The difference is just that a sliding window on a sequence only has one direction to travel -- left to right -- instead of the two directions on an image. And just like before, the features picked out depend on the pattern on numbers in the kernel. Can you guess what kind of features these kernels extract? Uncomment one of the kernels below and run the cell to see! End of explanation
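As a small aside, here is a sketch that is independent of the exercise code above and uses only the tf.nn.conv1d call already shown: applying the detrend kernel to a short made-up sequence shows that this kernel simply computes successive differences.

# Sketch: 1D convolution with the detrend kernel [-1, 1] yields successive differences
import tensorflow as tf
seq = tf.constant([[[1.0], [3.0], [6.0], [10.0]]])      # shape (batch, steps, channels) = (1, 4, 1)
kern = tf.reshape(tf.constant([-1.0, 1.0]), (2, 1, 1))  # shape (width, in_channels, out_channels)
out = tf.nn.conv1d(seq, kern, stride=1, padding='VALID')
print(tf.squeeze(out).numpy())  # [2. 3. 4.]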
14,862
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Land MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Description Is Required Step7: 1.4. Land Atmosphere Flux Exchanges Is Required Step8: 1.5. Atmospheric Coupling Treatment Is Required Step9: 1.6. Land Cover Is Required Step10: 1.7. Land Cover Change Is Required Step11: 1.8. Tiling Is Required Step12: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required Step13: 2.2. Water Is Required Step14: 2.3. Carbon Is Required Step15: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required Step16: 3.2. Time Step Is Required Step17: 3.3. Timestepping Method Is Required Step18: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required Step19: 4.2. Code Version Is Required Step20: 4.3. Code Languages Is Required Step21: 5. Grid Land surface grid 5.1. Overview Is Required Step22: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. Description Is Required Step23: 6.2. Matches Atmosphere Grid Is Required Step24: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required Step25: 7.2. Total Depth Is Required Step26: 8. Soil Land surface soil 8.1. Overview Is Required Step27: 8.2. Heat Water Coupling Is Required Step28: 8.3. Number Of Soil layers Is Required Step29: 8.4. Prognostic Variables Is Required Step30: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required Step31: 9.2. Structure Is Required Step32: 9.3. Texture Is Required Step33: 9.4. Organic Matter Is Required Step34: 9.5. Albedo Is Required Step35: 9.6. Water Table Is Required Step36: 9.7. Continuously Varying Soil Depth Is Required Step37: 9.8. Soil Depth Is Required Step38: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required Step39: 10.2. Functions Is Required Step40: 10.3. Direct Diffuse Is Required Step41: 10.4. Number Of Wavelength Bands Is Required Step42: 11. 
Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required Step43: 11.2. Time Step Is Required Step44: 11.3. Tiling Is Required Step45: 11.4. Vertical Discretisation Is Required Step46: 11.5. Number Of Ground Water Layers Is Required Step47: 11.6. Lateral Connectivity Is Required Step48: 11.7. Method Is Required Step49: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required Step50: 12.2. Ice Storage Method Is Required Step51: 12.3. Permafrost Is Required Step52: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required Step53: 13.2. Types Is Required Step54: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required Step55: 14.2. Time Step Is Required Step56: 14.3. Tiling Is Required Step57: 14.4. Vertical Discretisation Is Required Step58: 14.5. Heat Storage Is Required Step59: 14.6. Processes Is Required Step60: 15. Snow Land surface snow 15.1. Overview Is Required Step61: 15.2. Tiling Is Required Step62: 15.3. Number Of Snow Layers Is Required Step63: 15.4. Density Is Required Step64: 15.5. Water Equivalent Is Required Step65: 15.6. Heat Content Is Required Step66: 15.7. Temperature Is Required Step67: 15.8. Liquid Water Content Is Required Step68: 15.9. Snow Cover Fractions Is Required Step69: 15.10. Processes Is Required Step70: 15.11. Prognostic Variables Is Required Step71: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required Step72: 16.2. Functions Is Required Step73: 17. Vegetation Land surface vegetation 17.1. Overview Is Required Step74: 17.2. Time Step Is Required Step75: 17.3. Dynamic Vegetation Is Required Step76: 17.4. Tiling Is Required Step77: 17.5. Vegetation Representation Is Required Step78: 17.6. Vegetation Types Is Required Step79: 17.7. Biome Types Is Required Step80: 17.8. Vegetation Time Variation Is Required Step81: 17.9. Vegetation Map Is Required Step82: 17.10. Interception Is Required Step83: 17.11. Phenology Is Required Step84: 17.12. Phenology Description Is Required Step85: 17.13. Leaf Area Index Is Required Step86: 17.14. Leaf Area Index Description Is Required Step87: 17.15. Biomass Is Required Step88: 17.16. Biomass Description Is Required Step89: 17.17. Biogeography Is Required Step90: 17.18. Biogeography Description Is Required Step91: 17.19. Stomatal Resistance Is Required Step92: 17.20. Stomatal Resistance Description Is Required Step93: 17.21. Prognostic Variables Is Required Step94: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required Step95: 18.2. Tiling Is Required Step96: 18.3. Number Of Surface Temperatures Is Required Step97: 18.4. Evaporation Is Required Step98: 18.5. Processes Is Required Step99: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required Step100: 19.2. Tiling Is Required Step101: 19.3. Time Step Is Required Step102: 19.4. Anthropogenic Carbon Is Required Step103: 19.5. Prognostic Variables Is Required Step104: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required Step105: 20.2. Carbon Pools Is Required Step106: 20.3. Forest Stand Dynamics Is Required Step107: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required Step108: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required Step109: 22.2. Growth Respiration Is Required Step110: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required Step111: 23.2. Allocation Bins Is Required Step112: 23.3. 
Allocation Fractions Is Required Step113: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required Step114: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required Step115: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required Step116: 26.2. Carbon Pools Is Required Step117: 26.3. Decomposition Is Required Step118: 26.4. Method Is Required Step119: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required Step120: 27.2. Carbon Pools Is Required Step121: 27.3. Decomposition Is Required Step122: 27.4. Method Is Required Step123: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required Step124: 28.2. Emitted Greenhouse Gases Is Required Step125: 28.3. Decomposition Is Required Step126: 28.4. Impact On Soil Properties Is Required Step127: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required Step128: 29.2. Tiling Is Required Step129: 29.3. Time Step Is Required Step130: 29.4. Prognostic Variables Is Required Step131: 30. River Routing Land surface river routing 30.1. Overview Is Required Step132: 30.2. Tiling Is Required Step133: 30.3. Time Step Is Required Step134: 30.4. Grid Inherited From Land Surface Is Required Step135: 30.5. Grid Description Is Required Step136: 30.6. Number Of Reservoirs Is Required Step137: 30.7. Water Re Evaporation Is Required Step138: 30.8. Coupled To Atmosphere Is Required Step139: 30.9. Coupled To Land Is Required Step140: 30.10. Quantities Exchanged With Atmosphere Is Required Step141: 30.11. Basin Flow Direction Map Is Required Step142: 30.12. Flooding Is Required Step143: 30.13. Prognostic Variables Is Required Step144: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required Step145: 31.2. Quantities Transported Is Required Step146: 32. Lakes Land surface lakes 32.1. Overview Is Required Step147: 32.2. Coupling With Rivers Is Required Step148: 32.3. Time Step Is Required Step149: 32.4. Quantities Exchanged With Rivers Is Required Step150: 32.5. Vertical Grid Is Required Step151: 32.6. Prognostic Variables Is Required Step152: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required Step153: 33.2. Albedo Is Required Step154: 33.3. Dynamics Is Required Step155: 33.4. Dynamic Lake Extent Is Required Step156: 33.5. Endorheic Basins Is Required Step157: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2l', 'land') Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: MIROC Source ID: MIROC-ES2L Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-20 15:02:40 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code (e.g. MOSES2.2) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.3. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "water" # "energy" # "carbon" # "nitrogen" # "phospherous" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Land Atmosphere Flux Exchanges Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Fluxes exchanged with the atmopshere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bare soil" # "urban" # "lake" # "land ice" # "lake ice" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover_change') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. 
Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.2. Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. 
Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 9.7. Continuously Varying Soil Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the soil properties vary continuously with depth? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.8. Soil Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil depth map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow free albedo prognostic? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "soil humidity" # "vegetation state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependancies on snow free albedo calculations End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General describe how drainage is included in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.5. 
Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.7. Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.11. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N *If prognostic, * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "opne shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.7. 
Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile are varying with time End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.10. Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.15. 
Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.19. Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.3. Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treament of the anthropogenic carbon pool End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.3. Forest Stand Dynamics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of forest stand dyanmics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for maintainence respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Growth Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for growth respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the allocation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "leaves + stems + roots" # "leaves + stems + roots (leafy + woody)" # "leaves + fine roots + coarse roots + stems" # "whole plant (no distinction)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. 
Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the notrogen cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.6. Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2. 
Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation
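As a worked illustration of the fill-in pattern used throughout this template (this is not part of the official ES-DOC content), one of the enumeration cells above could be completed as follows. The property id is taken directly from the template; the chosen value "prognostic" is only an example picked from that cell's listed valid choices and should be replaced with whatever actually describes the model being documented.
# ILLUSTRATIVE EXAMPLE ONLY - not an official template cell
# Complete the lake albedo enumeration by picking one of its valid choices
DOC.set_id('cmip6.land.lakes.method.albedo')
DOC.set_value("prognostic")   # example choice; use the value that matches your model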
14,863
Given the following text description, write Python code to implement the functionality described below step by step Description: Timeseries Overview. We introduce the tools for working with dates, times, and time series data. We start with functionality built into python itself, then discuss how pandas builds on these tools to add powerful time series capabilities to DataFrames. Outline Quandl Step1: Quandl <a id=data></a> quandl is a company that collects and maintains financial and economic data from standard sources (e.g. FRED, IMF, BEA, etc.) and non-standard sources (Fx data, company level data, trader receipts). The data is viewable on their webpage (see here or there for examples), but made available to programming languages via their API. We will access their API using their python library. Suppose, for example, that we wanted to get data on taxes in the US. Here's how we might find some Step2: We can also pass start_date and end_date parameters to control the dates for the downloaded data Step3: Now, let's read in the data sets we found were interesting. Feel free to use the codes you looked up, or the ones I'm using here. Step4: So, "FRED/DFF" is the federal funds rate, or the interest rate at which banks can trade federal assets with each other overnight. This is often used as a proxy for the risk-free rate in economic analysis. From the printout above it looks like we have more than 22k observations starting in 1954 at a daily frequency. Notice, however, that the column name is VALUE. Let's use our dict to clean up that name Step5: The second dataframe we downloaded (using code NVCA/VENTURE_3_09C) contains quarterly data on total investment by venture capital firms in the US, broken down by the stage of the project. The column names here are OK, so we don't need to change anything. Exercise (5 min) do a similar analysis/report for whatever datasets you chose to work with. Make sure to do the following Step6: Dates in python <a id=datetime></a> The date and time functionality in python comes from the built-in datetime module. Notice above that we ran python import datetime as dt We've been using the dt.date.today() function throughout this course when we print the date at the top of our notebooks, but we haven't given it very much thought. Let's take a closer look now. To start, let's see what the type of dt.date.today() is Step7: Given that we have an object of type datetime.date, we can do things like ask for the day, month, and year Step8: timedelta Suppose that we wanted to construct a "days until" counter. To do this we will construct another datetime.date and use the - operator to find the difference between the other date and today. Step9: We can get the number of days until New Year's Eve by looking at until_nye.days Step10: Exercise (5 min) Step11: datetime Being able to work with dates and the difference between dates is very useful, but sometimes we need to also think about times. To do that, we will look to the dt.datetime module. We can get the current date and time using dt.datetime.now() Step12: The numbers in the printout above are year, month, day, hour, minute, second, millisecond. Because we still have day, month, and year information, we can access these properties just as we did for the today above Step13: Exercise (2 min) Step14: strftime Once we have date and time information, a very common thing to do is to print out a formatted version of that date. For example, suppose we wanted to print out a string in the format YYYY-MM-DD. To do this we use the strftime method.
Here's an example Step15: Notice that the argument to strftime is a python string that can contain normal text (e.g. Today is) and special formatters (the stuff starting with %). We haven't talked much about how to do string formatting, but in Python and many other languages using % inside strings has special meaning. Exercise (6 min) Using the documentation for the string formatting behavior, figure out how to write the following strings using the strftime method on the spencer_bday_time object "Spencer was born on 1989-04-25" "Spencer was born on a Tuesday" "Spencer was born on Tuesday, April 25th" (bonus) "Spencer was born on Tuesday, April 25th at 04 Step16: Dates in Pandas <a id=pandas_dates></a> Now we will look at how to use date and datetime functionality in pandas. To begin, let's take a closer look at the type of index we have on our ffr and vc dataframes Step17: Here we have a DatetimeIndex, which means pandas recognizes this DataFrame as containing time series data. What can we do now? A lot; here's a brief list Step18: Suppose we want to restrict to September 2008 Step19: We can use this same functionality to extract ranges of dates. To get the data starting in June 2007 and going until March 2011 we would do Step20: Exercise (3 min) Using one of your datasets from quandl, plot one or more variables for the last 3 years (2013 through 2016) resampling Now suppose that instead of daily data, we wanted our federal funds data at a monthly frequency. To do this we use the resample method on our DataFrame Step21: Notice that when we call resample we don't get back a DataFrame at that frequency. This is because there is some ambiguity regarding just how the frequency should be converted Step22: Note that we can also combine numbers with the specification of the resampling frequency. As an example, we can resample to a bi-weekly frequency using Step23: Exercise (5 min) Step24: Notice that the index is the same on both, but the data is clearly different. If we use MS instead of M we will have the index based on the first day of the month Step25: Notice how the data associated with "M" and first is the same as the data for "MS" and first. The same holds for last. Access year, month, day... Given a DatetimeIndex you can access the day, month, or year (also second, millisecond, etc.) by simply accessing the .XX property, where XX is the data you want Step26: Rolling computations We can use pandas to do rolling computations. For example, suppose we want to plot the maximum and minimum of the risk free rate within the past week at each date (think about that slowly -- for every date, we want to look back 7 days and compute the max). Here's how we can do that Step27: Note that this is different from just resampling because we will have an observation for every date in the original dataframe (except the number of dates at the front needed to construct the initial window). Step28: Merging with dates Let's see what happens when we merge the ffr and vc datasets Step29: Notice that we ended up with a lot of missing data. This happened for two reasons Step30: To resolve the second issue we will take two steps Step31: Notice that using pad here just copied data forwards to fill in missing months (e.g. the data for March 1985 was applied to April and May). Now let's try that merge again Step32: That looks much better -- we have missing data at the top and the bottom for months that aren't available in the venture capital dataset, but nothing else should be missing.
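The date-based selection, resampling, rolling, and pad/merge tools described above can be sketched on a small synthetic example. The frame below only stands in for the Quandl downloads so it runs offline; the column name risk_free_rate mirrors the renamed ffr column used earlier, while the "Total" column, the quarterly index, and the specific frequencies ("M", "2W", a 7-observation window) are illustrative assumptions matching the discussion, not the real data.
# Minimal, self-contained sketch of the DatetimeIndex tools discussed above.
# Synthetic data stands in for the Quandl downloads so this runs offline.
import numpy as np
import pandas as pd

idx = pd.date_range("2007-01-01", "2011-12-31", freq="D")
ffr_demo = pd.DataFrame({"risk_free_rate": np.random.rand(len(idx))}, index=idx)

print(ffr_demo.loc["2008-09"].head())             # just September 2008
print(ffr_demo.loc["2007-06":"2011-03"].shape)    # June 2007 through March 2011
print(ffr_demo.resample("M").mean().head())       # monthly frequency, mean within each month
print(ffr_demo.resample("2W").last().head())      # bi-weekly frequency, keep last observation
print(ffr_demo.rolling(window=7).max().head(10))  # rolling 7-observation (about one week) max

# Upsample a quarterly series to monthly with pad, then merge on the index,
# echoing the two-step fix used for the venture capital data above.
vc_demo = pd.DataFrame({"Total": np.random.rand(20)},
                       index=pd.date_range("2007-03-31", periods=20, freq="Q"))
monthly = ffr_demo.resample("M").mean().merge(vc_demo.resample("M").pad(),
                                              left_index=True, right_index=True, how="left")
print(monthly.head())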
Let's try to do something interesting with this data. We want to plot the growth rate in the risk free rate, early stage vc funding, and total vc funding for the months following the start of the dotcom boom (roughly Jan 1995) and the housing boom (roughly Jan 2004). Read that again carefully. For each of the three series we want 2 lines. For each line, the x axis will be quarters since the start of the boom. The y axis will be growth rates since the first month of the bubble.
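A hedged sketch of that growth-rate calculation (df is assumed to be the merged monthly DataFrame built later in this notebook; cumulative growth is approximated by differences of logs):
post_dotcom = np.log(df["1995":])
post_housing = np.log(df["2004":])
dotcom_growth = post_dotcom - post_dotcom.iloc[0, :]     # cumulative growth since Jan 1995
housing_growth = post_housing - post_housing.iloc[0, :]  # cumulative growth since Jan 2004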
Python Code: import sys # system module import pandas as pd # data package import matplotlib.pyplot as plt # graphics module import datetime as dt # date and time module import numpy as np %matplotlib inline plt.style.use("ggplot") # quandl package import quandl # check versions (overkill, but why not?) print('Python version:', sys.version) print('Pandas version: ', pd.__version__) print('quandl version: ', quandl.version.VERSION) print('Today: ', dt.date.today()) # helper function to print info about dataframe def df_info(df): print("Shape: ", df.shape) print("dtypes: ", df.dtypes.to_dict()) print("index dtype: ", df.index.dtype) return pd.concat([df.head(3), df.tail(3)]) Explanation: Timeseries Overview. We introduce the tools for working with dates, times, and time series data. We start with functionality built into python itself, then discuss how pandas builds on these tools to add powerful time series capabilities to DataFrames. Outline Quandl: We show how to use the quandl package to access a large database of financial and economic dat. Dates in python: covers the basics of working with dates and times in python Dates in pandas: shows how to use dates with pandas objects Note: requires internet access to run. This Jupyter notebook was created by Chase Coleman and Spencer Lyon for the NYU Stern course Data Bootcamp. In order to run the code in this notebook, you will need to have the quandl package installed. You can do this from the command line using pip install quandl --upgrade End of explanation us_tax = quandl.get("OECD/REV_NES_TOTALTAX_TAXUSD_USA") df_info(us_tax) Explanation: Quandl <a id=data></a> quandl is a company that collects and maintains financial and economic data from standard sources (e.g. FRED, IMF, BEA, etc.) and non-standard sources (Fx data, company level data, trader receipts). The data is viewable on their webpage (see here or there for examples), but made available to programming languages via their API. We will access their API using their python library. Suppose, for example, that we wanted to get data on taxes in the US. Here's how we might find some: Open up the quandl search page Type in "US tax revenue" Click on one of the results that seems interesting to us Checkout things like the frequency (Annual for this data set), the quandl code (top right, here it is OECD/REV_NES_TOTALTAX_TAXUSD_USA) and description. Exercise (5 min): Go to Quandl's website and explore some of the data quandl has available. Come up with 2 datasets and make a dictionary that maps the quandl code into a reasonable name. For example, for the us tax revenue dataset above I could have done python my_data = {"OECD/REV_NES_TOTALTAX_TAXUSD_USA": "US_tax_rev"} We can download the data using the quandl.get function and passing it one of the Quandl codes we collected in the previous exercise End of explanation us_tax_recent = quandl.get("OECD/REV_NES_TOTALTAX_TAXUSD_USA", start_date="2000-01-01") df_info(us_tax_recent) Explanation: We can also pass start_date and end_date parameters to control the dates for the downloaded data: End of explanation my_data = { "FRED/DFF": "risk_free_rate", "NVCA/VENTURE_3_09C": "vc_investments" } dfs = [] for k in my_data.keys(): dfs.append(quandl.get(k)) df_info(dfs[0]) df_info(dfs[1]) Explanation: Now, let's read in the data sets we found were interesting. Feel free to use the codes you looked up, or the ones I'm using here. 
End of explanation dfs[0].rename(columns={"VALUE": my_data["FRED/DFF"]}, inplace=True) df_info(dfs[0]) Explanation: So, "FRED/DFF" is the federal funds rate, or the interest rate at which banks can trade federal assets with eachother overnight. This is often used as a proxy for the risk free rate i economic analysis. From the printout above it looks like we have more than 22k observations starting in 1954 at a daily frequency. Notice, however that the column name is VALUE. Let's use our dict to clean up that name: End of explanation ffr = dfs[0] vc = dfs[1] Explanation: The second dataframe we dowloaded (using code NVCA/VENTURE_3_09C) contains quarterly data on total investment by venture capital firms in the US, broken down by the stage of the project. The column names here are ok, so we don't need to change anything. Exercise (5 min) do a simlar analysis/report for whatever datasets you chose to work with. Make sure to do the following: Make note of the frequency of the data (e.g. daily, monthly, quarterly, yearly, etc.) Check the column names If you chose to use the same data as me, do something interesting with the data. Perhaps construct plots of differenet variables, or compute summary statistics -- use your imagination here. So that we have the data easily acessible for later on, let's store these two variables in individual dataframes: End of explanation today = dt.date.today() print("the type of today is ", type(today)) Explanation: Dates in python <a id=datetime></a> The date and time functionality in python comes from the built in datetime module. Notice above that we ran python import datetime as dt We've been using the dt.date.today() function throughout this course when we print the date at the top of our notebooks, but we haven't given it very much thought. Let's take a closer look now. To start, let's see what the type of dt.date.today() is End of explanation print("the day of the month is: ", today.day) print("we are curretly in month number", today.month) print("The year is", today.year) Explanation: Given that we have an object of type datetime.date we can do things like ask for the day, month, and year End of explanation # construct a date by hand new_years_eve = dt.date(2016, 12, 31) until_nye = new_years_eve - today type(until_nye) Explanation: timedelta Suppose that we wanted to construct a "days until" counter. To do this we will construct another datetime.date and use the - operator to find the differene between the other date and today. End of explanation until_nye.days Explanation: We can get the number of days until new years eve by looking at until_nye.days End of explanation spencer_bday = dt.date(1989, 4, 25) # NOTE: add 7 for the 7 leap years between 1989 and 2019 thirty_years = dt.timedelta(days=365*30 + 7) # check to make sure it is still April 25th spencer_bday + thirty_years days_to_30 = (spencer_bday + thirty_years - today).days print("Spencer will be 30 in {} days".format(days_to_30)) Explanation: Exercise (5 min): write a python function named days_until that accepts one argument (a datetime.date) and returns the number of days between today and that date. Apply your function to December 15, 2016 (day the UG project is due) Your birthday (HINT: unless your birthday is in late December, make sure to do 2017 as the year) We could also construct a datetime.timedelta by hand and add it to an existing date. 
Here's an example to see how many days until Spencer turns 30 End of explanation now = dt.datetime.now() print("type of now:", type(now)) now Explanation: datetime Being able to work with dates and the difference between dates is very useful, but sometimes we need to also think about times. To do that, we will look to the dt.datetime module. We can get the current date and time using dt.datetime.now(): End of explanation print("the day of the month is: ", now.day) print("we are curretly in month number", now.month) print("The year is", now.year) Explanation: The numbers in the printout above are year, month, day, hour, minute, second, millisecond. Because we still have day, month, year information ; we can access these properties just as we did for the today above: End of explanation # NOTE: we can only do arithmetic between many date objects or datetime obejcts # we cannot add or subtract a datetime to/from a date. So, we need to # re-create spencer_bday as a datetime object. # NOTE: The timedelta object is already compatible with date and datetime objects spencer_bday_time = dt.datetime(1989, 4, 25, 16, 33, 5) seconds_to_30 = (spencer_bday_time + thirty_years - now).seconds print("Spencer will be 30 in {} seconds".format(seconds_to_30)) Explanation: Exercise (2 min): Use tab completion to see what else we can access on our dt.datetime object now Time deltas work the same way with datetime objects as they did with date objects. We can see how many seconds until Spencer turns 30: End of explanation print(today.strftime("Today is %Y-%m-%d")) Explanation: strftime Once we have date and time information, a very common thing to do is to print out a formatted version of that date. For example, suppose we wanted to print out a string in the format YYYY-MM-DD. To do this we use the strftime method. Here's an example End of explanation spencer_bday_time.strftime("Spencer was born on %A, %B %dth at %I:%M %p") Explanation: Notice that the argument to strftime is a python string that can contain normal text (e.g. Today is) and a special formatters (the stuff starting with %). We haven't talked much about how to do string formatting, but in Python and many other languages using % inside strings has special meaning. Exercise (6 min) Using the documentation for the string formatting behavior, figure out how to write the following strings using the method strftime method on the spencer_bday_time object "Spencer was born on 1989-04-25" "Spencer was born on a Tuesday" "Spencer was born on Tuesday, April 25th" (bonus) "Spencer was born on Tuesday, April 25th at 04:33 PM" End of explanation type(ffr.index) Explanation: Dates in Pandas <a id=pandas_dates></a> Now we will look at how to use date and dateime functionality in pandas. To begin, lets take a closer look at the type of index we have on our ffr and vc dataframes: End of explanation ffr2008 = ffr["2008"] print("ffr2008 is a", type(ffr2008)) df_info(ffr2008) ffr2008.plot() Explanation: Here we have a DatetimeIndex, which menas pandas recogizes this DataFrame as containing time series data. What can we do now? A lot, here's a brief list: subset the data using strings to get data for a particular time frame resample the data to a diffrent frequency: this means we could convert daily to monthly, quarterly, etc. quickly access things like year, month, and day for the observation rolling computations: this will allow us to compute statistics on a rolling subset of the data. 
We'll show a simple example here, but check out the docs for more info snap the observations to a particular frequency -- this one is a bit advanced and we won't cover it here For a much more comprehensive list with other examples see the docs For now, let's look at how to do these things with the data we obtained from quandl NOTE You can only do these things when you have a DatetimeIndex. This means that even if one of the columns in your DataFrame has date or datetime information, you will need to set it as the index to access this functionality. subsetting Suppose we wanted to extract all the data for the federal funds rate for the year 2008. End of explanation ffr_sep2008 = ffr["2008-09"] df_info(ffr_sep2008) Explanation: Suppose we want to restrict to September 2008: End of explanation ffr2 = ffr["2007-06":"2011-03"] df_info(ffr2) ffr2.plot() Explanation: We can use this same functionality to extract ranges of dates. To get the data starting in june 2007 and going until march 2011 we would do End of explanation # MS means "month start" ffrM_resample = ffr.resample("MS") type(ffrM_resample) Explanation: Exercise (3 min) Using one of your datasets from quandl, plot one or more variables for the last 3 years (2013 through 2016) resampling Now suppose that instead of daily data, we wanted our federal funds data at a monthly frequency. To do this we use the resample method on our DataFrame End of explanation ffrM = ffrM_resample.first() df_info(ffrM) Explanation: Notice that when we call resample we don't get back a DataFrame at that frequency. This is because there is some ambiguity regarding just how the frequency should be converted: should we take the average during the period, the first observation, last observation, sum the observations? In order to get a DataFrame we have to call a method on our DatetimeIndexResampler object. For this example, let's do the first observation in each period: End of explanation ffr.resample("2w") Explanation: Note that we can also combine numbers with the specification of the resampling frequency. As an example, we can resample to a bi-weekly frequency using End of explanation ffr.resample("M").first().head() ffr.resample("M").last().head() Explanation: Exercise (5 min): Using the documentation for the most common frequencies, figure out how to resample one of your datasets to A quarterly frequency -- make sure to get the start of the quarter An annual frequency -- use the end of the year more than you need: I want to point out that when you use the first or last methods to perform the aggregations, there are two dates involved: (1) the date the resultant index will have and (2) the date used to fill in the data at that date. The first date (one on the index) will be assigned based on the string you pass to the resample method. The second date (the one for extracting data from the original dataframe) will be determined based on the method used to do the aggregation. first will extract the first data point from that subset and last will extract the last. Let's see some examples: End of explanation ffr.resample("MS").first().head() ffr.resample("MS").last().head() Explanation: Notice that the index is the same on both, but the data is clearly different. If we use MS instead of M we will have the index based on the first day of the month: End of explanation ffr.index.year ffr.index.day ffr.index.month Explanation: Notice how the data associated with "M" and first is the same as the data for "MS" and first. The same holds for last. 
Access year, month, day... Given a DatetimeIndex you can access the day, month, or year (also second, millisecond, etc.) by simply accessing the .XX property; where XX is the data you want End of explanation fig, ax = plt.subplots() ffr.rolling(window=7).max().plot(ax=ax) ffr.rolling(window=7).min().plot(ax=ax) ax.legend(["max", "min"]) Explanation: Rolling computations We can use pandas to do rolling computations. For example, suppose we want to plot the maximum and minimum of the risk free rate within the past week at each date (think about that slowly -- for every date, we want to look back 7 days and compute the max). Here's how we can do that End of explanation ffr.rolling(window=7).max().head(10) ffr.resample("7D").max().head(10) Explanation: Note that this is different from just resampling because we will have an observation for every date in the original dataframe (except the number of dates at the front needed to construct the initial window). End of explanation # do a left merge on the index (date info) df = pd.merge(ffr, vc, left_index=True, right_index=True, how="left") df_info(df) vc.head() Explanation: Merging with dates Let's see what happens when we merge the ffr and vc datasets End of explanation ffr_recent = ffr["1985":] Explanation: Notice that we ended up with a lot of missing data. This happened for two reasons: The ffr data goes back to 1954, but the vc data starts in 1985 The ffr data is at a daily frequency, but vc is at quarterly. To resolve the first issue we can subset the ffr data and only keep from 1985 on End of explanation ffr_recentM = ffr_recent.resample("M").first() vc_M = vc.resample("M").pad() vc_M.head() Explanation: To resolve the second issue we will do two-steps: resample the ffr data to a monthly frequency resample the vc data to a monthly frequency by padding. This is called upsampling because we are going from a lower frequency (quarterly) to a higher one (monthly) End of explanation df = pd.merge(ffr_recentM, vc_M, left_index=True, right_index=True, how="left") print(df.head(6)) print("\n\n", df.tail(8)) Explanation: Notice that using pad here just copied data forwards to fill in missing months (e.g. the data for March 1985 was applied to April and May) Now let's try that merge again End of explanation # subset the data, then remove datetime index as we don't need it again post_dotcom = df["1995":].reset_index(drop=True) post_housing = df["2004":].reset_index(drop=True) # take logs so we can do growth rates as log(x_{t+N}) - log(x_t) post_dotcom = np.log(post_dotcom) post_housing = np.log(post_housing) dotcom_growth = post_dotcom - post_dotcom.iloc[0, :] housing_growth = post_housing - post_housing.iloc[0, :] fig, axs = plt.subplots(3, 1, figsize=(10, 5)) variables = ["risk_free_rate", "Early Stage", "Total"] for i in range(len(variables)): var = variables[i] # add dotcom line dotcom_growth[var].plot(ax=axs[i]) # add housing line housing_growth[var].plot(ax=axs[i]) # set title axs[i].set_title(var) # set legend and xlabel on last plot only axs[-1].legend(["dotcom", "housing"]) axs[-1].set_xlabel("Quarters since boom") # make subplots not overlap fig.tight_layout() Explanation: That looks much better -- we have missing data at the top and the bottom for months that aren't available in the venture capital dataset, but nothing else should be missing. Let's try to do something interesting with this data. 
We want to plot the growth rate in the risk free rate, early stage vc funding, and total vc funding for the months following the start of the dotcom boom (roughly Jan 1995) and the housing boom (roughly Jan 2004). Read that again carefully. For each of the three series we want 2 lines. For each line, the x axis will be quarters since the start of the boom. The y axis will be growth rates since the first month of the bubble. End of explanation
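A hedged alternative if month-over-month growth rates are wanted instead of cumulative growth since the start of each boom (df is the merged monthly DataFrame from above):
monthly_growth = df.pct_change().dropna()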
14,864
Given the following text description, write Python code to implement the functionality described below step by step Description: Vortex sheets Our numerical method for 2D-potential flow will be developed using vortex sheets. In this first lesson, we derive the equations for a simple vortex sheet and plot the velocity fields. Potential Function Consider a 2D vortex sheet defined along a curve $\cal S$. The sheet is characterized by a strength $\gamma(s)$, where $s$ is the coordinate along the sheet. An infinitesimal segment of this sheet is an infinitesimal point-vortex with strength $d\Gamma = \gamma ds$. Therefore we can integrate the potential for a point vortex $$ \phi = \frac{\Gamma}{2\pi}\theta $$ to define the potential function for a vortex sheet as $$ \phi(x,y) = \int_{\cal S} \frac{\gamma(s)}{2\pi}\theta(x,y,s)\ ds $$ where $\theta$ is the angle from $s$ to the point at which we evaluate the potential. For a vortex sheet defined from $-S,S$ along the $x$-axis, $\theta=\tan^{-1}(y/(x-s))$. If the strength is constant along the sheet, the potential is $$ \phi(x,y) = \frac{\gamma}{2\pi}\int^S_{-S} \tan^{-1}\left(\frac y{x-s}\right)ds $$ Velocity Field The velocity is defined from the potential as $$ u = \frac{\partial\phi}{\partial x}, \quad v = \frac{\partial\phi}{\partial y} $$ Therefore, the $u$ velocity is $$u(x,y) = \frac{\gamma}{2\pi}\int^S_{-S} \frac{-y}{y^2+(x-s)^2} ds $$ substitute $t = (x-s)/y$, $dt = -ds/y$ and integrate to get $$u(x,y) = \frac{\gamma}{2\pi}\left[\tan^{-1}\left(\frac{x-S}y\right)-\tan^{-1}\left(\frac{x+S}y\right)\right].$$ While the $v$ velocity is $$v(x,y) = \frac{\gamma}{2\pi}\int^S_{-S} \frac{x-s}{y^2+(x-s)^2} ds $$ substitute $t = (x-s)^2+y^2$, $dt = -2(x-s)ds$ and integrate to get $$v(x,y) =\frac{\gamma}{4\pi} \log\left(\frac{(x+S)^2+y^2}{(x-S)^2+y^2}\right).$$ That's it! For a given $\gamma$ and $S$ we can determine the velocity at any point $x,y$ by evaluating some $\tan^{-1}$ and $\log$ functions. Numerical implementation To visualize the velocity field we will discretize the background space into a uniform grid and evaluate the functions above at each point. We need to import numpy to do this. This imports numerical functions like linspace to evenly divide up a line segment into an array points, and meshgrid which takes two one-dimensional arrays and creates two two-dimensional arrays to fill the space. Step1: Lets visualize the grid to see what we made. We need to import pyplot which has a large set of plotting functions similar to matlab, such as a scatter plot. Step2: As expected, a grid of equally space points. Next, we use the equations above to determine the velocity at each point in terms of the source strength and sheet extents. Since we'll use this repeatedly, we will write it as a set of functions. Coding fundamental Step3: Not the prettiest equations, but nothing numpy can't handle. Now let's make a function to plot the flow. We'll use arrows to show the velocity vectors (quiver) and color contours to show the velocity magnitude (contourf with colorbar for the legend). I've also included the streamplot function to plot streamlines, but this is slow, so I'll let you turn it on your self if you want them. Step4: Now we can compute the velocity on the grid and plot it Step5: Quiz 1 What shape do the streamlines make when you are sufficiently far from the body? Ellipses Circles Straight lines (Hint Step6: The dark blue circle is a stagnation point, ie the fluid has stopped, ie $u=v=0$. 
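As a hedged numerical check of that statement, the stagnation point can be approximated as the grid node with the smallest speed (u, v, x, y are the arrays built in the code below):
speed = numpy.sqrt(u**2 + v**2)
i, j = numpy.unravel_index(speed.argmin(), speed.shape)
print('approximate stagnation point near', x[i, j], y[i, j])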
Quiz 3 How can you make the stagnation point touch the vortex sheet? Set $\alpha=0$ Set $\gamma=\pm 2 U_\infty$ Set $S = \infty$ Quiz 4 Change the background flow to be vertical, ie $\alpha=\frac\pi 2$. How can you make the stagnation point touch the vortex sheet now? Set $\gamma=\pm 2 U_\infty$ Set $\gamma=\pm 2 V_\infty$ Impossible General vortex panel Finally, we would like to be able to compute the flow induced by a flat vortex sheet defined between any two points $x_0,y_0$ and $x_1,y_1$. A sheet so defined is called a vortex panel. We could start from scratch, re-deriving the potential and velocity fields. But why would we want to do that? Coding fundamental Step7: Now we can define a general panel and compute its velocity. Step8: Quiz 5 How can we compute the flow for a pair of parallel vortex panels with opposite strengths in a free stream? Spend a couple hours writing more code Superposition Your turn Write the code to make this happen. And answer the following Step9: Ignore the line below - it just loads the style sheet.
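For the 'your turn' exercise above, a hedged sketch of the superposition idea; the panel end points and strengths here are illustrative guesses, not the required answer:
top = Panel(x0=-1, y0=0.5, x1=1, y1=0.5, gamma=-2)
bottom = Panel(x0=-1, y0=-0.5, x1=1, y1=-0.5, gamma=2)
u1, v1 = top.velocity(x, y)
u2, v2 = bottom.velocity(x, y)
u, v = 1 + u1 + u2, v1 + v2   # unit horizontal free stream plus both panels
plot_uv(u, v)
top.plot(); bottom.plot()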
Python Code: import numpy N = 30 # number of points along each axis X = numpy.linspace(-2, 2, N) # computes a 1D-array for x Y = numpy.linspace(-2, 2, N) # computes a 1D-array for y x, y = numpy.meshgrid(X, Y) # generates a mesh grid Explanation: Vortex sheets Our numerical method for 2D-potential flow will be developed using vortex sheets. In this first lesson, we derive the equations for a simple vortex sheet and plot the velocity fields. Potential Function Consider a 2D vortex sheet defined along a curve $\cal S$. The sheet is characterized by a strength $\gamma(s)$, where $s$ is the coordinate along the sheet. An infinitesimal segment of this sheet is an infinitesimal point-vortex with strength $d\Gamma = \gamma ds$. Therefore we can integrate the potential for a point vortex $$ \phi = \frac{\Gamma}{2\pi}\theta $$ to define the potential function for a vortex sheet as $$ \phi(x,y) = \int_{\cal S} \frac{\gamma(s)}{2\pi}\theta(x,y,s)\ ds $$ where $\theta$ is the angle from $s$ to the point at which we evaluate the potential. For a vortex sheet defined from $-S,S$ along the $x$-axis, $\theta=\tan^{-1}(y/(x-s))$. If the strength is constant along the sheet, the potential is $$ \phi(x,y) = \frac{\gamma}{2\pi}\int^S_{-S} \tan^{-1}\left(\frac y{x-s}\right)ds $$ Velocity Field The velocity is defined from the potential as $$ u = \frac{\partial\phi}{\partial x}, \quad v = \frac{\partial\phi}{\partial y} $$ Therefore, the $u$ velocity is $$u(x,y) = \frac{\gamma}{2\pi}\int^S_{-S} \frac{-y}{y^2+(x-s)^2} ds $$ substitute $t = (x-s)/y$, $dt = -ds/y$ and integrate to get $$u(x,y) = \frac{\gamma}{2\pi}\left[\tan^{-1}\left(\frac{x-S}y\right)-\tan^{-1}\left(\frac{x+S}y\right)\right].$$ While the $v$ velocity is $$v(x,y) = \frac{\gamma}{2\pi}\int^S_{-S} \frac{x-s}{y^2+(x-s)^2} ds $$ substitute $t = (x-s)^2+y^2$, $dt = -2(x-s)ds$ and integrate to get $$v(x,y) =\frac{\gamma}{4\pi} \log\left(\frac{(x+S)^2+y^2}{(x-S)^2+y^2}\right).$$ That's it! For a given $\gamma$ and $S$ we can determine the velocity at any point $x,y$ by evaluating some $\tan^{-1}$ and $\log$ functions. Numerical implementation To visualize the velocity field we will discretize the background space into a uniform grid and evaluate the functions above at each point. We need to import numpy to do this. This imports numerical functions like linspace to evenly divide up a line segment into an array points, and meshgrid which takes two one-dimensional arrays and creates two two-dimensional arrays to fill the space. End of explanation from matplotlib import pyplot %matplotlib inline pyplot.scatter(x, y) pyplot.xlabel('x') pyplot.ylabel('y') Explanation: Lets visualize the grid to see what we made. We need to import pyplot which has a large set of plotting functions similar to matlab, such as a scatter plot. End of explanation # velocity component functions def get_u( x, y, S, gamma ): return gamma/(2*numpy.pi)*(numpy.arctan((x-S)/y)-numpy.arctan((x+S)/y)) def get_v( x, y, S, gamma ): return gamma/(4*numpy.pi)*(numpy.log(((x+S)**2+y**2)/((x-S)**2+y**2))) Explanation: As expected, a grid of equally space points. Next, we use the equations above to determine the velocity at each point in terms of the source strength and sheet extents. Since we'll use this repeatedly, we will write it as a set of functions. Coding fundamental: Functions Don't ever write the same code twice! If you aren't familiar with defining functions in python, read up. 
End of explanation def plot_uv(u,v): pyplot.figure(figsize=(8,11)) # set size pyplot.xlabel('x', fontsize=16) # label x pyplot.ylabel('y', fontsize=16) # label y m = numpy.sqrt(u**2+v**2) # compute velocity magnitude velocity = pyplot.contourf(x, y, m, vmin=0) # plot magnitude contours cbar = pyplot.colorbar(velocity, orientation='horizontal') cbar.set_label('Velocity magnitude', fontsize=16); pyplot.quiver(x, y, u, v) # plot vector field # pyplot.streamplot(x, y, u, v) # plots streamlines - this is slow! Explanation: Not the prettiest equations, but nothing numpy can't handle. Now let's make a function to plot the flow. We'll use arrows to show the velocity vectors (quiver) and color contours to show the velocity magnitude (contourf with colorbar for the legend). I've also included the streamplot function to plot streamlines, but this is slow, so I'll let you turn it on your self if you want them. End of explanation # compute the velocity gamma = -4 # sheet strength S = 1 # sheet extents u = get_u(x,y,S,gamma) v = get_v(x,y,S,gamma) # plot it plot_uv(u,v) pyplot.plot([-min(S,2),min(S,2)],[0,0],'k-',lw=2) # draw the vortex sheet Explanation: Now we can compute the velocity on the grid and plot it End of explanation alpha = numpy.pi/10 # free-stream angle U_inf = numpy.cos(alpha) # free-stream in x V_inf = numpy.sin(alpha) # free-stream in y # superimpose to get velocity gamma = -4 # sheet strength S = 0.5 # sheet extents u = U_inf+get_u(x,y,S,gamma) v = V_inf+get_v(x,y,S,gamma) # plot it plot_uv(u,v) pyplot.plot([-min(S,2),min(S,2)],[0,0],'k-',lw=2) # draw the vortex sheet Explanation: Quiz 1 What shape do the streamlines make when you are sufficiently far from the body? Ellipses Circles Straight lines (Hint: This is an interactive notebook - which parameter can you vary to answer the question?) Quiz 2 What is the u,v velocity of points very near the center of the vortex sheet? $u=0,\ v=\sqrt\gamma$ $u=\pm\frac 12 \gamma,\ v=0$ $u=\pm\gamma^2,\ v=0$ (Hint: $tan^{-1}(\pm \infty) = \pm \frac \pi 2$ ) Background flow Next, lets add a uniform background flow with magnitude one at angle $\alpha$. $$ U_\infty = \cos\alpha,\quad V_\infty = \sin\alpha $$ Using superposition the total velocity is just the sum of the vortex sheet and uniform flow. End of explanation # vortex panel class class Panel: # save the inputs and pre-compute factors for the coordinate tranform def __init__( self, x0, y0, x1, y1, gamma ): self.x,self.y,self.gamma = [x0,x1],[y0,y1],gamma self.xc = 0.5*(x0+x1) # panel x-center self.yc = 0.5*(y0+y1) # panel y-center self.S = numpy.sqrt( # ... (x1-self.xc)**2+(y1-self.yc)**2) # panel width self.sx = (x1-self.xc)/self.S # unit vector in x self.sy = (y1-self.yc)/self.S # unit vector in y # get the velocity! 
def velocity( self, x, y ): gamma = self.gamma xp,yp = self.transform_xy( x, y ) # transform up = get_u( xp, yp, self.S, gamma ) # get u prime vp = get_v( xp, yp, self.S, gamma ) # get v prime return self.rotate_uv( up, vp ) # rotate back # plot the panel def plot(self): return pyplot.plot(self.x,self.y,'k-',lw=2) # transform from global to panel coordinates def transform_xy( self, x, y ): xt = x-self.xc # shift x yt = y-self.yc # shift y xp = xt*self.sx+yt*self.sy # rotate x yp = yt*self.sx-xt*self.sy # rotate y return [ xp, yp ] # rotate velocity back to global coordinates def rotate_uv( self, up, vp): u = up*self.sx-vp*self.sy # reverse rotate u prime v = vp*self.sx+up*self.sy # reverse rotate v prime return [ u, v ] Explanation: The dark blue circle is a stagnation point, ie the fluid has stopped, ie $u=v=0$. Quiz 3 How can you make the stagnation point touch the vortex sheet? Set $\alpha=0$ Set $\gamma=\pm 2 U_\infty$ Set $S = \infty$ Quiz 4 Change the background flow to be vertical, ie $\alpha=\frac\pi 2$. How can you make the stagnation point touch the vortex sheet now? Set $\gamma=\pm 2 U_\infty$ Set $\gamma=\pm 2 V_\infty$ Impossible General vortex panel Finally, we would like to be able to compute the flow induced by a flat vortex sheet defined between any two points $x_0,y_0$ and $x_1,y_1$. A sheet so defined is called a vortex panel. We could start from scratch, re-deriving the potential and velocity fields. But why would we want to do that? Coding fundamental: Reuse Recast problems to reuse existing code! By switching from the global coordinates to a panel-based coordinate system, we can transform any vortex panel to match our previous example. After computing the velocity using our old functions, we just need to rotate the vector back to the global coordinates. class Panel The velocity function below follows this process to compute the velocity induced by the panel at any point $x,y$. We've defined a class called Panel to hold the velocity, transform_xy and rotate_uv functions since these all belong together. This also lets us store the information about a Panel (the end points, strength, width, and direction) using the __init__ function, and draw the Panel using the plot function. If you're interested, read up on classes in Python. End of explanation # define panel my_panel = Panel(x0=-0.7,y0=0.5,x1=0.5,y1=-0.4,gamma=-2) # compute velocity on grid u,v = my_panel.velocity(x,y) # plot it plot_uv(u,v) # plot the flow on the grid my_panel.plot() # plot the panel Explanation: Now we can define a general panel and compute its velocity. End of explanation # your code here Explanation: Quiz 5 How can we compute the flow for a pair of parallel vortex panels with opposite strengths in a free stream? Spend a couple hours writing more code Superposition Your turn Write the code to make this happen. And answer the following: What strength is required to stop the flow between the panels? What do the streamlines look like in this case? What could this region of stopped fluid represent? End of explanation from IPython.core.display import HTML def css_styling(): styles = open('../styles/custom.css', 'r').read() return HTML(styles) css_styling() Explanation: Ignore the line below - it just loads the style sheet. End of explanation
14,865
Given the following text description, write Python code to implement the functionality described below step by step Description: Style Transfer In this notebook we will implement the style transfer technique from "Image Style Transfer Using Convolutional Neural Networks" (Gatys et al., CVPR 2015). The general idea is to take two images, and produce a new image that reflects the content of one but the artistic "style" of the other. We will do this by first formulating a loss function that matches the content and style of each respective image in the feature space of a deep network, and then performing gradient descent on the pixels of the image itself. The deep network we use as a feature extractor is SqueezeNet, a small model that has been trained on ImageNet. You could use any network, but we chose SqueezeNet here for its small size and efficiency. Here's an example of the images you'll be able to produce by the end of this notebook Step1: We provide you with some helper functions to deal with images, since for this part of the assignment we're dealing with real JPEGs, not CIFAR-10 data. Step3: As in the last assignment, we need to set the dtype to select either the CPU or the GPU Step5: Computing Loss We're going to compute the three components of our loss function now. The loss function is a weighted sum of three terms Step6: Test your content loss. You should see errors less than 0.001. Step8: Style loss Now we can tackle the style loss. For a given layer $\ell$, the style loss is defined as follows Step9: Test your Gram matrix code. You should see errors less than 0.001. Step11: Next, implement the style loss Step12: Test your style loss implementation. The error should be less than 0.001. Step14: Total-variation regularization It turns out that it's helpful to also encourage smoothness in the image. We can do this by adding another term to our loss that penalizes wiggles or "total variation" in the pixel values. You can compute the "total variation" as the sum of the squares of differences in the pixel values for all pairs of pixels that are next to each other (horizontally or vertically). Here we sum the total-variation regualarization for each of the 3 input channels (RGB), and weight the total summed loss by the total variation weight, $w_t$ Step15: Test your TV loss implementation. Error should be less than 0.001. Step17: Now we're ready to string it all together (you shouldn't have to modify this function) Step18: Generate some pretty pictures! Try out style_transfer on the three different parameter sets below. Make sure to run all three cells. Feel free to add your own, but make sure to include the results of style transfer on the third parameter set (starry night) in your submitted notebook. The content_image is the filename of content image. The style_image is the filename of style image. The image_size is the size of smallest image dimension of the content image (used for content loss and generated image). The style_size is the size of smallest style image dimension. The content_layer specifies which layer to use for content loss. The content_weight gives weighting on content loss in the overall loss function. Increasing the value of this parameter will make the final image look more realistic (closer to the original content). style_layers specifies a list of which layers to use for style loss. style_weights specifies a list of weights to use for each layer in style_layers (each of which will contribute a term to the overall style loss). 
We generally use higher weights for the earlier style layers because they describe more local/smaller scale features, which are more important to texture than features over larger receptive fields. In general, increasing these weights will make the resulting image look less like the original content and more distorted towards the appearance of the style image. tv_weight specifies the weighting of total variation regularization in the overall loss function. Increasing this value makes the resulting image look smoother and less jagged, at the cost of lower fidelity to style and content. Below the next three cells of code (in which you shouldn't change the hyperparameters), feel free to copy and paste the parameters to play around them and see how the resulting image changes. Step19: Feature Inversion The code you've written can do another cool thing. In an attempt to understand the types of features that convolutional networks learn to recognize, a recent paper [1] attempts to reconstruct an image from its feature representation. We can easily implement this idea using image gradients from the pretrained network, which is exactly what we did above (but with two different feature representations). Now, if you set the style weights to all be 0 and initialize the starting image to random noise instead of the content source image, you'll reconstruct an image from the feature representation of the content source image. You're starting with total noise, but you should end up with something that looks quite a bit like your original image. (Similarly, you could do "texture synthesis" from scratch if you set the content weight to 0 and initialize the starting image to random noise, but we won't ask you to do that here.) [1] Aravindh Mahendran, Andrea Vedaldi, "Understanding Deep Image Representations by Inverting them", CVPR 2015
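As a hedged reference sketch of the content-loss and Gram-matrix formulas above (one possible implementation, written independently of the assignment's own solution cells below):
import torch
def content_loss_sketch(w_c, F_cur, F_orig):
    return w_c * torch.sum((F_cur - F_orig) ** 2)
def gram_sketch(features, normalize=True):
    N, C, H, W = features.size()
    F = features.view(N, C, H * W)           # flatten the spatial dimensions
    G = torch.bmm(F, F.transpose(1, 2))      # G_ij = sum_k F_ik F_jk for each image
    if normalize:
        G = G / (H * W * C)
    return G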
Python Code: import torch import torch.nn as nn from torch.autograd import Variable import torchvision import torchvision.transforms as T import PIL import numpy as np from scipy.misc import imread from collections import namedtuple import matplotlib.pyplot as plt from cs231n.image_utils import SQUEEZENET_MEAN, SQUEEZENET_STD %matplotlib inline Explanation: Style Transfer In this notebook we will implement the style transfer technique from "Image Style Transfer Using Convolutional Neural Networks" (Gatys et al., CVPR 2015). The general idea is to take two images, and produce a new image that reflects the content of one but the artistic "style" of the other. We will do this by first formulating a loss function that matches the content and style of each respective image in the feature space of a deep network, and then performing gradient descent on the pixels of the image itself. The deep network we use as a feature extractor is SqueezeNet, a small model that has been trained on ImageNet. You could use any network, but we chose SqueezeNet here for its small size and efficiency. Here's an example of the images you'll be able to produce by the end of this notebook: Setup End of explanation def preprocess(img, size=512): transform = T.Compose([ T.Scale(size), T.ToTensor(), T.Normalize(mean=SQUEEZENET_MEAN.tolist(), std=SQUEEZENET_STD.tolist()), T.Lambda(lambda x: x[None]), ]) return transform(img) def deprocess(img): transform = T.Compose([ T.Lambda(lambda x: x[0]), T.Normalize(mean=[0, 0, 0], std=[1.0 / s for s in SQUEEZENET_STD.tolist()]), T.Normalize(mean=[-m for m in SQUEEZENET_MEAN.tolist()], std=[1, 1, 1]), T.Lambda(rescale), T.ToPILImage(), ]) return transform(img) def rescale(x): low, high = x.min(), x.max() x_rescaled = (x - low) / (high - low) return x_rescaled def rel_error(x,y): return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) def features_from_img(imgpath, imgsize): img = preprocess(PIL.Image.open(imgpath), size=imgsize) img_var = Variable(img.type(dtype)) return extract_features(img_var, cnn), img_var # Older versions of scipy.misc.imresize yield different results # from newer versions, so we check to make sure scipy is up to date. def check_scipy(): import scipy vnum = int(scipy.__version__.split('.')[1]) assert vnum >= 16, "You must install SciPy >= 0.16.0 to complete this notebook." check_scipy() answers = np.load('style-transfer-checks.npz') Explanation: We provide you with some helper functions to deal with images, since for this part of the assignment we're dealing with real JPEGs, not CIFAR-10 data. End of explanation dtype = torch.FloatTensor # Uncomment out the following line if you're on a machine with a GPU set up for PyTorch! # dtype = torch.cuda.FloatTensor # Load the pre-trained SqueezeNet model. cnn = torchvision.models.squeezenet1_1(pretrained=True).features cnn.type(dtype) # We don't want to train the model any further, so we don't want PyTorch to waste computation # computing gradients on parameters we're never going to update. for param in cnn.parameters(): param.requires_grad = False # We provide this helper code which takes an image, a model (cnn), and returns a list of # feature maps, one per layer. def extract_features(x, cnn): Use the CNN to extract features from the input image x. Inputs: - x: A PyTorch Variable of shape (N, C, H, W) holding a minibatch of images that will be fed to the CNN. - cnn: A PyTorch model that we will use to extract features. 
Returns: - features: A list of feature for the input images x extracted using the cnn model. features[i] is a PyTorch Variable of shape (N, C_i, H_i, W_i); recall that features from different layers of the network may have different numbers of channels (C_i) and spatial dimensions (H_i, W_i). features = [] prev_feat = x for i, module in enumerate(cnn._modules.values()): next_feat = module(prev_feat) features.append(next_feat) prev_feat = next_feat return features Explanation: As in the last assignment, we need to set the dtype to select either the CPU or the GPU End of explanation def content_loss(content_weight, content_current, content_original): Compute the content loss for style transfer. Inputs: - content_weight: Scalar giving the weighting for the content loss. - content_current: features of the current image; this is a PyTorch Tensor of shape (1, C_l, H_l, W_l). - content_target: features of the content image, Tensor with shape (1, C_l, H_l, W_l). Returns: - scalar content loss pass Explanation: Computing Loss We're going to compute the three components of our loss function now. The loss function is a weighted sum of three terms: content loss + style loss + total variation loss. You'll fill in the functions that compute these weighted terms below. Content loss We can generate an image that reflects the content of one image and the style of another by incorporating both in our loss function. We want to penalize deviations from the content of the content image and deviations from the style of the style image. We can then use this hybrid loss function to perform gradient descent not on the parameters of the model, but instead on the pixel values of our original image. Let's first write the content loss function. Content loss measures how much the feature map of the generated image differs from the feature map of the source image. We only care about the content representation of one layer of the network (say, layer $\ell$), that has feature maps $A^\ell \in \mathbb{R}^{1 \times C_\ell \times H_\ell \times W_\ell}$. $C_\ell$ is the number of filters/channels in layer $\ell$, $H_\ell$ and $W_\ell$ are the height and width. We will work with reshaped versions of these feature maps that combine all spatial positions into one dimension. Let $F^\ell \in \mathbb{R}^{N_\ell \times M_\ell}$ be the feature map for the current image and $P^\ell \in \mathbb{R}^{N_\ell \times M_\ell}$ be the feature map for the content source image where $M_\ell=H_\ell\times W_\ell$ is the number of elements in each feature map. Each row of $F^\ell$ or $P^\ell$ represents the vectorized activations of a particular filter, convolved over all positions of the image. Finally, let $w_c$ be the weight of the content loss term in the loss function. Then the content loss is given by: $L_c = w_c \times \sum_{i,j} (F_{ij}^{\ell} - P_{ij}^{\ell})^2$ End of explanation def content_loss_test(correct): content_image = 'styles/tubingen.jpg' image_size = 192 content_layer = 3 content_weight = 6e-2 c_feats, content_img_var = features_from_img(content_image, image_size) bad_img = Variable(torch.zeros(*content_img_var.data.size())) feats = extract_features(bad_img, cnn) student_output = content_loss(content_weight, c_feats[content_layer], feats[content_layer]).data.numpy() error = rel_error(correct, student_output) print('Maximum error is {:.3f}'.format(error)) content_loss_test(answers['cl_out']) Explanation: Test your content loss. You should see errors less than 0.001. 
End of explanation def gram_matrix(features, normalize=True): Compute the Gram matrix from features. Inputs: - features: PyTorch Variable of shape (N, C, H, W) giving features for a batch of N images. - normalize: optional, whether to normalize the Gram matrix If True, divide the Gram matrix by the number of neurons (H * W * C) Returns: - gram: PyTorch Variable of shape (N, C, C) giving the (optionally normalized) Gram matrices for the N input images. pass Explanation: Style loss Now we can tackle the style loss. For a given layer $\ell$, the style loss is defined as follows: First, compute the Gram matrix G which represents the correlations between the responses of each filter, where F is as above. The Gram matrix is an approximation to the covariance matrix -- we want the activation statistics of our generated image to match the activation statistics of our style image, and matching the (approximate) covariance is one way to do that. There are a variety of ways you could do this, but the Gram matrix is nice because it's easy to compute and in practice shows good results. Given a feature map $F^\ell$ of shape $(1, C_\ell, M_\ell)$, the Gram matrix has shape $(1, C_\ell, C_\ell)$ and its elements are given by: $$G_{ij}^\ell = \sum_k F^{\ell}{ik} F^{\ell}{jk}$$ Assuming $G^\ell$ is the Gram matrix from the feature map of the current image, $A^\ell$ is the Gram Matrix from the feature map of the source style image, and $w_\ell$ a scalar weight term, then the style loss for the layer $\ell$ is simply the weighted Euclidean distance between the two Gram matrices: $$L_s^\ell = w_\ell \sum_{i, j} \left(G^\ell_{ij} - A^\ell_{ij}\right)^2$$ In practice we usually compute the style loss at a set of layers $\mathcal{L}$ rather than just a single layer $\ell$; then the total style loss is the sum of style losses at each layer: $$L_s = \sum_{\ell \in \mathcal{L}} L_s^\ell$$ Begin by implementing the Gram matrix computation below: End of explanation def gram_matrix_test(correct): style_image = 'styles/starry_night.jpg' style_size = 192 feats, _ = features_from_img(style_image, style_size) student_output = gram_matrix(feats[5].clone()).data.numpy() error = rel_error(correct, student_output) print('Maximum error is {:.3f}'.format(error)) gram_matrix_test(answers['gm_out']) Explanation: Test your Gram matrix code. You should see errors less than 0.001. End of explanation # Now put it together in the style_loss function... def style_loss(feats, style_layers, style_targets, style_weights): Computes the style loss at a set of layers. Inputs: - feats: list of the features at every layer of the current image, as produced by the extract_features function. - style_layers: List of layer indices into feats giving the layers to include in the style loss. - style_targets: List of the same length as style_layers, where style_targets[i] is a PyTorch Variable giving the Gram matrix the source style image computed at layer style_layers[i]. - style_weights: List of the same length as style_layers, where style_weights[i] is a scalar giving the weight for the style loss at layer style_layers[i]. Returns: - style_loss: A PyTorch Variable holding a scalar giving the style loss. # Hint: you can do this with one for loop over the style layers, and should # not be very much code (~5 lines). You will need to use your gram_matrix function. 
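    # A hedged sketch of that loop, left as comments so the stub below stays a stub:
    #     loss = 0
    #     for i, layer in enumerate(style_layers):
    #         G = gram_matrix(feats[layer])
    #         loss = loss + style_weights[i] * torch.sum((G - style_targets[i]) ** 2)
    #     return loss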
pass Explanation: Next, implement the style loss: End of explanation def style_loss_test(correct): content_image = 'styles/tubingen.jpg' style_image = 'styles/starry_night.jpg' image_size = 192 style_size = 192 style_layers = [1, 4, 6, 7] style_weights = [300000, 1000, 15, 3] c_feats, _ = features_from_img(content_image, image_size) feats, _ = features_from_img(style_image, style_size) style_targets = [] for idx in style_layers: style_targets.append(gram_matrix(feats[idx].clone())) student_output = style_loss(c_feats, style_layers, style_targets, style_weights).data.numpy() error = rel_error(correct, student_output) print('Error is {:.3f}'.format(error)) style_loss_test(answers['sl_out']) Explanation: Test your style loss implementation. The error should be less than 0.001. End of explanation def tv_loss(img, tv_weight): Compute total variation loss. Inputs: - img: PyTorch Variable of shape (1, 3, H, W) holding an input image. - tv_weight: Scalar giving the weight w_t to use for the TV loss. Returns: - loss: PyTorch Variable holding a scalar giving the total variation loss for img weighted by tv_weight. # Your implementation should be vectorized and not require any loops! pass Explanation: Total-variation regularization It turns out that it's helpful to also encourage smoothness in the image. We can do this by adding another term to our loss that penalizes wiggles or "total variation" in the pixel values. You can compute the "total variation" as the sum of the squares of differences in the pixel values for all pairs of pixels that are next to each other (horizontally or vertically). Here we sum the total-variation regualarization for each of the 3 input channels (RGB), and weight the total summed loss by the total variation weight, $w_t$: $L_{tv} = w_t \times \sum_{c=1}^3\sum_{i=1}^{H-1} \sum_{j=1}^{W-1} \left( (x_{i,j+1, c} - x_{i,j,c})^2 + (x_{i+1, j,c} - x_{i,j,c})^2 \right)$ In the next cell, fill in the definition for the TV loss term. To receive full credit, your implementation should not have any loops. End of explanation def tv_loss_test(correct): content_image = 'styles/tubingen.jpg' image_size = 192 tv_weight = 2e-2 content_img = preprocess(PIL.Image.open(content_image), size=image_size) content_img_var = Variable(content_img.type(dtype)) student_output = tv_loss(content_img_var, tv_weight).data.numpy() error = rel_error(correct, student_output) print('Error is {:.3f}'.format(error)) tv_loss_test(answers['tv_out']) Explanation: Test your TV loss implementation. Error should be less than 0.001. End of explanation def style_transfer(content_image, style_image, image_size, style_size, content_layer, content_weight, style_layers, style_weights, tv_weight, init_random = False): Run style transfer! 
Inputs: - content_image: filename of content image - style_image: filename of style image - image_size: size of smallest image dimension (used for content loss and generated image) - style_size: size of smallest style image dimension - content_layer: layer to use for content loss - content_weight: weighting on content loss - style_layers: list of layers to use for style loss - style_weights: list of weights to use for each layer in style_layers - tv_weight: weight of total variation regularization term - init_random: initialize the starting image to uniform random noise # Extract features for the content image content_img = preprocess(PIL.Image.open(content_image), size=image_size) content_img_var = Variable(content_img.type(dtype)) feats = extract_features(content_img_var, cnn) content_target = feats[content_layer].clone() # Extract features for the style image style_img = preprocess(PIL.Image.open(style_image), size=style_size) style_img_var = Variable(style_img.type(dtype)) feats = extract_features(style_img_var, cnn) style_targets = [] for idx in style_layers: style_targets.append(gram_matrix(feats[idx].clone())) # Initialize output image to content image or nois if init_random: img = torch.Tensor(content_img.size()).uniform_(0, 1) else: img = content_img.clone().type(dtype) # We do want the gradient computed on our image! img_var = Variable(img, requires_grad=True) # Set up optimization hyperparameters initial_lr = 3.0 decayed_lr = 0.1 decay_lr_at = 180 # Note that we are optimizing the pixel values of the image by passing # in the img_var Torch variable, whose requires_grad flag is set to True optimizer = torch.optim.Adam([img_var], lr=initial_lr) f, axarr = plt.subplots(1,2) axarr[0].axis('off') axarr[1].axis('off') axarr[0].set_title('Content Source Img.') axarr[1].set_title('Style Source Img.') axarr[0].imshow(deprocess(content_img.cpu())) axarr[1].imshow(deprocess(style_img.cpu())) plt.show() plt.figure() for t in range(200): if t < 190: img.clamp_(-1.5, 1.5) optimizer.zero_grad() feats = extract_features(img_var, cnn) # Compute loss c_loss = content_loss(content_weight, feats[content_layer], content_target) s_loss = style_loss(feats, style_layers, style_targets, style_weights) t_loss = tv_loss(img_var, tv_weight) loss = c_loss + s_loss + t_loss loss.backward() # Perform gradient descents on our image values if t == decay_lr_at: optimizer = torch.optim.Adam([img_var], lr=decayed_lr) optimizer.step() if t % 100 == 0: print('Iteration {}'.format(t)) plt.axis('off') plt.imshow(deprocess(img.cpu())) plt.show() print('Iteration {}'.format(t)) plt.axis('off') plt.imshow(deprocess(img.cpu())) plt.show() Explanation: Now we're ready to string it all together (you shouldn't have to modify this function): End of explanation # Composition VII + Tubingen params1 = { 'content_image' : 'styles/tubingen.jpg', 'style_image' : 'styles/composition_vii.jpg', 'image_size' : 192, 'style_size' : 512, 'content_layer' : 3, 'content_weight' : 5e-2, 'style_layers' : (1, 4, 6, 7), 'style_weights' : (20000, 500, 12, 1), 'tv_weight' : 5e-2 } style_transfer(**params1) # Scream + Tubingen params2 = { 'content_image':'styles/tubingen.jpg', 'style_image':'styles/the_scream.jpg', 'image_size':192, 'style_size':224, 'content_layer':3, 'content_weight':3e-2, 'style_layers':[1, 4, 6, 7], 'style_weights':[200000, 800, 12, 1], 'tv_weight':2e-2 } style_transfer(**params2) # Starry Night + Tubingen params3 = { 'content_image' : 'styles/tubingen.jpg', 'style_image' : 'styles/starry_night.jpg', 'image_size' : 192, 
'style_size' : 192, 'content_layer' : 3, 'content_weight' : 6e-2, 'style_layers' : [1, 4, 6, 7], 'style_weights' : [300000, 1000, 15, 3], 'tv_weight' : 2e-2 } style_transfer(**params3) Explanation: Generate some pretty pictures! Try out style_transfer on the three different parameter sets below. Make sure to run all three cells. Feel free to add your own, but make sure to include the results of style transfer on the third parameter set (starry night) in your submitted notebook. The content_image is the filename of content image. The style_image is the filename of style image. The image_size is the size of smallest image dimension of the content image (used for content loss and generated image). The style_size is the size of smallest style image dimension. The content_layer specifies which layer to use for content loss. The content_weight gives weighting on content loss in the overall loss function. Increasing the value of this parameter will make the final image look more realistic (closer to the original content). style_layers specifies a list of which layers to use for style loss. style_weights specifies a list of weights to use for each layer in style_layers (each of which will contribute a term to the overall style loss). We generally use higher weights for the earlier style layers because they describe more local/smaller scale features, which are more important to texture than features over larger receptive fields. In general, increasing these weights will make the resulting image look less like the original content and more distorted towards the appearance of the style image. tv_weight specifies the weighting of total variation regularization in the overall loss function. Increasing this value makes the resulting image look smoother and less jagged, at the cost of lower fidelity to style and content. Below the next three cells of code (in which you shouldn't change the hyperparameters), feel free to copy and paste the parameters to play around them and see how the resulting image changes. End of explanation # Feature Inversion -- Starry Night + Tubingen params_inv = { 'content_image' : 'styles/tubingen.jpg', 'style_image' : 'styles/starry_night.jpg', 'image_size' : 192, 'style_size' : 192, 'content_layer' : 3, 'content_weight' : 6e-2, 'style_layers' : [1, 4, 6, 7], 'style_weights' : [0, 0, 0, 0], # we discard any contributions from style to the loss 'tv_weight' : 2e-2, 'init_random': True # we want to initialize our image to be random } style_transfer(**params_inv) Explanation: Feature Inversion The code you've written can do another cool thing. In an attempt to understand the types of features that convolutional networks learn to recognize, a recent paper [1] attempts to reconstruct an image from its feature representation. We can easily implement this idea using image gradients from the pretrained network, which is exactly what we did above (but with two different feature representations). Now, if you set the style weights to all be 0 and initialize the starting image to random noise instead of the content source image, you'll reconstruct an image from the feature representation of the content source image. You're starting with total noise, but you should end up with something that looks quite a bit like your original image. (Similarly, you could do "texture synthesis" from scratch if you set the content weight to 0 and initialize the starting image to random noise, but we won't ask you to do that here.) 
[1] Aravindh Mahendran, Andrea Vedaldi, "Understanding Deep Image Representations by Inverting them", CVPR 2015 End of explanation
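As a follow-up to the texture-synthesis remark above, here is a hedged sketch that reuses the same style_transfer interface with the content term switched off and a random starting image. The parameter values are illustrative guesses rather than tuned settings.

# Texture synthesis sketch: no content loss, start from uniform random noise
params_texture = {
    'content_image' : 'styles/tubingen.jpg',   # only used to size the generated image
    'style_image' : 'styles/starry_night.jpg',
    'image_size' : 192,
    'style_size' : 192,
    'content_layer' : 3,
    'content_weight' : 0,                      # drop the content term entirely
    'style_layers' : [1, 4, 6, 7],
    'style_weights' : [300000, 1000, 15, 3],
    'tv_weight' : 2e-2,
    'init_random' : True                       # initialize the output image to noise
}
style_transfer(**params_texture)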
14,866
Given the following text description, write Python code to implement the functionality described below step by step Description: Chapter 20 - Tables and Networks In the previous chapter we looked into various types of charts and correlations that are useful for scientific analysis in Python. Here, we present two more groups of visualizations Step1: 1. Tables There are (at least) two ways to output your data as a formatted table Step2: Option 2 Step3: Once you've produced your LaTeX table, it's almost ready to put in your paper. If you're writing an NLP paper and your table contains scores for different system outputs, you might want to make the best scores bold, so that they stand out from the other numbers in the table. More to explore The pandas library is really useful if you work with a lot of data (we'll also use it below). As Jake Vanderplas said in the State of the tools video, the pandas DataFrame is becoming the central format in the Python ecosystem. Here is a page with pandas tutorials. 2. Networks Some data is best visualized as a network. There are several options out there for doing this. The easiest is to use the NetworkX library and either plot the network using Matplotlib, or export it to JSON or GEXF (Graph EXchange Format) and visualize the network using external tools. Let's explore a bit of WordNet today. For this, we'll want to import the NetworkX library, as well as the WordNet module. We'll look at the first synset for dog Step5: Networks are made up out of edges Step6: Now we can actually start drawing the graph. We'll increase the figure size, and use the draw_spring method (that implements the Fruchterman-Reingold layout algorithm). Step7: What is interesting about this is that there is a cycle in the graph! This is because dog has two hypernyms, and those hypernyms are both superseded (directly or indirectly) by animal.n.01. What is not so good is that the graph looks pretty ugly
Python Code: %matplotlib inline Explanation: Chapter 20 - Tables and Networks In the previous chapter we looked into various types of charts and correlations that are useful for scientific analysis in Python. Here, we present two more groups of visualizations: tables and networks. We will spend little attention to these, since they are less/not useful for the final assignment; however, note that they are still often a useful visualization options in practice. At the end of this chapter, you will be able to: - Create formatted tables - Create networks This requires that you already have (some) knowledge about: - Loading and manipulating data. If you want to learn more about these topics, you might find the following links useful: - List of visualization blogs: https://flowingdata.com/2012/04/27/data-and-visualization-blogs-worth-following/ End of explanation from tabulate import tabulate table = [["spam",42],["eggs",451],["bacon",0]] headers = ["item", "qty"] # Documentation: https://pypi.python.org/pypi/tabulate print(tabulate(table, headers, tablefmt="latex_booktabs")) Explanation: 1. Tables There are (at least) two ways to output your data as a formatted table: Using the tabulate package. (You might need to install it first, using conda install tabulate) Using the pandas dataframe method df.to_latex(...), df.to_string(...), or even df.to_clipboard(...). This is extremely useful if you're writing a paper. First version of the 'results' section: done! Option 1: Tabulate End of explanation import pandas as pd # Documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html df = pd.DataFrame(data=table, columns=headers) print(df.to_latex(index=False)) Explanation: Option 2: Pandas DataFrames End of explanation import networkx as nx # You might need to install networkx first (conda install -c anaconda networkx) from nltk.corpus import wordnet as wn from nltk.util import bigrams # This is a useful function. Explanation: Once you've produced your LaTeX table, it's almost ready to put in your paper. If you're writing an NLP paper and your table contains scores for different system outputs, you might want to make the best scores bold, so that they stand out from the other numbers in the table. More to explore The pandas library is really useful if you work with a lot of data (we'll also use it below). As Jake Vanderplas said in the State of the tools video, the pandas DataFrame is becoming the central format in the Python ecosystem. Here is a page with pandas tutorials. 2. Networks Some data is best visualized as a network. There are several options out there for doing this. The easiest is to use the NetworkX library and either plot the network using Matplotlib, or export it to JSON or GEXF (Graph EXchange Format) and visualize the network using external tools. Let's explore a bit of WordNet today. For this, we'll want to import the NetworkX library, as well as the WordNet module. We'll look at the first synset for dog: dog.n.01, and how it's positioned in the WordNet taxonomy. All credits for this idea go to this blog. End of explanation def hypernym_edges(synset): Function that generates a set of edges based on the path between the synset and entity.n.01 edges = set() for path in synset.hypernym_paths(): synset_names = [s.name() for s in path] # bigrams turns a list of arbitrary length into tuples: [(0,1),(1,2),(2,3),...] # edges.update adds novel edges to the set. 
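        # For example, bigrams(['a', 'b', 'c']) yields ('a', 'b') and ('b', 'c'),
        # i.e. one (hypernym, hyponym) edge for every consecutive step on the path.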
edges.update(bigrams(synset_names)) return edges # Use the synset 'dog.n.01' dog = wn.synset('dog.n.01') # Generate a set of edges connecting the synset for 'dog' to the root node (entity.n.01) edges = hypernym_edges(dog) # Create a graph object. G = nx.Graph() # Add all the edges that we generated earlier. G.add_edges_from(edges) Explanation: Networks are made up out of edges: connections between nodes (also called vertices). To build a graph of the WordNet-taxonomy, we need to generate a set of edges. This is what the function below does. End of explanation # Increasing figure size for better display of the graph. from pylab import rcParams rcParams['figure.figsize'] = 11, 11 # Draw the actual graph. nx.draw_spring(G,with_labels=True) Explanation: Now we can actually start drawing the graph. We'll increase the figure size, and use the draw_spring method (that implements the Fruchterman-Reingold layout algorithm). End of explanation # Install pygraphviz first: pip install pygraphviz from networkx.drawing.nx_agraph import graphviz_layout # Let's add 'cat' to the bunch as well. cat = wn.synset('cat.n.01') cat_edges = hypernym_edges(cat) G.add_edges_from(cat_edges) # Use the graphviz layout. First compute the node positions.. positioning = graphviz_layout(G) # And then pass node positions to the drawing function. nx.draw_networkx(G,pos=positioning) Explanation: What is interesting about this is that there is a cycle in the graph! This is because dog has two hypernyms, and those hypernyms are both superseded (directly or indirectly) by animal.n.01. What is not so good is that the graph looks pretty ugly: there are several crossing edges, which is totally unnecessary. There are better layouts implemented in NetworkX, but they do require you to install pygraphviz. Once you've done that, you can execute the next cell. (And if not, then just assume it looks much prettier!) End of explanation
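The external-tool route mentioned above (JSON or GEXF export) can be sketched with NetworkX's built-in writers. The filenames below are arbitrary choices, and G is the hypernym graph built in the previous cells.

import json
import networkx as nx
from networkx.readwrite import json_graph

# GEXF for tools such as Gephi
nx.write_gexf(G, 'wordnet_hypernyms.gexf')

# Node-link JSON for web-based viewers (e.g. d3.js)
with open('wordnet_hypernyms.json', 'w') as f:
    json.dump(json_graph.node_link_data(G), f, indent=2)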
14,867
Given the following text description, write Python code to implement the functionality described below step by step Description: Your first presentation Import Beampy To start, you need to import the beampy module in your python file. .. code-block Step1: Change the position of the text element By default the text element is centred in the x direction and automatically positioned in y, which means that if you add other elements they will be equally spaced vertically. Now we change x and y to numerical values (x=0, y=0), so the text is placed in the upper-left corner. Step2: When the values of x and y are lower than 1.0, they are by default given as a fraction of the slide (or group) width. So if you set x=0.5 and y=0.5x3/4 (as the aspect ratio of the theme is 4/3), the text will be anchored (the default anchor is upper-left) to the center of the slide. Step3: Now we can also set a fixed position for x and y: if x and y are greater than 1.0, their coordinates are interpreted in pixels.
Python Code: from beampy import * # We first create a new document for our presentation # Remove quiet=True to see Beampy compiler output doc = document(quiet=True) # Then we create a new slide with the title "My first new slide" with slide('My first slide title'): # All the slide contents are functions added inside the with statement. # Here we add a text content using the Beampy module text text('Hello Beampy!') # At the end we save our presentation to an HTML file. # The save command will launch the compilation of all slides of the # presentation. save('hello.html') # If you want to save it to pdf just change the file extension. # save('hello.pdf') # This line is just for displaying the slide in this tutorial webpage # Remove it in your presentation display_matplotlib(gcs()) Explanation: Your first presentation Import Beampy To start, you need to import beampy module in your python file. .. code-block:: python from beampy import * Your first slide: Hello Beampy A Beampy presentation is based on the document class, in which all slides and their contents will be stored. Let's create our first slide. End of explanation with slide('My first slide title'): text('Hello Beampy!', x=0, y=0) display_matplotlib(gcs()) Explanation: Change the position of the text element By default the text element is centred in x direction and automatically positioned in y --which means that if you add other elements they will be equally spaced vertically--. Now we change the x and y with numerical values (x=0, y=0), the text is now in the upper-left corner. End of explanation with slide('My first slide title'): text('Hello Beampy!', x=0.5, y=0.5*3/4.) display_matplotlib(gcs()) Explanation: When value of x and y are lower than 1.0, they are by default in percent of slide (or group) width. So if you set x=0.5 and y=0.5x3/4 (as the aspect ratio of the theme is 4/3) the text will be anchored (default anchor is upper-left) to the center of the slide. End of explanation with slide('My first slide title'): text('Hello Beampy!', x=100, y=100) display_matplotlib(gcs()) Explanation: Now we could also set fixed position for x and y. To do so, if x and y are greater than 1.0 and their coordinates will be in pixel. End of explanation
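A small recap sketch, assuming only the beampy calls already used above (document, slide, text, save); it puts the three positioning modes side by side on one slide, with values chosen purely for illustration.

from beampy import *

doc = document(quiet=True)

with slide('Positioning recap'):
    text('Auto-placed (default)')              # centred in x, automatic y
    text('Relative: x=0.25', x=0.25, y=0.25)   # values < 1.0: fraction of the slide width
    text('Absolute: x=100 px', x=100, y=300)   # values > 1.0: pixels

save('positioning.html')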
14,868
Given the following text description, write Python code to implement the functionality described below step by step Description: Content and Objectives Show transmission signals and spectra of BPSK and OOK Random data (and thus random signals) are generated, spectra being estimated by averaging Import Step1: Parameters Step2: Real data-modulated Tx-signal Step3: Plotting
Python Code: # importing import numpy as np import matplotlib.pyplot as plt import matplotlib # showing figures inline %matplotlib inline # plotting options font = {'size' : 20} plt.rc('font', **font) plt.rc('text', usetex=matplotlib.checkdep_usetex(True)) matplotlib.rc('figure', figsize=(14, 6) ) Explanation: Content and Objectives Show transmission signals and spectra of BPSK and OOK Random data (and thus random signals) are generated, spectra being estimated by averaging Import End of explanation # symbol time and number of symbols t_symb = 1.0 n_symb = 100 # samples per symbol n_up = 8 # parameters for frequency regime N_fft = 512 Omega = np.linspace(-np.pi, np.pi, N_fft) f_vec = Omega / ( 2*np.pi*t_symb/n_up ) Explanation: Parameters End of explanation # define rectanguler pulse corresponding to sample-and-hold rect = np.ones( n_up) # number of realizations along which to average the psd estimate n_real = 10 # initialize two-dimensional field for collecting several realizations along which to average OOK = np.zeros( (n_real, N_fft ), dtype=complex ) BPSK = np.zeros( (n_real, N_fft ), dtype=complex ) # variance of 0.1 for the noise sigma2 = .1 # loop for realizations for k in np.arange( n_real ): # generate random binary vector and modulate the specified modulation scheme data = np.random.randint( 2, size = n_symb ) # get signals by putting symbols and filtering s_up_ook = np.zeros( n_symb * n_up ) s_up_ook[ : : n_up ] = np.sqrt(2) * data s_ook = np.convolve( rect, s_up_ook) s_up_bpsk = np.zeros( n_symb * n_up ) s_up_bpsk[ : : n_up ] = (-1)**( data + 1 ) s_bpsk = np.convolve( rect, s_up_bpsk) # get magnitude square in the frequency regima OOK[ k, :] = np.fft.fftshift( np.fft.fft( s_ook, N_fft ) ) BPSK[ k, :] = np.fft.fftshift( np.fft.fft( s_bpsk, N_fft ) ) # average along realizations OOK_PSD_sim = np.average( np.abs( OOK )**2 , axis=0 ) OOK_PSD_sim /= np.max( OOK_PSD_sim ) BPSK_PSD_sim = np.average( np.abs( BPSK )**2, axis=0 ) BPSK_PSD_sim /= np.max( BPSK_PSD_sim ) Explanation: Real data-modulated Tx-signal End of explanation plt.subplot(121) plt.plot( np.arange( np.size( s_ook[ : 20 * n_up] ) ) * t_symb / n_up, s_ook[ : 20 * n_up ], linewidth=2.0, label='OOK', c=(0.875, 0.61, 0.1) ) plt.plot( np.arange( np.size( s_bpsk[ : 20 * n_up] ) ) * t_symb / n_up, s_bpsk[ : 20 * n_up ], linewidth=2.0, label='BPSK', c=(0,0.59,0.51)) plt.ylim( (-1.1, 1.1 ) ) plt.grid(True) plt.legend(loc='lower right') plt.xlabel('$t/T$') plt.title('$s(t)$') plt.xlim( (0, 20) ) plt.ylim( (-2, 2 ) ) plt.subplot(122) np.seterr(divide='ignore') # ignore warning for logarithm of 0 plt.plot( f_vec, 10*np.log10( OOK_PSD_sim ), linewidth=2.0, label='OOK', c=(0.875, 0.61, 0.1) ) plt.plot( f_vec, 10*np.log10( BPSK_PSD_sim ), linewidth=2.0, label='BPSK', c=(0,0.59,0.51) ) np.seterr(divide='warn') # enable warning for logarithm of 0 plt.grid(True); plt.xlabel('$fT$'); plt.title(r'$|\widehat{S}(f)|^2$ (dB)') plt.legend(loc='upper right') plt.ylim( (-60, 10 ) ) plt.xlim( (-4, 4) ) plt.savefig('bpsk_ook.pdf', bbox_inches='tight') Explanation: Plotting End of explanation
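As a sanity check on the averaged estimates, with rectangular pulses of duration $T$ the baseband BPSK spectrum should follow a $\mathrm{sinc}^2(fT)$ envelope. The sketch below overlays that envelope on the simulated curve; it reuses f_vec, t_symb and BPSK_PSD_sim from the cells above and normalizes both curves to 0 dB at $f=0$.

psd_theory = np.sinc(f_vec * t_symb)**2      # np.sinc(x) = sin(pi*x)/(pi*x), so this is sinc^2(fT)
psd_theory /= np.max(psd_theory)

np.seterr(divide='ignore')                   # ignore log10(0) at the spectral nulls
plt.plot(f_vec, 10*np.log10(BPSK_PSD_sim), label='BPSK (simulated)')
plt.plot(f_vec, 10*np.log10(psd_theory), 'k--', label=r'$\mathrm{sinc}^2(fT)$ (theory)')
np.seterr(divide='warn')

plt.grid(True)
plt.xlabel('$fT$')
plt.title(r'$|\widehat{S}(f)|^2$ (dB)')
plt.legend(loc='upper right')
plt.ylim((-60, 10))
plt.xlim((-4, 4))
plt.show()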
14,869
Given the following text description, write Python code to implement the functionality described below step by step Description: Partial Fraction Expansion using Sympy This is an example of using partial fraction expansion within Python. It only covers a tiny fraction of what is possible. As always, it's a good idea to look at the documentation http Step1: Let's try to complete the examples in section 4-3 using Sympy instead of Matlab. Consider the transfer function Step2: Now we can expand using partial fractions Step3: Complex roots Here's another example with complex roots to see what happens $$ Y(s) = U(s) G(s) = \frac{1}{s} \frac{2s+10}{s^2 +2s+10} $$
Python Code: import sympy import numpy as np sympy.init_printing() Explanation: Partial Fraction Expansion using Sympy This is an example for using partial fraction expansion within Python This only covers a tiny fraction of what is possible. As always it's a good idea to look at the documentation http://docs.sympy.org/latest/index.html End of explanation s = sympy.symbols('s') G = (s**4 + 8*s**3+16*s**2+9*s+6) / (s**3 + 6*s**2 + 11*s +6) G Explanation: Lets try to complete the examples in section 4-3 using Sympy instead of Matlab Consider the transfer function: $$ \frac{Y(s)}{U(s)} = \frac{s^4 + 8 s^3 + 16 s^2 + 9s + 6}{s^3 + 6s^2+11s +6} $$ We want to use partial fraction expansion to simplify this expression First we need to define the function in Python End of explanation sympy.apart(G) Explanation: Now we can expand using partial fractions End of explanation F = 1 / s * (2*s+10)/(s**2+2*s+10) F sympy.apart(F, full=True).doit() Explanation: Complex roots Here's another example with complex roots to see what happens $$ Y(s) = U(s) G(s) = \frac{1}{s} \frac{2s+10}{s^2 +2s+10} $$ End of explanation
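A possible follow-up, assuming the symbols defined above: each simple term produced by apart corresponds to an exponential or oscillatory mode in time, which sympy can recover directly with an inverse Laplace transform.

t = sympy.symbols('t', positive=True)
y_t = sympy.inverse_laplace_transform(F, s, t)   # step response of G(s) driven by U(s) = 1/s
sympy.simplify(y_t)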
14,870
Given the following text description, write Python code to implement the functionality described below step by step Description: Class 15 Step1: Example 2 Step2: The previous step constructs a log-linear approximation of the model and then solves for the endogenous variables as functions of the state variables and exogenous shocks only
Python Code: # 1. Input model parameters parameters = pd.Series() parameters['rhoa'] = .9 parameters['sigma'] = 0.001 print(parameters) # 2. Define a function that evaluates the equilibrium conditions def equilibrium_equations(variables_forward,variables_current,parameters): # Parameters p = parameters # Variables fwd = variables_forward cur = variables_current # Exogenous tfp tfp_proc = p.rhoa*np.log(cur.a) - np.log(fwd.a) # Stack equilibrium conditions into a numpy array return np.array([ tfp_proc ]) # 3. Initialize the model model = ls.model(equations = equilibrium_equations, nstates=1, varNames=['a'], shockNames=['eA'], parameters = parameters) # 4. Have linearsolve compute the steady state numerically guess = [1] model.compute_ss(guess) print(model.ss) # 5. Find the log-linear approximation around the non-stochastic steady state and solve model.approximate_and_solve() # 6 (a) Compute impulse responses model.impulse(T=41,t0=5,shock=None) print(model.irs['eA'].head(10)) # 6 (b) Plot impulse responses model.irs['eA'][['eA','a']].plot(lw='5',alpha=0.5,grid=True).legend(loc='upper right',ncol=2) # 6(c) Compute stochastic simulation model.stoch_sim(seed=192,covMat= [[parameters['sigma']]]) print(model.simulated.head(10)) # 6(d) Plot stochastic simulation model.simulated[['eA','a']].plot(lw='5',alpha=0.5,grid=True).legend(loc='upper right',ncol=2) Explanation: Class 15: Introduction to the linearsolve package In general, dynamic stochastic general equilibrium (DSGE) models are time-consuming to work with. The linearsolve Python package approximates, solves, and simulates DSGE models. Example 1: A one-equation model of TFP Consider the following AR(1) specification for $\log$ TFP: \begin{align} \log A_{t+1} & = \rho \log A_t + \epsilon_{t+1} \tag{1} \end{align} where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$. Let's simulate the model with linearsolve. To do this we need to do several things: Create a Pandas series that stores the names of the parameters of the model. Define a function that returns the equilibrium conditions of the model solved for zero. Initialize an instance of the linearsolve.model class Compute and input the steady state of the model. Approximate and solve the model. Compute simulations of the model. End of explanation # 1. Input model parameters parameters = pd.Series() parameters['rhoa'] = .9 parameters['sigma'] = 0.01 parameters['alpha'] = 0.35 parameters['delta'] = 0.025 parameters['s'] = 0.15 print(parameters) # 2. Define a function that evaluates the equilibrium conditions def equilibrium_equations(variables_forward,variables_current,parameters): # Parameters p = parameters # Variables fwd = variables_forward cur = variables_current # Production function prod_fn = cur.a*cur.k**p.alpha - cur.y # Capital evolution capital_evolution = p.s*cur.a*cur.k**p.alpha + (1 - p.delta)*cur.k - fwd.k # Exogenous tfp tfp_proc = p.rhoa*np.log(cur.a) - np.log(fwd.a) # Stack equilibrium conditions into a numpy array return np.array([ prod_fn, capital_evolution, tfp_proc ]) # 3. Initialize the model model = ls.model(equations = equilibrium_equations, nstates=2, varNames=['a','k','y'], # Any order as long as the state variables are named first shockNames=['eA','eK'], # Name a shock for each state variable *even if there is no corresponding shock in the model* parameters = parameters) # 4. Have linearsolve compute the steady state numerically guess = [1,4,1] model.compute_ss(guess) # 5. 
Find the log-linear approximation around the non-stochastic steady state and solve model.approximate_and_solve() Explanation: Example 2: A three-equation model business cycle model Now consider the following system of equations: \begin{align} Y_t & = A_t K_t^{\alpha} \tag{2}\ K_{t+1} & = sY_t + (1-\delta) K_t \tag{3}\ \log A_{t+1} & = \rho \log A_t + \epsilon_{t+1} \tag{4} \end{align} where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$. Let's simulate the model with linearsolve. Before proceding, let's also go ahead and rewrite the model with all variables moved to the lefthand side of the equations: \begin{align} 0 & = A_t K_t^{\alpha} - Y_t \tag{5}\ 0 & = sY_t + (1-\delta) K_t - K_{t+1} \tag{6}\ 0 & = \rho \log A_t + \epsilon_{t+1} - \log A_{t+1} \tag{7} \end{align} Capital and TFP are called state variables because they're $t+1$ values are predetermined. Output is called a costate variable. Note that the model as 3 endogenous variables with 2 state variables. End of explanation # Print the coeficient matrix P print(model.p) # Print the coeficient matrix F print(model.f) # 6 (a) Compute impulse responses and print the computed impulse responses model.impulse(T=41,t0=5,shock=None) print(model.irs['eA'].head(10)) # 6(b) Plot the computed impulse responses to a TFP shock fig = plt.figure(figsize=(12,4)) ax1 = fig.add_subplot(1,2,1) model.irs['eA'][['a','y','k']].plot(lw='5',alpha=0.5,grid=True,ax = ax1).legend(loc='upper right',ncol=2) ax2 = fig.add_subplot(1,2,2) model.irs['eA'][['eA','a']].plot(lw='5',alpha=0.5,grid=True,ax = ax2).legend(loc='upper right',ncol=2) # 6(c) Compute stochastic simulation and print the simulated values model.stoch_sim(seed=192,covMat= [[parameters['sigma'],0],[0,0]]) print(model.simulated.head(10)) # 6(d) Plot the computed stochastic simulation fig = plt.figure(figsize=(12,4)) ax1 = fig.add_subplot(1,2,1) model.simulated[['a','y','k']].plot(lw='5',alpha=0.5,grid=True,ax = ax1).legend(loc='upper right',ncol=3) ax2 = fig.add_subplot(1,2,2) model.simulated[['eA','a']].plot(lw='5',alpha=0.5,grid=True,ax = ax2).legend(loc='upper right',ncol=2) Explanation: The previous step constructs a log-linear approximation of the model and then solves for the endogenous variables as functions of the state variables and exogenous shocks only: \begin{align} \left[ \hat{y}t\right] & = F \left[\begin{array}{c} \hat{a}_t \ \hat{k}_t \end{array}\right]\ \left[\begin{array}{c} \hat{a}{t+1} \ \hat{k}{t+1} \end{array}\right] & = P \left[\begin{array}{c} \hat{a}_t \ \hat{k}_t \end{array}\right] + \left[\begin{array}{c} \epsilon{t+1}^a \ \epsilon^k_{t+1} \end{array}\right]. \end{align} where $F$ and $P$ are coefficient matrices computed by the program. End of explanation
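To make the state-space form above concrete, here is a hand-rolled sketch that iterates it with plain numpy. It assumes the printed model.f and model.p can be cast to arrays with np.array(); the one-percent TFP impulse is illustrative.

import numpy as np

F = np.array(model.f)    # costate loading:  y_hat_t     = F @ x_hat_t
P = np.array(model.p)    # state transition: x_hat_{t+1} = P @ x_hat_t + eps_{t+1}

T_sim = 40
x = np.zeros((T_sim + 1, 2))     # x_hat = [a_hat, k_hat]
y = np.zeros(T_sim + 1)

x[0] = [0.01, 0.0]               # one-percent TFP shock at t = 0, no further shocks
for t in range(T_sim):
    y[t] = (F @ x[t]).item()
    x[t + 1] = P @ x[t]
y[T_sim] = (F @ x[T_sim]).item()

print(y[:5])                     # first few periods of the output response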
14,871
Given the following text description, write Python code to implement the functionality described below step by step Description: Title Step1: Load Iris Data Step2: Create A Linear Step3: View Results Step4: View Percentage Of Variance Retained By New Features
Python Code: # Load libraries from sklearn import datasets from sklearn.discriminant_analysis import LinearDiscriminantAnalysis Explanation: Title: Using Linear Discriminant Analysis For Dimensionality Reduction Slug: lda_for_dimensionality_reduction Summary: How to use linear discriminant analysis for dimensionality reduction using Python. Date: 2017-09-13 12:00 Category: Machine Learning Tags: Feature Engineering Authors: Chris Albon Preliminaries End of explanation # Load the Iris flower dataset: iris = datasets.load_iris() X = iris.data y = iris.target Explanation: Load Iris Data End of explanation # Create an LDA that will reduce the data down to 1 feature lda = LinearDiscriminantAnalysis(n_components=1) # run an LDA and use it to transform the features X_lda = lda.fit(X, y).transform(X) Explanation: Create A Linear End of explanation # Print the number of features print('Original number of features:', X.shape[1]) print('Reduced number of features:', X_lda.shape[1]) Explanation: View Results End of explanation ## View the ratio of explained variance lda.explained_variance_ratio_ Explanation: View Percentage Of Variance Retained By New Features End of explanation
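For comparison, a quick PCA reduction to a single component on the same data; unlike LDA it ignores the class labels, so its retained variance is not class-aware.

from sklearn.decomposition import PCA

pca = PCA(n_components=1)
X_pca = pca.fit_transform(X)

print('Reduced number of features:', X_pca.shape[1])
print('Variance retained by PCA:', pca.explained_variance_ratio_)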
14,872
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have a 2-dimensional numpy array which contains time series data. I want to bin that array into equal partitions of a given length (it is fine to drop the last partition if it is not the same size) and then calculate the mean of each of those bins. Due to some reason, I want the binning to be aligned to the end of the array. That is, discarding the first few elements of each row when misalignment occurs.
Problem:
import numpy as np
data = np.array([[4, 2, 5, 6, 7], [ 5, 4, 3, 5, 7]])
bin_size = 3
new_data = data[:, ::-1]
bin_data_mean = new_data[:,:(data.shape[1] // bin_size) * bin_size].reshape(data.shape[0], -1, bin_size).mean(axis=-1)[:,::-1]
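A quick check of the reversal trick: flipping the columns turns "align the bins to the end of each row" into ordinary left-aligned binning, and the trailing [:, ::-1] restores the bin order. With 5 columns and bin_size = 3 only the last full bin survives ([5, 6, 7] and [3, 5, 7]), so the expected result is:

print(bin_data_mean)   # [[6.]
                       #  [5.]]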
14,873
Given the following text description, write Python code to implement the functionality described below step by step Description: Secton 6.5. The Dalem pumping test (semi-confined, Hantush type) IHE, Transient groundwater Olsthoorn, 2019-01-03 The most famous book on pumping test analyses is due to Krusemand and De Ridder (1970, 1994). Their book contains all known solutions suitable for the analyses of pumping tests on groundwater wells and some examples with data. The Dalem pumping test, held in the Netherlands, is a test in a semi-confined setting, which should yield a value for the aquifers' transmissivity $kD$ [m2/d] and its storage coefficient $S$ [-] and the hydraulic resistance $c$ [d]. The situation in cross section is hier (taken from Kruzeman and De Ridder (1994). Hantush considered the transient flow due to a well with a constant extraction since $t=0$ placed in a uniform confined aquifer of infinite extent that is covered by a layer with uniform hydralic resistance against vertical flow and a fixed head equal to zero maintained above this covering layer. The test can be interpreted from the Theis or the Hantush point of view, i.e. without or with leakage from a layer with fixed head. Which of the two may be deduced from the data Step1: The Hantush well function $$ W(u, \frac r \lambda) = \intop _u ^\infty \frac {e^{-y - \frac {\left( \frac r {\lambda} \right) ^2} {4 y} }} y dy $$ The implementation is readily done by numeric integration using Simpsons rule, with sufficient points te make sure the function is computed accurately enough. Step2: Read the data Step3: Same, but using log scale Step4: Drawdown on double log scale Step5: Drawdown on double log scale using $t/r^2$ on x-axis Step6: Interpretation using the match on double log scales (Classical method) The classical interpreation plots the measured drawdowns on double log paper (drawdown $s$ versus $t/r^2$ and compares them with the Theis type curve $W(u)$ versus $1/u$ also drawn on double log paper. Because $1/u = (4 kD t) / (r^2 S)$ it follows that on logarthmic scales $1/u$ and $t/r^2$ differ only by a constant factor, which represents a horizontal shift on the log scale. The drawdown $s$ only differs the constant $Q/(4 \pi kD$ from the well function $W(u)$, and so this implies a vertical shift on logarithmic scale. Hence the measured drawdown versus $t/r^2$ on double log scale looks exactly the same as the theis type curve but it is only shifted a given distance along the horizontal axis and a given distance along the vertical axis. These two shifts yield the sought transmissivity and storage coefficient. Below we draw the Theis type curve and the drawdown $s$ multiplied by a factor $A$ and the $t/r^2$ multiplied by a factor $B$, choosing $A$ and $B$ interactively untill the measured and the type curve match best. In this worked out example, I already optmized the values of $A$ and $B$ by hand. Set them both to 1 and try optimizing them yourself. Step7: So $A s = W(u)$ and $s = \frac Q {2 \pi kD} W(u)$ and, therefore $A = \frac {4 \pi kD} {Q}$ and $ kD = \frac {A Q} {4 \pi}$ Step8: The storage coefficient then follows from $\frac 1 u = B \frac t {r^2}$, that is, $\frac {4 kD t} {r^2 S} = B \frac t {r^2}$ so that $S = \frac {4 kD} B$ Step9: The vertical resistance is obtained from observing which of the lines depending on $\rho$ the measurements of the individual piezomters follow. 
In this case, the $r=30$ m piezometer seems to follow the type curve for $\rho = 0.03$ and the 90 m curve seems to follow the $\rho = 0.1$ type curve. $$\rho = r / \lambda$$ then yields $$ \lambda = \sqrt{kD c} = \frac r \rho $$ and $$ c = \frac {\left( \frac r \rho \right)^2} {kD} $$
Python Code: from scipy.special import exp1 import numpy as np import matplotlib.pyplot as plt Explanation: Secton 6.5. The Dalem pumping test (semi-confined, Hantush type) IHE, Transient groundwater Olsthoorn, 2019-01-03 The most famous book on pumping test analyses is due to Krusemand and De Ridder (1970, 1994). Their book contains all known solutions suitable for the analyses of pumping tests on groundwater wells and some examples with data. The Dalem pumping test, held in the Netherlands, is a test in a semi-confined setting, which should yield a value for the aquifers' transmissivity $kD$ [m2/d] and its storage coefficient $S$ [-] and the hydraulic resistance $c$ [d]. The situation in cross section is hier (taken from Kruzeman and De Ridder (1994). Hantush considered the transient flow due to a well with a constant extraction since $t=0$ placed in a uniform confined aquifer of infinite extent that is covered by a layer with uniform hydralic resistance against vertical flow and a fixed head equal to zero maintained above this covering layer. The test can be interpreted from the Theis or the Hantush point of view, i.e. without or with leakage from a layer with fixed head. Which of the two may be deduced from the data: will they fit onto the Theis type curve or, when not, do they match with one of the Hantush type curves. Other effects may also influence the data, like partial penetration of the screen in the aquifer, storage inside the well and delayed yield and, notably, any effects caused by non-linearity, such as non-constant aquifer thickness under the influence of the drawdown in water table aquifers. All such effects may play their role under various circumstances, but may initially be ignored, to be included only when the data show that it is necessary. The data for the pumping test are in a small text file "Dalem_data.txt", which we'll open and read into this notebook shortly. We will interpret the test using the Hantush solution for flow to a single well with fully penetrating screen in a uniorm aquifer of infinite extent having as yet unknown transmissivity $kD$ and storage coefficient $S$. $$ s(r, t) = \frac Q {4 \pi kD} W_h(u, \frac r \lambda),\,\,\,\, u = \frac {r^2 S} {4 kD t}, \,\,\, \lambda = \sqrt{kD c}$$ The Hantush well function will be implentend first as it is not available in scipy.special. End of explanation def Wh(U, rho): '''Return Hantus well function for vector of u values and single rho''' W = np.zeros_like(U) for i, u in enumerate(U): W[i] = wh(u, rho) return W def wh(u, rho): '''Return Wh(u, rho) for single value of u and rho''' uMax = 20 # sufficiently high y = np.logspace(np.log10(u), np.log10(uMax), 1000) # enough points, log axis ym = 0.5*(y[:-1] + y[1:]) dy = np.diff(y) return np.sum(np.exp(-ym - rho**2 / (4 * ym)) / ym * dy) Explanation: The Hantush well function $$ W(u, \frac r \lambda) = \intop _u ^\infty \frac {e^{-y - \frac {\left( \frac r {\lambda} \right) ^2} {4 y} }} y dy $$ The implementation is readily done by numeric integration using Simpsons rule, with sufficient points te make sure the function is computed accurately enough. End of explanation fname = './Dalem_data.txt' with open(fname, 'r') as f: data = f.readlines() # read the data as a list of strings hdr = data[0].split() # get the first line, i.e. 
the header data = data[1:] # remove the header line from the data # split each line (string) into its individual tokens # each token is still a string not yet a number toklist = [d.split() for d in data] # convert this list of lines with string tokens into a list of lists with numbers data = [] # start empty for line in toklist: data.append([float(d) for d in line]) # convert this line # when done, convert this list of lists of numbers into a numpy array data = np.array(data) #data # show what we've got # get the piezometer distances from the first data column, the unique values distances = np.unique(data[:,0]) plt.title('Dalem pumping test measured drawdowns') plt.xlabel('t [min]') plt.ylabel('dd [m]') plt.grid() for r in distances: I = data[:,0] == r # boolean array telling which data belong to this observation well plt.plot(data[I, -2], data[I,-1], '.-', label='r={:.0f} m'.format(r)) plt.legend() plt.show() Explanation: Read the data End of explanation plt.title('Dalem pumping test measured drawdowns') plt.xlabel('t [min]') plt.ylabel('dd [m]') plt.xscale('log') plt.grid() for r in distances: I = data[:,0] == r plt.plot(data[I,-2], data[I,-1], '.-', label='r={:.0f} m'.format(r)) plt.legend() plt.show() Explanation: Same, but using log scale End of explanation plt.title('Dalem pumping test measured drawdowns') plt.xlabel('t [min]') plt.ylabel('dd [m]') plt.xscale('log') plt.yscale('log') plt.grid() for r in distances: I = data[:,0] == r plt.plot(data[I,-2], data[I,-1], '.-', label='r={:.0f} m'.format(r)) plt.legend() plt.show() Explanation: Drawdown on double log scale End of explanation plt.title('Dalem pumping test measured drawdowns') plt.xlabel('$t/r^2$ [min/m$^2$]') plt.ylabel('dd [m]') plt.xscale('log') #plt.yscale('log') plt.grid() for r in distances: I = data[:,0] == r tr2 = data[I, -2] / r**2 plt.plot(tr2, data[I,-1], '.-', label='r={:.0f} m'.format(r)) plt.legend() plt.show() Explanation: Drawdown on double log scale using $t/r^2$ on x-axis End of explanation A = 30 B = 5.0e6 u = np.logspace(-4, 0, 41) plt.title('Type curve and $A \times s$ vs $B \times t/r^2$, with $A$={}, $B$={}'.format(A, B)) plt.xlabel('$1/u$ and $B \, t/r^2$') plt.ylabel('W(u) and $A \, s$') plt.xscale('log') plt.yscale('log') plt.grid() # the Theis type curve plt.plot(1/u, exp1(u), 'k', lw=3, label='Theis') for rho in [0.01, 0.03, 0.1, 0.3, 3]: plt.plot(1/u, Wh(u, rho), label='rho={:.2f}'.format(rho)) # The measurements for r in distances : I = data[:,0] == r t = data[I,-2] / (24 * 60) s = data[I,-1] # Q /(4 * np.pi * kD) * exp1(r**2 * S / (4 * kD * t)) plt.plot(B * t/r**2, A * s, 'o', label='$r$= {:.3g} m'.format(r)) plt.legend() plt.show() Explanation: Interpretation using the match on double log scales (Classical method) The classical interpreation plots the measured drawdowns on double log paper (drawdown $s$ versus $t/r^2$ and compares them with the Theis type curve $W(u)$ versus $1/u$ also drawn on double log paper. Because $1/u = (4 kD t) / (r^2 S)$ it follows that on logarthmic scales $1/u$ and $t/r^2$ differ only by a constant factor, which represents a horizontal shift on the log scale. The drawdown $s$ only differs the constant $Q/(4 \pi kD$ from the well function $W(u)$, and so this implies a vertical shift on logarithmic scale. Hence the measured drawdown versus $t/r^2$ on double log scale looks exactly the same as the theis type curve but it is only shifted a given distance along the horizontal axis and a given distance along the vertical axis. 
These two shifts yield the sought transmissivity and storage coefficient. Below we draw the Theis type curve and the drawdown $s$ multiplied by a factor $A$ and the $t/r^2$ multiplied by a factor $B$, choosing $A$ and $B$ interactively untill the measured and the type curve match best. In this worked out example, I already optmized the values of $A$ and $B$ by hand. Set them both to 1 and try optimizing them yourself. End of explanation Q = 761 # m3/d kD = A * Q /4 /np.pi print('kD = {:.0f} m2/d'.format(kD)) Explanation: So $A s = W(u)$ and $s = \frac Q {2 \pi kD} W(u)$ and, therefore $A = \frac {4 \pi kD} {Q}$ and $ kD = \frac {A Q} {4 \pi}$ End of explanation S = 4 * kD / B print('S = {:.2e} [-]'.format(S)) Explanation: The storage coefficient then follows from $\frac 1 u = B \frac t {r^2}$, that is, $\frac {4 kD t} {r^2 S} = B \frac t {r^2}$ so that $S = \frac {4 kD} B$ End of explanation r = 30 rho = 0.03 c = (r/rho)**2 / kD print('c = {:.0f}'.format(c)) r = 90 rho = 0.1 c = (r/rho)**2 / kD print('c = {:.0f}'.format(c)) Explanation: The vertical resistance is obtained from observing which of the lines depending on $\rho$ the measurements of the individual piezomters follow. In this case, the $r=30$ m piezometer seems to follow the type curve for $\rho = 0.03$ and the 90 m curve seems to follow the $\rho = 0.1$ type curve. $$\rho = r / \lambda$$ then yields $$ \lambda = \sqrt{kD c} = \frac r \rho $$ and $$ c = \frac {\left( \frac r \rho \right)^2} {kD} $$ End of explanation
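As a cross-check on the manual curve matching, the same parameters can also be fitted numerically. The sketch below minimizes the misfit of the Hantush drawdown $s = \frac{Q}{4 \pi kD} W_h(u, r/\lambda)$ with $u = \frac{r^2 S}{4 kD t}$ and $\lambda = \sqrt{kD c}$ over all piezometers at once; it reuses Q, data and Wh from the cells above and takes the hand-matched values as the starting guess.

from scipy.optimize import minimize

def misfit(log_params):
    kD_, S_, c_ = np.exp(log_params)            # work in logs to keep the parameters positive
    err = 0.0
    for r_ in np.unique(data[:, 0]):
        I = data[:, 0] == r_
        t_ = data[I, -2] / (24 * 60)            # minutes -> days
        s_obs = data[I, -1]
        u_ = r_**2 * S_ / (4 * kD_ * t_)
        s_mod = Q / (4 * np.pi * kD_) * Wh(u_, r_ / np.sqrt(kD_ * c_))
        err += np.sum((s_obs - s_mod)**2)
    return err

res = minimize(misfit, np.log([kD, S, c]), method='Nelder-Mead')
kD_fit, S_fit, c_fit = np.exp(res.x)
print('kD = {:.0f} m2/d, S = {:.2e} [-], c = {:.0f} d'.format(kD_fit, S_fit, c_fit))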
14,874
Given the following text description, write Python code to implement the functionality described below step by step Description: p-Hacking and Multiple Comparisons Bias By Delaney Mackenzie and Maxwell Margenot. Part of the Quantopian Lecture Series Step1: Refresher Step2: If we add some noise our coefficient will drop. Step3: p-value Refresher For more info on p-values see this lecture. What's important to remember is they're used to test a hypothesis given some data. Here we are testing the hypothesis that a relationship exists between two series given the series values. IMPORTANT Step4: Experiment - Running Many Tests We'll start by defining a data frame. Step5: Now we'll populate it by adding N randomly generated timeseries of length T. Step6: Now we'll run a test on all pairs within our data looking for instances where our p-value is below our defined cutoff of 5%. Step7: Before we check how many significant results we got, let's run out some math to check how many we'd expect. The formula for the number of pairs given N series is $$\frac{N(N-1)}{2}$$ There are no relationships in our data as it's all randomly generated. If our test is properly calibrated we should expect a false positive rate of 5% given our 5% cutoff. Therefore we should expect the following number of pairs that achieved significance based on pure random chance. Step8: Now let's compare to how many we actually found. Step9: We shouldn't expect the numbers to match too closely here on a consistent basis as we've only run one experiment. If we run many of these experiments we should see a convergence to what we'd expect. Repeating the Experiment Step10: The average over many experiments should be closer. Step11: Visualizing What's Going On What's happening here is that p-values should be uniformly distributed, given no signal in the underlying data. Basically, they carry no information whatsoever and will be equally likely to be 0.01 as 0.99. Because they're popping out randomly, you will expect a certain percentage of p-values to be underneath any threshold you choose. The lower the threshold the fewer will pass your test. Let's visualize this by making a modified function that returns p-values. Step12: We'll now collect a bunch of pvalues. As in any case we'll want to collect quite a number of p-values to start getting a sense of how the underlying distribution looks. If we only collect few, it will be noisy like this Step13: Let's dial up our N parameter to get a better sense. Keep in mind that the number of p-values will increase at a rate of $$\frac{N (N-1)}{2}$$ or approximately quadratically. Therefore we don't need to increase N by much. Step14: Starting to look pretty flat, as we expected. Lastly, just to visualize the process of drawing a cutoff, we'll draw two artificial lines. Step15: We can see that with a lower cutoff we should expect to get fewer false positives. Let's check that with our above experiment. Step16: And finally compare it to what we expected. Step17: Sensitivity / Specificity Tradeoff As with any adjustment of p-value cutoff, we have a tradeoff. A lower cutoff decreases the rate of false positives, but also decreases the chance we find a real relationship (true positive). So you can't just decrease your cutoff to solve this problem. https
Python Code: import numpy as np import pandas as pd import scipy.stats as stats import matplotlib.pyplot as plt Explanation: p-Hacking and Multiple Comparisons Bias By Delaney Mackenzie and Maxwell Margenot. Part of the Quantopian Lecture Series: www.quantopian.com/lectures github.com/quantopian/research_public Notebook released under the Creative Commons Attribution 4.0 License. Multiple comparisons bias is a pervasive problem in statistics, data science, and in general forecasting/predictions. The short explanation is that the more tests you run, the more likely you are to get an outcome that you want/expect. If you ignore the multitude of tests that failed, you are clearly setting yourself up for failure by misinterpreting what's going on in your data. A particularly common example of this is when looking for relationships in large data sets comprising of many indepedent series or variables. In this case you run a test each time you evaluate whether a relationship exists between a set of variables. Statistics Merely Illuminates This Issue Most folks also fall prey to multiple comparisons bias in real life. Any time you make a decision you are effectively taking an action based on an hypothesis. That hypothesis is often tested. You can end up unknowingly making many tests in your daily life. An example might be deciding which medicine is helping cure a cold you have. Many people will take multiple medicines at once to try and get rid of symptoms. You may think that a certain medicine worked, when in reality none did and the cold just happened to start getting better at some point. The point here is that this problem doesn't stem from statistical testing and p-values. Rather, these techniques give us much more information about the problem and when it might be occuring. End of explanation X = pd.Series(np.random.normal(0, 1, 100)) Y = X r_s = stats.spearmanr(Y, X) print 'Spearman Rank Coefficient: ', r_s[0] print 'p-value: ', r_s[1] Explanation: Refresher: Spearman Rank Correlation Please refer to this lecture for more full info, but here is a very brief refresher on Spearman Rank Correlation. It's a variation of correlation that takes into account the ranks of the data. This can help with weird distributions or outliers that would confuse other measures. The test also returns a p-value, which is key here. A higher coefficient means a stronger estimated relationship. End of explanation X = pd.Series(np.random.normal(0, 1, 100)) Y = X + np.random.normal(0, 1, 100) r_s = stats.spearmanr(Y, X) print 'Spearman Rank Coefficient: ', r_s[0] print 'p-value: ', r_s[1] Explanation: If we add some noise our coefficient will drop. End of explanation # Setting a cutoff of 5% means that there is a 5% chance # of us getting a significant p-value given no relationship # in our data (false positive). # NOTE: This is only true if the test's assumptions have been # satisfied and the test is therefore properly calibrated. # All tests have different assumptions. cutoff = 0.05 X = pd.Series(np.random.normal(0, 1, 100)) Y = X + np.random.normal(0, 1, 100) r_s = stats.spearmanr(Y, X) print 'Spearman Rank Coefficient: ', r_s[0] if r_s[1] < cutoff: print 'There is significant evidence of a relationship.' else: print 'There is not significant evidence of a relationship.' Explanation: p-value Refresher For more info on p-values see this lecture. What's important to remember is they're used to test a hypothesis given some data. 
Here we are testing the hypothesis that a relationship exists between two series given the series values. IMPORTANT: p-values must be treated as binary. A common mistake is that p-values are treated as more or less significant. This is bad practice as it allows for what's known as p-hacking and will result in more false positives than you expect. Effectively, you will be too likely to convince yourself that relationships exist in your data. To treat p-values as binary, a cutoff must be set in advance. Then the p-value must be compared with the cutoff and treated as significant/not signficant. Here we'll show this. The Cutoff is our Significance Level We can refer to the cutoff as our significance level because a lower cutoff means that results which pass it are significant at a higher level of confidence. So if you have a cutoff of 0.05, then even on random data 5% of tests will pass based on chance. A cutoff of 0.01 reduces this to 1%, which is a more stringent test. We can therefore have more confidence in our results. End of explanation df = pd.DataFrame() Explanation: Experiment - Running Many Tests We'll start by defining a data frame. End of explanation N = 20 T = 100 for i in range(N): X = np.random.normal(0, 1, T) X = pd.Series(X) name = 'X%s' % i df[name] = X df.head() Explanation: Now we'll populate it by adding N randomly generated timeseries of length T. End of explanation cutoff = 0.05 significant_pairs = [] for i in range(N): for j in range(i+1, N): Xi = df.iloc[:, i] Xj = df.iloc[:, j] results = stats.spearmanr(Xi, Xj) pvalue = results[1] if pvalue < cutoff: significant_pairs.append((i, j)) Explanation: Now we'll run a test on all pairs within our data looking for instances where our p-value is below our defined cutoff of 5%. End of explanation (N * (N-1) / 2) * 0.05 Explanation: Before we check how many significant results we got, let's run out some math to check how many we'd expect. The formula for the number of pairs given N series is $$\frac{N(N-1)}{2}$$ There are no relationships in our data as it's all randomly generated. If our test is properly calibrated we should expect a false positive rate of 5% given our 5% cutoff. Therefore we should expect the following number of pairs that achieved significance based on pure random chance. End of explanation len(significant_pairs) Explanation: Now let's compare to how many we actually found. End of explanation def do_experiment(N, T, cutoff=0.05): df = pd.DataFrame() # Make random data for i in range(N): X = np.random.normal(0, 1, T) X = pd.Series(X) name = 'X%s' % i df[name] = X significant_pairs = [] # Look for relationships for i in range(N): for j in range(i+1, N): Xi = df.iloc[:, i] Xj = df.iloc[:, j] results = stats.spearmanr(Xi, Xj) pvalue = results[1] if pvalue < cutoff: significant_pairs.append((i, j)) return significant_pairs num_experiments = 100 results = np.zeros((num_experiments,)) for i in range(num_experiments): # Run a single experiment result = do_experiment(20, 100, cutoff=0.05) # Count how many pairs n = len(result) # Add to array results[i] = n Explanation: We shouldn't expect the numbers to match too closely here on a consistent basis as we've only run one experiment. If we run many of these experiments we should see a convergence to what we'd expect. Repeating the Experiment End of explanation np.mean(results) Explanation: The average over many experiments should be closer. 
End of explanation def get_pvalues_from_experiment(N, T): df = pd.DataFrame() # Make random data for i in range(N): X = np.random.normal(0, 1, T) X = pd.Series(X) name = 'X%s' % i df[name] = X pvalues = [] # Look for relationships for i in range(N): for j in range(i+1, N): Xi = df.iloc[:, i] Xj = df.iloc[:, j] results = stats.spearmanr(Xi, Xj) pvalue = results[1] pvalues.append(pvalue) return pvalues Explanation: Visualizing What's Going On What's happening here is that p-values should be uniformly distributed, given no signal in the underlying data. Basically, they carry no information whatsoever and will be equally likely to be 0.01 as 0.99. Because they're popping out randomly, you will expect a certain percentage of p-values to be underneath any threshold you choose. The lower the threshold the fewer will pass your test. Let's visualize this by making a modified function that returns p-values. End of explanation pvalues = get_pvalues_from_experiment(10, 100) plt.hist(pvalues) plt.ylabel('Frequency') plt.title('Observed p-value'); Explanation: We'll now collect a bunch of pvalues. As in any case we'll want to collect quite a number of p-values to start getting a sense of how the underlying distribution looks. If we only collect few, it will be noisy like this: End of explanation pvalues = get_pvalues_from_experiment(50, 100) plt.hist(pvalues) plt.ylabel('Frequency') plt.title('Observed p-value'); Explanation: Let's dial up our N parameter to get a better sense. Keep in mind that the number of p-values will increase at a rate of $$\frac{N (N-1)}{2}$$ or approximately quadratically. Therefore we don't need to increase N by much. End of explanation pvalues = get_pvalues_from_experiment(50, 100) plt.vlines(0.01, 0, 150, colors='r', linestyle='--', label='0.01 Cutoff') plt.vlines(0.05, 0, 150, colors='r', label='0.05 Cutoff') plt.hist(pvalues, label='P-Value Distribution') plt.legend() plt.ylabel('Frequency') plt.title('Observed p-value'); Explanation: Starting to look pretty flat, as we expected. Lastly, just to visualize the process of drawing a cutoff, we'll draw two artificial lines. End of explanation num_experiments = 100 results = np.zeros((num_experiments,)) for i in range(num_experiments): # Run a single experiment result = do_experiment(20, 100, cutoff=0.01) # Count how many pairs n = len(result) # Add to array results[i] = n np.mean(results) Explanation: We can see that with a lower cutoff we should expect to get fewer false positives. Let's check that with our above experiment. End of explanation (N * (N-1) / 2) * 0.01 Explanation: And finally compare it to what we expected. End of explanation num_experiments = 100 results = np.zeros((num_experiments,)) N = 20 T = 100 desired_level = 0.05 num_tests = N * (N - 1) / 2 new_cutoff = desired_level / num_tests for i in range(num_experiments): # Run a single experiment result = do_experiment(20, 100, cutoff=new_cutoff) # Count how many pairs n = len(result) # Add to array results[i] = n np.mean(results) Explanation: Sensitivity / Specificity Tradeoff As with any adjustment of p-value cutoff, we have a tradeoff. A lower cutoff decreases the rate of false positives, but also decreases the chance we find a real relationship (true positive). So you can't just decrease your cutoff to solve this problem. https://en.wikipedia.org/wiki/Sensitivity_and_specificity Reducing Multiple Comparisons Bias You can't really eliminate multiple comparisons bias, but you can reduce how much it impacts you. To do so we have two options. 
Option 1: Run fewer tests. This is often the best option. Rather than just sweeping around hoping you hit an interesting signal, use your expert knowledge of the system to develop a great hypothesis and test that. This process of exploring the data, coming up with a hypothesis, then gathering more data and testing the hypothesis on the new data is considered the gold standard in statistical and scientific research. It's crucial that the data set on which you develop your hypothesis is not the one on which you test it. Because you found the effect while exploring, the test will likely pass and not really tell you anything. What you want to know is how consistent the effect is. Moving to new data and testing there will not only mean you only run one test, but will be an 'unbiased estimator' of whether your hypothesis is true. We discuss this a lot in other lectures. Option 2: Adjustment Factors and Bon Ferroni Correction WARNING: This section gets a little technical. Unless you're comfortable with significance levels, we recommend looking at the code examples first and maybe reading the linked articles before fully diving into the text. If you must run many tests, try to correct your p-values. This means applying a correction factor to the cutoff you desire to obtain the one actually used when determining whether p-values are significant. The most conservative and common correction factor is Bon Ferroni. Example: Bon Ferroni Correction The concept behind Bon Ferroni is quite simple. It just says that if we run $m$ tests, and we have a significance level/cutoff of $a$, then we should use $a/m$ as our new cutoff when determining significance. The math works out because of the following. Let's say we run $m$ tests. We should expect to see $ma$ false positives based on random chance that pass out cutoff. If we instead use $a/m$ as our cutoff, then we should expect to see $ma/m = a$ tests that pass our cutoff. Therefore we are back to our desired false positive rate of $a$. Let's try it on our experiment above. End of explanation
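In practice the correction does not have to be hand-rolled: statsmodels ships a helper that applies Bonferroni (and less conservative procedures such as Holm or Benjamini-Hochberg) to a list of p-values. A minimal sketch, reusing get_pvalues_from_experiment from above:

from statsmodels.stats.multitest import multipletests

pvalues = get_pvalues_from_experiment(20, 100)
reject, pvals_corrected, _, _ = multipletests(pvalues, alpha=0.05, method='bonferroni')
print('Significant after Bonferroni correction:', reject.sum(), 'of', len(pvalues))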
14,875
Given the following text description, write Python code to implement the functionality described below step by step Description: Theano 패키지 소개 Theano 패키지는 GPU를 지원하는 선형 대수 심볼 컴파일러(Symbolic Linear Algebra Compiler)이다. 심볼 컴파일러란 수치적인 미분, 선형 대수 계산 뿐 아니라 symbolic expression을 통해 정의된 수식(주로 목적 함수)을 사람처럼 미분하거나 재정리하여 전체 계산에 대한 최적의 계산 경로를 찾아내는 소프트웨어를 말한다. 수치 미분을 사용한 연산에 비해 정확도나 속도가 빠르기 때문에 대용량 선형 대수 계산이나 몬테카를로 시뮬레이션, 또는 딥 러닝에 사용된다. Theano 패키지의 또다른 특징은 GPU와 CPU에 대해 동일한 파이썬 코드를 사용할 수 있다는 점이다. 아래는 GPU를 사용한 Theano 연산 결과를 비교한 것이다. <img src="http Step1: 심볼 관계 정의 이미 만들어진 심볼 변수에 일반 사칙연산이나 Theano 수학 함수를 사용하여 종속 변수 역할을 하는 심볼 변수를 만들 수 있다. Step2: 심볼 프린트 Theano의 심볼 변수의 내용을 살펴보기 위해서는 theano.printing.pprint 명령 또는 theano.printing.pydotprint 을 사용한다. Step3: 심볼 함수 심볼 함수는 theano.function 명령으로 정의하며 입력 심볼 변수와 출력 심볼 변수를 지정한다. 출력 심볼 변수는 입력 심볼 변수의 연산으로 정의되어 있어야 한다. 처음 심볼 함수를 정의할 때는 내부적으로 컴파일을 하기 때문에 시간이 다소 걸릴 수 있다. Step4: 함수의 값을 계산하려면 일반 함수와 같이 사용하면 된다. Step5: 벡터와 행렬도 마찬가지 방법으로 사용한다. Step6: 로지스틱 함수나 난수를 사용하는 함수는 다음과 같이 정의한다. Step7: 함수에서 디폴트 인수는 다음과 같이 In 명령을 사용해서 정의한다. Step8: 난수 발생도 theano 의 RandomStreams 명령을 사용해야 한다. Step9: shared memory 상태값을 저장할 때는 shared memory 를 사용할 수 있다. shared memory는 GPU 내부 메모리를 사용하므로 연산 속도가 향상된다. 상태값을 변경하기 위해서는 함수에 updates 라는 콜백(callback)을 정의해야 한다. givens 라는 콜백을 정의하면 함수내에서 출력까지 한번에 정의할 수도 있다. givens Step10: 최적화 Theano는 함수 계산을 위한 최적화를 지원한다. 예를 들어 다음과 같은 함수는 제곱 연산을 사용하여 최적화 할 수 있다. Step11: 함수 실행 속도를 비교해 보면 다음과 같다. Step12: 미분 심볼릭 연산의 가장 큰 장점은 빠르고 정확하게 미분값(gradient, Hessian 등)을 계산할 수 있다는 점이다. Step13: 퍼셉트론 구현(이게 아니고 사실은 로지스틱이라고 보면 된다.) Theano를 사용하면 다음과 같이 퍼셉트론을 구현할 수 있다. 입력, 가중치, 상수항을 각각 x, w, b 로 정의하고 출력 함수를 f로 정의한다. Step14: 출력을 y, 목적 함수를 cost, 목적 함수의 미분(그레디언트)을 gradient로 정의한다. Step15: 초기값에서 그레디언트 값을 계산하고 이를 이용하여 가중치를 갱신한다. Step20: 로지스틱 회귀 구현(멀티클래스 문제) 그 전에는 바이너리. 1아니면 0이었는데. 0,1,2,3,4,5,6 이렇게 여러개 멀티클래스 분류 문제로 넘어간다. Theano를 사용하여 로지스틱 회귀를 구현하면 다음과 같다. 다음 웹사이트를 참조하였다. http Step23: 이 모형의 가중치를 찾기 위한 SGD 알고리즘은 다음과 같이 구현한다.
Python Code: import theano import theano.tensor as T #벡터에서 차원 늘어나면 metrix. 거기서 차원 늘어나면 tensor라고 한다. from theano import function x = T.dscalar('x') #d는 double. float 타입 y = T.dscalar('y') type(x), type(y) Explanation: Theano 패키지 소개 Theano 패키지는 GPU를 지원하는 선형 대수 심볼 컴파일러(Symbolic Linear Algebra Compiler)이다. 심볼 컴파일러란 수치적인 미분, 선형 대수 계산 뿐 아니라 symbolic expression을 통해 정의된 수식(주로 목적 함수)을 사람처럼 미분하거나 재정리하여 전체 계산에 대한 최적의 계산 경로를 찾아내는 소프트웨어를 말한다. 수치 미분을 사용한 연산에 비해 정확도나 속도가 빠르기 때문에 대용량 선형 대수 계산이나 몬테카를로 시뮬레이션, 또는 딥 러닝에 사용된다. Theano 패키지의 또다른 특징은 GPU와 CPU에 대해 동일한 파이썬 코드를 사용할 수 있다는 점이다. 아래는 GPU를 사용한 Theano 연산 결과를 비교한 것이다. <img src="http://deeplearning.net/software/theano/_images/mlp.png"> GPGPU GPU는 GPGPU(General-Purpose computing on Graphics Processing Units: 범용 그래픽 연산 유니트)를 줄어서 쓰는 말이다. 그래픽 작업은 다음 그림에서 보듯 상당한 병렬 연산을 필요로 하기 때문에 일반 CPU와 달리 성능이 낮은 다수의 코어를 동시에 사용할 수 있는 구조를 가지고 있다. 이러한 구조는 단순한 계산을 반복해야 하는 몬테카를로 시뮬레이션이나 딥 러닝에서 상당한 효과를 볼 수 있다. <img src="http://cdn.iopscience.com/images/1742-5468/2009/06/P06016/Full/1354202.jpg"> <img src="http://bioinfo-fr.net/wp-content/uploads/2012/02/cpugpu_1.png"> ALU는 연산유닛. 코어가 4개면 유닛이 4개. GPU는 코어가 CPU에 비해서 많기 때문에 동시에 많은 연산이 가능하다. CUDA vs OpenCL GPU는 기본 구조가 일반적인 CPU와 다르기 때문에 저수준 명령어 체계가 다르므로 별도의 플랫폼과 라이브러리가 필요하다. 현재 많이 사용되는 GPU 연산용 플랫폼에는 nvidia 계열의 CUDA와 Apple, AMD, Intel 계열의 OpenCL 이 있다. 파이썬에서는 pyCUDA 패키지와 pyOpenCL 패키지를 사용할 수 있다. 이미 CUDA가 앞서 있어서 많이 쓴다. Theano 기본 사용법 다음은 Theano를 사용하는데 도움이 되는 참조 문서 목록이다. http://ir.hit.edu.cn/~jguo/docs/notes/a_simple_tutorial_on_theano.pdf http://mlg.eng.cam.ac.uk/yarin/PDFs/RCC-Auto-Diff-presentation.pdf http://speech.ee.ntu.edu.tw/~tlkagk/courses/MLDS_2015_2/Lecture/Theano%20DNN.pdf 뒤에서 c코드로 만들어서 빌드해서 쓸 수 있게끔 변경해준다. CPU, GPU에서 모두 쓰일 수 있게 Theano를 사용하기 위해서는 다음과 같은 과정을 거쳐야 한다. 심볼 변수 정의 심볼 관계 정의 심볼 함수 정의 심볼 함수 사용 심볼 변수 정의 Theano의 모든 변수는 심볼 변수이므로 수치 변수와 혼동이 되지 않게 별도로 정의해야 한다. 스칼라, 벡터, 행렬을 정의하기 위해 theano.tensor.T 서브패키지의 dscalar, dvector, dmatrix 명령을 사용하거나 이미 심볼로 정의된 변수의 연산을 통해 자동으로 정의된다. End of explanation z = x + y type(z) u = T.exp(z) type(u) Explanation: 심볼 관계 정의 이미 만들어진 심볼 변수에 일반 사칙연산이나 Theano 수학 함수를 사용하여 종속 변수 역할을 하는 심볼 변수를 만들 수 있다. End of explanation theano.printing.pprint(x) theano.printing.pprint(y) theano.printing.pprint(z) theano.printing.pprint(u) from IPython.display import SVG SVG(theano.printing.pydotprint(z, return_image=True, format='svg')) SVG(theano.printing.pydotprint(u, return_image=True, format='svg')) Explanation: 심볼 프린트 Theano의 심볼 변수의 내용을 살펴보기 위해서는 theano.printing.pprint 명령 또는 theano.printing.pydotprint 을 사용한다. End of explanation %time f = theano.function(inputs=[x, y], outputs=z) Explanation: 심볼 함수 심볼 함수는 theano.function 명령으로 정의하며 입력 심볼 변수와 출력 심볼 변수를 지정한다. 출력 심볼 변수는 입력 심볼 변수의 연산으로 정의되어 있어야 한다. 처음 심볼 함수를 정의할 때는 내부적으로 컴파일을 하기 때문에 시간이 다소 걸릴 수 있다. End of explanation f(2, 3) Explanation: 함수의 값을 계산하려면 일반 함수와 같이 사용하면 된다. End of explanation x2 = T.dvector('x2') y2 = T.dvector('y2') z2 = x2 + y2 f2 = function([x2, y2], z2) f2([1, 2], [3, 4]) x3 = T.dmatrix('x3') y3= T.dmatrix('y3') z3 = x3 + y3 f3 = function([x3, y3], z3) f3([[1], [2]], [[3], [4]]) Explanation: 벡터와 행렬도 마찬가지 방법으로 사용한다. End of explanation s = 1 / (1 + T.exp(-x)) logistic = function([x], s) logistic(1) s2 = 1 / (1 + T.exp(-x2)) logistic2 = function([x2], s2) logistic2([0, 1]) Explanation: 로지스틱 함수나 난수를 사용하는 함수는 다음과 같이 정의한다. End of explanation from theano import In x, y = T.dscalars('x', 'y') z = x + y f = theano.function([x, In(y, value=3)], z) f(1) Explanation: 함수에서 디폴트 인수는 다음과 같이 In 명령을 사용해서 정의한다. 
End of explanation from theano.tensor.shared_randomstreams import RandomStreams srng = RandomStreams() rv_u = srng.uniform((1,2)) rv_n = srng.normal((1,2)) f = function([], rv_u) g = function([], rv_n, no_default_updates=True) #Not updating rv_n.rng f(), f(), f(), f() g(), g(), g(), g() Explanation: 난수 발생도 theano 의 RandomStreams 명령을 사용해야 한다. End of explanation state = theano.shared(0) #0으로 한 것은 초기화 inc = T.iscalar('inc') out = T.iscalar('out') accumulator = function(inputs=[inc], outputs=out, givens={out: inc * 2}, updates={state: state + inc}) accumulator(1) state.get_value() accumulator(2) state.get_value() accumulator(5) state.get_value() Explanation: shared memory 상태값을 저장할 때는 shared memory 를 사용할 수 있다. shared memory는 GPU 내부 메모리를 사용하므로 연산 속도가 향상된다. 상태값을 변경하기 위해서는 함수에 updates 라는 콜백(callback)을 정의해야 한다. givens 라는 콜백을 정의하면 함수내에서 출력까지 한번에 정의할 수도 있다. givens: 함수 실행 전에 계산되는 값 updates: 함수 실행 후에 계산되는 값 shared memory 에 저장된 값은 get_value 메서드로 읽는다. 위에 CPU, GPU 설명하는 그림에서 GPU 칩안에 메모리가 들어있는 부분. vmemory. 바깥으로 CPU로 왔다갔다 안해도 되기 때문에 속도가 빠르다. v메모리 안에 보유하고 있기 때문에. End of explanation x = T.vector('x') y = x ** 10 f = theano.function([x], y) SVG(theano.printing.pydotprint(f, return_image=True, format='svg')) Explanation: 최적화 Theano는 함수 계산을 위한 최적화를 지원한다. 예를 들어 다음과 같은 함수는 제곱 연산을 사용하여 최적화 할 수 있다. End of explanation x = np.ones(10000000) %timeit x ** 10 %timeit f(x) Explanation: 함수 실행 속도를 비교해 보면 다음과 같다. End of explanation x = T.dscalar('x') y = x ** 2 gy = T.grad(y, x) fy = theano.function([x], y) fgy = theano.function([x], gy) SVG(theano.printing.pydotprint(fy.maker.fgraph.outputs[0], return_image=True, format='svg')) SVG(theano.printing.pydotprint(fgy.maker.fgraph.outputs[0], return_image=True, format='svg')) x = T.dscalar('x') #시그모이드의 사례. 자주 사용하기 때문에 룰이 들어가 있다. s = 1 / (1 + T.exp(-x)) logistic = theano.function([x], s) gs = T.grad(s, x) dlogistic = theano.function([x], gs) SVG(theano.printing.pydotprint(logistic, return_image=True, format='svg')) SVG(theano.printing.pydotprint(dlogistic.maker.fgraph.outputs[0], return_image=True, format='svg')) xx = np.linspace(-5, 5, 100) y1 = np.hstack([logistic(xi) for xi in xx]) y2 = np.hstack([dlogistic(xi) for xi in xx]) plt.plot(xx, y1, label="logistic") plt.plot(xx, y2, label="derivative of logistic") plt.legend(loc=0) plt.show() Explanation: 미분 심볼릭 연산의 가장 큰 장점은 빠르고 정확하게 미분값(gradient, Hessian 등)을 계산할 수 있다는 점이다. End of explanation x = T.dvector('x') w = theano.shared(np.zeros(2)) b = theano.shared(0.0) z = T.dot(w, x) + b a = 1/(1 + T.exp(-z)) f = function([x], a) Explanation: 퍼셉트론 구현(이게 아니고 사실은 로지스틱이라고 보면 된다.) Theano를 사용하면 다음과 같이 퍼셉트론을 구현할 수 있다. 입력, 가중치, 상수항을 각각 x, w, b 로 정의하고 출력 함수를 f로 정의한다. End of explanation y = T.dscalar('y') cost = T.sum((y - a)**2) #목적함수 정의. 에러펑션을 줄이기 위해서 gw, gb = T.grad(cost, [w, b]) gradient = function([x, y], [gw, gb]) Explanation: 출력을 y, 목적 함수를 cost, 목적 함수의 미분(그레디언트)을 gradient로 정의한다. End of explanation eta = 0.1 xi = [1, -1] yi = 1 gradient = theano.function([x, y], updates=[(w, w - eta * gw), (b, b - eta * gb)]) gradient(xi, yi) w.get_value(), b.get_value() Explanation: 초기값에서 그레디언트 값을 계산하고 이를 이용하여 가중치를 갱신한다. End of explanation class LogisticRegression(object): Multi-class Logistic Regression Class The logistic regression is fully described by a weight matrix :math:`W` and bias vector :math:`b`. Classification is done by projecting data points onto a set of hyperplanes, the distance to which is used to determine a class membership probability. 
def __init__(self, input, n_in, n_out): Initialize the parameters of the logistic regression :type input: theano.tensor.TensorType :param input: symbolic variable that describes the input of the architecture (one minibatch) :type n_in: int :param n_in: number of input units, the dimension of the space in which the datapoints lie :type n_out: int :param n_out: number of output units, the dimension of the space in which the labels lie # initialize with 0 the weights W as a matrix of shape (n_in, n_out) self.W = theano.shared(value=np.zeros((n_in, n_out), dtype=theano.config.floatX), name='W', borrow=True) # initialize the biases b as a vector of n_out 0s self.b = theano.shared(value=np.zeros((n_out,), dtype=theano.config.floatX), name='b', borrow=True) # symbolic expression for computing the matrix of class-membership probabilities Where: # W is a matrix where column-k represent the separation hyperplane for class-k # x is a matrix where row-j represents input training sample-j # b is a vector where element-k represent the free parameter of hyperplane-k self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b) #노트 필기 참고 # symbolic description of how to compute prediction as class whose probability is maximal self.y_pred = T.argmax(self.p_y_given_x, axis=1) # parameters of the model self.params = [self.W, self.b] # keep track of model input self.input = input def negative_log_likelihood(self, y): Return the mean of the negative log-likelihood of the prediction of this model under a given target distribution. :type y: theano.tensor.TensorType :param y: corresponds to a vector that gives for each example the correct label Note: we use the mean instead of the sum so that the learning rate is less dependent on the batch size # y.shape[0] is (symbolically) the number of rows in y, i.e., number of examples (call it n) in the minibatch # T.arange(y.shape[0]) is a symbolic vector which will contain [0,1,2,... n-1] # T.log(self.p_y_given_x) is a matrix of Log-Probabilities (call it LP) with one row per example and one column per class # LP[T.arange(y.shape[0]),y] is a vector v containing [LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ..., # LP[n-1,y[n-1]]] and T.mean(LP[T.arange(y.shape[0]),y]) is the mean (across minibatch examples) of the elements in v, # i.e., the mean log-likelihood across the minibatch. return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y]) #y는 원핫인코딩 되어 들어가서 0100, 1000 이런식으로 들어간다. #metrix로 들어간 것이 아니라 y가 3이라면 3번째 것만 뽑겠다. #이 구현에서는 샘플을 모두 다 넣는다. 미니 배치 사이즈. 28X28= 500. 10개만 나오게 된다. summation을 해주어야 한다. def errors(self, y): Return a float representing the number of errors in the minibatch over the total number of examples of the minibatch ; zero one loss over the size of the minibatch :type y: theano.tensor.TensorType :param y: corresponds to a vector that gives for each example the correct label # check if y has same dimension of y_pred if y.ndim != self.y_pred.ndim: raise TypeError('y should have the same shape as self.y_pred', ('y', y.type, 'y_pred', self.y_pred.type)) # check if y is of the correct datatype if y.dtype.startswith('int'): # the T.neq operator returns a vector of 0s and 1s, where 1 represents a mistake in prediction return T.mean(T.neq(self.y_pred, y)) else: raise NotImplementedError() Explanation: 로지스틱 회귀 구현(멀티클래스 문제) 그 전에는 바이너리. 1아니면 0이었는데. 0,1,2,3,4,5,6 이렇게 여러개 멀티클래스 분류 문제로 넘어간다. Theano를 사용하여 로지스틱 회귀를 구현하면 다음과 같다. 다음 웹사이트를 참조하였다. http://deeplearning.net/tutorial/logreg.html 이 모형은 Softmax 함수를 사용하여 멀티 클래스 출력을 구현하였다. 
$$ \begin{eqnarray} P(Y=i \mid x, W,b) &= \text{softmax}_i(W x + b) = \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}} \end{eqnarray} $$ $$ \begin{eqnarray} y_{pred} = {\rm argmax}_i P(Y=i \mid x,W,b) \end{eqnarray} $$ Teano에서의 핵심 코드는 def negative_log_likelihood(self, y): 에서 return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y]) 이것. 이거 점수가 낮아야 좋은 것이기 때문이다. 리니어리그레션은 y값이 명시되어야 하지만 여기는 명시 되지 않아도 된다. End of explanation import timeit def sgd_optimization_mnist(learning_rate=0.13, n_epochs=1000, dataset='mnist.pkl.gz', batch_size=600): Demonstrate stochastic gradient descent optimization of a log-linear model This is demonstrated on MNIST. :type learning_rate: float :param learning_rate: learning rate used (factor for the stochastic gradient) :type n_epochs: int :param n_epochs: maximal number of epochs to run the optimizer :type dataset: string :param dataset: the path of the MNIST dataset file from http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz datasets = load_data(dataset) train_set_x, train_set_y = datasets[0] valid_set_x, valid_set_y = datasets[1] test_set_x, test_set_y = datasets[2] # compute number of minibatches for training, validation and testing n_train_batches = train_set_x.get_value(borrow=True).shape[0] // batch_size n_valid_batches = valid_set_x.get_value(borrow=True).shape[0] // batch_size n_test_batches = test_set_x.get_value(borrow=True).shape[0] // batch_size ###################### # BUILD ACTUAL MODEL # ###################### print('... building the model') # allocate symbolic variables for the data index = T.lscalar() # index to a [mini]batch # generate symbolic variables for input (x and y represent a minibatch) x = T.matrix('x') # data, presented as rasterized images y = T.ivector('y') # labels, presented as 1D vector of [int] labels # construct the logistic regression class # Each MNIST image has size 28*28 classifier = LogisticRegression(input=x, n_in=28 * 28, n_out=10) # the cost we minimize during training is the negative log likelihood of the model in symbolic format cost = classifier.negative_log_likelihood(y) # compiling a Theano function that computes the mistakes that are made by the model on a minibatch test_model = theano.function( inputs=[index], outputs=classifier.errors(y), givens={ x: test_set_x[index * batch_size: (index + 1) * batch_size], y: test_set_y[index * batch_size: (index + 1) * batch_size] } ) validate_model = theano.function( inputs=[index], outputs=classifier.errors(y), givens={ x: valid_set_x[index * batch_size: (index + 1) * batch_size], y: valid_set_y[index * batch_size: (index + 1) * batch_size] } ) # compute the gradient of cost with respect to theta = (W,b) g_W = T.grad(cost=cost, wrt=classifier.W) g_b = T.grad(cost=cost, wrt=classifier.b) # start-snippet-3 # specify how to update the parameters of the model as a list of # (variable, update expression) pairs. updates = [(classifier.W, classifier.W - learning_rate * g_W), (classifier.b, classifier.b - learning_rate * g_b)] # compiling a Theano function `train_model` that returns the cost, but in # the same time updates the parameter of the model based on the rules defined in `updates` train_model = theano.function( inputs=[index], outputs=cost, updates=updates, givens={ x: train_set_x[index * batch_size: (index + 1) * batch_size], y: train_set_y[index * batch_size: (index + 1) * batch_size] } ) # end-snippet-3 ############### # TRAIN MODEL # ############### print('... 
training the model') # early-stopping parameters patience = 5000 # look as this many examples regardless patience_increase = 2 # wait this much longer when a new best is found improvement_threshold = 0.995 # a relative improvement of this much is considered significant #멈추게 하는 값 validation_frequency = min(n_train_batches, patience // 2) # go through this many minibatche before checking the network # on the validation set; in this case we check every epoch best_validation_loss = np.inf test_score = 0. start_time = timeit.default_timer() done_looping = False epoch = 0 while (epoch < n_epochs) and (not done_looping): epoch = epoch + 1 for minibatch_index in range(n_train_batches): minibatch_avg_cost = train_model(minibatch_index) # iteration number iter = (epoch - 1) * n_train_batches + minibatch_index if (iter + 1) % validation_frequency == 0: # compute zero-one loss on validation set validation_losses = [validate_model(i) for i in range(n_valid_batches)] this_validation_loss = np.mean(validation_losses) print( 'epoch %2i, minibatch %i/%i, validation error %12.4f %%' % ( epoch, minibatch_index + 1, n_train_batches, this_validation_loss * 100. ) ) # if we got the best validation score until now if this_validation_loss < best_validation_loss: #improve patience if loss improvement is good enough if this_validation_loss < best_validation_loss * improvement_threshold: patience = max(patience, iter * patience_increase) best_validation_loss = this_validation_loss # test it on the test set test_losses = [test_model(i) for i in range(n_test_batches)] test_score = np.mean(test_losses) # save the best model with open('best_model.pkl', 'wb') as f: pickle.dump(classifier, f) if patience <= iter: done_looping = True break end_time = timeit.default_timer() print( ( 'Optimization complete with best validation score of %f %%,' 'with test performance %f %%' ) % (best_validation_loss * 100., test_score * 100.) ) print('The code run for %d epochs, with %f epochs/sec' % (epoch, 1. * epoch / (end_time - start_time))) import six.moves.cPickle as pickle import gzip import os import sys def load_data(dataset): ''' Loads the dataset :type dataset: string :param dataset: the path to the dataset (here MNIST) ''' ############# # LOAD DATA # ############# # Download the MNIST dataset if it is not present data_dir, data_file = os.path.split(dataset) if (not os.path.isfile(dataset)) and data_file == 'mnist.pkl.gz': from six.moves import urllib origin = ( 'http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz' ) print('Downloading data from %s' % origin) urllib.request.urlretrieve(origin, dataset) print('... loading data') # Load the dataset with gzip.open(dataset, 'rb') as f: try: train_set, valid_set, test_set = pickle.load(f, encoding='latin1') except: train_set, valid_set, test_set = pickle.load(f) # train_set, valid_set, test_set format: tuple(input, target) # input is a np.ndarray of 2 dimensions (a matrix) # where each row corresponds to an example. target is a # np.ndarray of 1 dimension (vector) that has the same length as # the number of rows in the input. It should give the target # to the example with the same index in the input. def shared_dataset(data_xy, borrow=True): Function that loads the dataset into shared variables The reason we store our dataset in shared variables is to allow Theano to copy it into the GPU memory (when code is run on GPU). 
Since copying data into the GPU is slow, copying a minibatch everytime is needed (the default behaviour if the data is not in a shared variable) would lead to a large decrease in performance. data_x, data_y = data_xy shared_x = theano.shared(np.asarray(data_x, dtype=theano.config.floatX), borrow=borrow) shared_y = theano.shared(np.asarray(data_y, dtype=theano.config.floatX), borrow=borrow) # When storing data on the GPU it has to be stored as floats # therefore we will store the labels as ``floatX`` as well # (``shared_y`` does exactly that). But during our computations # we need them as ints (we use labels as index, and if they are # floats it doesn't make sense) therefore instead of returning # ``shared_y`` we will have to cast it to int. This little hack # lets ous get around this issue return shared_x, T.cast(shared_y, 'int32') test_set_x, test_set_y = shared_dataset(test_set) valid_set_x, valid_set_y = shared_dataset(valid_set) train_set_x, train_set_y = shared_dataset(train_set) rval = [(train_set_x, train_set_y), (valid_set_x, valid_set_y), (test_set_x, test_set_y)] return rval %time sgd_optimization_mnist() Explanation: 이 모형의 가중치를 찾기 위한 SGD 알고리즘은 다음과 같이 구현한다. End of explanation
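As a follow-up, the pickled model can be reloaded to classify new images. The sketch below mirrors the predict() helper from the deeplearning.net tutorial referenced in this section; it assumes best_model.pkl was written by the training loop above and that the load_data() helper defined above is available in the same session.

import pickle
import theano

def predict(dataset='mnist.pkl.gz', n_examples=10):
    # Reload the classifier saved as best_model.pkl by the training loop.
    with open('best_model.pkl', 'rb') as f:
        classifier = pickle.load(f)

    # Compile a function mapping input images to predicted digit labels.
    predict_model = theano.function(inputs=[classifier.input],
                                    outputs=classifier.y_pred)

    # Reuse the load_data() helper defined above to fetch the test set.
    datasets = load_data(dataset)
    test_set_x, _ = datasets[2]
    test_set_x = test_set_x.get_value()

    predicted_values = predict_model(test_set_x[:n_examples])
    print('Predicted digits for the first %d test examples:' % n_examples)
    print(predicted_values)

predict()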
14,876
Given the following text description, write Python code to implement the functionality described below step by step Description: The Ames Housing dataset was compiled by Dean De Cock for use in data science education. It's an incredible alternative for data scientists looking for a modernized and expanded version of the often cited Boston Housing dataset. Import required libraries Step1: Load train and test dataset Step2: Basic EDA and model training Want to do basic EDA? AutoML contains automated Exploratory data analysis for input data, by default it performs and saved the automated eda report for the given training dataset. For more details check here The model training is done using AutoML.fit() method, you can control the and select the algorithms to be used and the training time etc, please refer docs here for more details. Want to do automated feature engineering? mljar provides golden features - Golden Features are new features constructed from original data which have great predictive power. Set the golden_features parameter to True and see if work. What to do cross-validation? specify your cross-validation strategy in validation_stategy parameter in AutoML. What to do ML Explainability? AutoML provides feature importances and SHAP value explanations for tree based models. This is controlled by explain_level parameter in AutoML. Refer docs for more information. Step3: Predict on test
Python Code: import pandas as pd from supervised.automl import AutoML from supervised.preprocessing.eda import EDA Explanation: The Ames Housing dataset was compiled by Dean De Cock for use in data science education. It's an incredible alternative for data scientists looking for a modernized and expanded version of the often cited Boston Housing dataset. Import required libraries End of explanation train_df = pd.read_csv("data/house_price_train.csv") test_df = pd.read_csv("data/house_price_test.csv") train_df.head() print("\nThe train data size after dropping Id feature is : {} ".format(train_df.shape)) print("The test data size after dropping Id feature is : {} ".format(test_df.shape)) X_train = train_df.drop(['SalePrice'],axis=1) y_train = train_df['SalePrice'] Explanation: Load train and test dataset End of explanation a = AutoML(algorithms=['Xgboost'],total_time_limit=30, explain_level=2,golden_features=True, validation_strategy={ "validation_type": "kfold", "k_folds": 3, "shuffle": False, "stratify": True, }) a.fit(X_train,y_train) Explanation: Basic EDA and model training Want to do basic EDA? AutoML contains automated Exploratory data analysis for input data, by default it performs and saved the automated eda report for the given training dataset. For more details check here The model training is done using AutoML.fit() method, you can control the and select the algorithms to be used and the training time etc, please refer docs here for more details. Want to do automated feature engineering? mljar provides golden features - Golden Features are new features constructed from original data which have great predictive power. Set the golden_features parameter to True and see if work. What to do cross-validation? specify your cross-validation strategy in validation_stategy parameter in AutoML. What to do ML Explainability? AutoML provides feature importances and SHAP value explanations for tree based models. This is controlled by explain_level parameter in AutoML. Refer docs for more information. End of explanation predictions = a.predict(test_df) submission = pd.read_csv("data/sample_submission.csv") submission['SalePrice'] = predictions submission.head() Explanation: Predict on test End of explanation
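Since everything below relies on the mljar-supervised package, a quick optional version check helps with reproducibility. This is a small sketch; the printed versions will of course differ between environments.

from importlib.metadata import version

print('pandas:', pd.__version__)
print('mljar-supervised:', version('mljar-supervised'))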
14,877
Given the following text description, write Python code to implement the functionality described below step by step Description: Express Deep Learning in Python - Part 1 Do you have everything ready? Check the part 0! How fast can you build a MLP? In this first part we will see how to implement the basic components of a MultiLayer Perceptron (MLP) classifier, most commonly known as Neural Network. We will be working with the Keras Step1: 2 - The dataset For this quick tutorial we will use the (very popular) MNIST dataset. This is a dataset of 70K images of handwritten digits. Our task is to recognize which digits is displayed in the image Step2: 3 - The model The concept of Deep Learning is very broad, but the core of it is the use of classifiers with multiple hidden layer of neurons, or smaller classifiers. We all know the classical image of the simplest possible possible deep model Step3: We have successfully build a Neural Network! We can print a description of our architecture using the following command Step4: Compiling a model in Keras A very appealing aspect of Deep Learning frameworks is that they solve the implementation of complex algorithms such as Backpropagation. For those with some numerical optimization notions, minimization algorithms often involve the calculation of first defivatives. Neural Networks are huge functions full of non-linearities, and differentiating them is a... nightmare. For this reason, models need to be "compiled". In this stage, the backend builds complex computational graphs, and we don't have to worry about derivatives or gradients. In Keras, a model can be compiled with the method .compile(). The method takes two parameters Step5: [OPTIONAL] We can now visualize the architecture of our model using the vis_util tools. It's a very schematic view, but you can check it's not that different from the image we saw above (and that we intended to replicate). If you can't execute this step don't worry, you can still finish the tutorial. This step requires graphviz and pydotplus libraries. Step6: Training Once the model is compiled, everything is ready to train the classifier. Keras' Sequential model has a similar interface as the sklearn library that you have seen before, with fit and predict methods. As usual, we need to pass our training examples and their corresponding labels. Other parameters needed to train a neural network is the size of the batch and the number of epochs. We have two ways of specifying a validation dataset Step7: We have trained our model! Additionally, Keras has printed out a lot of information of the training, thanks to the parameter verbose=1 that we passed to the fit function. We can see how many time it took in each iteration, and the value of the loss and metrics in the training and the validation dataset. The same information is stored in the output of the fit method, which sadly it's not well documented. We can see it in a pretty table with pandas. Step8: Why is this useful? This will give you an insight on how well your network is optimizing the loss, and how much it's actually learning. When training, you need to keep track of two things Step9: As you can see, using only 10 training epochs we get a very surprising accuracy in the training and test dataset. If you want to take a deeper look into your model, you can obtain the predictions as a vector and then use general purpose tools to explore the results. For example, we can plot the confusion matrix to see the most common errors.
Python Code: import numpy import keras from keras.models import Sequential from keras.layers import Dense, Dropout from keras.datasets import mnist Explanation: Express Deep Learning in Python - Part 1 Do you have everything ready? Check the part 0! How fast can you build a MLP? In this first part we will see how to implement the basic components of a MultiLayer Perceptron (MLP) classifier, most commonly known as Neural Network. We will be working with the Keras: a very simple library for deep learning. At this point, you may know how machine learning in general is applied and have some intuitions about how deep learning works, and more importantly, why it works. Now it's time to make some experiments, and for that you need to be as quick and flexible as possible. Keras is an idea tool for prototyping and doing your first approximations to a Machine Learning problem. On the one hand, Keras is integrated with two very powerfull backends that support GPU computations, Tensorflow and Theano. On the other hand, it has a level of abstraction high enough to be simple to understand and easy to use. For example, it uses a very similar interface to the sklearn library that you have seen before, with fit and predict methods. Now let's get to work with an example: 1 - The libraries Firts let's check we have installed everything we need for this tutorial: End of explanation batch_size = 128 num_classes = 10 epochs = 10 TRAIN_EXAMPLES = 60000 TEST_EXAMPLES = 10000 # the data, shuffled and split between train and test sets (x_train, y_train), (x_test, y_test) = mnist.load_data() # reshape the dataset to convert the examples from 2D matrixes to 1D arrays. x_train = x_train.reshape(60000, 28*28) x_test = x_test.reshape(10000, 28*28) # to make quick runs, select a smaller set of images. train_mask = numpy.random.choice(x_train.shape[0], TRAIN_EXAMPLES, replace=False) x_train = x_train[train_mask, :].astype('float32') y_train = y_train[train_mask] test_mask = numpy.random.choice(x_test.shape[0], TEST_EXAMPLES, replace=False) x_test = x_test[test_mask, :].astype('float32') y_test = y_test[test_mask] # normalize the input x_train /= 255 x_test /= 255 # convert class vectors to binary class matrices y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) Explanation: 2 - The dataset For this quick tutorial we will use the (very popular) MNIST dataset. This is a dataset of 70K images of handwritten digits. Our task is to recognize which digits is displayed in the image: a classification problem. You have seen in previous courses how to train and evaluate a classifier, so we wont talk in further details about supervised learning. The input to the MLP classifier are going to be images of 28x28 pixels represented as matrixes. The output will be one of ten classes (0 to 9), representing the predicted number written in the image. End of explanation model = Sequential() # Input to hidden layer model.add(Dense(512, activation='relu', input_shape=(784,))) # Hidden to output layer model.add(Dense(10, activation='softmax')) Explanation: 3 - The model The concept of Deep Learning is very broad, but the core of it is the use of classifiers with multiple hidden layer of neurons, or smaller classifiers. We all know the classical image of the simplest possible possible deep model: a neural network with a single hidden layer. 
credits http://www.extremetech.com/wp-content/uploads/2015/07/NeuralNetwork.png In theory, this model can represent any function TODO add a citation here. We will see how to implement this network in Keras, and during the second part of this tutorial how to add more features to create a deep and powerful classifier. First, Deep Learning models are concatenations of Layers. This is represented in Keras with the Sequential model. We create the Sequential instance as an "empty carcass" and then we fill it with different layers. The most basic type of Layer is the Dense layer, where each neuron in the input is connected to each neuron in the following layer, like we can see in the image above. Internally, a Dense layer has two variables: a matrix of weights and a vector of bias, but the beauty of Keras is that you don't need to worry about that. All the variables will be correctly created, initialized, trained and possibly regularized for you. Each layer needs to know or be able to calculate al least three things: The size of the input: the number of neurons in the incoming layer. For the first layer this corresponds to the size of each example in our dataset. The next layers can calculate their input size using the output of the previous layer, so we generally don't need to tell them this. The type of activation: this is the function that is applied to the output of each neuron. Will talk in detail about this later. The size of the output: the number of neurons in the next layer. End of explanation model.summary() Explanation: We have successfully build a Neural Network! We can print a description of our architecture using the following command: End of explanation model.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.SGD(), metrics=['accuracy']) Explanation: Compiling a model in Keras A very appealing aspect of Deep Learning frameworks is that they solve the implementation of complex algorithms such as Backpropagation. For those with some numerical optimization notions, minimization algorithms often involve the calculation of first defivatives. Neural Networks are huge functions full of non-linearities, and differentiating them is a... nightmare. For this reason, models need to be "compiled". In this stage, the backend builds complex computational graphs, and we don't have to worry about derivatives or gradients. In Keras, a model can be compiled with the method .compile(). The method takes two parameters: loss and optimizer. The loss is the function that calculates how much error we have in each prediction example, and there are a lot of implemented alternatives ready to use. We will talk more about this, for now we use the standard categorical crossentropy. As you can see, we can simply pass a string with the name of the function and Keras will find the implementation for us. The optimizer is the algorithm to minimize the value of the loss function. Again, Keras has many optimizers available. The basic one is the Stochastic Gradient Descent. We pass a third argument to the compile method: the metric. Metrics are measures or statistics that allows us to keep track of the classifier's performance. It's similar to the loss, but the results of the metrics are not use by the optimization algorithm. Besides, metrics are always comparable, while the loss function can take random values depending on your problem. Keras will calculate metrics and loss both on the training and the validation dataset. 
That way, we can monitor how other performance metrics vary when the loss is optimized and detect anomalies like overfitting. End of explanation from IPython.display import SVG from keras.utils.vis_utils import model_to_dot SVG(model_to_dot(model).create(prog='dot', format='svg')) Explanation: [OPTIONAL] We can now visualize the architecture of our model using the vis_util tools. It's a very schematic view, but you can check it's not that different from the image we saw above (and that we intended to replicate). If you can't execute this step don't worry, you can still finish the tutorial. This step requires graphviz and pydotplus libraries. End of explanation history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)); Explanation: Training Once the model is compiled, everything is ready to train the classifier. Keras' Sequential model has a similar interface as the sklearn library that you have seen before, with fit and predict methods. As usual, we need to pass our training examples and their corresponding labels. Other parameters needed to train a neural network is the size of the batch and the number of epochs. We have two ways of specifying a validation dataset: we can pass the tuple of values and labels directly with the validation_data parameter, or we can pass a proportion to the validation_split argument and Keras will split the training dataset for us. To correctly train our model we need to pass two important parameters to the fit function: * batch_size: is the number of examples to use in each "minibatch" iteration of the Stochastic Gradient Descent algorithm. This is necessary for most optimization algorithms. The size of the batch is important because it defines how fast the algorithm will perform each iteration and also how much memory will be used to load each batch (possibly in the GPU). * epochs: is the number of passes through the entire dataset. We need enough epochs for the classifier to converge, but we need to stop before the classifier starts overfitting. End of explanation import pandas pandas.DataFrame(history.history) Explanation: We have trained our model! Additionally, Keras has printed out a lot of information of the training, thanks to the parameter verbose=1 that we passed to the fit function. We can see how many time it took in each iteration, and the value of the loss and metrics in the training and the validation dataset. The same information is stored in the output of the fit method, which sadly it's not well documented. We can see it in a pretty table with pandas. End of explanation score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) Explanation: Why is this useful? This will give you an insight on how well your network is optimizing the loss, and how much it's actually learning. When training, you need to keep track of two things: Your network is actually learning. This means your training loss is decreasing in average. If it's going up or it's stuck for more than a couple of epochs is safe to stop you training and try again. You network is not overfitting. It's normal to have a gap between the validation and the training metrics, but they should decrease more or less at the same rate. If you see that your metrics for training are getting better but your validation metrics are getting worse, it is also a good point to stop and fix your overfitting problem. 
Evaluation Keras gives us a very useful method to evaluate the current performance called evaluate (surprise!). Evaluate will return the value of the loss function and all the metrics that we pass to the model when calling compile. End of explanation prediction = model.predict_classes(x_test) import seaborn as sns from sklearn.metrics import confusion_matrix sns.set_style('white') sns.set_palette('colorblind') matrix = confusion_matrix(numpy.argmax(y_test, 1), prediction) figure = sns.heatmap(matrix / matrix.astype(numpy.float).sum(axis=1), xticklabels=range(10), yticklabels=range(10), cmap=sns.cubehelix_palette(8, as_cmap=True)) Explanation: As you can see, using only 10 training epochs we get a very surprising accuracy in the training and test dataset. If you want to take a deeper look into your model, you can obtain the predictions as a vector and then use general purpose tools to explore the results. For example, we can plot the confusion matrix to see the most common errors. End of explanation
14,878
Given the following text description, write Python code to implement the functionality described below step by step Description: Compute a sparse inverse solution using the Gamma-Map empirical Bayesian method See [1]_ for details. References .. [1] D. Wipf, S. Nagarajan "A unified Bayesian framework for MEG/EEG source imaging", Neuroimage, Volume 44, Number 3, pp. 947-966, Feb. 2009. DOI Step1: Plot dipole activations Step2: Show the evoked response and the residual for gradiometers Step3: Generate stc from dipoles Step4: View in 2D and 3D ("glass" brain like 3D plot) Show the sources as spheres scaled by their strength
Python Code: # Author: Martin Luessi <[email protected]> # Daniel Strohmeier <[email protected]> # # License: BSD (3-clause) import numpy as np import mne from mne.datasets import sample from mne.inverse_sparse import gamma_map, make_stc_from_dipoles from mne.viz import (plot_sparse_source_estimates, plot_dipole_locations, plot_dipole_amplitudes) print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects' fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' evoked_fname = data_path + '/MEG/sample/sample_audvis-ave.fif' cov_fname = data_path + '/MEG/sample/sample_audvis-cov.fif' # Read the evoked response and crop it condition = 'Left visual' evoked = mne.read_evokeds(evoked_fname, condition=condition, baseline=(None, 0)) evoked.crop(tmin=-50e-3, tmax=300e-3) # Read the forward solution forward = mne.read_forward_solution(fwd_fname) # Read noise noise covariance matrix and regularize it cov = mne.read_cov(cov_fname) cov = mne.cov.regularize(cov, evoked.info) # Run the Gamma-MAP method with dipole output alpha = 0.5 dipoles, residual = gamma_map( evoked, forward, cov, alpha, xyz_same_gamma=True, return_residual=True, return_as_dipoles=True) Explanation: Compute a sparse inverse solution using the Gamma-Map empirical Bayesian method See [1]_ for details. References .. [1] D. Wipf, S. Nagarajan "A unified Bayesian framework for MEG/EEG source imaging", Neuroimage, Volume 44, Number 3, pp. 947-966, Feb. 2009. DOI: 10.1016/j.neuroimage.2008.02.059 End of explanation plot_dipole_amplitudes(dipoles) # Plot dipole location of the strongest dipole with MRI slices idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles]) plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample', subjects_dir=subjects_dir, mode='orthoview', idx='amplitude') # # Plot dipole locations of all dipoles with MRI slices # for dip in dipoles: # plot_dipole_locations(dip, forward['mri_head_t'], 'sample', # subjects_dir=subjects_dir, mode='orthoview', # idx='amplitude') Explanation: Plot dipole activations End of explanation ylim = dict(grad=[-120, 120]) evoked.pick_types(meg='grad', exclude='bads') evoked.plot(titles=dict(grad='Evoked Response Gradiometers'), ylim=ylim, proj=True, time_unit='s') residual.pick_types(meg='grad', exclude='bads') residual.plot(titles=dict(grad='Residuals Gradiometers'), ylim=ylim, proj=True, time_unit='s') Explanation: Show the evoked response and the residual for gradiometers End of explanation stc = make_stc_from_dipoles(dipoles, forward['src']) Explanation: Generate stc from dipoles End of explanation scale_factors = np.max(np.abs(stc.data), axis=1) scale_factors = 0.5 * (1 + scale_factors / np.max(scale_factors)) plot_sparse_source_estimates( forward['src'], stc, bgcolor=(1, 1, 1), modes=['sphere'], opacity=0.1, scale_factors=(scale_factors, None), fig_name="Gamma-MAP") Explanation: View in 2D and 3D ("glass" brain like 3D plot) Show the sources as spheres scaled by their strength End of explanation
14,879
Given the following text description, write Python code to implement the functionality described below step by step Description: Análisis de los datos obtenidos Uso de ipython para el análsis y muestra de los datos obtenidos durante la producción. Los datos analizados son del filamento de bq el día 20 de Julio del 2015 Step1: Representamos ambos diámetros en la misma gráfica Step2: Mostramos la representación gráfica de la media de las muestras Step3: Comparativa de Diametro X frente a Diametro Y para ver el ratio del filamento Step4: Filtrado de datos Las muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas. Step5: Representación de X/Y Step6: Analizamos datos del ratio Step7: Límites de calidad Calculamos el número de veces que traspasamos unos límites de calidad. $Th^+ = 1.85$ and $Th^- = 1.65$
Python Code: #Importamos las librerías utilizadas import numpy as np import pandas as pd import seaborn as sns import sklearn as sk from sklearn.linear_model import Ridge from sklearn.preprocessing import PolynomialFeatures from sklearn.pipeline import make_pipeline #Mostramos las versiones usadas de cada librerías print ("Numpy v{}".format(np.__version__)) print ("Pandas v{}".format(pd.__version__)) print ("Seaborn v{}".format(sns.__version__)) print ("Sklearn v{}".format(sk.__version__)) #Abrimos el fichero csv con los datos de la muestra datos = pd.read_csv('prueba1.csv') datos_filtrados = datos[(datos['Diametro X'] >= 1.2) & (datos['Diametro Y'] >= 1.2)] %pylab inline #Mostramos un resumen de los datos obtenidoss datos_filtrados.describe() #datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']] #Almacenamos en una lista las columnas del fichero con las que vamos a trabajar #columns = ['Diametro X', 'Diametro Y', 'RPM TRAC'] columns = ['Diametro X', 'RPM TRAC'] #Mostramos en varias gráficas la información obtenida tras el ensayo datos_filtrados[columns].plot(secondary_y=['RPM TRAC'],figsize=(20,20)) #datos_filtrados['RPM TRAC'].plot(secondary_y=True,style='g',figsize=(20,20)).set_ylabel=('RPM') # Buscamos el polinomio de orden 4 que determina la distribución de los datos reg = np.polyfit(datos_filtrados['time'],datos_filtrados['Diametro X'],4) # Calculamos los valores de y con la regresión ry = np.polyval(reg,datos_filtrados['time']) print ('P(x)= {} {}*X {}*X^2 {}*X^3 {}*X^4'.format(reg[0],reg[1],reg[2],reg[3],reg[4]) ) plt.plot(datos_filtrados['time'],datos_filtrados['Diametro X'],'*', label=('f(x)')) plt.plot(datos_filtrados['time'],ry,'ro', label=('regression')) plt.legend(loc=0) plt.grid(True) plt.xlabel('x') plt.ylabel('f(x)') Explanation: Análisis de los datos obtenidos Uso de ipython para el análsis y muestra de los datos obtenidos durante la producción. Los datos analizados son del filamento de bq el día 20 de Julio del 2015 End of explanation datos_filtrados.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,3)) datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes') Explanation: Representamos ambos diámetros en la misma gráfica End of explanation pd.rolling_mean(datos[columns], 50).plot(subplots=True, figsize=(12,12)) Explanation: Mostramos la representación gráfica de la media de las muestras End of explanation plt.scatter(x=datos['Diametro X [mm]'], y=datos['Diametro Y [mm]'], marker='.') Explanation: Comparativa de Diametro X frente a Diametro Y para ver el ratio del filamento End of explanation datos_filtrados = datos[(datos['Diametro X [mm]'] >= 0.9) & (datos['Diametro Y [mm]'] >= 0.9)] Explanation: Filtrado de datos Las muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas. 
End of explanation plt.scatter(x=datos_filtrados['Diametro X [mm]'], y=datos_filtrados['Diametro Y [mm]'], marker='.') Explanation: Representación de X/Y End of explanation ratio = datos_filtrados['Diametro X [mm]']/datos_filtrados['Diametro Y [mm]'] ratio.describe() rolling_mean = pd.rolling_mean(ratio, 50) rolling_std = pd.rolling_std(ratio, 50) rolling_mean.plot(figsize=(12,6)) # plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5) ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5)) Explanation: Analizamos datos del ratio End of explanation Th_u = 1.85 Th_d = 1.65 data_violations = datos[(datos['Diametro X [mm]'] > Th_u) | (datos['Diametro X [mm]'] < Th_d) | (datos['Diametro Y [mm]'] > Th_u) | (datos['Diametro Y [mm]'] < Th_d)] data_violations.describe() data_violations.plot(subplots=True, figsize=(12,12)) Explanation: Límites de calidad Calculamos el número de veces que traspasamos unos límites de calidad. $Th^+ = 1.85$ and $Th^- = 1.65$ End of explanation
14,880
Given the following text description, write Python code to implement the functionality described below step by step Description: Example 4 Step1: Example 1 Step2: Next we set up an instance of NPTFit and add in the data. We'll analyze the entire sky at once, so we won't add in a mask. Step3: Now we add in templates, one to describe isotropic Poissonian emission and one for isotropic point sources. Note the different syntax requires for each. Step4: We add in both models, being careful to select the right template. Here we model the PS point spread function as a singly broken power law, which requires four parameters to describe it Step5: Once everything is setup, we configure and perform the scan, and then show the triangle plot and flux fraction plot. Step6: We see that the Poissonian template has absorbed essentially everything, whilst the non-Poissonian parameters are poorly converged - expected as there were no point sources injected. Example 2 Step7: Now we repeat all the steps used in the example without point sources. Step8: We now see that both the Poissonian and non-Poissonian parameters are quite well converged. Note that the indices both want to have a large magnitude, which makes sense as we have effectively injected a delta function in flux, and the singly broken power law is trying to mimic that. Note that Sb is well converged near 50 counts per source, which is what we injected. Example 3 Step9: Now we again analyze this data. Critically, note that when we configure the scan we set nexp=2 to indicate that the code should run with 2 exposure regions. In this simple example we know that 2 is all we need, but in real situations it is worth trying various values of nexp to see where results converge. Step10: Everything is again well converged. Note that this time Sb has converged near 75, not 50. This is exactly what should be expected though, as the mean number of injected counts per PS over the sky is 50 x mean(exposure) = 75. To highlight the importance of the exposure regions, let's repeat this using only one exposure region which we emphasize is the wrong thing to do.
Python Code: # Import relevant modules %matplotlib inline %load_ext autoreload %autoreload 2 import numpy as np import healpy as hp import matplotlib.pyplot as plt from matplotlib import rcParams from NPTFit import nptfit # module for performing scan from NPTFit import dnds_analysis # module for analysing the output from __future__ import print_function Explanation: Example 4: Simple NPTF example In this example we perform a non-Poissonian template fit in a simplified setting. Specifically we will restrict ourselves to randomly generated nside=2 maps, which means our data consists only of 48 pixels. Nevertheless in this simple setting we will be able to clearly see the difference between Poissonian and non-Poissonian statistics as well as basic features of how non-Poissonian template fitting is performed with the code. Throughout this example we will assume there is no smearing of the counts coming from all point sources. The effect of a finite point spread function on the statistics and how to account for it is discussed in Example 5. End of explanation nside = 2 npix = hp.nside2npix(nside) data = np.random.poisson(1,npix).astype(np.int32) exposure = np.ones(npix) hp.mollview(data,title='Fake Data') hp.mollview(exposure,title='Exposure Map') Explanation: Example 1: A map without point sources We start out by analyzing a map without any point sources, using a uniform exposure map. First let's create and plot our random data. End of explanation n = nptfit.NPTF(tag='SimpleNPTF_Example') n.load_data(data, exposure) Explanation: Next we set up an instance of NPTFit and add in the data. We'll analyze the entire sky at once, so we won't add in a mask. End of explanation iso = np.ones(npix) n.add_template(iso, 'iso_p', units='flux') n.add_template(iso, 'iso_np', units='PS') Explanation: Now we add in templates, one to describe isotropic Poissonian emission and one for isotropic point sources. Note the different syntax requires for each. End of explanation n.add_poiss_model('iso_p', '$A_\mathrm{iso}$', [0,2], False) n.add_non_poiss_model('iso_np', ['$A^\mathrm{ps}_\mathrm{iso}$','$n_1$','$n_2$','$S_b$'], [[-10,1],[2.05,60],[-60,1.95],[0.01,200]], [True,False,False,False]) Explanation: We add in both models, being careful to select the right template. Here we model the PS point spread function as a singly broken power law, which requires four parameters to describe it: the normalization A, the indices above and below the breaks n1 and n2, and the location of the break Sb. More details on the forms for the non-Poissonian model can be found in Example 6. End of explanation n.configure_for_scan() n.perform_scan(nlive=500) n.load_scan() an = dnds_analysis.Analysis(n) an.make_triangle() plt.show() plt.close() an.plot_intensity_fraction_poiss('iso_p', bins=20, color='cornflowerblue', label='Poissonian') an.plot_intensity_fraction_non_poiss('iso_np', bins=20, color='firebrick', label='non-Poissonian') plt.xlabel('Flux fraction (\%)') plt.legend(fancybox = True) plt.xlim(0,100); plt.ylim(0,0.4); Explanation: Once everything is setup, we configure and perform the scan, and then show the triangle plot and flux fraction plot. End of explanation for ips in range(10): data[np.random.randint(npix)] += np.random.poisson(50) hp.mollview(data,title='Fake Data with point sources') Explanation: We see that the Poissonian template has absorbed essentially everything, whilst the non-Poissonian parameters are poorly converged - expected as there were no point sources injected. 
Example 2: A map with point sources We now repeat the analysis above, but now add in 10 mean 50 count point sources. First lets take the data from above and add the point sources. End of explanation n = nptfit.NPTF(tag='SimpleNPTF_Example') n.load_data(data,exposure) iso = np.ones(npix) n.add_template(iso, 'iso_p',units='flux') n.add_template(iso, 'iso_np',units='PS') n.add_poiss_model('iso_p', '$A_\mathrm{iso}$', [0,2], False) n.add_non_poiss_model('iso_np', ['$A^\mathrm{ps}_\mathrm{iso}$','$n_1$','$n_2$','$S_b$'], [[-10,1],[2.05,60],[-60,1.95],[0.01,200]], [True,False,False,False]) n.configure_for_scan() n.perform_scan(nlive=500) n.load_scan() an = dnds_analysis.Analysis(n) an.make_triangle() plt.show() plt.close() Explanation: Now we repeat all the steps used in the example without point sources. End of explanation nside = 2 npix = hp.nside2npix(nside) exposure = np.zeros(npix) exposure[0:int(npix/2)] = 1.0 exposure[int(npix/2):npix] = 2.0 data = np.random.poisson(exposure).astype(np.int32) for ips in range(10): loc = np.random.randint(npix) data[loc] += np.random.poisson(50*exposure[loc]) hp.mollview(data,title='Fake Data with point sources') hp.mollview(exposure,title='non-uniform Exposure Map') Explanation: We now see that both the Poissonian and non-Poissonian parameters are quite well converged. Note that the indices both want to have a large magnitude, which makes sense as we have effectively injected a delta function in flux, and the singly broken power law is trying to mimic that. Note that Sb is well converged near 50 counts per source, which is what we injected. Example 3: A map with point sources and non-uniform exposure map We will now repeat the above exercise but on a map without uniform exposure. This will highlight the importance of exposure regions. To begin with let's again create the data, but now we pretend that one side of the sky is expected to obtain twice as many counts as the other (which could occur if the instrument looked at that half of the sky twice as long for example). End of explanation n = nptfit.NPTF(tag='SimpleNPTF_Example') n.load_data(data,exposure) iso = np.ones(npix) n.add_template(iso, 'iso_p',units='flux') n.add_template(iso, 'iso_np',units='PS') n.add_poiss_model('iso_p', '$A_\mathrm{iso}$', [0,2], False) n.add_non_poiss_model('iso_np', ['$A^\mathrm{ps}_\mathrm{iso}$','$n_1$','$n_2$','$S_b$'], [[-10,1],[2.05,60],[-60,1.95],[0.01,200]], [True,False,False,False]) n.configure_for_scan(nexp=2) n.perform_scan(nlive=500) n.load_scan() an = dnds_analysis.Analysis(n) an.make_triangle() plt.show() plt.close() Explanation: Now we again analyze this data. Critically, note that when we configure the scan we set nexp=2 to indicate that the code should run with 2 exposure regions. In this simple example we know that 2 is all we need, but in real situations it is worth trying various values of nexp to see where results converge. End of explanation n = nptfit.NPTF(tag='SimpleNPTF_Example') n.load_data(data,exposure) iso = np.ones(npix) n.add_template(iso, 'iso_p',units='flux') n.add_template(iso, 'iso_np',units='PS') n.add_poiss_model('iso_p', '$A_\mathrm{iso}$', [0,2], False) n.add_non_poiss_model('iso_np', ['$A^\mathrm{ps}_\mathrm{iso}$','$n_1$','$n_2$','$S_b$'], [[-10,1],[2.05,60],[-60,1.95],[0.01,200]], [True,False,False,False]) n.configure_for_scan(nexp=1) n.perform_scan(nlive=500) n.load_scan() an = dnds_analysis.Analysis(n) an.make_triangle() plt.show() plt.close() Explanation: Everything is again well converged. 
Note that this time Sb has converged near 75, not 50. This is exactly what should be expected though, as the mean number of injected counts per PS over the sky is 50 x mean(exposure) = 75. To highlight the importance of the exposure regions, let's repeat this using only one exposure region which we emphasize is the wrong thing to do. End of explanation
14,881
Given the following text description, write Python code to implement the functionality described below step by step Description: Hospital readmissions data analysis and recommendations for reduction Background In October 2012, the US government's Center for Medicare and Medicaid Services (CMS) began reducing Medicare payments for Inpatient Prospective Payment System hospitals with excess readmissions. Excess readmissions are measured by a ratio, by dividing a hospital’s number of “predicted” 30-day readmissions for heart attack, heart failure, and pneumonia by the number that would be “expected,” based on an average hospital with similar patients. A ratio greater than 1 indicates excess readmissions. Exercise overview In this exercise, you will Step1: Preliminary analysis Step2: Preliminary report A. Initial observations based on the plot above + Overall, rate of readmissions is trending down with increasing number of discharges + With lower number of discharges, there is a greater incidence of excess rate of readmissions (area shaded red) + With higher number of discharges, there is a greater incidence of lower rates of readmissions (area shaded green) B. Statistics + In hospitals/facilities with number of discharges < 100, mean excess readmission rate is 1.023 and 63% have excess readmission rate greater than 1 + In hospitals/facilities with number of discharges > 1000, mean excess readmission rate is 0.978 and 44% have excess readmission rate greater than 1 C. Conclusions + There is a significant correlation between hospital capacity (number of discharges) and readmission rates. + Smaller hospitals/facilities may be lacking necessary resources to ensure quality care and prevent complications that lead to readmissions. D. Regulatory policy recommendations + Hospitals/facilties with small capacity (< 300) should be required to demonstrate upgraded resource allocation for quality care to continue operation. + Directives and incentives should be provided for consolidation of hospitals and facilities to have a smaller number of them with higher capacity and number of discharges. Exercise Include your work on the following in this notebook and submit to your Github account. A. Do you agree with the above analysis and recommendations? Why or why not? B. Provide support for your arguments and your own recommendations with a statistically sound analysis Step3: Do you agree with the above analysis and recommendation? No the above analysis is extremely week. First of all it talks about a decreasing trend, but visually this seems to only be confirmed by focusing on a few outliers of small discharge numbers and large discharge numbers, completely ignoring the very large amount of data points that make up the bulk of the data. While there may be a difference, there is no actual analysis; no attempt to fit a line or statistically quantify the difference in these means. The conclusions are not made with respect to any actual inferential statistic or fitting, nor is there any rationale given for the choice of small and large hospital cutoffs. Concluding there is a 'strong correlation' when visually it appears weak at best, and without actually trying to quantify it means the conclusions are not trustworthy. Furthermore, the recommendation then makes a sugggestion for hospitals with a small capacity of < 300, but there is no logic or indication of where this cut off was derived. 
It also makes a very bold recommendation of forcing hospitals to demonstrate upgraded resources and providing incentives to consolidate hospital facilities, which do not directly follow from the analysis. Not enough evidence or analysis was presented for either of the recommendations to be appropriate recommendations. While this project is assigned before one has done linear regressions, it doesn't make sense to discuss a trend without including the bulk of your data, as recommendations will affect these facilities as well as the others. Therefore, we will conduct a ANOVA test to analyse the difference between small, medium and large hosptials. Provide support for your arguments and your own recommendations with a statistically sound analysis Step4: First we'd like to change the two groups under investigation to three, in order to better match the groupings we see visually. Step5: Now our variances are not all the same, so we can't know we can't use the usual normal one-way ANOVA, and instead will use Welch's ANOVA or a Kruskal-Wallis test. First we need to check for normality though. Step6: Visually, the hisogram seems to show that all three distributions are gaussian, the probability plot for the small hospitals has a weaker determination coefficient ($R^2$) than I'd like. There are a lot of outliers at the end. Perhaps we can remedy this by shifting our first group to only start at 100 discharges and go to about 150. So still smaller facilities, but a different subset. If one does however, the improvement ($R^2 \approx 0.90) doesn't bring it up to the level of the other two groups. So this helps, but there is still is still a lot of variance not explained by regressing to a gaussian distribution. Personally, I will proceed by performing both a Kruskal-Wallis test, and not a Welch F-test, being aware that the latter may be affected by only weakly meeting the critieria of all samples being drawn from a normal distribution. Step7: We find a pvalue of $1.8 \times 10^{-16}$, which is much lower than the given $\alpha = 0.01$. Thus we conclude that there is a significant difference between at least two of the means. We need to perform a post-hoc test to determine which of the means is statsitically different. Unfortunately, scipy.stats doesn't seem to have any appropriate built in post-hoc tests for the Kruskal-Wallis test (or indeed for many ANOVA tests). Since I didn't feel like coding the test from scratch, and knowing someone would have already done it, I looked to see if there was some other library which included Post-hoc tests. The following function performs the exact same calculation as above but also includes the results of Dunn's test. The code was found through google at
Python Code: import pandas as pd import numpy as np import matplotlib.pyplot as plt import bokeh.plotting as bkp from mpl_toolkits.axes_grid1 import make_axes_locatable %matplotlib inline # read in readmissions data provided hospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv') Explanation: Hospital readmissions data analysis and recommendations for reduction Background In October 2012, the US government's Center for Medicare and Medicaid Services (CMS) began reducing Medicare payments for Inpatient Prospective Payment System hospitals with excess readmissions. Excess readmissions are measured by a ratio, by dividing a hospital’s number of “predicted” 30-day readmissions for heart attack, heart failure, and pneumonia by the number that would be “expected,” based on an average hospital with similar patients. A ratio greater than 1 indicates excess readmissions. Exercise overview In this exercise, you will: + critique a preliminary analysis of readmissions data and recommendations (provided below) for reducing the readmissions rate + construct a statistically sound analysis and make recommendations of your own More instructions provided below. Include your work in this notebook and submit to your Github account. Resources Data source: https://data.medicare.gov/Hospital-Compare/Hospital-Readmission-Reduction/9n3s-kdb3 More information: http://www.cms.gov/Medicare/medicare-fee-for-service-payment/acuteinpatientPPS/readmissions-reduction-program.html Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet End of explanation # deal with missing and inconvenient portions of data clean_hospital_read_df = hospital_read_df[(hospital_read_df['Number of Discharges'] != 'Not Available')] clean_hospital_read_df.loc[:,'Number of Discharges'] = clean_hospital_read_df['Number of Discharges'].astype(int) clean_hospital_read_df = clean_hospital_read_df.sort('Number of Discharges') # generate a scatterplot for number of discharges vs. excess rate of readmissions # lists work better with matplotlib scatterplot function x = [a for a in clean_hospital_read_df['Number of Discharges'][81:-3]] y = list(clean_hospital_read_df['Excess Readmission Ratio'][81:-3]) fig, ax = plt.subplots(figsize=(8,5)) ax.scatter(x, y,alpha=0.2) ax.fill_between([0,350], 1.15, 2, facecolor='red', alpha = .15, interpolate=True) ax.fill_between([800,2500], .5, .95, facecolor='green', alpha = .15, interpolate=True) ax.set_xlim([0, max(x)]) ax.set_xlabel('Number of discharges', fontsize=12) ax.set_ylabel('Excess rate of readmissions', fontsize=12) ax.set_title('Scatterplot of number of discharges vs. excess rate of readmissions', fontsize=14) ax.grid(True) fig.tight_layout() Explanation: Preliminary analysis End of explanation len(clean_hospital_read_df) Explanation: Preliminary report A. Initial observations based on the plot above + Overall, rate of readmissions is trending down with increasing number of discharges + With lower number of discharges, there is a greater incidence of excess rate of readmissions (area shaded red) + With higher number of discharges, there is a greater incidence of lower rates of readmissions (area shaded green) B. Statistics + In hospitals/facilities with number of discharges < 100, mean excess readmission rate is 1.023 and 63% have excess readmission rate greater than 1 + In hospitals/facilities with number of discharges > 1000, mean excess readmission rate is 0.978 and 44% have excess readmission rate greater than 1 C. 
Conclusions + There is a significant correlation between hospital capacity (number of discharges) and readmission rates. + Smaller hospitals/facilities may be lacking necessary resources to ensure quality care and prevent complications that lead to readmissions. D. Regulatory policy recommendations + Hospitals/facilties with small capacity (< 300) should be required to demonstrate upgraded resource allocation for quality care to continue operation. + Directives and incentives should be provided for consolidation of hospitals and facilities to have a smaller number of them with higher capacity and number of discharges. Exercise Include your work on the following in this notebook and submit to your Github account. A. Do you agree with the above analysis and recommendations? Why or why not? B. Provide support for your arguments and your own recommendations with a statistically sound analysis: Setup an appropriate hypothesis test. Compute and report the observed significance value (or p-value). Report statistical significance for $\alpha$ = .01. Discuss statistical significance and practical significance You can compose in notebook cells using Markdown: + In the control panel at the top, choose Cell > Cell Type > Markdown + Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet End of explanation %matplotlib inline import pandas as pd import numpy as np import scipy.stats as stats import matplotlib.pyplot as plt import seaborn as sns sns.set(color_codes=True) from IPython.core.display import HTML css = open('style-table.css').read() + open('style-notebook.css').read() HTML('<style>{}</style>'.format(css)) clean_hospital_read_df.describe() Explanation: Do you agree with the above analysis and recommendation? No the above analysis is extremely week. First of all it talks about a decreasing trend, but visually this seems to only be confirmed by focusing on a few outliers of small discharge numbers and large discharge numbers, completely ignoring the very large amount of data points that make up the bulk of the data. While there may be a difference, there is no actual analysis; no attempt to fit a line or statistically quantify the difference in these means. The conclusions are not made with respect to any actual inferential statistic or fitting, nor is there any rationale given for the choice of small and large hospital cutoffs. Concluding there is a 'strong correlation' when visually it appears weak at best, and without actually trying to quantify it means the conclusions are not trustworthy. Furthermore, the recommendation then makes a sugggestion for hospitals with a small capacity of < 300, but there is no logic or indication of where this cut off was derived. It also makes a very bold recommendation of forcing hospitals to demonstrate upgraded resources and providing incentives to consolidate hospital facilities, which do not directly follow from the analysis. Not enough evidence or analysis was presented for either of the recommendations to be appropriate recommendations. While this project is assigned before one has done linear regressions, it doesn't make sense to discuss a trend without including the bulk of your data, as recommendations will affect these facilities as well as the others. Therefore, we will conduct a ANOVA test to analyse the difference between small, medium and large hosptials. 
Provide support for your arguments and your own recommendations with a statistically sound analysis: End of explanation group1 = clean_hospital_read_df[clean_hospital_read_df['Number of Discharges'] < 100] group3 = clean_hospital_read_df[clean_hospital_read_df['Number of Discharges'] > 700] group2 = clean_hospital_read_df[(clean_hospital_read_df['Number of Discharges'] < 600) & (clean_hospital_read_df['Number of Discharges'] > 450)] n1 = len(group1) n2 = len(group2) n3 = len(group3) print(len(group1)) print(len(group2)) print(len(group3)) print('total size of population: ', len(clean_hospital_read_df['Excess Readmission Ratio'].dropna())) import scipy.stats as stats mean1 = group1['Excess Readmission Ratio'].dropna().mean() mean2 = group2['Excess Readmission Ratio'].dropna().mean() mean3 = group3['Excess Readmission Ratio'].dropna().mean() print(group1['Excess Readmission Ratio'].dropna().mean()) print(group2['Excess Readmission Ratio'].dropna().mean()) print(group3['Excess Readmission Ratio'].dropna().mean()) print('mean admission ratio of total population, sans NaN values: ', clean_hospital_read_df['Excess Readmission Ratio'].dropna().mean()) var1 = group1['Excess Readmission Ratio'].dropna().var() var2 = group2['Excess Readmission Ratio'].dropna().var() var3 = group3['Excess Readmission Ratio'].dropna().var() print(group1['Excess Readmission Ratio'].dropna().var()) print(group2['Excess Readmission Ratio'].dropna().var()) print(group3['Excess Readmission Ratio'].dropna().var()) Explanation: First we'd like to change the two groups under investigation to three, in order to better match the groupings we see visually. End of explanation ratio1 = group1['Excess Readmission Ratio'].dropna() ratio2 = group2['Excess Readmission Ratio'].dropna() ratio3 = group3['Excess Readmission Ratio'].dropna() sns.distplot(ratio1) sns.distplot(ratio2) sns.distplot(ratio3) stats.probplot(ratio1, dist="norm", plot=plt) stats.probplot(ratio2, dist="norm", plot=plt) stats.probplot(ratio3, dist="norm", plot=plt) Explanation: Now our variances are not all the same, so we can't know we can't use the usual normal one-way ANOVA, and instead will use Welch's ANOVA or a Kruskal-Wallis test. First we need to check for normality though. End of explanation stats.kruskal(ratio1,ratio2,ratio3) Explanation: Visually, the hisogram seems to show that all three distributions are gaussian, the probability plot for the small hospitals has a weaker determination coefficient ($R^2$) than I'd like. There are a lot of outliers at the end. Perhaps we can remedy this by shifting our first group to only start at 100 discharges and go to about 150. So still smaller facilities, but a different subset. If one does however, the improvement ($R^2 \approx 0.90) doesn't bring it up to the level of the other two groups. So this helps, but there is still is still a lot of variance not explained by regressing to a gaussian distribution. Personally, I will proceed by performing both a Kruskal-Wallis test, and not a Welch F-test, being aware that the latter may be affected by only weakly meeting the critieria of all samples being drawn from a normal distribution. End of explanation from dunn import kw_dunn kw_dunn([ratio1,ratio2,ratio3], to_compare = [(0,1),(0,2),(1,2)],alpha = 0.01) Explanation: We find a pvalue of $1.8 \times 10^{-16}$, which is much lower than the given $\alpha = 0.01$. Thus we conclude that there is a significant difference between at least two of the means. 
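The exercise also asks us to discuss practical significance, which the p-value alone does not address. As a rough check, the short sketch below reuses the ratio1, ratio2 and ratio3 series defined in the cells above; the epsilon-squared estimate H/(n - 1) is a common small-sample effect-size approximation for Kruskal-Wallis and is not something used elsewhere in this notebook, so treat it as an illustrative aside rather than part of the original workflow.

# Practical-significance check for the Kruskal-Wallis result above (sketch only).
# Assumes ratio1, ratio2, ratio3 from the earlier cells.
H_stat, p_val = stats.kruskal(ratio1, ratio2, ratio3)
n_total = len(ratio1) + len(ratio2) + len(ratio3)
epsilon_sq = H_stat / (n_total - 1)        # epsilon-squared effect size
print('H = {:.1f}, p = {:.2e}, epsilon-squared = {:.3f}'.format(H_stat, p_val, epsilon_sq))
# Raw difference between the small- and large-hospital means, in units of the
# excess readmission ratio, which is what a policy maker would actually care about.
print('mean(small) - mean(large) = {:.3f}'.format(ratio1.mean() - ratio3.mean()))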
We need to perform a post-hoc test to determine which of the means are statistically different. Unfortunately, scipy.stats doesn't seem to have any appropriate built-in post-hoc tests for the Kruskal-Wallis test (or indeed for many ANOVA tests). Since I didn't feel like coding the test from scratch, and knowing someone would already have done it, I looked for another library that includes post-hoc tests. The following function performs the same Kruskal-Wallis calculation as above but also returns the results of Dunn's test. The code was found through Google at https://gist.github.com/alimuldal/fbb19b73fa25423f02e8. Interestingly, it doesn't look like it has been incorporated into any major library. End of explanation
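The kw_dunn helper above comes from an external gist, so it may not be available in every environment. As a fallback, here is a scipy-only sketch of a simpler post-hoc comparison; it assumes the ratio1, ratio2 and ratio3 series from earlier cells and a scipy version that supports the alternative keyword of mannwhitneyu, and it is an approximation rather than a substitute for Dunn's test (each pair is re-ranked separately instead of using the pooled Kruskal-Wallis ranks).

# Fallback post-hoc comparison: pairwise two-sided Mann-Whitney U tests with a
# Bonferroni correction. Sketch only; not equivalent to Dunn's test.
from itertools import combinations

groups = {'small': ratio1, 'medium': ratio2, 'large': ratio3}
pairs = list(combinations(groups.keys(), 2))
for name_a, name_b in pairs:
    u_stat, p = stats.mannwhitneyu(groups[name_a], groups[name_b],
                                   alternative='two-sided')
    p_adj = min(p * len(pairs), 1.0)   # Bonferroni adjustment over the three pairs
    print('{:>6} vs {:<6}: U = {:9.0f}, adjusted p = {:.2e}'.format(
        name_a, name_b, u_stat, p_adj))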
14,882
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Checking-tool" data-toc-modified-id="Checking-tool-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Checking tool</a></span></li><li><span><a href="#Basic-Elements" data-toc-modified-id="Basic-Elements-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Basic Elements</a></span><ul class="toc-item"><li><span><a href="#A------------|-XSPICE-code-model-(not-checked)" data-toc-modified-id="A------------|-XSPICE-code-model-(not-checked)-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>A | XSPICE code model (not checked)</a></span></li><li><span><a href="#B------------|-Behavioral-(arbitrary)-source-(not-checked)" data-toc-modified-id="B------------|-Behavioral-(arbitrary)-source-(not-checked)-2.2"><span class="toc-item-num">2.2&nbsp;&nbsp;</span>B | Behavioral (arbitrary) source (not checked)</a></span></li><li><span><a href="#C------------|-Capacitor" data-toc-modified-id="C------------|-Capacitor-2.3"><span class="toc-item-num">2.3&nbsp;&nbsp;</span>C | Capacitor</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.3.1"><span class="toc-item-num">2.3.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#D------------|-Diode" data-toc-modified-id="D------------|-Diode-2.4"><span class="toc-item-num">2.4&nbsp;&nbsp;</span>D | Diode</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.4.1"><span class="toc-item-num">2.4.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#E------------|-Voltage-controlled-voltage-source-(VCVS)" data-toc-modified-id="E------------|-Voltage-controlled-voltage-source-(VCVS)-2.5"><span class="toc-item-num">2.5&nbsp;&nbsp;</span>E | Voltage-controlled voltage source (VCVS)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.5.1"><span class="toc-item-num">2.5.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#F------------|-Current-controlled-current-source-(CCCs)" data-toc-modified-id="F------------|-Current-controlled-current-source-(CCCs)-2.6"><span class="toc-item-num">2.6&nbsp;&nbsp;</span>F | Current-controlled current source (CCCs)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.6.1"><span class="toc-item-num">2.6.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#G------------|-Voltage-controlled-current-source-(VCCS)" data-toc-modified-id="G------------|-Voltage-controlled-current-source-(VCCS)-2.7"><span class="toc-item-num">2.7&nbsp;&nbsp;</span>G | Voltage-controlled current source (VCCS)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.7.1"><span class="toc-item-num">2.7.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#H------------|-Current-controlled-voltage-source-(CCVS)" data-toc-modified-id="H------------|-Current-controlled-voltage-source-(CCVS)-2.8"><span class="toc-item-num">2.8&nbsp;&nbsp;</span>H | Current-controlled voltage source (CCVS)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.8.1"><span class="toc-item-num">2.8.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#I------------|-Current-source" data-toc-modified-id="I------------|-Current-source-2.9"><span 
class="toc-item-num">2.9&nbsp;&nbsp;</span>I | Current source</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.9.1"><span class="toc-item-num">2.9.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#J------------|-Junction-field-effect-transistor-(JFET)" data-toc-modified-id="J------------|-Junction-field-effect-transistor-(JFET)-2.10"><span class="toc-item-num">2.10&nbsp;&nbsp;</span>J | Junction field effect transistor (JFET)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.10.1"><span class="toc-item-num">2.10.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#K------------|-Coupled-(Mutual)-Inductors" data-toc-modified-id="K------------|-Coupled-(Mutual)-Inductors-2.11"><span class="toc-item-num">2.11&nbsp;&nbsp;</span>K | Coupled (Mutual) Inductors</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.11.1"><span class="toc-item-num">2.11.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#L------------|-Inductor" data-toc-modified-id="L------------|-Inductor-2.12"><span class="toc-item-num">2.12&nbsp;&nbsp;</span>L | Inductor</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.12.1"><span class="toc-item-num">2.12.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#M------------|-Metal-oxide-field-effect-transistor-(MOSFET)" data-toc-modified-id="M------------|-Metal-oxide-field-effect-transistor-(MOSFET)-2.13"><span class="toc-item-num">2.13&nbsp;&nbsp;</span>M | Metal oxide field effect transistor (MOSFET)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.13.1"><span class="toc-item-num">2.13.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#Q------------|-Bipolar-junction-transistor-(BJT)" data-toc-modified-id="Q------------|-Bipolar-junction-transistor-(BJT)-2.14"><span class="toc-item-num">2.14&nbsp;&nbsp;</span>Q | Bipolar junction transistor (BJT)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.14.1"><span class="toc-item-num">2.14.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#R------------|-Resistor" data-toc-modified-id="R------------|-Resistor-2.15"><span class="toc-item-num">2.15&nbsp;&nbsp;</span>R | Resistor</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.15.1"><span class="toc-item-num">2.15.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#V-|-Voltage-source" data-toc-modified-id="V-|-Voltage-source-2.16"><span class="toc-item-num">2.16&nbsp;&nbsp;</span>V | Voltage source</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.16.1"><span class="toc-item-num">2.16.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#Z------------|-Metal-semiconductor-field-effect-transistor-(MESFET)" data-toc-modified-id="Z------------|-Metal-semiconductor-field-effect-transistor-(MESFET)-2.17"><span class="toc-item-num">2.17&nbsp;&nbsp;</span>Z | Metal semiconductor field effect transistor (MESFET)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.17.1"><span class="toc-item-num">2.17.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li></ul></li><li><span><a href="#Highlevel-Elements-SinusoidalMixin-Based" data-toc-modified-id="Highlevel-Elements-SinusoidalMixin-Based-3"><span 
class="toc-item-num">3&nbsp;&nbsp;</span>Highlevel Elements <code>SinusoidalMixin</code> Based</a></span><ul class="toc-item"><li><span><a href="#Note-in-Armour's-fort-added-as_phase" data-toc-modified-id="Note-in-Armour's-fort-added-as_phase-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Note in Armour's fort added as_phase</a></span></li><li><span><a href="#SinusoidalMixin-args Step2: Checking tool Step3: Basic Elements A | XSPICE code model (not checked) PySpice/PySpice/Spice/BasicElement.py; (need to find) Step4: D | Diode PySpice/PySpice/Spice/BasicElement.py; class Diode(FixedPinElement) skidl/skidl/libs/pyspice_sklib.py; name="D" ngspice 7.1 Junction Diodes Step5: E | Voltage-controlled voltage source (VCVS) PySpice/PySpice/Spice/BasicElement.py; class VoltageControlledVoltageSource(TwoPortElement) skidl/skidl/libs/pyspice_sklib.py; name="E" ngspice 4.2.2 Exxxx Step6: F | Current-controlled current source (CCCs) PySpice/PySpice/Spice/BasicElement.py; class CurrentControlledCurrentSource(DipoleElement) skidl/skidl/libs/pyspice_sklib.py; name="F" ngspice 4.2.3 Fxxxx Step7: G | Voltage-controlled current source (VCCS) PySpice/PySpice/Spice/BasicElement.py; class VoltageControlledCurrentSource(TwoPortElement) skidl/skidl/libs/pyspice_sklib.py; name="G" ngspice 4.2.1 Gxxxx Step8: H | Current-controlled voltage source (CCVS) PySpice/PySpice/Spice/BasicElement.py; class CurrentControlledVoltageSource(DipoleElement) skidl/skidl/libs/pyspice_sklib.py; name="H" ngspice 4.2.4 Hxxxx Step9: I | Current source PySpice/PySpice/Spice/BasicElement.py; class CurrentSource(DipoleElement) skidl/skidl/libs/pyspice_sklib.py; name="I" ngspice 4.1 Independent Sources for Voltage or Current Step10: J | Junction field effect transistor (JFET) PySpice/PySpice/Spice/BasicElement.py; class JunctionFieldEffectTransistor(JfetElement) skidl/skidl/libs/pyspice_sklib.py; name="J" ngspice 9.1 Junction Field-Effect Transistors (JFETs) Step11: K | Coupled (Mutual) Inductors PySpice/PySpice/Spice/BasicElement.py; class CoupledInductor(AnyPinElement) skidl/skidl/libs/pyspice_sklib.py; name="K" ngspice 3.2.11 Coupled (Mutual) Inductors Step12: L | Inductor PySpice/PySpice/Spice/BasicElement.py; class Inductor(DipoleElement) skidl/skidl/libs/pyspice_sklib.py; name="L" ngspice 3.2.9 Inductors Step13: M | Metal oxide field effect transistor (MOSFET) PySpice/PySpice/Spice/BasicElement.py; class Mosfet(FixedPinElement) skidl/skidl/libs/pyspice_sklib.py; name="M" ngspice 11.1 MOSFET devices Step14: | N | Numerical device for GSS | | O | Lossy transmission line | | P | Coupled multiconductor line (CPL) | Q | Bipolar junction transistor (BJT) PySpice/PySpice/Spice/BasicElement.py; class BipolarJunctionTransistor(FixedPinElement) skidl/skidl/libs/pyspice_sklib.py; name="Q" ngspice 8.1 Bipolar Junction Transistors (BJTs) Step15: R | Resistor PySpice/PySpice/Spice/BasicElement.py; class Resistor(DipoleElement) skidl/skidl/libs/pyspice_sklib.py; name="R" ngspice 3.2.1 Resistors Step16: | S | Switch (voltage-controlled) | | T | Lossless transmission line | | U | Uniformly distributed RC line | V | Voltage source PySpice/PySpice/Spice/BasicElement.py; class VoltageSource(DipoleElement) skidl/skidl/libs/pyspice_sklib.py; name="V" ngspice 4.1 Independent Sources for Voltage or Current Step17: | W | Switch (current-controlled) | | X | Subcircuit | | Y | Single lossy transmission line (TXL) | | Z | Metal semiconductor field effect transistor (MESFET) | Z | Metal semiconductor field effect transistor (MESFET) 
PySpice/PySpice/Spice/BasicElement.py; class Mesfet(JfetElement) skidl/skidl/libs/pyspice_sklib.py; name="Z" ngspice 10.1 MESFETs Step18: Highlevel Elements SinusoidalMixin Based Note in Armour's fork added as_phase SinusoidalMixin is the base translation class for sinusoidal waveform sources; in other words, even though ngspice combines most sinusoidal sources as just argument extensions to existing DC sources to create AC sources, when going through PySpice to ngspice these elements must be used SinusoidalMixin args Step19: SinusoidalCurrentSource (AC) PySpice/PySpice/Spice/HighLevelElement.py; class SinusoidalCurrentSource(CurrentSource, CurrentSourceMixinAbc, SinusoidalMixin) Step20: AcLine(SinusoidalVoltageSource) PySpice/PySpice/Spice/HighLevelElement.py; class AcLine(SinusoidalVoltageSource) skidl/skidl/libs/pyspice_sklib.py; NOT IMPLEMENTED ngspice 4.1 Independent Sources for Voltage or Current Step21: Highlevel Elements PulseMixin Based Highlevel Elements ExponentialMixin Based ExponentialMixin is the base translation class for exponentially shaped sources used for transient simulations. Typically used for simulating the response to charging and discharging events in capacitor/inductor networks. PySpice does not include the AC arguments that are technically allowed by ngspice ExponentialMixin args Step22: ExponentialCurrentSource PySpice/PySpice/Spice/HighLevelElement.py; class ExponentialCurrentSource(VoltageSource, VoltageSourceMixinAbc, ExponentialMixin) skidl/skidl/libs/pyspice_sklib.py; name="EXPI" ngspice 4.1 Independent Sources for Voltage or Current & 4.1.3 Exponential
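To make the ExponentialMixin arguments above concrete, the short NumPy sketch below evaluates the EXP(V1 V2 TD1 TAU1 TD2 TAU2) waveform those parameters describe; the numeric values are invented for illustration and are not taken from the netlist examples that follow.

# Illustrative plot of the EXP() transient waveform (sketch only, made-up values).
import numpy as np
import matplotlib.pyplot as plt

V1, V2 = 0.0, 5.0          # initial and pulsed values
Td1, tau1 = 1e-3, 0.5e-3   # rise delay and rise time constant
Td2, tau2 = 4e-3, 1.0e-3   # fall delay and fall time constant

t = np.linspace(0, 10e-3, 1000)
v = np.full_like(t, V1)
rise = t >= Td1
v[rise] += (V2 - V1) * (1.0 - np.exp(-(t[rise] - Td1) / tau1))
fall = t >= Td2
v[fall] += (V1 - V2) * (1.0 - np.exp(-(t[fall] - Td2) / tau2))

plt.plot(t * 1e3, v)
plt.xlabel('t [ms]')
plt.ylabel('V(t) [V]')
plt.title('EXP(V1 V2 TD1 TAU1 TD2 TAU2) waveform')
plt.show()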
Python Code: from skidl.pyspice import * from PySpice.Spice.Netlist import Circuit Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Checking-tool" data-toc-modified-id="Checking-tool-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Checking tool</a></span></li><li><span><a href="#Basic-Elements" data-toc-modified-id="Basic-Elements-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Basic Elements</a></span><ul class="toc-item"><li><span><a href="#A------------|-XSPICE-code-model-(not-checked)" data-toc-modified-id="A------------|-XSPICE-code-model-(not-checked)-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>A | XSPICE code model (not checked)</a></span></li><li><span><a href="#B------------|-Behavioral-(arbitrary)-source-(not-checked)" data-toc-modified-id="B------------|-Behavioral-(arbitrary)-source-(not-checked)-2.2"><span class="toc-item-num">2.2&nbsp;&nbsp;</span>B | Behavioral (arbitrary) source (not checked)</a></span></li><li><span><a href="#C------------|-Capacitor" data-toc-modified-id="C------------|-Capacitor-2.3"><span class="toc-item-num">2.3&nbsp;&nbsp;</span>C | Capacitor</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.3.1"><span class="toc-item-num">2.3.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#D------------|-Diode" data-toc-modified-id="D------------|-Diode-2.4"><span class="toc-item-num">2.4&nbsp;&nbsp;</span>D | Diode</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.4.1"><span class="toc-item-num">2.4.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#E------------|-Voltage-controlled-voltage-source-(VCVS)" data-toc-modified-id="E------------|-Voltage-controlled-voltage-source-(VCVS)-2.5"><span class="toc-item-num">2.5&nbsp;&nbsp;</span>E | Voltage-controlled voltage source (VCVS)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.5.1"><span class="toc-item-num">2.5.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#F------------|-Current-controlled-current-source-(CCCs)" data-toc-modified-id="F------------|-Current-controlled-current-source-(CCCs)-2.6"><span class="toc-item-num">2.6&nbsp;&nbsp;</span>F | Current-controlled current source (CCCs)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.6.1"><span class="toc-item-num">2.6.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#G------------|-Voltage-controlled-current-source-(VCCS)" data-toc-modified-id="G------------|-Voltage-controlled-current-source-(VCCS)-2.7"><span class="toc-item-num">2.7&nbsp;&nbsp;</span>G | Voltage-controlled current source (VCCS)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.7.1"><span class="toc-item-num">2.7.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#H------------|-Current-controlled-voltage-source-(CCVS)" data-toc-modified-id="H------------|-Current-controlled-voltage-source-(CCVS)-2.8"><span class="toc-item-num">2.8&nbsp;&nbsp;</span>H | Current-controlled voltage source (CCVS)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.8.1"><span class="toc-item-num">2.8.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#I------------|-Current-source" data-toc-modified-id="I------------|-Current-source-2.9"><span 
class="toc-item-num">2.9&nbsp;&nbsp;</span>I | Current source</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.9.1"><span class="toc-item-num">2.9.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#J------------|-Junction-field-effect-transistor-(JFET)" data-toc-modified-id="J------------|-Junction-field-effect-transistor-(JFET)-2.10"><span class="toc-item-num">2.10&nbsp;&nbsp;</span>J | Junction field effect transistor (JFET)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.10.1"><span class="toc-item-num">2.10.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#K------------|-Coupled-(Mutual)-Inductors" data-toc-modified-id="K------------|-Coupled-(Mutual)-Inductors-2.11"><span class="toc-item-num">2.11&nbsp;&nbsp;</span>K | Coupled (Mutual) Inductors</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.11.1"><span class="toc-item-num">2.11.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#L------------|-Inductor" data-toc-modified-id="L------------|-Inductor-2.12"><span class="toc-item-num">2.12&nbsp;&nbsp;</span>L | Inductor</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.12.1"><span class="toc-item-num">2.12.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#M------------|-Metal-oxide-field-effect-transistor-(MOSFET)" data-toc-modified-id="M------------|-Metal-oxide-field-effect-transistor-(MOSFET)-2.13"><span class="toc-item-num">2.13&nbsp;&nbsp;</span>M | Metal oxide field effect transistor (MOSFET)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.13.1"><span class="toc-item-num">2.13.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#Q------------|-Bipolar-junction-transistor-(BJT)" data-toc-modified-id="Q------------|-Bipolar-junction-transistor-(BJT)-2.14"><span class="toc-item-num">2.14&nbsp;&nbsp;</span>Q | Bipolar junction transistor (BJT)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.14.1"><span class="toc-item-num">2.14.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#R------------|-Resistor" data-toc-modified-id="R------------|-Resistor-2.15"><span class="toc-item-num">2.15&nbsp;&nbsp;</span>R | Resistor</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.15.1"><span class="toc-item-num">2.15.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#V-|-Voltage-source" data-toc-modified-id="V-|-Voltage-source-2.16"><span class="toc-item-num">2.16&nbsp;&nbsp;</span>V | Voltage source</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.16.1"><span class="toc-item-num">2.16.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#Z------------|-Metal-semiconductor-field-effect-transistor-(MESFET)" data-toc-modified-id="Z------------|-Metal-semiconductor-field-effect-transistor-(MESFET)-2.17"><span class="toc-item-num">2.17&nbsp;&nbsp;</span>Z | Metal semiconductor field effect transistor (MESFET)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.17.1"><span class="toc-item-num">2.17.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li></ul></li><li><span><a href="#Highlevel-Elements-SinusoidalMixin-Based" data-toc-modified-id="Highlevel-Elements-SinusoidalMixin-Based-3"><span 
class="toc-item-num">3&nbsp;&nbsp;</span>Highlevel Elements <code>SinusoidalMixin</code> Based</a></span><ul class="toc-item"><li><span><a href="#Note-in-Armour's-fort-added-as_phase" data-toc-modified-id="Note-in-Armour's-fort-added-as_phase-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Note in Armour's fort added as_phase</a></span></li><li><span><a href="#SinusoidalMixin-args:" data-toc-modified-id="SinusoidalMixin-args:-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span><code>SinusoidalMixin</code> args:</a></span></li><li><span><a href="#SinusoidalVoltageSource-(AC)" data-toc-modified-id="SinusoidalVoltageSource-(AC)-3.3"><span class="toc-item-num">3.3&nbsp;&nbsp;</span>SinusoidalVoltageSource (AC)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-3.3.1"><span class="toc-item-num">3.3.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#SinusoidalCurrentSource-(AC)" data-toc-modified-id="SinusoidalCurrentSource-(AC)-3.4"><span class="toc-item-num">3.4&nbsp;&nbsp;</span>SinusoidalCurrentSource (AC)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-3.4.1"><span class="toc-item-num">3.4.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#AcLine(SinusoidalVoltageSource)" data-toc-modified-id="AcLine(SinusoidalVoltageSource)-3.5"><span class="toc-item-num">3.5&nbsp;&nbsp;</span>AcLine(SinusoidalVoltageSource)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-3.5.1"><span class="toc-item-num">3.5.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li></ul></li><li><span><a href="#Highlevel-Elements-PulseMixin-Based" data-toc-modified-id="Highlevel-Elements-PulseMixin-Based-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Highlevel Elements <code>PulseMixin</code> Based</a></span></li><li><span><a href="#Highlevel-Elements-ExponentialMixin-Based" data-toc-modified-id="Highlevel-Elements-ExponentialMixin-Based-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Highlevel Elements <code>ExponentialMixin</code> Based</a></span><ul class="toc-item"><li><span><a href="#ExponentialMixin-args:" data-toc-modified-id="ExponentialMixin-args:-5.1"><span class="toc-item-num">5.1&nbsp;&nbsp;</span><code>ExponentialMixin</code> args:</a></span></li><li><span><a href="#ExponentialVoltageSource" data-toc-modified-id="ExponentialVoltageSource-5.2"><span class="toc-item-num">5.2&nbsp;&nbsp;</span>ExponentialVoltageSource</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-5.2.1"><span class="toc-item-num">5.2.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li><li><span><a href="#ExponentialCurrentSource" data-toc-modified-id="ExponentialCurrentSource-5.3"><span class="toc-item-num">5.3&nbsp;&nbsp;</span>ExponentialCurrentSource</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-5.3.1"><span class="toc-item-num">5.3.1&nbsp;&nbsp;</span>Notes</a></span></li></ul></li></ul></li><li><span><a href="#Highlevel-Elements-PieceWiseLinearMixin-Based" data-toc-modified-id="Highlevel-Elements-PieceWiseLinearMixin-Based-6"><span class="toc-item-num">6&nbsp;&nbsp;</span>Highlevel Elements <code>PieceWiseLinearMixin</code> Based</a></span></li><li><span><a href="#Highlevel-Elements-SingleFrequencyFMMixin-Based" data-toc-modified-id="Highlevel-Elements-SingleFrequencyFMMixin-Based-7"><span class="toc-item-num">7&nbsp;&nbsp;</span>Highlevel Elements <code>SingleFrequencyFMMixin</code> 
Based</a></span></li><li><span><a href="#Highlevel-Elements-AmplitudeModulatedMixin-Based" data-toc-modified-id="Highlevel-Elements-AmplitudeModulatedMixin-Based-8"><span class="toc-item-num">8&nbsp;&nbsp;</span>Highlevel Elements <code>AmplitudeModulatedMixin</code> Based</a></span></li><li><span><a href="#Highlevel-Elements-RandomMixin-Based" data-toc-modified-id="Highlevel-Elements-RandomMixin-Based-9"><span class="toc-item-num">9&nbsp;&nbsp;</span>Highlevel Elements <code>RandomMixin</code> Based</a></span></li></ul></div> End of explanation def netlist_comp_check(skidl_netlist, pyspice_netlist): Simple dumb check tool to compare the netlist from skidl and pyspice Args: skidl_netlist (PySpice.Spice.Netlist.Circuit): resulting netlist obj from skidl using skidl's `generate_netlist` utlity to compare to pyspice direct creation pyspice_netlist (PySpice.Spice.Netlist.Circuit): circuit obj created directly in pyspice via `PySpice.Spice.Netlist.Circuit` to compare it's netlist to skidl produced one Returns: if skidl_netlist is longer then pyspice_netlist will return string statment saying: 'skidl_netlist is longer then pyspice_netlist' if skidl_netlist is shorter then pyspice_netlist will return string statment saying: 'skidl_netlist is shorter then pyspice_netlist' if skidl_netlist and pyspice_netlist are equall and but there are diffrances then will print message of thoes difrances(|1 indexed) and return a list of indexs where the skidl netlist is differs from the pyspice one if skidl_netlist == pyspice_netlist then will return the word: 'Match' TODO: Where should I start #only care about the final netlist string skidl_netlist=skidl_netlist.str() pyspice_netlist=pyspice_netlist.str() #check the lengths if len(skidl_netlist)>len(pyspice_netlist): return('skidl_netlist is longer then pyspice_netlist') elif len(skidl_netlist)<len(pyspice_netlist): return('skidl_netlist is shorter then pyspice_netlist') #compare strings char by char else: string_check=[i for i in range(len(skidl_netlist)) if skidl_netlist[i] != pyspice_netlist[i]] if string_check==[]: return 'Match' else: print('Match failed skidl_netlist:') print(f'{[i|1 for i in string_check]}') return string_check Explanation: Checking tool End of explanation reset() net_1=Net('N1'); net_2=Net('N2') skidl_C=C(ref='1', value=5, scale=5, temp=5, dtemp=5, ic=5, m=5) skidl_C['p', 'n']+=net_1, net_2 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.C('1', 'N1', 'N2', 5, scale=5, temp=5, dtemp=5, ic=5, m=5) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: Basic Elements A | XSPICE code model (not checked) PySpice/PySpice/Spice/BasicElement.py; (need to find): skidl/skidl/libs/pyspice_sklib.py; name="A" B | Behavioral (arbitrary) source (not checked) PySpice/PySpice/Spice/BasicElement.py; class BehavioralSource: skidl/skidl/libs/pyspice_sklib.py; name="B" ngspice 5.1: Bxxxx: Nonlinear dependent source (ASRC): BXXXXXXX n| n- <i=expr > <v=expr > <tc1=value > <tc2=value > <temp=value > <dtemp=value > C | Capacitor PySpice/PySpice/Spice/BasicElement.py; class Capacitor(DipoleElement) skidl/skidl/libs/pyspice_sklib.py; name="C" ngspice 3.2.5 Capacitors: CXXXXXXX n| n- <value > <mname > <m=val> <scale=val> <temp=val> <dtemp=val> <tc1=val> <tc2=val> <ic=init_condition > Notes End of explanation reset() net_1=Net('N1'); net_2=Net('N2') skidl_D=D(ref='1',model=5, area=5, m=5, pj=5, off=5, temp=5, dtemp=5) skidl_D['p', 'n']+=net_1, net_2 skidl_circ=generate_netlist() print(skidl_circ) 
pyspice_circ=Circuit('') pyspice_circ.D('1', 'N1', 'N2', model=5, area=5, m=5, pj=5, off=5, temp=5, dtemp=5) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: D | Diode PySpice/PySpice/Spice/BasicElement.py; class Diode(FixedPinElement) skidl/skidl/libs/pyspice_sklib.py; name="D" ngspice 7.1 Junction Diodes: DXXXXXXX n| n- mname <area=val> <m=val> <pj=val> <off> <ic=vd> <temp=val> <dtemp=val> Notes ic: did not work in eather skidl or pyspice End of explanation reset() net_1=Net('N1'); net_2=Net('N2'); net_3=Net('N3'); net_4=Net('N4') skidl_E=E(ref='1', voltage_gain=5) skidl_E['ip', 'in']+=net_1, net_2; skidl_E['op', 'on']+=net_3, net_4 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.VoltageControlledVoltageSource('1', 'N3', 'N4', 'N1', 'N2', voltage_gain=5) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: E | Voltage-controlled voltage source (VCVS) PySpice/PySpice/Spice/BasicElement.py; class VoltageControlledVoltageSource(TwoPortElement) skidl/skidl/libs/pyspice_sklib.py; name="E" ngspice 4.2.2 Exxxx: Linear Voltage-Controlled Voltage Sources (VCVS): EXXXXXXX N| N- NC| NC- VALUE Notes End of explanation reset() net_1=Net('N1'); net_2=Net('N2') skidl_F=F(ref='1', control='V1', current_gain=5, m=5) skidl_F['p', 'n']+=net_1, net_2; skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.CurrentControlledCurrentSource('1', 'N1', 'N2', 'V1', current_gain=5, m=5) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: F | Current-controlled current source (CCCs) PySpice/PySpice/Spice/BasicElement.py; class CurrentControlledCurrentSource(DipoleElement) skidl/skidl/libs/pyspice_sklib.py; name="F" ngspice 4.2.3 Fxxxx: Linear Current-Controlled Current Sources (CCCS): FXXXXXXX N| N- VNAM VALUE <m=val> Notes End of explanation reset() net_1=Net('N1'); net_2=Net('N2'); net_3=Net('N3'); net_4=Net('N4') skidl_G=G(ref='1', current_gain=5, m=5) skidl_G['ip', 'in']+=net_1, net_2; skidl_G['op', 'on']+=net_3, net_4 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.VoltageControlledCurrentSource('1', 'N3', 'N4', 'N1', 'N2', transconductance=5, m=5) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: G | Voltage-controlled current source (VCCS) PySpice/PySpice/Spice/BasicElement.py; class VoltageControlledCurrentSource(TwoPortElement) skidl/skidl/libs/pyspice_sklib.py; name="G" ngspice 4.2.1 Gxxxx: Linear Voltage-Controlled Current Sources (VCCS): GXXXXXXX N| N- NC| NC- VALUE <m=val> Notes 'transconductance' did not work in skidl; but gain did as did current_gain End of explanation reset() net_1=Net('N1'); net_2=Net('N2') skidl_H=H(ref='1', control='V1', transresistance=5) skidl_H['p', 'n']+=net_1, net_2; skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.CurrentControlledVoltageSource('1', 'N1', 'N2', 'V1', transresistance=5) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: H | Current-controlled voltage source (CCVS) PySpice/PySpice/Spice/BasicElement.py; class CurrentControlledVoltageSource(DipoleElement) skidl/skidl/libs/pyspice_sklib.py; name="H" ngspice 4.2.4 Hxxxx: Linear Current-Controlled Voltage Sources (CCVS): HXXXXXXX n| n- vnam val Notes End of explanation reset() net_1=Net('N1'); net_2=Net('N2') skidl_I=I(ref='1', dc_value=5) skidl_I['p', 'n']+=net_1, net_2 skidl_circ=generate_netlist() print(skidl_circ) 
pyspice_circ=Circuit('') pyspice_circ.I('1', 'N1', 'N2', dc_value=5) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: I | Current source PySpice/PySpice/Spice/BasicElement.py; class CurrentSource(DipoleElement) skidl/skidl/libs/pyspice_sklib.py; name="I" ngspice 4.1 Independent Sources for Voltage or Current: IYYYYYYY N| N- <<DC> Notes a reduced version of ngspices IYYYYYYY only generating the arguement for <<DC> DC/TRAN VALUE > End of explanation reset() net_1=Net('N1'); net_2=Net('N2'); net_3=Net('N3') skidl_J=J(ref='1',model=5, area=5, m=5, off=5, temp=5) skidl_J['d', 'g', 's']+=net_1, net_2, net_3 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.J('1', 'N1', 'N2', 'N3', model=5, area=5, m=5, off=5, temp=5) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: J | Junction field effect transistor (JFET) PySpice/PySpice/Spice/BasicElement.py; class JunctionFieldEffectTransistor(JfetElement) skidl/skidl/libs/pyspice_sklib.py; name="J" ngspice 9.1 Junction Field-Effect Transistors (JFETs): JXXXXXXX nd ng ns mname <area > <off> <ic=vds,vgs> <temp=t> Notes ic: did not work in eather skidl or pyspice End of explanation reset() net_1=Net('N1'); net_2=Net('N2') skidl_L1=L(ref='1', value=5, m=5, temp=5, dtemp=5, ic=5) skidl_L1['p', 'n']+=net_1, net_2 skidl_L2=L(ref='2', value=5, m=5, temp=5, dtemp=5, ic=5) skidl_L2['p', 'n']+=net_1, net_2 #need to find out how to use this #skidl_K=K() skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') #inductors need to exsist to then be coupled pyspice_circ.L('1', 'N1', 'N2', 5, m=5, temp=5, dtemp=5, ic=5) pyspice_circ.L('2', 'N1', 'N2', 5, m=5, temp=5, dtemp=5, ic=5) pyspice_circ.K('1', 'L1', 'L2', coupling_factor=5) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: K | Coupled (Mutual) Inductors PySpice/PySpice/Spice/BasicElement.py; class CoupledInductor(AnyPinElement) skidl/skidl/libs/pyspice_sklib.py; name="K" ngspice 3.2.11 Coupled (Mutual) Inductors: KXXXXXXX LYYYYYYY LZZZZZZZ value Notes need to get daves help on using K inside skidl the inductors must already exsist for pyspice to work End of explanation reset() net_1=Net('N1'); net_2=Net('N2') skidl_L=L(ref='1', value=5, m=5, temp=5, dtemp=5, ic=5) skidl_L['p', 'n']+=net_1, net_2 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.L('1', 'N1', 'N2', 5, m=5, temp=5, dtemp=5, ic=5) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: L | Inductor PySpice/PySpice/Spice/BasicElement.py; class Inductor(DipoleElement) skidl/skidl/libs/pyspice_sklib.py; name="L" ngspice 3.2.9 Inductors: LYYYYYYY n| n- <value > <mname > <nt=val> <m=val> <scale=val> <temp=val> <dtemp=val> <tc1=val> <tc2=val> <ic=init_condition > Notes End of explanation reset() net_1=Net('N1'); net_2=Net('N2'); net_3=Net('N3'); net_4=Net('N4') skidl_M=M(ref='1', model=5, m=5, l=5, w=5, drain_area=5, source_area=5, drain_perimeter=5, source_perimeter=5, drain_number_square=5, source_number_square=5, off=5, temp=5) skidl_M['d', 'g', 's', 'b']+=net_1, net_2, net_3, net_4 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.M('1', 'N1', 'N2', 'N3', 'N4', model=5, m=5, l=5, w=5, drain_area=5, source_area=5, drain_perimeter=5, source_perimeter=5, drain_number_square=5, source_number_square=5, off=5, temp=5) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: M | Metal oxide field effect 
transistor (MOSFET) PySpice/PySpice/Spice/BasicElement.py; class Mosfet(FixedPinElement) skidl/skidl/libs/pyspice_sklib.py; name="M" ngspice 11.1 MOSFET devices: MXXXXXXX nd ng ns nb mname <m=val> <l=val> <w=val> <ad=val> <as=val> <pd=val> <ps=val> <nrd=val> <nrs=val> <off> <ic=vds, vgs, vbs> <temp=t> Notes ic: did not work in eather skidl or pyspice End of explanation reset() net_1=Net('N1'); net_2=Net('N2'); net_3=Net('N3'); net_4=Net('N4') skidl_Q=Q(ref='1',model=5, area=5, areab=5, areac=5, m=5, off=5, temp=5, dtemp=5) skidl_Q['c', 'b', 'e']+=net_1, net_2, net_3 #skidl will make the substrate connection fine but could not get pyspice to do so #therefore skiping for the time being #skidl_Q['s']+=net_4 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.Q('1', 'N1', 'N2', 'N3', model=5, area=5, areab=5, areac=5, m=5, off=5, temp=5, dtemp=5, #could not get the substrate connection working in pyspice #ns='N4' ) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: | N | Numerical device for GSS | | O | Lossy transmission line | | P | Coupled multiconductor line (CPL) | Q | Bipolar junction transistor (BJT) PySpice/PySpice/Spice/BasicElement.py; class BipolarJunctionTransistor(FixedPinElement) skidl/skidl/libs/pyspice_sklib.py; name="Q" ngspice 8.1 Bipolar Junction Transistors (BJTs): QXXXXXXX nc nb ne <ns> mname <area=val> <areac=val> <areab=val> <m=val> <off> <ic=vbe,vce> <temp=val> <dtemp=val> Notes could not get the substrate connection working in pyspice but it worked fine with skidl ic: did not work in eather skidl or pyspice End of explanation reset() net_1=Net('N1'); net_2=Net('N2') skidl_R=R(ref='1', value=5, ac=5, m=5, scale=5, temp=5, dtemp=5, noisy=1) skidl_R['p', 'n']+=net_1, net_2 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.R('1', 'N1', 'N2', 5, ac=5, m=5, scale=5, temp=5, dtemp=5, noisy=1) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: R | Resistor PySpice/PySpice/Spice/BasicElement.py; class Resistor(DipoleElement) skidl/skidl/libs/pyspice_sklib.py; name="R" ngspice 3.2.1 Resistors: RXXXXXXX n| n- <resistance|r=>value <ac=val> <m=val> <scale=val> <temp=val> <dtemp=val> <tc1=val> <tc2=val> <noisy=0|1> Notes End of explanation reset() net_1=Net('N1'); net_2=Net('N2') skidl_V=V(ref='1', dc_value=5) skidl_V['p', 'n']+=net_1, net_2 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.V('1', 'N1', 'N2', dc_value=5) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: | S | Switch (voltage-controlled) | | T | Lossless transmission line | | U | Uniformly distributed RC line | V | Voltage source PySpice/PySpice/Spice/BasicElement.py; class VoltageSource(DipoleElement) skidl/skidl/libs/pyspice_sklib.py; name="V" ngspice 4.1 Independent Sources for Voltage or Current: VXXXXXXX N| N- <<DC> DC/TRAN VALUE > Notes a reduced version of ngspices VXXXXXXX only generating the arguement for <<DC> DC/TRAN VALUE > End of explanation reset() net_1=Net('N1'); net_2=Net('N2'); net_3=Net('N3') skidl_Z=Z(ref='1',model=5, area=5, m=5, off=5) skidl_Z['d', 'g', 's']+=net_1, net_2, net_3 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.Z('1', 'N1', 'N2', 'N3', model=5, area=5, m=5, off=5) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: | W | Switch (current-controlled) | | X | Subcircuit | | Y | Single lossy transmission line (TXL) | | Z | Metal 
semiconductor field effect transistor (MESFET) | Z | Metal semiconductor field effect transistor (MESFET) PySpice/PySpice/Spice/BasicElement.py; class Mesfet(JfetElement) skidl/skidl/libs/pyspice_sklib.py; name="Z" ngspice 10.1 MESFETs: ZXXXXXXX ND NG NS MNAME <AREA > <OFF> <IC=VDS, VGS> Notes ic: did not work in eather skidl or pyspice End of explanation reset() net_1=Net('N1'); net_2=Net('N2') skidl_SINV=SINEV(ref='1', #transit sim statments offset=5,amplitude=5, frequency=5 , delay=5, damping_factor=5, #ac sim statments ac_magnitude=5, dc_offset=5) skidl_SINV['p', 'n']+=net_1, net_2 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.SinusoidalVoltageSource('1', 'N1', 'N2', #transit sim statments offset=5,amplitude=5, frequency=5 , delay=5, damping_factor=5, #ac sim statments ac_magnitude=5, dc_offset=5 ) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: Highlevel Elements SinusoidalMixin Based Note in Armour's fort added as_phase SinusoidalMixin is the base translation class for sinusoid wave waveform sources, in other words even thou ngspice compines most sinusoid source as just argument extations to exsisting DC source to create AC souces through pyspice to ngspice these elements must be used SinusoidalMixin args: | Name | Parameter | Default Value | Units | |------|----------------|---------------|-------| | Vo | offset | | V, A | |------|----------------|---------------|-------| | Va | amplitude | | V, A | |------|----------------|---------------|-------| | f | frequency | 1 / TStop | Hz | |------|----------------|---------------|-------| | Td | delay | 0.0 | sec | |------|----------------|---------------|-------| | Df | damping factor | 0.01 | 1/sec | |------|----------------|---------------|-------| so for a AC SIN voltage sours it's output should be equilint to the following: $$V(t) = \begin{cases} V_o & \text{if}\ 0 \leq t < T_d, \ V_o + V_a e^{-D_f(t-T_d)} \sin\left(2\pi f (t-T_d)\right) & \text{if}\ T_d \leq t < T_{stop}. 
\end{cases}$$ SinusoidalVoltageSource (AC) PySpice/PySpice/Spice/HighLevelElement.py; class SinusoidalVoltageSource(VoltageSource, VoltageSourceMixinAbc, SinusoidalMixin) skidl/skidl/libs/pyspice_sklib.py; name="SINEV" ngspice 4.1 Independent Sources for Voltage or Current & 4.1.2 Sinusoidal: VXXXXXXX N+ N- <<DC> DC/TRAN VALUE > <AC \<ACMAG \<ACPHASE >>> <DISTOF1 \<F1MAG \<F1PHASE >>> <DISTOF2 \<F2MAG \<F2PHASE >>> SIN(VO VA FREQ TD THETA PHASE) Notes a amalgumation of ngspice's Independent Sources for Voltage & Sinusoidal statment for transint simulations End of explanation reset() net_1=Net('N1'); net_2=Net('N2') skidl_SINI=SINEI(ref='1', #transit sim statments offset=5,amplitude=5, frequency=5 , delay=5, damping_factor=5, #ac sim statments ac_magnitude=5, dc_offset=5) skidl_SINI['p', 'n']+=net_1, net_2 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.SinusoidalCurrentSource('1', 'N1', 'N2', #transit sim statments offset=5,amplitude=5, frequency=5 , delay=5, damping_factor=5, #ac sim statments ac_magnitude=5, dc_offset=5 ) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: SinusoidalCurrentSource (AC) PySpice/PySpice/Spice/HighLevelElement.py; class class SinusoidalCurrentSource(CurrentSource, CurrentSourceMixinAbc, SinusoidalMixin): skidl/skidl/libs/pyspice_sklib.py; name="SINEI" ngspice 4.1 Independent Sources for Voltage or Current & 4.1.2 Sinusoidal: IYYYYYYY N+ N- <<DC> DC/TRAN VALUE > <AC \<ACMAG \<ACPHASE >>> <DISTOF1 \<F1MAG \<F1PHASE >>> <DISTOF2 \<F2MAG \<F2PHASE >>> SIN(VO VA FREQ TD THETA PHASE) Notes a amalgumation of ngspice's Independent Sources for Voltage & Sinusoidal statment for transint simulations End of explanation reset() net_1=Net('N1'); net_2=Net('N2') # Skidle does not impliment an AcLine equivlent at this time skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.AcLine('1', 'N1', 'N2', #transit sim statments rms_voltage=8, frequency=5 ) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: AcLine(SinusoidalVoltageSource) PySpice/PySpice/Spice/HighLevelElement.py; class AcLine(SinusoidalVoltageSource) skidl/skidl/libs/pyspice_sklib.py; NOT IMPLIMENTED ngspice 4.1 Independent Sources for Voltage or Current: VXXXXXXX N+ N- <<DC> DC/TRAN VALUE > <AC \<ACMAG \<ACPHASE >>> <DISTOF1 \<F1MAG \<F1PHASE >>> <DISTOF2 \<F2MAG \<F2PHASE >>> Notes it's a pyspice only wraper around pyspices SinusoidalVoltageSource that makes a pure for transisint simulation only SIN voltage source with the only arguments being rms_voltage and frequency pyspice does the rms to amplitute conversion internaly pyspice does not have a offset arg pyspice does not have a delay arg pyspice does not have a damping_factor arg pyspice does not have a ac_magnitude arg pyspice does not have a dc_offset arg pspice still gives a AC output of the default 1V; this needs to be changed to be equal to amplitude internal value or else will give aid in producing incorect results with ac simulations End of explanation reset() net_1=Net('N1'); net_2=Net('N2') skidl_EXPV=EXPV(ref='1', #transit sim statments initial_value=5,pulsed_value=5, rise_delay_time=5 , rise_time_constant=5, fall_delay_time=5, fall_time_constant=5, ) skidl_EXPV['p', 'n']+=net_1, net_2 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.ExponentialVoltageSource('1', 'N1', 'N2', #transit sim statments initial_value=5,pulsed_value=5, rise_delay_time=5 , rise_time_constant=5, fall_delay_time=5, 
fall_time_constant=5, ) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: Highlevel Elements PulseMixin Based Highlevel Elements ExponentialMixin Based ExponentialMixin is the base translation class for exponential shped sources used for transisint simulations. Typicly used for simulating responce to charing and discharing events from capcitor/inductor networks. Pyspice does not include ac arguements that are technicly allowed by ngspice ExponentialMixin args: | Name | Parameter | Default Value | Units | |------|--------------------|---------------|-------| | V1 | Initial value | | V, A | |------|--------------------|---------------|-------| | V2 | pulsed value | | V, A | |------|--------------------|---------------|-------| | Td1 | rise delay time | 0.0 | sec | |------|--------------------|---------------|-------| | tau1 | rise time constant | Tstep | sec | |------|--------------------|---------------|-------| | Td2 | fall delay time | Td1|Tstep | sec | |------|--------------------|---------------|-------| | tau2 | fall time constant | Tstep | sec | |------|--------------------|---------------|-------| so for a expoential based voltage source it's output should be equilint to the following: $$V(t) = \begin{cases} V_1 & \text{if}\ 0 \leq t < T_{d1}, \ V_1 + V_{21} ( 1 − e^{-\frac{t-T_{d1}}{\tau_1}} ) & \text{if}\ T_{d1} \leq t < T_{d2}, \ V_1 + V_{21} ( 1 − e^{-\frac{t-T_{d1}}{\tau_1}} ) + V_{12} ( 1 − e^{-\frac{t-T_{d2}}{\tau_2}} ) & \text{if}\ T_{d2} \leq t < T_{stop} \end{cases}$$ where $V_{21} = V_2 - V_1$ and $V_{12} = V_1 - V_2$ ExponentialVoltageSource PySpice/PySpice/Spice/HighLevelElement.py; class ExponentialVoltageSource(VoltageSource, VoltageSourceMixinAbc, ExponentialMixin) skidl/skidl/libs/pyspice_sklib.py; name="EXPV" ngspice 4.1 Independent Sources for Voltage or Current & 4.1.3 Exponential: VXXXXXXX N+ N- EXP(V1 V2 TD1 TAU1 TD2 TAU2) Notes should technicly also alow dc and ac values from ngspice Independent voltage source statment End of explanation reset() net_1=Net('N1'); net_2=Net('N2') skidl_EXPI=EXPI(ref='1', #transit sim statments initial_value=5,pulsed_value=5, rise_delay_time=5 , rise_time_constant=5, fall_delay_time=5, fall_time_constant=5, ) skidl_EXPI['p', 'n']+=net_1, net_2 skidl_circ=generate_netlist() print(skidl_circ) pyspice_circ=Circuit('') pyspice_circ.ExponentialCurrentSource('1', 'N1', 'N2', #transit sim statments initial_value=5,pulsed_value=5, rise_delay_time=5 , rise_time_constant=5, fall_delay_time=5, fall_time_constant=5, ) print(pyspice_circ) netlist_comp_check(skidl_circ, pyspice_circ) Explanation: ExponentialCurrentSource PySpice/PySpice/Spice/HighLevelElement.py; class ExponentialCurrentSource(VoltageSource, VoltageSourceMixinAbc, ExponentialMixin) skidl/skidl/libs/pyspice_sklib.py; name="EXPI" ngspice 4.1 Independent Sources for Voltage or Current & 4.1.3 Exponential: IXXXXXXX N+ N- EXP(I1 I2 TD1 TAU1 TD2 TAU2) Notes should technicly also alow dc and ac values from ngspice Independent voltage source statment End of explanation
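The cells above only build and compare netlists; none of them is actually simulated, and the two-net examples deliberately have no ground node, so they could not be run as-is. As a sanity check that the high-level sources behave as described, here is a minimal transient-run sketch on a grounded sinusoidal source. It assumes PySpice can locate a working ngspice installation, and the component values are illustrative rather than taken from the comparisons above.

# Minimal transient-simulation sketch (not part of the netlist comparisons above).
# Assumes ngspice is installed where PySpice can find it; values are illustrative.
import numpy as np
from PySpice.Spice.Netlist import Circuit

test_circ = Circuit('sin_source_sanity_check')
test_circ.SinusoidalVoltageSource('1', 'n1', test_circ.gnd,
                                  amplitude=5, frequency=1e3)
test_circ.R('1', 'n1', test_circ.gnd, 1e3)   # load so the source sees a current path

sim = test_circ.simulator(temperature=25, nominal_temperature=25)
analysis = sim.transient(step_time=1e-6, end_time=3e-3)

v_n1 = np.array(analysis['n1'])
print('simulated peak: {:.2f} V (expected about 5 V)'.format(v_n1.max()))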
14,883
Given the following text description, write Python code to implement the functionality described below step by step Description: Excercises Electric Machinery Fundamentals Chapter 8 Problem 8-10 to Problem 8-11 Step1: Description | | | |-------------------------------------|--------------------------------------------| | $P_\text{rated} = 30\,hp$ | $I_\text{L,rated} = 110\,A$ | | $V_T = 240\,V$ | $n_\text{rated} = 1800\,r/min$ | | $R_A = 0.19\,\Omega$ | $R_S = 0.02\,\Omega$ | | $N_F = 2700 \text{ turns per pole}$ | $N_{SE} = 14 \text{ turns per pole}$ | | $R_F = 75\,\Omega$ | $R_\text{adj} = 100\text{ to }400\,\Omega$ | Rotational losses = 3550 W at full load. Magnetization curve as shown in Figure P8-1. <img src="figs/FigC_P8-1.jpg" width="70%"> <hr> Note Step2: Problem 8-10 Description If the motor is connected cumulatively compounded with $R_\text{adj} = 175\,\Omega$ Step3: SOLUTION At no-load conditions, $E_A = V_T = 240 V$ . The field current is given by Step4: From Figure P8-1, this field current would produce an internal generated voltage $E_{A_0}$ of 241 V at a speed Step5: r/min. Therefore, the speed n with a voltage $E_A$ of 240 V would be Step6: At full load, the armature current is Step7: The internal generated voltage $E_A$ is Step8: The equivalent field current is Step9: From Figure P8-1, this field current would produce an internal generated voltage $E_{A_0}$ of 279 V at a speed Step10: The speed regulation is Step11: The torque-speed characteristic can best be plotted with a Python program. An appropriate program is shown below. Get the magnetization curve. Note that this curve is defined for a speed of 1200 r/min. Step12: First, initialize the values needed in this program. Step13: Calculate the armature current for each load Step14: Now calculate the internal generated voltage for each armature current. Step15: Calculate the effective field current with and without armature reaction. Step16: Calculate the resulting internal generated voltage at 1800 r/min by interpolating the motor's magnetization curve. Step17: Calculate the resulting speed from Equation (8-13) Step18: Calculate the induced torque corresponding to each speed from Equation (8-10). Step19: Plot the torque-speed curves Step20: Problem 8-11 Description The motor is connected cumulatively compounded and is operating at full load. What will the new speed of the motor be if $R_\text{adj}$ is increased to $250\,\Omega$ ? How does the new speed compared to the full-load speed calculated in Problem 8-10? Step21: SOLUTION If $R_\text{adj}$ is increased to $250\,\Omega$ , the field current is given by Step22: At full load conditions, the armature current is Step23: The internal generated voltage $E_A$ is Step24: The equivalent field current is Step25: From Figure P8-1, this field current would produce an internal generated voltage $E_{A_0}$ of 268 V at a speed Step26: r/min. Therefore, the speed n with a voltage $E_A$ of 240 V would be
Python Code: %pylab notebook %precision %.4g Explanation: Excercises Electric Machinery Fundamentals Chapter 8 Problem 8-10 to Problem 8-11 End of explanation P_rated = 30 # [hp] Il_rated = 110 # [A] Vt = 240 # [V] Nf = 2700 n_0 = 1800 # [r/min] Nse = 14 Ra = 0.19 # [Ohm] Rf = 75 # [Ohm] Rs = 0.02 # [Ohm] Radj_max = 400 # [Ohm] Radj_min = 100 # [Ohm] Explanation: Description | | | |-------------------------------------|--------------------------------------------| | $P_\text{rated} = 30\,hp$ | $I_\text{L,rated} = 110\,A$ | | $V_T = 240\,V$ | $n_\text{rated} = 1800\,r/min$ | | $R_A = 0.19\,\Omega$ | $R_S = 0.02\,\Omega$ | | $N_F = 2700 \text{ turns per pole}$ | $N_{SE} = 14 \text{ turns per pole}$ | | $R_F = 75\,\Omega$ | $R_\text{adj} = 100\text{ to }400\,\Omega$ | Rotational losses = 3550 W at full load. Magnetization curve as shown in Figure P8-1. <img src="figs/FigC_P8-1.jpg" width="70%"> <hr> Note: An electronic version of this magnetization curve can be found in file p81_mag.dat, which can be used with Python programs. Column 1 contains field current in amps, and column 2 contains the internal generated voltage $E_A$ in volts. <hr> For Problems 8-10 to 8-11, the motor is connected cumulatively compounded as shown in Figure P8-4. <img src="figs/FigC_P8-4.jpg" width="70%"> End of explanation Radj_10 = 175.0 # [Ohm] Explanation: Problem 8-10 Description If the motor is connected cumulatively compounded with $R_\text{adj} = 175\,\Omega$: (a) What is the no-load speed of the motor? (b) What is the full-load speed of the motor? (c) What is its speed regulation? (d) Calculate and plot the torque-speed characteristic for this motor. (Neglect armature effects in this problem.) End of explanation If_10 = Vt / (Radj_10+Rf) If_10 Explanation: SOLUTION At no-load conditions, $E_A = V_T = 240 V$ . The field current is given by: $$I_F = \frac{V_T}{R_\text{adj}+R_F}$$ End of explanation n_0 Explanation: From Figure P8-1, this field current would produce an internal generated voltage $E_{A_0}$ of 241 V at a speed End of explanation Ea0_10_nl = 241.0 # [V] Ea_10_nl = 240.0 # [V] n_10_nl = Ea_10_nl / Ea0_10_nl * n_0 print(''' n_10_nl = {:.1f} r/min ======================'''.format(n_10_nl)) Explanation: r/min. 
Therefore, the speed n with a voltage $E_A$ of 240 V would be: $$\frac{E_A}{E_{A_0}} = \frac{n}{n_0}$$ End of explanation Ia_10 = Il_rated - Vt/(Radj_10 + Rf) Ia_10 Explanation: At full load, the armature current is: $$I_A = I_L - I_F = I_L - \frac{V_T}{R_\text{adj}+R_F}$$ End of explanation Ea_10_fl = Vt - Ia_10*(Ra+Rs) Ea_10_fl Explanation: The internal generated voltage $E_A$ is: $$E_A = V_T - I_A (R_A+R_S)$$ End of explanation If_10_ = If_10 + Nse/Nf * Ia_10 If_10_ Explanation: The equivalent field current is: $$I_F^* = I_F + \frac{N_{SE}}{N_F}I_A$$ End of explanation n_0 Ea0_10_fl = 279.0 # [V] n_10_fl = Ea_10_fl / Ea0_10_fl * n_0 print(''' n_10_fl = {:.1f} r/min ======================'''.format(n_10_fl)) Explanation: From Figure P8-1, this field current would produce an internal generated voltage $E_{A_0}$ of 279 V at a speed End of explanation SR = (n_10_nl - n_10_fl) / n_10_fl print(''' SR = {:.1f} % ==========='''.format(SR*100)) Explanation: The speed regulation is: $$SR = \frac{n_\text{nl}-n_\text{fl}}{n_\text{fl}}$$ End of explanation #Load the magnetization curve data import pandas as pd # The data file is stored in the repository fileUrl = 'data/p81_mag.dat' data = pd.read_csv(fileUrl, # the address where to download the datafile from sep=' ', # our data source uses a blank space as separation comment='%', # ignore lines starting with a "%" skipinitialspace = True, # ignore intital spaces header=None, # we don't have a header line defined... names=['If_values', 'Ea_values'] # ...instead we define the names here ) Explanation: The torque-speed characteristic can best be plotted with a Python program. An appropriate program is shown below. Get the magnetization curve. Note that this curve is defined for a speed of 1200 r/min. End of explanation Radj_10 = 175.0 # [Ohm] Il_10 = linspace(0, 110, 111) Explanation: First, initialize the values needed in this program. End of explanation Ia_10 = Il_10 - Vt / (Rf + Radj_10) Explanation: Calculate the armature current for each load End of explanation Ea_10 = Vt - Ia_10*(Ra+Rs) Explanation: Now calculate the internal generated voltage for each armature current. End of explanation If_10 = Vt / (Rf + Radj_10) + Nse/Nf * Ia_10 Explanation: Calculate the effective field current with and without armature reaction. End of explanation Eao_10 = interp(If_10,data['If_values'],data['Ea_values']) Explanation: Calculate the resulting internal generated voltage at 1800 r/min by interpolating the motor's magnetization curve. End of explanation n_10 = ( Ea_10 / Eao_10 ) * n_0 Explanation: Calculate the resulting speed from Equation (8-13) End of explanation tau_ind_10 = Ea_10 * Ia_10 / (n_10 * 2 * pi / 60) Explanation: Calculate the induced torque corresponding to each speed from Equation (8-10). End of explanation title(r'Shunt DC Motor Torque-Speed Characteristic') xlabel(r'$\tau_{ind}$ [Nm]') ylabel(r'$n_m$ [r/min]') axis([ 0, 170 ,1390,1810]) #set the axis range plot(tau_ind_10,n_10) grid() Explanation: Plot the torque-speed curves End of explanation Radj_11 = 250.0 # [Ohm] Explanation: Problem 8-11 Description The motor is connected cumulatively compounded and is operating at full load. What will the new speed of the motor be if $R_\text{adj}$ is increased to $250\,\Omega$ ? How does the new speed compared to the full-load speed calculated in Problem 8-10? 
End of explanation If_11 = Vt / (Radj_11+Rf) If_11 Explanation: SOLUTION If $R_\text{adj}$ is increased to $250\,\Omega$ , the field current is given by: End of explanation Ia_11 = Il_rated - Vt/(Radj_11 + Rf) Ia_11 Explanation: At full load conditions, the armature current is: End of explanation Ea_11 = Vt - Ia_11*(Ra+Rs) Ea_11 Explanation: The internal generated voltage $E_A$ is: End of explanation If_11_ = If_11 + Nse/Nf * Ia_11 If_11_ Explanation: The equivalent field current is: End of explanation n_0 Explanation: From Figure P8-1, this field current would produce an internal generated voltage $E_{A_0}$ of 268 V at a speed End of explanation Ea0_11 = 268.0 # [r/min] Ea_11 = Ea_10_fl n_11 = Ea_11 / Ea0_11 * n_0 print(''' n_11 = {:.1f} r/min ==================='''.format(n_11)) Explanation: r/min. Therefore, the speed n with a voltage $E_A$ of 240 V would be: End of explanation
14,884
Given the following text description, write Python code to implement the functionality described below step by step Description: European Extremely Large Telescope site selection a comparison between real selection and multicriteria-decision-analysis suggestions Juan B Cabral – Bruno O Sanchez – Manuel Starck Cuffini Instituto de Astronomía Teórica y Experimental [email protected] [email protected] [email protected] Data Mangling Step1: Final Data Step2: TOPSIS https Step3: Weighted Sum Model https Step4: Weighted Product https Step5: ELECTRE 1 https Step6: Results
Python Code: import pandas as pd import numpy as np import skcriteria as sc from skcriteria.madm import topsis, wsum, moora, wprod, electre df = pd.read_csv("sites.csv")[:-3] df anames = df.columns[2:].values def to_apply(r): new = [] for e in r: if isinstance(e , str): e = float(e.replace("*", "")) new.append(e) return new mtx = df[anames][3:].dropna()[anames].apply(to_apply).T.values criteria = [sc.MIN if c == "min" else sc.MAX for c in df.Criteria[3:][~df.Armazones.isnull()].values] cnames = df["Criteria/Alternatives"][3:].apply(lambda r: r.strip())[~df[3:].Armazones.isnull()].values cnames = map(lambda s: s.decode("utf8"), cnames) data = sc.Data(mtx, criteria, anames=anames, cnames=cnames) Explanation: European Extremely Large Telescope site selection a comparison between real selection and multicriteria-decision-analysis suggestions Juan B Cabral – Bruno O Sanchez – Manuel Starck Cuffini Instituto de Astronomía Teórica y Experimental [email protected] [email protected] [email protected] Data Mangling End of explanation data Explanation: Final Data End of explanation dm = topsis.TOPSIS() topsis_dec = dm.decide(data) topsis_dec Explanation: TOPSIS https://en.wikipedia.org/wiki/TOPSIS End of explanation dm = wsum.MDWeightedSum() wsum_dec = dm.decide(data) wsum_dec Explanation: Weighted Sum Model https://en.wikipedia.org/wiki/Weighted_sum_model End of explanation dm = wprod.WeightedProduct() wprod_dec = dm.decide(data) wprod_dec Explanation: Weighted Product https://en.wikipedia.org/wiki/Weighted_product_model End of explanation dm = electre.ELECTRE1() electre_dec = dm.decide(data) electre_dec Explanation: ELECTRE 1 https://en.wikipedia.org/wiki/ELECTRE End of explanation methods = ["TOPSIS", "WSUM", "WPROD"] kernel = np.array([1 if idx in electre_dec.kernel_ else 0 for idx, _ in enumerate(anames)]) ranks = np.vstack((topsis_dec.rank_, wsum_dec.rank_, wprod_dec.rank_, kernel)).T rdf = pd.DataFrame(ranks, index=anames, columns=methods + ["ELECTRE 1 Kernel"]) rdf rdf[methods].T.describe() Explanation: Results End of explanation
14,885
Given the following text description, write Python code to implement the functionality described below step by step Description: Designing a Python library for building prototypes around MinHash This is very much work-in-progress. May be the software and or ideas presented with be the subject of a peer-reviewed or self-published write-up. For now the URL for this is Step1: Kicking the tires with sourmash The executable sourmash is a nice package from the dib-lab implemented in Python and including a library [add reference here]. Perfect for trying out quick what MinHash sketches can do. We will create a MinHash of maximum size 1000 (1000 elements) and of k-mer size 21 (all ngrams of length 21 across the input sequences will be considered for inclusion in the MinHash. At the time of writing MinHash is implemented in C/C++ and use that as a reference for speed, as we measure the time it takes to process our 1M reference sequence Step2: This is awesome. The sketch for a bacteria-sized DNA sequence can be computed very quickly (about a second on my laptop). Redisigning it all for convenience and flexibility We have redesigned what a class could look like, and implemented that design in Python foremost for our own convenience and to match the claim of convenience. Now how bad is the impact on performance ? Our new design allows flexibility with respect to the hash function used, and to initially illustrate our point we use mmh an existing Python package wrapping MurmurHash3, the hashing function used in MASH and sourmash. Step3: Ah. Our Python implementation only using mmh3 and the standard library is only a bit slower. There is more to it though. The code in "mashingpumpkins" is doing more by keeping track of the k-mer/n-gram along with the hash value in order to allow the generation of inter-operable sketch [add reference to discussion on GitHub]. Our design in computing batches of hash values each time C is reached for MurmurHash3. We have implemented the small C function require to call MurmurHash for several k-mers, and when using it we have interesting performance gains. Step4: Wow! At the time of writing this is between 1.5 and 2.5 times faster than C-implemented sourmash. And we are doing more work (we are keeping the ngrams / kmers associated with hash values). We can modifying our class to stop storing the associated k-mer (only keep the hash value) to see if it improves performances Step5: Still pretty good, the code for the check is not particularly optimal (that's the kind of primitives that would go to C). MASH quirks Unfortunately this is not quite what MASH (sourmash is based on) is doing. Tim highlighted what is happening Step6: So now the claim is that we are just like sourmash/MASH, but mostly in Python and faster. We check that the sketches are identical, and they are Step7: Parallel processing Now what about parallel processing ? Step8: We have just made sourmash/MASH about 2 times faster... some of the times. Parallelization does not always bring speedups (depends on the size of the sketch and on the length of the sequence for which the sketch is built). Scaling up Now how much time should it take to compute signature for various references ? 
First we check quickly that the time is roughly proportional to the size of the reference Step9: The rate (MB/s) with which a sequence is processed seems to strongly depend on the size of the input sequence for the mashingpumpkins implementation (suggesting a significant setup cost than is amortized as the sequence is getting longer), and parallelization achieve a small boost in performance (with the size of the sketch apparently counteracting that small boost). Our implementation also appears to be scaling better with increasing sequence size (relatively faster as the size is increasing). Keeping the kmers comes with a slight cost for the larger max_size values (not shown). Our Python implementation is otherwise holding up quite well. XXHash appears give slightly faster processing rates in the best case, and makes no difference compared with MurmushHash3 in other cases (not shown). Step10: One can also observe that the performance dip for the largest max_size value is recovering as the input sequence is getting longer. We verifiy this with a .1GB reference and max_size equal to 20,000. Step11: In comparison, this is what sourmash manages to achieve
Python Code: # we take a DNA sequence as an example, but this is arbitrary and not necessary. alphabet = b'ATGC' # create a lookup structure to go from byte to 4-mer # (a arbitrary byte is a bitpacked 4-mer) quad = [None, ]*(len(alphabet)**4) i = 0 for b1 in alphabet: for b2 in alphabet: for b3 in alphabet: for b4 in alphabet: quad[i] = bytes((b1, b2, b3, b4)) i += 1 # random bytes for a 3M genome (order of magnitude for a bacterial genome) import ssl def make_rnd_sequence(size): sequencebitpacked = ssl.RAND_bytes(int(size/4)) sequence = bytearray(int(size)) for i, b in zip(range(0, len(sequence), 4), sequencebitpacked): sequence[i:(i+4)] = quad[b] return bytes(sequence) size = int(2E6) sequence = make_rnd_sequence(size) import time class timedblock(object): def __enter__(self): self.tenter = time.time() return self def __exit__(self, type, value, traceback): self.texit = time.time() @property def duration(self): return self.texit - self.tenter Explanation: Designing a Python library for building prototypes around MinHash This is very much work-in-progress. May be the software and or ideas presented with be the subject of a peer-reviewed or self-published write-up. For now the URL for this is: https://github.com/lgautier/mashing-pumpkins MinHash in the context of biological sequenced was introduced by the Maryland Bioinformatics Lab [add reference here]. Building a MinHash is akin to taking a sample of all k-mers / n-grams found in a sequence and using that sample as a signature or sketch for that sequence. A look at convenience vs performance Moving Python code to C leads to performance improvement... sometimes. Test sequence First we need a test sequence. Generating a random one quickly can be achieved as follows, for example. If you already have you own way to generate a sequence, or your own benchmark sequence, the following code cell can be changed so as to end up with a variable sequence that is a bytes object containing it. End of explanation from sourmash_lib._minhash import MinHash SKETCH_SIZE = 5000 sequence_str = sequence.decode("utf-8") with timedblock() as tb: smh = MinHash(SKETCH_SIZE, 21) smh.add_sequence(sequence_str) t_sourmash = tb.duration print("%.2f seconds / sequence" % t_sourmash) Explanation: Kicking the tires with sourmash The executable sourmash is a nice package from the dib-lab implemented in Python and including a library [add reference here]. Perfect for trying out quick what MinHash sketches can do. We will create a MinHash of maximum size 1000 (1000 elements) and of k-mer size 21 (all ngrams of length 21 across the input sequences will be considered for inclusion in the MinHash. At the time of writing MinHash is implemented in C/C++ and use that as a reference for speed, as we measure the time it takes to process our 1M reference sequence End of explanation # make a hashing function to match our design import mmh3 def hashfun(sequence, nsize, hbuffer, w=100): n = min(len(hbuffer), len(sequence)-nsize+1) for i in range(n): ngram = sequence[i:(i+nsize)] hbuffer[i] = mmh3.hash64(ngram)[0] return n from mashingpumpkins.minhashsketch import MinSketch from array import array with timedblock() as tb: mhs = MinSketch(21, SKETCH_SIZE, hashfun, 42) mhs.add(sequence, hashbuffer=array("q", [0,]*200)) t_basic = tb.duration print("%.2f seconds / sequence" % (t_basic)) print("Our Python implementation is %.2f times slower." % (t_basic / t_sourmash)) Explanation: This is awesome. 
The sketch for a bacteria-sized DNA sequence can be computed very quickly (about a second on my laptop). Redisigning it all for convenience and flexibility We have redesigned what a class could look like, and implemented that design in Python foremost for our own convenience and to match the claim of convenience. Now how bad is the impact on performance ? Our new design allows flexibility with respect to the hash function used, and to initially illustrate our point we use mmh an existing Python package wrapping MurmurHash3, the hashing function used in MASH and sourmash. End of explanation from mashingpumpkins._murmurhash3 import hasharray hashfun = hasharray with timedblock() as tb: hashbuffer = array('Q', [0, ] * 300) mhs = MinSketch(21, SKETCH_SIZE, hashfun, 42) mhs.add(sequence, hashbuffer=hashbuffer) t_batch = tb.duration print("%.2f seconds / sequence" % (t_batch)) print("Our Python implementation is %.2f times faster." % (t_sourmash / t_batch)) Explanation: Ah. Our Python implementation only using mmh3 and the standard library is only a bit slower. There is more to it though. The code in "mashingpumpkins" is doing more by keeping track of the k-mer/n-gram along with the hash value in order to allow the generation of inter-operable sketch [add reference to discussion on GitHub]. Our design in computing batches of hash values each time C is reached for MurmurHash3. We have implemented the small C function require to call MurmurHash for several k-mers, and when using it we have interesting performance gains. End of explanation from mashingpumpkins._murmurhash3 import hasharray hashfun = hasharray from array import array trans_tbl = bytearray(256) for x,y in zip(b'ATGC', b'TACG'): trans_tbl[x] = y def revcomp(sequence): ba = bytearray(sequence) ba.reverse() ba = ba.translate(trans_tbl) return ba class MyMash(MinSketch): def add(self, seq, hashbuffer=array('Q', [0, ]*300)): ba = revcomp(sequence) if ba.find(0) >= 0: raise ValueError("Input sequence is not DNA") super().add(sequence, hashbuffer=hashbuffer) super().add(ba, hashbuffer=hashbuffer) with timedblock() as tb: mhs = MyMash(21, SKETCH_SIZE, hashfun, 42) mhs.add(sequence) t_batch = tb.duration print("%.2f seconds / sequence" % (t_batch)) print("Our Python implementation is %.2f times faster." % (t_sourmash / t_batch)) Explanation: Wow! At the time of writing this is between 1.5 and 2.5 times faster than C-implemented sourmash. And we are doing more work (we are keeping the ngrams / kmers associated with hash values). We can modifying our class to stop storing the associated k-mer (only keep the hash value) to see if it improves performances: However, as it was pointed out sourmash's minhash also checking that the sequenceo only uses letters from the DNA alphabet and computes the sketch for both the sequence and its reverse complement. We add these 2 operations (check and reverse complement) in a custom child class: End of explanation from mashingpumpkins import _murmurhash3_mash def hashfun(sequence, nsize, buffer=array('Q', [0,]*300), seed=42): return _murmurhash3_mash.hasharray_withrc(sequence, revcomp(sequence), nsize, buffer, seed) with timedblock() as tb: hashbuffer = array('Q', [0, ] * 300) mhs = MinSketch(21, SKETCH_SIZE, hashfun, 42) mhs.add(sequence) t_batch = tb.duration print("%.2f seconds / sequence" % (t_batch)) print("Our Python implementation is %.2f times faster." 
% (t_sourmash / t_batch)) Explanation: Still pretty good, the code for the check is not particularly optimal (that's the kind of primitives that would go to C). MASH quirks Unfortunately this is not quite what MASH (sourmash is based on) is doing. Tim highlighted what is happening: for every ngram and its reverse complement, the one with the lowest lexicograph order is picked for inclusion in the sketch. Essentially, picking segment chunks depending on the lexicographic order of the chunk's direct sequence vs its reverse complement is a sampling/filtering strategy at local level before the hash value is considered for inclusion in the MinHash. The only possible reason for this could be the because the hash value is expensive to compute (but this does not seem to be the case). Anyway, writing a slightly modified batch C function that does that extra sampling/filtering is easy and let's use conserve our design. We can then implement a MASH-like sampling in literally one line: End of explanation len(set(smh.get_mins()) ^ mhs._heapset) Explanation: So now the claim is that we are just like sourmash/MASH, but mostly in Python and faster. We check that the sketches are identical, and they are: End of explanation from mashingpumpkins.sequence import chunkpos_iter import ctypes import multiprocessing from functools import reduce import time NSIZE = 21 SEED = 42 def build_mhs(args): sketch_size, nsize, sequence = args mhs = MinSketch(nsize, sketch_size, hashfun, SEED) mhs.add(sequence) return mhs res_mp = [] for l_seq in (int(x) for x in (1E6, 5E6, 1E7, 5E7)): sequence = make_rnd_sequence(l_seq) for sketch_size in (1000, 5000, 10000): sequence_str = sequence.decode("utf-8") with timedblock() as tb: smh = MinHash(sketch_size, 21) smh.add_sequence(sequence_str) t_sourmash = tb.duration with timedblock() as tb: ncpu = 2 p = multiprocessing.Pool(ncpu) # map step (parallel in chunks) result = p.imap_unordered(build_mhs, ((sketch_size, NSIZE, sequence[begin:end]) for begin, end in chunkpos_iter(NSIZE, l_seq, l_seq//ncpu))) # reduce step (reducing as chunks are getting ready) mhs_mp = reduce(lambda x, y: x+y, result, next(result)) p.terminate() t_pbatch = tb.duration res_mp.append((l_seq, t_pbatch, sketch_size, t_sourmash)) from rpy2.robjects.lib import dplyr, ggplot2 as ggp from rpy2.robjects.vectors import IntVector, FloatVector, StrVector, BoolVector from rpy2.robjects import Formula dataf = dplyr.DataFrame({'l_seq': IntVector([x[0] for x in res_mp]), 'time': FloatVector([x[1] for x in res_mp]), 'sketch_size': IntVector([x[2] for x in res_mp]), 'ref_time': FloatVector([x[3] for x in res_mp])}) p = (ggp.ggplot(dataf) + ggp.geom_line(ggp.aes_string(x='l_seq', y='log2(ref_time/time)', color='factor(sketch_size, ordered=TRUE)'), size=3) + ggp.scale_x_sqrt("sequence length") + ggp.theme_gray(base_size=18) + ggp.theme(legend_position="top", axis_text_x = ggp.element_text(angle = 90, hjust = 1)) ) import rpy2.ipython.ggplot rpy2.ipython.ggplot.image_png(p, width=1000, height=500) Explanation: Parallel processing Now what about parallel processing ? 
End of explanation SEED = 42 def run_sourmash(sketchsize, sequence, nsize): sequence_str = sequence.decode("utf-8") with timedblock() as tb: smh = MinHash(sketchsize, nsize) smh.add_sequence(sequence_str) return {'t': tb.duration, 'what': 'sourmash', 'keepngrams': False, 'l_sequence': len(sequence), 'bufsize': 0, 'nsize': nsize, 'sketchsize': sketchsize} def run_mashingp(cls, bufsize, sketchsize, sequence, hashfun, nsize): hashbuffer = array('Q', [0, ] * bufsize) with timedblock() as tb: mhs = cls(nsize, sketchsize, hashfun, SEED) mhs.add(sequence, hashbuffer=hashbuffer) keepngrams = True return {'t': tb.duration, 'what': 'mashingpumpkins', 'keepngrams': keepngrams, 'l_sequence': len(sequence), 'bufsize': bufsize, 'nsize': nsize, 'sketchsize': sketchsize} import gc def run_mashingmp(cls, bufsize, sketchsize, sequence, hashfun, nsize): with timedblock() as tb: ncpu = 2 p = multiprocessing.Pool(ncpu) l_seq = len(sequence) result = p.imap_unordered(build_mhs, ((sketchsize, NSIZE, sequence[begin:end]) for begin, end in chunkpos_iter(nsize, l_seq, l_seq//ncpu)) ) # reduce step (reducing as chunks are getting ready) mhs_mp = reduce(lambda x, y: x+y, result, next(result)) p.terminate() return {'t': tb.duration, 'what': 'mashingpumpinks-2p', 'keepngrams': True, 'l_sequence': len(sequence), 'bufsize': bufsize, 'nsize': nsize, 'sketchsize': sketchsize} from ipywidgets import FloatProgress from IPython.display import display res = list() bufsize = 300 seqsizes = (5E5, 1E6, 5E6, 1E7) sketchsizes = [int(x) for x in (5E3, 1E4, 5E4, 1E5)] f = FloatProgress(min=0, max=len(seqsizes)*len(sketchsizes)*2) display(f) for seqsize in (int(s) for s in seqsizes): env = dict() sequencebitpacked = ssl.RAND_bytes(int(seqsize/4)) sequencen = bytearray(int(seqsize)) for i, b in zip(range(0, len(sequencen), 4), sequencebitpacked): sequencen[i:(i+4)] = quad[b] sequencen = bytes(sequencen) for sketchsize in sketchsizes: for nsize in (21, 31): tmp = run_sourmash(sketchsize, sequencen, nsize) tmp.update([('hashfun', 'murmurhash3')]) res.append(tmp) for funname, hashfun in (('murmurhash3', hasharray),): tmp = run_mashingp(MinSketch, bufsize, sketchsize, sequencen, hashfun, nsize) tmp.update([('hashfun', funname)]) res.append(tmp) tmp = run_mashingmp(MinSketch, bufsize, sketchsize, sequencen, hashfun, nsize) tmp.update([('hashfun', funname)]) res.append(tmp) f.value += 1 from rpy2.robjects.lib import dplyr, ggplot2 as ggp from rpy2.robjects.vectors import IntVector, FloatVector, StrVector, BoolVector from rpy2.robjects import Formula d = dict((n, FloatVector([x[n] for x in res])) for n in ('t',)) d.update((n, StrVector([x[n] for x in res])) for n in ('what', 'hashfun')) d.update((n, BoolVector([x[n] for x in res])) for n in ('keepngrams', )) d.update((n, IntVector([x[n] for x in res])) for n in ('l_sequence', 'bufsize', 'sketchsize', 'nsize')) dataf = dplyr.DataFrame(d) p = (ggp.ggplot((dataf .filter("hashfun != 'xxhash'") .mutate(nsize='paste0("k=", nsize)', implementation='paste0(what, ifelse(keepngrams, "(w/ kmers)", ""))'))) + ggp.geom_line(ggp.aes_string(x='l_sequence', y='l_sequence/t/1E6', color='implementation', group='paste(implementation, bufsize, nsize, keepngrams)'), alpha=1) + ggp.facet_grid(Formula('nsize~sketchsize')) + ggp.scale_x_log10('sequence length') + ggp.scale_y_continuous('MB/s') + ggp.scale_color_brewer('Implementation', palette="Set1") + ggp.theme_gray(base_size=18) + ggp.theme(legend_position="top", axis_text_x = ggp.element_text(angle = 90, hjust = 1)) ) import rpy2.ipython.ggplot 
rpy2.ipython.ggplot.image_png(p, width=1000, height=500) Explanation: We have just made sourmash/MASH about 2 times faster... some of the times. Parallelization does not always bring speedups (depends on the size of the sketch and on the length of the sequence for which the sketch is built). Scaling up Now how much time should it take to compute signature for various references ? First we check quickly that the time is roughly proportional to the size of the reference: End of explanation dataf_plot = ( dataf .filter("hashfun != 'xxhash'") .mutate(nsize='paste0("k=", nsize)', implementation='paste0(what, ifelse(keepngrams, "(w/ kmers)", ""))') ) dataf_plot2 = (dataf_plot.filter('implementation!="sourmash"') .inner_join( dataf_plot.filter('implementation=="sourmash"') .select('t', 'nsize', 'sketchsize', 'l_sequence'), by=StrVector(('nsize', 'sketchsize', 'l_sequence')))) p = (ggp.ggplot(dataf_plot2) + ggp.geom_line(ggp.aes_string(x='l_sequence', y='log2(t.y/t.x)', color='implementation', group='paste(implementation, bufsize, nsize, keepngrams)'), alpha=1) + ggp.facet_grid(Formula('nsize~sketchsize')) + ggp.scale_x_log10('sequence length') + ggp.scale_y_continuous('log2(time ratio)') + ggp.scale_color_brewer('Implementation', palette="Set1") + ggp.theme_gray(base_size=18) + ggp.theme(legend_position="top", axis_text_x = ggp.element_text(angle = 90, hjust = 1)) ) import rpy2.ipython.ggplot rpy2.ipython.ggplot.image_png(p, width=1000, height=500) Explanation: The rate (MB/s) with which a sequence is processed seems to strongly depend on the size of the input sequence for the mashingpumpkins implementation (suggesting a significant setup cost than is amortized as the sequence is getting longer), and parallelization achieve a small boost in performance (with the size of the sketch apparently counteracting that small boost). Our implementation also appears to be scaling better with increasing sequence size (relatively faster as the size is increasing). Keeping the kmers comes with a slight cost for the larger max_size values (not shown). Our Python implementation is otherwise holding up quite well. XXHash appears give slightly faster processing rates in the best case, and makes no difference compared with MurmushHash3 in other cases (not shown). End of explanation seqsize = int(1E8) print("generating sequence:") f = FloatProgress(min=0, max=seqsize) display(f) sequencebitpacked = ssl.RAND_bytes(int(seqsize/4)) sequencen = bytearray(int(seqsize)) for i, b in zip(range(0, len(sequencen), 4), sequencebitpacked): sequencen[i:(i+4)] = quad[b] if i % int(1E4) == 0: f.value += int(1E4) f.value = i+4 sequencen = bytes(sequencen) sketchsize = 20000 bufsize = 1000 nsize = 21 funname, hashfun = ('murmurhash3', hasharray) tmp = run_mashingmp(MinSketch, bufsize, sketchsize, sequencen, hashfun, nsize) print("%.2f seconds" % tmp['t']) print("%.2f MB / second" % (tmp['l_sequence']/tmp['t']/1E6)) Explanation: One can also observe that the performance dip for the largest max_size value is recovering as the input sequence is getting longer. We verifiy this with a .1GB reference and max_size equal to 20,000. End of explanation tmp_sm = run_sourmash(sketchsize, sequencen, nsize) print("%.2f seconds" % tmp_sm['t']) print("%.2f MB / second" % (tmp_sm['l_sequence']/tmp_sm['t']/1E6)) Explanation: In comparison, this is what sourmash manages to achieve: End of explanation
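Once two sketches are available, the reason for keeping the smallest hash values is that they give an estimate of the Jaccard similarity between the two k-mer sets. A minimal sketch of the standard bottom-k estimator, with made-up hash values standing in for real sketch contents:

sk_a = {1, 3, 5, 7, 9, 11, 13, 15}     # pretend these are the retained hash values of sketch A
sk_b = {1, 2, 5, 7, 8, 11, 14, 15}     # ... and of sketch B, both built with maxsize = 8
maxsize = 8
union_lowest = sorted(sk_a | sk_b)[:maxsize]   # bottom-k of the union of the two sketches
shared = sum(1 for h in union_lowest if h in sk_a and h in sk_b)
print("estimated Jaccard similarity:", shared / len(union_lowest))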
14,886
Given the following text description, write Python code to implement the functionality described below step by step Description: Data poisoning attack In this notebook, we use a convex optimization layer to perform a data poisoning attack; i.e., we show how to perturb the data used to train a logistic regression classifier so as to maximally increase the test loss. This example is also presented in section 6.1 of the paper Differentiable convex optimization layers. Step1: We are given training data $(x_i, y_i){i=1}^{N}$, where $x_i\in\mathbf{R}^n$ are feature vectors and $y_i\in{0,1}$ are the labels. Suppose we fit a model for this classification problem by solving \begin{equation} \begin{array}{ll} \mbox{minimize} & \frac{1}{N}\sum{i=1}^N \ell(\theta; x_i, y_i) + r(\theta), \end{array} \label{eq Step2: Assume that our training data is subject to a data poisoning attack, before it is supplied to us. The adversary has full knowledge of our modeling choice, meaning that they know the form of the optimization problem above, and seeks to perturb the data to maximally increase our loss on the test set, to which they also have access. The adversary is permitted to apply an additive perturbation $\delta_i \in \mathbf{R}^n$ to each of the training points $x_i$, with the perturbations satisfying $\|\delta_i\|_\infty \leq 0.01$. Let $\theta^\star$ be optimal. The gradient of the test loss with respect to a training data point, $\nabla_{x_i} \mathcal{L}^{\mathrm{test}}(\theta^\star)$, gives the direction in which the point should be moved to achieve the greatest increase in test loss. Hence, one reasonable adversarial policy is to set $x_i Step3: Below, we plot the gradient of the test loss with respect to the training data points. The blue and orange points are training data, belonging to different classes. The red line is the hyperplane learned by fitting the the model, while the blue line is the hyperplane that minimizes the test loss. The gradients are visualized as black lines, attached to the data points. Moving the points in the gradient directions torques the learned hyperplane away from the optimal hyperplane for the test set.
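The attack step itself is a one-liner once the gradient of the test loss with respect to the training points is available. A minimal NumPy sketch with made-up numbers (in the notebook below the gradient comes out of the differentiable logistic-regression layer):

import numpy as np

Xtrain = np.array([[2.0, 1.0], [-1.5, -2.0]])    # toy training points
grad = np.array([[0.3, -0.7], [0.0, 1.2]])       # toy d(test loss)/d(training points)
eps = 0.01
X_poisoned = Xtrain + eps * np.sign(grad)        # stays inside the allowed L-infinity ball of radius 0.01
print(X_poisoned)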
Python Code: import cvxpy as cp import matplotlib.pyplot as plt import numpy as np import torch from cvxpylayers.torch import CvxpyLayer Explanation: Data poisoning attack In this notebook, we use a convex optimization layer to perform a data poisoning attack; i.e., we show how to perturb the data used to train a logistic regression classifier so as to maximally increase the test loss. This example is also presented in section 6.1 of the paper Differentiable convex optimization layers. End of explanation from sklearn.datasets import make_blobs from sklearn.model_selection import train_test_split torch.manual_seed(0) np.random.seed(0) n = 2 N = 60 X, y = make_blobs(N, n, centers=np.array([[2, 2], [-2, -2]]), cluster_std=3) Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=.5) Xtrain, Xtest, ytrain, ytest = map( torch.from_numpy, [Xtrain, Xtest, ytrain, ytest]) Xtrain.requires_grad_(True) m = Xtrain.shape[0] a = cp.Variable((n, 1)) b = cp.Variable((1, 1)) X = cp.Parameter((m, n)) Y = ytrain.numpy()[:, np.newaxis] log_likelihood = (1. / m) * cp.sum( cp.multiply(Y, X @ a + b) - cp.logistic(X @ a + b) ) regularization = - 0.1 * cp.norm(a, 1) - 0.1 * cp.sum_squares(a) prob = cp.Problem(cp.Maximize(log_likelihood + regularization)) fit_logreg = CvxpyLayer(prob, [X], [a, b]) Explanation: We are given training data $(x_i, y_i){i=1}^{N}$, where $x_i\in\mathbf{R}^n$ are feature vectors and $y_i\in{0,1}$ are the labels. Suppose we fit a model for this classification problem by solving \begin{equation} \begin{array}{ll} \mbox{minimize} & \frac{1}{N}\sum{i=1}^N \ell(\theta; x_i, y_i) + r(\theta), \end{array} \label{eq:trainlinear} \end{equation} where the loss function $\ell(\theta; x_i, y_i)$ is convex in $\theta \in \mathbf{R}^n$ and $r(\theta)$ is a convex regularizer. We hope that the test loss $\mathcal{L}^{\mathrm{test}}(\theta) = \frac{1}{M}\sum_{i=1}^M \ell(\theta; \tilde x_i, \tilde y_i)$ is small, where $(\tilde x_i, \tilde y_i)_{i=1}^{M}$ is our test set. In this example, we use the logistic loss \begin{equation} \ell(\theta; x_i, y_i) = \log(1 + \exp(\beta^Tx_i + b)) - y_i(\beta^Tx_i + b) \end{equation} with elastic net regularization \begin{equation} r(\theta) = 0.1\|\beta\|_1 + 0.1\|\beta\|_2^2. \end{equation} End of explanation from sklearn.linear_model import LogisticRegression a_tch, b_tch = fit_logreg(Xtrain) loss = 300 * torch.nn.BCEWithLogitsLoss()((Xtest @ a_tch + b_tch).squeeze(), ytest*1.0) loss.backward() Xtrain_grad = Xtrain.grad Explanation: Assume that our training data is subject to a data poisoning attack, before it is supplied to us. The adversary has full knowledge of our modeling choice, meaning that they know the form of the optimization problem above, and seeks to perturb the data to maximally increase our loss on the test set, to which they also have access. The adversary is permitted to apply an additive perturbation $\delta_i \in \mathbf{R}^n$ to each of the training points $x_i$, with the perturbations satisfying $\|\delta_i\|_\infty \leq 0.01$. Let $\theta^\star$ be optimal. The gradient of the test loss with respect to a training data point, $\nabla_{x_i} \mathcal{L}^{\mathrm{test}}(\theta^\star)$, gives the direction in which the point should be moved to achieve the greatest increase in test loss. Hence, one reasonable adversarial policy is to set $x_i := x_i + .01\mathrm{sign}(\nabla_{x_i}\mathcal{L}^{\mathrm{test}}(\theta^\star))$. 
The quantity $0.01\sum_{i=1}^N \|\nabla_{x_i} \mathcal{L}^{\mathrm{test}}(\theta^\star)\|_1$ is the predicted increase in our test loss due to the poisoning. End of explanation lr = LogisticRegression(solver='lbfgs') lr.fit(Xtest.numpy(), ytest.numpy()) beta_train = a_tch.detach().numpy().flatten() beta_test = lr.coef_.flatten() b_train = b_tch.squeeze().detach().numpy() b_test = lr.intercept_[0] hyperplane = lambda x, beta, b: - (b + beta[0] * x) / beta[1] Xtrain_np = Xtrain.detach().numpy() Xtrain_grad_np = Xtrain_grad.numpy() ytrain_np = ytrain.numpy().astype(np.bool) plt.figure() plt.scatter(Xtrain_np[ytrain_np, 0], Xtrain_np[ytrain_np, 1], s=25, marker='+') plt.scatter(Xtrain_np[~ytrain_np, 0], Xtrain_np[~ytrain_np, 1], s=25, marker='*') for i in range(m): plt.arrow(Xtrain_np[i, 0], Xtrain_np[i, 1], Xtrain_grad_np[i, 0], Xtrain_grad_np[i, 1]) plt.xlim(-8, 8) plt.ylim(-8, 8) plt.plot(np.linspace(-8, 8, 100), [hyperplane(x, beta_train, b_train) for x in np.linspace(-8, 8, 100)], '--', color='red', label='train') plt.plot(np.linspace(-8, 8, 100), [hyperplane(x, beta_test, b_test) for x in np.linspace(-8, 8, 100)], '-', color='blue', label='test') plt.legend() plt.savefig("data_poisoning.pdf") plt.show() Explanation: Below, we plot the gradient of the test loss with respect to the training data points. The blue and orange points are training data, belonging to different classes. The red line is the hyperplane learned by fitting the the model, while the blue line is the hyperplane that minimizes the test loss. The gradients are visualized as black lines, attached to the data points. Moving the points in the gradient directions torques the learned hyperplane away from the optimal hyperplane for the test set. End of explanation
14,887
Given the following text description, write Python code to implement the functionality described below step by step Description: Testing an <span style="font-variant Step1: Note that this grammar does not contain any embedded actions. Hence we cannot compute anything with it. We will only be able to check whether a given string is generated by this grammar. Provided you have stored the file antlr-4.9-complete.jar in the directory /usr/local/lib/ we can generate both the scanner and the parser using the following command Step2: The files ExprLexer.py and ExprParser.py contain the generated scanner and parser, respectively. If we want to test the parser in this notebook, we have to import these files. Step3: Now we can parse a string. The function parser_string takes the string s as its argument and checks, whether this string can be parsed as an arithmetic expression. This is done in five steps Step4: As there is no syntax error, the string '1 + 2 * 3 - 4' adheres to the specification given by our grammar. Lets try a string that is not generated by our grammar. Step5: As the operator ** is not supported by our grammar, we get a syntax error at the last occurrence of the character * in the given string. Step6: This time we get a lexical error as the character &lt; is not a legal token. We can also generate a parse tree with our grammar. However, for this to work <span style="font-variant Step7: This command has generated some files for us that contain a both a lexer and a parser. However, this time these are .java-files. Step8: We have to compile the generated .java files. Below, you might have to change the path to the file antlr-4.8-complete.jar to make this work. Step9: Next, we can start the so called TestRig to generate and display the <em style="color Step10: Let us clean up the working directory.
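A small variant of the parse_string idea is to collect syntax errors instead of letting ANTLR print them to the console. The sketch below assumes that the generated ExprLexer.py and ExprParser.py from the steps in this notebook are importable and relies on the error-listener API of the antlr4-python3-runtime; the class and function names are mine.

import antlr4
from antlr4.error.ErrorListener import ErrorListener
from ExprLexer import ExprLexer
from ExprParser import ExprParser

class CollectingListener(ErrorListener):
    def __init__(self):
        self.errors = []
    def syntaxError(self, recognizer, offendingSymbol, line, column, msg, e):
        self.errors.append("%d:%d %s" % (line, column, msg))

def parse_quietly(text):
    lexer = ExprLexer(antlr4.InputStream(text))
    parser = ExprParser(antlr4.CommonTokenStream(lexer))
    listener = CollectingListener()
    parser.removeErrorListeners()
    parser.addErrorListener(listener)
    parser.expr()
    return listener.errors

print(parse_quietly('1 + 2 * 3 - 4'))   # expected: no errors collected
print(parse_quietly('1 + 2 ** 3'))      # expected: at least one syntax error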
Python Code: !cat -n Expr.g4 !type Expr.g4 Explanation: Testing an <span style="font-variant:small-caps;">Antlr</span> Grammar via grun In order for the examples using <span style="font-variant:small-caps;">Antlr</span> to work, we first have to install <span style="font-variant:small-caps;">Antlr</span>. This can be done by executing the following commands in an Anaconda environment: conda install -y -c conda-forge antlr4-python3-runtime conda install -y -c conda-forge antlr Alternatively, you can download https://www.antlr.org/download/antlr-4.9-complete.jar. I will assume that this .jarfile is stored in the directory /usr/local/lib/. Furthermore, I assume that both a java runtime and a java compiler are available. Then you also need to install the python language bindings using the following command: pip install antlr4-python3-runtime Our grammar is stored in the file Expr.g4. In order to inspect it, we use the command line tool cat. This will work with MacOs and Linux. On Windows, either use the power shell, which understands cat, or use the command type instead. The option -n of cat provides numbered output. End of explanation !antlr4 -Dlanguage=Python3 Expr.g4 !ls -l !dir /B Explanation: Note that this grammar does not contain any embedded actions. Hence we cannot compute anything with it. We will only be able to check whether a given string is generated by this grammar. Provided you have stored the file antlr-4.9-complete.jar in the directory /usr/local/lib/ we can generate both the scanner and the parser using the following command: End of explanation from ExprLexer import ExprLexer from ExprParser import ExprParser import antlr4 Explanation: The files ExprLexer.py and ExprParser.py contain the generated scanner and parser, respectively. If we want to test the parser in this notebook, we have to import these files. End of explanation def parse_string(string): inputStream = antlr4.InputStream(string) lexer = ExprLexer(inputStream) tokenStream = antlr4.CommonTokenStream(lexer) parser = ExprParser(tokenStream) parser.expr() parse_string('1 + 2 * 3 - 4') Explanation: Now we can parse a string. The function parser_string takes the string s as its argument and checks, whether this string can be parsed as an arithmetic expression. This is done in five steps: - The string is converted into an antlr4.InputStream. - The input stream is converted into a lexer. - The lexer is converted into an antlr4.CommonTokenStream. - The token stream is converted into a parser. - The parser tries to parse with start symbol. End of explanation parse_string('1 + 2 * 3 ** 4') Explanation: As there is no syntax error, the string '1 + 2 * 3 - 4' adheres to the specification given by our grammar. Lets try a string that is not generated by our grammar. End of explanation parse_string('1 < 2') Explanation: As the operator ** is not supported by our grammar, we get a syntax error at the last occurrence of the character * in the given string. End of explanation !java -jar /usr/local/lib/antlr-4.9.2-complete.jar -Dlanguage=Java Expr.g4 !java -jar C:/Users/Karl/anaconda3/envs/fl/Library/lib/antlr4-4.9.2_1-complete.jar -Dlanguage=Java Expr.g4 Explanation: This time we get a lexical error as the character &lt; is not a legal token. We can also generate a parse tree with our grammar. However, for this to work <span style="font-variant:small-caps;">Antlr</span> first has to generate a java parser. Hence we have to call antlr4 again, but this time with Java as the target language. 
End of explanation !ls -l *.java !dir /B *.java Explanation: This command has generated some files for us that contain a both a lexer and a parser. However, this time these are .java-files. End of explanation !javac -cp .:/usr/local/lib/antlr-4.9.2-complete.jar *.java !javac -cp .;C:/Users/Karl/anaconda3/envs/fl/Library/lib/antlr4-4.9.2_1-complete.jar *.java Explanation: We have to compile the generated .java files. Below, you might have to change the path to the file antlr-4.8-complete.jar to make this work. End of explanation !echo "1+2*3-4*5*(2-3)" | java -cp .:/usr/local/lib/antlr-4.9.2-complete.jar org.antlr.v4.gui.TestRig Expr expr -gui !echo 1+2*3-4 | java -cp .;C:/Users/Karl/anaconda3/envs/fl/Library/lib/antlr4-4.9.2_1-complete.jar org.antlr.v4.gui.TestRig Expr expr -gui Explanation: Next, we can start the so called TestRig to generate and display the <em style="color:blue">parse tree</em> for a given string. End of explanation !ls !dir /B !rm *.py *.tokens *.interp *.java *.class !rm -r __pycache__/ !del *.py *.tokens *.interp *.java *.class /Q !rmdir __pycache__ /S /Q !ls -l !dir /B Explanation: Let us clean up the working directory. End of explanation
14,888
Given the following text description, write Python code to implement the functionality described below step by step Description: Newton's method Step1: Cf. "Why Functional Programming Matters" by John Hughes $a_{i+1} = \frac{(a_i+\frac{n}{a_i})}{2}$ Let's define a function that computes the above equation Step2: And a function to compute the error Step3: Now we can define a recursive program that expects a number n, an initial estimate a, and an epsilon value ε, and that leaves on the stack the square root of n to within the precision of the epsilon value. (Later on we'll refine it to generate the initial estimate and hard-code an epsilon value.) n a ε square-root ----------------- √n If we apply the two functions Q and err defined above we get the next approximation and the error on the stack below the epsilon. n a ε [Q err] dip n a Q err ε n a' err ε n a' e ε Let's define the recursive function from here. Start with ifte; the predicate and the base case behavior are obvious Step4: So now all we need is a way to generate an initial approximation and an epsilon value
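For comparison with the concatenative version developed below, the same iteration is easy to write in plain Python. This is my own sketch, using the same update $a_{i+1} = (a_i + n/a_i)/2$, the same crude initial estimate $n/3$ and the same epsilon, with an iteration cap added so it always terminates.

def newton_sqrt(n, eps=1e-6, max_iter=100):
    a = n / 3                         # initial estimate, as in the definition below
    for _ in range(max_iter):
        if abs(n - a * a) < eps:      # err = |n - a**2|
            break
        a = (a + n / a) / 2           # Q
    return a

print(newton_sqrt(25))
print(newton_sqrt(2))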
Python Code: from notebook_preamble import J, V, define Explanation: Newton's method End of explanation define('Q == [tuck / + 2 /] unary') Explanation: Cf. "Why Functional Programming Matters" by John Hughes $a_{i+1} = \frac{(a_i+\frac{n}{a_i})}{2}$ Let's define a function that computes the above equation: n a Q --------------- (a+n/a)/2 n a tuck / + 2 / a n a / + 2 / a n/a + 2 / a+n/a 2 / (a+n/a)/2 We want it to leave n but replace a, so we execute it with unary: Q == [tuck / + 2 /] unary End of explanation define('err == [sqr - abs] nullary') Explanation: And a function to compute the error: n a sqr - abs |n-a**2| This should be nullary so as to leave both n and a on the stack below the error. err == [sqr - abs] nullary End of explanation define('K == [<] [popop popd] [popd [Q err] dip] primrec') J('25 10 0.001 dup K') J('25 10 0.000001 dup K') Explanation: Now we can define a recursive program that expects a number n, an initial estimate a, and an epsilon value ε, and that leaves on the stack the square root of n to within the precision of the epsilon value. (Later on we'll refine it to generate the initial estimate and hard-code an epsilon value.) n a ε square-root ----------------- √n If we apply the two functions Q and err defined above we get the next approximation and the error on the stack below the epsilon. n a ε [Q err] dip n a Q err ε n a' err ε n a' e ε Let's define the recursive function from here. Start with ifte; the predicate and the base case behavior are obvious: n a' e ε [&lt;] [popop popd] [J] ifte Base-case n a' e ε popop popd n a' popd a' The recursive branch is pretty easy. Discard the error and recur. w/ K == [&lt;] [popop popd] [J] ifte n a' e ε J n a' e ε popd [Q err] dip [K] i n a' ε [Q err] dip [K] i n a' Q err ε [K] i n a'' e ε K This fragment alone is pretty useful. End of explanation define('square-root == dup 3 / 0.000001 dup K') J('36 square-root') J('4895048365636 square-root') 2212475.6192184356 * 2212475.6192184356 Explanation: So now all we need is a way to generate an initial approximation and an epsilon value: square-root == dup 3 / 0.000001 dup K End of explanation
14,889
Given the following text description, write Python code to implement the functionality described below step by step Description: 3T_Pandas Basic (3) - 데이터 그룹화 ( df.groupby ) Group by라는 기능. 그룹을 나눈다는 의미 df에서 관계있는 애들만 뽑아내는 작업 번외로 중복되지 않은 값들을 뽑아내는 것까지 Step1: 중복되지 않는 “시”의 리스트 Step2: 각각의 df으로 만들고 싶다. (서울df, 부산df, 경북df) for문을 돌리는 방법 Step3: group by를 쓸 것이다 위의 과정을 줄여준다.
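(The description above is in Korean; in short, it builds a small city/district DataFrame, pulls out the unique city names, splits the frame into one DataFrame per city by hand with a loop, and then shows that groupby does the same thing in one call.) A minimal sketch of the same idea, where the English column names are my stand-ins for 시 (city) and 동 (district):

import pandas as pd

df = pd.DataFrame({"city": ["Seoul", "Seoul", "Seoul", "Busan", "Busan", "Gyeongbuk", "Gyeongbuk"],
                   "district": ["Sinsa", "Daechi", "Bongcheon", "Busan 1", "Busan 2", "Hyoja", "Jigok"]})

print(df["city"].unique())          # deduplicated list of cities

groups = df.groupby("city")         # replaces the manual dict-of-DataFrames loop
print(groups.get_group("Busan"))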
Python Code: df = pd.DataFrame(columns=["시", "동"]) df df.loc[0] = ["서울", "신사동"] df.loc[1] = ["서울", "대치동"] df.loc[2] = ["서울", "봉천동"] df.loc[3] = ["부산", "부산 1동"] df.loc[4] = ["부산", "부산 2동"] df.loc[5] = ["경북", "효자동"] df.loc[6] = ["경북", "지곡동"] df Explanation: 3T_Pandas Basic (3) - 데이터 그룹화 ( df.groupby ) Group by라는 기능. 그룹을 나눈다는 의미 df에서 관계있는 애들만 뽑아내는 작업 번외로 중복되지 않은 값들을 뽑아내는 것까지 End of explanation df["시"] #이거는 Series list(df["시"]) set(list(df["시"])) list(set(list(df["시"]))) #위의 과정을 압축하면 df["시"].unique() Explanation: 중복되지 않는 “시”의 리스트 End of explanation city_name = "서울" is_city = df["시"] == city_name # 특정 True/False의 Series를 만들고 df[is_city] # 해당하는 row만 뽑는다. city_df_dict = {} for city_name in df["시"].unique(): is_city = df["시"] == city_name city_df = df[is_city] city_df_dict[city_name] = city_df city_df_dict["부산"] #Python의 자료형을 이용한 방법 Explanation: 각각의 df으로 만들고 싶다. (서울df, 부산df, 경북df) for문을 돌리는 방법 End of explanation df.groupby("시") city_groups = df.groupby("시") city_groups.get_group("경북") Explanation: group by를 쓸 것이다 위의 과정을 줄여준다. End of explanation
14,890
Given the following text description, write Python code to implement the functionality described below step by step Description: Lesson 17 Step1: A list of Dictionaries isa Data Structure. Step2: The Tic-Tac-Toe Game Program We can use data structures to represent values in Python that can be understood. For example, using key-value pairs and strings to represent regions of a tic-tac-toe board. Step3: We will use data structures to create a representation of this board. Use string values to represent the nine spaces. The dictionary values will hold the X's and O's, and the position strings ('low-R') will be the keys. Step4: The keys are arbitrary; they are just used to store and change values. Machine version Step5: Human version Step6: How would we draw this board and include win conditions? Defining functions. Step7: Dictionaries, lists, and strings can be used in combination to simulate real world things, translating machine responses to human responses. If confused about what type of data you're dealing with, use the type() function.
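The lesson asks how win conditions could be added on top of the dictionary board. One possible sketch, working with the same 'top-L' through 'low-R' keys; the helper name and the list of winning lines are mine, not from the book:

def winner(board):
    # every straight line on the board, written with the same keys the lesson uses
    lines = [('top-L', 'top-M', 'top-R'), ('mid-L', 'mid-M', 'mid-R'),
             ('low-L', 'low-M', 'low-R'), ('top-L', 'mid-L', 'low-L'),
             ('top-M', 'mid-M', 'low-M'), ('top-R', 'mid-R', 'low-R'),
             ('top-L', 'mid-M', 'low-R'), ('top-R', 'mid-M', 'low-L')]
    for a, b, c in lines:
        if board[a] == board[b] == board[c] != ' ':
            return board[a]
    return None

board = {'top-L': 'X', 'top-M': 'X', 'top-R': 'X',
         'mid-L': ' ', 'mid-M': 'O', 'mid-R': ' ',
         'low-L': ' ', 'low-M': 'O', 'low-R': ' '}
print(winner(board))   # 'X'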
Python Code: cat = {'name' : 'Zophie', 'age': 7, 'color':'gray'} Explanation: Lesson 17: Data Structures Lists and dictionaries organize data in structures for programs. End of explanation allCats = [] allCats.append({'name' : 'Zophie', 'age': 7, 'color':'gray'}) allCats.append({'name' : 'Fooka', 'age': 5, 'color':'black'}) allCats.append({'name' : 'Fat-tail', 'age': 5, 'color':'gray'}) allCats.append({'name' : '???', 'age': -1, 'color':'orange'}) print(allCats) Explanation: A list of Dictionaries isa Data Structure. End of explanation from IPython.display import Image Image(url='https://automatetheboringstuff.com/images/000003.png') Explanation: The Tic-Tac-Toe Game Program We can use data structures to represent values in Python that can be understood. For example, using key-value pairs and strings to represent regions of a tic-tac-toe board. End of explanation import pprint theBoard = { 'top-L':' ', 'top-M':' ', 'top-R':' ', 'mid-L':' ', 'mid-M':' ', 'mid-R':' ', 'low-L':' ', 'low-M':' ', 'low-R':' ', } pprint.pprint(theBoard) Explanation: We will use data structures to create a representation of this board. Use string values to represent the nine spaces. The dictionary values will hold the X's and O's, and the position strings ('low-R') will be the keys. End of explanation theBoard['mid-M'] = 'X' pprint.pprint(theBoard) Explanation: The keys are arbitrary; they are just used to store and change values. Machine version: End of explanation from IPython.display import Image Image(url='https://automatetheboringstuff.com/images/000008.png') Explanation: Human version: End of explanation def printBoard(board): print(board['top-L'] + ' | ' + board['top-M'] + ' | ' + board['top-R']) print('----------') print(board['mid-L'] + ' | ' + board['mid-M'] + ' | ' + board['mid-R']) print('----------') print(board['low-L'] + ' | ' + board['low-M'] + ' | ' + board['low-R']) printBoard(theBoard) Explanation: How would we draw this board and include win conditions? Defining functions. End of explanation print('type(42)') print(type(42)) print('type(hello)') print(type('hello')) print('type(3.14)') print(type(3.14)) print('type(theBoard)') print(type(theBoard)) print('type(theBoard[top-R])') print(type(theBoard['top-R'])) Explanation: Dictionaries, lists, and strings can be used in combination to simulate real world things, translating machine responses to human responses. If confused about what type of data you're dealing with, use the type() function. End of explanation
14,891
Given the following text description, write Python code to implement the functionality described below step by step Description: Math - Linear Algebra Linear Algebra is the branch of mathematics that studies vector spaces and linear transformations between vector spaces, such as rotating a shape, scaling it up or down, translating it (ie. moving it), etc. Machine Learning relies heavily on Linear Algebra, so it is essential to understand what vectors and matrices are, what operations you can perform with them, and how they can be useful. Before we start, let's ensure that this notebook works well in both Python 2 and 3 Step1: Vectors Definition A vector is a quantity defined by a magnitude and a direction. For example, a rocket's velocity is a 3-dimensional vector Step2: Since we plan to do quite a lot of scientific calculations, it is much better to use NumPy's ndarray, which provides a lot of convenient and optimized implementations of essential mathematical operations on vectors (for more details about NumPy, check out the NumPy tutorial). For example Step3: The size of a vector can be obtained using the size attribute Step4: The $i^{th}$ element (also called entry or item) of a vector $\textbf{v}$ is noted $\textbf{v}_i$. Note that indices in mathematics generally start at 1, but in programming they usually start at 0. So to access $\textbf{video}_3$ programmatically, we would write Step5: Plotting vectors To plot vectors we will use matplotlib, so let's start by importing it (for details about matplotlib, check the matplotlib tutorial) Step6: 2D vectors Let's create a couple very simple 2D vectors to plot Step7: These vectors each have 2 elements, so they can easily be represented graphically on a 2D graph, for example as points Step8: Vectors can also be represented as arrows. Let's create a small convenience function to draw nice arrows Step9: Now let's draw the vectors u and v as arrows Step10: 3D vectors Plotting 3D vectors is also relatively straightforward. First let's create two 3D vectors Step12: Now let's plot them using matplotlib's Axes3D Step14: It is a bit hard to visualize exactly where in space these two points are, so let's add vertical lines. We'll create a small convenience function to plot a list of 3d vectors with vertical lines attached Step15: Norm The norm of a vector $\textbf{u}$, noted $\left \Vert \textbf{u} \right \|$, is a measure of the length (a.k.a. the magnitude) of $\textbf{u}$. There are multiple possible norms, but the most common one (and the only one we will discuss here) is the Euclidian norm, which is defined as Step16: However, it is much more efficient to use NumPy's norm function, available in the linalg (Linear Algebra) module Step17: Let's plot a little diagram to confirm that the length of vector $\textbf{v}$ is indeed $\approx5.4$ Step18: Looks about right! Addition Vectors of same size can be added together. Addition is performed elementwise Step19: Let's look at what vector addition looks like graphically Step20: Vector addition is commutative, meaning that $\textbf{u} + \textbf{v} = \textbf{v} + \textbf{u}$. You can see it on the previous image Step21: Finally, substracting a vector is like adding the opposite vector. Multiplication by a scalar Vectors can be multiplied by scalars. All elements in the vector are multiplied by that number, for example Step22: Graphically, scalar multiplication results in changing the scale of a figure, hence the name scalar. 
The distance from the origin (the point at coordinates equal to zero) is also multiplied by the scalar. For example, let's scale up by a factor of k = 2.5 Step23: As you might guess, dividing a vector by a scalar is equivalent to multiplying by its inverse Step24: Dot product Definition The dot product (also called scalar product or inner product in the context of the Euclidian space) of two vectors $\textbf{u}$ and $\textbf{v}$ is a useful operation that comes up fairly often in linear algebra. It is noted $\textbf{u} \cdot \textbf{v}$, or sometimes $⟨\textbf{u}|\textbf{v}⟩$ or $(\textbf{u}|\textbf{v})$, and it is defined as Step25: But a much more efficient implementation is provided by NumPy with the dot function Step26: Equivalently, you can use the dot method of ndarrays Step27: Caution Step28: Main properties The dot product is commutative Step29: Note Step30: Matrices A matrix is a rectangular array of scalars (ie. any number Step31: A much more efficient way is to use the NumPy library which provides optimized implementations of many matrix operations Step32: By convention matrices generally have uppercase names, such as $A$. In the rest of this tutorial, we will assume that we are using NumPy arrays (type ndarray) to represent matrices. Size The size of a matrix is defined by its number of rows and number of columns. It is noted $rows \times columns$. For example, the matrix $A$ above is an example of a $2 \times 3$ matrix Step33: Caution Step34: Element indexing The number located in the $i^{th}$ row, and $j^{th}$ column of a matrix $X$ is sometimes noted $X_{i,j}$ or $X_{ij}$, but there is no standard notation, so people often prefer to explicitely name the elements, like this Step35: The $i^{th}$ row vector is sometimes noted $M_i$ or $M_{i,}$, but again there is no standard notation so people often prefer to explicitely define their own names, for example Step36: Similarly, the $j^{th}$ column vector is sometimes noted $M^j$ or $M_{,j}$, but there is no standard notation. We will use $M_{,j}$. For example, to access $A_{*,3}$ (ie. $A$'s 3rd column vector) Step37: Note that the result is actually a one-dimensional NumPy array Step38: Square, triangular, diagonal and identity matrices A square matrix is a matrix that has the same number of rows and columns, for example a $3 \times 3$ matrix Step39: If you pass a matrix to the diag function, it will happily extract the diagonal values Step40: Finally, the identity matrix of size $n$, noted $I_n$, is a diagonal matrix of size $n \times n$ with $1$'s in the main diagonal, for example $I_3$ Step41: The identity matrix is often noted simply $I$ (instead of $I_n$) when its size is clear given the context. It is called the identity matrix because multiplying a matrix with it leaves the matrix unchanged as we will see below. Adding matrices If two matrices $Q$ and $R$ have the same size $m \times n$, they can be added together. Addition is performed elementwise Step42: Addition is commutative, meaning that $A + B = B + A$ Step43: It is also associative, meaning that $A + (B + C) = (A + B) + C$ Step44: Scalar multiplication A matrix $M$ can be multiplied by a scalar $\lambda$. The result is noted $\lambda M$, and it is a matrix of the same size as $M$ with all elements multiplied by $\lambda$ Step45: Scalar multiplication is also defined on the right hand side, and gives the same result Step46: This makes scalar multiplication commutative. 
It is also associative, meaning that $\alpha (\beta M) = (\alpha \times \beta) M$, where $\alpha$ and $\beta$ are scalars. For example Step47: Finally, it is distributive over addition of matrices, meaning that $\lambda (Q + R) = \lambda Q + \lambda R$ Step48: Matrix multiplication So far, matrix operations have been rather intuitive. But multiplying matrices is a bit more involved. A matrix $Q$ of size $m \times n$ can be multiplied by a matrix $R$ of size $n \times q$. It is noted simply $QR$ without multiplication sign or dot. The result $P$ is an $m \times q$ matrix where each element is computed as a sum of products Step49: Let's check this result by looking at one element, just to be sure Step50: Looks good! You can check the other elements until you get used to the algorithm. We multiplied a $2 \times 3$ matrix by a $3 \times 4$ matrix, so the result is a $2 \times 4$ matrix. The first matrix's number of columns has to be equal to the second matrix's number of rows. If we try to multiple $D$ by $A$, we get an error because D has 4 columns while A has 2 rows Step51: This illustrates the fact that matrix multiplication is NOT commutative Step52: On the other hand, matrix multiplication is associative, meaning that $Q(RS) = (QR)S$. Let's create a $4 \times 5$ matrix $G$ to illustrate this Step53: It is also distributive over addition of matrices, meaning that $(Q + R)S = QS + RS$. For example Step54: The product of a matrix $M$ by the identity matrix (of matching size) results in the same matrix $M$. More formally, if $M$ is an $m \times n$ matrix, then Step55: Caution Step56: The @ infix operator Python 3.5 introduced the @ infix operator for matrix multiplication, and NumPy 1.10 added support for it. If you are using Python 3.5+ and NumPy 1.10+, you can simply write A @ D instead of A.dot(D), making your code much more readable (but less portable). This operator also works for vector dot products. Step57: Note Step58: As you might expect, transposing a matrix twice returns the original matrix Step59: Transposition is distributive over addition of matrices, meaning that $(Q + R)^T = Q^T + R^T$. For example Step60: Moreover, $(Q \cdot R)^T = R^T \cdot Q^T$. Note that the order is reversed. For example Step61: A symmetric matrix $M$ is defined as a matrix that is equal to its transpose Step62: Converting 1D arrays to 2D arrays in NumPy As we mentionned earlier, in NumPy (as opposed to Matlab, for example), 1D really means 1D Step63: We want to convert $\textbf{u}$ into a row vector before transposing it. There are a few ways to do this Step64: Notice the extra square brackets Step65: This quite explicit Step66: This is equivalent, but a little less explicit. Step67: This is the shortest version, but you probably want to avoid it because it is unclear. The reason it works is that np.newaxis is actually equal to None, so this is equivalent to the previous version. Ok, now let's transpose our row vector Step68: Great! We now have a nice column vector. Rather than creating a row vector then transposing it, it is also possible to convert a 1D array directly into a column vector Step69: Plotting a matrix We have already seen that vectors can been represented as points or arrows in N-dimensional space. Is there a good graphical representation of matrices? Well you can simply see a matrix as a list of vectors, so plotting a matrix results in many points or arrows. 
For example, let's create a $2 \times 4$ matrix P and plot it as points Step70: Of course we could also have stored the same 4 vectors as row vectors instead of column vectors, resulting in a $4 \times 2$ matrix (the transpose of $P$, in fact). It is really an arbitrary choice. Since the vectors are ordered, you can see the matrix as a path and represent it with connected dots Step71: Or you can represent it as a polygon Step72: Geometric applications of matrix operations We saw earlier that vector addition results in a geometric translation, vector multiplication by a scalar results in rescaling (zooming in or out, centered on the origin), and vector dot product results in projecting a vector onto another vector, rescaling and measuring the resulting coordinate. Similarly, matrix operations have very useful geometric applications. Addition = multiple geometric translations First, adding two matrices together is equivalent to adding all their vectors together. For example, let's create a $2 \times 4$ matrix $H$ and add it to $P$, and look at the result Step73: If we add a matrix full of identical vectors, we get a simple geometric translation Step74: Although matrices can only be added together if they have the same size, NumPy allows adding a row vector or a column vector to a matrix Step75: Scalar multiplication Multiplying a matrix by a scalar results in all its vectors being multiplied by that scalar, so unsurprisingly, the geometric result is a rescaling of the entire figure. For example, let's rescale our polygon by a factor of 60% (zooming out, centered on the origin) Step76: Matrix multiplication – Projection onto an axis Matrix multiplication is more complex to visualize, but it is also the most powerful tool in the box. Let's start simple, by defining a $1 \times 2$ matrix $U = \begin{bmatrix} 1 & 0 \end{bmatrix}$. This row vector is just the horizontal unit vector. Step77: Now let's look at the dot product $P \cdot U$ Step78: These are the horizontal coordinates of the vectors in $P$. In other words, we just projected $P$ onto the horizontal axis Step79: We can actually project on any other axis by just replacing $U$ with any other unit vector. For example, let's project on the axis that is at a 30° angle above the horizontal axis Step80: Good! Remember that the dot product of a unit vector and a matrix basically performs a projection on an axis and gives us the coordinates of the resulting points on that axis. Matrix multiplication – Rotation Now let's create a $2 \times 2$ matrix $V$ containing two unit vectors that make 30° and 120° angles with the horizontal axis Step81: Let's look at the product $VP$ Step82: The first row is equal to $V_{1,} P$, which is the coordinates of the projection of $P$ onto the 30° axis, as we have seen above. The second row is $V_{2,} P$, which is the coordinates of the projection of $P$ onto the 120° axis. So basically we obtained the coordinates of $P$ after rotating the horizontal and vertical axes by 30° (or equivalently after rotating the polygon by -30° around the origin)! Let's plot $VP$ to see this Step83: Matrix $V$ is called a rotation matrix. Matrix multiplication – Other linear transformations More generally, any linear transformation $f$ that maps n-dimensional vectors to m-dimensional vectors can be represented as an $m \times n$ matrix. 
For example, say $\textbf{u}$ is a 3-dimensional vector Step84: Let's look at how this transformation affects the unit square Step85: Now let's look at a squeeze mapping Step86: The effect on the unit square is Step87: Let's show a last one Step88: Matrix inverse Now that we understand that a matrix can represent any linear transformation, a natural question is Step89: We applied a shear mapping on $P$, just like we did before, but then we applied a second transformation to the result, and lo and behold this had the effect of coming back to the original $P$ (we plotted the original $P$'s outline to double check). The second transformation is the inverse of the first one. We defined the inverse matrix $F_{shear}^{-1}$ manually this time, but NumPy provides an inv function to compute a matrix's inverse, so we could have written instead Step90: Only square matrices can be inversed. This makes sense when you think about it Step91: Looking at this image, it is impossible to tell whether this is the projection of a cube or the projection of a narrow rectangular object. Some information has been lost in the projection. Even square transformation matrices can lose information. For example, consider this transformation matrix Step92: This transformation matrix performs a projection onto the horizontal axis. Our polygon gets entirely flattened out so some information is entirely lost and it is impossible to go back to the original polygon using a linear transformation. In other words, $F_{project}$ has no inverse. Such a square matrix that cannot be inversed is called a singular matrix (aka degenerate matrix). If we ask NumPy to calculate its inverse, it raises an exception Step93: Here is another example of a singular matrix. This one performs a projection onto the axis at a 30° angle above the horizontal axis Step94: But this time, due to floating point rounding errors, NumPy manages to calculate an inverse (notice how large the elements are, though) Step95: As you might expect, the dot product of a matrix by its inverse results in the identity matrix Step96: Another way to express this is that the inverse of the inverse of a matrix $M$ is $M$ itself Step97: Also, the inverse of scaling by a factor of $\lambda$ is of course scaling by a factor or $\frac{1}{\lambda}$ Step98: Finally, a square matrix $H$ whose inverse is its own transpose is an orthogonal matrix Step99: Determinant The determinant of a square matrix $M$, noted $\det(M)$ or $\det M$ or $|M|$ is a value that can be calculated from its elements $(M_{i,j})$ using various equivalent methods. One of the simplest methods is this recursive approach Step100: One of the main uses of the determinant is to determine whether a square matrix can be inversed or not Step101: That's right, $F_{project}$ is singular, as we saw earlier. Step102: This determinant is suspiciously close to 0 Step103: Perfect! This matrix can be inversed as we saw earlier. Wow, math really works! The determinant can also be used to measure how much a linear transformation affects surface areas Step104: We rescaled the polygon by a factor of 1/2 on both vertical and horizontal axes so the surface area of the resulting polygon is 1/4$^{th}$ of the original polygon. Let's compute the determinant and check that Step105: Correct! The determinant can actually be negative, when the transformation results in a "flipped over" version of the original polygon (eg. a left hand glove becomes a right hand glove). 
For example, the determinant of the F_reflect matrix is -1 because the surface area is preserved but the polygon gets flipped over Step106: Composing linear transformations Several linear transformations can be chained simply by performing multiple dot products in a row. For example, to perform a squeeze mapping followed by a shear mapping, just write Step107: Since the dot product is associative, the following code is equivalent Step108: Note that the order of the transformations is the reverse of the dot product order. If we are going to perform this composition of linear transformations more than once, we might as well save the composition matrix like this Step109: From now on we can perform both transformations in just one dot product, which can lead to a very significant performance boost. What if you want to perform the inverse of this double transformation? Well, if you squeezed and then you sheared, and you want to undo what you have done, it should be obvious that you should unshear first and then unsqueeze. In more mathematical terms, given two invertible (aka nonsingular) matrices $Q$ and $R$ Step110: Singular Value Decomposition It turns out that any $m \times n$ matrix $M$ can be decomposed into the dot product of three simple matrices Step111: Note that this is just a 1D array containing the diagonal values of Σ. To get the actual matrix Σ, we can use NumPy's diag function Step112: Now let's check that $U \cdot \Sigma \cdot V^T$ is indeed equal to F_shear Step113: It worked like a charm. Let's apply these transformations one by one (in reverse order) on the unit square to understand what's going on. First, let's apply the first rotation $V^T$ Step114: Now let's rescale along the vertical and horizontal axes using $\Sigma$ Step115: Finally, we apply the second rotation $U$ Step116: And we can see that the result is indeed a shear mapping of the original unit square. Eigenvectors and eigenvalues An eigenvector of a square matrix $M$ (also called a characteristic vector) is a non-zero vector that remains on the same line after transformation by the linear transformation associated with $M$. A more formal definition is any vector $v$ such that Step117: Indeed the horizontal vectors are stretched by a factor of 1.4, and the vertical vectors are shrunk by a factor of 1/1.4=0.714…, so far so good. Let's look at the shear mapping matrix $F_{shear}$ Step118: Wait, what!? We expected just one unit eigenvector, not two. The second vector is almost equal to $\begin{pmatrix}-1 \ 0 \end{pmatrix}$, which is on the same line as the first vector $\begin{pmatrix}1 \ 0 \end{pmatrix}$. This is due to floating point errors. We can safely ignore vectors that are (almost) colinear (ie. on the same line). Trace The trace of a square matrix $M$, noted $tr(M)$ is the sum of the values on its main diagonal. For example Step119: The trace does not have a simple geometric interpretation (in general), but it has a number of properties that make it useful in many areas
Python Code: from __future__ import division, print_function, unicode_literals Explanation: Math - Linear Algebra Linear Algebra is the branch of mathematics that studies vector spaces and linear transformations between vector spaces, such as rotating a shape, scaling it up or down, translating it (ie. moving it), etc. Machine Learning relies heavily on Linear Algebra, so it is essential to understand what vectors and matrices are, what operations you can perform with them, and how they can be useful. Before we start, let's ensure that this notebook works well in both Python 2 and 3: End of explanation [10.5, 5.2, 3.25, 7.0] Explanation: Vectors Definition A vector is a quantity defined by a magnitude and a direction. For example, a rocket's velocity is a 3-dimensional vector: its magnitude is the speed of the rocket, and its direction is (hopefully) up. A vector can be represented by an array of numbers called scalars. Each scalar corresponds to the magnitude of the vector with regards to each dimension. For example, say the rocket is going up at a slight angle: it has a vertical speed of 5,000 m/s, and also a slight speed towards the East at 10 m/s, and a slight speed towards the North at 50 m/s. The rocket's velocity may be represented by the following vector: velocity $= \begin{pmatrix} 10 \ 50 \ 5000 \ \end{pmatrix}$ Note: by convention vectors are generally presented in the form of columns. Also, vector names are generally lowercase to distinguish them from matrices (which we will discuss below) and in bold (when possible) to distinguish them from simple scalar values such as ${meters_per_second} = 5026$. A list of N numbers may also represent the coordinates of a point in an N-dimensional space, so it is quite frequent to represent vectors as simple points instead of arrows. A vector with 1 element may be represented as an arrow or a point on an axis, a vector with 2 elements is an arrow or a point on a plane, a vector with 3 elements is an arrow or point in space, and a vector with N elements is an arrow or a point in an N-dimensional space… which most people find hard to imagine. Purpose Vectors have many purposes in Machine Learning, most notably to represent observations and predictions. For example, say we built a Machine Learning system to classify videos into 3 categories (good, spam, clickbait) based on what we know about them. For each video, we would have a vector representing what we know about it, such as: video $= \begin{pmatrix} 10.5 \ 5.2 \ 3.25 \ 7.0 \end{pmatrix}$ This vector could represent a video that lasts 10.5 minutes, but only 5.2% viewers watch for more than a minute, it gets 3.25 views per day on average, and it was flagged 7 times as spam. As you can see, each axis may have a different meaning. Based on this vector our Machine Learning system may predict that there is an 80% probability that it is a spam video, 18% that it is clickbait, and 2% that it is a good video. 
This could be represented as the following vector: class_probabilities $= \begin{pmatrix} 0.80 \ 0.18 \ 0.02 \end{pmatrix}$ Vectors in python In python, a vector can be represented in many ways, the simplest being a regular python list of numbers: End of explanation import numpy as np video = np.array([10.5, 5.2, 3.25, 7.0]) video Explanation: Since we plan to do quite a lot of scientific calculations, it is much better to use NumPy's ndarray, which provides a lot of convenient and optimized implementations of essential mathematical operations on vectors (for more details about NumPy, check out the NumPy tutorial). For example: End of explanation video.size Explanation: The size of a vector can be obtained using the size attribute: End of explanation video[2] # 3rd element Explanation: The $i^{th}$ element (also called entry or item) of a vector $\textbf{v}$ is noted $\textbf{v}_i$. Note that indices in mathematics generally start at 1, but in programming they usually start at 0. So to access $\textbf{video}_3$ programmatically, we would write: End of explanation %matplotlib inline import matplotlib.pyplot as plt Explanation: Plotting vectors To plot vectors we will use matplotlib, so let's start by importing it (for details about matplotlib, check the matplotlib tutorial): End of explanation u = np.array([2, 5]) v = np.array([3, 1]) Explanation: 2D vectors Let's create a couple very simple 2D vectors to plot: End of explanation x_coords, y_coords = zip(u, v) plt.scatter(x_coords, y_coords, color=["r","b"]) plt.axis([0, 9, 0, 6]) plt.grid() plt.show() Explanation: These vectors each have 2 elements, so they can easily be represented graphically on a 2D graph, for example as points: End of explanation def plot_vector2d(vector2d, origin=[0, 0], **options): return plt.arrow(origin[0], origin[1], vector2d[0], vector2d[1], head_width=0.2, head_length=0.3, length_includes_head=True, **options) Explanation: Vectors can also be represented as arrows. Let's create a small convenience function to draw nice arrows: End of explanation plot_vector2d(u, color="r") plot_vector2d(v, color="b") plt.axis([0, 9, 0, 6]) plt.grid() plt.show() Explanation: Now let's draw the vectors u and v as arrows: End of explanation a = np.array([1, 2, 8]) b = np.array([5, 6, 3]) Explanation: 3D vectors Plotting 3D vectors is also relatively straightforward. First let's create two 3D vectors: End of explanation from mpl_toolkits.mplot3d import Axes3D subplot3d = plt.subplot(111, projection='3d') x_coords, y_coords, z_coords = zip(a,b) subplot3d.scatter(x_coords, y_coords, z_coords) subplot3d.set_zlim3d([0, 9]) plt.show() Explanation: Now let's plot them using matplotlib's Axes3D: End of explanation def plot_vectors3d(ax, vectors3d, z0, **options): for v in vectors3d: x, y, z = v ax.plot([x,x], [y,y], [z0, z], color="gray", linestyle='dotted', marker=".") x_coords, y_coords, z_coords = zip(*vectors3d) ax.scatter(x_coords, y_coords, z_coords, **options) subplot3d = plt.subplot(111, projection='3d') subplot3d.set_zlim([0, 9]) plot_vectors3d(subplot3d, [a,b], 0, color=("r","b")) plt.show() Explanation: It is a bit hard to visualize exactly where in space these two points are, so let's add vertical lines. 
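(A quick hedged aside before we do: the distance of each of these two 3D points from the origin can already be computed directly with NumPy; the norm is covered properly a bit further down.)
np.sqrt((a**2).sum())   # length of vector a, roughly 8.31
np.sqrt((b**2).sum())   # length of vector b, roughly 8.37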
We'll create a small convenience function to plot a list of 3d vectors with vertical lines attached: End of explanation def vector_norm(vector): squares = [element**2 for element in vector] return sum(squares)**0.5 print("||", u, "|| =") vector_norm(u) Explanation: Norm The norm of a vector $\textbf{u}$, noted $\left \Vert \textbf{u} \right \|$, is a measure of the length (a.k.a. the magnitude) of $\textbf{u}$. There are multiple possible norms, but the most common one (and the only one we will discuss here) is the Euclidian norm, which is defined as: $\left \Vert \textbf{u} \right \| = \sqrt{\sum_{i}{\textbf{u}_i}^2}$ We could implement this easily in pure python, recalling that $\sqrt x = x^{\frac{1}{2}}$ End of explanation import numpy.linalg as LA LA.norm(u) Explanation: However, it is much more efficient to use NumPy's norm function, available in the linalg (Linear Algebra) module: End of explanation radius = LA.norm(u) plt.gca().add_artist(plt.Circle((0,0), radius, color="#DDDDDD")) plot_vector2d(u, color="red") plt.axis([0, 8.7, 0, 6]) plt.grid() plt.show() Explanation: Let's plot a little diagram to confirm that the length of vector $\textbf{v}$ is indeed $\approx5.4$: End of explanation print(" ", u) print("+", v) print("-"*10) u + v Explanation: Looks about right! Addition Vectors of same size can be added together. Addition is performed elementwise: End of explanation plot_vector2d(u, color="r") plot_vector2d(v, color="b") plot_vector2d(v, origin=u, color="b", linestyle="dotted") plot_vector2d(u, origin=v, color="r", linestyle="dotted") plot_vector2d(u+v, color="g") plt.axis([0, 9, 0, 7]) plt.text(0.7, 3, "u", color="r", fontsize=18) plt.text(4, 3, "u", color="r", fontsize=18) plt.text(1.8, 0.2, "v", color="b", fontsize=18) plt.text(3.1, 5.6, "v", color="b", fontsize=18) plt.text(2.4, 2.5, "u+v", color="g", fontsize=18) plt.grid() plt.show() Explanation: Let's look at what vector addition looks like graphically: End of explanation t1 = np.array([2, 0.25]) t2 = np.array([2.5, 3.5]) t3 = np.array([1, 2]) x_coords, y_coords = zip(t1, t2, t3, t1) plt.plot(x_coords, y_coords, "c--", x_coords, y_coords, "co") plot_vector2d(v, t1, color="r", linestyle=":") plot_vector2d(v, t2, color="r", linestyle=":") plot_vector2d(v, t3, color="r", linestyle=":") t1b = t1 + v t2b = t2 + v t3b = t3 + v x_coords_b, y_coords_b = zip(t1b, t2b, t3b, t1b) plt.plot(x_coords_b, y_coords_b, "b-", x_coords_b, y_coords_b, "bo") plt.text(4, 4.2, "v", color="r", fontsize=18) plt.text(3, 2.3, "v", color="r", fontsize=18) plt.text(3.5, 0.4, "v", color="r", fontsize=18) plt.axis([0, 6, 0, 5]) plt.grid() plt.show() Explanation: Vector addition is commutative, meaning that $\textbf{u} + \textbf{v} = \textbf{v} + \textbf{u}$. You can see it on the previous image: following $\textbf{u}$ then $\textbf{v}$ leads to the same point as following $\textbf{v}$ then $\textbf{u}$. Vector addition is also associative, meaning that $\textbf{u} + (\textbf{v} + \textbf{w}) = (\textbf{u} + \textbf{v}) + \textbf{w}$. If you have a shape defined by a number of points (vectors), and you add a vector $\textbf{v}$ to all of these points, then the whole shape gets shifted by $\textbf{v}$. This is called a geometric translation: End of explanation print("1.5 *", u, "=") 1.5 * u Explanation: Finally, substracting a vector is like adding the opposite vector. Multiplication by a scalar Vectors can be multiplied by scalars. 
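(One more hedged aside on norms and addition before moving on: the triangle inequality $\left \Vert \textbf{u} + \textbf{v} \right \| \leq \left \Vert \textbf{u} \right \| + \left \Vert \textbf{v} \right \|$ is easy to check numerically on the u and v arrays defined above.)
LA.norm(u + v) <= LA.norm(u) + LA.norm(v)   # True: the norm of a sum never exceeds the sum of the norms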
All elements in the vector are multiplied by that number, for example: End of explanation k = 2.5 t1c = k * t1 t2c = k * t2 t3c = k * t3 plt.plot(x_coords, y_coords, "c--", x_coords, y_coords, "co") plot_vector2d(t1, color="r") plot_vector2d(t2, color="r") plot_vector2d(t3, color="r") x_coords_c, y_coords_c = zip(t1c, t2c, t3c, t1c) plt.plot(x_coords_c, y_coords_c, "b-", x_coords_c, y_coords_c, "bo") plot_vector2d(k * t1, color="b", linestyle=":") plot_vector2d(k * t2, color="b", linestyle=":") plot_vector2d(k * t3, color="b", linestyle=":") plt.axis([0, 9, 0, 9]) plt.grid() plt.show() Explanation: Graphically, scalar multiplication results in changing the scale of a figure, hence the name scalar. The distance from the origin (the point at coordinates equal to zero) is also multiplied by the scalar. For example, let's scale up by a factor of k = 2.5: End of explanation plt.gca().add_artist(plt.Circle((0,0),1,color='c')) plt.plot(0, 0, "ko") plot_vector2d(v / LA.norm(v), color="k") plot_vector2d(v, color="b", linestyle=":") plt.text(0.3, 0.3, "$\hat{u}$", color="k", fontsize=18) plt.text(1.5, 0.7, "$u$", color="b", fontsize=18) plt.axis([-1.5, 5.5, -1.5, 3.5]) plt.grid() plt.show() Explanation: As you might guess, dividing a vector by a scalar is equivalent to multiplying by its inverse: $\dfrac{\textbf{u}}{\lambda} = \dfrac{1}{\lambda} \times \textbf{u}$ Scalar multiplication is commutative: $\lambda \times \textbf{u} = \textbf{u} \times \lambda$. It is also associative: $\lambda_1 \times (\lambda_2 \times \textbf{u}) = (\lambda_1 \times \lambda_2) \times \textbf{u}$. Finally, it is distributive over addition of vectors: $\lambda \times (\textbf{u} + \textbf{v}) = \lambda \times \textbf{u} + \lambda \times \textbf{v}$. Zero, unit and normalized vectors A zero-vector is a vector full of 0s. A unit vector is a vector with a norm equal to 1. The normalized vector of a non-null vector $\textbf{u}$, noted $\hat{\textbf{u}}$, is the unit vector that points in the same direction as $\textbf{u}$. It is equal to: $\hat{\textbf{u}} = \dfrac{\textbf{u}}{\left \Vert \textbf{u} \right \|}$ End of explanation def dot_product(v1, v2): return sum(v1i * v2i for v1i, v2i in zip(v1, v2)) dot_product(u, v) Explanation: Dot product Definition The dot product (also called scalar product or inner product in the context of the Euclidian space) of two vectors $\textbf{u}$ and $\textbf{v}$ is a useful operation that comes up fairly often in linear algebra. It is noted $\textbf{u} \cdot \textbf{v}$, or sometimes $⟨\textbf{u}|\textbf{v}⟩$ or $(\textbf{u}|\textbf{v})$, and it is defined as: $\textbf{u} \cdot \textbf{v} = \left \Vert \textbf{u} \right \| \times \left \Vert \textbf{v} \right \| \times cos(\theta)$ where $\theta$ is the angle between $\textbf{u}$ and $\textbf{v}$. 
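As a quick hedged sanity check of this geometric definition, take two illustrative vectors (the names p and q below are only for this check and are not used elsewhere in this notebook): p has norm 2, q has norm 3, and the angle between them is 60°, so their dot product should be $2 \times 3 \times \cos(60°) = 3$.
p = 2 * np.array([1.0, 0.0])
q = 3 * np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])
np.dot(p, q)                 # about 3.0
2 * 3 * np.cos(np.pi / 3)    # about 3.0, the same value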
Another way to calculate the dot product is: $\textbf{u} \cdot \textbf{v} = \sum_i{\textbf{u}_i \times \textbf{v}_i}$ In python The dot product is pretty simple to implement: End of explanation
np.dot(u,v)
Explanation: But a much more efficient implementation is provided by NumPy with the dot function: End of explanation
u.dot(v)
Explanation: Equivalently, you can use the dot method of ndarrays: End of explanation
print("  ",u)
print("* ",v, "(NOT a dot product)")
print("-"*10)
u * v
Explanation: Caution: the * operator will perform an elementwise multiplication, NOT a dot product: End of explanation
def vector_angle(u, v):
    cos_theta = u.dot(v) / LA.norm(u) / LA.norm(v)
    return np.arccos(np.clip(cos_theta, -1, 1))
theta = vector_angle(u, v)
print("Angle =", theta, "radians")
print("     =", theta * 180 / np.pi, "degrees")
Explanation: Main properties The dot product is commutative: $\textbf{u} \cdot \textbf{v} = \textbf{v} \cdot \textbf{u}$. The dot product is only defined between two vectors, not between a scalar and a vector. This means that we cannot chain dot products: for example, the expression $\textbf{u} \cdot \textbf{v} \cdot \textbf{w}$ is not defined since $\textbf{u} \cdot \textbf{v}$ is a scalar and $\textbf{w}$ is a vector. This also means that the dot product is NOT associative: $(\textbf{u} \cdot \textbf{v}) \cdot \textbf{w} ≠ \textbf{u} \cdot (\textbf{v} \cdot \textbf{w})$ since neither are defined. However, the dot product is associative with regards to scalar multiplication: $\lambda \times (\textbf{u} \cdot \textbf{v}) = (\lambda \times \textbf{u}) \cdot \textbf{v} = \textbf{u} \cdot (\lambda \times \textbf{v})$ Finally, the dot product is distributive over addition of vectors: $\textbf{u} \cdot (\textbf{v} + \textbf{w}) = \textbf{u} \cdot \textbf{v} + \textbf{u} \cdot \textbf{w}$. Calculating the angle between vectors One of the many uses of the dot product is to calculate the angle between two non-zero vectors. Looking at the dot product definition, we can deduce the following formula: $\theta = \arccos{\left ( \dfrac{\textbf{u} \cdot \textbf{v}}{\left \Vert \textbf{u} \right \| \times \left \Vert \textbf{v} \right \|} \right ) }$ Note that if $\textbf{u} \cdot \textbf{v} = 0$, it follows that $\theta = \dfrac{π}{2}$. In other words, if the dot product of two non-zero vectors is zero, it means that they are orthogonal. Let's use this formula to calculate the angle between $\textbf{u}$ and $\textbf{v}$ (in radians): End of explanation
u_normalized = u / LA.norm(u)
proj = v.dot(u_normalized) * u_normalized
plot_vector2d(u, color="r")
plot_vector2d(v, color="b")
plot_vector2d(proj, color="k", linestyle=":")
plt.plot(proj[0], proj[1], "ko")
plt.plot([proj[0], v[0]], [proj[1], v[1]], "b:")
plt.text(1, 2, "$proj_u v$", color="k", fontsize=18)
plt.text(1.8, 0.2, "$v$", color="b", fontsize=18)
plt.text(0.8, 3, "$u$", color="r", fontsize=18)
plt.axis([0, 8, 0, 5.5])
plt.grid()
plt.show()
Explanation: Note: due to small floating point errors, cos_theta may be very slightly outside of the $[-1, 1]$ interval, which would make arccos fail. This is why we clipped the value within the range, using NumPy's clip function. Projecting a point onto an axis The dot product is also very useful to project points onto an axis. 
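Before looking at the projection formula below, here is a small hedged check of the orthogonality remark above: for two perpendicular vectors, the vector_angle function defined earlier should return approximately $\dfrac{π}{2}$.
vector_angle(np.array([1, 0]), np.array([0, 3]))   # about 1.5708, i.e. π/2 (90°)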
The projection of vector $\textbf{v}$ onto $\textbf{u}$'s axis is given by this formula: $\textbf{proj}_{\textbf{u}}{\textbf{v}} = \dfrac{\textbf{u} \cdot \textbf{v}}{\left \Vert \textbf{u} \right \| ^2} \times \textbf{u}$ Which is equivalent to: $\textbf{proj}_{\textbf{u}}{\textbf{v}} = (\textbf{v} \cdot \hat{\textbf{u}}) \times \hat{\textbf{u}}$ End of explanation [ [10, 20, 30], [40, 50, 60] ] Explanation: Matrices A matrix is a rectangular array of scalars (ie. any number: integer, real or complex) arranged in rows and columns, for example: \begin{bmatrix} 10 & 20 & 30 \ 40 & 50 & 60 \end{bmatrix} You can also think of a matrix as a list of vectors: the previous matrix contains either 2 horizontal 3D vectors or 3 vertical 2D vectors. Matrices are convenient and very efficient to run operations on many vectors at a time. We will also see that they are great at representing and performing linear transformations such rotations, translations and scaling. Matrices in python In python, a matrix can be represented in various ways. The simplest is just a list of python lists: End of explanation A = np.array([ [10,20,30], [40,50,60] ]) A Explanation: A much more efficient way is to use the NumPy library which provides optimized implementations of many matrix operations: End of explanation A.shape Explanation: By convention matrices generally have uppercase names, such as $A$. In the rest of this tutorial, we will assume that we are using NumPy arrays (type ndarray) to represent matrices. Size The size of a matrix is defined by its number of rows and number of columns. It is noted $rows \times columns$. For example, the matrix $A$ above is an example of a $2 \times 3$ matrix: 2 rows, 3 columns. Caution: a $3 \times 2$ matrix would have 3 rows and 2 columns. To get a matrix's size in NumPy: End of explanation A.size Explanation: Caution: the size attribute represents the number of elements in the ndarray, not the matrix's size: End of explanation A[1,2] # 2nd row, 3rd column Explanation: Element indexing The number located in the $i^{th}$ row, and $j^{th}$ column of a matrix $X$ is sometimes noted $X_{i,j}$ or $X_{ij}$, but there is no standard notation, so people often prefer to explicitely name the elements, like this: "let $X = (x_{i,j})_{1 ≤ i ≤ m, 1 ≤ j ≤ n}$". This means that $X$ is equal to: $X = \begin{bmatrix} x_{1,1} & x_{1,2} & x_{1,3} & \cdots & x_{1,n}\ x_{2,1} & x_{2,2} & x_{2,3} & \cdots & x_{2,n}\ x_{3,1} & x_{3,2} & x_{3,3} & \cdots & x_{3,n}\ \vdots & \vdots & \vdots & \ddots & \vdots \ x_{m,1} & x_{m,2} & x_{m,3} & \cdots & x_{m,n}\ \end{bmatrix}$ However in this notebook we will use the $X_{i,j}$ notation, as it matches fairly well NumPy's notation. Note that in math indices generally start at 1, but in programming they usually start at 0. So to access $A_{2,3}$ programmatically, we need to write this: End of explanation A[1, :] # 2nd row vector (as a 1D array) Explanation: The $i^{th}$ row vector is sometimes noted $M_i$ or $M_{i,}$, but again there is no standard notation so people often prefer to explicitely define their own names, for example: "let x${i}$ be the $i^{th}$ row vector of matrix $X$". We will use the $M_{i,}$, for the same reason as above. For example, to access $A{2,*}$ (ie. $A$'s 2nd row vector): End of explanation A[:, 2] # 3rd column vector (as a 1D array) Explanation: Similarly, the $j^{th}$ column vector is sometimes noted $M^j$ or $M_{,j}$, but there is no standard notation. We will use $M_{,j}$. For example, to access $A_{*,3}$ (ie. 
$A$'s 3rd column vector): End of explanation A[1:2, :] # rows 2 to 3 (excluded): this returns row 2 as a one-row matrix A[:, 2:3] # columns 3 to 4 (excluded): this returns column 3 as a one-column matrix Explanation: Note that the result is actually a one-dimensional NumPy array: there is no such thing as a vertical or horizontal one-dimensional array. If you need to actually represent a row vector as a one-row matrix (ie. a 2D NumPy array), or a column vector as a one-column matrix, then you need to use a slice instead of an integer when accessing the row or column, for example: End of explanation np.diag([4, 5, 6]) Explanation: Square, triangular, diagonal and identity matrices A square matrix is a matrix that has the same number of rows and columns, for example a $3 \times 3$ matrix: \begin{bmatrix} 4 & 9 & 2 \ 3 & 5 & 7 \ 8 & 1 & 6 \end{bmatrix} An upper triangular matrix is a special kind of square matrix where all the elements below the main diagonal (top-left to bottom-right) are zero, for example: \begin{bmatrix} 4 & 9 & 2 \ 0 & 5 & 7 \ 0 & 0 & 6 \end{bmatrix} Similarly, a lower triangular matrix is a square matrix where all elements above the main diagonal are zero, for example: \begin{bmatrix} 4 & 0 & 0 \ 3 & 5 & 0 \ 8 & 1 & 6 \end{bmatrix} A triangular matrix is one that is either lower triangular or upper triangular. A matrix that is both upper and lower triangular is called a diagonal matrix, for example: \begin{bmatrix} 4 & 0 & 0 \ 0 & 5 & 0 \ 0 & 0 & 6 \end{bmatrix} You can construct a diagonal matrix using NumPy's diag function: End of explanation D = np.array([ [1, 2, 3], [4, 5, 6], [7, 8, 9], ]) np.diag(D) Explanation: If you pass a matrix to the diag function, it will happily extract the diagonal values: End of explanation np.eye(3) Explanation: Finally, the identity matrix of size $n$, noted $I_n$, is a diagonal matrix of size $n \times n$ with $1$'s in the main diagonal, for example $I_3$: \begin{bmatrix} 1 & 0 & 0 \ 0 & 1 & 0 \ 0 & 0 & 1 \end{bmatrix} Numpy's eye function returns the identity matrix of the desired size: End of explanation B = np.array([[1,2,3], [4, 5, 6]]) B A A + B Explanation: The identity matrix is often noted simply $I$ (instead of $I_n$) when its size is clear given the context. It is called the identity matrix because multiplying a matrix with it leaves the matrix unchanged as we will see below. Adding matrices If two matrices $Q$ and $R$ have the same size $m \times n$, they can be added together. Addition is performed elementwise: the result is also a $m \times n$ matrix $S$ where each element is the sum of the elements at the corresponding position: $S_{i,j} = Q_{i,j} + R_{i,j}$ $S = \begin{bmatrix} Q_{11} + R_{11} & Q_{12} + R_{12} & Q_{13} + R_{13} & \cdots & Q_{1n} + R_{1n} \ Q_{21} + R_{21} & Q_{22} + R_{22} & Q_{23} + R_{23} & \cdots & Q_{2n} + R_{2n} \ Q_{31} + R_{31} & Q_{32} + R_{32} & Q_{33} + R_{33} & \cdots & Q_{3n} + R_{3n} \ \vdots & \vdots & \vdots & \ddots & \vdots \ Q_{m1} + R_{m1} & Q_{m2} + R_{m2} & Q_{m3} + R_{m3} & \cdots & Q_{mn} + R_{mn} \ \end{bmatrix}$ For example, let's create a $2 \times 3$ matric $B$ and compute $A + B$: End of explanation B + A Explanation: Addition is commutative, meaning that $A + B = B + A$: End of explanation C = np.array([[100,200,300], [400, 500, 600]]) A + (B + C) (A + B) + C Explanation: It is also associative, meaning that $A + (B + C) = (A + B) + C$: End of explanation 2 * A Explanation: Scalar multiplication A matrix $M$ can be multiplied by a scalar $\lambda$. 
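(Before the scalar case is spelled out, here is a one-line hedged recap of the addition properties above, checked on the A, B and C matrices defined earlier.)
np.array_equal(A + B, B + A)                 # True: matrix addition is commutative
np.array_equal(A + (B + C), (A + B) + C)     # True: matrix addition is associative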
The result is noted $\lambda M$, and it is a matrix of the same size as $M$ with all elements multiplied by $\lambda$: $\lambda M = \begin{bmatrix} \lambda \times M_{11} & \lambda \times M_{12} & \lambda \times M_{13} & \cdots & \lambda \times M_{1n} \ \lambda \times M_{21} & \lambda \times M_{22} & \lambda \times M_{23} & \cdots & \lambda \times M_{2n} \ \lambda \times M_{31} & \lambda \times M_{32} & \lambda \times M_{33} & \cdots & \lambda \times M_{3n} \ \vdots & \vdots & \vdots & \ddots & \vdots \ \lambda \times M_{m1} & \lambda \times M_{m2} & \lambda \times M_{m3} & \cdots & \lambda \times M_{mn} \ \end{bmatrix}$ A more concise way of writing this is: $(\lambda M){i,j} = \lambda (M){i,j}$ In NumPy, simply use the * operator to multiply a matrix by a scalar. For example: End of explanation A * 2 Explanation: Scalar multiplication is also defined on the right hand side, and gives the same result: $M \lambda = \lambda M$. For example: End of explanation 2 * (3 * A) (2 * 3) * A Explanation: This makes scalar multiplication commutative. It is also associative, meaning that $\alpha (\beta M) = (\alpha \times \beta) M$, where $\alpha$ and $\beta$ are scalars. For example: End of explanation 2 * (A + B) 2 * A + 2 * B Explanation: Finally, it is distributive over addition of matrices, meaning that $\lambda (Q + R) = \lambda Q + \lambda R$: End of explanation D = np.array([ [ 2, 3, 5, 7], [11, 13, 17, 19], [23, 29, 31, 37] ]) E = A.dot(D) E Explanation: Matrix multiplication So far, matrix operations have been rather intuitive. But multiplying matrices is a bit more involved. A matrix $Q$ of size $m \times n$ can be multiplied by a matrix $R$ of size $n \times q$. It is noted simply $QR$ without multiplication sign or dot. The result $P$ is an $m \times q$ matrix where each element is computed as a sum of products: $P_{i,j} = \sum_{k=1}^n{Q_{i,k} \times R_{k,j}}$ The element at position $i,j$ in the resulting matrix is the sum of the products of elements in row $i$ of matrix $Q$ by the elements in column $j$ of matrix $R$. 
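Before the fully expanded form of $P$ written out below, here is a minimal, purely illustrative pure-Python sketch of this sum-of-products rule (the helper name matmul_naive is an assumption, not something defined elsewhere in this notebook; NumPy's dot is of course far more efficient).
def matmul_naive(Q, R):
    # Each entry (i, j) is the sum of products of row i of Q with column j of R.
    m, n, q = len(Q), len(Q[0]), len(R[0])
    return [[sum(Q[i][k] * R[k][j] for k in range(n)) for j in range(q)]
            for i in range(m)]

matmul_naive(A.tolist(), D.tolist())   # matches the E = A.dot(D) result shown above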
$P = \begin{bmatrix} Q_{11} R_{11} + Q_{12} R_{21} + \cdots + Q_{1n} R_{n1} & Q_{11} R_{12} + Q_{12} R_{22} + \cdots + Q_{1n} R_{n2} & \cdots & Q_{11} R_{1q} + Q_{12} R_{2q} + \cdots + Q_{1n} R_{nq} \ Q_{21} R_{11} + Q_{22} R_{21} + \cdots + Q_{2n} R_{n1} & Q_{21} R_{12} + Q_{22} R_{22} + \cdots + Q_{2n} R_{n2} & \cdots & Q_{21} R_{1q} + Q_{22} R_{2q} + \cdots + Q_{2n} R_{nq} \ \vdots & \vdots & \ddots & \vdots \ Q_{m1} R_{11} + Q_{m2} R_{21} + \cdots + Q_{mn} R_{n1} & Q_{m1} R_{12} + Q_{m2} R_{22} + \cdots + Q_{mn} R_{n2} & \cdots & Q_{m1} R_{1q} + Q_{m2} R_{2q} + \cdots + Q_{mn} R_{nq} \end{bmatrix}$ You may notice that each element $P_{i,j}$ is the dot product of the row vector $Q_{i,}$ and the column vector $R_{,j}$: $P_{i,j} = Q_{i,} \cdot R_{,j}$ So we can rewrite $P$ more concisely as: $P = \begin{bmatrix} Q_{1,} \cdot R_{,1} & Q_{1,} \cdot R_{,2} & \cdots & Q_{1,} \cdot R_{,q} \ Q_{2,} \cdot R_{,1} & Q_{2,} \cdot R_{,2} & \cdots & Q_{2,} \cdot R_{,q} \ \vdots & \vdots & \ddots & \vdots \ Q_{m,} \cdot R_{,1} & Q_{m,} \cdot R_{,2} & \cdots & Q_{m,} \cdot R_{,q} \end{bmatrix}$ Let's multiply two matrices in NumPy, using ndarray's dot method: $E = AD = \begin{bmatrix} 10 & 20 & 30 \ 40 & 50 & 60 \end{bmatrix} \begin{bmatrix} 2 & 3 & 5 & 7 \ 11 & 13 & 17 & 19 \ 23 & 29 & 31 & 37 \end{bmatrix} = \begin{bmatrix} 930 & 1160 & 1320 & 1560 \ 2010 & 2510 & 2910 & 3450 \end{bmatrix}$ End of explanation 40*5 + 50*17 + 60*31 E[1,2] # row 2, column 3 Explanation: Let's check this result by looking at one element, just to be sure: looking at $E_{2,3}$ for example, we need to multiply elements in $A$'s $2^{nd}$ row by elements in $D$'s $3^{rd}$ column, and sum up these products: End of explanation try: D.dot(A) except ValueError as e: print("ValueError:", e) Explanation: Looks good! You can check the other elements until you get used to the algorithm. We multiplied a $2 \times 3$ matrix by a $3 \times 4$ matrix, so the result is a $2 \times 4$ matrix. The first matrix's number of columns has to be equal to the second matrix's number of rows. If we try to multiple $D$ by $A$, we get an error because D has 4 columns while A has 2 rows: End of explanation F = np.array([ [5,2], [4,1], [9,3] ]) A.dot(F) F.dot(A) Explanation: This illustrates the fact that matrix multiplication is NOT commutative: in general $QR ≠ RQ$ In fact, $QR$ and $RQ$ are only both defined if $Q$ has size $m \times n$ and $R$ has size $n \times m$. Let's look at an example where both are defined and show that they are (in general) NOT equal: End of explanation G = np.array([ [8, 7, 4, 2, 5], [2, 5, 1, 0, 5], [9, 11, 17, 21, 0], [0, 1, 0, 1, 2]]) A.dot(D).dot(G) # (AB)G A.dot(D.dot(G)) # A(BG) Explanation: On the other hand, matrix multiplication is associative, meaning that $Q(RS) = (QR)S$. Let's create a $4 \times 5$ matrix $G$ to illustrate this: End of explanation (A + B).dot(D) A.dot(D) + B.dot(D) Explanation: It is also distributive over addition of matrices, meaning that $(Q + R)S = QS + RS$. For example: End of explanation A.dot(np.eye(3)) np.eye(2).dot(A) Explanation: The product of a matrix $M$ by the identity matrix (of matching size) results in the same matrix $M$. 
More formally, if $M$ is an $m \times n$ matrix, then: $M I_n = I_m M = M$ This is generally written more concisely (since the size of the identity matrices is unambiguous given the context): $MI = IM = M$ For example: End of explanation A * B # NOT a matrix multiplication Explanation: Caution: NumPy's * operator performs elementwise multiplication, NOT a matrix multiplication: End of explanation import sys print("Python version: {}.{}.{}".format(*sys.version_info)) print("Numpy version:", np.version.version) # Uncomment the following line if your Python version is ≥3.5 # and your NumPy version is ≥1.10: #A @ D Explanation: The @ infix operator Python 3.5 introduced the @ infix operator for matrix multiplication, and NumPy 1.10 added support for it. If you are using Python 3.5+ and NumPy 1.10+, you can simply write A @ D instead of A.dot(D), making your code much more readable (but less portable). This operator also works for vector dot products. End of explanation A A.T Explanation: Note: Q @ R is actually equivalent to Q.__matmul__(R) which is implemented by NumPy as np.matmul(Q, R), not as Q.dot(R). The main difference is that matmul does not support scalar multiplication, while dot does, so you can write Q.dot(3), which is equivalent to Q * 3, but you cannot write Q @ 3 (more details). Matrix transpose The transpose of a matrix $M$ is a matrix noted $M^T$ such that the $i^{th}$ row in $M^T$ is equal to the $i^{th}$ column in $M$: $ A^T = \begin{bmatrix} 10 & 20 & 30 \ 40 & 50 & 60 \end{bmatrix}^T = \begin{bmatrix} 10 & 40 \ 20 & 50 \ 30 & 60 \end{bmatrix}$ In other words, ($A^T){i,j}$ = $A{j,i}$ Obviously, if $M$ is an $m \times n$ matrix, then $M^T$ is an $n \times m$ matrix. Note: there are a few other notations, such as $M^t$, $M′$, or ${^t}M$. In NumPy, a matrix's transpose can be obtained simply using the T attribute: End of explanation A.T.T Explanation: As you might expect, transposing a matrix twice returns the original matrix: End of explanation (A + B).T A.T + B.T Explanation: Transposition is distributive over addition of matrices, meaning that $(Q + R)^T = Q^T + R^T$. For example: End of explanation (A.dot(D)).T D.T.dot(A.T) Explanation: Moreover, $(Q \cdot R)^T = R^T \cdot Q^T$. Note that the order is reversed. For example: End of explanation D.dot(D.T) Explanation: A symmetric matrix $M$ is defined as a matrix that is equal to its transpose: $M^T = M$. This definition implies that it must be a square matrix whose elements are symmetric relative to the main diagonal, for example: \begin{bmatrix} 17 & 22 & 27 & 49 \ 22 & 29 & 36 & 0 \ 27 & 36 & 45 & 2 \ 49 & 0 & 2 & 99 \end{bmatrix} The product of a matrix by its transpose is always a symmetric matrix, for example: End of explanation u u.T Explanation: Converting 1D arrays to 2D arrays in NumPy As we mentionned earlier, in NumPy (as opposed to Matlab, for example), 1D really means 1D: there is no such thing as a vertical 1D-array or a horizontal 1D-array. So you should not be surprised to see that transposing a 1D array does not do anything: End of explanation u_row = np.array([u]) u_row Explanation: We want to convert $\textbf{u}$ into a row vector before transposing it. There are a few ways to do this: End of explanation u[np.newaxis, :] Explanation: Notice the extra square brackets: this is a 2D array with just one row (ie. a 1x2 matrix). In other words it really is a row vector. 
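A quick hedged way to make that explicit is to compare shapes:
u.shape       # (2,)  : a plain 1D array
u_row.shape   # (1, 2): a 2D array with a single row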
End of explanation u[np.newaxis] Explanation: This quite explicit: we are asking for a new vertical axis, keeping the existing data as the horizontal axis. End of explanation u[None] Explanation: This is equivalent, but a little less explicit. End of explanation u_row.T Explanation: This is the shortest version, but you probably want to avoid it because it is unclear. The reason it works is that np.newaxis is actually equal to None, so this is equivalent to the previous version. Ok, now let's transpose our row vector: End of explanation u[:, np.newaxis] Explanation: Great! We now have a nice column vector. Rather than creating a row vector then transposing it, it is also possible to convert a 1D array directly into a column vector: End of explanation P = np.array([ [3.0, 4.0, 1.0, 4.6], [0.2, 3.5, 2.0, 0.5] ]) x_coords_P, y_coords_P = P plt.scatter(x_coords_P, y_coords_P) plt.axis([0, 5, 0, 4]) plt.show() Explanation: Plotting a matrix We have already seen that vectors can been represented as points or arrows in N-dimensional space. Is there a good graphical representation of matrices? Well you can simply see a matrix as a list of vectors, so plotting a matrix results in many points or arrows. For example, let's create a $2 \times 4$ matrix P and plot it as points: End of explanation plt.plot(x_coords_P, y_coords_P, "bo") plt.plot(x_coords_P, y_coords_P, "b--") plt.axis([0, 5, 0, 4]) plt.grid() plt.show() Explanation: Of course we could also have stored the same 4 vectors as row vectors instead of column vectors, resulting in a $4 \times 2$ matrix (the transpose of $P$, in fact). It is really an arbitrary choice. Since the vectors are ordered, you can see the matrix as a path and represent it with connected dots: End of explanation from matplotlib.patches import Polygon plt.gca().add_artist(Polygon(P.T)) plt.axis([0, 5, 0, 4]) plt.grid() plt.show() Explanation: Or you can represent it as a polygon: matplotlib's Polygon class expects an $n \times 2$ NumPy array, not a $2 \times n$ array, so we just need to give it $P^T$: End of explanation H = np.array([ [ 0.5, -0.2, 0.2, -0.1], [ 0.4, 0.4, 1.5, 0.6] ]) P_moved = P + H plt.gca().add_artist(Polygon(P.T, alpha=0.2)) plt.gca().add_artist(Polygon(P_moved.T, alpha=0.3, color="r")) for vector, origin in zip(H.T, P.T): plot_vector2d(vector, origin=origin) plt.text(2.2, 1.8, "$P$", color="b", fontsize=18) plt.text(2.0, 3.2, "$P+H$", color="r", fontsize=18) plt.text(2.5, 0.5, "$H_{*,1}$", color="k", fontsize=18) plt.text(4.1, 3.5, "$H_{*,2}$", color="k", fontsize=18) plt.text(0.4, 2.6, "$H_{*,3}$", color="k", fontsize=18) plt.text(4.4, 0.2, "$H_{*,4}$", color="k", fontsize=18) plt.axis([0, 5, 0, 4]) plt.grid() plt.show() Explanation: Geometric applications of matrix operations We saw earlier that vector addition results in a geometric translation, vector multiplication by a scalar results in rescaling (zooming in or out, centered on the origin), and vector dot product results in projecting a vector onto another vector, rescaling and measuring the resulting coordinate. Similarly, matrix operations have very useful geometric applications. Addition = multiple geometric translations First, adding two matrices together is equivalent to adding all their vectors together. 
For example, let's create a $2 \times 4$ matrix $H$ and add it to $P$, and look at the result: End of explanation H2 = np.array([ [-0.5, -0.5, -0.5, -0.5], [ 0.4, 0.4, 0.4, 0.4] ]) P_translated = P + H2 plt.gca().add_artist(Polygon(P.T, alpha=0.2)) plt.gca().add_artist(Polygon(P_translated.T, alpha=0.3, color="r")) for vector, origin in zip(H2.T, P.T): plot_vector2d(vector, origin=origin) plt.axis([0, 5, 0, 4]) plt.grid() plt.show() Explanation: If we add a matrix full of identical vectors, we get a simple geometric translation: End of explanation P + [[-0.5], [0.4]] # same as P + H2, thanks to NumPy broadcasting Explanation: Although matrices can only be added together if they have the same size, NumPy allows adding a row vector or a column vector to a matrix: this is called broadcasting and is explained in further details in the NumPy tutorial. We could have obtained the same result as above with: End of explanation def plot_transformation(P_before, P_after, text_before, text_after, axis = [0, 5, 0, 4], arrows=False): if arrows: for vector_before, vector_after in zip(P_before.T, P_after.T): plot_vector2d(vector_before, color="blue", linestyle="--") plot_vector2d(vector_after, color="red", linestyle="-") plt.gca().add_artist(Polygon(P_before.T, alpha=0.2)) plt.gca().add_artist(Polygon(P_after.T, alpha=0.3, color="r")) plt.text(P_before[0].mean(), P_before[1].mean(), text_before, fontsize=18, color="blue") plt.text(P_after[0].mean(), P_after[1].mean(), text_after, fontsize=18, color="red") plt.axis(axis) plt.grid() P_rescaled = 0.60 * P plot_transformation(P, P_rescaled, "$P$", "$0.6 P$", arrows=True) plt.show() Explanation: Scalar multiplication Multiplying a matrix by a scalar results in all its vectors being multiplied by that scalar, so unsurprisingly, the geometric result is a rescaling of the entire figure. For example, let's rescale our polygon by a factor of 60% (zooming out, centered on the origin): End of explanation U = np.array([[1, 0]]) Explanation: Matrix multiplication – Projection onto an axis Matrix multiplication is more complex to visualize, but it is also the most powerful tool in the box. Let's start simple, by defining a $1 \times 2$ matrix $U = \begin{bmatrix} 1 & 0 \end{bmatrix}$. This row vector is just the horizontal unit vector. End of explanation U.dot(P) Explanation: Now let's look at the dot product $P \cdot U$: End of explanation def plot_projection(U, P): U_P = U.dot(P) axis_end = 100 * U plot_vector2d(axis_end[0], color="black") plt.gca().add_artist(Polygon(P.T, alpha=0.2)) for vector, proj_coordinate in zip(P.T, U_P.T): proj_point = proj_coordinate * U plt.plot(proj_point[0][0], proj_point[0][1], "ro") plt.plot([vector[0], proj_point[0][0]], [vector[1], proj_point[0][1]], "r--") plt.axis([0, 5, 0, 4]) plt.grid() plt.show() plot_projection(U, P) Explanation: These are the horizontal coordinates of the vectors in $P$. In other words, we just projected $P$ onto the horizontal axis: End of explanation angle30 = 30 * np.pi / 180 # angle in radians U_30 = np.array([[np.cos(angle30), np.sin(angle30)]]) plot_projection(U_30, P) Explanation: We can actually project on any other axis by just replacing $U$ with any other unit vector. For example, let's project on the axis that is at a 30° angle above the horizontal axis: End of explanation angle120 = 120 * np.pi / 180 V = np.array([ [np.cos(angle30), np.sin(angle30)], [np.cos(angle120), np.sin(angle120)] ]) V Explanation: Good! 
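Here is a small hedged numerical check of that projection, using the U_30, angle30 and P arrays defined above: the coordinates returned by the dot product should equal $x \cos(30°) + y \sin(30°)$ for each point $(x, y)$ in $P$.
np.allclose(U_30.dot(P), P[0] * np.cos(angle30) + P[1] * np.sin(angle30))   # True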
Remember that the dot product of a unit vector and a matrix basically performs a projection on an axis and gives us the coordinates of the resulting points on that axis. Matrix multiplication – Rotation Now let's create a $2 \times 2$ matrix $V$ containing two unit vectors that make 30° and 120° angles with the horizontal axis: $V = \begin{bmatrix} \cos(30°) & \sin(30°) \ \cos(120°) & \sin(120°) \end{bmatrix}$ End of explanation V.dot(P) Explanation: Let's look at the product $VP$: End of explanation P_rotated = V.dot(P) plot_transformation(P, P_rotated, "$P$", "$VP$", [-2, 6, -2, 4], arrows=True) plt.show() Explanation: The first row is equal to $V_{1,} P$, which is the coordinates of the projection of $P$ onto the 30° axis, as we have seen above. The second row is $V_{2,} P$, which is the coordinates of the projection of $P$ onto the 120° axis. So basically we obtained the coordinates of $P$ after rotating the horizontal and vertical axes by 30° (or equivalently after rotating the polygon by -30° around the origin)! Let's plot $VP$ to see this: End of explanation F_shear = np.array([ [1, 1.5], [0, 1] ]) plot_transformation(P, F_shear.dot(P), "$P$", "$F_{shear} P$", axis=[0, 10, 0, 7]) plt.show() Explanation: Matrix $V$ is called a rotation matrix. Matrix multiplication – Other linear transformations More generally, any linear transformation $f$ that maps n-dimensional vectors to m-dimensional vectors can be represented as an $m \times n$ matrix. For example, say $\textbf{u}$ is a 3-dimensional vector: $\textbf{u} = \begin{pmatrix} x \ y \ z \end{pmatrix}$ and $f$ is defined as: $f(\textbf{u}) = \begin{pmatrix} ax + by + cz \ dx + ey + fz \end{pmatrix}$ This transormation $f$ maps 3-dimensional vectors to 2-dimensional vectors in a linear way (ie. the resulting coordinates only involve sums of multiples of the original coordinates). We can represent this transformation as matrix $F$: $F = \begin{bmatrix} a & b & c \ d & e & f \end{bmatrix}$ Now, to compute $f(\textbf{u})$ we can simply do a matrix multiplication: $f(\textbf{u}) = F \textbf{u}$ If we have a matric $G = \begin{bmatrix}\textbf{u}_1 & \textbf{u}_2 & \cdots & \textbf{u}_q \end{bmatrix}$, where each $\textbf{u}_i$ is a 3-dimensional column vector, then $FG$ results in the linear transformation of all vectors $\textbf{u}_i$ as defined by the matrix $F$: $FG = \begin{bmatrix}f(\textbf{u}_1) & f(\textbf{u}_2) & \cdots & f(\textbf{u}_q) \end{bmatrix}$ To summarize, the matrix on the left hand side of a dot product specifies what linear transormation to apply to the right hand side vectors. We have already shown that this can be used to perform projections and rotations, but any other linear transformation is possible. 
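(A hedged aside on the rotation matrix $V$ defined above, before the worked example that follows: its rows are unit vectors at right angles to each other, so $V \cdot V^T$ equals the identity matrix up to floating point error, which is why this transformation preserves lengths; this property comes up again later when orthogonal matrices are introduced.)
np.allclose(V.dot(V.T), np.eye(2))   # True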
For example, here is a transformation known as a shear mapping: End of explanation Square = np.array([ [0, 0, 1, 1], [0, 1, 1, 0] ]) plot_transformation(Square, F_shear.dot(Square), "$Square$", "$F_{shear} Square$", axis=[0, 2.6, 0, 1.8]) plt.show() Explanation: Let's look at how this transformation affects the unit square: End of explanation F_squeeze = np.array([ [1.4, 0], [0, 1/1.4] ]) plot_transformation(P, F_squeeze.dot(P), "$P$", "$F_{squeeze} P$", axis=[0, 7, 0, 5]) plt.show() Explanation: Now let's look at a squeeze mapping: End of explanation plot_transformation(Square, F_squeeze.dot(Square), "$Square$", "$F_{squeeze} Square$", axis=[0, 1.8, 0, 1.2]) plt.show() Explanation: The effect on the unit square is: End of explanation F_reflect = np.array([ [1, 0], [0, -1] ]) plot_transformation(P, F_reflect.dot(P), "$P$", "$F_{reflect} P$", axis=[-2, 9, -4.5, 4.5]) plt.show() Explanation: Let's show a last one: reflection through the horizontal axis: End of explanation F_inv_shear = np.array([ [1, -1.5], [0, 1] ]) P_sheared = F_shear.dot(P) P_unsheared = F_inv_shear.dot(P_sheared) plot_transformation(P_sheared, P_unsheared, "$P_{sheared}$", "$P_{unsheared}$", axis=[0, 10, 0, 7]) plt.plot(P[0], P[1], "b--") plt.show() Explanation: Matrix inverse Now that we understand that a matrix can represent any linear transformation, a natural question is: can we find a transformation matrix that reverses the effect of a given transformation matrix $F$? The answer is yes… sometimes! When it exists, such a matrix is called the inverse of $F$, and it is noted $F^{-1}$. For example, the rotation, the shear mapping and the squeeze mapping above all have inverse transformations. Let's demonstrate this on the shear mapping: End of explanation F_inv_shear = LA.inv(F_shear) F_inv_shear Explanation: We applied a shear mapping on $P$, just like we did before, but then we applied a second transformation to the result, and lo and behold this had the effect of coming back to the original $P$ (we plotted the original $P$'s outline to double check). The second transformation is the inverse of the first one. We defined the inverse matrix $F_{shear}^{-1}$ manually this time, but NumPy provides an inv function to compute a matrix's inverse, so we could have written instead: End of explanation plt.plot([0, 0, 1, 1, 0, 0.1, 0.1, 0, 0.1, 1.1, 1.0, 1.1, 1.1, 1.0, 1.1, 0.1], [0, 1, 1, 0, 0, 0.1, 1.1, 1.0, 1.1, 1.1, 1.0, 1.1, 0.1, 0, 0.1, 0.1], "r-") plt.axis([-0.5, 2.1, -0.5, 1.5]) plt.show() Explanation: Only square matrices can be inversed. This makes sense when you think about it: if you have a transformation that reduces the number of dimensions, then some information is lost and there is no way that you can get it back. For example say you use a $2 \times 3$ matrix to project a 3D object onto a plane. The result may look like this: End of explanation F_project = np.array([ [1, 0], [0, 0] ]) plot_transformation(P, F_project.dot(P), "$P$", "$F_{project} \cdot P$", axis=[0, 6, -1, 4]) plt.show() Explanation: Looking at this image, it is impossible to tell whether this is the projection of a cube or the projection of a narrow rectangular object. Some information has been lost in the projection. Even square transformation matrices can lose information. For example, consider this transformation matrix: End of explanation try: LA.inv(F_project) except LA.LinAlgError as e: print("LinAlgError:", e) Explanation: This transformation matrix performs a projection onto the horizontal axis. 
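(A hedged aside: like any projection, this matrix is idempotent, meaning that applying it twice changes nothing more than applying it once.)
np.array_equal(F_project.dot(F_project), F_project)   # True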
Our polygon gets entirely flattened out so some information is entirely lost and it is impossible to go back to the original polygon using a linear transformation. In other words, $F_{project}$ has no inverse. Such a square matrix that cannot be inversed is called a singular matrix (aka degenerate matrix). If we ask NumPy to calculate its inverse, it raises an exception: End of explanation angle30 = 30 * np.pi / 180 F_project_30 = np.array([ [np.cos(angle30)**2, np.sin(2*angle30)/2], [np.sin(2*angle30)/2, np.sin(angle30)**2] ]) plot_transformation(P, F_project_30.dot(P), "$P$", "$F_{project\_30} \cdot P$", axis=[0, 6, -1, 4]) plt.show() Explanation: Here is another example of a singular matrix. This one performs a projection onto the axis at a 30° angle above the horizontal axis: End of explanation LA.inv(F_project_30) Explanation: But this time, due to floating point rounding errors, NumPy manages to calculate an inverse (notice how large the elements are, though): End of explanation F_shear.dot(LA.inv(F_shear)) Explanation: As you might expect, the dot product of a matrix by its inverse results in the identity matrix: $M \cdot M^{-1} = M^{-1} \cdot M = I$ This makes sense since doing a linear transformation followed by the inverse transformation results in no change at all. End of explanation LA.inv(LA.inv(F_shear)) Explanation: Another way to express this is that the inverse of the inverse of a matrix $M$ is $M$ itself: $((M)^{-1})^{-1} = M$ End of explanation F_involution = np.array([ [0, -2], [-1/2, 0] ]) plot_transformation(P, F_involution.dot(P), "$P$", "$F_{involution} \cdot P$", axis=[-8, 5, -4, 4]) plt.show() Explanation: Also, the inverse of scaling by a factor of $\lambda$ is of course scaling by a factor or $\frac{1}{\lambda}$: $ (\lambda \times M)^{-1} = \frac{1}{\lambda} \times M^{-1}$ Once you understand the geometric interpretation of matrices as linear transformations, most of these properties seem fairly intuitive. A matrix that is its own inverse is called an involution. The simplest examples are reflection matrices, or a rotation by 180°, but there are also more complex involutions, for example imagine a transformation that squeezes horizontally, then reflects over the vertical axis and finally rotates by 90° clockwise. Pick up a napkin and try doing that twice: you will end up in the original position. Here is the corresponding involutory matrix: End of explanation F_reflect.dot(F_reflect.T) Explanation: Finally, a square matrix $H$ whose inverse is its own transpose is an orthogonal matrix: $H^{-1} = H^T$ Therefore: $H \cdot H^T = H^T \cdot H = I$ It corresponds to a transformation that preserves distances, such as rotations and reflections, and combinations of these, but not rescaling, shearing or squeezing. Let's check that $F_{reflect}$ is indeed orthogonal: End of explanation M = np.array([ [1, 2, 3], [4, 5, 6], [7, 8, 0] ]) LA.det(M) Explanation: Determinant The determinant of a square matrix $M$, noted $\det(M)$ or $\det M$ or $|M|$ is a value that can be calculated from its elements $(M_{i,j})$ using various equivalent methods. One of the simplest methods is this recursive approach: $|M| = M_{1,1}\times|M^{(1,1)}| - M_{2,1}\times|M^{(2,1)}| + M_{3,1}\times|M^{(3,1)}| - M_{4,1}\times|M^{(4,1)}| + \cdots ± M_{n,1}\times|M^{(n,1)}|$ Where $M^{(i,j)}$ is the matrix $M$ without row $i$ and column $j$. 
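The recursive expansion above translates almost line for line into code. The sketch below (the name det_recursive is ours, and it is purely illustrative since this approach does factorial work) expands along the first column exactly as in the formula, with the alternating signs given by (-1)**i; NumPy's LA.det, used next, is the practical tool:
def det_recursive(M):
    # Illustrative only: expand along the first column, as in the recursive formula above.
    M = np.asarray(M, dtype=float)
    if M.shape == (1, 1):
        return M[0, 0]
    total = 0.0
    for i in range(M.shape[0]):
        minor = np.delete(np.delete(M, i, axis=0), 0, axis=1)  # drop row i and the first column
        total += (-1) ** i * M[i, 0] * det_recursive(minor)
    return total
On the $3 \times 3$ example worked out just below, this sketch should reproduce the value 27 obtained by hand.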
For example, let's calculate the determinant of the following $3 \times 3$ matrix: $M = \begin{bmatrix} 1 & 2 & 3 \ 4 & 5 & 6 \ 7 & 8 & 0 \end{bmatrix}$ Using the method above, we get: $|M| = 1 \times \left | \begin{bmatrix} 5 & 6 \ 8 & 0 \end{bmatrix} \right | - 2 \times \left | \begin{bmatrix} 4 & 6 \ 7 & 0 \end{bmatrix} \right | + 3 \times \left | \begin{bmatrix} 4 & 5 \ 7 & 8 \end{bmatrix} \right |$ Now we need to compute the determinant of each of these $2 \times 2$ matrices (these determinants are called minors): $\left | \begin{bmatrix} 5 & 6 \ 8 & 0 \end{bmatrix} \right | = 5 \times 0 - 6 \times 8 = -48$ $\left | \begin{bmatrix} 4 & 6 \ 7 & 0 \end{bmatrix} \right | = 4 \times 0 - 6 \times 7 = -42$ $\left | \begin{bmatrix} 4 & 5 \ 7 & 8 \end{bmatrix} \right | = 4 \times 8 - 5 \times 7 = -3$ Now we can calculate the final result: $|M| = 1 \times (-48) - 2 \times (-42) + 3 \times (-3) = 27$ To get the determinant of a matrix, you can call NumPy's det function in the numpy.linalg module: End of explanation LA.det(F_project) Explanation: One of the main uses of the determinant is to determine whether a square matrix can be inversed or not: if the determinant is equal to 0, then the matrix cannot be inversed (it is a singular matrix), and if the determinant is not 0, then it can be inversed. For example, let's compute the determinant for the $F_{project}$, $F_{project_30}$ and $F_{shear}$ matrices that we defined earlier: End of explanation LA.det(F_project_30) Explanation: That's right, $F_{project}$ is singular, as we saw earlier. End of explanation LA.det(F_shear) Explanation: This determinant is suspiciously close to 0: it really should be 0, but it's not due to tiny floating point errors. The matrix is actually singular. End of explanation F_scale = np.array([ [0.5, 0], [0, 0.5] ]) plot_transformation(P, F_scale.dot(P), "$P$", "$F_{scale} \cdot P$", axis=[0, 6, -1, 4]) plt.show() Explanation: Perfect! This matrix can be inversed as we saw earlier. Wow, math really works! The determinant can also be used to measure how much a linear transformation affects surface areas: for example, the projection matrices $F_{project}$ and $F_{project_30}$ completely flatten the polygon $P$, until its area is zero. This is why the determinant of these matrices is 0. The shear mapping modified the shape of the polygon, but it did not affect its surface area, which is why the determinant is 1. You can try computing the determinant of a rotation matrix, and you should also find 1. What about a scaling matrix? Let's see: End of explanation LA.det(F_scale) Explanation: We rescaled the polygon by a factor of 1/2 on both vertical and horizontal axes so the surface area of the resulting polygon is 1/4$^{th}$ of the original polygon. Let's compute the determinant and check that: End of explanation LA.det(F_reflect) Explanation: Correct! The determinant can actually be negative, when the transformation results in a "flipped over" version of the original polygon (eg. a left hand glove becomes a right hand glove). For example, the determinant of the F_reflect matrix is -1 because the surface area is preserved but the polygon gets flipped over: End of explanation P_squeezed_then_sheared = F_shear.dot(F_squeeze.dot(P)) Explanation: Composing linear transformations Several linear transformations can be chained simply by performing multiple dot products in a row. 
For example, to perform a squeeze mapping followed by a shear mapping, just write: End of explanation P_squeezed_then_sheared = (F_shear.dot(F_squeeze)).dot(P) Explanation: Since the dot product is associative, the following code is equivalent: End of explanation F_squeeze_then_shear = F_shear.dot(F_squeeze) P_squeezed_then_sheared = F_squeeze_then_shear.dot(P) Explanation: Note that the order of the transformations is the reverse of the dot product order. If we are going to perform this composition of linear transformations more than once, we might as well save the composition matrix like this: End of explanation LA.inv(F_shear.dot(F_squeeze)) == LA.inv(F_squeeze).dot(LA.inv(F_shear)) Explanation: From now on we can perform both transformations in just one dot product, which can lead to a very significant performance boost. What if you want to perform the inverse of this double transformation? Well, if you squeezed and then you sheared, and you want to undo what you have done, it should be obvious that you should unshear first and then unsqueeze. In more mathematical terms, given two invertible (aka nonsingular) matrices $Q$ and $R$: $(Q \cdot R)^{-1} = R^{-1} \cdot Q^{-1}$ And in NumPy: End of explanation U, S_diag, V_T = LA.svd(F_shear) # note: in python 3 you can rename S_diag to Σ_diag U S_diag Explanation: Singular Value Decomposition It turns out that any $m \times n$ matrix $M$ can be decomposed into the dot product of three simple matrices: * a rotation matrix $U$ (an $m \times m$ orthogonal matrix) * a scaling & projecting matrix $\Sigma$ (an $m \times n$ diagonal matrix) * and another rotation matrix $V^T$ (an $n \times n$ orthogonal matrix) $M = U \cdot \Sigma \cdot V^{T}$ For example, let's decompose the shear transformation: End of explanation S = np.diag(S_diag) S Explanation: Note that this is just a 1D array containing the diagonal values of Σ. To get the actual matrix Σ, we can use NumPy's diag function: End of explanation U.dot(np.diag(S_diag)).dot(V_T) F_shear Explanation: Now let's check that $U \cdot \Sigma \cdot V^T$ is indeed equal to F_shear: End of explanation plot_transformation(Square, V_T.dot(Square), "$Square$", "$V^T \cdot Square$", axis=[-0.5, 3.5 , -1.5, 1.5]) plt.show() Explanation: It worked like a charm. Let's apply these transformations one by one (in reverse order) on the unit square to understand what's going on. First, let's apply the first rotation $V^T$: End of explanation plot_transformation(V_T.dot(Square), S.dot(V_T).dot(Square), "$V^T \cdot Square$", "$\Sigma \cdot V^T \cdot Square$", axis=[-0.5, 3.5 , -1.5, 1.5]) plt.show() Explanation: Now let's rescale along the vertical and horizontal axes using $\Sigma$: End of explanation plot_transformation(S.dot(V_T).dot(Square), U.dot(S).dot(V_T).dot(Square),"$\Sigma \cdot V^T \cdot Square$", "$U \cdot \Sigma \cdot V^T \cdot Square$", axis=[-0.5, 3.5 , -1.5, 1.5]) plt.show() Explanation: Finally, we apply the second rotation $U$: End of explanation eigenvalues, eigenvectors = LA.eig(F_squeeze) eigenvalues # [λ0, λ1, …] eigenvectors # [v0, v1, …] Explanation: And we can see that the result is indeed a shear mapping of the original unit square. Eigenvectors and eigenvalues An eigenvector of a square matrix $M$ (also called a characteristic vector) is a non-zero vector that remains on the same line after transformation by the linear transformation associated with $M$. 
A more formal definition is any vector $v$ such that: $M \cdot v = \lambda \times v$ Where $\lambda$ is a scalar value called the eigenvalue associated to the vector $v$. For example, any horizontal vector remains horizontal after applying the shear mapping (as you can see on the image above), so it is an eigenvector of $M$. A vertical vector ends up tilted to the right, so vertical vectors are NOT eigenvectors of $M$. If we look at the squeeze mapping, we find that any horizontal or vertical vector keeps its direction (although its length changes), so all horizontal and vertical vectors are eigenvectors of $F_{squeeze}$. However, rotation matrices have no eigenvectors at all (except if the rotation angle is 0° or 180°, in which case all non-zero vectors are eigenvectors). NumPy's eig function returns the list of unit eigenvectors and their corresponding eigenvalues for any square matrix. Let's look at the eigenvectors and eigenvalues of the squeeze mapping matrix $F_{squeeze}$: End of explanation eigenvalues2, eigenvectors2 = LA.eig(F_shear) eigenvalues2 # [λ0, λ1, …] eigenvectors2 # [v0, v1, …] Explanation: Indeed the horizontal vectors are stretched by a factor of 1.4, and the vertical vectors are shrunk by a factor of 1/1.4=0.714…, so far so good. Let's look at the shear mapping matrix $F_{shear}$: End of explanation D = np.array([ [100, 200, 300], [ 10, 20, 30], [ 1, 2, 3], ]) np.trace(D) Explanation: Wait, what!? We expected just one unit eigenvector, not two. The second vector is almost equal to $\begin{pmatrix}-1 \ 0 \end{pmatrix}$, which is on the same line as the first vector $\begin{pmatrix}1 \ 0 \end{pmatrix}$. This is due to floating point errors. We can safely ignore vectors that are (almost) colinear (ie. on the same line). Trace The trace of a square matrix $M$, noted $tr(M)$ is the sum of the values on its main diagonal. For example: End of explanation np.trace(F_project) Explanation: The trace does not have a simple geometric interpretation (in general), but it has a number of properties that make it useful in many areas: * $tr(A + B) = tr(A) + tr(B)$ * $tr(A \cdot B) = tr(B \cdot A)$ * $tr(A \cdot B \cdot \cdots \cdot Y \cdot Z) = tr(Z \cdot A \cdot B \cdot \cdots \cdot Y)$ * $tr(A^T \cdot B) = tr(A \cdot B^T) = tr(B^T \cdot A) = tr(B \cdot A^T) = \sum_{i,j}X_{i,j} \times Y_{i,j}$ * … It does, however, have a useful geometric interpretation in the case of projection matrices (such as $F_{project}$ that we discussed earlier): it corresponds to the number of dimensions after projection. For example: End of explanation
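As a small closing check (a sketch reusing matrices already defined in this notebook), one of the trace identities listed above, $tr(A \cdot B) = tr(B \cdot A)$, can be verified numerically:
# Quick numerical check of tr(A.B) == tr(B.A) using two matrices from earlier cells.
A, B = F_shear, F_squeeze
np.isclose(np.trace(A.dot(B)), np.trace(B.dot(A)))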
14,892
Given the following text description, write Python code to implement the functionality described below step by step Description: Closures Before getting into closures let's understand nested functions. A function defined inside another function is called a nested function. Nested functions can access variables of the enclosing scope. In Python, non-local variables are read-only by default and we must declare them explicitly as non-local in order to modify them. Please find an example below of a nested function accessing a non-local variable. Step1: We can see that the nested function printer() was able to access the non-local variable msg of the enclosing function. In the example above, what would happen if the last line of the function print_msg() returned the printer() function instead of calling it? This means the function was defined as follows Step2: In the example above the print_msg() function was called with the string "Hello" and the returned function was bound to the name another. On calling another(), the message was still remembered although we had already finished executing the print_msg() function. This technique by which some data ("Hello") gets attached to the code is called a closure in Python. This value in the enclosing scope is remembered even when the variable goes out of scope or the function itself is removed from the current namespace. Step3: The criteria that must be met to create a closure in Python: we must have a nested function (a function inside a function), the nested function must refer to a value defined in the enclosing function, and the enclosing function must return the nested function. Closures are good when Step4: All function objects have a __closure__ attribute that returns a tuple of cell objects if it is a closure function.
Python Code: def print_msg(msg): # This is the outer enclosing function def printer(): # This is the nested function print(msg) printer() print_msg('Hello') Explanation: Closures Before getting into closures lets understand nested functions. A function defined inside another function is called a nested function. Nested functions can access variables of the enclosing scope. In Python, non-local variables are read only by default and we must declare them explicitly as non-local in order to modify them. Please find an example below of a nested function accessing a non-local variable. End of explanation def print_msg(msg): # This is the outer enclosing function def printer(): # This is the nested function print(msg) return printer # This is changed from the above example another = print_msg("Hello") another() Explanation: We can see that the nested function printer() was able to access the non-local variable msg of the enclosing function. In the example above, what would happen if the last line of the function print_msg() returned the printer() function instead of calling it? This means the function was defined as follows: End of explanation del print_msg another() Explanation: In the example above the print_msg() function was called with the string "Hello" and the returned function was bound to the name another. On calling another(), the message was still remembered although we have already finished executing the print_msg() function. This technique by which some data ("Hello") get attached to the code is called closure in python. This value in the enclosing scope is remembered even when the variable goes out of scope or the function itself is removed from the current namespace. End of explanation def make_multiplier_of(n): def multiplier(x): return x * n return multiplier # multiplier of 3 times3 = make_multiplier_of(3) # multiplier of 5 times5 = make_multiplier_of(5) print(times3(9)) print(times5(3)) print(times5(times3(2))) Explanation: The criteria that must be met to create closure in python We must have a nested function (function inside a function) Nested function must refer to value defined in the enclosing function. The enclosing function must return the nested function. Closures are good when: To avoid global values and provide some form of data hiding. To provide an object oriented solution to the problem. When there are few methods (one method in most cases) to be implemented in a class, closures can provide an alternate and more elegant solutions. But when the number of attributes and methods get larger, better implement a class. End of explanation make_multiplier_of.__closure__ times3.__closure__ times3.__closure__[0].cell_contents times5.__closure__[0].cell_contents Explanation: All function objects have a __closure__attribute that returns a tuple of cell objects if it is a closure function. End of explanation
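The description notes that an enclosing-scope variable is read-only inside the nested function unless it is declared nonlocal, but none of the cells above actually modify such a variable. Here is a small illustrative sketch of that case (the make_counter name is ours; it requires Python 3):
def make_counter():
    count = 0                 # enclosing-scope state captured by the closure
    def counter():
        nonlocal count        # needed to rebind the enclosing variable
        count += 1
        return count
    return counter

tick = make_counter()
print(tick(), tick(), tick())  # 1 2 3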
14,893
Given the following text description, write Python code to implement the functionality described below step by step Description: Toy weather data Here is an example of how to easily manipulate a toy weather dataset using xarray and other recommended Python libraries Step1: Examine a dataset with pandas and seaborn Step2: Probability of freeze by calendar month Step3: Monthly averaging Step4: Note that MS here refers to Month-Start; M labels Month-End (the last day of the month). Calculate monthly anomalies In climatology, “anomalies” refer to the difference between observations and typical weather for a particular season. Unlike observations, anomalies should not show any seasonal cycle. Step5: Fill missing values with climatology The fillna() method on grouped objects lets you easily fill missing values by group
Python Code: import xarray as xr import numpy as np import pandas as pd import seaborn as sns # pandas aware plotting library np.random.seed(123) times = pd.date_range('2000-01-01', '2001-12-31', name='time') annual_cycle = np.sin(2 * np.pi * (times.dayofyear / 365.25 - 0.28)) base = 10 + 15 * annual_cycle.reshape(-1, 1) tmin_values = base + 3 * np.random.randn(annual_cycle.size, 3) tmax_values = base + 10 + 3 * np.random.randn(annual_cycle.size, 3) ds = xr.Dataset({'tmin': (('time', 'location'), tmin_values), 'tmax': (('time', 'location'), tmax_values)}, {'time': times, 'location': ['IA', 'IN', 'IL']}) Explanation: Toy weather data Here is an example of how to easily manipulate a toy weather dataset using xarray and other recommended Python libraries: Examine a dataset with pandas and seaborn Probability of freeze by calendar month Monthly averaging Calculate monthly anomalies Fill missing values with climatology Shared setup: End of explanation ds df = ds.to_dataframe() df.head() df.describe() ds.mean(dim='location').to_dataframe().plot() sns.pairplot(df.reset_index(), vars=ds.data_vars) Explanation: Examine a dataset with pandas and seaborn End of explanation freeze = (ds['tmin'] <= 0).groupby('time.month').mean('time') freeze freeze.to_pandas().plot() Explanation: Probability of freeze by calendar month End of explanation monthly_avg = ds.resample('1MS', dim='time', how='mean') monthly_avg.sel(location='IA').to_dataframe().plot(style='s-') Explanation: Monthly averaging End of explanation climatology = ds.groupby('time.month').mean('time') anomalies = ds.groupby('time.month') - climatology anomalies.mean('location').to_dataframe()[['tmin', 'tmax']].plot() Explanation: Note that MS here refers to Month-Start; M labels Month-End (the last day of the month). Calculate monthly anomalies In climatology, “anomalies” refer to the difference between observations and typical weather for a particular season. Unlike observations, anomalies should not show any seasonal cycle. End of explanation # throw away the first half of every month some_missing = ds.tmin.sel(time=ds['time.day'] > 15).reindex_like(ds) filled = some_missing.groupby('time.month').fillna(climatology.tmin) both = xr.Dataset({'some_missing': some_missing, 'filled': filled}) both df = both.sel(time='2000').mean('location').reset_coords(drop=True).to_dataframe() df[['filled', 'some_missing']].plot() Explanation: Fill missing values with climatology The fillna() method on grouped objects lets you easily fill missing values by group: End of explanation
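One compatibility note, offered as an assumption about newer library versions rather than as part of the original example: recent xarray releases dropped the resample('1MS', dim='time', how='mean') keyword style used above in favour of the dimension-keyword form, so the monthly averaging step would look roughly like this:
# Monthly averaging with the newer xarray resample API (assumes a reasonably recent xarray version).
monthly_avg = ds.resample(time='1MS').mean()
monthly_avg.sel(location='IA').to_dataframe().plot(style='s-')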
14,894
Given the following text description, write Python code to implement the functionality described below step by step Description: Implementing the NFFT Step2: We want to solve the following Step3: Let's try evaluating this on some sinusoidal data, with a frequency of 10 cycles per unit time Step4: As expected, the NFFT shows strong features at wave numbers $\pm 10$. Fast transform Step6: The expanded algorithm Step8: Speedup #1 Step9: Speedup #2 Step11: By design, each row of the matrix contains just a single nonzero clump of entries, of width approximately $2m$. We could make this basis function sum far more efficient if we construct this as a sparse rather than a dense matrix. Because each row has the same number of nonzero entries (that is, $2m$), this can be done quite efficiently using scipy's compressed sparse row (CSR) sparse matrix format. We will build the matrix from an array of values, an array of column indices, and an array of indices specifying the beginning and end of each sequential row within these arrays Step12: Choosing m Finally, we should be a bit more careful about choosing a suitable value of m for our problem. The paper offers a way to estimate m from a desired error tolerance for the result of the computation. Step15: Let's add this to our nfft function
Python Code: from __future__ import division %matplotlib inline import matplotlib.pyplot as plt import numpy as np Explanation: Implementing the NFFT End of explanation def ndft(x, f, N): non-equispaced discrete Fourier transform k = -(N // 2) + np.arange(N) return np.dot(f, np.exp(2j * np.pi * k * x[:, np.newaxis])) Explanation: We want to solve the following: $$ \hat{f}k = \sum{j=0}^{M-1} f_j e^{2\pi i k x_j}, $$ for complex values ${f_j}$ at points ${x_j}$ satisfying $-1/2 \le x_j < 1/2$, and for integer wavenumbers $k$ in the range $-N/2 \le k < N$. A straightforward implementation of this sum would require $\mathcal{O}[MN]$ operations, but the nonequispaced fast Fourier transform (NFFT) allows this to be computed in $\mathcal{O}[M\log N]$. In this post, we'll work on writing a Python implementation of this NFFT algorithm that uses tools in NumPy and SciPy rather than custom compiled extensions. Straightforward implementation As our first step, we'll define a simple reference implementation of the non-equispaced direct Fourier transform (NDFT): End of explanation x = -0.5 + np.random.rand(1000) f = np.sin(10 * 2 * np.pi * x) k = -20 + np.arange(40) f_k = ndft(x, f, len(k)) plt.plot(k, f_k.real, label='real') plt.plot(k, f_k.imag, label='imag') plt.legend() Explanation: Let's try evaluating this on some sinusoidal data, with a frequency of 10 cycles per unit time: End of explanation # equations C.1 from https://www-user.tu-chemnitz.de/~potts/paper/nfft3.pdf def phi(x, n, m, sigma): b = (2 * sigma * m) / ((2 * sigma - 1) * np.pi) return np.exp(-(n * x) ** 2 / b) / np.sqrt(np.pi * b) def phi_hat(k, n, m, sigma): b = (2 * sigma * m) / ((2 * sigma - 1) * np.pi) return np.exp(-b * (np.pi * k / n) ** 2) from numpy.fft import fft, fftshift, ifftshift N = 1000 sigma = 1 n = N * sigma m = 20 # compute phi(x) x = np.linspace(-0.5, 0.5, N, endpoint=False) f = phi(x, n, m, sigma) # compute phi_hat(k) k = -(N // 2) + np.arange(N) f_hat = phi_hat(k, n, m, sigma) # compute the FFT of phi(x) f_fft = fftshift(fft(ifftshift(f))) # assure they match np.allclose(f_fft, f_hat) Explanation: As expected, the NFFT shows strong features at wave numbers $\pm 10$. Fast transform: initial implementation The Kernel function End of explanation import numpy as np def nfft1(x, f, N, sigma=2): Alg 3 from https://www-user.tu-chemnitz.de/~potts/paper/nfft3.pdf n = N * sigma # size of oversampled grid m = 20 # magic number: we'll set this more carefully later # 1. Express f(x) in terms of basis functions phi shift_to_range = lambda x: -0.5 + (x + 0.5) % 1 x_grid = np.linspace(-0.5, 0.5, n, endpoint=False) g = np.dot(f, phi(shift_to_range(x[:, None] - x_grid), n, m, sigma)) # 2. Compute the Fourier transform of g on the oversampled grid k = -(N // 2) + np.arange(N) g_k = np.dot(g, np.exp(2j * np.pi * k * x_grid[:, None])) # 3. Divide by the Fourier transform of the convolution kernel f_k = g_k / phi_hat(k, n, m, sigma) return f_k x = -0.5 + np.random.rand(1000) f = np.sin(10 * 2 * np.pi * x) N = 100 np.allclose(ndft(x, f, N), nfft1(x, f, N)) Explanation: The expanded algorithm End of explanation import numpy as np from numpy.fft import fft, ifft, fftshift, ifftshift def nfft2(x, f, N, sigma=2): Alg 3 from https://www-user.tu-chemnitz.de/~potts/paper/nfft3.pdf n = N * sigma # size of oversampled grid m = 20 # magic number: we'll set this more carefully later # 1. 
Express f(x) in terms of basis functions phi shift_to_range = lambda x: -0.5 + (x + 0.5) % 1 x_grid = np.linspace(-0.5, 0.5, n, endpoint=False) g = np.dot(f, phi(shift_to_range(x[:, None] - x_grid), n, m, sigma)) # 2. Compute the Fourier transform of g on the oversampled grid k = -(N // 2) + np.arange(N) g_k_n = fftshift(ifft(ifftshift(g))) g_k = n * g_k_n[(n - N) // 2: (n + N) // 2] # 3. Divide by the Fourier transform of the convolution kernel f_k = g_k / phi_hat(k, n, m, sigma) return f_k x = -0.5 + np.random.rand(1000) f = np.sin(10 * 2 * np.pi * x) N = 100 np.allclose(ndft(x, f, N), nfft2(x, f, N)) Explanation: Speedup #1: using an FFT We'll replace this slow sum python g_k = np.dot(g, np.exp(2j * np.pi * k * x_grid[:, None])) With the FFT-based version of the sum python g_k_n = fftshift(ifft(ifftshift(g))) g_k = n * g_k_n[(n - N) // 2: (n + N) // 2] End of explanation sigma = 3 n = sigma * N m = 20 x_grid = np.linspace(-0.5, 0.5, n, endpoint=False) shift_to_range = lambda x: -0.5 + (x + 0.5) % 1 mat = phi(shift_to_range(x[:, None] - x_grid), n, m, sigma) plt.imshow(mat, aspect='auto') plt.colorbar() Explanation: Speedup #2: Truncating the basis function sum The expression of the inputs in terms of the basis functions $\phi$ takes the form of a matrix-vector product: python np.dot(f, phi(shift_to_range(x[:, None] - x_grid), n, m, sigma)) The NFFT algorithm is designed so that this matrix will be sparse, as we can see by visualizing it: End of explanation from scipy.sparse import csr_matrix col_ind = np.floor(n * x[:, np.newaxis]).astype(int) + np.arange(-m, m) vals = phi(shift_to_range(x[:, None] - col_ind / n), n, m, sigma) col_ind = (col_ind + n // 2) % n row_ptr = np.arange(len(x) + 1) * col_ind.shape[1] spmat = csr_matrix((vals.ravel(), col_ind.ravel(), row_ptr), shape=(len(x), n)) plt.imshow(spmat.toarray(), aspect='auto') plt.colorbar() np.allclose(spmat.toarray(), mat) import numpy as np from numpy.fft import fft, ifft, fftshift, ifftshift def nfft3(x, f, N, sigma=2): Alg 3 from https://www-user.tu-chemnitz.de/~potts/paper/nfft3.pdf n = N * sigma # size of oversampled grid m = 20 # magic number: we'll set this more carefully later # 1. Express f(x) in terms of basis functions phi shift_to_range = lambda x: -0.5 + (x + 0.5) % 1 col_ind = np.floor(n * x[:, np.newaxis]).astype(int) + np.arange(-m, m) vals = phi(shift_to_range(x[:, None] - col_ind / n), n, m, sigma) col_ind = (col_ind + n // 2) % n row_ptr = np.arange(len(x) + 1) * col_ind.shape[1] mat = csr_matrix((vals.ravel(), col_ind.ravel(), row_ptr), shape=(len(x), n)) g = mat.T.dot(f) # 2. Compute the Fourier transform of g on the oversampled grid k = -(N // 2) + np.arange(N) g_k_n = fftshift(ifft(ifftshift(g))) g_k = n * g_k_n[(n - N) // 2: (n + N) // 2] # 3. Divide by the Fourier transform of the convolution kernel f_k = g_k / phi_hat(k, n, m, sigma) return f_k x = -0.5 + np.random.rand(1000) f = np.sin(10 * 2 * np.pi * x) N = 100 np.allclose(ndft(x, f, N), nfft3(x, f, N)) Explanation: By design, each row of the matrix contains just a single nonzero clump of entries, of width approximately $2m$. We could make this basis function sum far more efficient if we construct this as a sparse rather than a dense matrix. Because each row has the same number of nonzero entries (that is, $2m$), this can be done quite efficiently using scipy's compressed sparse row (CSR) sparse matrix format. 
We will build the matrix from an array of values, an array of column indices, and an array of indices specifying the beginning and end of each sequential row within these arrays: End of explanation def C_phi(m, sigma): return 4 * np.exp(-m * np.pi * (1 - 1. / (2 * sigma - 1))) def m_from_C_phi(C, sigma): return np.ceil(-np.log(0.25 * C) / (np.pi * (1 - 1 / (2 * sigma - 1)))) Explanation: Choosing m Finally, we should be a bit more careful about choosing a suitable value of m for our problem. The paper offers a way to estimate m from a desired error tolerance for the result of the computation. End of explanation import numpy as np from numpy.fft import fft, ifft, fftshift, ifftshift def nfft(x, f, N, sigma=2, tol=1E-8): Alg 3 from https://www-user.tu-chemnitz.de/~potts/paper/nfft3.pdf n = N * sigma # size of oversampled grid m = m_from_C_phi(tol / N, sigma) # 1. Express f(x) in terms of basis functions phi shift_to_range = lambda x: -0.5 + (x + 0.5) % 1 col_ind = np.floor(n * x[:, np.newaxis]).astype(int) + np.arange(-m, m) vals = phi(shift_to_range(x[:, None] - col_ind / n), n, m, sigma) col_ind = (col_ind + n // 2) % n indptr = np.arange(len(x) + 1) * col_ind.shape[1] mat = csr_matrix((vals.ravel(), col_ind.ravel(), indptr), shape=(len(x), n)) g = mat.T.dot(f) # 2. Compute the Fourier transform of g on the oversampled grid k = -(N // 2) + np.arange(N) g_k_n = fftshift(ifft(ifftshift(g))) g_k = n * g_k_n[(n - N) // 2: (n + N) // 2] # 3. Divide by the Fourier transform of the convolution kernel f_k = g_k / phi_hat(k, n, m, sigma) return f_k x = -0.5 + np.random.rand(1000) f = np.sin(10 * 2 * np.pi * x) N = 100 np.allclose(ndft(x, f, N), nfft(x, f, N)) from pynfft import NFFT def cnfft(x, f, N): Compute the nfft with pynfft plan = NFFT(N, len(x)) plan.x = x plan.precompute() plan.f = f # need to return a copy because of a # reference counting bug in pynfft return plan.adjoint().copy() np.allclose(cnfft(x, f, N), nfft(x, f, N)) x = -0.5 + np.random.rand(10000) f = np.sin(10 * 2 * np.pi * x) N = 10000 #print("direct ndft:") #%timeit ndft(x, f, N) #print() print("fast nfft:") %timeit nfft(x, f, N) print() print("wrapped C-nfft/pynfft package:") %timeit cnfft(x, f, N) Explanation: Let's add this to our nfft function End of explanation
14,895
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Chemistry Scheme Scope Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Form Is Required Step9: 1.6. Number Of Tracers Is Required Step10: 1.7. Family Approach Is Required Step11: 1.8. Coupling With Chemical Reactivity Is Required Step12: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required Step13: 2.2. Code Version Is Required Step14: 2.3. Code Languages Is Required Step15: 3. Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required Step16: 3.2. Split Operator Advection Timestep Is Required Step17: 3.3. Split Operator Physical Timestep Is Required Step18: 3.4. Split Operator Chemistry Timestep Is Required Step19: 3.5. Split Operator Alternate Order Is Required Step20: 3.6. Integrated Timestep Is Required Step21: 3.7. Integrated Scheme Type Is Required Step22: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required Step23: 4.2. Convection Is Required Step24: 4.3. Precipitation Is Required Step25: 4.4. Emissions Is Required Step26: 4.5. Deposition Is Required Step27: 4.6. Gas Phase Chemistry Is Required Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required Step30: 4.9. Photo Chemistry Is Required Step31: 4.10. Aerosols Is Required Step32: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required Step33: 5.2. Global Mean Metrics Used Is Required Step34: 5.3. Regional Metrics Used Is Required Step35: 5.4. Trend Metrics Used Is Required Step36: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required Step37: 6.2. Matches Atmosphere Grid Is Required Step38: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required Step39: 7.2. Canonical Horizontal Resolution Is Required Step40: 7.3. Number Of Horizontal Gridpoints Is Required Step41: 7.4. Number Of Vertical Levels Is Required Step42: 7.5. Is Adaptive Grid Is Required Step43: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required Step44: 8.2. Use Atmospheric Transport Is Required Step45: 8.3. Transport Details Is Required Step46: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required Step47: 10. 
Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required Step48: 10.2. Method Is Required Step49: 10.3. Prescribed Climatology Emitted Species Is Required Step50: 10.4. Prescribed Spatially Uniform Emitted Species Is Required Step51: 10.5. Interactive Emitted Species Is Required Step52: 10.6. Other Emitted Species Is Required Step53: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required Step54: 11.2. Method Is Required Step55: 11.3. Prescribed Climatology Emitted Species Is Required Step56: 11.4. Prescribed Spatially Uniform Emitted Species Is Required Step57: 11.5. Interactive Emitted Species Is Required Step58: 11.6. Other Emitted Species Is Required Step59: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required Step60: 12.2. Prescribed Upper Boundary Is Required Step61: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required Step62: 13.2. Species Is Required Step63: 13.3. Number Of Bimolecular Reactions Is Required Step64: 13.4. Number Of Termolecular Reactions Is Required Step65: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required Step66: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required Step67: 13.7. Number Of Advected Species Is Required Step68: 13.8. Number Of Steady State Species Is Required Step69: 13.9. Interactive Dry Deposition Is Required Step70: 13.10. Wet Deposition Is Required Step71: 13.11. Wet Oxidation Is Required Step72: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required Step73: 14.2. Gas Phase Species Is Required Step74: 14.3. Aerosol Species Is Required Step75: 14.4. Number Of Steady State Species Is Required Step76: 14.5. Sedimentation Is Required Step77: 14.6. Coagulation Is Required Step78: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required Step79: 15.2. Gas Phase Species Is Required Step80: 15.3. Aerosol Species Is Required Step81: 15.4. Number Of Steady State Species Is Required Step82: 15.5. Interactive Dry Deposition Is Required Step83: 15.6. Coagulation Is Required Step84: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required Step85: 16.2. Number Of Reactions Is Required Step86: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required Step87: 17.2. Environmental Conditions Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'thu', 'sandbox-2', 'atmoschem') Explanation: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era: CMIP6 Institute: THU Source ID: SANDBOX-2 Topic: Atmoschem Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. Properties: 84 (39 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:40 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric chemistry model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmospheric chemistry model code. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Chemistry Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. 
Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/mixing ratio for gas" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Form of prognostic variables in the atmospheric chemistry component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of advected tracers in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry calculations (not advection) generalized into families of species? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.8. Coupling With Chemical Reactivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Operator splitting" # "Integrated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. 
Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the evolution of a given variable End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemical species advection (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for physics (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Split Operator Chemistry Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemistry (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.5. Split Operator Alternate Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.6. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the atmospheric chemistry model (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3.7. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.2. Convection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Precipitation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.4. Emissions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.5. Deposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.6. Gas Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.9. Photo Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.10. Aerosols Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the atmopsheric chemistry grid End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 * Does the atmospheric chemistry grid match the atmosphere grid?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 7.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview of transport implementation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.2. Use Atmospheric Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is transport handled by the atmosphere, rather than within atmospheric cehmistry? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.transport_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Transport Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If transport is handled within the atmospheric chemistry scheme, describe it. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric chemistry emissions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Soil" # "Sea surface" # "Anthropogenic" # "Biomass burning" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via any other method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Aircraft" # "Biomass burning" # "Lightning" # "Volcanos" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. 
Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an &quot;other method&quot; End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview gas phase atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HOx" # "NOy" # "Ox" # "Cly" # "HSOx" # "Bry" # "VOCs" # "isoprene" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Species included in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.3. Number Of Bimolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of bi-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.4. Number Of Termolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of ter-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.7. Number Of Advected Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of advected species in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.8. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.9. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.10. Wet Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.11. Wet Oxidation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview stratospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Cly" # "Bry" # "NOy" # TODO - please enter value(s) Explanation: 14.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Gas phase species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule))" # TODO - please enter value(s) Explanation: 14.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.5. Sedimentation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview tropospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of gas phase species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon/soot" # "Polar stratospheric ice" # "Secondary organic aerosols" # "Particulate organic matter" # TODO - please enter value(s) Explanation: 15.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.5. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric photo chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 16.2. Number Of Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the photo-chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline (clear sky)" # "Offline (with clouds)" # "Online" # TODO - please enter value(s) Explanation: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Photolysis scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.2. Environmental Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.) End of explanation
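To make the template concrete, here is how one of the TODO cells above might look once completed; the property id is taken from section 17.1 and the chosen option ("Offline (with clouds)") is only an assumed example, not a recommendation for any particular model.

# Illustrative completed cell (sketch): same DOC API as above, with an assumed choice.
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
DOC.set_value("Offline (with clouds)")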
14,896
Given the following text description, write Python code to implement the functionality described below step by step Description: STELLAB test notebook The STELLAB module (which is a contraction for Stellar Abundances) enables to plot observational data for comparison with galactic chemical evolution (GCE) predictions. The abundance ratios are presented in the following spectroscopic notation Step1: Simple Plot In order to plot observed stellar abundances, you just need to enter the wanted ratios with the xaxis and yaxis parameters. Stellab has been coded in a way that any abundance ratio can be plotted (see Appendix A below), as long as the considered data sets contain the elements. In this example, we consider the Milky Way. Step2: Solar Normalization By default, the solar normalization $\log(n_A/n_B)_\odot$ is taken from the reference paper that provide the data set. But every data point can be re-normalized to any other solar values (see Appendix B), using the norm parameter. This is highly recommended, since the original data points may not have the same solar normalization. Step3: Here is an example of how the observational data can be re-normalized. Step4: Important Note In some papers, I had a hard time finding the solar normalization used by the authors. This means I cannot apply the re-normalization for their data set. When that happens, I print a warning below the plot and add two asterisk after the reference paper in the legend. Personal Selection You can select a subset of the observational data implemented in Stellab. Step5: Galaxy Selection The Milky Way (milky_way) is the default galaxy. But you can select another galaxy among Sculptor, Fornax, and Carina (use lower case letters). Step6: Plot Error Bars It is possible to plot error bars with the show_err parameter, and print the mean errors with the show_mean_err parameter. Step7: Appendix A - Abundance Ratios Let's consider that a data set provides stellar abundances in the form of [X/Y], where Y is the reference element (often H or Fe) and X represents any element. It is possible to change the reference element by using simple substractions and additions. Substraction Let's say we want [Ca/Mg] from [Ca/Fe] and [Mg/Fe]. $$[\mathrm{Ca}/\mathrm{Mg}]=\log(n_\mathrm{Ca}/n_\mathrm{Mg})-\log(n_\mathrm{Ca}/n_\mathrm{Mg})_\odot$$ $$=\log\left(\frac{n_\mathrm{Ca}/n_\mathrm{Fe}}{n_\mathrm{Mg}/n_\mathrm{Fe}}\right)-\log\left(\frac{n_\mathrm{Ca}/n_\mathrm{Fe}}{n_\mathrm{Mg}/n_\mathrm{Fe}}\right)_\odot$$ $$=\log(n_\mathrm{Ca}/n_\mathrm{Fe})-\log(n_\mathrm{Mg}/n_\mathrm{Fe})-\log(n_\mathrm{Ca}/n_\mathrm{Fe})\odot+\log(n\mathrm{Mg}/n_\mathrm{Fe})_\odot$$ $$=[\mathrm{Ca}/\mathrm{Fe}]-[\mathrm{Mg}/\mathrm{Fe}]$$ Addition Let's say we want [Mg/H] from [Fe/H] and [Mg/Fe]. $$[\mathrm{Mg}/\mathrm{H}]=\log(n_\mathrm{Mg}/n_\mathrm{H})-\log(n_\mathrm{Mg}/n_\mathrm{H})_\odot$$ $$=\log\left(\frac{n_\mathrm{Mg}/n_\mathrm{Fe}}{n_\mathrm{H}/n_\mathrm{Fe}}\right)-\log\left(\frac{n_\mathrm{Mg}/n_\mathrm{Fe}}{n_\mathrm{H}/n_\mathrm{Fe}}\right)_\odot$$ $$=\log(n_\mathrm{Mg}/n_\mathrm{Fe})-\log(n_\mathrm{H}/n_\mathrm{Fe})-\log(n_\mathrm{Mg}/n_\mathrm{Fe})\odot+\log(n\mathrm{H}/n_\mathrm{Fe})_\odot$$ $$=\log(n_\mathrm{Mg}/n_\mathrm{Fe})+\log(n_\mathrm{Fe}/n_\mathrm{H})-\log(n_\mathrm{Mg}/n_\mathrm{Fe})\odot-\log(n\mathrm{Fe}/n_\mathrm{H})_\odot$$ $$=[\mathrm{Mg}/\mathrm{Fe}]+[\mathrm{Fe}/\mathrm{H}]$$ Test
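The code that follows introduces these options one at a time; as a quick reference, the sketch below combines the main keyword arguments (xaxis/yaxis, norm, galaxy, show_err/show_mean_err) in a single call. The particular ratio, normalization and galaxy are only example values.

# Hedged sketch: one plot_spectro call combining the options described above.
import matplotlib.pyplot as plt
import stellab

s = stellab.stellab()
s.plot_spectro(xaxis='[Fe/H]', yaxis='[Mg/Fe]',
               norm='Asplund_et_al_2009',          # re-normalize all data sets (Solar Normalization)
               galaxy='sculptor',                   # default is 'milky_way' (Galaxy Selection)
               show_err=True, show_mean_err=True)   # error bars and mean errors (Plot Error Bars)
plt.xlim(-4.5, 0.75)
plt.ylim(-1.4, 1.4)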
Python Code: # Import the needed packages import matplotlib import matplotlib.pyplot as plt # Import the observational data module import stellab import sys # Trigger interactive or non-interactive depending on command line argument __RUNIPY__ = sys.argv[0] if __RUNIPY__: %matplotlib inline else: %pylab nbagg Explanation: STELLAB test notebook The STELLAB module (which is a contraction for Stellar Abundances) enables to plot observational data for comparison with galactic chemical evolution (GCE) predictions. The abundance ratios are presented in the following spectroscopic notation : $$[A/B]=\log(n_A/n_B)-\log(n_A/n_B)_\odot.$$ The following sections describe how to use the code. End of explanation # Create an instance of Stellab s = stellab.stellab() # Plot observational data (you can try all the ratios you want) s.plot_spectro(xaxis='[Fe/H]', yaxis='[Eu/Fe]') plt.xlim(-4.5,0.75) plt.ylim(-1.6,1.6) Explanation: Simple Plot In order to plot observed stellar abundances, you just need to enter the wanted ratios with the xaxis and yaxis parameters. Stellab has been coded in a way that any abundance ratio can be plotted (see Appendix A below), as long as the considered data sets contain the elements. In this example, we consider the Milky Way. End of explanation # First, you can see the list of the available solar abundances s.list_solar_norm() Explanation: Solar Normalization By default, the solar normalization $\log(n_A/n_B)_\odot$ is taken from the reference paper that provide the data set. But every data point can be re-normalized to any other solar values (see Appendix B), using the norm parameter. This is highly recommended, since the original data points may not have the same solar normalization. End of explanation # Plot using the default solar normalization of each data set s.plot_spectro(xaxis='[Fe/H]', yaxis='[Ca/Fe]') plt.xlim(-4.5,0.75) plt.ylim(-1.4,1.6) # Plot using the same solar normalization for all data sets s.plot_spectro(xaxis='[Fe/H]', yaxis='[Ca/Fe]',norm='Asplund_et_al_2009') plt.xlim(-4.5,0.75) plt.ylim(-1.4,1.6) Explanation: Here is an example of how the observational data can be re-normalized. End of explanation # First, you can see the list of the available reference papers s.list_ref_papers() # Create a list of reference papers obs = ['stellab_data/milky_way_data/Jacobson_et_al_2015_stellab',\ 'stellab_data/milky_way_data/Venn_et_al_2004_stellab',\ 'stellab_data/milky_way_data/Yong_et_al_2013_stellab',\ 'stellab_data/milky_way_data/Bensby_et_al_2014_stellab'] # Plot data using your selection of data points s.plot_spectro(xaxis='[Fe/H]', yaxis='[Ca/Fe]', norm='Asplund_et_al_2009', obs=obs) plt.xlim(-4.5,0.7) plt.ylim(-1.4,1.6) Explanation: Important Note In some papers, I had a hard time finding the solar normalization used by the authors. This means I cannot apply the re-normalization for their data set. When that happens, I print a warning below the plot and add two asterisk after the reference paper in the legend. Personal Selection You can select a subset of the observational data implemented in Stellab. End of explanation # Plot data using a specific galaxy s.plot_spectro(xaxis='[Fe/H]', yaxis='[Si/Fe]',norm='Asplund_et_al_2009', galaxy='fornax') plt.xlim(-4.5,0.75) plt.ylim(-1.4,1.4) Explanation: Galaxy Selection The Milky Way (milky_way) is the default galaxy. But you can select another galaxy among Sculptor, Fornax, and Carina (use lower case letters). 
End of explanation # Plot error bars for a specific galaxy s.plot_spectro(xaxis='[Fe/H]',yaxis='[Ti/Fe]',\ norm='Asplund_et_al_2009', galaxy='sculptor', show_err=True, show_mean_err=True) plt.xlim(-4.5,0.75) plt.ylim(-1.4,1.4) Explanation: Plot Error Bars It is possible to plot error bars with the show_err parameter, and print the mean errors with the show_mean_err parameter. End of explanation # Everything should be on a horizontal line s.plot_spectro(xaxis='[Mg/H]', yaxis='[Ti/Ti]') plt.xlim(-1,1) plt.ylim(-1,1) # Everything should be on a vertical line s.plot_spectro(xaxis='[Mg/Mg]', yaxis='[Ti/Mg]') plt.xlim(-1,1) plt.ylim(-1,1) # Everything should be at zero s.plot_spectro(xaxis='[Mg/Mg]', yaxis='[Ti/Ti]') plt.xlim(-1,1) plt.ylim(-1,1) Explanation: Appendix A - Abundance Ratios Let's consider that a data set provides stellar abundances in the form of [X/Y], where Y is the reference element (often H or Fe) and X represents any element. It is possible to change the reference element by using simple substractions and additions. Substraction Let's say we want [Ca/Mg] from [Ca/Fe] and [Mg/Fe]. $$[\mathrm{Ca}/\mathrm{Mg}]=\log(n_\mathrm{Ca}/n_\mathrm{Mg})-\log(n_\mathrm{Ca}/n_\mathrm{Mg})_\odot$$ $$=\log\left(\frac{n_\mathrm{Ca}/n_\mathrm{Fe}}{n_\mathrm{Mg}/n_\mathrm{Fe}}\right)-\log\left(\frac{n_\mathrm{Ca}/n_\mathrm{Fe}}{n_\mathrm{Mg}/n_\mathrm{Fe}}\right)_\odot$$ $$=\log(n_\mathrm{Ca}/n_\mathrm{Fe})-\log(n_\mathrm{Mg}/n_\mathrm{Fe})-\log(n_\mathrm{Ca}/n_\mathrm{Fe})\odot+\log(n\mathrm{Mg}/n_\mathrm{Fe})_\odot$$ $$=[\mathrm{Ca}/\mathrm{Fe}]-[\mathrm{Mg}/\mathrm{Fe}]$$ Addition Let's say we want [Mg/H] from [Fe/H] and [Mg/Fe]. $$[\mathrm{Mg}/\mathrm{H}]=\log(n_\mathrm{Mg}/n_\mathrm{H})-\log(n_\mathrm{Mg}/n_\mathrm{H})_\odot$$ $$=\log\left(\frac{n_\mathrm{Mg}/n_\mathrm{Fe}}{n_\mathrm{H}/n_\mathrm{Fe}}\right)-\log\left(\frac{n_\mathrm{Mg}/n_\mathrm{Fe}}{n_\mathrm{H}/n_\mathrm{Fe}}\right)_\odot$$ $$=\log(n_\mathrm{Mg}/n_\mathrm{Fe})-\log(n_\mathrm{H}/n_\mathrm{Fe})-\log(n_\mathrm{Mg}/n_\mathrm{Fe})\odot+\log(n\mathrm{H}/n_\mathrm{Fe})_\odot$$ $$=\log(n_\mathrm{Mg}/n_\mathrm{Fe})+\log(n_\mathrm{Fe}/n_\mathrm{H})-\log(n_\mathrm{Mg}/n_\mathrm{Fe})\odot-\log(n\mathrm{Fe}/n_\mathrm{H})_\odot$$ $$=[\mathrm{Mg}/\mathrm{Fe}]+[\mathrm{Fe}/\mathrm{H}]$$ Test End of explanation
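The Appendix A algebra is easy to apply outside of Stellab as well; below is a minimal sketch in plain Python (not a Stellab API) of the two rules just derived, with illustrative input values.

# [A/B] = [A/Ref] - [B/Ref] and [A/H] = [A/Ref] + [Ref/H], as derived above.
def ratio_sub(a_ref, b_ref):
    # e.g. [Ca/Mg] = [Ca/Fe] - [Mg/Fe]
    return a_ref - b_ref

def ratio_add(a_ref, ref_h):
    # e.g. [Mg/H] = [Mg/Fe] + [Fe/H]
    return a_ref + ref_h

print(ratio_sub(0.31, 0.42))   # [Ca/Fe]=0.31, [Mg/Fe]=0.42 -> [Ca/Mg]=-0.11
print(ratio_add(0.42, -1.50))  # [Mg/Fe]=0.42, [Fe/H]=-1.50 -> [Mg/H]=-1.08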
14,897
Given the following text description, write Python code to implement the functionality described below step by step Description: Python Step1: This plot shows the simulated data as black points with error bars and the true function is shown as a gray line. Now let's build the celerite model that we'll use to fit the data. We can see that there's some roughly periodic signal in the data as well as a longer term trend. To capture these two features, we will model this as a mixture of two stochastically driven simple harmonic oscillators with the power spectrum Step2: Then we wrap this kernel in a GP object that can be used for computing the likelihood function. Step3: There is a modeling language built into celerite that will come in handy. Other tutorials will go into more detail but here are some of the features that the modeling language exposes Step4: You already saw that it is possible to freeze and thaw parameters above but here's what you would do if you wanted to freeze another parameter Step5: Now we'll use the L-BFGS-B non-linear optimization routine from scipy.optimize to find the maximum likelihood parameters for this model. Step6: With a small dataset like this, this optimization should have only taken a fraction of a second to converge. The maximum likelihood parameters are the following Step7: Finally, let's see what the model predicts for the underlying function. A GP model can predict the (Gaussian) conditional (on the observed data) distribution for new observations. Let's do that on a fine grid Step8: Let's plot this prediction and compare it to the true underlying function. Step9: In this figure, the 1-sigma prediction is shown as an orange band and the mean prediction is indicated by a solid orange line. Comparing this to the true underlying function (shown as a gray line), we see that the prediction is consistent with the truth at all times and the the uncertainty in the region of missing data increases as expected. As the last figure, let's look at the maximum likelihood power spectrum of the model. The following figure shows the model power spectrum as a solid line and the dashed lines show the contributions from the two components.
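Since the model is just a sum of two SHO terms, the power spectrum described above can also be written out directly with numpy; the sketch below is not required by celerite, and the parameter values are placeholders, but it is a convenient cross-check against kernel.get_psd later in the tutorial.

# Direct numpy version of the two-term PSD: (S1, w1) with Q fixed to 1/sqrt(2),
# and (S2, w2, Q) for the periodic term. All values below are placeholders.
import numpy as np

def sho_psd(omega, S0, w0, Q):
    return np.sqrt(2.0 / np.pi) * S0 * w0**4 / ((omega**2 - w0**2)**2 + (w0 * omega / Q)**2)

def model_psd(omega, S1, w1, S2, w2, Q):
    return sho_psd(omega, S1, w1, 1.0 / np.sqrt(2.0)) + sho_psd(omega, S2, w2, Q)

omega = np.linspace(0.1, 20.0, 1000)
psd = model_psd(omega, S1=1.0, w1=3.0, S2=1.0, w2=3.0, Q=1.0)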
Python Code: import numpy as np import matplotlib.pyplot as plt np.random.seed(42) t = np.sort(np.append( np.random.uniform(0, 3.8, 57), np.random.uniform(5.5, 10, 68), )) # The input coordinates must be sorted yerr = np.random.uniform(0.08, 0.22, len(t)) y = 0.2 * (t-5) + np.sin(3*t + 0.1*(t-5)**2) + yerr * np.random.randn(len(t)) true_t = np.linspace(0, 10, 5000) true_y = 0.2 * (true_t-5) + np.sin(3*true_t + 0.1*(true_t-5)**2) plt.plot(true_t, true_y, "k", lw=1.5, alpha=0.3) plt.errorbar(t, y, yerr=yerr, fmt=".k", capsize=0) plt.xlabel("x") plt.ylabel("y") plt.xlim(0, 10) plt.ylim(-2.5, 2.5); Explanation: Python: First Steps For this tutorial, we're going to fit a Gaussian Process (GP) model to a simulated dataset with quasiperiodic oscillations. We're also going to leave a gap in the simulated data and we'll use the GP model to predict what we would have observed for those "missing" datapoints. To start, here's some code to simulate the dataset: End of explanation import celerite from celerite import terms # A non-periodic component Q = 1.0 / np.sqrt(2.0) w0 = 3.0 S0 = np.var(y) / (w0 * Q) bounds = dict(log_S0=(-15, 15), log_Q=(-15, 15), log_omega0=(-15, 15)) kernel = terms.SHOTerm(log_S0=np.log(S0), log_Q=np.log(Q), log_omega0=np.log(w0), bounds=bounds) kernel.freeze_parameter("log_Q") # We don't want to fit for "Q" in this term # A periodic component Q = 1.0 w0 = 3.0 S0 = np.var(y) / (w0 * Q) kernel += terms.SHOTerm(log_S0=np.log(S0), log_Q=np.log(Q), log_omega0=np.log(w0), bounds=bounds) Explanation: This plot shows the simulated data as black points with error bars and the true function is shown as a gray line. Now let's build the celerite model that we'll use to fit the data. We can see that there's some roughly periodic signal in the data as well as a longer term trend. To capture these two features, we will model this as a mixture of two stochastically driven simple harmonic oscillators with the power spectrum: $$ S(\omega) = \sqrt{\frac{2}{\pi}}\frac{S_1\,{\omega_1}^4}{(\omega^2 - {\omega_1}^2)^2 + 2\,{\omega_1}^2\,\omega^2} + \sqrt{\frac{2}{\pi}}\frac{S_2\,{\omega_2}^4}{(\omega^2 - {\omega_2}^2)^2 + {\omega_2}^2\,\omega^2/Q^2} $$ This model has 5 free parameters ($S_1$, $\omega_1$, $S_2$, $\omega_2$, and $Q$) and they must all be positive. In celerite, this is how you would build this model, choosing more or less arbitrary initial values for the parameters. End of explanation gp = celerite.GP(kernel, mean=np.mean(y)) gp.compute(t, yerr) # You always need to call compute once. print("Initial log likelihood: {0}".format(gp.log_likelihood(y))) Explanation: Then we wrap this kernel in a GP object that can be used for computing the likelihood function. End of explanation print("parameter_dict:\n{0}\n".format(gp.get_parameter_dict())) print("parameter_names:\n{0}\n".format(gp.get_parameter_names())) print("parameter_vector:\n{0}\n".format(gp.get_parameter_vector())) print("parameter_bounds:\n{0}\n".format(gp.get_parameter_bounds())) Explanation: There is a modeling language built into celerite that will come in handy. 
Other tutorials will go into more detail but here are some of the features that the modeling language exposes: End of explanation print(gp.get_parameter_names()) gp.freeze_parameter("kernel:terms[0]:log_omega0") print(gp.get_parameter_names()) gp.thaw_parameter("kernel:terms[0]:log_omega0") print(gp.get_parameter_names()) Explanation: You already saw that it is possible to freeze and thaw parameters above but here's what you would do if you wanted to freeze another parameter: End of explanation from scipy.optimize import minimize def neg_log_like(params, y, gp): gp.set_parameter_vector(params) return -gp.log_likelihood(y) initial_params = gp.get_parameter_vector() bounds = gp.get_parameter_bounds() r = minimize(neg_log_like, initial_params, method="L-BFGS-B", bounds=bounds, args=(y, gp)) gp.set_parameter_vector(r.x) print(r) Explanation: Now we'll use the L-BFGS-B non-linear optimization routine from scipy.optimize to find the maximum likelihood parameters for this model. End of explanation gp.get_parameter_dict() Explanation: With a small dataset like this, this optimization should have only taken a fraction of a second to converge. The maximum likelihood parameters are the following: End of explanation x = np.linspace(0, 10, 5000) pred_mean, pred_var = gp.predict(y, x, return_var=True) pred_std = np.sqrt(pred_var) Explanation: Finally, let's see what the model predicts for the underlying function. A GP model can predict the (Gaussian) conditional (on the observed data) distribution for new observations. Let's do that on a fine grid: End of explanation color = "#ff7f0e" plt.plot(true_t, true_y, "k", lw=1.5, alpha=0.3) plt.errorbar(t, y, yerr=yerr, fmt=".k", capsize=0) plt.plot(x, pred_mean, color=color) plt.fill_between(x, pred_mean+pred_std, pred_mean-pred_std, color=color, alpha=0.3, edgecolor="none") plt.xlabel("x") plt.ylabel("y") plt.xlim(0, 10) plt.ylim(-2.5, 2.5); Explanation: Let's plot this prediction and compare it to the true underlying function. End of explanation omega = np.exp(np.linspace(np.log(0.1), np.log(20), 5000)) psd = gp.kernel.get_psd(omega) plt.plot(omega, psd, color=color) for k in gp.kernel.terms: plt.plot(omega, k.get_psd(omega), "--", color=color) plt.yscale("log") plt.xscale("log") plt.xlim(omega[0], omega[-1]) plt.xlabel("$\omega$") plt.ylabel("$S(\omega)$"); Explanation: In this figure, the 1-sigma prediction is shown as an orange band and the mean prediction is indicated by a solid orange line. Comparing this to the true underlying function (shown as a gray line), we see that the prediction is consistent with the truth at all times and the the uncertainty in the region of missing data increases as expected. As the last figure, let's look at the maximum likelihood power spectrum of the model. The following figure shows the model power spectrum as a solid line and the dashed lines show the contributions from the two components. End of explanation
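A small follow-up that is often useful (a sketch, not part of the original tutorial): the fitted values are stored as logarithms, so reporting them in physical units is just a matter of exponentiating the log_* entries of gp.get_parameter_dict().

# Print the maximum likelihood parameters in linear units; names such as
# "kernel:terms[0]:log_omega0" are whatever gp.get_parameter_dict() returns.
import numpy as np

for name, value in gp.get_parameter_dict().items():
    if name.split(":")[-1].startswith("log_"):
        print(name.replace("log_", ""), "=", np.exp(value))
    else:
        print(name, "=", value)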
14,898
Given the following text description, write Python code to implement the functionality described below step by step Description: OWSLib versus Birdy This notebook shows a side-by-side comparison of owslib.wps.WebProcessingService and birdy.WPSClient. Step1: Displaying available processes With owslib, wps.processes is the list of processes offered by the server. With birdy, the client is like a module with functions. So you just write cli. and press Tab to display a drop-down menu of processes. Step2: Documentation about a process With owslib, the process title and abstract can be obtained simply by looking at these attributes. For the process inputs, we need to iterate on the inputs and access their individual attributes. To facilitate this, owslib.wps provides the printInputOutput function. With birdy, just type help(cli.hello) and the docstring will show up in your console. With the IPython console or a Jupyter Notebook, cli.hello? would do as well. The docstring follows the NumPy convention. Step3: Launching a process and retrieving literal outputs With owslib, processes are launched using the execute method. Inputs are passed as an argument to execute and are defined by a list of key-value tuples. These keys are the input names, and the values are string representations. The execute method returns a WPSExecution object, which defines a number of methods and attributes, including isComplete and isSucceeded. The process outputs are stored in the processOutputs list, whose content is stored in the data attribute. Note that this data is a list of strings, so we may have to convert it to a float to use it. Step4: With birdy, inputs are just typical keyword arguments, and outputs are already converted into python objects. Since some processes may have multiple outputs, processes always return a namedtuple, even in the case where there is only a single output. Step5: Retrieving outputs by references For ComplexData objects, WPS servers often return a reference to the output (an http link) instead of the actual data. This is useful if that output is to serve as an input to another process, so as to avoid passing back and forth large files for nothing. With owslib, that means that the data attribute of the output is empty, and we instead access the reference attribute. The referenced file can be written to the local disk using the writeToDisk method. With birdy, the outputs are by default the references themselves, but it's also possible to download these references in the background and convert them into python objects. To trigger this automatic conversion, set convert_objects to True when instantiating the client: WPSClient(url, convert_objects=True). In the example below, the first output is a plain text file, and the second output is a json file. The text file is converted into a string, and the json file into a dictionary.
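As a quick summary of the two output styles just described, the sketch below reuses the wps and cli clients constructed in the code that follows; the process and input names match the Emu examples in this notebook, and the float conversion is only needed on the owslib side.

# owslib: outputs come back as strings and are converted by hand.
resp = wps.execute('binaryoperatorfornumbers', inputs=[('inputa', '1.0'), ('inputb', '2.0'), ('operator', 'add')])
value = float(resp.processOutputs[0].data[0])

# birdy: the same process called like a function; .get() returns a namedtuple
# of already-converted Python objects (accessed here by position).
value_birdy = cli.binaryoperatorfornumbers(1.0, 2.0, operator='add').get()[0]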
Python Code: from owslib.wps import WebProcessingService from birdy import WPSClient url = "https://bovec.dkrz.de/ows/proxy/emu?Service=WPS&Request=GetCapabilities&Version=1.0.0" wps = WebProcessingService(url) cli = WPSClient(url=url) Explanation: OWSLib versus Birdy This notebook shows a side-by-side comparison of owslib.wps.WebProcessingService and birdy.WPSClient. End of explanation wps.processes Explanation: Displaying available processes With owslib, wps.processes is the list of processes offered by the server. With birdy, the client is like a module with functions. So you just write cli. and press Tab to display a drop-down menu of processes. End of explanation from owslib.wps import printInputOutput p = wps.describeprocess('hello') print("Title: ", p.title) print("Abstract: ", p.abstract) for inpt in p.dataInputs: printInputOutput(inpt) help(cli.hello) Explanation: Documentation about a process With owslib, the process title and abstract can be obtained simply by looking at these attributes. For the process inputs, we need to iterate on the inputs and access their individual attributes. To facilitate this, owslib.wps provides the printInputOuput function. With birdy, just type help(cli.hello) and the docstring will show up in your console. With the IPython console or a Jupyter Notebook, cli.hello? would do as well. The docstring follows the NumPy convention. End of explanation resp = wps.execute('binaryoperatorfornumbers', inputs=[('inputa', '1.0'), ('inputb', '2.0'), ('operator', 'add')]) if resp.isSucceded: output, = resp.processOutputs print(output.data) Explanation: Launching a process and retrieving literal outputs With owslib, processes are launched using the execute method. Inputs are an an argument to execute and defined by a list of key-value tuples. These keys are the input names, and the values are string representations. The execute method returns a WPSExecution object, which defines a number of methods and attributes, including isComplete and isSucceeded. The process outputs are stored in the processOutputs list, whose content is stored in the data attribute. Note that this data is a list of strings, so we may have to convert it to a float to use it. End of explanation z = cli.binaryoperatorfornumbers(1, 2, operator='add').get()[0] z out = cli.inout().get() out.date Explanation: With birdy, inputs are just typical keyword arguments, and outputs are already converted into python objects. Since some processes may have multiple outputs, processes always return a namedtuple, even in the case where there is only a single output. End of explanation resp = wps.execute('multiple_outputs', inputs=[('count', '1')]) output, ref = resp.processOutputs print(output.reference) print(ref.reference) output.writeToDisk('/tmp/output.txt') output = cli.multiple_outputs(1).get()[0] print(output) # as reference output = cli.multiple_outputs(1).get(asobj=True)[0] print(output) Explanation: Retrieving outputs by references For ComplexData objects, WPS servers often return a reference to the output (an http link) instead of the actual data. This is useful if that output is to serve as an input to another process, so as to avoid passing back and forth large files for nothing. With owslib, that means that the data attribute of the output is empty, and we instead access the reference attribute. The referenced file can be written to the local disk using the writeToDisk method. 
With birdy, the outputs are by default the references themselves, but it's also possible to download these references in the background and convert them into python objects. To trigger this automatic conversion, set convert_objects to True when instantiating the client: WPSClient(url, convert_objects=True). In the example below, the first output is a plain text file, and the second output is a json file. The text file is converted into a string, and the json file into a dictionary. End of explanation
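For reference, this is roughly what the asobj conversion does behind the scenes for the text output; the sketch below assumes the reference URL is reachable over plain HTTP and reuses the cli client from above.

# Fetch the first referenced output by hand and decode it as text;
# birdy's asobj=True automates this (plus the JSON parsing for the second output).
import urllib.request

ref_url = cli.multiple_outputs(1).get()[0]      # the reference itself (an http link)
with urllib.request.urlopen(ref_url) as f:
    text = f.read().decode()
print(text[:200])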
14,899
Given the following text description, write Python code to implement the functionality described below step by step Description: On this page you'll find a series of exercises. We'll be using Python for all the code, but not really. You barely need to know any Python at all. In fact here is all you need to know (at least about Python). All You Need to Know Numbers Step1: Note that head([]) is an error since you can't find the first item in an empty list. Step2: Note that tail([]) is an error since the tail of a list is what's left over when you remove the head, and the empty list has no head. Step3: Note that sub1(0) is an error because you can't subtract 1 from 0. (Actually it is possible if you allow negative numbers, but in these exercises we will not allow such numbers.) All Strings Write a function, is_list_of_strings, that determines whether a list contains only strings. Below are some examples of how it should behave.
Python Code: from basic_functions import * is_empty([1,2]) is_empty([]) head([1,2]) head([1]) Explanation: On this page you'll find a series of exercises. We'll be using Python for all the code, but not really. You barely need to know any Python at all. In fact here is all you need to know (at least about Python). All You Need to Know Numbers: 0, 1, 2, 3, ... (i.e., no negative numbers or decimals) Strings: things like 'hello' and 'the cat on the mat' and the empty string '' Booleans: True, False Lists: [], but you can make lists (see cons below) Functions is_eq_str(x, y): x and y must both be strings; returns whether x equals y is_empty(xx) : xx must be a list; returns whether the xx is the empty list head(xx): xx must be a non-empty list; returns the first item of xx tail(xx): xx must be a non-empty list; returns a list with everything after the head cons(h, tl): returns a list whose first item is h and whose remaining items are the items of tl (i.e. it put backs a list taken apart by head and tail) add1(n): n must be a number; returns a number one bigger than n sub1(n): n must be a number greater than zero; returns a number one less than n is_zero(n): n must be a number; returns whether n is zero is_str(x): returns whether x is a string is_num(x): returns whether x is a number Getting Started The above functions, simple though they are, are not built into Python, so you must download a file that defins them. Download basic_functions.py. End of explanation tail([1,2]) tail([1]) Explanation: Note that head([]) is an error since you can't find the first item in an empty list. End of explanation cons(1, [2,3]) cons(1, []) is_num(99) is_num('hello') is_str(99) is_str('hello') is_str_eq('hello', 'hello') is_str_eq('hello', 'goodbye') add1(99) sub1(99) Explanation: Note that tail([]) is an error since the tail of a list is what's left over when you remove the head, and the empty list has no head. End of explanation from solutions import is_list_of_strings is_list_of_strings(['hello', 'goodbye']) is_list_of_strings([1, 'aa']) is_list_of_strings([]) Explanation: Note that sub1(0) is an error because you can't subtract 1 from 0. (Actually it is possible if you allow negative numbers, but in these exercises we will not allow such numbers.) All Strings Write a function, is_list_of_strings, that determines whether a list contains only strings. Below are some examples of how it should behave. End of explanation
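For completeness, here is one possible recursive implementation of is_list_of_strings built only from the primitives listed above; it is a sketch, and the course's own solutions module may define it differently.

from basic_functions import is_empty, head, tail, is_str

def is_list_of_strings(xx):
    if is_empty(xx):
        return True                      # the empty list contains only strings, vacuously
    if not is_str(head(xx)):
        return False                     # found a non-string item
    return is_list_of_strings(tail(xx))  # otherwise check the rest of the list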