Dataset columns: Unnamed: 0 (int64, values 0 to 16k); text_prompt (string, lengths 110 to 62.1k); code_prompt (string, lengths 37 to 152k).
11,200
Given the following text description, write Python code to implement the functionality described below step by step.

Description: Information Flows in Causal Networks
This notebook replicates some examples from Ay & Polani (2008), "Information Flows in Causal Networks", in Advances in Complex Systems, Volume 11, Issue 01.
Step1: Ay & Polani, Example 3
Step2: Ay & Polani, Example 5.1
Step3: Ay & Polani, Example 5.2
Python Code: from causalinfo import * from numpy import log2 from numpy.testing import assert_allclose # You only need this if you want to draw pretty pictures of the Networksa from nxpd import draw, nxpdParams nxpdParams['show'] = 'ipynb' w, x, y, z = make_variables("W X Y Z", 2) wdist = UniformDist(w) Explanation: Information Flows in Causal Networks This notebook replicates some examples from Ay & Polani (2008), "Information Flows in Causal Networks" in Advances in Complex Systems Volume 11, Issue 01 End of explanation eq1 = Equation('BR', [w], [x, y], equations.branch_same_) eq2 = Equation('XOR', [x, y], [z], equations.xor_) # Build the graph eg3 = CausalGraph([eq1, eq2]) draw(eg3.full_network) eg3 m_eg3 = MeasureCause(eg3, wdist) # See the table on p29a assert m_eg3.mutual_info(x, y) == 1 assert m_eg3.mutual_info(x, y, w) == 0 assert m_eg3.mutual_info(w, z, y) == 0 assert m_eg3.causal_flow(x, y) == 0 assert m_eg3.causal_flow(x, y, w) == 0 assert m_eg3.causal_flow(w, z, y) == 1 Explanation: Ay & Polani, Example 3 End of explanation def copy_first_(i1, i2, o1): o1[i1] = 1.0 eq2 = Equation('COPY_FIRST', [x, y], [z], copy_first_) eg51 = CausalGraph([eq1, eq2]) draw(eg51.full_network) m_eg51 = MeasureCause(eg51, wdist) # See paragraph at top of page 30 assert m_eg51.mutual_info(x, z, y) == 0 assert m_eg51.causal_flow(x, z, y) == 1 assert m_eg51.causal_flow(x, z) == 1 Explanation: Ay & Polani, Example 5.1 End of explanation def random_sometimes_(i1, i2, o1): if i1 != i2: o1[:] = .5 else: equations.xor_(i1, i2, o1) eq2 = Equation('RAND', [x, y], [z], random_sometimes_) eg52 = CausalGraph([eq1, eq2]) draw(eg52.full_network) m_eg52 = MeasureCause(eg52, wdist) # See pg 30 expected = 3.0 / 4.0 * log2(4.0 / 3.0) assert_allclose(m_eg52.causal_flow(x, z, y), expected) Explanation: Ay & Polani, Example 5.2 End of explanation
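For reference, the mutual_info checks above estimate ordinary (conditional) mutual information, which for discrete variables is

$$ I(X;Y \mid W) = \sum_{w,x,y} p(w,x,y)\,\log_2 \frac{p(x,y \mid w)}{p(x \mid w)\,p(y \mid w)} $$

whereas causal_flow follows Ay & Polani's information-flow measure, which replaces the observational conditional distributions with post-intervention ones. That is why the two can disagree, as in Example 3 above: X and Y are both copies of W, so they share 1 bit of mutual information, but intervening on X has no effect on Y and the causal flow from X to Y is 0.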
11,201
Given the following text description, write Python code to implement the functionality described below step by step.

Description: Introduction
This IPython notebook illustrates how to select the best learning-based matcher. First, we need to import the py_entitymatching package and other libraries as follows.
Step1: Then, split the labeled data into a development set and an evaluation set and convert them into feature vectors.
Step2: Compute the accuracy of X (Decision Tree) on J. It involves the following steps.
Python Code: # Import py_entitymatching package import py_entitymatching as em import os import pandas as pd # Set the seed value seed = 0 # Get the datasets directory datasets_dir = em.get_install_path() + os.sep + 'datasets' path_A = datasets_dir + os.sep + 'dblp_demo.csv' path_B = datasets_dir + os.sep + 'acm_demo.csv' path_labeled_data = datasets_dir + os.sep + 'labeled_data_demo.csv' A = em.read_csv_metadata(path_A, key='id') B = em.read_csv_metadata(path_B, key='id') # Load the pre-labeled data S = em.read_csv_metadata(path_labeled_data, key='_id', ltable=A, rtable=B, fk_ltable='ltable_id', fk_rtable='rtable_id') Explanation: Introduction This IPython notebook illustrates how to select the best learning based matcher. First, we need to import py_entitymatching package and other libraries as follows: End of explanation # Split S into I an J IJ = em.split_train_test(S, train_proportion=0.5, random_state=0) I = IJ['train'] J = IJ['test'] # Generate a set of features F = em.get_features_for_matching(A, B, validate_inferred_attr_types=False) # Convert I into feature vectors using updated F H = em.extract_feature_vecs(I, feature_table=F, attrs_after='label', show_progress=False) Explanation: Then, split the labeled data into development set and evaluation set and convert them into feature vectors End of explanation # Instantiate the matcher to evaluate. dt = em.DTMatcher(name='DecisionTree', random_state=0) # Train using feature vectors from I dt.fit(table=H, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'], target_attr='label') # Convert J into a set of feature vectors using F L = em.extract_feature_vecs(J, feature_table=F, attrs_after='label', show_progress=False) # Predict on L predictions = dt.predict(table=L, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'], append=True, target_attr='predicted', inplace=False, return_probs=True, probs_attr='proba') predictions[['_id', 'ltable_id', 'rtable_id', 'predicted', 'proba']].head() # Evaluate the predictions eval_result = em.eval_matches(predictions, 'label', 'predicted') em.print_eval_summary(eval_result) Explanation: Compute accuracy of X (Decision Tree) on J It involves the following steps: Train X using H Convert J into a set of feature vectors (L) Predict on L using X Evaluate the predictions End of explanation
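As a small cross-check of em.print_eval_summary (not part of the original notebook), the same numbers can be reproduced with scikit-learn's metrics, assuming the predictions DataFrame from the cell above is in scope and the label/predicted columns are binary:

from sklearn.metrics import precision_score, recall_score, f1_score

# Ground-truth labels and the decision tree's predictions from the evaluated table
y_true = predictions['label']
y_pred = predictions['predicted']
print('precision: {:.3f}'.format(precision_score(y_true, y_pred)))
print('recall:    {:.3f}'.format(recall_score(y_true, y_pred)))
print('f1:        {:.3f}'.format(f1_score(y_true, y_pred)))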
11,202
Given the following text description, write Python code to implement the functionality described below step by step.

Description: Fitting Models Exercise 1
Imports
Step1: Fitting a quadratic curve. For this problem we are going to work with the following model.
Step2: First, generate a dataset from this model with these parameters and the following characteristics.
Step3: Now fit the model to the dataset to recover estimates for the model's parameters.
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt Explanation: Fitting Models Exercise 1 Imports End of explanation a_true = 0.5 b_true = 2.0 c_true = -4.0 Explanation: Fitting a quadratic curve For this problem we are going to work with the following model: $$ y_{model}(x) = a x^2 + b x + c $$ The true values of the model parameters are as follows: End of explanation xdata = np.linspace(-5,5,30) dy = 2.0 ydata = a_true*xdata**2+b_true*xdata+c_true+np.random.normal(0.0,dy,size=30) plt.figure(figsize=(7,5)) plt.errorbar(xdata, ydata, dy, fmt='.b', ecolor='gray') plt.tick_params(right=False,top=False,direction='out') plt.xlabel('x') plt.ylim(-12,25) plt.ylabel('y'); assert True # leave this cell for grading the raw data generation and plot Explanation: First, generate a dataset using this model using these parameters and the following characteristics: For your $x$ data use 30 uniformly spaced points between $[-5,5]$. Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal). After you generate the data, make a plot of the raw data (use points). End of explanation def model(x,a,b,c): return a*x**2+b*x+c theta_best, theta_cov = opt.curve_fit(model, xdata, ydata, sigma=dy) print('a = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0]))) print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1]))) print('c = {0:.3f} +/- {1:.3f}'.format(theta_best[2], np.sqrt(theta_cov[2,2]))) xfit = np.linspace(-5,5,30) yfit = theta_best[0]*xfit**2+theta_best[1]*xfit+theta_best[2] plt.figure(figsize=(7,5)) plt.plot(xfit, yfit,color='r') plt.errorbar(xdata, ydata, dy, fmt='.b', ecolor='gray') plt.tick_params(right=False,top=False,direction='out') plt.ylim(-12,25) plt.title('Best Fit Curve') plt.xlabel('x') plt.ylabel('y'); assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors Explanation: Now fit the model to the dataset to recover estimates for the model's parameters: Print out the estimates and uncertainties of each parameter. Plot the raw data and best fit of the model. End of explanation
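As a quick sanity check on the curve_fit estimates (not part of the exercise), an ordinary least-squares polynomial fit should recover similar coefficients; this sketch assumes the xdata and ydata arrays defined above are still in scope:

import numpy as np

# Degree-2 least-squares fit; since dy is the same for every point, weighting
# would not change the estimates. Coefficients return highest power first (a, b, c).
coeffs = np.polyfit(xdata, ydata, 2)
print('polyfit: a = {0:.3f}, b = {1:.3f}, c = {2:.3f}'.format(*coeffs))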
11,203
Given the following text description, write Python code to implement the functionality described below step by step.

Description: Electricity Users (Usuarios de Energía Eléctrica)
Parameters obtained from this source:
ID | Description
---|---
P0609 | Electricity users
Step1: 2. Data download
Step2: 3. Standardization of the parameter data
Step3: Export the dataset. Before exporting the dataset I am going to reduce its size, because it has 82,236 rows split by tariff; I will keep only the totals across all tariffs.
Python Code: descripciones = { 'P0609': 'Usuarios Electricos' } # Librerias utilizadas import pandas as pd import sys import urllib import os import csv import zipfile # Configuracion del sistema print('Python {} on {}'.format(sys.version, sys.platform)) print('Pandas version: {}'.format(pd.__version__)) import platform; print('Running on {} {}'.format(platform.system(), platform.release())) Explanation: Usuarios de Energía Eléctrica Parámetros que se obtienen desde esta fuente ID |Descripción ---|:---------- P0609|Usuarios eléctricos End of explanation url = r'http://datos.cfe.gob.mx/Datos/Usuariosyconsumodeelectricidadpormunicipio.csv' archivo_local = r'D:\PCCS\00_RawData\01_CSV\CFE\UsuariosElec.csv' if os.path.isfile(archivo_local): print('Ya existe el archivo: {}'.format(archivo_local)) else: print('Descargando {} ... ... ... ... ... '.format(archivo_local)) urllib.request.urlretrieve(url, archivo_local) print('se descargó {}'.format(archivo_local)) Explanation: 2. Descarga de datos End of explanation dtypes = { # Los valores numericos del CSV estan guardados como " 000,000 " y requieren limpieza 'Cve Mun':'str', '2010':'str', '2011':'str', '2012':'str', '2013':'str', '2014':'str', '2015':'str', '2016':'str', 'ene-17':'str', 'feb-17':'str', 'mar-17':'str', 'abr-17':'str', 'may-17':'str', 'jun-17':'str', 'jul-17':'str', 'ago-17':'str', 'sep-17':'str', 'oct-17':'str', 'nov-17':'str', 'dic-17':'str'} # Lectura del Dataset dataset = pd.read_csv(archivo_local, skiprows = 2, nrows = 82236, na_values = ' - ', dtype=dtypes) # Lee el dataset dataset['CVE_EDO'] = dataset['Cve Inegi'].apply(lambda x: '{0:0>2}'.format(x)) # CVE_EDO de 2 digitos dataset['CVE_MUN'] = dataset['CVE_EDO'].map(str) + dataset['Cve Mun'] dataset.head() # Quitar espacios en blanco y comas de columnas que deberian ser numericas columnums = ['2010', '2011', '2012', '2013', '2014', '2015', '2016', 'ene-17', 'feb-17', 'mar-17', 'abr-17', 'may-17', 'jun-17', 'jul-17', 'ago-17', 'sep-17', 'oct-17', 'nov-17', 'dic-17'] for columna in columnums: dataset[columna] = dataset[columna].str.replace(' ','') dataset[columna] = dataset[columna].str.replace(',','') dataset.head() # Convertir columnas a numericas columnasanios = ['2010', '2011', '2012', '2013', '2014', '2015', '2016', 'ene-17', 'feb-17', 'mar-17', 'abr-17', 'may-17', 'jun-17', 'jul-17', 'ago-17', 'sep-17', 'oct-17', 'nov-17', 'dic-17'] for columna in columnasanios: dataset[columna] = pd.to_numeric(dataset[columna], errors='coerce', downcast = 'integer') dataset.head() # Quitar columnas que ya no se utilizarán dropcols = ['Cve Edo', 'Cve Inegi', 'Cve Mun', 'Entidad Federativa', 'Municipio', 'Unnamed: 25', 'CVE_EDO'] dataset = dataset.drop(dropcols, axis = 1) # Asignar CVE_EDO como indice dataset = dataset.set_index('CVE_MUN') dataset.head() # Sumar las columnas de 2017 columnas2017 = ['ene-17', 'feb-17', 'mar-17', 'abr-17', 'may-17', 'jun-17', 'jul-17', 'ago-17', 'sep-17', 'oct-17', 'nov-17', 'dic-17'] dataset['2017'] = dataset[columnas2017].sum(axis = 1) # Eliminar columnas de 2017 dataset = dataset.drop(columnas2017, axis = 1) dataset.head() Explanation: 3. Estandarizacion de datos de Parámetros End of explanation len(dataset) dataset.head(40) dataset_total = dataset[dataset['Tarifa'] == 'TOTAL'] dataset_total.head() len(dataset_total) Explanation: Exportar Dataset Antes de exportar el dataset voy a reducir su tamaño porque tiene 82,236 renglones divididos por tarifa. ÚNicamente voy a dejar los totales de todas las tarifas. End of explanation
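The section above is titled "Exportar Dataset" (export the dataset) but stops after filtering down to the TOTAL rows; a minimal export of the reduced table might look like the sketch below, where the output path is purely hypothetical and dataset_total comes from the cell above:

# Hypothetical output location -- adjust to the project's folder layout
archivo_salida = r'D:\PCCS\01_Dmine\CFE\P0609.csv'
dataset_total.to_csv(archivo_salida)
print('Wrote {} rows to {}'.format(len(dataset_total), archivo_salida))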
11,204
Given the following text description, write Python code to implement the functionality described below step by step.

Description: Observations
Observations can be thought of as the probability of being in any given state at each time step. For this demonstration, observations are randomly initialized. In a real case, these observations would be the output of a neural network.
Step1: Posterior vs Truth. The posterior is the probability assigned by the HMM of being in each state at each time step. This is a plot of the posterior output compared to the truth.
Step2: Gradients. This plot shows the gradients which are flowing back to the input of the HMM.
Python Code: observations = np.random.random((1, 90, 2)) * 4 - 2 plot(observations[0,:,:]) grid() observations_variable = tf.Variable(observations) posterior_graph, _, _ = hmm_tf.forward_backward(tf.sigmoid(observations_variable)) # build error function sum_error_squared = tf.reduce_sum(tf.square(truth - posterior_graph)) # calculate d_observation/d_error gradients_graph = tf.gradients(sum_error_squared, observations_variable) session = tf.Session() session.run(tf.initialize_all_variables()) steps = 0 Explanation: Observations Observations can be thought of as the probability of being in any given state at each time step. For this demonstration, observations are randomly initialized. In a real case, these observations would be the output of a neural network End of explanation posterior = session.run(posterior_graph) print 'sum error squared: %.03f' % sum((truth[:,1] - posterior[:,1])**2) plot(posterior[0,:,1], label='posterior') plot(truth[0,:,1], label='truth') grid() legend() Explanation: Posterior vs Truth The posterior is the probability assigned by the hmm of being in each state at each time step. This is a plot if the posterior output compared to the truth. End of explanation gradients = session.run(gradients_graph)[0] def plot_gradients(gradients): gradients = gradients[0] # whiten gradients gradients = gradients / np.std(gradients) plot(-gradients[:,1], label='gradients') plot(truth[0,:,1], label='truth') # plot(sigmoid(observations[0,:,1]), label='observations') plot(observations[0,:,1], label='observations') ylim((-5,5)) grid() legend() plot_gradients(gradients) for i in range(1): # take 1 gradient descent step steps += 1 observations = session.run( observations_variable.assign_sub(gradients * 0.5 * (random.random() - 0.25)) ) plot(observations[0,:,1], label='observations') sigmoid = np.vectorize(lambda(x): 1.0/(1.0+np.exp(-x))) # plot(sigmoid(observations[0,:,1]), label='sigmoid(observations)') legend() grid() hmm_np = hmm.HMMNumpy(np.array([[0.9, 0.1], [0.1, 0.9]]), p0=np.array([0.5, 0.5])) out, _ = hmm_np.viterbi_decode(sigmoid(observations[0,:,:])) print 'gradient steps taken:', steps print 'viterbi error:', sum((truth[0,:,1] - out)**2) plot(truth[0,:,1], label='truth') plot(out, label='out') grid() legend() Explanation: Gradients This plot shows the gradients which are flowing back to the input of the hmm. End of explanation
11,205
Given the following text description, write Python code to implement the functionality described below step by step.

Description: Lecture 9: Functions II
CSCI 1360: Foundations for Informatics and Analytics
Overview and Objectives
In the previous lecture, we went over the basics of functions. Here, we'll expand a little bit on some of the finer points of function arguments that can be both useful and a huge source of confusion. By the end of the lecture, you should be able to:
- Differentiate positional arguments from keyword arguments
- Construct functions that take any number of arguments, in positional or key-value format
- Explain "pass by value" and contrast it with "pass by reference", and why certain Python types can be modified in functions while others can't
Step1: Keyword Arguments. In the previous lecture we learned about positional arguments. As the name implies, position is key:
Python Code: def pet_names(name1, name2): print("Pet 1: {}".format(name1)) print("Pet 2: {}".format(name2)) pet1 = "King" pet2 = "Reginald" pet_names(pet1, pet2) pet_names(pet2, pet1) Explanation: Lecture 9: Functions II CSCI 1360: Foundations for Informatics and Analytics Overview and Objectives In the previous lecture, we went over the basics of functions. Here, we'll expand a little bit on some of the finer points of function arguments that can both be useful but also be huge sources of confusion. By the end of the lecture, you should be able to: Differentiate positional arguments from keyword arguments Construct functions that take any number of arguments, in positional or key-value format Explain "pass by value" and contrast it with "pass by reference", and why certain Python types can be modified in functions while others can't Part 1: Keyword Arguments In the previous lecture we learned about positional arguments. As the name implies, position is key: End of explanation def pet_names(name1, name2): print("Pet 1: {}".format(name1)) print("Pet 2: {}".format(name2)) Explanation: In this example, we switched the ordering of the arguments between the two function calls; consequently, the ordering of the arguments inside the function were also flipped. Hence, positional: position matters. In contrast, Python also has keyword arguments, where order no longer matters as long as you specify the keyword. We can use the same function as before: End of explanation pet1 = "Rocco" pet2 = "Lucy" pet_names(name1 = pet1, name2 = pet2) pet_names(name2 = pet2, name1 = pet1) Explanation: Only this time, we'll use the names of the arguments themselves (aka, keywords): End of explanation # Here's our function with a default argument. def pos_def(x, y = 10): return x + y # Using keywords in the same order they're defined is totally fine. z = pos_def(x = 10, y = 20) print(z) # Mixing their ordering is ok, as long as I'm specifying the keywords. z = pos_def(y = 20, x = 10) print(z) # Only specifying the default argument is a no-no. z = pos_def(y = 20) print(z) Explanation: As you can see, we used the names of the arguments from the function header itself, setting them equal to the variable we wanted to use for that argument. Consequently, order doesn't matter--Python can see that, in both function calls, we're setting name1 = pet1 and name2 = pet2. Keyword arguments are extremely useful when it comes to default arguments. If you take a look at any NumPy API--even the documentation for numpy.array--there are LOTS of default arguments. Trying to remember their ordering is a pointless task. What's much easier is to simply remember the name of the argument--the keyword--and use that to override any default argument you want to change. Ordering of the keyword arguments doesn't matter; that's why we can specify some of the default parameters by keyword, leaving others at their defaults, and Python doesn't complain. Here's an important distinction, though: Default (optional) arguments are always keyword arguments, but... Positional (required) arguments MUST come before default arguments! In essence, you can't mix-and-match the ordering of positional and default arguments using keywords. 
Here's an example of this behavior in action: End of explanation def make_pizza(*toppings): print("Making a pizza with the following toppings:") for topping in toppings: print(" - {}".format(topping)) make_pizza("pepperoni") make_pizza("pepperoni", "banana peppers", "green peppers", "mushrooms") Explanation: Part 2: Passing an Arbitrary Number of Arguments There are instances where you'll want to pass in an arbitrary number of arguments to a function, a number which isn't known until the function is called and could change from call to call! On one hand, you could consider just passing in a single list, thereby obviating the need. That's more or less what actually happens here, but the syntax is a tiny bit different. Here's an example: a function which lists out pizza toppings. Note the format of the input argument(s): End of explanation def build_profile(**user_info): profile = {} for key, value in user_info.items(): profile[key] = value return profile profile = build_profile(firstname = "Shannon", lastname = "Quinn", university = "UGA") print(profile) profile = build_profile(name = "Shannon Quinn", department = "Computer Science") print(profile) Explanation: Inside the function, it's basically treated as a list: in fact, it is a list. So why not just make the input argument a single variable which is a list? Convenience. In some sense, it's more intuitive to the programmer calling the function to just list out a bunch of things, rather than putting them all in a list structure first. But that argument could go either way depending on the person and the circumstance, most likely. With variable-length arguments, you may very well ask: this is cool, but it doesn't seem like I can make keyword arguments work in this setting? And to that I would say, absolutely correct! So we have a slight variation to accommodate keyword arguments in the realm of including arbitrary numbers of arguments: End of explanation def build_better_profile(firstname, lastname, *nicknames, **user_info): profile = {'First Name': firstname, 'Last Name': lastname} for key, value in user_info.items(): profile[key] = value profile['Nicknames'] = nicknames return profile profile = build_better_profile("Shannon", "Quinn", "Professor", "Doctor", "Master of Science", department = "Computer Science", university = "UGA") for key, value in profile.items(): print("{}: {}".format(key, value)) Explanation: Instead of one * in the function header, there are two. And yes, instead of a list when we get to the inside of the function, now we basically have a dictionary! Arbitrary arguments (either "lists" or "dictionaries") can be mixed with positional arguments, as well as with each other. End of explanation def magic_function(x): x = 20 print("Inside function: {}".format(x)) x = 10 print("Before function: {}".format(x)) magic_function(x) # What is "x" now? Explanation: We have our positional or keyword arguments (they're used as positional arguments here) in the form of firstname and lastname *nicknames is an arbitrary list of arguments, so anything beyond the positional / keyword (or default!) arguments will be considered part of this aggregate **user_info is comprised of any key-value pairs that are not among the default arguments; in this case, those are department and university Part 2: Pass-by-value vs Pass-by-reference This is arguably one of the trickiest parts of programming, so please ask questions if you're having trouble. Let's start with an example to illustrate what's this is. 
Take the following code: End of explanation print("After function: {}".format(x)) Explanation: What will the print() statement at the end print? 10? 20? Something else? End of explanation def magic_function2(x): x[0] = 20 print("Inside function: {}".format(x)) x = [10, 10] print("Before function: {}".format(x)) magic_function2(x) # What is "x" now? Explanation: It prints 10. Before explaining, let's take another example. End of explanation print("After function: {}".format(x)) Explanation: What will the print() statement at the end print? [10, 10]? [20, 10]? Something else? End of explanation some_list = [1, 2, 3] # some_list -> reference to my list # [1, 2, 3] -> the actual, physical list Explanation: It prints [20, 10]. To recap, what we've seen is that We tried to modify an integer function argument. It worked inside the function, but once the function completed, the old value returned. We modified a list element of a function argument. It worked inside the function, and the changes were still there after the function ended. Explaining these seemingly-divergent behaviors is the tricky part, but to give you the punchline: 1 (attempting to modify an integer argument) is an example of pass by value, in which the value of the argument is copied when the function is called, and then discarded when the function ends, hence the variable retaining its original value. 2 (attempting to modify a list argument) is an example of pass by reference, in which a reference to the list--not the list itself!--is passed to the function. This reference still points to the original list, so any changes made inside the function are also made to the original list, and therefore persist when the function is finished. StackOverflow has a great gif to represent this process in pictures: In pass by value (on the right), the cup (argument) is outright copied, so any changes made to it inside the function vanish when the function is done. In pass by reference (on the left), only a reference to the cup is given to the function. This reference, however, "refers" to the original cup, so changes made to the reference are propagated back to the original. What are "references"? So what are these mysterious references? Imagine you're throwing a party for some friends who have never visited your house before. They ask you for directions (or, given we live in the age of Google Maps, they ask for your home address). Rather than try to hand them your entire house, or put your physical house on Google Maps (I mean this quite literally), what do you do? You write down your home address on a piece of paper (or, realistically, send a text message). This is not your house, but it is a reference to your house. It's small, compact, and easy to give out--as opposed to your physical, literal home--while intrinsically providing a path to the real thing. So it is with references. They hearken back to ye olde computre ayge when fast memory was a precious commodity measured in kilobytes, which is not enough memory to store even the Facebook home page. It was, however, enough to store the address. These addresses, or references, would point to specific locations in the larger, much slower main memory hard disks where all the larger data objects would be saved. Scanning through larger, extremely slow hard disks looking for the object itself would be akin to driving through every neighborhood in the city of Atlanta looking for a specific house. Possible, sure, but not very efficient. 
Much faster to have the address in-hand and drive directly there whenever you need to. That doesn't explain the differing behavior of lists versus integers as function arguments Very astute. This has to do with a subtle but important difference in how Python passes variables of different types to functions. For the "primitive" variable types--int, float, string--they're passed by value. These are [typically] small enough to be passed directly to functions. However, in doing so, they are copied upon entering the function, and these copies vanish when the function ends. For the "object" variable types--lists, sets, dictionaries, NumPy arrays, generators, and pretty much anything else that builds on "primitive" types--they're passed by reference. This means you can modify the values inside these objects while you're still in the function, and those modifications will persist even after the function ends. Think of references as "arrows"--they refer to your actual objects, like lists or NumPy arrays. The name with which you refer to your object is the reference. End of explanation def set_to_none(some_list): some_list = None # sets the reference "some_list" to point at nothing print("In function: {}".format(some_list)) a_list = [1, 2, 3] print("Before function: {}".format(a_list)) set_to_none(a_list) # What will "a_list" be? print(a_list) Explanation: Whenever you operate on some_list, you have to traverse the "arrow" to the object itself, which is separate. Again, think of the house analogy: whenever you want to clean your house, you have to follow your reference to it first. This YouTube video isn't exactly the same thing, since C++ handles this much more explicitly than Python does. But if you substitute "references" for "pointers", and ignore the little code snippets, it's more or less describing precisely this concept. There's a slight wrinkle in this pass-by-value, pass-by-reference story... Using what you've learned so far, what do you think the output of this function will be? End of explanation def modify_int(x): x = 9238493 # Works as long as we're in this function...once we leave, it goes away. x = 10 modify_int(x) print(x) Explanation: Here's the thing: everything in Python is pass-by-value. But references to non-"primitive" objects still exist. Let's parse this out. We already know any "basic" data type in Python is passed by value, or copied. So any modifications we make inside a function go away outside the function (unless we return it, of course). This has not changed. End of explanation def modify_list(x): x[0] = 9238493 a_list = [1, 2, 3] modify_list(a_list) print(a_list) Explanation: When it comes to more "complicated" data types--strings, lists, dictionaries, sets, tuples, generators--we have to deal with two parts: the reference, and the object itself. When we pass one of the objects into a function, the reference is passed-by-value...meaning the reference is copied! But since it points to the same original object as the original reference, any modifications made to the object persist, even though the copied reference goes away after the function ends. End of explanation x = [1, 2, 3] y = x x.append(4) print(y) Explanation: Think of it this way: x[0] modifies the list itself, of which there is only 1. x = None is modifying the reference to the list, of which we have only a COPY of in the function. This copy then goes away when the function ends, and we're left with the original reference, which still points to the original list! Clear as mud? Excellent! 
Here is one more example, which illustrates why it's better to think of Python variables as two parts: one part bucket of data (the actual object), and one part pointer to a bucket (the name). End of explanation x = 5 # reassign x print(x) print(y) # same as before! Explanation: Notice how we called append on the variable x, and yet when we print y, we see the 4 there as well! This is because x and y are both references that point to the same object. As such, if we reassign x to point to something else, y will remain unchanged. End of explanation
11,206
Given the following text description, write Python code to implement the functionality described below step by step Description: Display Exercise 1 Imports Put any needed imports needed to display rich output the following cell Step1: Basic rich display Find a Physics related image on the internet and display it in this notebook using the Image object. Load it using the url argument to Image (don't upload the image to this server). Make sure the set the embed flag so the image is embedded in the notebook data. Set the width and height to 600px. Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
Python Code: # YOUR CODE HERE from IPython.display import display, Image assert True # leave this to grade the import statements Explanation: Display Exercise 1 Imports Put any needed imports needed to display rich output the following cell: End of explanation # YOUR CODE HERE Image(url='http://www.redorbit.com/media/uploads/2005/03/606e55c01ca0cbcda707ef3b5d3fed0d1.jpg', embed=True, width=600, height=600) assert True # leave this to grade the image display Explanation: Basic rich display Find a Physics related image on the internet and display it in this notebook using the Image object. Load it using the url argument to Image (don't upload the image to this server). Make sure the set the embed flag so the image is embedded in the notebook data. Set the width and height to 600px. End of explanation %%html <table> <tr> <th>Name</th> <th>Symbol</th> <th>Antiparticle</th> <th>Charge $(e)$</th> <th>Mass $(MeV/c^2)$</th> </tr> <tr> <td>up</td> <td>u</td> <td>$\bar{u}$</td> <td>+$\frac{2}{3}$</td> <td>1.5 - 3.3</td> </tr> <tr> <td>down</td> <td>d</td> <td>$\bar{d}$</td> <td>-$\frac{1}{3}$</td> <td>3.5 - 6.0</td> </tr> <tr> <td>charm</td> <td>c</td> <td>$\bar{c}$</td> <td>+$\frac{2}{3}$</td> <td>1,160 - 1,340</td> </tr> <tr> <td>strange</td> <td>s</td> <td>$\bar{s}$</td> <td>-$\frac{1}{3}$</td> <td>70 - 130</td> </tr> <tr> <td>top</td> <td>t</td> <td>$\bar{t}$</td> <td>+$\frac{2}{3}$</td> <td>169,100 - 173,300</td> </tr> <tr> <td>bottom</td> <td>b</td> <td>$\bar{b}$</td> <td>-$\frac{1}{3}$</td> <td>4,130 - 4,370</td> </tr> </table> assert True # leave this here to grade the quark table Explanation: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate. End of explanation
11,207
Given the following text description, write Python code to implement the functionality described below step by step Description: Load data Let's load our data to analyze. For this example, I'm going to use some stock market data to be able to show some clear trend changes. This data can be downloaded from FRED (https Step1: Prepare for Prophet Step2: let's take a quick look at our data Step3: Running Prophet As before, let's instantiate prophet and fit our data (including our future dataframe). Take a look at http Step4: Now, let's take a look at our changepoints. Prophet creates changespoint for us by default and stores them in .changepoints. You can see below what the possible changepoints are (they are just shown as dates). By default, Prophet adds 25 changepoints into the initial 80% of the dataset. The number of changepoints can be set by using the n_changepoints parameter when initiallizing prophet (e.g., model=Prophet(n_changepoints=30) Step5: We can view the possible changepoints by plotting the forecast and changepoints using the following code Step6: Taking a look at the possible changepoints (drawn in orange/red) in the above chart, we can see they fit pretty well with some of the highs and lows. Prophet will also let us take a look at the magnitudes of these possible changepoints. You can look at this visualization with the following code (edited from the fbprophet example here -> https Step7: We can see from the above chart, that there are quite a few of these changes points (found between 10 and 20 on the chart) that are very minimal in magnitude and are most likely to be ignored by prophet during forecasting be used in the forecasting. Now, if we know where trends changed in the past, we can add these known changepoints into our dataframe for use by Prophet. For this data, I'm going to use the FRED website to find some of the low points and high points to use as trend changepoints. Note
Python Code: market_df = pd.read_csv('../examples/SP500.csv', index_col='DATE', parse_dates=True) market_df.head() Explanation: Load data Let's load our data to analyze. For this example, I'm going to use some stock market data to be able to show some clear trend changes. This data can be downloaded from FRED (https://fred.stlouisfed.org/series/SP500) or just grab it from the examples directory. End of explanation df = market_df.reset_index().rename(columns={'DATE':'ds', 'SP500':'y'}) df['y'] = np.log(df['y']) df.head() Explanation: Prepare for Prophet End of explanation df.set_index('ds').y.plot() Explanation: let's take a quick look at our data End of explanation model = Prophet() model.fit(df); future = model.make_future_dataframe(periods=366) forecast = model.predict(future) Explanation: Running Prophet As before, let's instantiate prophet and fit our data (including our future dataframe). Take a look at http://pythondata.com/forecasting-time-series-data-prophet-jupyter-notebook/ for more information on the basics of Prophet. End of explanation print model.changepoints Explanation: Now, let's take a look at our changepoints. Prophet creates changespoint for us by default and stores them in .changepoints. You can see below what the possible changepoints are (they are just shown as dates). By default, Prophet adds 25 changepoints into the initial 80% of the dataset. The number of changepoints can be set by using the n_changepoints parameter when initiallizing prophet (e.g., model=Prophet(n_changepoints=30) End of explanation figure = model.plot(forecast) for changepoint in model.changepoints: plt.axvline(changepoint,ls='--', lw=1) Explanation: We can view the possible changepoints by plotting the forecast and changepoints using the following code: End of explanation deltas = model.params['delta'].mean(0) fig = plt.figure(facecolor='w') ax = fig.add_subplot(111) ax.bar(range(len(deltas)), deltas) ax.grid(True, which='major', c='gray', ls='-', lw=1, alpha=0.2) ax.set_ylabel('Rate change') ax.set_xlabel('Potential changepoint') fig.tight_layout() Explanation: Taking a look at the possible changepoints (drawn in orange/red) in the above chart, we can see they fit pretty well with some of the highs and lows. Prophet will also let us take a look at the magnitudes of these possible changepoints. You can look at this visualization with the following code (edited from the fbprophet example here -> https://github.com/facebookincubator/prophet/blob/master/notebooks/trend_changepoints.ipynb) End of explanation m = Prophet(changepoints=['2009-03-09', '2010-07-02', '2011-09-26', '2012-03-20', '2010-04-06']) forecast = m.fit(df).predict(future) m.plot(forecast); Explanation: We can see from the above chart, that there are quite a few of these changes points (found between 10 and 20 on the chart) that are very minimal in magnitude and are most likely to be ignored by prophet during forecasting be used in the forecasting. Now, if we know where trends changed in the past, we can add these known changepoints into our dataframe for use by Prophet. For this data, I'm going to use the FRED website to find some of the low points and high points to use as trend changepoints. Note: In actuality, just because there is a low or high doesn't mean its a real changepoint or trend change, but let's assume it does. End of explanation
11,208
Given the following text description, write Python code to implement the functionality described below step by step Description: Building a pipeline Step1: Finding the best model
Python Code: %pylab inline import sklearn from sklearn.linear_model import LogisticRegression from sklearn.datasets import load_digits from sklearn.pipeline import Pipeline from sklearn.decomposition import PCA digits = load_digits() X_digits = digits.data y_digits = digits.target logistic = LogisticRegression() pca = PCA() pipe = Pipeline(steps=[('pca', pca), ('logistic', logistic)]) pipe.fit(X_digits, y_digits) pipe.predict(X_digits[:1]) Explanation: Building a pipeline End of explanation from sklearn.grid_search import GridSearchCV n_components = [20, 40, 64] # number of compomentens in PCA Cs = np.logspace(-4, 0, 3, 4) # Inverse of regularization strength penalty = ["l1", "l2"] # Norm used by the Logistic regression penalization class_weight = [None, "balanced"] # Weights associatied with clases estimator = GridSearchCV(pipe, {"pca__n_components": n_components, "logistic__C": Cs, "logistic__class_weight": class_weight, "logistic__penalty": penalty }, n_jobs=8, cv=5) estimator.fit(X_digits, y_digits) estimator.grid_scores_ print(estimator.best_score_) print(estimator.best_params_) Explanation: Finding the best model End of explanation
11,209
Given the following text description, write Python code to implement the functionality described below step by step Description: Plasma comparison Step1: The example tardis_example can be downloaded here tardis_example.yml Step2: Accessing the plasma states In this example, we are accessing Si and also the unionized number density (0) Step3: Updating the plasma state It is possible to update the plasma state with different temperatures or dilution factors (as well as different densities.). We are updating the radiative temperatures and plotting the evolution of the ionization state
Python Code: from tardis.simulation import Simulation from tardis.io.config_reader import Configuration from IPython.display import FileLinks Explanation: Plasma comparison End of explanation config = Configuration.from_yaml('tardis_example.yml') sim = Simulation.from_config(config) Explanation: The example tardis_example can be downloaded here tardis_example.yml End of explanation # All Si ionization states sim.plasma.ion_number_density.loc[14] # Normalizing by si number density sim.plasma.ion_number_density.loc[14] / sim.plasma.number_density.loc[14] # Accessing the first ionization state sim.plasma.ion_number_density.loc[14, 1] sim.plasma.update(density=[1e-13]) sim.plasma.ion_number_density Explanation: Accessing the plasma states In this example, we are accessing Si and also the unionized number density (0) End of explanation si_ionization_state = None for cur_t_rad in range(1000, 20000, 100): sim.plasma.update(t_rad=[cur_t_rad]) if si_ionization_state is None: si_ionization_state = sim.plasma.ion_number_density.loc[14].copy() si_ionization_state.columns = [cur_t_rad] else: si_ionization_state[cur_t_rad] = sim.plasma.ion_number_density.loc[14].copy() %pylab inline fig = figure(0, figsize=(10, 10)) ax = fig.add_subplot(111) si_ionization_state.T.iloc[:, :3].plot(ax=ax) xlabel('radiative Temperature [K]') ylabel('Number density [1/cm$^3$]') Explanation: Updating the plasma state It is possible to update the plasma state with different temperatures or dilution factors (as well as different densities.). We are updating the radiative temperatures and plotting the evolution of the ionization state End of explanation
11,210
Given the following text description, write Python code to implement the functionality described below step by step Description: Anna KaRNNa In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network. Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever. Step3: And we can see the characters encoded as integers. Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from. Step6: Making training and validation batches Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text. Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches. The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set. Step7: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps. Step8: Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this Step9: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch. Step10: Building the model Below is a function where I build the graph for the network. Step11: Hyperparameters Here I'm defining the hyperparameters for the network. batch_size - Number of sequences running through the network in one pass. num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here. lstm_size - The number of units in the hidden layers. 
num_layers - Number of hidden LSTM layers to use learning_rate - Learning rate for training keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this. Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from. Tips and Tricks Monitoring Validation Loss vs. Training Loss If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular Step12: Training Time for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint. Here I'm saving checkpoints with the format i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt Step13: Saved checkpoints Read up on saving and loading checkpoints here Step14: Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. Step15: Here, pass in the path to a checkpoint and sample from the network.
Python Code: import time from collections import namedtuple import numpy as np import tensorflow as tf Explanation: Anna KaRNNa In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> End of explanation with open('anna.txt', 'r') as f: text=f.read() vocab = set(text) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32) Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network. End of explanation text[:100] Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever. End of explanation chars[:100] Explanation: And we can see the characters encoded as integers. End of explanation np.max(chars)+1 Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from. End of explanation def split_data(chars, batch_size, num_steps, split_frac=0.9): Split character data into training and validation sets, inputs and targets for each set. Arguments --------- chars: character array batch_size: Size of examples in each of batch num_steps: Number of sequence steps to keep in the input and pass to the network split_frac: Fraction of batches to keep in the training set Returns train_x, train_y, val_x, val_y slice_size = batch_size * num_steps n_batches = int(len(chars) / slice_size) # Drop the last few characters to make only full batches x = chars[: n_batches*slice_size] y = chars[1: n_batches*slice_size + 1] # Split the data into batch_size slices, then stack them into a 2D matrix x = np.stack(np.split(x, batch_size)) y = np.stack(np.split(y, batch_size)) # Now x and y are arrays with dimensions batch_size x n_batches*num_steps # Split into training and validation sets, keep the first split_frac batches for training split_idx = int(n_batches*split_frac) train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps] val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:] return train_x, train_y, val_x, val_y Explanation: Making training and validation batches Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text. Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches. The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. 
We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set. End of explanation train_x, train_y, val_x, val_y = split_data(chars, 10, 50) train_x.shape Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps. End of explanation train_x[:,:50] Explanation: Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this: End of explanation def get_batch(arrs, num_steps): batch_size, slice_size = arrs[0].shape n_batches = int(slice_size/num_steps) for b in range(n_batches): yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs] Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch. End of explanation def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): # When we're using this network for sampling later, we'll be passing in # one character at a time, so providing an option for that if sampling == True: batch_size, num_steps = 1, 1 tf.reset_default_graph() # Declare placeholders we'll feed into the graph inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs') targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets') # Keep probability placeholder for drop out layers keep_prob = tf.placeholder(tf.float32, name='keep_prob') # One-hot encoding the input and target characters x_one_hot = tf.one_hot(inputs, num_classes) y_one_hot = tf.one_hot(targets, num_classes) ### Build the RNN layers # Use a basic LSTM cell lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # Add dropout to the cell drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) initial_state = cell.zero_state(batch_size, tf.float32) ### Run the data through the RNN layers # This makes a list where each element is on step in the sequence rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)] # Run each sequence step through the RNN and collect the outputs outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state) final_state = state # Reshape output so it's a bunch of rows, one output row for each step for each batch seq_output = tf.concat(outputs, axis=1) output = tf.reshape(seq_output, [-1, lstm_size]) # Now connect the RNN outputs to a softmax layer with tf.variable_scope('softmax'): softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(num_classes)) # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch # of rows of logit outputs, one for each step and batch logits = tf.matmul(output, softmax_w) + softmax_b # Use 
softmax to get the probabilities for predicted characters preds = tf.nn.softmax(logits, name='predictions') # Reshape the targets to match the logits y_reshaped = tf.reshape(y_one_hot, [-1, num_classes]) loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped) cost = tf.reduce_mean(loss) # Optimizer for training, using gradient clipping to control exploding gradients tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) # Export the nodes # NOTE: I'm using a namedtuple here because I think they are cool export_nodes = ['inputs', 'targets', 'initial_state', 'final_state', 'keep_prob', 'cost', 'preds', 'optimizer'] Graph = namedtuple('Graph', export_nodes) local_dict = locals() graph = Graph(*[local_dict[each] for each in export_nodes]) return graph Explanation: Building the model Below is a function where I build the graph for the network. End of explanation batch_size = 100 num_steps = 100 lstm_size = 512 num_layers = 2 learning_rate = 0.001 keep_prob = 0.5 Explanation: Hyperparameters Here I'm defining the hyperparameters for the network. batch_size - Number of sequences running through the network in one pass. num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here. lstm_size - The number of units in the hidden layers. num_layers - Number of hidden LSTM layers to use learning_rate - Learning rate for training keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this. Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from. Tips and Tricks Monitoring Validation Loss vs. Training Loss If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular: If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on. If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer) Approximate number of parameters The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are: The number of parameters in your model. This is printed when you start training. The size of your dataset. 1MB file is approximately 1 million characters. These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples: I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). 
My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger. I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss. Best models strategy The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end. It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance. By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative. End of explanation epochs = 20 # Save every N iterations save_every_n = 200 train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps) model = build_rnn(len(vocab), batch_size=batch_size, num_steps=num_steps, learning_rate=learning_rate, lstm_size=lstm_size, num_layers=num_layers) saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/______.ckpt') n_batches = int(train_x.shape[1]/num_steps) iterations = n_batches * epochs for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1): iteration = e*n_batches + b start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: keep_prob, model.initial_state: new_state} batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], feed_dict=feed) loss += batch_loss end = time.time() print('Epoch {}/{} '.format(e+1, epochs), 'Iteration {}/{}'.format(iteration, iterations), 'Training loss: {:.4f}'.format(loss/b), '{:.4f} sec/batch'.format((end-start))) if (iteration%save_every_n == 0) or (iteration == iterations): # Check performance, notice dropout has been set to 1 val_loss = [] new_state = sess.run(model.initial_state) for x, y in get_batch([val_x, val_y], num_steps): feed = {model.inputs: x, model.targets: y, model.keep_prob: 1., model.initial_state: new_state} batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed) val_loss.append(batch_loss) print('Validation loss:', np.mean(val_loss), 'Saving checkpoint!') saver.save(sess, "checkpoints/i{}_l{}_v{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss))) Explanation: Training Time for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint. 
Here I'm saving checkpoints with the format i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt End of explanation tf.train.get_checkpoint_state('checkpoints') Explanation: Saved checkpoints Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables End of explanation def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): samples = [c for c in prime] model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) return ''.join(samples) Explanation: Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. End of explanation checkpoint = "checkpoints/____.ckpt" samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far") print(samp) Explanation: Here, pass in the path to a checkpoint and sample from the network. End of explanation
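As an aside, the character encoding and the one-step shift between inputs and targets described in this row can be illustrated with a tiny, self-contained numpy sketch; the toy string and variable names below are made up for illustration and are not taken from the notebook itself:
import numpy as np
# Toy corpus, purely for illustration
toy_text = "hello world"
# Map each distinct character to an integer and back again
toy_vocab = sorted(set(toy_text))
c2i = {c: i for i, c in enumerate(toy_vocab)}
i2c = {i: c for c, i in c2i.items()}
encoded = np.array([c2i[c] for c in toy_text], dtype=np.int32)
# Targets are the inputs shifted one character ahead, as described above
inputs, targets = encoded[:-1], encoded[1:]
print(''.join(i2c[i] for i in inputs))   # "hello worl"
print(''.join(i2c[i] for i in targets))  # "ello world"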
11,211
Given the following text description, write Python code to implement the functionality described below step by step Description: LRG Position Matching We need a sample of LRGs that span the range of redshifts and i-band magnitudes found in the OM10 catalog. The sample can be kept small by selecting in color as well as magnitude and redshift. CFHTLS is our first choice, given the depth of the OM10 catalog - but a good LRG pre-selection is not available. In this notebook we explore the CFHTLS option, and then try SDSS instead. Step1: 1. The Existing CFHTLS LRGs First, let's plot the old CFHTLS catalog, and overlay the OM10 lens galaxies. We can paint the latter with SDSS colors, to see how they compare. Step2: 1.1 Magnitude vs Redshift Step3: This is an attempt at a more restrictive initial selection, to efficiently focus on the brightest galaxies at each redshift bin. The next thing is to try and focus the sample even more, on the red sequence. 1.2 Color-Color-Magnitude-Redshift Triangle Plot Step4: Look at the g-r vs. redshift panel Step5: We'll need an OM10 catalog to pass to the download script, to tell it what to get. Step6: OK, now let's read in the new catalog in and show it next to the painted OM10 lenses Step7: 3. Could we just use SDSS LRGs instead? The file data/SDSS_LRGs.txt is an alternative to data/CFHTLS_LRGs.txt for use in position matching. It's not as deep, but maybe that's OK for lens searches in PS1, for example? Let's see what it looks like compared to the painted OM10 lenses. Step8: So, we have a problem Step9: Working! Let's verify colors
Python Code: %matplotlib inline import om10,os import numpy as np import matplotlib.pyplot as plt import triangle Explanation: LRG Position Matching We need a sample of LRGs that span the range of redshifts and i-band magnitudes found in the OM10 catalog. The sample can be kept small by selecting in color as well as magnitude and redshift. CFHTLS is our first choice, given the depth of the OM10 catalog - but a good LRG pre-selection is not available. In this notebook we explore the CFHTLS option, and then try SDSS instead. End of explanation db = om10.DB(catalog=os.path.expandvars("$OM10_DIR/data/qso_mock.fits")) db.paint(lrg_input_cat='$OM10_DIR/data/LRGo.txt',qso_input_cat='$OM10_DIR/data/QSOo.txt') data = np.loadtxt(os.path.expandvars("$OM10_DIR/data/CFHTLS_LRGs.txt")) Explanation: 1. The Existing CFHTLS LRGs First, let's plot the old CFHTLS catalog, and overlay the OM10 lens galaxies. We can paint the latter with SDSS colors, to see how they compare. End of explanation fig = plt.figure() fig.set_size_inches(12,9) plt.scatter(db.lenses['ZLENS'],db.lenses['APMAG_I'],color='Orange',marker='.',label='OM10',alpha=1) plt.scatter(data[:,2],data[:,6],color='Blue',marker='.',label='CFHTLS',alpha=0.2) plt.title('CFHT vs. OM10 Catalogs') plt.xlabel('lens redshift z') plt.ylabel('lens galaxy i band magnitude (AB)') plt.legend(loc=4) plt.grid(color='grey', linestyle='--', linewidth=0.5) Explanation: 1.1 Magnitude vs Redshift End of explanation # Make CFHTLS colors: gr = data[:,4] - data[:,5] ri = data[:,5] - data[:,6] iz = data[:,6] - data[:,7] i = data[:,6] z = data[:,2] # Clean out extreme CFHTLS values: index = np.where((np.abs(gr)<3.0)*(np.abs(ri)<3.0)*(np.abs(iz)<3.0)) plot_of_colors = np.array([z[index], gr[index], ri[index], iz[index], i[index]]).transpose() # Now arrange the painted OM10 lenses, again cleaning out extreme values: OMgr = db.sample['MAGG_LENS']-db.sample['MAGR_LENS'] OMri = db.sample['MAGR_LENS']-db.sample['MAGI_LENS'] OMiz = db.sample['MAGI_LENS']-db.sample['MAGZ_LENS'] OMi = db.sample['MAGI_LENS'] OMz = db.sample['ZLENS'] index = np.where((np.abs(OMgr)<3.0)*(np.abs(OMri)<3.0)*(np.abs(OMiz)<3.0)) overlay = np.array([OMz[index], \ OMgr[index], \ OMri[index], \ OMiz[index], \ OMi[index]]).transpose() # Plot with overlay: fig = triangle.corner(plot_of_colors,labels=['redshift z','g-r','r-i','i-z','i magnitude'],color='Blue') _ = triangle.corner(overlay,color='orange',fig=fig) Explanation: This is an attempt at a more restrictive initial selection, to efficiently focus on the brightest galaxies at each redshift bin. The next thing is to try and focus the sample even more, on the red sequence. 
1.2 Color-Color-Magnitude-Redshift Triangle Plot End of explanation def download_CFHTLS_LRG_catalog(L,N=100000): i = L['MAGI_LENS'] z = L['ZLENS'] r = L['MAGR_LENS'] g = L['MAGG_LENS'] # c = L['MAGG_LENS'] - L['MAGR_LENS'] # Set up magnitude bins: imin,imax = np.min(i),np.max(i) Nbins = 20 Nperbin = int(N/(Nbins*1.0)) ibins = np.linspace(imin,imax,Nbins+1) # List of filenames to be downloaded: filenames = [] # Loop over bins, downloading sub-catalogs: for k in np.arange(Nbins): imin,imax = ibins[k],ibins[k+1] # Sensible filename: filename = "CFHTLS_LRGs.v2_{:.1f}-i-{:.1f}.txt".format(imin,imax) output = os.path.expandvars("$OM10_DIR/data/"+filename) # Get mag and color limits - sigma clip to avoid outliers: index = np.where((i > imin)*(i < imax)) nsigma = 2.0 # Select this range of redshifts: zmean,zstd = np.mean(z[index]), np.std(z[index]) zmin,zmax = zmean-nsigma*zstd, zmean+nsigma*zstd # Select this range of r-band magnitudes: rmean,rstd = np.mean(r[index]), np.std(r[index]) rmin,rmax = rmean-nsigma*rstd, rmean+nsigma*rstd # Select this range of g-band magnitudes: gmean,gstd = np.mean(g[index]), np.std(g[index]) gmin,gmax = gmean-nsigma*gstd, gmean+nsigma*gstd # Translate (g-r) color into g band mag: # gmin = cmin + rmin # gmax = cmax + rmax print "Querying for LRGs with g, r, i, z in the ranges",[round(gmin,1),round(gmax,1)], \ [round(rmin,1), round(rmax,1)], [round(imin,1),round(imax,1)], [round(zmin,2),round(zmax,2)] # Assemble the URL: url = "http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/community/CFHTLens/cgi/queryt.pl?" url = url+"REQUEST=doQuery&LANG=ADQL&method=sync&format=ascii&query=SELECT%0D%0Atop+"+str(Nperbin) url = url+"%0D%0AALPHA_J2000%2C+DELTA_J2000%2C+FLUX_RADIUS%2C+CLASS_STAR%2C+fitclass%2C+Z_B%2C+Z_B_MIN%2C+Z_B_MAX%2C+T_B%2C+star_flag%2C+MAG_u%2C+MAG_g%2C+MAG_r%2C+MAG_i%2C+MAG_y%2C+MAG_z%0D%0A" url = url+"FROM%0D%0Acfht.clens%0D%0AWHERE%0D%0Afitclass%3E%3D0%0D%0AAND+fitclass%3C%3D0%0D%0AAND+star_flag%3C%3D0.1%0D%0A" url = url+"AND+MAG_i%3E%3D"+str(imin)+"%0D%0AAND+MAG_i%3C%3D"+str(imax)+"%0D%0A" url = url+"AND+MAG_r%3E%3D"+str(rmin)+"%0D%0AAND+MAG_r%3C%3D"+str(rmax)+"%0D%0A" url = url+"AND+MAG_g%3E%3D"+str(gmin)+"%0D%0AAND+MAG_g%3C%3D"+str(gmax)+"%0D%0A" url = url+"AND+Z_B%3E%3D"+str(zmin)+"%0D%0AAND+Z_B%3C%3D"+str(zmax)+"%0D%0A" # The data don't always download, so we to need try any failed searches again... # Such failures are purportedly due to an error on the server side. success = None while success is None: # Download the data with wget: !wget -q -O "$output" "$url" # Comment out the first line: !sed s/'ALPHA'/'# ALPHA'/g "$output" > junk !mv junk "$output" # Check file for download errors (this happens a lot): if 'Error' in open(output).read(): print "Error downloading data, removing file "+output+" and trying again..." !rm $output success = None else: # print " Successfully downloaded "+str(Nperbin)+" LRGs to file "+output !wc -l $output break # Add this file to the list: filenames.append(output) # We should now have all our files downloaded! Concatenate them into one: input = ' '.join(filenames) output = os.path.expandvars("$OM10_DIR/data/CFHTLS_LRGs.v2.txt") !echo "# RA Dec redshift u g r i z" > $output !cat $input | grep -v '#' | awk '{print $1,$2,$6,$11,$12,$13,$14,$16}' >> $output !wc -l $output return output Explanation: Look at the g-r vs. redshift panel: as we go to higher redshift, these galaxies are getting redder as required - that's good. 
However, the CFHTLS LRGs seem to be systematically bluer in g-r color, and we run out of objects at faint magnitudes - and at high redshifts in a given magnitude bin. We need a good way of selecting old, massive galaxies at redshifts above 1: selecting magnitude-limited samples will just get us a lot of blue, star-forming galaxies. When position-matching, we really need to select objects that could plausibly act as lenses, and then paste lensed images on top of them. In the absence of an actual LRG catalog, we are stuck downloading CFHTLS objects and then making color cuts. So, maybe the best thing to do is to first paint colors onto the CFHTLS lenses, using the SDSS LRG colors (and some extrapolation to higher z), and then find an object with the right colors from the CFHTLS galaxy catalog. Knowing the colors to look for will also help improve the initial download of CFHTLS galaxies - maybe we can restrict ourselves to red galaxies sooner? 2. Downloading New CFHTLS Galaxies Let's try to download a new, fainter CFHTLS_LRGs.txt catalog. Let's do this in pieces, to avoid being overloaded with useless blue galaxies. First we put all the downloading code in a def, and then run it multiple times with hand-crafted redshift ranges. We'll download CFHTLS galaxies in narrow i-band magnitude bins, and put a constraint on (g-r) at the same time. We can use the target OM10 catalog to automatically set the redshift and color selection constraints. End of explanation db = om10.DB(catalog=os.path.expandvars("$OM10_DIR/data/qso_mock.fits")) db.paint(lrg_input_cat='$OM10_DIR/data/LRGo.txt',qso_input_cat='$OM10_DIR/data/QSOo.txt') LRGcatalog = download_CFHTLS_LRG_catalog(db.sample) Explanation: We'll need an OM10 catalog to pass to the download script, to tell it what to get. 
End of explanation data = np.loadtxt(os.path.expandvars("$OM10_DIR/data/CFHTLS_LRGs.v2.txt")) gr = data[:,4] - data[:,5] ri = data[:,5] - data[:,6] iz = data[:,6] - data[:,7] i = data[:,6] z = data[:,2] # Clean out extreme colors in sample: index = np.where((np.abs(gr)<3.0)*(np.abs(ri)<3.0)*(np.abs(iz)<3.0)) plot_of_colors = np.array([z[index], gr[index], ri[index], iz[index], i[index]]).transpose() # Now arrange the painted OM10 lenses, again cleaning out extreme values: OMgr = db.sample['MAGG_LENS']-db.sample['MAGR_LENS'] OMri = db.sample['MAGR_LENS']-db.sample['MAGI_LENS'] OMiz = db.sample['MAGI_LENS']-db.sample['MAGZ_LENS'] OMi = db.sample['MAGI_LENS'] OMz = db.sample['ZLENS'] index = np.where((np.abs(OMgr)<3.0)*(np.abs(OMri)<3.0)*(np.abs(OMiz)<3.0)) overlay = np.array([OMz[index], \ OMgr[index], \ OMri[index], \ OMiz[index], \ OMi[index]]).transpose() # Plot with overlay: fig = triangle.corner(plot_of_colors,labels=['redshift z','g-r','r-i','i-z','i magnitude'],color='Blue') _ = triangle.corner(overlay,color='orange',fig=fig) Explanation: OK, now let's read in the new catalog in and show it next to the painted OM10 lenses: End of explanation db = om10.DB(catalog=os.path.expandvars("$OM10_DIR/data/qso_mock.fits")) data = np.loadtxt(os.path.expandvars("$OM10_DIR/data/SDSS_LRGs.txt")) db.paint(lrg_input_cat='$OM10_DIR/data/LRGo.txt',qso_input_cat='$OM10_DIR/data/QSOo.txt') gr = data[:,4] - data[:,5] ri = data[:,5] - data[:,6] iz = data[:,6] - data[:,7] i = data[:,6] z = data[:,2] # Clean out extreme colors in SDSS sample: index = np.where((np.abs(gr)<3.0)*(np.abs(ri)<3.0)*(np.abs(iz)<3.0)) plot_of_colors = np.array([z[index], gr[index], ri[index], iz[index], i[index]]).transpose() # Make overlay from painted OM10 db: overlay = np.array([db.sample['ZLENS'], \ db.sample['MAGG_LENS']-db.sample['MAGR_LENS'], \ db.sample['MAGR_LENS']-db.sample['MAGI_LENS'], \ db.sample['MAGI_LENS']-db.sample['MAGZ_LENS'], \ db.sample['MAGI_LENS']]).transpose() # Plot with overlay: fig = triangle.corner(plot_of_colors,labels=['redshift z','g-r','r-i','i-z','i magnitude'],color='Blue') _ = triangle.corner(overlay,color='orange',fig=fig) Explanation: 3. Could we just use SDSS LRGs instead? The file data/SDSS_LRGs.txt is an alternative to data/CFHTLS_LRGs.txt for use in position matching. It's not as deep, but maybe that's OK for lens searches in PS1, for example? Let's see what it looks like compared to the painted OM10 lenses. End of explanation %matplotlib inline import om10,os import numpy as np import matplotlib.pyplot as plt import triangle db = om10.DB(catalog=os.path.expandvars("$OM10_DIR/data/qso_mock.fits")) db.paint() db.get_sky_positions() idx_list = db.assign_sky_positions() Explanation: So, we have a problem: the SDSS LRG selection only returns very bright (i < 19) objects! This was why we went for CFHTLS initially. What we really need is a CFHTLS LRG catalog. 
Testing color-based positioning code End of explanation idx_list = np.array(idx_list).flatten() #list/array hijinks import scipy from scipy.stats import itemfreq _ = plt.hist(idx_list) test = itemfreq(idx_list) plt.figure() _ = plt.hist(test[:,1].flatten(),bins=200) np.where([test == np.max(test[:,0])]) np.sort(test) test[2265,:] lens = db.sample ref_features = np.array([db.LRGs['redshift'], db.LRGs['g-r'], db.LRGs['r-i'], \ db.LRGs['i-z'], db.LRGs['mag_i']]).transpose() lens_features = np.array([lens['ZLENS'], lens['MAGG_LENS']-lens['MAGR_LENS'], \ lens['MAGR_LENS']-lens['MAGI_LENS'], lens['MAGI_LENS']-lens['MAGZ_LENS'], lens['APMAG_I']]).transpose() fig1 = triangle.corner(ref_features,labels=['z','g-r','r-i','i-z','mag_i'],color='Orange',label='CFHTLS Reference') _ = triangle.corner(lens_features,fig=fig1,color='Blue', label='OM10 lenses') plt.legend() Explanation: Working! Let's verify colors: End of explanation
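For what it's worth, the "match each lens to a catalog galaxy with similar colors" idea being tested above can be sketched as a plain nearest-neighbour search in color/magnitude space; this is only a conceptual sketch with fabricated arrays, not how om10's assign_sky_positions is actually implemented:
import numpy as np
from scipy.spatial import cKDTree
rng = np.random.default_rng(0)
# Fake feature tables: columns could be (g-r, r-i, i magnitude)
catalog_features = rng.normal(size=(1000, 3))  # stand-in for the reference LRG catalog
lens_features = rng.normal(size=(50, 3))       # stand-in for the painted OM10 lenses
# For each lens, find the catalog row closest in color/magnitude space
tree = cKDTree(catalog_features)
distance, matched_index = tree.query(lens_features, k=1)
print(matched_index[:5], distance[:5])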
11,212
Given the following text description, write Python code to implement the functionality described below step by step Description: Basic Python Packages for Science The Aeropython’s guide to the Python Galaxy! Siro Moreno Martín Alejandro Sáez Mollejo 0. Introduction Python in the Scientific environment Principal Python Packages for scientific purposes Anaconda & conda http Step1: Main objectives of this workshop Provide you with a first insight into the principal Python tools & libraries used in Science Step2: Array creation Step3: Basic slicing Step4: [start Step5: 2. Drawing Step6: Operations & linalg Step7: Air quality data Step8: Loading the data Step9: Dealing with missing values Step10: Plotting the data Maximum values from Step11: CO Máxima diaria de las medias móviles octohorarias Step12: O3 Máxima diaria de las medias móviles octohorarias Step13: The file contains data from 2004 to 2015 (included). Each row corresponds to a day of the year, so evey 365 lines contain data from a whole year* Note1 Step14: We can also get information about percentiles! Rearranging the data Step15: Let's visualize data! Using matplotlib styles http Step16: Let's see if 2015 was a normal year... Step17: But the power of Matplotlib does not end here! For example, lets represent a function over a 2D domain! For this we will use the contour function, which requires some special inputs... Step18: In oder to plot the 2D function, we will need a grid. For 1D domain, we just needed one 1D array containning the X position and another 1D array containing the value. Now, we will create a grid, a distribution of points covering a surface. For the 2D domain, we will need Step19: Note that with the meshgrid function we can only create rectangular grids Step20: We can try a little more resolution... Step21: The countourf function is simmilar, but it colours also between the lines. In both functions, we can manually adjust the number of lines/zones we want to differentiate on the plot. Step22: These functions can be enormously useful when you want to visualize something. And remember! Always visualize data! Let's try it with Real data! Step23: The time and frequency vectors contain the values at which the instrument was reading, and the intensity matrix, the postprocessed strength measured for each frequency at each time. We need again to create the 2D arrays of coordinates. Step24: Wow! What is that? Let's zoom into it! Step25: IPython Widgets The IPython Widgets are interactive tools to use in the notebook. They are fun and very useful to quickly understand how different parameters affect a certain function. This is based on a section of the PyConEs 14 talk by Kiko Correoso "Hacking the notebook" Step26: If you want a dropdown menu that passes non-string values to the Python function, you can pass a dictionary. The keys in the dictionary are used for the names in the dropdown menu UI and the values are the arguments that are passed to the underlying Python function. Let's have some fun! We talked before about frequencys and waves. Have you ever learn about AM and FM modulation? It's the process used to send radio communications! Step27: In order to interact with it, we will need to transform it into a function Step28: Other options... 5. Other packages Symbolic calculations with SymPy SymPy is a Python package for symbolic math. We will not cover it in depth, but let's take a picure of the basics! Step29: The basic unit of this package is the symbol. 
A symbol object has a name and a graphic representation, which can be different Step30: By default, SymPy takes symbols as complex numbers. That can lead to unexpected results with certain operations, like logarithms. We can explicitly signal that a symbol is real when we create it. We can also create several symbols at a time. Step31: Expressions can be created from symbols Step32: We can manipulate the expression in several ways. For example Step33: We can differentiate and integrate Step34: We also have equations and differential equations Step35: Data Analysis with pandas Pandas is a package that focuses on data structures and data analysis tools. We will not cover it because the next workshop, by Kiko Correoso, will develop it in depth. Machine Learning with scikit-learn Scikit-learn is a very complete Python package focusing on machine learning, and data mining and analysis. We will not cover it in depth because it will be the focus of many more talks at the PyData. A world of possibilities... Thanks for your attention! Any Questions?
Python Code: from IPython.display import HTML HTML('<iframe src="http://conda.pydata.org/docs/_downloads/conda-cheatsheet.pdf" width="700" height="400"></iframe>') Explanation: Basic Python Packages for Science The Aeropython’s guide to the Python Galaxy! Siro Moreno Martín Alejandro Sáez Mollejo 0. Introduction Python in the Scientific environment Principal Python Packages for scientific purposes Anaconda & conda http://conda.pydata.org/docs/intro.html Conda is a package manager application that quickly installs, runs, and updates packages and their dependencies. The conda command is the primary interface for managing installations of various packages. It can query and search the package index and current installation, create new environments, and install and update packages into existing conda environments. End of explanation # importing numpy # performance list sum # performance array sum %timeit np.sum(array) Explanation: Main objectives of this workshop Provide you with a first insight into the principal Python tools & libraries used in Science: conda. Jupyter Notebook. NumPy, matplotlib, SciPy Provide you with the basic skills to face basic tasks such as: Show other common libraries: Pandas, scikit-learn (some talks & workshops will focus on these packages) SymPy Numba ¿? 1. Jupyter Notebook The Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more. It has been widely recognised as a great way to distribute scientific papers, because of the capability to have an integrated format with text and executable code, highly reproducible. Top level investigators around the world are already using it, like the team behind the Gravitational Waves discovery (LIGO), whose analysis was translated to an interactive dowloadable Jupyter notebook. You can see it here: https://github.com/minrk/ligo-binder/blob/master/GW150914_tutorial.ipynb 2. Using arrays: NumPy ndarray object | index | 0 | 1 | 2 | 3 | ... | n-1 | n | | ---------- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | value | 2.1 | 3.6 | 7.8 | 1.5 | ... | 5.4 | 6.3 | N-dimensional data structure. Homogeneously typed. Efficient! A universal function (or ufunc for short) is a function that operates on ndarrays. It is a “vectorized function". End of explanation one_dim_array = two_dim_array = # size & shape # data type # usual arrays # changing the shape # linspace Explanation: Array creation End of explanation one_dim_array two_dim_array Explanation: Basic slicing End of explanation # Chess board chess_board = np.zeros([8, 8], dtype=int) # your code chess_board Explanation: [start:stop:step] End of explanation # drawing the chessboard Explanation: 2. 
Drawing: Matplotlib End of explanation # numpy functions x = y = # plotting # another function # transpose two_dim_array = # matrix multiplication # matrix vector # inv # eigenvectors & eigenvalues Explanation: Operations & linalg End of explanation from IPython.display import HTML HTML('<iframe src="http://www.mambiente.munimadrid.es/sica/scripts/index.php" \ width="700" height="400"></iframe>') Explanation: Air quality data End of explanation # Linux command !head ./data/barrio_del_pilar-20160322.csv # Windows # !gc log.txt | select -first 10 # head # loading the data # ./data/barrio_del_pilar-20160322.csv data2016 = Explanation: Loading the data End of explanation # mean # masking invalid data data2015 = Explanation: Dealing with missing values End of explanation from IPython.display import HTML HTML('<iframe src="http://ccaa.elpais.com/ccaa/2015/12/24/madrid/1450960217_181674.html" width="700" height="400"></iframe>') Explanation: Plotting the data Maximum values from: http://www.mambiente.munimadrid.es/opencms/export/sites/default/calaire/Anexos/valores_limite_1.pdf NO2 Media anual: 40 µg/m3 Media horaria: 200 µg/m3 End of explanation # http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.convolve.html def moving_average(x, N=8): return np.convolve(x, np.ones(N)/N, mode='same') Explanation: CO Máxima diaria de las medias móviles octohorarias: 10 mg/m³ End of explanation HTML('<iframe src="http://eportal.magrama.gob.es/websiar/Ficha.aspx?IdProvincia=28&IdEstacion=1" width="700" height="400"></iframe>') Explanation: O3 Máxima diaria de las medias móviles octohorarias: 120 µg/m3 Umbral de información. 180 µg/m3 Media horaria. Umbral de alerta. 240 µg/m3 4. Scientific functions: SciPy ``` scipy.linalg: ATLAS LAPACK and BLAS libraries scipy.stats: distributions, statistical functions... scipy.integrate: integration of functions and ODEs scipy.optimization: local and global optimization, fitting, root finding... scipy.interpolate: interpolation, splines... scipy.fftpack: Fourier trasnforms scipy.signal, scipy.special, scipy.io ``` Temperature data Now, we will use some temperature data from the Spanish Ministry of Agriculture. End of explanation !head data/M01_Center_Finca_temperature_data_2004_2015.csv # Loading the data temp_data = # Importing SciPy stats # Applying some functions: describe, mode, mean... Explanation: The file contains data from 2004 to 2015 (included). Each row corresponds to a day of the year, so evey 365 lines contain data from a whole year* Note1: 29th February has been removed for leap-years. Note2: Missing values have been replaced with the immediately prior valid data. These kind of events are better handled with Pandas! End of explanation temp_data2 = np.zeros([365, 3, 12]) # Calculating mean of mean temp # max of max # min of min Explanation: We can also get information about percentiles! Rearranging the data End of explanation plt.style.available # plotting max_max, min_min, mean_mean Explanation: Let's visualize data! Using matplotlib styles http://matplotlib.org/users/whats_new.html#styles End of explanation # mean vs mean_mean # and max, min 2015 Explanation: Let's see if 2015 was a normal year... End of explanation #we will use numpy functions in order to work with numpy arrays def funcion(x,y): return # 0D: works! funcion(3,5) # 1D: works! x = np. plt.plot( , ) Explanation: But the power of Matplotlib does not end here! For example, lets represent a function over a 2D domain! 
For this we will use the contour function, which requires some special inputs... End of explanation #We can create the X and Y matrices by hand, or use a function designed to make ir easy: #we create two 1D arrays of the desired lengths: x_1d = np.linspace(0, 5, 5) y_1d = np.linspace(-2, 4, 7) #And we use the meshgrid function to create the X and Y matrices! X, Y = X Y Explanation: In oder to plot the 2D function, we will need a grid. For 1D domain, we just needed one 1D array containning the X position and another 1D array containing the value. Now, we will create a grid, a distribution of points covering a surface. For the 2D domain, we will need: - One 2D array containing the X coordinate of the points. - One 2D array containing the Y coordinate of the points. - One 2D array containing the function value at the points. The three matrices must have the exact same dimensions, because each cell of them represents a particular point. End of explanation #Using Numpy arrays, calculating the function value at the points is easy! Z #Let's plot it! Explanation: Note that with the meshgrid function we can only create rectangular grids End of explanation x_1d = np.???(0, 5, 100) y_1d = np.???(-2, 4, 100) X, Y = np.???( , ) Z = funcion(X,Y) plt.contour(X, Y, Z) plt.colorbar() Explanation: We can try a little more resolution... End of explanation plt.contourf( , , , ,cmap=plt.cm.Spectral) #With cmap, a color map is specified plt.colorbar() plt.contourf( , , , ,cmap=plt.cm.Spectral) plt.colorbar() #We can even combine them! plt.contourf(X, Y, Z, np.linspace(-2, 2, 100),cmap=plt.cm.Spectral) plt.colorbar() cs = plt.???(X, Y, Z, np.linspace(-2, 2, 9), colors='k') plt.clabel(cs) Explanation: The countourf function is simmilar, but it colours also between the lines. In both functions, we can manually adjust the number of lines/zones we want to differentiate on the plot. End of explanation time_vector = np. ('data/ligo_tiempos.txt') frequency_vector = np. ('data/ligo_frecuencias.txt') intensity_matrix = np. ('data/ligo_datos.txt') Explanation: These functions can be enormously useful when you want to visualize something. And remember! Always visualize data! Let's try it with Real data! End of explanation time_2D, freq_2D = np. plt. ( ) #We can manually adjust the sice of the picture plt. ( , , ,np.linspace(0, 0.02313, 200),cmap='bone') plt.xlabel('time (s)') plt.ylabel('Frequency (Hz)') plt.colorbar() Explanation: The time and frequency vectors contain the values at which the instrument was reading, and the intensity matrix, the postprocessed strength measured for each frequency at each time. We need again to create the 2D arrays of coordinates. End of explanation plt.figure(figsize=(10,6)) plt.contourf(time_2D, freq_2D,intensity_matrix,np.linspace(0, 0.02313, 200),cmap = plt.cm.Spectral) plt.colorbar() plt.contour(time_2D, freq_2D,intensity_matrix,np.linspace(0, 0.02313, 9), colors='k') plt.xlabel('time (s)') plt.ylabel('Frequency (Hz)') plt.axis([9.9, 10.05, 0, 300]) Explanation: Wow! What is that? Let's zoom into it! End of explanation from ipywidgets import interact #Lets define a extremely simple function: def ejemplo(x): print(x) #Try changing the value of x to True, 'Hello' or ['hello', 'world'] #We can control the slider values with more precission: Explanation: IPython Widgets The IPython Widgets are interactive tools to use in the notebook. They are fun and very useful to quickly understand how different parameters affect a certain function. 
This is based on a section of the PyConEs 14 talk by Kiko Correoso "Hacking the notebook": http://nbviewer.jupyter.org/github/kikocorreoso/PyConES14_talk-Hacking_the_Notebook/blob/master/notebooks/Using%20Interact.ipynb End of explanation x = np.linspace(-1, 7, 1000) fig = plt.figure() fig.tight_layout() plt.subplot(211)#This allows us to display multiple sub-plots, and where to put them plt.plot(x, np.sin(x)) plt.grid(False) plt.title("Audio signal: modulator") plt.subplot(212) plt.plot(x, np.sin(50 * x)) plt.grid(False) plt.title("Radio signal: carrier") #Am modulation simply works like this: am_wave = np.sin(50 * x) * (0.5 + 0.5 * np.sin(x)) plt.plot(x, am_wave) Explanation: If you want a dropdown menu that passes non-string values to the Python function, you can pass a dictionary. The keys in the dictionary are used for the names in the dropdown menu UI and the values are the arguments that are passed to the underlying Python function. Let's have some fun! We talked before about frequencys and waves. Have you ever learn about AM and FM modulation? It's the process used to send radio communications! End of explanation def am_mod (f_carr=50, f_mod=1, depth=0.5): #The default values will be the starting points of the sliders interact(am_mod, f_carr = (1,100,2), f_mod = (0.2, 2, 0.1), depth = (0, 1, 0.1)) Explanation: In order to interact with it, we will need to transform it into a function End of explanation # Importación from sympy import init_session init_session(use_latex='matplotlib') #We must start calling this function Explanation: Other options... 5. Other packages Symbolic calculations with SymPy SymPy is a Python package for symbolic math. We will not cover it in depth, but let's take a picure of the basics! End of explanation coef_traccion = w = W = w, W Explanation: The basic unit of this package is the symbol. A simbol object has name and graphic representation, which can be different: End of explanation x, y, z, t = symbols('x y z t', real=True) x.assumptions0 Explanation: By default, SymPy takes symbols as complex numbers. That can lead to unexpected results in front of certain operations, like logarithms. We can explicitly signal that a symbol is real when we create it. We can also create several symbols at a time. End of explanation expr = expr #We can substitute pieces of the expression: expr. #We can particularize on a certain value: (sin(x) + 3 * x). #We can evaluate the numerical value with a certain precission: (sin(x) + 3 * x). Explanation: Expressions can be created from symbols: End of explanation expr1 = (x ** 3 + 3 * y + 2) ** 2 expr1 expr1. Explanation: We can manipulate the expression in several ways. For example: End of explanation expr = cos(2*x) expr. expr_xy = y ** 3 * sin(x) ** 2 + x ** 2 * cos(y) expr_xy int2 = 1 / sin(x) x, a = symbols('x a', real=True) int3 = 1 / (x**2 + a**2)**2 Explanation: We can derivate and integrate: End of explanation a, x, t, C = symbols('a, x, t, C', real=True) ecuacion = ecuacion x = symbols('x') f = Function('y') ecuacion_dif = ecuacion_dif Explanation: We also have ecuations and differential ecuations: End of explanation # Notebook style from IPython.core.display import HTML css_file = './static/style.css' HTML(open(css_file, "r").read()) Explanation: Data Analysis with pandas Pandas is a package that focus on data structures and data analysis tools. We will not cover it because the next workshop, by Kiko Correoso, will develop it in depth. 
Machine Learning with scikit-learn Scikit-learn is a very complete Python package focusing on machine learning, and data mining and analysis. We will not cover it in depth because it will be the focus of many more talks at the PyData. A world of possibilities... Thanks for your attention! Any Questions? End of explanation
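Since the SymPy cells in this row are left as blanks to fill in, here is a minimal sketch of the symbol/expression workflow they describe (differentiation, integration and a simple equation); the particular expression is an arbitrary choice, not taken from the workshop material:
from sympy import symbols, sin, diff, integrate, Eq, solve
x, a = symbols('x a', real=True)
expr = sin(2*x) + a*x**2
d_expr = diff(expr, x)        # 2*a*x + 2*cos(2*x)
i_expr = integrate(expr, x)   # a*x**3/3 - cos(2*x)/2
roots = solve(Eq(a*x**2 - 4, 0), x)
print(d_expr, i_expr, roots)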
11,213
Given the following text description, write Python code to implement the functionality described below step by step Description: Assembly of system with multiple domains, variables and numerics This tutorial demonstrates how a transient problem may be solved in PorePy. We consider the advective-diffusive tracer transport problem for a slightly compressible flow field described in the darcy_and_transport_equations.ipynb tutorial. The geometry and flow parameters are as in example 2 of this benchmark study. We start by importing modules Step1: Transport parameters The following parameters are added to the data dictionaries Step2: We start by making a grid bucket, pecifying the keywords for the parameters of the flow and transport problem, respectively. Then we assigne the problem data and visualize the domain. Step3: Variable definition and initialization Now, we define the variables on subdomains and interfaces and populate the STATE and ITERATE dictionaries with initial values. Step4: ## Define AD objects We make a DofManager and an EquationManager responsible for global degree-of-freedom handling and equation discretization and assembly, respectively. Then, we define discretizations, AD parameters and variables. Step5: Define flow equation There is one equation on all subdomains with three terms (fluxes, source/sink and accumulation), while there is one flux relation on the interfaces. Step6: Define transport equation There is one equation on all subdomains with four terms (conductive fluxes, advective fluxes, source/sink and accumulation). On the interfaces, there are two flux equations (conduction and advection). A few notes on the advective discretization Step7: Interface fluxes Step8: Solve problem Discretize, assemble and solve the linear system in a time loop. We also have to update the STATE and ITERATE and back-compute the fluid fluxes (which are not primary variables in the primal formulation of the finite volume scheme used) from the pressure solution. We plot the tracer distribution at each time step and the final pressure. The tracer enters from the right boundary, and is transported more quickly through the fractures than the matrix.
Python Code: import numpy as np import scipy.sparse as sps import porepy as pp import data.flow_benchmark_2d_geiger_setup as setup Explanation: Assembly of system with multiple domains, variables and numerics This tutorial demonstrates how a transient problem may be solved in PorePy. We consider the advective-diffusive tracer transport problem for a slightly compressible flow field described in the darcy_and_transport_equations.ipynb tutorial. The geometry and flow parameters are as in example 2 of this benchmark study. We start by importing modules: End of explanation def add_transport_data(gb, parameter_keyword): # Method to assign data. tol = 1e-4 aperture = 1e-4 kappa_f = 1e-4 for g, d in gb: # Boundary conditions: Dirichlet for left and right side of the domain b_faces = g.tags["domain_boundary_faces"].nonzero()[0] bc_val = np.zeros(g.num_faces) unity = np.ones(g.num_cells) empty = np.empty(0) if b_faces.size != 0: b_face_centers = g.face_centers[:, b_faces] b_inflow = b_face_centers[0, :] < tol b_outflow = b_face_centers[0, :] > 1-tol labels = np.array(["neu"] * b_faces.size) labels[np.logical_or(b_inflow, b_outflow)] = "dir" bc = pp.BoundaryCondition(g, b_faces, labels) bc_val[b_faces[b_inflow]] = 1 else: bc = pp.BoundaryCondition(g) #, empty, empty) # Porosity if g.dim == gb.dim_max(): porosity = 0.2 * unity else: porosity = 0.8 * unity specific_volume = np.power(aperture, gb.dim_max() - g.dim) diffusivity = kappa_f * np.ones(g.num_cells) tensor = pp.SecondOrderTensor(diffusivity * specific_volume) # Inherit the aperture assigned for the flow problem specified_parameters = { "bc": bc, "bc_values": bc_val, "mass_weight": porosity * specific_volume, "second_order_tensor": tensor, } pp.initialize_default_data(g, d, parameter_keyword, specified_parameters) # Store the dimension in the dictionary for visualization purposes d[pp.STATE] = {"dimension": g.dim * np.ones(g.num_cells)} for e, d in gb.edges(): mg = d["mortar_grid"] specific_volume_h = np.power(1e-4, gb.dim_max() - (mg.dim + 1)) diffusivity_n = kappa_f / (aperture / 2) * specific_volume_h parameters = { "normal_diffusivity": diffusivity_n, "darcy_flux": np.zeros(mg.num_cells), } pp.initialize_data(g, d, parameter_keyword, parameters) Explanation: Transport parameters The following parameters are added to the data dictionaries: * Heat capacity * (Normal) thermal conductivity * Boundary conditions, both type and values * Darcy flux, needed for initial discretization. These values will be computed from the pressure solution to the fluid flow problem as the simulation proceeds. End of explanation mesh_args = {"mesh_size_frac": .08, "mesh_size_bound": .12} gb, domain = pp.grid_buckets_2d.benchmark_regular(mesh_args) fracture_permeability = 1e4 kw_f = 'flow' kw_t = 'transport' # Add data - will only add flow data setup.add_data(gb, domain, fracture_permeability) # Transport related parameters add_transport_data(gb, kw_t) pp.plot_grid(gb, "dimension", figsize=(12, 10)) Explanation: We start by making a grid bucket, pecifying the keywords for the parameters of the flow and transport problem, respectively. Then we assigne the problem data and visualize the domain. 
End of explanation subdomain_pressure_variable = "pressure" subdomain_tracer_variable = "tracer" interface_flux_variable = "interface_flux" interface_advection_variable = "interface_advection" interface_conduction_variable = "interface_conduction" # Loop over the nodes in the GridBucket, define primary variables and discretization schemes for g, d in gb: d[pp.PRIMARY_VARIABLES] = {subdomain_pressure_variable: {"cells": 1, "faces": 0}, subdomain_tracer_variable: {"cells": 1, "faces": 0}} vals = { subdomain_pressure_variable: np.zeros(g.num_cells), subdomain_tracer_variable: np.zeros(g.num_cells), } d[pp.STATE] = vals d[pp.STATE][pp.ITERATE] = vals.copy() # Loop over the edges in the GridBucket, define primary variables and discretizations for e, d in gb.edges(): mg = d["mortar_grid"] d[pp.PRIMARY_VARIABLES] = {interface_flux_variable: {"cells": 1}, interface_advection_variable: {"cells": 1}, interface_conduction_variable: {"cells": 1}, } vals = { interface_flux_variable: np.zeros(mg.num_cells), interface_advection_variable: np.zeros(mg.num_cells), interface_conduction_variable: np.zeros(mg.num_cells), } d[pp.STATE] = vals d[pp.STATE][pp.ITERATE] = vals.copy() Explanation: Variable definition and initialization Now, we define the variables on subdomains and interfaces and populate the STATE and ITERATE dictionaries with initial values. End of explanation dof_manager = pp.DofManager(gb) eq_manager = pp.ad.EquationManager(gb, dof_manager) # Ad geometry subdomain_list = [g for g, _ in gb.nodes()] interface_list = [e for e, d in gb.edges()] subdomain_proj = pp.ad.SubdomainProjections(subdomain_list) mortar_proj = pp.ad.MortarProjections( edges=interface_list, grids=subdomain_list, gb=gb, nd=1 ) # Ad discretization objects flow = pp.ad.TpfaAd(kw_f, subdomain_list) conduction = pp.ad.TpfaAd(kw_t, subdomain_list) advection = pp.ad.UpwindAd(kw_t, subdomain_list) accumulation_f = pp.ad.MassMatrixAd(kw_f, subdomain_list) accumulation_t = pp.ad.MassMatrixAd(kw_t, subdomain_list) interface_flow = pp.ad.RobinCouplingAd(kw_f, interface_list) interface_advection = pp.ad.UpwindCouplingAd(kw_t, interface_list) interface_conduction = pp.ad.RobinCouplingAd(kw_t, interface_list) #Operators div = pp.ad.Divergence(grids=subdomain_list) trace = pp.ad.Trace(grids=subdomain_list) # Parameters dt = 5e-2 bc_val_f = pp.ad.BoundaryCondition(kw_f, subdomain_list) bc_val_t = pp.ad.BoundaryCondition(kw_t, subdomain_list) source_f = pp.ad.ParameterArray( param_keyword=kw_f, array_keyword="source", grids=subdomain_list, ) source_t = pp.ad.ParameterArray( param_keyword=kw_t, array_keyword="source", grids=subdomain_list, ) # Ad variables p = eq_manager.merge_variables( [(g, subdomain_pressure_variable) for g in subdomain_list] ) p_prev = p.previous_timestep() interface_flux = eq_manager.merge_variables( [(e, interface_flux_variable) for e in interface_list] ) t = eq_manager.merge_variables( [(g, subdomain_tracer_variable) for g in subdomain_list] ) t_prev = t.previous_timestep() advective_interface_flux = eq_manager.merge_variables( [(e, interface_advection_variable) for e in interface_list] ) conductive_interface_flux = eq_manager.merge_variables( [(e, interface_conduction_variable) for e in interface_list] ) Explanation: ## Define AD objects We make a DofManager and an EquationManager responsible for global degree-of-freedom handling and equation discretization and assembly, respectively. Then, we define discretizations, AD parameters and variables. 
End of explanation flux = ( flow.flux * p + flow.bound_flux * bc_val_f + flow.bound_flux * mortar_proj.mortar_to_primary_int * interface_flux ) # Optionally, each term may be given a name. This can be highly useful for debugging! flux.set_name("darcy flux") subdomain_flow_eq = ( accumulation_f.mass * (p - p_prev) / dt + div * flux - mortar_proj.mortar_to_secondary_int * interface_flux - source_f ) ## Interface equation # Reconstruct primary/higher-dimensional pressure on internal boundaries p_primary = ( flow.bound_pressure_cell * p + flow.bound_pressure_face * mortar_proj.mortar_to_primary_int * interface_flux + flow.bound_pressure_face * bc_val_f ) # Project the two pressures to the interface and equate with the interface flux interface_flow_eq = ( interface_flow.mortar_discr * ( mortar_proj.primary_to_mortar_avg * p_primary - mortar_proj.secondary_to_mortar_avg * p ) + interface_flux ) Explanation: Define flow equation There is one equation on all subdomains with three terms (fluxes, source/sink and accumulation), while there is one flux relation on the interfaces. End of explanation # Conduction is a direct analogy to fluid flux conductive_flux = ( conduction.flux * t + conduction.bound_flux * bc_val_t + conduction.bound_flux * mortar_proj.mortar_to_primary_int * conductive_interface_flux ) advective_flux = ( flux * ( advection.upwind * t ) - advection.bound_transport_dir * flux * bc_val_t - advection.bound_transport_neu * ( mortar_proj.mortar_to_primary_int * advective_interface_flux + bc_val_t) ) accumulation_term = ( accumulation_t.mass * (t - t_prev) ) / dt subdomain_transport_eq = ( accumulation_term + div * (conductive_flux + advective_flux) - mortar_proj.mortar_to_secondary_int * conductive_interface_flux - mortar_proj.mortar_to_secondary_int * advective_interface_flux - source_t ) Explanation: Define transport equation There is one equation on all subdomains with four terms (conductive fluxes, advective fluxes, source/sink and accumulation). On the interfaces, there are two flux equations (conduction and advection). A few notes on the advective discretization: * Upwind discretization of advective flux. * Dirichlet and Neumann boundary conditions are handled separately. This is needed since Dirichlet values naturally need multiplication with the flux value (the tracer concentration on the boundary is specified), while Neumann values are interpreted as specifying the product between the flux and the conserved quantity. * The advective interface flux corresponding to an internal Neumann condition, the bound_transport_neu discretization is used for that term. * No weight is applied, i.e. c_w is assumed to equal 1. End of explanation # Again, direct analogy to fluid flux. 
t_primary = ( conduction.bound_pressure_cell * t + conduction.bound_pressure_face * mortar_proj.mortar_to_primary_int * conductive_interface_flux + conduction.bound_pressure_face * bc_val_t ) interface_conduction_eq = ( interface_conduction.mortar_discr * ( mortar_proj.primary_to_mortar_avg * t_primary - mortar_proj.secondary_to_mortar_avg * t ) + conductive_interface_flux ) # Relate advective flux to cell centre tracer of the upwind subdomain (cell-wise) interface_advection_eq = ( interface_flux * (interface_advection.upwind_primary * mortar_proj.primary_to_mortar_avg * trace.trace * t) + interface_flux * (interface_advection.upwind_secondary * mortar_proj.secondary_to_mortar_avg * t) - advective_interface_flux ) equations = { "Subdomain flow": subdomain_flow_eq, "Subdomain transport": subdomain_transport_eq, "Interface flow": interface_flow_eq, "Interface advection": interface_advection_eq, "Interface conduction": interface_conduction_eq, } eq_manager.equations.update(equations) Explanation: Interface fluxes End of explanation # Discretize all terms eq_manager.discretize(gb) time = dt final_time = 1.5e-1 sol_prev = np.zeros(dof_manager.num_dofs()) nonlinear_tolerance = 1e-10 while time < final_time + 1e-6: converged = False while not converged: # Rediscretize advection based on previous iterate of darcy_flux # in case flow field is transient pp.fvutils.compute_darcy_flux(gb, keyword_store=kw_t, lam_name=interface_flux_variable, from_iterate=True) # More targeted approaches are possible: eq_manager.discretize(gb) # Iteration A, b = eq_manager.assemble() sol = sps.linalg.spsolve(A, b) error = np.linalg.norm(sol, np.inf) sol_prev = sol.copy() dof_manager.distribute_variable( values=sol, additive=True, to_iterate=True ) if error < nonlinear_tolerance: converged = True solution = dof_manager.assemble_variable(from_iterate=True) dof_manager.distribute_variable(values=solution, additive=False) pp.plot_grid(gb, subdomain_tracer_variable, figsize=(8, 6), title=f"Tracer at time {time:.2}") time += dt pp.plot_grid(gb, subdomain_pressure_variable, figsize=(8, 6), title="Final pressure") Explanation: Solve problem Discretize, assemble and solve the linear system in a time loop. We also have to update the STATE and ITERATE and back-compute the fluid fluxes (which are not primary variables in the primal formulation of the finite volume scheme used) from the pressure solution. We plot the tracer distribution at each time step and the final pressure. The tracer enters from the right boundary, and is transported more quickly through the fractures than the matrix. End of explanation
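The upwind idea mentioned in the transport discretization above can be illustrated independently of PorePy with a first-order upwind step for 1D advection on a periodic grid; this is a generic numpy sketch of the scheme, unrelated to the PorePy API:
import numpy as np
nx = 100
v, dt = 1.0, 0.005                 # advection speed and time step
dx = 1.0 / nx
x = np.linspace(0, 1, nx, endpoint=False)
u = np.exp(-200 * (x - 0.3)**2)    # initial tracer blob
for _ in range(50):
    if v >= 0:
        du = u - np.roll(u, 1)     # take information from the upstream (left) cell
    else:
        du = np.roll(u, -1) - u    # upstream is on the right for v < 0
    u = u - v * dt / dx * du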
11,214
Given the following text description, write Python code to implement the functionality described below step by step Description: Inference Sandbox In this notebook, we'll mock up some data from the linear model, as reviewed here. Then it's your job to implement a Metropolis sampler and constrain the posterior distribution. The goal is to play with various strategies for accelerating the convergence and acceptance rate of the chain. Remember to check the convergence and stationarity of your chains, and compare them to the known analytic posterior for this problem! Generate a data set Step1: Package up a log-posterior function. Step2: Convenience functions encoding the exact posterior Step3: Demo some plots of the exact posterior distribution Step4: Ok, you're almost ready to go! A decidedly minimal stub of a Metropolis loop appears below; of course, you don't need to stick exactly with this layout. Once again, after running a chain, be sure to visually inspect traces of each parameter to see whether they appear converged, and compare the marginal and joint posterior distributions to the exact solution to check whether they've converged to the correct distribution (see the snippets farther down). If you think you have a sampler that works well, use it to run some more chains from different starting points and compare them both visually and using the numerical convergence criteria covered in class. Once you have a working sampler, the question is
Python Code: import numpy as np import matplotlib.pyplot as plt import scipy.stats %matplotlib inline plt.rcParams['figure.figsize'] = (5.0, 5.0) # the model parameters a = np.pi b = 1.6818 # my arbitrary constants mu_x = np.exp(1.0) # see definitions above tau_x = 1.0 s = 1.0 N = 50 # number of data points # get some x's and y's x = mu_x + tau_x*np.random.randn(N) y = a + b*x + s*np.random.randn(N) plt.plot(x, y, 'o'); Explanation: Inference Sandbox In this notebook, we'll mock up some data from the linear model, as reviewed here. Then it's your job to implement a Metropolis sampler and constrain the posterior distriubtion. The goal is to play with various strategies for accelerating the convergence and acceptance rate of the chain. Remember to check the convergence and stationarity of your chains, and compare them to the known analytic posterior for this problem! Generate a data set: End of explanation def lnPost(params, x, y): # This is written for clarity rather than numerical efficiency. Feel free to tweak it. a = params[0] b = params[1] lnp = 0.0 # Using informative priors to achieve faster convergence is cheating in this exercise! # But this is where you would add them. lnp += -0.5*np.sum((a+b*x - y)**2) return lnp Explanation: Package up a log-posterior function. End of explanation class ExactPosterior: def __init__(self, x, y, a0, b0): X = np.matrix(np.vstack([np.ones(len(x)), x]).T) Y = np.matrix(y).T self.invcov = X.T * X self.covariance = np.linalg.inv(self.invcov) self.mean = self.covariance * X.T * Y self.a_array = np.arange(0.0, 6.0, 0.02) self.b_array = np.arange(0.0, 3.25, 0.02) self.P_of_a = np.array([self.marg_a(a) for a in self.a_array]) self.P_of_b = np.array([self.marg_b(b) for b in self.b_array]) self.P_of_ab = np.array([[self.lnpost(a,b) for a in self.a_array] for b in self.b_array]) self.P_of_ab = np.exp(self.P_of_ab) self.renorm = 1.0/np.sum(self.P_of_ab) self.P_of_ab = self.P_of_ab * self.renorm self.levels = scipy.stats.chi2.cdf(np.arange(1,4)**2, 1) # confidence levels corresponding to contours below self.contourLevels = self.renorm*np.exp(self.lnpost(a0,b0)-0.5*scipy.stats.chi2.ppf(self.levels, 2)) def lnpost(self, a, b): # the 2D posterior z = self.mean - np.matrix([[a],[b]]) return -0.5 * (z.T * self.invcov * z)[0,0] def marg_a(self, a): # marginal posterior of a return scipy.stats.norm.pdf(a, self.mean[0,0], np.sqrt(self.covariance[0,0])) def marg_b(self, b): # marginal posterior of b return scipy.stats.norm.pdf(b, self.mean[1,0], np.sqrt(self.covariance[1,1])) exact = ExactPosterior(x, y, a, b) Explanation: Convenience functions encoding the exact posterior: End of explanation plt.plot(exact.a_array, exact.P_of_a); plt.plot(exact.b_array, exact.P_of_b); plt.contour(exact.a_array, exact.b_array, exact.P_of_ab, colors='blue', levels=exact.contourLevels); plt.plot(a, b, 'o', color='red'); Explanation: Demo some plots of the exact posterior distribution End of explanation Nsamples = # fill in a number samples = np.zeros((Nsamples, 2)) # put any more global definitions here for i in range(Nsamples): a_try, b_try = proposal() # propose new parameter value(s) lnp_try = lnPost([a_try,b_try], x, y) # calculate posterior density for the proposal if we_accept_this_proposal(lnp_try, lnp_current): # do something else: # do something else plt.rcParams['figure.figsize'] = (12.0, 3.0) plt.plot(samples[:,0]); plt.plot(samples[:,1]); plt.rcParams['figure.figsize'] = (5.0, 5.0) plt.plot(samples[:,0], samples[:,1]); plt.rcParams['figure.figsize'] = (5.0, 5.0) 
plt.hist(samples[:,0], 20, normed=True, color='cyan'); plt.plot(exact.a_array, exact.P_of_a, color='red'); plt.rcParams['figure.figsize'] = (5.0, 5.0) plt.hist(samples[:,1], 20, normed=True, color='cyan'); plt.plot(exact.b_array, exact.P_of_b, color='red'); # If you know how to easily overlay the 2D sample and theoretical confidence regions, by all means do so. Explanation: Ok, you're almost ready to go! A decidely minimal stub of a Metropolis loop appears below; of course, you don't need to stick exactly with this layout. Once again, after running a chain, be sure to visually inspect traces of each parameter to see whether they appear converged compare the marginal and joint posterior distributions to the exact solution to check whether they've converged to the correct distribution (see the snippets farther down) If you think you have a sampler that works well, use it to run some more chains from different starting points and compare them both visually and using the numerical convergence criteria covered in class. Once you have a working sampler, the question is: how can we make it converge faster? Experiment! We'll compare notes in a bit. End of explanation
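One minimal way to flesh out the stub above — reusing the lnPost, x and y defined earlier; the step sizes and starting point are hypothetical tuning choices, and a Gaussian random-walk proposal is only one of many options — is sketched here.

import numpy as np

Nsamples = 10000
samples = np.zeros((Nsamples, 2))
step = np.array([0.1, 0.1])          # random-walk proposal widths (hypothetical tuning values)

a_current, b_current = 2.0, 1.0      # arbitrary starting point
lnp_current = lnPost([a_current, b_current], x, y)

n_accept = 0
for i in range(Nsamples):
    # propose a Gaussian random-walk step around the current position
    a_try = a_current + step[0] * np.random.randn()
    b_try = b_current + step[1] * np.random.randn()
    lnp_try = lnPost([a_try, b_try], x, y)
    # Metropolis rule: accept with probability min(1, exp(lnp_try - lnp_current))
    if np.log(np.random.rand()) < lnp_try - lnp_current:
        a_current, b_current, lnp_current = a_try, b_try, lnp_try
        n_accept += 1
    samples[i, :] = a_current, b_current

print("acceptance fraction:", n_accept / float(Nsamples))

Shrinking or growing the step sizes, or proposing along the correlated directions suggested by the exact covariance above, are the kinds of tweaks the exercise asks you to experiment with.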
11,215
Given the following text description, write Python code to implement the functionality described below step by step Description: Note Step1: Solution Step2: Bonus
Python Code: first_commit = git_log.index[-1] first_commit today = pd.to_datetime('today') type(today) Explanation: Note: We are using the UNIX timestamp here because it's superfast to convert it to a real datatime64 data type. Cleaning up wrong timestamps _Note: 'today'is suboptimal End of explanation git_log[(git_log < today) & (git_log >= first_commit)] corrected_dates = git_log.iloc[ -2 ] # & (git_log.index <= 'today') corrected_dates corrected_dates = git_log.iloc[:, -1] # & (git_log.index <= 'today') corrected_dates %matplotlib inline corrected_dates = git_log.loc[ str(git_log.index[-1]) : '2017-1-1' ] # & (git_log.index <= 'today') corrected_dates grouped_by_time = corrected_dates.groupby(pd.TimeGrouper(freq="M")).count() grouped_by_time.plot(figsize=(15,5)) grouped_by_time.head() Explanation: Soluition End of explanation git_log['author'] = git_log['author'].fillna("UNKNOWN") git_log.head() git_log['author'].value_counts().head() git_log[git_log['author'].str.contains('Viro')] Explanation: Bonus: get rid of the incomplete months at the beginning or end (but does it make sens at all to remove them?) One author didn't provide his/her name, so it's null. What to do about it? Remove it or set it to unknown? End of explanation
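On current pandas releases pd.TimeGrouper has been removed; a rough equivalent of the cleaning-plus-monthly-count above — assuming git_log is indexed by commit timestamp and has an author column, which is how the cells above appear to use it — could look like this sketch.

import pandas as pd

today = pd.Timestamp('today')
first_commit = git_log.index[-1]

# keep only plausible timestamps: nothing before the first commit, nothing in the future
cleaned = git_log[(git_log.index >= first_commit) & (git_log.index <= today)].copy()

# fill the missing author name and count commits per month with pd.Grouper
cleaned['author'] = cleaned['author'].fillna('UNKNOWN')
commits_per_month = cleaned.groupby(pd.Grouper(freq='M')).count()
commits_per_month.plot(figsize=(15, 5))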
11,216
Given the following text description, write Python code to implement the functionality described below step by step Description: ============================================== Read and visualize projections (SSP and other) ============================================== This example shows how to read and visualize Signal Subspace Projectors (SSP) vector. Such projections are sometimes referred to as PCA projections. Step1: Load the FIF file and display the projections present in the file. Here the projections are added to the file during the acquisition and are obtained from empty room recordings. Step2: Display the projections one by one Step3: Use the function in mne.viz to display a list of projections Step4: As shown in the tutorial on how to tut-viz-raw the ECG projections can be loaded from a file and added to the raw object Step5: Displaying the projections from a raw object requires no extra information since all the layout information is present in raw.info. MNE is able to automatically determine the layout for some magnetometer and gradiometer configurations but not the layout of EEG electrodes. Here we display the ecg_projs individually and we provide extra parameters for EEG. (Notice that planar projection refers to the gradiometers and axial refers to magnetometers.) Notice that the conditional is just for illustration purposes. We could raw.info in all cases to avoid the guesswork in plot_topomap and ensure that the right layout is always found Step6: The correct layout or a list of layouts from where to choose can also be provided. Just for illustration purposes, here we generate the possible_layouts from the raw object itself, but it can come from somewhere else.
Python Code: # Author: Joan Massich <[email protected]> # # License: BSD (3-clause) import matplotlib.pyplot as plt import mne from mne import read_proj from mne.io import read_raw_fif from mne.datasets import sample print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects' fname = data_path + '/MEG/sample/sample_audvis_raw.fif' ecg_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif' Explanation: ============================================== Read and visualize projections (SSP and other) ============================================== This example shows how to read and visualize Signal Subspace Projectors (SSP) vector. Such projections are sometimes referred to as PCA projections. End of explanation raw = read_raw_fif(fname) empty_room_proj = raw.info['projs'] # Display the projections stored in `info['projs']` from the raw object raw.plot_projs_topomap() Explanation: Load the FIF file and display the projections present in the file. Here the projections are added to the file during the acquisition and are obtained from empty room recordings. End of explanation fig, axes = plt.subplots(1, len(empty_room_proj)) for proj, ax in zip(empty_room_proj, axes): proj.plot_topomap(axes=ax) Explanation: Display the projections one by one End of explanation assert isinstance(empty_room_proj, list) mne.viz.plot_projs_topomap(empty_room_proj) Explanation: Use the function in mne.viz to display a list of projections End of explanation # read the projections ecg_projs = read_proj(ecg_fname) # add them to raw and plot everything raw.add_proj(ecg_projs) raw.plot_projs_topomap() Explanation: As shown in the tutorial on how to tut-viz-raw the ECG projections can be loaded from a file and added to the raw object End of explanation fig, axes = plt.subplots(1, len(ecg_projs)) for proj, ax in zip(ecg_projs, axes): if proj['desc'].startswith('ECG-eeg'): proj.plot_topomap(axes=ax, info=raw.info) else: proj.plot_topomap(axes=ax) Explanation: Displaying the projections from a raw object requires no extra information since all the layout information is present in raw.info. MNE is able to automatically determine the layout for some magnetometer and gradiometer configurations but not the layout of EEG electrodes. Here we display the ecg_projs individually and we provide extra parameters for EEG. (Notice that planar projection refers to the gradiometers and axial refers to magnetometers.) Notice that the conditional is just for illustration purposes. We could raw.info in all cases to avoid the guesswork in plot_topomap and ensure that the right layout is always found End of explanation possible_layouts = [mne.find_layout(raw.info, ch_type=ch_type) for ch_type in ('grad', 'mag', 'eeg')] mne.viz.plot_projs_topomap(ecg_projs, layout=possible_layouts) Explanation: The correct layout or a list of layouts from where to choose can also be provided. Just for illustration purposes, here we generate the possible_layouts from the raw object itself, but it can come from somewhere else. End of explanation
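As a small follow-up, attaching projectors (as done above with add_proj) does not by itself modify the signals; they only take effect once applied to the data. A minimal sketch, assuming the raw object and projectors from above and enough memory to load the recording:

raw_clean = raw.copy().load_data()   # work on an in-memory copy, keep the original untouched
raw_clean.apply_proj()               # project the empty-room and ECG components out of the data
print(raw_clean.info['projs'])       # the projectors stay listed, now flagged as applied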
11,217
Given the following text description, write Python code to implement the functionality described below step by step Description: Let's start off with a square figure. Step1: Give it a nice wooden color http Step2: The default background color for Axes is white. Let's propagate the figure background color by making the axis background color transparent, or 'none' Step3: Set the ticks to reflect 19 by 19 board Step4: Add gridlines Step5: Generate and plot the hoshi (star) points Step6: Need to set the x and y lims... Go boards are ordinarily read from the top-left. Step7: The column labels are normally at the top. Step8: The column labels are often the letters of the alphabet, and not just in the Western world of Go. Otherwise, Chinese characters representing the numbers are sometimes used as well, but along the rows. Step9: The hoshi are massive... Let's make them smaller by setting markersize for now. I don't think this will scale with figsize, so well need to revisit this later. Patches apparently do scale. Step10: The stones! Step11: The white stones seem to be transparent, but this is not the issue. It has to do with the order the elements are drawn Step12: Let's enlarge it to make sure everything scales well. This confirms our earlier suspicion that ordinary markers do not scale, and patches do. Step13: For Print Publication Step14: Print publications normally have a thicker border. Step15: Saving/Exporting Let's try saving some of our figures. Step16: http Step17: Annotation
Python Code: fig = plt.figure(figsize=(5, 5)) ax = fig.add_subplot(1, 1, 1) plt.show() Explanation: Let's start off with a square figure. End of explanation BOARD_COLOR = '#d7be9f' fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) plt.show() Explanation: Give it a nice wooden color http://encycolorpedia.com/c19a6b End of explanation fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') plt.show() Explanation: The default background color for Axes is white. Let's propagate the figure background color by making the axis background color transparent, or 'none' End of explanation fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) plt.show() Explanation: Set the ticks to reflect 19 by 19 board End of explanation fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) plt.show() Explanation: Add gridlines End of explanation hoshi = list(product(range(3, 19, 6), repeat=2)) hoshi hoshi_rows, hoshi_cols = zip(*hoshi) hoshi_rows, hoshi_cols fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko') plt.show() Explanation: Generate and plot the hoshi (star) points End of explanation fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko') ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) plt.show() Explanation: Need to set the x and y lims... Go boards are ordinarily read from the top-left. End of explanation fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko') ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') plt.show() Explanation: The column labels are normally at the top. End of explanation ascii_uppercase fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko') ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) plt.show() Explanation: The column labels are often the letters of the alphabet, and not just in the Western world of Go. Otherwise, Chinese characters representing the numbers are sometimes used as well, but along the rows. 
End of explanation fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) plt.show() Explanation: The hoshi are massive... Let's make them smaller by setting markersize for now. I don't think this will scale with figsize, so well need to revisit this later. Patches apparently do scale. End of explanation fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k')) ax.add_patch(Circle((9, 10), .475, facecolor='w')) ax.add_patch(Circle((8, 9), .475, facecolor='w')) ax.add_patch(Circle((9, 8), .475, facecolor='w')) plt.show() Explanation: The stones! End of explanation fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=10)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=10)) plt.show() Explanation: The white stones seem to be transparent, but this is not the issue. It has to do with the order the elements are drawn: http://stackoverflow.com/questions/5390699/patches-i-add-to-my-graph-are-not-opaque-with-alpha-1-why End of explanation fig = plt.figure(figsize=(10, 10), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=10)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=10)) plt.show() Explanation: Let's enlarge it to make sure everything scales well. This confirms our earlier suspicion that ordinary markers do not scale, and patches do. 
End of explanation fig = plt.figure(figsize=(5, 5)) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=10)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=10)) plt.show() Explanation: For Print Publication End of explanation fig = plt.figure(figsize=(5, 5)) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') for spine in ax.spines.itervalues(): spine.set_linewidth(1.5) ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=10)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=10)) plt.show() Explanation: Print publications normally have a thicker border. End of explanation fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=10)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=10)) fig.savefig('test_color1.png') Explanation: Saving/Exporting Let's try saving some of our figures. 
End of explanation fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=10)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=10)) fig.savefig('test_color2.png', facecolor=fig.get_facecolor()) fig = plt.figure(figsize=(5, 5)) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') for spine in ax.spines.itervalues(): spine.set_linewidth(1.5) ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=10)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=10)) fig.savefig('test_bw.png') Explanation: http://stackoverflow.com/questions/4804005/matplotlib-figure-facecolor-background-color End of explanation fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=10)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=10)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=10)) ax.text(15, 13, 'a') ax.text(8, 9, 'b') ax.text(9, 9, 'c', horizontalalignment='center', verticalalignment='center') plt.show() fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=2)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=2)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=2)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=2)) ax.text(15, 13, 'a') ax.text(8, 9, 'b') ax.text(9, 9, 'c', horizontalalignment='center', verticalalignment='center') plt.show() fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) 
ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=3)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=3)) ax.text(15, 13, 'a') ax.text(8, 9, 'b') ax.text(9, 9, 'c', horizontalalignment='center', verticalalignment='center') plt.show() fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=3)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=3)) ax.text(15, 13, 'a', horizontalalignment='center', verticalalignment='center') ax.text(8, 9, 'b', horizontalalignment='center', verticalalignment='center') ax.text(9, 9, 'c', horizontalalignment='center', verticalalignment='center') plt.show() fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=3)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=3)) ax.text(15, 13, 'a', backgroundcolor=BOARD_COLOR, horizontalalignment='center', verticalalignment='center') ax.text(8, 9, 'b', horizontalalignment='center', verticalalignment='center') ax.text(9, 9, 'c', color='w', horizontalalignment='center', verticalalignment='center') plt.show() fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=3)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=3)) ax.text(15, 13, 'a', fontsize='large', backgroundcolor=BOARD_COLOR, horizontalalignment='center', verticalalignment='center') ax.text(8, 9, 'b', fontsize='large', horizontalalignment='center', verticalalignment='center') ax.text(9, 9, 'c', fontsize='large', color='w', horizontalalignment='center', verticalalignment='center') plt.show() fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') 
ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=3)) ax.add_patch(Circle((9, 10), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((9, 8), .475, facecolor='w', zorder=3)) ax.text(15, 13, 'a', fontsize='large', backgroundcolor=BOARD_COLOR, horizontalalignment='center', verticalalignment='center') ax.text(8, 9, 'b', fontsize='large', horizontalalignment='center', verticalalignment='center') ax.text(9, 9, 'c', fontsize='large', color='w', horizontalalignment='center', verticalalignment='center') plt.show() fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=3)) ax.add_patch(Circle((9., 10), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((9, 8), .475, facecolor='w', clip_on=False, zorder=3)) ax.text(15, 13, 'a', fontsize='large', backgroundcolor=BOARD_COLOR, horizontalalignment='center', verticalalignment='center') ax.text(8, 9, 'b', fontsize='large', horizontalalignment='center', verticalalignment='center') ax.text(9, 9, 'c', fontsize='large', color='w', horizontalalignment='center', verticalalignment='center') ax.add_patch(Circle((9, 8), .475, facecolor='w', clip_on=False, zorder=3)) ax.plot([9], [8], marker='^', zorder=10) plt.show() fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=3)) ax.add_patch(Circle((9., 10), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((9, 8), .475, facecolor='w', clip_on=False, zorder=3)) ax.text(15, 13, 'a', fontsize='large', backgroundcolor=BOARD_COLOR, horizontalalignment='center', verticalalignment='center') ax.text(8, 9, 'b', fontsize='large', horizontalalignment='center', verticalalignment='center') ax.text(9, 9, 'c', fontsize='large', color='w', horizontalalignment='center', verticalalignment='center') ax.add_patch(Circle((9, 8), .475, facecolor='w', clip_on=False, zorder=3)) ax.text(9, 8, '$\lambda$', fontsize='large', horizontalalignment='center', verticalalignment='center') plt.show() fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) 
ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=3)) ax.add_patch(Circle((9., 10), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((9, 8), .475, facecolor='w', clip_on=False, zorder=3)) ax.text(15, 13, 'a', fontsize='large', backgroundcolor=BOARD_COLOR, horizontalalignment='center', verticalalignment='center') ax.text(8, 9, 'b', fontsize='large', horizontalalignment='center', verticalalignment='center') ax.text(9, 9, 'c', fontsize='large', color='w', horizontalalignment='center', verticalalignment='center') ax.add_patch(Circle((9, 8), .475, facecolor='w', clip_on=False, zorder=3)) ax.text(9, 8, '$\\bigtriangledown$', fontsize='large', horizontalalignment='center', verticalalignment='center') plt.show() fig = plt.figure(figsize=(8, 8), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=3)) ax.add_patch(Circle((9., 10), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((9, 8), .475, facecolor='w', clip_on=False, zorder=3)) ax.text(15, 13, 'a', fontsize='large', backgroundcolor=BOARD_COLOR, horizontalalignment='center', verticalalignment='center') ax.text(8, 9, 'b', fontsize='large', horizontalalignment='center', verticalalignment='center') ax.text(9, 9, 'c', fontsize='large', color='w', horizontalalignment='center', verticalalignment='center') ax.add_patch(Circle((9, 8), .475, facecolor='w', clip_on=False, zorder=3)) ax.text(9, 8, '$\\bigtriangledown$', fontsize='large', horizontalalignment='center', verticalalignment='center') plt.show() from matplotlib.collections import PatchCollection fig = plt.figure(figsize=(5, 5), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3) for center in hoshi: ax.add_patch(Circle(center, .01, edgecolor='none', facecolor='k')) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=3)) ax.add_patch(Circle((9., 10), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((9, 8), .475, facecolor='w', clip_on=False, zorder=3)) ax.text(15, 13, 'a', fontsize='large', backgroundcolor=BOARD_COLOR, horizontalalignment='center', verticalalignment='center') ax.text(8, 9, 'b', fontsize='large', horizontalalignment='center', verticalalignment='center') ax.text(9, 9, 'c', fontsize='large', color='w', horizontalalignment='center', verticalalignment='center') ax.add_patch(Circle((9, 8), .475, facecolor='w', clip_on=False, zorder=3)) ax.text(9, 8, '$\\bigtriangledown$', fontsize='large', horizontalalignment='center', verticalalignment='center') plt.show() fig = plt.figure(figsize=(8, 
8), facecolor=BOARD_COLOR) ax = fig.add_subplot(1, 1, 1) ax.set_axis_bgcolor('none') ax.set_xticks(range(19)) ax.set_yticks(range(19)) ax.grid(color='k', linestyle='-', linewidth=1) ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3, antialiased=False) ax.set_xlim((0, 18)) ax.set_ylim((18, 0)) ax.xaxis.set_tick_params(labelbottom='off', labeltop='on') ax.set_xticklabels(ascii_uppercase[:19]) ax.set_yticklabels(range(1, 19+1)) ax.add_patch(Circle((9, 9), .475, facecolor='k', zorder=3)) ax.add_patch(Circle((9., 10), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((8, 9), .475, facecolor='w', zorder=3)) ax.add_patch(Circle((9, 8), .475, facecolor='w', clip_on=False, zorder=3)) ax.text(15, 13, 'a', fontsize='large', backgroundcolor=BOARD_COLOR, horizontalalignment='center', verticalalignment='center') ax.text(8, 9, 'b', fontsize='large', horizontalalignment='center', verticalalignment='center') ax.text(9, 9, 'c', fontsize='large', color='w', horizontalalignment='center', verticalalignment='center') ax.add_patch(Circle((9, 8), .475, facecolor='w', clip_on=False, zorder=3)) ax.text(9, 8, '$\\bigtriangledown$', fontsize='large', horizontalalignment='center', verticalalignment='center') plt.show() Explanation: Annotation End of explanation
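Since the same board-drawing boilerplate is repeated in every cell above, one way to factor it out — reusing the notebook's BOARD_COLOR, hoshi_rows/hoshi_cols, ascii_uppercase, plt and Circle names; the helper names themselves are just suggestions — is a pair of small functions like this sketch.

def new_board(figsize=(5, 5), with_hoshi=True):
    # return (fig, ax) holding an empty 19x19 goban styled like the cells above
    fig = plt.figure(figsize=figsize, facecolor=BOARD_COLOR)
    ax = fig.add_subplot(1, 1, 1)
    ax.set_axis_bgcolor('none')
    ax.set_xticks(range(19))
    ax.set_yticks(range(19))
    ax.grid(color='k', linestyle='-', linewidth=1)
    if with_hoshi:
        ax.plot(hoshi_rows, hoshi_cols, 'ko', markersize=3)
    ax.set_xlim((0, 18))
    ax.set_ylim((18, 0))
    ax.xaxis.set_tick_params(labelbottom='off', labeltop='on')
    ax.set_xticklabels(ascii_uppercase[:19])
    ax.set_yticklabels(range(1, 19 + 1))
    return fig, ax

def add_stone(ax, col, row, color):
    # place one stone ('k' for black, 'w' for white) on the given intersection
    ax.add_patch(Circle((col, row), .475, facecolor=color, zorder=3))

# usage: rebuild the four-stone position from the cells above
fig, ax = new_board()
add_stone(ax, 9, 9, 'k')
for col, row in [(9, 10), (8, 9), (9, 8)]:
    add_stone(ax, col, row, 'w')
plt.show()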
11,218
Given the following text description, write Python code to implement the functionality described below step by step Description: Dropout Dropout is a method used in deep neural networks to avoid overfitting. During training, at each iteration every layer keeps a random subset of its nodes with a preset probability, called keep_prob; the other nodes are ignored (set to 0) in that forward pass and are likewise ignored in the backward pass. The crossed-out nodes in the figure below are the nodes dropped in one such iteration. Purpose and applications Dropout serves two main purposes Step1: Implemented with PyTorch, trying to ensure that each layer keeps exactly keep_prob * number-of-nodes nodes Step2: Implemented with numpy, which is simpler; when the number of nodes is large, the random draws roughly guarantee that the actual fraction kept matches the keep probability
Python Code: keep_prob = 0.5 do_dropout = True Explanation: Dropout Dropout is a method used in deep neural networks to avoid overfitting. During training, at each iteration every layer keeps a random subset of its nodes with a preset probability, called keep_prob; the other nodes are ignored (set to 0) in that forward pass and are likewise ignored in the backward pass. The crossed-out nodes in the figure below are the nodes dropped in one such iteration. Purpose and applications Dropout serves two main purposes: during training each iteration effectively shrinks the network, which acts as regularization. dropout prevents too much weight from being concentrated on any single node and instead spreads the weights across all nodes. In practice, dropout works particularly well in computer vision, because the inputs are usually image pixels with very high dimensionality, and the layer shapes, which generally follow the input shape, are therefore also large. Inverted Dropout With classic dropout, nodes are dropped at random during training and the kept nodes are updated through backpropagation. At validation and test time, however, every node has to be multiplied by its layer's keep probability (keep_prob); this operation is called scaling. Having to scale at every prediction is inconvenient. Hinton et al. proposed inverted dropout: during training, the kept nodes are divided by keep_prob (inverted scaling) to compensate for the dropped ones, so no scaling is needed at validation or test time. Reference Andrew Ng's video explaining the motivation for dropout End of explanation import torch import copy w1 = torch.randn(4, 4) # weights of one layer w = copy.deepcopy(w1) w def dropout_strict(w, keep_prob): implement inverted dropout ensuring that the share of kept neurons is strictly keep_prob. Args: w (torch.tensor) : weights before dropout keep_prob(float) : keep probability k = round(w.shape[1] * keep_prob) _, indices = torch.topk(torch.randn(w.shape[0], w.shape[1]), k) keep = torch.zeros(4, 4).scatter_(dim=1, index=indices, src=torch.ones_like(w)) w *= keep w /= keep_prob if do_dropout: dropout_strict(w, keep_prob) print(w) Explanation: Implemented with PyTorch, trying to ensure that each layer keeps exactly keep_prob * number-of-nodes nodes End of explanation import numpy as np import copy w1 = np.random.randn(4, 4) # weights of one layer w = copy.deepcopy(w1) w def dropout_loose(w, keep_prob): A simple Implementation of inverted dropout. Args: w(np.array) :- neurons subject to dropout keep_prob(float) :- keep probability keep = np.random.rand(w.shape[0], w.shape[1]) < keep_prob w *= keep w /= keep_prob if do_dropout: dropout_loose(w, keep_prob) print(w) Explanation: Implemented with numpy, which is simpler; when the number of nodes is large, the random draws roughly guarantee that the actual fraction kept matches the keep probability End of explanation
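To see why dividing by keep_prob removes the need for any test-time scaling, here is a small numpy sanity check (same recipe as dropout_loose above, applied to a large batch of made-up activations): the mean activation under inverted dropout matches the plain, no-dropout mean up to sampling noise.

import numpy as np

np.random.seed(0)
keep_prob = 0.5
x = np.random.randn(10000, 100)                 # a large batch of fake activations

# training time: inverted dropout mask, then compensate by dividing by keep_prob
keep = np.random.rand(*x.shape) < keep_prob
x_train = x * keep / keep_prob

# test time: use the activations as-is, no mask and no extra scaling
x_test = x

print(x_train.mean(), x_test.mean())            # the two means agree closely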
11,219
Given the following text description, write Python code to implement the functionality described below step by step Description: This script shows how to use the existing code in opengrid to create a baseload electricity consumption benchmark. Step1: Script settings Step2: We create one big dataframe, the columns are the sensors of type electricity Step3: Convert Datetimeindex to local time Step5: We define two low-level functions Step6: Data handling We have to filter out the data, we do three things Step7: Plots The next plots are the current benchmarks, anonymous. The left figure shows where the given sensor (or family) is situated compared to all other families. The right plot shows the night-time consumption for this night. In a next step, it would be nice to create an interactive plot (D3.js?) for the right side
Python Code: import os, sys import inspect import numpy as np import datetime as dt import time import pytz import pandas as pd import pdb script_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) # add the path to opengrid to sys.path sys.path.append(os.path.join(script_dir, os.pardir, os.pardir)) from opengrid.library import config c=config.Config() DEV = c.get('env', 'type') == 'dev' # DEV is True if we are in development environment, False if on the droplet if not DEV: # production environment: don't try to display plots import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt from matplotlib.dates import HourLocator, DateFormatter, AutoDateLocator # find tmpo sys.path.append(c.get('tmpo', 'folder')) from opengrid.library.houseprint import houseprint if DEV: if c.get('env', 'plots') == 'inline': %matplotlib inline else: %matplotlib qt else: pass # don't try to render plots plt.rcParams['figure.figsize'] = 12,8 Explanation: This script shows how to use the existing code in opengrid to create a baseload electricity consumption benchmark. End of explanation BXL = pytz.timezone('Europe/Brussels') number_of_days = 7 Explanation: Script settings End of explanation hp = houseprint.load_houseprint_from_file('new_houseprint.pkl') hp.init_tmpo() start = pd.Timestamp(time.time() - number_of_days*86400, unit='s') df = hp.get_data(sensortype='electricity', head=start, resample='s') df = df.resample(rule='60s', how='max') df = df.diff()*3600/60 Explanation: We create one big dataframe, the columns are the sensors of type electricity End of explanation df.index = df.index.tz_convert(BXL) # plot a few dataframes to inspect them if DEV: for sensor in df.columns: plt.figure() df[sensor].plot() Explanation: Convert Datetimeindex to local time End of explanation def testvalid(row): return row['maxima'] > 0 and row['maxima'] <> row['minima'] def get_minima(sensor): Return the standby consumption for the covered days for a given sensor as an array. Take care of days where this sensor has NO VALID standby consumption global minima res = np.ndarray(len(minima)) for i,df in enumerate(minima): try: res[i] = df[sensor] except: res[i] = np.nan return res Explanation: We define two low-level functions End of explanation index_slices = [] # will contain the correct index slices for each of the analysed nights minima = [] # each element in minima is a dataframe with standby consumption per valid sensor valid_sensors = set() # we keep track of all sensors that yield a valid standby consumption for at least one day. # find the date for which we still have the full night (between 01:00 and 05:00). We will store it as datetime at 00:00 (local time) hour = df.index[-1].hour # the hour of the last index. if hour >= 5: last_day = df.index[-1].date() else: last_day = (df.index[-1] - dt.timedelta(days=1)).date() for day in range(number_of_days)[::-1]: #pdb.set_trace() dt_start = dt.datetime.combine(last_day - dt.timedelta(days=day), dt.time(0,0)) # start slicing at 01:00 local time dt_stop = dt.datetime.combine(last_day - dt.timedelta(days=day), dt.time(5,0)) # stop slicing at 05:00 local time df_night = df.ix[dt_start:dt_stop] # contains only data for a single night index_slices.append(df_night.index.copy()) df_results = pd.DataFrame(index=df.columns) #df_results contains the results of the analysis for a single night. 
Index = sensorid df_results['minima'] = df_night.min(axis=0) df_results['maxima'] = df_night.max(axis=0) df_results['valid'] = df_results.apply(testvalid, axis=1) minima.append(df_results['minima'].ix[df_results.valid]) valid_sensors.update(set(minima[-1].index.tolist())) Explanation: Data handling We have to filter out the data, we do three things: split the data in dataframes per day filter out the night-time hours (between 00h00 and 05h00) we check if the resulting time series contain enough variation (negatives and constant signals are filtered out) End of explanation index_slices_days = [x[0] for x in index_slices[1:]] index = pd.DatetimeIndex(freq='D', start=index_slices_days[0], periods=number_of_days) df_=pd.concat(minima, axis=1) df_.columns = index df_statistics = df_.describe().T df_statistics for sensor in list(valid_sensors)[:]: plt.figure(figsize=(10,8)) ax1=plt.subplot(211) ax1.plot_date(df_statistics.index, df_statistics[u'25%'], '-', lw=2, color='g', label=u'25%') ax1.plot_date(df_statistics.index, df_statistics[u'50%'], '-', lw=2, color='orange', label=u'50%') ax1.plot_date(df_statistics.index, df_statistics[u'75%'], '-', lw=2, color='r', label=u'75%') ax1.plot_date(df_.T.index, df_.T[sensor], 'rD', ms=7) xticks = [x.strftime(format='%d/%m') for x in df_statistics.index] locs, lables=plt.xticks() plt.xticks(locs, xticks, rotation='vertical') plt.title(hp.find_sensor(sensor).device.key + ' - ' + sensor) ax1.grid() ax1.set_ylabel('Watt') ax2=plt.subplot(212) try: ax2.plot(index_slices[-1], df.ix[index_slices[-1]][sensor], 'b-', label='Afgelopen nacht') ax2.xaxis_date(BXL) #Put timeseries plot in local time # rotate the labels plt.xticks(rotation='vertical') plt.legend() ax2.set_ylabel('Watt') except: print "Could not create graph for {}".format(hp.find_sensor(sensor).device.key) else: plt.savefig(os.path.join(c.get('data', 'folder'), 'figures', 'standby_vertical_'+sensor+'.png'), dpi=100) if not DEV: plt.close() try: valid_sensors.remove('565de0a7dc64d8370aa321491217b85f') # the FLM of 3E does not fit in household standby benchmark except: pass for sensor in valid_sensors: plt.figure(figsize=(10,5)) ax1=plt.subplot(121) box = [x.values for x in minima] ax1.boxplot(box, positions=range(len(df_statistics)), notch=False) ax1.plot(range(len(df_statistics)), get_minima(sensor), 'rD', ms=10, label='Sluipverbruik') xticks = [x[0].strftime(format='%d/%m') for x in index_slices] plt.xticks(range(len(df_statistics)), xticks, rotation='vertical') #plt.title(hp.get_flukso_from_sensor(sensor) + ' - ' + sensor) ax1.grid() ax1.set_ylabel('Watt') plt.legend(numpoints=1, frameon=False) #ax1.set_xticklabels([t.strftime(format='%d/%m') for t in df_all_perday.index.tolist()]) ax2=plt.subplot(122) try: ax2.plot(index_slices[-1], df.ix[index_slices[-1]][sensor], 'b-', label='Afgelopen nacht') ax2.xaxis_date(BXL) #Put timeseries plot in local time # rotate the labels plt.xticks(rotation='vertical') ax2.set_ylabel('Watt') ax2.grid() plt.legend(loc='upper right', frameon=False) plt.tight_layout() except Exception as e: print(e) else: plt.savefig(os.path.join(c.get('data', 'folder'), 'figures', 'standby_horizontal_'+sensor+'.png'), dpi=100) if not DEV: plt.close() Explanation: Plots The next plots are the current benchmarks, anonymous. The left figure shows where the given sensor (or family) is situated compared to all other families. The right plot shows the night-time consumption for this night. In a next step, it would be nice to create an interactive plot (D3.js?) 
for the right side: it should show the night-time consumption for the day over which the mouse hovers in the left graph. End of explanation
11,220
Given the following text description, write Python code to implement the functionality described below step by step Description: Sinusoid autoencoder trained with multiple phases Let's provide more training examples - sinusoids with various phases. Step1: The model should be able to handle a noise-corrupted input signal. Step2: This time the model should also be able to handle a phase-shifted signal, since it was trained on such examples.
Python Code: %pylab inline import keras import numpy as np import keras N = 50 # phase_step = 1 / (2 * np.pi) t = np.arange(50) phases = np.linspace(0, 1, N) * 2 * np.pi x = np.array([np.sin(2 * np.pi / N * t + phi) for phi in phases]) print(x.shape) imshow(x); plot(x[0]); plot(x[1]); plot(x[2]); from keras.models import Sequential from keras.layers import containers from keras.layers.core import Dense, AutoEncoder encoder = containers.Sequential([ Dense(25, input_dim=50), Dense(12) ]) decoder = containers.Sequential([ Dense(25, input_dim=12), Dense(50) ]) model = Sequential() model.add(AutoEncoder(encoder=encoder, decoder=decoder, output_reconstruction=True)) model.compile(loss='mean_squared_error', optimizer='sgd') plot(model.predict(x)[0]); from loss_history import LossHistory loss_history = LossHistory() model.fit(x, x, nb_epoch=1000, batch_size=50, callbacks=[loss_history]) plot(model.predict(x)[0]) plot(x[0]) plot(model.predict(x)[10]) plot(x[10]) print('last loss:', loss_history.losses[-1]) plot(loss_history.losses); imshow(model.get_weights()[0], interpolation='nearest', cmap='gray'); imshow(model.get_weights()[2], interpolation='nearest', cmap='gray'); Explanation: Sinusoid autoencoder trained with multiple phases Let's provide more training examples - sinusoid with various phases. End of explanation x_noised = x + 0.2 * np.random.random(len(x[0])) plot(x_noised[0], label='input') plot(model.predict(x_noised)[0], label='predicted') legend(); Explanation: The model should be able to handle noise-corrupted input signal. End of explanation x_shifted = np.cos(2*np.pi/N * t.reshape(1, -1)) plot(x_shifted[0], label='input') plot(model.predict(x_shifted)[0], label='predicted') legend(); Explanation: This time the model should be able to handle also phase-shifted signal since it was trained such. End of explanation
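The containers.Sequential and AutoEncoder classes used above only exist in very old Keras releases. On a current Keras install, a rough functional-API equivalent of the same 50-25-12-25-50 architecture (a sketch, not a drop-in replacement for the cells above) would be:

from keras.models import Model
from keras.layers import Input, Dense

inp = Input(shape=(50,))
h = Dense(25)(inp)          # encoder: 50 -> 25
code = Dense(12)(h)         # encoder: 25 -> 12
h = Dense(25)(code)         # decoder: 12 -> 25
out = Dense(50)(h)          # decoder: 25 -> 50

autoencoder = Model(inp, out)
autoencoder.compile(loss='mean_squared_error', optimizer='sgd')
autoencoder.fit(x, x, epochs=1000, batch_size=50, verbose=0)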
11,221
Given the following text description, write Python code to implement the functionality described below step by step Description: 1 Introduccion a IPython notebooks/ Jupyter Que es exactamente? Una libreta IPython/Jupyter es un ambiente interactivo para escribir y correr codigo de python. Es un historial completo y auto-contenido de un calculo y puede ser convertido a otros formatos para compartir con otros. En particular es batante popular en la comunidad cientifica porque es una herramienta interactiva, iterativa para analisis de datos, visualizacion y contar historias. Puedes combinar Step1: Computo Iterativo El "kernel" mantiene un estado de todos los calculos del la libreta. Por ejemplo puedes guardar el resultado de un calculo en una variable Step2: y usarlo en otra celda Step3: Parar codigo El codigo se corre en un proceso separado llamado el"kernel Step4: Resetear Puedes resetear usando el boton <button class='btn btn-default btn-xs'><i class='fa fa-repeat icon-repeat'></i></button>. Python Basico Lists and Arrays (Listas y arreglos) Step5: podemos agregar elementos Step6: o podemos usar el modulo Numpy para arreglos numericos (Protip Step7: y que tal arreglos vacios? Aqui creamos un vector de 5 x 1 Step8: podemos cambiar valores usando [ indice ] Step9: y que tal arreglos de numeros aleatorios? Step10: Utilidades de Array Tenemos muchas utilidades para trabajar con arreglos... Sortear valores Step11: For loops (Ciclos For) Un for loop va sobre cada elemento del ciclo, Ojo! nota el espaciamiento/indentacion justo despues del for! Step12: Doble! Step13: un loop pero enumerado, te rergresa un indice y un elemento Step14: Y si quieres un elemento aleatorio de la lista? El modulo random al rescate, corre varias veces la celda y checa si son aleatorios Step15: Que tal funciones? Step16: Actividad 1 Step17: Videos? Step18: External Websites, HTML?
Python Code: print("hola bolivia") Explanation: 1 Introduccion a IPython notebooks/ Jupyter Que es exactamente? Una libreta IPython/Jupyter es un ambiente interactivo para escribir y correr codigo de python. Es un historial completo y auto-contenido de un calculo y puede ser convertido a otros formatos para compartir con otros. En particular es batante popular en la comunidad cientifica porque es una herramienta interactiva, iterativa para analisis de datos, visualizacion y contar historias. Puedes combinar: - Codigo en vivo - Widgets Interactivos - Graficas - Texto Narrrativo - Ecuaciones - Imagenes - Video Un poco mas... El projecto Ipython reciente se expandio en la versio 3.0 para incluir otros kerneles de computo como R, Julia, C++ y Matlab. Para mas informacion/ideas checa los links abajo de este Ipython Notebook. Vamos a empezar! Corriendo codigo Corre tu codido usando Shift-Enter o presionando el boton <button class='btn btn-default btn-xs'><i class="icon-play fa fa-play"></i></button> en la barra de herramientas arriba. End of explanation un_str = "Cuanto es 2 x 4 ?" resultado= 2 * 4 Explanation: Computo Iterativo El "kernel" mantiene un estado de todos los calculos del la libreta. Por ejemplo puedes guardar el resultado de un calculo en una variable End of explanation print(un_str) print(resultado) print("Magia!") Explanation: y usarlo en otra celda End of explanation import time time.sleep(10) Explanation: Parar codigo El codigo se corre en un proceso separado llamado el"kernel:". Este puede ser interumpido o reseteado. Trata de correr el siguiente codigo y dale al boton <button class='btn btn-default btn-xs'><i class='icon-stop fa fa-stop'></i></button>. End of explanation a_list = [ "vaca", "taco", "gato"] #list print(a_list) Explanation: Resetear Puedes resetear usando el boton <button class='btn btn-default btn-xs'><i class='fa fa-repeat icon-repeat'></i></button>. Python Basico Lists and Arrays (Listas y arreglos) End of explanation a_list.append("pollo") # now part of the family a_list Explanation: podemos agregar elementos End of explanation import numpy as np # lista de numeros del 0 a 100, en incrementos de 1 numeros = np.arange(0,100,1) print(numeros) Explanation: o podemos usar el modulo Numpy para arreglos numericos (Protip: quieres saber mas? dale click a la imagen) End of explanation array_vacio = np.zeros((5,1)) array_vacio Explanation: y que tal arreglos vacios? Aqui creamos un vector de 5 x 1 End of explanation array_vacio[2] = 8 # manipular el tercer elemento array_vacio Explanation: podemos cambiar valores usando [ indice ]: End of explanation integers = np.random.randint(low=1,high=10, size=100000) integers Explanation: y que tal arreglos de numeros aleatorios? End of explanation print("Sortead :",np.sort(integers)) print("Max:",np.max(integers),", Min:",np.min(integers)) print("Max at:",np.argmax(integers),", Min at:",np.argmin(integers)) print("Mean:",np.mean(integers),", Std:",np.std(integers)) Explanation: Utilidades de Array Tenemos muchas utilidades para trabajar con arreglos... Sortear valores: entoces usamos np.sort(array) Encontrar Maximo, Minimo: entoces usamos np.min(array), np.max(array) Encontrar el indices del Maximo o Minimo: entoces usamos np.argmin(array), np.argmax(array) Calcular medias o desviaciones estandar : usamos np.mean(array),np.std(array) End of explanation for i in range(2,10,2): print(i) print("Fin!") Explanation: For loops (Ciclos For) Un for loop va sobre cada elemento del ciclo, Ojo! 
nota el espaciamiento/indentacion justo despues del for! End of explanation for i in a_list: for j in a_list: print i, j print("Fin!") Explanation: Doble! End of explanation for index,item in enumerate(a_list): print(index,item) print("Done!") Explanation: un loop pero enumerado, te rergresa un indice y un elemento End of explanation import random random.sample(a_list, 2) # select 1 random.sample(numeros, 10) Explanation: Y si quieres un elemento aleatorio de la lista? El modulo random al rescate, corre varias veces la celda y checa si son aleatorios End of explanation # La definimos.. def reordenar(lista): val=random.sample(lista,1) return val # la llamamos print(reordenar(numeros)) print(reordenar(numeros)) Explanation: Que tal funciones? End of explanation from IPython.display import Image Image(filename='files/large-hadron-collider.jpg') Explanation: Actividad 1 : Ejercicios de Programacion Meta: Obtener confianza con Python. 1.a) Usa un ciclo-for e imprime tus platillos bolivianos favoritos 1.b) Crea un arreglo de numeros aleatorios de tamano $n$ 1.c) Encapsula la funcion pasada en una funcion 1.d) Grafica un histograma de numeros aleatorios con $n=10,50,100,1k,10k$ Para graficar histogramas usa plt.hist(). Extra: Sumatoria de ondas Extra: Mas cosas para darle sabor Celdas de Texto: Latex & Markdown Celdas se crean por default como celdas de codigo, pero se pueden cambiar. Cell are by default created as code cells, can be but can be easily changed to text cells by cliking on the toolbar. In text cells you can embed narrative text using Markdown, HTML code and LaTeX equations with inline dollar signs \$ insert equation \$ and new line as \$\$ insert equation \$\$. For example: $$H\psi = E\psi$$ The code for this cell is: ```markdown Text Cells: Latex & Markdown Cell are by default created as code cells, can be but can be easily changed to text cells by cliking on the toolbar. In text cells you can embed narrative text using Markdown, HTML code and LaTeX equations with inline dollar signs \$ insert equation \$ and new line as \$\$ insert equation \$\$. For example: $$H\psi = E\psi$$ ``` Images We can work with images (JPEG, PNG) and SVG via the Image and SVG class. End of explanation from IPython.display import YouTubeVideo #https://www.youtube.com/watch?v=_6uKZWnJLCM YouTubeVideo('_6uKZWnJLCM') Explanation: Videos? End of explanation from IPython.display import HTML HTML('<iframe src=http://ipython.org/ width=700 height=350></iframe>') Explanation: External Websites, HTML? End of explanation
11,222
Given the following text description, write Python code to implement the functionality described below step by step Description: Try not to peek at the solutions when you go through the exercises. ;-) First let's make sure this notebook works well in both Python 2 and Python 3 Step1: TensorFlow basics Step2: Construction Phase Step3: Execution Phase Step8: Exercise 1 1.1) Create a simple graph that calculates $ c = \exp(\sqrt 8 + 3) $. Tip Step9: Try not to peek at the solution below before you have done the exercise! Step10: 1.2) Step11: 1.3) Step12: Important Step13: 1.4)
Python Code: from __future__ import absolute_import, division, print_function, unicode_literals Explanation: Try not to peek at the solutions when you go through the exercises. ;-) First let's make sure this notebook works well in both Python 2 and Python 3: End of explanation import tensorflow as tf tf.__version__ Explanation: TensorFlow basics End of explanation >>> a = tf.constant(3) >>> b = tf.constant(5) >>> s = a + b a b s tf.get_default_graph() >>> graph = tf.Graph() >>> with graph.as_default(): ... a = tf.constant(3) ... b = tf.constant(5) ... s = a + b ... Explanation: Construction Phase End of explanation >>> with tf.Session(graph=graph) as sess: ... result = s.eval() ... >>> result >>> with tf.Session(graph=graph) as sess: ... result = sess.run(s) ... >>> result >>> with tf.Session(graph=graph) as sess: ... result = sess.run([a,b,s]) ... >>> result Explanation: Execution Phase End of explanation import numpy as np from IPython.display import display, HTML def strip_consts(graph_def, max_const_size=32): Strip large constant values from graph_def. strip_def = tf.GraphDef() for n0 in graph_def.node: n = strip_def.node.add() n.MergeFrom(n0) if n.op == 'Const': tensor = n.attr['value'].tensor size = len(tensor.tensor_content) if size > max_const_size: tensor.tensor_content = b"<stripped %d bytes>"%size return strip_def def show_graph(graph_def=None, max_const_size=32): Visualize TensorFlow graph. graph_def = graph_def or tf.get_default_graph() if hasattr(graph_def, 'as_graph_def'): graph_def = graph_def.as_graph_def() strip_def = strip_consts(graph_def, max_const_size=max_const_size) code = <script> function load() {{ document.getElementById("{id}").pbtxt = {data}; }} </script> <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()> <div style="height:600px"> <tf-graph-basic id="{id}"></tf-graph-basic> </div> .format(data=repr(str(strip_def)), id='graph'+str(np.random.rand())) iframe = <iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe> .format(code.replace('"', '&quot;')) display(HTML(iframe)) Explanation: Exercise 1 1.1) Create a simple graph that calculates $ c = \exp(\sqrt 8 + 3) $. Tip: TensorFlow's API documentation is available at: https://www.tensorflow.org/versions/master/api_docs/python/ 1.2) Now create a Session() and evaluate the operation that gives you the result of the equation above: 1.3) Create a graph that evaluates and prints both $ b = \sqrt 8 $ and $ c = \exp(\sqrt 8 + 3) $. Try to implement this in a way that only evaluates $ \sqrt 8 $ once. 1.4) The following code is needed to display TensorFlow graphs in Jupyter. Just run this cell then visualize your graph by calling show_graph(your graph): End of explanation graph = tf.Graph() with graph.as_default(): c = tf.exp(tf.add(tf.sqrt(tf.constant(8.)), tf.constant(3.))) # or simply... c = tf.exp(tf.sqrt(8.) + 3.) Explanation: Try not to peek at the solution below before you have done the exercise! :) Exercise 1 - Solution 1.1) End of explanation with tf.Session(graph=graph): c_val = c.eval() c_val Explanation: 1.2) End of explanation graph = tf.Graph() with graph.as_default(): b = tf.sqrt(8.) c = tf.exp(b + 3) with tf.Session(graph=graph) as sess: b_val, c_val = sess.run([b, c]) b_val c_val Explanation: 1.3) End of explanation # WRONG! with tf.Session(graph=graph): b_val = b.eval() # evaluates b c_val = c.eval() # evaluates c, which means evaluating b again! 
b_val c_val Explanation: Important: the following implementation gives the right result, but it runs the graph twice, once to evaluate b, and once to evaluate c. Since c depends on b, it means that b will be evaluated twice. Not what we wanted. End of explanation show_graph(graph) Explanation: 1.4) End of explanation
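A quick sanity check on the exercise above, added here as a minimal sketch rather than part of the original notebook: it assumes the graph, b and c from the 1.3 solution are still in scope, and compares the TensorFlow result against the same expression computed with Python's math module.

import math

reference = math.exp(math.sqrt(8) + 3)   # exp(sqrt(8) + 3) computed outside TensorFlow

with tf.Session(graph=graph) as sess:
    b_val, c_val = sess.run([b, c])      # one run, so sqrt(8) is evaluated only once

print(b_val, c_val, reference)
assert abs(c_val - reference) < 1e-3     # allow for float32 rounding in the graph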
11,223
Given the following text description, write Python code to implement the functionality described below step by step Description: Autoregressive (AR) Models by Maxwell Margenot, Delaney Mackenzie, and Lee Tobey Lee Tobey is the founder of Hedgewise. Part of the Quantopian Lecture Series Step1: Note how this process fluctuates around some central value. This value is the mean of our time series. As we have a constant mean throughout time and the fluctuations seem to all stray within a given distance from the mean, we might hypothesize that this series is stationary. We would want to rigorously test that in practice, which we will explore lightly in the examples at the end of this lecture. Also see the stationarity lecture from the Quantopian Lecture Series. In this case, however, we have constructed the model to be stationary, so no need to worry about testing for stationarity right now. Tail Risk Autoregressive processes will tend to have more extreme values than data drawn from say a normal distribution. This is because the value at each time point is influenced by recent values. If the series randomly jumps up, it is more likely to stay up than a non-autoregressive series. This is known as 'fat-tailledness' (fat-tailed distribution) because the extremes on the pdf will be fatter than in a normal distribution. Much talk of tail risk in finance comes from the fact that tail events do occur and are hard to model due to their infrequent occurrence. If we have reason to suspect that a process is autoregressive, we should expect risk from extreme tail events and adjust accordingly. AR models are just one of the sources of tail risk, so don't assume that because a series is non-AR, it does not have tail risk. We'll check for that behavior now. Step2: Estimations of Variance Will be Wrong Because an AR process has a tail heavy and non-normal distribution of outcomes, estimates of variance on AR processes will be wrong. This is dangerous because variance is used to calculate many quantities in staistics, most importantly confidence intervals and p-values. Because the width of the confidence interval is often based on a variance estimate, we can no longer trust p-values that come from AR processes. For more information on p-values please see the Hypothesis Testing notebook in the Quantopian Lecture Series. Let's check this here. First we'll define some helper functions that compute a naive 95% confidence interval for the true value of the mean on some input series. Step3: Now we'll run an experiment 1000 times in which we compute an AR series, then estimate the mean and take a naive 95% confidence interval around it. Then we'll check if the confidence interval contains 0, the true long-term mean of our series, and record that in our outcomes array. Step4: Finally let's check, if our test is calibrated correctly, then we should have the confidence interval contain 0, 95% of the time. Step5: Looks like something is severly wrong. What's going on here is that the AR series moves around a lot more, but the estimate of variance assumes stationarity and doesn't take into account all of that motion. As a result the confidence intervals are way smaller than they should be and don't contain the correct value nearly enough. This gives us a false sense of security. Stationarity tests should usually catch AR behavior and let us know that estimates of variance will be wrong. For more information please see the Integration, Cointegration, and Stationarity lecture of the Quantopian Lecture Series. 
Correcting for Variance In practice it can be very difficult to accurately estimate variance on an AR series, but one attempt to do this is the Newey-West estimation. You can find information on it here. Testing for AR Behavior In order to determine the order, $p$, of an AR$(p)$ model, we look at the autocorrelations of the time series. These are the correlations of the series with its past values. The $k$-th order autocorrelation is $$ \rho_k = \frac{COV(x_t, x_{t - k})}{\sigma_x^2} = \frac{E[(x_t - \mu)(x_{t - k} - \mu)]}{\sigma_x^2} $$ Where $k$ represents the number of periods lagged. We cannot directly observe the autocorrelations so we estimate them as $$ \hat{\rho}_k = \frac{\sum_{t = k + 1}^T[(x_t - \bar{x})(x_{t - k} - \bar{x})]}{\sum_{t = 1}^T (x_t - \bar{x})^2} $$ For our purposes, we can use a pair of tools called the autocorrelation function (ACF) and the partial autocorrelation function (PACF) in order to determine the order of our model. The PACF controls for shorter lags, unlike the ACF. These functions are included with many statistical packages and compute the sample autocorrelations for us, allowing us to determine the appropriate value of $p$. We will demonstrate these functions on our above example of a stationary series Step6: Let's plot out the values now. Step7: Statistical Testing Just looking at the graphs alone isn't enough. We need to use some degree of statistical rigor. The acf and pacf functions will return confidence intervals on all the autocorrelations. We can check if these intervals overlap with zero. If they do then we say that zero is within the set confidence interval for the true parameter value, and don't treat the lag as having any meaningful autocorrelation. NOTE Step8: After getting the confidence interval data, we'll write a function to plot it. Step9: Notice how for the PACF, only the first three lags appear to be significantly different from $0$, which makes sense because we directly constructed an AR model of order $3$. However, these results may vary for each random series generated in this notebook. In a real-world time series, we use these plots to determine the order of our model. We would then attempt to fit a model using a maximum likelihood function. Fitting a Model We'll use one of the functions already implemented in Python to fit an AR model. We'll try this on our simulated data first. Step10: The model object has a lot of useful information on it, use the ? notation to find out more. We'll be focusing on a few attributes, starting with model.params the estimated parameters in the model, one for each lag, and model.bse, the estimated standard error for each of the parameters. Step11: Choosing the Number of Lags Estimations Will Yield Too Many Lags We can see our model estimated quite a few parameters. In this case we know there are too many because we simulated the data as an AR(3) process. The reason that AR models will estimate many more lags than is actually the case is due to indirect dependency. If $X_t$ depends on $X_{t-1}$, then indirectly and to a lesser extent it will depend on $X_{t-2}$. In the presence of more than one lag in the data generating process, we will get potentially complex harmonic structures in the lags. These indirect dependencies will be picked up by a simple estimation. 
You Want the Fewest Parameters That Yield a Decent Model In general it's rarely the case that you can get anything useful out of a model with many parameters, see the Overfitting lecture for why in the Quantopian Lecture Series. In this case we want to select a number of lags that we believe explains what is happening, but without overfitting and choosing a model with way too many lags. Observing the ACF and PACF indicates that only the first 3 lags may be useful. However, we will expand the number of lags to 10 to double-check our initial data. We will use information criterion, specifically Akaike Information Criterion (AIC) and Bayes Information Criterion (BIC) to decide the correct number of parameters. For more information on choosing models using information criterion, please see the corresponding lecture in the Quantopian Lecture Series. Interpreting the AIC and BIC is done as follows. Compute the AIC and BIC for all models we wish to consider, and note the smallest AIC and BIC recorded $AIC_{min}$ and $BIC_{min}$. These are the models which minimize information loss under each metric. For each type of IC We then can compute the relative likelihood of each model $i$ by taking $$l = e^{(IC_{min} - IC_{i})/2}$$ We can interpret $l$ as model $i$ is $l$ times as likely to minimize information loss, compared to the minimum AIC model. It might take a few reads to understand this, so let's just see it in action. Step12: Our conclusion is that the AIC estimates the 4 parameter model as most likely, whereas the BIC estimates 3. Because we are always looking for reasons to knock off a parameter, we choose the 3. In this case it happened to be the exact right answer, but this will not always be the case, especially in noisy real data. Don't assume that using this method will always get you the right answer. Evaluating Residuals One final step we might do before performing an out of sample test for this model would be to evaluate its residual behavior. The AIC and BIC already do this to an extent, effectively measuring how much information is left on the table (in the residuals) after the model has made its predictions. For more information on residuals analysis see the Violations of Regression Models lecture. Here we'll just check for normality of the residuals.
Python Code: import numpy as np import pandas as pd from scipy import stats import statsmodels.api as sm import statsmodels.tsa as tsa import matplotlib.pyplot as plt # ensures experiment runs the same every time np.random.seed(100) # This function simluates an AR process, generating a new value based on historial values, # autoregressive coefficients b1 ... bk, and some randomness. def AR(b, X, mu, sigma): l = min(len(b)-1, len(X)) b0 = b[0] return b0 + np.dot(b[1:l+1], X[-l:]) + np.random.normal(mu, sigma) b = np.array([0, 0.8, 0.1, 0.05]) X = np.array([1]) mu = 0 sigma = 1 for i in range(10000): X = np.append(X, AR(b, X, mu, sigma)) plt.plot(X) plt.xlabel('Time') plt.ylabel('AR Series Value'); Explanation: Autoregressive (AR) Models by Maxwell Margenot, Delaney Mackenzie, and Lee Tobey Lee Tobey is the founder of Hedgewise. Part of the Quantopian Lecture Series: www.quantopian.com/lectures github.com/quantopian/research_public AR Models An autoregressive, or AR$(p)$, model is created by regressing a time series on its past values, its lags. The simplest form of an autoregressive model is an AR$(1)$ model, signifying using only one lag term. A first order autocorrelation model like this for a time series $x_t$ is: $$ x_t = b_0 + b_1 x_{t - 1} + \epsilon_t $$ Where $x_{t - 1}$ represents the value of the time series at time $(t - 1)$ and $\epsilon_t$ is the error term. We can extend this to an AR$(p)$ model, denoted: $$ x_t = b_0 + b_1 x_{t-1} + b_2 x_{t - 2} \ldots + b_p x_{t - p} + \epsilon_t $$ For an AR model to function properly, we must require that the time series is covariance stationary. This means that it follows three conditions: The expected value of the time series is constant and finite at all times, i.e. $E[y_t] = \mu$ and $\mu < \infty$ for all values of $t$. The variance of the time series is constant and finite for all time periods. The covariance of the time series with itself for a fixed number of periods in either the future or the past is constant and finite for all time periods, i.e $$ COV(y_t, y_{t - s}) = \lambda, \ |\lambda| < \infty, \text{ $\lambda$ constant}, \ t = 1, 2, \ \ldots, T; \ s = 0, \pm 1, \pm 2, \ldots, \pm T $$ Note that this mathematical representation includes condition 2. If these conditions are not satisfied, our estimation results will not have real-world meaning. Our estimates for the parameters will be biased, making any tests that we try to form using the model invalid. Unfortunately, it can be a real pain to find a covariance-stationary time series in the wild in financial markets. For example, when we look at the stock price of Apple, we can clearly see an upward trend. The mean is increasing with time. There are ways, however to make a non-stationary time series stationary. Once we have performed this transformation, we can build an autoregressive models under the above assumptions. Simulating Data Here we will draw data samples from a simulated AR$(3)$ process. 
End of explanation def compare_tails_to_normal(X): # Define matrix to store comparisons A = np.zeros((2,4)) for k in range(4): #stores tail probabilities of the sample series vs a normal series A[0, k] = len(X[X > (k + 1)]) / float(len(X)) # Estimate tails of X A[1, k] = 1 - stats.norm.cdf(k + 1) # Compare to Gaussian distribution print 'Frequency of std events in X \n1: %s\t2: %s\t3: %s\t4: %s' % tuple(A[0]) print 'Frequency of std events in a normal process \n1: %s\t2: %s\t3: %s\t4: %s' % tuple(A[1]) return A compare_tails_to_normal(X); Explanation: Note how this process fluctuates around some central value. This value is the mean of our time series. As we have a constant mean throughout time and the fluctuations seem to all stray within a given distance from the mean, we might hypothesize that this series is stationary. We would want to rigorously test that in practice, which we will explore lightly in the examples at the end of this lecture. Also see the stationarity lecture from the Quantopian Lecture Series. In this case, however, we have constructed the model to be stationary, so no need to worry about testing for stationarity right now. Tail Risk Autoregressive processes will tend to have more extreme values than data drawn from say a normal distribution. This is because the value at each time point is influenced by recent values. If the series randomly jumps up, it is more likely to stay up than a non-autoregressive series. This is known as 'fat-tailledness' (fat-tailed distribution) because the extremes on the pdf will be fatter than in a normal distribution. Much talk of tail risk in finance comes from the fact that tail events do occur and are hard to model due to their infrequent occurrence. If we have reason to suspect that a process is autoregressive, we should expect risk from extreme tail events and adjust accordingly. AR models are just one of the sources of tail risk, so don't assume that because a series is non-AR, it does not have tail risk. We'll check for that behavior now. End of explanation def compute_unadjusted_interval(X): T = len(X) # Compute mu and sigma MLE mu = np.mean(X) sigma = np.std(X) # Compute the bounds using standard error lower = mu - 1.96 * (sigma/np.sqrt(T)) upper = mu + 1.96 * (sigma/np.sqrt(T)) return lower, upper # We'll make a function that returns true when the computed bounds contain 0 def check_unadjusted_coverage(X): l, u = compute_unadjusted_interval(X) # Check to make sure l <= 0 <= u if l <= 0 and u >= 0: return True else: return False def simululate_AR_process(b, T): X = np.array([1]) mu = 0 sigma = 1 for i in range(T): X = np.append(X, AR(b, X, mu, sigma)) return X Explanation: Estimations of Variance Will be Wrong Because an AR process has a tail heavy and non-normal distribution of outcomes, estimates of variance on AR processes will be wrong. This is dangerous because variance is used to calculate many quantities in staistics, most importantly confidence intervals and p-values. Because the width of the confidence interval is often based on a variance estimate, we can no longer trust p-values that come from AR processes. For more information on p-values please see the Hypothesis Testing notebook in the Quantopian Lecture Series. Let's check this here. First we'll define some helper functions that compute a naive 95% confidence interval for the true value of the mean on some input series. 
End of explanation trials = 1000 outcomes = np.zeros((trials, 1)) for i in range(trials): #note these are the same values we used to generate the initial AR array Z = simululate_AR_process(np.array([0, 0.8, 0.1, 0.05]), 100) if check_unadjusted_coverage(Z): # The interval contains 0, the true value outcomes[i] = 1 else: outcomes[i] = 0 Explanation: Now we'll run an experiment 1000 times in which we compute an AR series, then estimate the mean and take a naive 95% confidence interval around it. Then we'll check if the confidence interval contains 0, the true long-term mean of our series, and record that in our outcomes array. End of explanation np.sum(outcomes) / trials Explanation: Finally let's check, if our test is calibrated correctly, then we should have the confidence interval contain 0, 95% of the time. End of explanation from statsmodels.tsa.stattools import acf, pacf X = simululate_AR_process(np.array([0, 0.8, 0.1, 0.05]), 1000) # We'll choose 40 lags. This is a bit arbitrary, but you want to include all the lags you think might # feasibly impact the current value. nlags = 40 # Note, this will produce nlags + 1 values, as we include the autocorrelation of # X[-1] with X[-1], which is trivially 1. # The reason this is done is because that is the 0th spot in the array and corresponds # to the 0th lag of X[(-1)-0]. X_acf = acf(X, nlags=nlags) print 'Autocorrelations:\n' + str(X_acf) + '\n' X_pacf = pacf(X, nlags=nlags) print 'Partial Autocorrelations:\n' + str(X_pacf) Explanation: Looks like something is severely wrong. What's going on here is that the AR series moves around a lot more, but the estimate of variance assumes stationarity and doesn't take into account all of that motion. As a result the confidence intervals are way smaller than they should be and don't contain the correct value nearly enough. This gives us a false sense of security. Stationarity tests should usually catch AR behavior and let us know that estimates of variance will be wrong. For more information please see the Integration, Cointegration, and Stationarity lecture of the Quantopian Lecture Series. Correcting for Variance In practice it can be very difficult to accurately estimate variance on an AR series, but one attempt to do this is the Newey-West estimation. You can find information on it here. Testing for AR Behavior In order to determine the order, $p$, of an AR$(p)$ model, we look at the autocorrelations of the time series. These are the correlations of the series with its past values. The $k$-th order autocorrelation is $$ \rho_k = \frac{COV(x_t, x_{t - k})}{\sigma_x^2} = \frac{E[(x_t - \mu)(x_{t - k} - \mu)]}{\sigma_x^2} $$ Where $k$ represents the number of periods lagged. We cannot directly observe the autocorrelations so we estimate them as $$ \hat{\rho}_k = \frac{\sum_{t = k + 1}^T[(x_t - \bar{x})(x_{t - k} - \bar{x})]}{\sum_{t = 1}^T (x_t - \bar{x})^2} $$ For our purposes, we can use a pair of tools called the autocorrelation function (ACF) and the partial autocorrelation function (PACF) in order to determine the order of our model. The PACF controls for shorter lags, unlike the ACF. These functions are included with many statistical packages and compute the sample autocorrelations for us, allowing us to determine the appropriate value of $p$. 
We will demonstrate these functions on our above example of a stationary series: End of explanation plt.plot(X_acf, 'ro') plt.xlabel('Lag') plt.ylabel('Autocorrelation') plt.title("ACF"); plt.plot(X_pacf, 'ro') plt.xlabel('Lag') plt.ylabel('Autocorrelation') plt.title("PACF"); Explanation: Let's plot out the values now. End of explanation # We have to set a confidence level for our intervals, we choose the standard of 95%, # corresponding with an alpha of 0.05. X_acf, X_acf_confs = acf(X, nlags=nlags, alpha=0.05) X_pacf, X_pacf_confs = pacf(X, nlags=nlags, alpha=0.05) Explanation: Statistical Testing Just looking at the graphs alone isn't enough. We need to use some degree of statistical rigor. The acf and pacf functions will return confidence intervals on all the autocorrelations. We can check if these intervals overlap with zero. If they do then we say that zero is within the set confidence interval for the true parameter value, and don't treat the lag as having any meaningful autocorrelation. NOTE: This only works if the assumptions underlying the confidence interval computations are satisfied. Please check these assumptions before you assume the test is meaningful. The assumptions will differ in every case, so please read the statistical documentation of your own test and go from there. End of explanation def plot_acf(X_acf, X_acf_confs, title='ACF'): # The confidence intervals are returned by the functions as (lower, upper) # The plotting function needs them in the form (x-lower, upper-x) errorbars = np.ndarray((2, len(X_acf))) errorbars[0, :] = X_acf - X_acf_confs[:,0] errorbars[1, :] = X_acf_confs[:,1] - X_acf plt.plot(X_acf, 'ro') plt.errorbar(range(len(X_acf)), X_acf, yerr=errorbars, fmt='none', ecolor='gray', capthick=2) plt.xlabel('Lag') plt.ylabel('Autocorrelation') plt.title(title); plot_acf(X_acf, X_acf_confs) plot_acf(X_pacf, X_pacf_confs, title='PACF') Explanation: After getting the confidence interval data, we'll write a function to plot it. End of explanation # Construct an unfitted model model = tsa.api.AR(X) # Fit it model = model.fit() Explanation: Notice how for the PACF, only the first three lags are the only ones that appear to be significantly different from $0$, which makes sense because we directly constructed an AR model of order $3$. However, these results may vary for each random series generated in this notebook. In a real-world time series, we use these plots to determine the order of our model. We would then attempt to fit a model using a maximum likelihood function. Fitting a Model We'll use one of the functions already implemented in Python to fit an AR model. We'll try this on our simulated data first. End of explanation print 'Parameters' print model.params print 'Standard Error' print model.bse # To plot this we'll need to format a confidence interval 2D array like the previous functions returned # Here is some quick code to do that model_confs = np.asarray((model.params - model.bse, model.params + model.bse)).T plot_acf(model.params, model_confs, title='Model Estimated Parameters') Explanation: The model object has a lot of useful information on it, use the ? notation to find out more. We'll be focusing on a few attributes, starting with model.params the estimated parameters in the model, one for each lag, and model.bse, the estimated standard error for each of the parameters. 
End of explanation N = 10 AIC = np.zeros((N, 1)) for i in range(N): model = tsa.api.AR(X) model = model.fit(maxlag=(i+1)) AIC[i] = model.aic AIC_min = np.min(AIC) model_min = np.argmin(AIC) print 'Relative Likelihoods' print np.exp((AIC_min-AIC) / 2) print 'Number of parameters in minimum AIC model %s' % (model_min+1) N = 10 BIC = np.zeros((N, 1)) for i in range(N): model = tsa.api.AR(X) model = model.fit(maxlag=(i+1)) BIC[i] = model.bic BIC_min = np.min(BIC) model_min = np.argmin(BIC) print 'Relative Likelihoods' print np.exp((BIC_min-BIC) / 2) print 'Number of parameters in minimum BIC model %s' % (model_min+1) Explanation: Choosing the Number of Lags Estimations Will Yield Too Many Lags We can see our model estimated quite a few parameters. In this case we know there are too many because we simulated the data as an AR(3) process. The reason that AR models will estimate many more lags than is actually the case is due to indirect dependency. If $X_t$ depends on $X_{t-1}$, then indirectly and to a lesser extent it will depend on $X_{t-2}$. In the presence of more than one lag in the data generating process, we will get potentially complex harmonic structures in the lags. These indirect dependencies will be picked up by a simple estimation. You Want the Fewest Parameters That Yield a Decent Model In general it's rarely the case that you can get anything useful out of a model with many parameters, see the Overfitting lecture for why in the Quantopian Lecture Series. In this case we want to select a number of lags that we believe explains what is happening, but without overfitting and choosing a model with way too many lags. Observing the ACF and PACF indicates that only the first 3 lags may be useful. However, we will expand the number of lags to 10 to double-check our initial data. We will use information criterion, specifically Akaike Information Criterion (AIC) and Bayes Information Criterion (BIC) to decide the correct number of parameters. For more information on choosing models using information criterion, please see the corresponding lecture in the Quantopian Lecture Series. Interpreting the AIC and BIC is done as follows. Compute the AIC and BIC for all models we wish to consider, and note the smallest AIC and BIC recorded $AIC_{min}$ and $BIC_{min}$. These are the models which minimize information loss under each metric. For each type of IC We then can compute the relative likelihood of each model $i$ by taking $$l = e^{(IC_{min} - IC_{i})/2}$$ We can interpret $l$ as model $i$ is $l$ times as likely to minimize information loss, compared to the minimum AIC model. It might take a few reads to understand this, so let's just see it in action. End of explanation model = tsa.api.AR(X) model = model.fit(maxlag=3) from statsmodels.stats.stattools import jarque_bera score, pvalue, _, _ = jarque_bera(model.resid) if pvalue < 0.10: print 'We have reason to suspect the residuals are not normally distributed.' else: print 'The residuals seem normally distributed.' Explanation: Our conclusion is that the AIC estimates the 4 parameter model as most likely, whereas the BIC estimates 3. Because we are always looking for reasons to knock off a parameter, we choose the 3. In this case it happened to be the exact right answer, but this will not always be the case, especially in noisy real data. Don't assume that using this method will always get you the right answer. 
Evaluating Residuals One final step we might do before performing an out of sample test for this model would be to evaluate its residual behavior. The AIC and BIC already do this to an extent, effectively measuring how much information is left on the table (in the residuals) after the model has made its predictions. For more information on residuals analysis see the Violations of Regression Models lecture. Here we'll just check for normality of the residuals. End of explanation
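As a worked version of the estimator $\hat{\rho}_k$ defined above, the short sketch below (an addition, not part of the original lecture) computes the sample autocorrelations of the simulated series X directly with NumPy and compares them with statsmodels' acf; small numerical differences may appear depending on the statsmodels version.

def sample_autocorrelation(x, k):
    # rho_hat_k: compare the demeaned series with itself shifted k periods back
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    numerator = np.sum((x[k:] - xbar) * (x[:len(x) - k] - xbar))
    denominator = np.sum((x - xbar) ** 2)
    return numerator / denominator

manual = np.array([sample_autocorrelation(X, k) for k in range(5)])
print 'Manual estimates: ', manual
print 'statsmodels acf:  ', acf(X, nlags=4)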
11,224
Given the following text description, write Python code to implement the functionality described below step by step Description: Github https Step3: List Comprehensions Step4: Dictionaries Python dictionaries are awesome. They are hash tables and have a lot of neat CS properties. Learn and use them well.
Python Code: # Create a [list] days = ['Monday', # multiple lines 'Tuesday', # acceptable 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', # trailing comma is fine! ] days # Simple for-loop for day in days: print(day) # Double for-loop for day in days: for letter in day: print(letter) print(days) print(*days) # Double for-loop for day in days: for letter in day: print(letter) print() for day in days: for letter in day: print(letter.lower()) Explanation: Github https://github.com/jbwhit/OSCON-2015/commit/6750b962606db27f69162b802b5de4f84ac916d5 A few Python Basics End of explanation length_of_days = [len(day) for day in days] length_of_days letters = [letter for day in days for letter in day] print(letters) letters = [letter for day in days for letter in day] print(letters) [num for num in xrange(10) if num % 2] [num for num in xrange(10) if num % 2 else "doesn't work"] [num if num % 2 else "works" for num in xrange(10)] [num for num in xrange(10)] sorted_letters = sorted([x.lower() for x in letters]) print(sorted_letters) unique_sorted_letters = sorted(set(sorted_letters)) print("There are", len(unique_sorted_letters), "unique letters in the days of the week.") print("They are:", ''.join(unique_sorted_letters)) print("They are:", '; '.join(unique_sorted_letters)) def first_three(input_string): Takes an input string and returns the first 3 characters. return input_string[:3] import numpy as np # tab np.linspace() [first_three(day) for day in days] def last_N(input_string, number=2): Takes an input string and returns the last N characters. return input_string[-number:] [last_N(day, 4) for day in days if len(day) > 6] from math import pi print([str(round(pi, i)) for i in xrange(2, 9)]) list_of_lists = [[i, round(pi, i)] for i in xrange(2, 9)] print(list_of_lists) for sublist in list_of_lists: print(sublist) # Let this be a warning to you! # If you see python code like the following in your work: for x in range(len(list_of_lists)): print("Decimals:", list_of_lists[x][0], "expression:", list_of_lists[x][1]) print(list_of_lists) # Change it to look more like this: for decimal, rounded_pi in list_of_lists: print("Decimals:", decimal, "expression:", rounded_pi) # enumerate if you really need the index for index, day in enumerate(days): print(index, day) Explanation: List Comprehensions End of explanation from IPython.display import IFrame, HTML HTML('<iframe src=https://en.wikipedia.org/wiki/Hash_table width=100% height=550></iframe>') fellows = ["Jonathan", "Alice", "Bob"] universities = ["UCSD", "UCSD", "Vanderbilt"] for x, y in zip(fellows, universities): print(x, y) # Don't do this {x: y for x, y in zip(fellows, universities)} # Doesn't work like you might expect {zip(fellows, universities)} dict(zip(fellows, universities)) fellows fellow_dict = {fellow.lower(): university for fellow, university in zip(fellows, universities)} fellow_dict fellow_dict['bob'] rounded_pi = {i:round(pi, i) for i in xrange(2, 9)} rounded_pi[5] sum([i ** 2 for i in range(10)]) sum(i ** 2 for i in range(10)) huh = (i ** 2 for i in range(10)) huh.next() Explanation: Dictionaries Python dictionaries are awesome. They are hash tables and have a lot of neat CS properties. Learn and use them well. End of explanation
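To make the earlier remark that {zip(fellows, universities)} "doesn't work like you might expect" concrete, here is a small illustrative sketch (not from the original notebook): braces without key-value pairs build a set, so you get a set of tuples (or the zip object itself) rather than a mapping, whereas dict() or a dict comprehension gives you an actual lookup table.

as_set = {tuple(pair) for pair in zip(fellows, universities)}   # a set of (name, school) tuples -- membership tests only
as_dict = dict(zip(fellows, universities))                      # a real mapping with key -> value lookup

print(as_set)
print(as_dict["Alice"])   # 'UCSD': lookup by key only works on the dict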
11,225
Given the following text description, write Python code to implement the functionality described below step by step Description: Finding Lane Lines on the Road In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right. Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image. Note If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output". The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below. <figure> <img src="line-segments-example.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align Step8: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are Step9: Test on Images Now you should build your pipeline to work on the images in the directory "test_images" You should make sure your pipeline works well on these images before you try the videos. Step10: run your solution on all test_images and make copies into the test_images directory). Test on Videos You know what's cooler than drawing lanes over images? Drawing lanes over video! We can test our solution on two provided videos Step11: Let's try the one with the solid white lane on the right first ... Step13: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. Step15: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline. Now for the one with the solid yellow lane on the left. This one's more tricky! Step17: Reflections Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? 
Where will your current algorithm be likely to fail? Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below! Submission If you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review. Optional Challenge Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! The current pipeline doesn't work with this video for several reasons
Python Code: #importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 %matplotlib inline #reading in an image image = mpimg.imread('test_images/solidWhiteRight.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimesions:', image.shape) plt.imshow(image) #call as plt.imshow(gray, cmap='gray') to show a grayscaled image Explanation: Finding Lane Lines on the Road In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right. Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image. Note If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output". The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below. <figure> <img src="line-segments-example.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> </figcaption> </figure> <p></p> <figure> <img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p> </figcaption> </figure> End of explanation import math def grayscale(img): Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale you should call plt.imshow(gray, cmap='gray') return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): Applies the Canny transform return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): Applies a Gaussian Noise kernel return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. 
#defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def draw_lines_naive(img, lines, color=[255, 0, 0], thickness=2): for line in lines: for x1,y1,x2,y2 in line: cv2.line(img, (x1, y1), (x2, y2), color, thickness) def draw_lines(img, lines, color=[255, 0, 0], thickness=10): NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the line segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4). Think about things like separating line segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below lanes = [ [] for i in range(2) ] mx = img.shape[1]/2 CY = 320 for line in lines: for x1,y1,x2,y2 in line: slope = ((y2-y1)/(x2-x1)) points = [[x1, y1], [x2, y2]] if (slope > 0 and x1 > mx and x2 > mx): lanes[1].extend(points) elif (slope < 0 and x1 < mx and x2 < mx): lanes[0].extend(points) for i in range(len(lanes)): if (len(lanes[i])): # least minimum squares to find the best fitting m,b parameters x = np.array([p[0] for p in lanes[i]]) y = np.array([p[1] for p in lanes[i]]) A = np.vstack([x, np.ones(len(x))]).T m, b = np.linalg.lstsq(A, y)[0] if (abs(m) > 0.1): x_low = int((image.shape[0] - b)/m) x_high = int((CY - b)/m) cv2.line(img, (x_low, image.shape[0]), (x_high, CY), color, thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): `img` should be the output of a Canny transform. Returns an image with hough lines drawn. lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((*img.shape, 3), dtype=np.uint8) draw_lines(line_img, lines) return line_img # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., λ=0.): `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + λ NOTE: initial_img and img must be the same shape! 
return cv2.addWeighted(initial_img, α, img, β, λ) Explanation: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are: cv2.inRange() for color selection cv2.fillPoly() for regions selection cv2.line() to draw lines on an image given endpoints cv2.addWeighted() to coadd / overlay two images cv2.cvtColor() to grayscale or change color cv2.imwrite() to output images to file cv2.bitwise_and() to apply a mask to an image Check out the OpenCV documentation to learn about these and discover even more awesome functionality! Below are some helper functions to help get you started. They should look familiar from the lesson! End of explanation import os os.listdir("test_images/") Explanation: Test on Images Now you should build your pipeline to work on the images in the directory "test_images" You should make sure your pipeline works well on these images before you try the videos. End of explanation # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # NOTE: The output you return should be a color image (3 channel) for processing video below # TODO: put your pipeline here, # you should return the final output (image with lines are drawn on lanes) h, w, c = image.shape H, W = 540, 960 #this helps in examples like 'challenge.mp4', where the video is larger than expected #crops the image centered if (w > W or h > H): OFF_H = (h - H)/2 OFF_W = (w - W)/2 image = image[h-OFF_H-H:h-OFF_H, w-OFF_W-W:w-OFF_W] gray = grayscale(image) gauss_img = gaussian_blur(gray, 5) canny_img = canny(gauss_img, 50, 150) vertices = [[450, 320], [520, 320], [W, H], [0, H]]; interest_img = region_of_interest(canny_img, np.int32([vertices])) #maybe something like this could help smooth for the different colors in the highway? #interest_img = gaussian_blur(interest_img, 7) hough_img = hough_lines(interest_img, 2, np.pi/180, 80, 120, 40) result = weighted_img(image, hough_img) return result #reading in an image image = mpimg.imread('test_images/solidYellowLeft.jpg') plt.imshow(process_image(image)) import os directory = "test_images" files = os.listdir(directory) for f in files: if ((not f.startswith("new")) and f.endswith(".jpg")): image = mpimg.imread(directory + "/"+f) new_image = process_image(image) plt.imshow(new_image) plt.savefig(directory + "/" + "new_"+f) Explanation: run your solution on all test_images and make copies into the test_images directory). Test on Videos You know what's cooler than drawing lanes over images? Drawing lanes over video! We can test our solution on two provided videos: solidWhiteRight.mp4 solidYellowLeft.mp4 End of explanation white_output = 'white.mp4' clip1 = VideoFileClip("solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! %time white_clip.write_videofile(white_output, audio=False) Explanation: Let's try the one with the solid white lane on the right first ... End of explanation HTML( <video width="960" height="540" controls> <source src="{0}"> </video> .format(white_output)) Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. 
End of explanation yellow_output = 'yellow.mp4' clip2 = VideoFileClip('solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML( <video width="960" height="540" controls> <source src="{0}"> </video> .format(yellow_output)) Explanation: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline. Now for the one with the solid yellow lane on the left. This one's more tricky! End of explanation challenge_output = 'extra.mp4' clip2 = VideoFileClip('challenge.mp4') challenge_clip = clip2.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML( <video width="960" height="540" controls> <source src="{0}"> </video> .format(challenge_output)) Explanation: Reflections Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail? Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below! Submission If you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review. Optional Challenge Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! The current pipeline doesn't work with this video for several reasons: 1) The video size is bigger than expected (1280x720 instead of 960x540) 2) The video shows part of the car's hood (with its edges being detected) (video is not centered in the same fashion) 3) The highway has different 'patches' with different colors, leading to even more edges detected To improve the pipeline's robustness, these things among others should be addressed. A possible strategy would be: 1) and 2) can be address with cropping centered (see process_image) 3) should be address in a different way (maybe a low/band/high pass filter?, I tried applying a Gaussian filter after selecting the region of interest with little result) End of explanation
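One possible way to attack point 3 above (the highway patches with different pavement colors) is to mask the image by color before edge detection. The function below is a hypothetical extension, not part of the submitted pipeline: it keeps only white-ish and yellow-ish pixels using OpenCV's HLS color space, and the threshold values are rough assumed starting points that would need tuning on the actual footage.

def color_mask(image):
    # Keep only white and yellow pixels; everything else is blacked out.
    hls = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
    white = cv2.inRange(hls, np.uint8([0, 200, 0]), np.uint8([255, 255, 255]))    # high lightness
    yellow = cv2.inRange(hls, np.uint8([10, 0, 100]), np.uint8([40, 255, 255]))   # yellow-ish hue, reasonably saturated
    mask = cv2.bitwise_or(white, yellow)
    return cv2.bitwise_and(image, image, mask=mask)

# It could be called at the top of process_image, e.g. image = color_mask(image),
# before the grayscale / Gaussian blur / Canny steps.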
11,226
Given the following text description, write Python code to implement the functionality described below step by step Description: TensorFlow Serving in 10 minutes! TensorFlow SERVING is Googles' recommended way to deploy TensorFlow models. Without proper computer engineering background, it can be quite intimidating, even for people who feel comfortable with TensorFlow itself. Few things that I've found particularly hard were Step1: Additionally, we declare pred value, which is the actual prediction. Step2: We download training examples and train the model. Step3: Let's make sure that everything is working as expected. We have our number. Step4: Make sure that our model can efficiently predict it Step5: Now we want to save this model, and serve it with TensorFlow Serving. We define the path where we store the weights and the model version. Please note that you would need to increment VERSION number and re-create your graph (restart this notebook) if you want to save another model. Step6: And here we are saving the actual weights. Step7: Let's make sure the weights were saved correctly Step8: Services When this Docker Image started, it run example_jupyter/setup.sh. It started following services Step9: REST request Following part can run independently from what happened before - so you can run it in a different notebook, or on even on the host machine. Here's an example of a function we can use to query our model via REST. Step10: Let's make sure our train data still makes sense Step11: Running prediction And finally - let's run a prediction on TensorFlow Serving! Step12: It's easy to extract the actual prediction value from here
Python Code: import tensorflow as tf x = tf.placeholder(tf.float32, shape=[None, 784]) y_ = tf.placeholder(tf.float32, shape=[None, 10]) W = tf.Variable(tf.zeros([784,10])) b = tf.Variable(tf.zeros([10])) y = tf.matmul(x,W) + b cross_entropy = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)) train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) Explanation: TensorFlow Serving in 10 minutes! TensorFlow SERVING is Googles' recommended way to deploy TensorFlow models. Without proper computer engineering background, it can be quite intimidating, even for people who feel comfortable with TensorFlow itself. Few things that I've found particularly hard were: - Tutorial examples have C++ code (which I don't know) - Tutorials have Kubernetes, gRPC, Bezel (some of which I saw for the first time) - It needs to be compiled. That process takes forever! After all, it worked just fine. Here I present an easiest possible way to deploy your models with TensorFlow Serving. You will have your self-built model running inside TF-Serving by the end of this tutorial. It will be scalable, and you will be able to query it via REST. The Tutorial uses the Docker image. You can use Kitematic to start the image: avloss/tensorflow-serving-rest. At first, I tried building it on "DockerHub" - but it hit the limit of 2 hours, so I had to use https://quay.io. I've uploaded finished result to DockerHub manually, but please feel free to pull from https://quay.io/repository/avloss/tensorflow-serving-rest in case you want to make sure it's what's it says it is. You can start Docker Container from Kitematic, or use this command from console: docker run --rm -it -p 8888:8888 -p 9000:9000 -p 8915:8915 quay.io/avloss/tensorflow-serving-rest Once it's running, please navigate to http://localhost:8888/notebooks/tf_serving_rest_example.ipynb. (Use different port if using Kitematic) From here it's best to continue from within Jupyter Notebook! To demonstrate how it's working, we are going to use the typical MNIST example from the official TF tutorial page: https://www.tensorflow.org/get_started/mnist/pros We instantiate a standard model. End of explanation pred = tf.argmax(y,axis=1) Explanation: Additionally, we declare pred value, which is the actual prediction. End of explanation from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', one_hot=True) sess = tf.InteractiveSession() sess.run(tf.global_variables_initializer()) for _ in range(1000): batch = mnist.train.next_batch(100) train_step.run(feed_dict={x: batch[0], y_: batch[1]}) Explanation: We download training examples and train the model. End of explanation %matplotlib inline import matplotlib.pyplot as plt number = mnist.train.next_batch(1)[0] plt.imshow(number.reshape(28,28)) Explanation: Let's make sure that everything is working as expected. We have our number. End of explanation sess.run(pred,feed_dict={x: number})[0] Explanation: Make sure that our model can efficiently predict it: This example works 99% of the time! ;-) End of explanation EXPORT_PATH = "/tmp/models" VERSION=1 Explanation: Now we want to save this model, and serve it with TensorFlow Serving. We define the path where we store the weights and the model version. Please note that you would need to increment VERSION number and re-create your graph (restart this notebook) if you want to save another model. 
End of explanation from tensorflow.contrib.session_bundle import exporter saver = tf.train.Saver(sharded=True) model_exporter = exporter.Exporter(saver) model_exporter.init( sess.graph.as_graph_def(), named_graph_signatures={ 'inputs': exporter.generic_signature({'x': x}), 'outputs': exporter.generic_signature({'pred': pred})}) model_exporter.export(EXPORT_PATH, tf.constant(VERSION), sess) Explanation: And here we are saving the actual weights. End of explanation !ls -lhR /tmp/models Explanation: Let's make sure the weights were saved correctly End of explanation !tail -n2 /tmp/models/model.log Explanation: Services When this Docker Image started, it run example_jupyter/setup.sh. It started following services: jupyter notebook This is jupyter notebook which we are using right now. /serving/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server This is TF Model Server running. It came as a part of TF-Serving standard distribution. It serves models using gRPC protocol. /serving/bazel-bin/tensorflow_serving/example/flask_client I've added this Flask application to convert REST requests into gPRC requests. Perhaps this takes away from speed, but at least it's clear what is going on - you can find the code here: tensorflow_serving/example/flask_client.py. Let's check TF Model Server. Until now it waited idly for a model to appear in that folder. We can now check the logs to make sure it recognised and loaded the model we just saved: End of explanation import numpy as np import cPickle as pickle import requests def test_flask_client(x): URL = "http://localhost:8915/model_prediction" s = pickle.dumps({"x":x}, protocol=0) DATA = {"model_name": "default", "input": requests.utils.quote(s)} r = requests.get(URL, data=DATA) return r.json() Explanation: REST request Following part can run independently from what happened before - so you can run it in a different notebook, or on even on the host machine. Here's an example of a function we can use to query our model via REST. End of explanation %matplotlib inline import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', one_hot=True) number = mnist.train.next_batch(1)[0] plt.imshow(number.reshape(28,28)) Explanation: Let's make sure our train data still makes sense: End of explanation test_flask_client(number) Explanation: Running prediction And finally - let's run a prediction on TensorFlow Serving! End of explanation int(test_flask_client(number)["outputs"]["pred"]["int64Val"][0]) Explanation: It's easy to extract the actual prediction value from here End of explanation
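As a small end-to-end check (an addition, not in the original write-up), the helper below reuses test_flask_client together with the in-notebook session to verify that the served model and the local graph agree on a few freshly drawn digits; it assumes the Flask shim is still listening on port 8915 and that sess, pred, x and mnist are still defined.

def compare_served_vs_local(n=5):
    matches = 0
    for _ in range(n):
        digit = mnist.train.next_batch(1)[0]
        local = sess.run(pred, feed_dict={x: digit})[0]
        served = int(test_flask_client(digit)["outputs"]["pred"]["int64Val"][0])
        print "local:", local, "  served:", served
        if local == served:
            matches += 1
    print "%d out of %d predictions agree" % (matches, n)

compare_served_vs_local()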
11,227
Given the following text description, write Python code to implement the functionality described below step by step Description: Factory methods Factory methods are the primary way of creating useful geometries fast. They form an abstraction level up from knot vectors and control-points to give a cleaner simpler interface. The factory methods need to be imported. Step1: In addition to the splipy libraries, we will include matplotlib to use as our plotting tool. For convenience, we will create a few plotting functions, which will make the rest of the code that much shorter. The details on the plotting commands are of secondary nature as it mostly rely on matplotlib-specific things. For a more comprehensive introduction into these functions, consider reading Matplotlib for 3D or Pyplot (2D). Step2: Curves The traditional construction technique is to start bottoms up and create curves. Surfaces are then created by the manipulation of curves, and finally volumes from the manipulation of surfaces. Lines and polygons Step3: Circles and Circle segments Step4: Bezier curves Step5: Cubic Spline Interpolation Assume we are given a set of points that we want to fit a curve to. This set can either be a measured set of points, or one given from a parametrized curve. For moderatly sized datasets, it is convenient to do curve interpolation on these points Step6: Note that there exist a multitude of other interpolation algorithms that you can use, for instance Hermite interpolation or general spline interpolation. Spline curve approximations If the number of points to interpolate grow too large, it is impractical to create a global interpolation problem. For these cases, we may use a least square fit approximation Step7: Surfaces simple 2D shapes Step8: Sphere Step9: Torus Step10: Cylinder
Python Code: import splipy as sp import numpy as np import splipy.curve_factory as curve_factory import splipy.surface_factory as surface_factory import splipy.volume_factory as volume_factory Explanation: Factory methods Factory methods are the primary way of creating useful geometries fast. They form an abstraction level up from knot vectors and control-points to give a cleaner simpler interface. The factory methods need to be imported. End of explanation import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D def plot_2D_curve(curve, show_controlpoints=False): t = np.linspace(curve.start(), curve.end(), 150) x = curve(t) plt.plot(x[:,0], x[:,1]) if(show_controlpoints): plt.plot(curve[:,0], curve[:,1], 'rs-') plt.axis('equal') plt.show() def plot_2D_surface(surface): u = np.linspace(surface.start('u'), surface.end('u'), 30) v = np.linspace(surface.start('v'), surface.end('v'), 30) x = surface(u,v) plt.plot(x[:,:,0], x[:,:,1], 'k-') plt.plot(x[:,:,0].T, x[:,:,1].T, 'k-') plt.axis('equal') plt.show() def plot_3D_curve(curve): t = np.linspace(curve.start(), curve.end(), 150) x = curve(t) fig = plt.gcf() ax = fig.add_subplot(111, projection='3d') ax.plot(x[:,0], x[:,1], x[:,2]) plt.show() def plot_3D_surface(surface): u = np.linspace(surface.start('u'), surface.end('u'), 30) v = np.linspace(surface.start('v'), surface.end('v'), 30) x = surface(u,v) fig = plt.gcf() ax = fig.add_subplot(111, projection='3d') ax.plot_surface( x[:,:,0], x[:,:,1], x[:,:,2]) ax.plot_wireframe(x[:,:,0], x[:,:,1], x[:,:,2], edgecolor='k', linewidth=1) plt.show() def plot_3D_volumes(volume): fig = plt.gcf() ax = fig.add_subplot(111, projection='3d') for face in volume.edges(): u = np.linspace(face.start('u'), face.end('u'), 30) v = np.linspace(face.start('v'), face.end('v'), 30) x = face(u,v) ax.plot_surface( x[:,:,0], x[:,:,1], x[:,:,2]) ax.plot_wireframe(x[:,:,0], x[:,:,1], x[:,:,2], edgecolor='k', linewidth=1) plt.show() Explanation: In addition to the splipy libraries, we will include matplotlib to use as our plotting tool. For convenience, we will create a few plotting functions, which will make the rest of the code that much shorter. The details on the plotting commands are of secondary nature as it mostly rely on matplotlib-specific things. For a more comprehensive introduction into these functions, consider reading Matplotlib for 3D or Pyplot (2D). End of explanation line = curve_factory.line([0,1], [1,0]) plt.title('curve_factory.line') plot_2D_curve(line) line_segment = curve_factory.polygon([0,1], [3,0], [3,3], [0,2], [0,4]) plt.title('curve_factory.polygon') plot_2D_curve(line_segment) Explanation: Curves The traditional construction technique is to start bottoms up and create curves. Surfaces are then created by the manipulation of curves, and finally volumes from the manipulation of surfaces. Lines and polygons End of explanation circle = curve_factory.circle(r=2.0) plt.title('curve_factory.circle') plot_2D_curve(circle) circle_seg = curve_factory.circle_segment(theta=5*np.pi/4, r=2.0) plt.title('curve_factory.circle_segment') plot_2D_curve(circle_seg) ngon = curve_factory.n_gon(5) plt.title('curve_factory.n_gon(5)') plot_2D_curve(ngon) Explanation: Circles and Circle segments End of explanation bezier = curve_factory.bezier([[0,0], [0,2], [2,2], [2,0]]) plt.title('curve_factory.bezier') plot_2D_curve(bezier, show_controlpoints=True) Explanation: Bezier curves End of explanation # We will create a B-spline curve approximation of a helix (spiral/spring). 
t = np.linspace(0,6*np.pi, 50) # generate 50 points which we will interpolate x = np.array([np.cos(t), np.sin(t), t]) curve = curve_factory.cubic_curve(x.T) # transpose input so x[i,:] is one (x,y,z)-interpolation point plt.title('curve_facfiguretory.cubic_curve') plot_3D_curve(curve) Explanation: Cubic Spline Interpolation Assume we are given a set of points that we want to fit a curve to. This set can either be a measured set of points, or one given from a parametrized curve. For moderatly sized datasets, it is convenient to do curve interpolation on these points End of explanation # We will create a B-spline curve approximation of the trefoil knot t = np.linspace(0,2*np.pi, 5000) # 5000 points is far too many than we need to represent this smooth shape x = np.array([np.sin(t) + 2*np.sin(2*t), np.cos(t) - 2*np.cos(2*t), -np.sin(3*t)]) # create the basis onto which we will fit our curve basis = sp.BSplineBasis(4, [-1,0,0,0,1,2,3,4,5,6,7,8,9,9,9,10],0) t = t/2.0/np.pi * 9 # scale evaluation points so they lie on the parametric space [0,9] curve = curve_factory.least_square_fit(x.T, basis, t) # transpose input so x[i,:] is one (x,y,z)-interpolation point plt.title('curve_facfiguretory.least_square_fit') plot_3D_curve(curve) plot_2D_curve(curve, show_controlpoints=True) Explanation: Note that there exist a multitude of other interpolation algorithms that you can use, for instance Hermite interpolation or general spline interpolation. Spline curve approximations If the number of points to interpolate grow too large, it is impractical to create a global interpolation problem. For these cases, we may use a least square fit approximation End of explanation square = surface_factory.square(size=2) plt.title('surface_factory.square') plot_2D_surface(square) disc = surface_factory.disc(type='radial') plt.title('surface_factory.disc(type=radial)') plot_2D_surface(disc) disc = surface_factory.disc(type='square') plt.title('surface_factory.disc(type=square)') plot_2D_surface(disc) Explanation: Surfaces simple 2D shapes End of explanation sphere = surface_factory.sphere() plt.title('surface_factory.sphere') plot_3D_surface(sphere) Explanation: Sphere End of explanation torus = surface_factory.torus() plt.title('surface_factory.torus') plot_3D_surface(torus) Explanation: Torus End of explanation cylinder = surface_factory.cylinder() plt.figure() plt.title('surface_factory.cylinder') plot_3D_surface(cylinder) Explanation: Cylinder End of explanation
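A quick numerical check of the factory output above: evaluating the cylinder surface on a parameter grid and measuring the distance of every point from the z-axis should give a (near-)constant value. This is a minimal sketch that reuses only calls already shown above (surface_factory.cylinder, start/end, and direct evaluation of the surface); the unit radius and z-axis orientation of the default cylinder are assumptions.

import numpy as np
import splipy.surface_factory as surface_factory

cyl = surface_factory.cylinder()

# evaluate on a coarse parameter grid, as in the plotting helpers above
u = np.linspace(cyl.start('u'), cyl.end('u'), 20)
v = np.linspace(cyl.start('v'), cyl.end('v'), 20)
pts = cyl(u, v)                      # array of shape (20, 20, 3)

# distance of each evaluated point from the z-axis
r = np.sqrt(pts[:, :, 0]**2 + pts[:, :, 1]**2)
print(r.min(), r.max())              # expected to be essentially constant (~1.0 for a unit cylinder)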
11,228
Given the following text description, write Python code to implement the functionality described below step by step Description: You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource. Assignment 1 - Introduction to Machine Learning For this assignment, you will be using the Breast Cancer Wisconsin (Diagnostic) Database to create a classifier that can help diagnose patients. First, read through the description of the dataset (below). Step1: The object returned by load_breast_cancer() is a scikit-learn Bunch object, which is similar to a dictionary. Step2: Question 0 (Example) How many features does the breast cancer dataset have? This function should return an integer. Step3: Question 1 Scikit-learn works with lists, numpy arrays, scipy-sparse matrices, and pandas DataFrames, so converting the dataset to a DataFrame is not necessary for training this model. Using a DataFrame does however help make many things easier such as munging data, so let's practice creating a classifier with a pandas DataFrame. Convert the sklearn.dataset cancer to a DataFrame. *This function should return a (569, 31) DataFrame with * *columns = * ['mean radius', 'mean texture', 'mean perimeter', 'mean area', 'mean smoothness', 'mean compactness', 'mean concavity', 'mean concave points', 'mean symmetry', 'mean fractal dimension', 'radius error', 'texture error', 'perimeter error', 'area error', 'smoothness error', 'compactness error', 'concavity error', 'concave points error', 'symmetry error', 'fractal dimension error', 'worst radius', 'worst texture', 'worst perimeter', 'worst area', 'worst smoothness', 'worst compactness', 'worst concavity', 'worst concave points', 'worst symmetry', 'worst fractal dimension', 'target'] *and index = * RangeIndex(start=0, stop=569, step=1) Step4: Question 2 What is the class distribution? (i.e. how many instances of malignant (encoded 0) and how many benign (encoded 1)?) This function should return a Series named target of length 2 with integer values and index = ['malignant', 'benign'] Step5: Question 3 Split the DataFrame into X (the data) and y (the labels). This function should return a tuple of length 2 Step6: Question 4 Using train_test_split, split X and y into training and test sets (X_train, X_test, y_train, and y_test). Set the random number generator state to 0 using random_state=0 to make sure your results match the autograder! This function should return a tuple of length 4 Step7: Question 5 Using KNeighborsClassifier, fit a k-nearest neighbors (knn) classifier with X_train, y_train and using one nearest neighbor (n_neighbors = 1). *This function should return a * sklearn.neighbors.classification.KNeighborsClassifier. Step8: Question 6 Using your knn classifier, predict the class label using the mean value for each feature. Hint Step9: Question 7 Using your knn classifier, predict the class labels for the test set X_test. This function should return a numpy array with shape (143,) and values either 0.0 or 1.0. Step10: Question 8 Find the score (mean accuracy) of your knn classifier using X_test and y_test. This function should return a float between 0 and 1 Step11: Optional plot Try using the plotting function below to visualize the differet predicition scores between training and test sets, as well as malignant and benign cells.
Python Code: import numpy as np import pandas as pd from sklearn.datasets import load_breast_cancer cancer = load_breast_cancer() # print(cancer.DESCR) Explanation: You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource. Assignment 1 - Introduction to Machine Learning For this assignment, you will be using the Breast Cancer Wisconsin (Diagnostic) Database to create a classifier that can help diagnose patients. First, read through the description of the dataset (below). End of explanation cancer.keys() Explanation: The object returned by load_breast_cancer() is a scikit-learn Bunch object, which is similar to a dictionary. End of explanation # You should write your whole answer within the function provided. The autograder will call # this function and compare the return value against the correct solution value def answer_zero(): # This function returns the number of features of the breast cancer dataset, which is an integer. # The assignment question description will tell you the general format the autograder is expecting return len(cancer['feature_names']) # You can examine what your function returns by calling it in the cell. If you have questions # about the assignment formats, check out the discussion forums for any FAQs answer_zero() Explanation: Question 0 (Example) How many features does the breast cancer dataset have? This function should return an integer. End of explanation def answer_one(): # Your code here columns = cancer['feature_names'] columns = np.append(columns, ["target"]) index = range(0, 569, 1) cancerdf = pd.DataFrame(data=np.c_[cancer.data, cancer.target], columns=columns, index=index) return cancerdf answer_one() Explanation: Question 1 Scikit-learn works with lists, numpy arrays, scipy-sparse matrices, and pandas DataFrames, so converting the dataset to a DataFrame is not necessary for training this model. Using a DataFrame does however help make many things easier such as munging data, so let's practice creating a classifier with a pandas DataFrame. Convert the sklearn.dataset cancer to a DataFrame. *This function should return a (569, 31) DataFrame with * *columns = * ['mean radius', 'mean texture', 'mean perimeter', 'mean area', 'mean smoothness', 'mean compactness', 'mean concavity', 'mean concave points', 'mean symmetry', 'mean fractal dimension', 'radius error', 'texture error', 'perimeter error', 'area error', 'smoothness error', 'compactness error', 'concavity error', 'concave points error', 'symmetry error', 'fractal dimension error', 'worst radius', 'worst texture', 'worst perimeter', 'worst area', 'worst smoothness', 'worst compactness', 'worst concavity', 'worst concave points', 'worst symmetry', 'worst fractal dimension', 'target'] *and index = * RangeIndex(start=0, stop=569, step=1) End of explanation def answer_two(): cancerdf = answer_one() series = cancerdf['target'] malignant = series[series == 0] benign = series[series == 1] target = pd.Series(np.array([len(malignant), len(benign)]), index=['malignant', 'benign']) return target answer_two() Explanation: Question 2 What is the class distribution? (i.e. how many instances of malignant (encoded 0) and how many benign (encoded 1)?) 
This function should return a Series named target of length 2 with integer values and index = ['malignant', 'benign'] End of explanation def answer_three(): cancerdf = answer_one() X = cancerdf.iloc[:,:30] y = cancerdf["target"] return X, y answer_three() Explanation: Question 3 Split the DataFrame into X (the data) and y (the labels). This function should return a tuple of length 2: (X, y), where * X has shape (569, 30) * y has shape (569,). End of explanation from sklearn.model_selection import train_test_split def answer_four(): X, y = answer_three() X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) return X_train, X_test, y_train, y_test answer_four() Explanation: Question 4 Using train_test_split, split X and y into training and test sets (X_train, X_test, y_train, and y_test). Set the random number generator state to 0 using random_state=0 to make sure your results match the autograder! This function should return a tuple of length 4: (X_train, X_test, y_train, y_test), where * X_train has shape (426, 30) * X_test has shape (143, 30) * y_train has shape (426,) * y_test has shape (143,) End of explanation from sklearn.neighbors import KNeighborsClassifier def answer_five(): X_train, X_test, y_train, y_test = answer_four() knn = KNeighborsClassifier(n_neighbors=1) knn.fit(X_train, y_train) return knn answer_five() Explanation: Question 5 Using KNeighborsClassifier, fit a k-nearest neighbors (knn) classifier with X_train, y_train and using one nearest neighbor (n_neighbors = 1). *This function should return a * sklearn.neighbors.classification.KNeighborsClassifier. End of explanation def answer_six(): cancerdf = answer_one() means = cancerdf.mean()[:-1].values.reshape(1, -1) knn = answer_five() score = knn.predict(means) return score answer_six() Explanation: Question 6 Using your knn classifier, predict the class label using the mean value for each feature. Hint: You can use cancerdf.mean()[:-1].values.reshape(1, -1) which gets the mean value for each feature, ignores the target column, and reshapes the data from 1 dimension to 2 (necessary for the precict method of KNeighborsClassifier). This function should return a numpy array either array([ 0.]) or array([ 1.]) End of explanation def answer_seven(): X_train, X_test, y_train, y_test = answer_four() knn = answer_five() np_val = [] for row in X_test.iterrows(): np_val.append(knn.predict(row[1].values.reshape(1,-1))[0]) return np_val answer_seven() Explanation: Question 7 Using your knn classifier, predict the class labels for the test set X_test. This function should return a numpy array with shape (143,) and values either 0.0 or 1.0. End of explanation def answer_eight(): X_train, X_test, y_train, y_test = answer_four() knn = answer_five() score = knn.score(X_test, y_test) return score answer_eight() Explanation: Question 8 Find the score (mean accuracy) of your knn classifier using X_test and y_test. This function should return a float between 0 and 1 End of explanation def accuracy_plot(): import matplotlib.pyplot as plt %matplotlib notebook X_train, X_test, y_train, y_test = answer_four() # Find the training and testing accuracies by target value (i.e. 
malignant, benign) mal_train_X = X_train[y_train==0] mal_train_y = y_train[y_train==0] ben_train_X = X_train[y_train==1] ben_train_y = y_train[y_train==1] mal_test_X = X_test[y_test==0] mal_test_y = y_test[y_test==0] ben_test_X = X_test[y_test==1] ben_test_y = y_test[y_test==1] knn = answer_five() scores = [knn.score(mal_train_X, mal_train_y), knn.score(ben_train_X, ben_train_y), knn.score(mal_test_X, mal_test_y), knn.score(ben_test_X, ben_test_y)] plt.figure() # Plot the scores as a bar chart bars = plt.bar(np.arange(4), scores, color=['#4c72b0','#4c72b0','#55a868','#55a868']) # directly label the score onto the bars for bar in bars: height = bar.get_height() plt.gca().text(bar.get_x() + bar.get_width()/2, height*.90, '{0:.{1}f}'.format(height, 2), ha='center', color='w', fontsize=11) # remove all the ticks (both axes), and tick labels on the Y axis plt.tick_params(top='off', bottom='off', left='off', right='off', labelleft='off', labelbottom='on') # remove the frame of the chart for spine in plt.gca().spines.values(): spine.set_visible(False) plt.xticks([0,1,2,3], ['Malignant\nTraining', 'Benign\nTraining', 'Malignant\nTest', 'Benign\nTest'], alpha=0.8); plt.title('Training and Test Accuracies for Malignant and Benign Cells', alpha=0.8) # Uncomment the plotting function to see the visualization, # Comment out the plotting function when submitting your notebook for grading accuracy_plot() Explanation: Optional plot Try using the plotting function below to visualize the differet predicition scores between training and test sets, as well as malignant and benign cells. End of explanation
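The assignment above fixes n_neighbors = 1; to see how sensitive the test accuracy is to that choice, one can reuse the split from answer_four() and score a classifier for a few neighbor counts. A minimal sketch; the particular k values are arbitrary and not part of the graded questions.

from sklearn.neighbors import KNeighborsClassifier

X_train, X_test, y_train, y_test = answer_four()

# compare test-set accuracy for several neighbor counts
for k in [1, 3, 5, 10, 20]:
    knn_k = KNeighborsClassifier(n_neighbors=k)
    knn_k.fit(X_train, y_train)
    print(k, knn_k.score(X_test, y_test))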
11,229
Given the following text description, write Python code to implement the functionality described below step by step Description: Optimization Exercise 1 Imports Step1: Hat potential The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential" Step2: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$ Step3: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$. Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima. Print the x values of the minima. Plot the function as a blue line. On the same axes, show the minima as red circles. Customize your visualization to make it beautiful and effective.
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt Explanation: Optimization Exercise 1 Imports End of explanation def hat(x,a,b): v = -a*(x**2) + b*(x**4) return v assert hat(0.0, 1.0, 1.0)==0.0 assert hat(0.0, 1.0, 1.0)==0.0 assert hat(1.0, 10.0, 1.0)==-9.0 Explanation: Hat potential The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential": $$ V(x) = -a x^2 + b x^4 $$ Write a function hat(x,a,b) that returns the value of this function: End of explanation a = 5.0 b = 1.0 x = np.linspace(-3.0, 3.0) plt.plot(x, hat(x,a,b)) plt.plot(-1.5811388304396232, hat(-1.5811388304396232,a,b), 'ro') plt.plot(1.58113882, hat(1.58113882,a,b), 'ro') plt.xlabel('X') plt.ylabel('V(x)') plt.title('Hat Potential') plt.grid(True) plt.box(False); assert True # leave this to grade the plot Explanation: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$: End of explanation opt.minimize(hat, -3, args=(a,b), method = "Powell") opt.minimize(hat, 3, args=(a,b)) assert True # leave this for grading the plot Explanation: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$. Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima. Print the x values of the minima. Plot the function as a blue line. On the same axes, show the minima as red circles. Customize your visualization to make it beautiful and effective. End of explanation
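To find both minima systematically, run the minimizer once from each side of the origin and compare against the stationary points of V(x) = -a x^2 + b x^4, which satisfy x^2 = a/(2b). A short sketch; the starting guesses of -3 and 3 are arbitrary choices within the plotted range.

import numpy as np
import scipy.optimize as opt

a, b = 5.0, 1.0

# one minimization from each side of the central maximum at x = 0
minima = [opt.minimize(hat, x0, args=(a, b)).x[0] for x0 in (-3.0, 3.0)]
print(minima)                  # numerical minima from the two starting guesses
print(np.sqrt(a / (2 * b)))    # analytic |x| at the minima, about 1.5811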
11,230
Given the following text description, write Python code to implement the functionality described below step by step Description: Abstract Neural networking is a form of machine learning inspired by the biological function of the human brain. By giving the program known inputs and outputs of a function, the computer learns the outputs of the function. In this program, the computer is given a set of 1792 hand written digits from zero to nine. Each 64 pixel digit is given as a 2D array of values correspoding to the pixel shade; similar images will have similar arrays. After a training period is completed, the computer will be able to output the value for any of the 1792 hand written digits with an accuracy significantly greater than guessing. This neural network is contructed with 64 inputs in a single input layer, a variable number of hidden nuerons in a single hidden layer, and 10 outputs in a single output layer. The training set is also variable. After the computer is put through a training session involving only a portion of the 1792 images, it will be given the entire set of images. The computer's outputs will be compared to the true values and a percent error will verify the code is working. Once the program is working, and the computer can accurately determine the image array values, the program is then modified to allow for multiple hidden layers and again trained and tested with the 1792 digit set. Then, it will be programmed to translate morse using non-handwritten dashes and periods. From this, the training set will be the entire morse dictionary and the "test" set will be human entered morse code sentences. Because there will be no variation in the test set and the training set, this is more of a question of how many iterations does it take to learn identical data. Also, this will be a neural network with 4 nodes in the input layer and 26 nodes in the output layer. It will interesting to see how long many interations it will take to get 100% and the time it takes to get it. Step5: A neuron in the brain can either fire or not fire. Computer based neural networks attempt to imitate the operations of a brain. This is done with an activation function. An ideal activation function would be a step function. At some threshold value, the neuron "fires" and produced a value of 1. If this threshold is not met, it produces 0. However, computationally, it is mathmatically convienient to have a smooth function. Thus, the sigmoid function is used. Since this activation function will be used extensively in the devepment of a neural network, the first step is defining a function that calculates the sigmoid of a value. $$\sigma(x) = \frac {1}{1+e^{-x}}$$ Step8: Creating the solutions to the training input is a bit more involved. The simplest way to approach this is with 10 output neurons in a single output layer. Each of these neurons will represent a different number from 0-9 based on their index. Here, a function is created that takes a set of single digit numbers and creates a 2D array that represent those numbers. This is the solution array to the training inputs. Step9: This is the bulk of the program. Here is where the training and backpropogation takes place to generate weights that best fit the data. I, J, and K each represent a layer. "I" is the input layer, "J" is the hidden layer, and "K" is the output layer. "WIJ" is then the array of weights between the I and J layers and "WJK" being the same between the J and K layers. 
These arrays are initially randomized, and change with iterations depending on the output error. The easiest way to process what exactly is going on below is to think of it very simplistically and mathematically. The computer is given an input. The input is multiplied by weights, summed and fed into a sigmoid function. This process occurs for each of the hidden neurons, each using the same input, but different weights. Since only a single number is fed into the sigmoid function of each neuron, only a single number is outputed by each hidden neuron. These hidden neuron outputs are then treated as inputs for the output layer. The hidden layer outputs are multiplied by weights, summed and fed into a sigmoid function. The outputs of this sigmoid function, are the outputs of the output layer, thus the final output. The difference between the expected output and the actual output is computed, and this difference is used to changed the weights. Once the weights have been changed, the process is repeated with a new image. Step10: This diagram is "messy" simply because that is the nature of neutral networks! It has an organization, but everything connects to the layer before it. This diagram also depicts multiple output neurons which is essential to this neural network. To explain mathmatically what is going on, let's do some calculations Step12: Now that the neural network has been constructed, it is time to train it. First, create an instance of a class with 30 hidden nuerons, 60 iterations, and a learning rate of 0.7. Step14: Let x be the final WIJ and y be the final WJK. Step15: Saving the weights for the class demonstration. Step18: Although there is a bulk of code below, it is almost entirely identical to the training class above. The essential difference being the weights are fed into the NN_ask class, no iterations are taking place, no weights are being changed, and no solutions are fed into the program. The purpose here is to give the program inputs and see if it has "learned". This is the test for the learning code above. With the calculated weights, this program should be able to calculate an output for all 1797 hand written digits that matches the solutions with minimal error. Step20: "comp_vals" is set equal to the computer's output for all the 1797 images Step22: And finally, the error is calculated between the computer's answers and the actual answers for all the digits.
Python Code: import numpy as np from sklearn.datasets import load_digits digits = load_digits() from IPython.html.widgets import interact %matplotlib inline import matplotlib.pyplot as plt import timeit from IPython.display import Image import NNpix as npx import timeit Explanation: Abstract Neural networking is a form of machine learning inspired by the biological function of the human brain. By giving the program known inputs and outputs of a function, the computer learns the outputs of the function. In this program, the computer is given a set of 1792 hand written digits from zero to nine. Each 64 pixel digit is given as a 2D array of values correspoding to the pixel shade; similar images will have similar arrays. After a training period is completed, the computer will be able to output the value for any of the 1792 hand written digits with an accuracy significantly greater than guessing. This neural network is contructed with 64 inputs in a single input layer, a variable number of hidden nuerons in a single hidden layer, and 10 outputs in a single output layer. The training set is also variable. After the computer is put through a training session involving only a portion of the 1792 images, it will be given the entire set of images. The computer's outputs will be compared to the true values and a percent error will verify the code is working. Once the program is working, and the computer can accurately determine the image array values, the program is then modified to allow for multiple hidden layers and again trained and tested with the 1792 digit set. Then, it will be programmed to translate morse using non-handwritten dashes and periods. From this, the training set will be the entire morse dictionary and the "test" set will be human entered morse code sentences. Because there will be no variation in the test set and the training set, this is more of a question of how many iterations does it take to learn identical data. Also, this will be a neural network with 4 nodes in the input layer and 26 nodes in the output layer. It will interesting to see how long many interations it will take to get 100% and the time it takes to get it. End of explanation The activation function. def sigmoid(x): return 1/(1+np.exp(-x)) assert sigmoid(np.log(2)) == 2/3 The def sigmoid_prime(x): return sigmoid(x)*(1-sigmoid(x)) x = np.linspace(-10,10,100) y = sigmoid(x) a = [-10,-1,0,1,10] b = [0,0,0,1,1] plt.plot(x,y, label="Sigmoid Function") plt.step(a,b, "r", label="Step Function") plt.xlim(-10,10) plt.ylim(-0.1,1.1) plt.title("Activation Functions") plt.legend(loc=4) plt.show() Using random permutations to train with random values of the total set perm = np.random.permutation(1797) assert len(perm) == 1797 Turn each 2D array into 1D array, turn all integers into decimals, append 1 for the input bias # training_input = np.array([np.append((digits.images[perm[i]].flatten())/100,[1]) for i in range(1000)]) training_input = np.array([digits.images[perm[i]].flatten() for i in range(1000)])/100 test_input = np.array([digits.images[perm[i]].flatten() for i in range(1000,1797)])/100 assert len(training_input[0]) == 64 assert len(test_input[0]) == 64 Explanation: A neuron in the brain can either fire or not fire. Computer based neural networks attempt to imitate the operations of a brain. This is done with an activation function. An ideal activation function would be a step function. At some threshold value, the neuron "fires" and produced a value of 1. If this threshold is not met, it produces 0. 
However, computationally, it is mathmatically convienient to have a smooth function. Thus, the sigmoid function is used. Since this activation function will be used extensively in the devepment of a neural network, the first step is defining a function that calculates the sigmoid of a value. $$\sigma(x) = \frac {1}{1+e^{-x}}$$ End of explanation def create_training_soln(training_numbers): Creates 2D array for training solutions a = np.repeat(0,10,None) a = np.repeat([a], len(training_numbers), 0) for i in range(len(training_numbers)): a[i][training_numbers[i]] = 1 return a Generat a training solution training_solution = create_training_soln([digits.target[perm[i]] for i in range(1000)]) test_solution = create_training_soln([digits.target[perm[i]] for i in range(1000,1797)]) number_solution = np.array([digits.target[perm[i]] for i in range(1000,1797)]) assert len(training_solution[0]) == 10 assert len(test_solution[0]) == 10 Explanation: Creating the solutions to the training input is a bit more involved. The simplest way to approach this is with 10 output neurons in a single output layer. Each of these neurons will represent a different number from 0-9 based on their index. Here, a function is created that takes a set of single digit numbers and creates a 2D array that represent those numbers. This is the solution array to the training inputs. End of explanation Image(url='http://mechanicalforex.com/wp-content/uploads/2011/06/NN.png', embed=True, width = 400, height = 400) Explanation: This is the bulk of the program. Here is where the training and backpropogation takes place to generate weights that best fit the data. I, J, and K each represent a layer. "I" is the input layer, "J" is the hidden layer, and "K" is the output layer. "WIJ" is then the array of weights between the I and J layers and "WJK" being the same between the J and K layers. These arrays are initially randomized, and change with iterations depending on the output error. The easiest way to process what exactly is going on below is to think of it very simplistically and mathematically. The computer is given an input. The input is multiplied by weights, summed and fed into a sigmoid function. This process occurs for each of the hidden neurons, each using the same input, but different weights. Since only a single number is fed into the sigmoid function of each neuron, only a single number is outputed by each hidden neuron. These hidden neuron outputs are then treated as inputs for the output layer. The hidden layer outputs are multiplied by weights, summed and fed into a sigmoid function. The outputs of this sigmoid function, are the outputs of the output layer, thus the final output. The difference between the expected output and the actual output is computed, and this difference is used to changed the weights. Once the weights have been changed, the process is repeated with a new image. 
End of explanation class NN_training(object): def __init__(self, input_array, soln, hidnum, iters, lr): self.input_array = input_array self.soln = soln #Number of hidden nodes self.hidnum = hidnum #Number of iterations through the training set self.iters = iters #Initalize WIJ weights (input to hidden) self.wij = np.random.uniform(-.5,0.5,(hidnum,65)) #Initalize WJK weights (hidden to output) self.wjk = np.random.uniform(-0.5,0.5,(10,hidnum+1)) #Set a learning rate self.lr = lr def train(self): iters = self.iters for n in range(iters): for i in range(len(self.input_array)): soln = self.soln[i] hidnum = self.hidnum input_array = np.append(self.input_array[i],[1]) #Find sum of weights x input array values for each hidden self.hidden_sums = (sum((input_array * self.wij).T)).T #Find outputs of hidden neurons; include bias self.hidden_out = np.append(sigmoid(self.hidden_sums),[1]) #Find sums of weights x hidden outs for each neuron in output layer self.output_sums = (sum((self.hidden_out * self.wjk).T)).T #Find output of the outputs self.output_out = sigmoid(self.output_sums) self.E = self.output_out - soln #Find delta values for each output self.output_deltas = self.E * sigmoid_prime(self.output_sums) #Find delta values for each hidden self.hidden_deltas = sigmoid_prime(np.delete(self.hidden_out,[hidnum],None)) * sum((self.output_deltas * (np.delete(self.wjk, [hidnum], 1)).T).T) #Change weights self.wij = -self.lr * (self.hidden_deltas*(np.repeat([input_array],hidnum,0)).T).T + self.wij self.wjk = -self.lr * (self.output_deltas*(np.repeat([self.hidden_out],10,0)).T).T + self.wjk return (self.wij, self.wjk) Explanation: This diagram is "messy" simply because that is the nature of neutral networks! It has an organization, but everything connects to the layer before it. This diagram also depicts multiple output neurons which is essential to this neural network. To explain mathmatically what is going on, let's do some calculations: Here is the cost function: $$C = \frac{1}{2}(y-\hat{y})^{2} $$ Note that $ \hat{y} $ is the computer solution array and $ y $ is the solution value array. We do not know $ \frac{\partial{\hat{y}}}{\partial{W_2}} $ but we can break it into partials that we do know. $$ \frac{\partial{C}}{\partial{W_2}} = -(y-\hat{y})^{2} \frac{\partial{\hat{y}}}{\partial{c}} \frac{\partial{c}}{\partial{W_2}} $$ And we know $\hat{y} = \sigma(c) $ where $c$ is the array of outputs. End of explanation Create an instance of a class my_net = NN_training(training_input, training_solution, 40, 90, 0.7) Explanation: Now that the neural network has been constructed, it is time to train it. First, create an instance of a class with 30 hidden nuerons, 60 iterations, and a learning rate of 0.7. End of explanation Get final weights x,y = my_net.train() Explanation: Let x be the final WIJ and y be the final WJK. End of explanation np.savez("NNweights.npz", x,y) Explanation: Saving the weights for the class demonstration. 
End of explanation class NN_ask (object): Feed forward using final weights from training backpropagation def __init__(self, input_array, wij, wjk): self.input_array = input_array self.wij = wij self.wjk = wjk def get_ans(self): wij = self.wij wjk = self.wjk soln = [] for i in range(len(self.input_array)): input_array = np.append(self.input_array[i],[1]) self.hidden_sums = (sum((input_array * wij).T)).T self.hidden_out = np.append(sigmoid(self.hidden_sums),[1]) self.output_sums = (sum((self.hidden_out * wjk).T)).T self.output_out = sigmoid(self.output_sums) for i in range(len(self.output_out)): if self.output_out[i] == max(self.output_out): a = i soln.append(a) return soln Instance of NN_ask class using calculated weights and all 1797 images test_net = NN_ask(test_input, x, y) Explanation: Although there is a bulk of code below, it is almost entirely identical to the training class above. The essential difference being the weights are fed into the NN_ask class, no iterations are taking place, no weights are being changed, and no solutions are fed into the program. The purpose here is to give the program inputs and see if it has "learned". This is the test for the learning code above. With the calculated weights, this program should be able to calculate an output for all 1797 hand written digits that matches the solutions with minimal error. End of explanation Get the computer's output for all 1797 images comp_vals = test_net.get_ans() Explanation: "comp_vals" is set equal to the computer's output for all the 1797 images End of explanation Calculate error print(((sum((comp_vals-number_solution == 0).astype(int)) / (1797-1000)) * 100), "%") Explanation: And finally, the error is calculated between the computer's answers and the actual answers for all the digits. End of explanation
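The weighted sums inside NN_training and NN_ask are ordinary matrix-vector products, so a single forward pass can be written compactly with np.dot. A sketch using the sigmoid defined above, the trained weight arrays x (WIJ) and y (WJK), and one flattened image from test_input; the shapes are assumed to match the classes above (65 inputs including the bias, hidnum+1 hidden outputs including the bias, 10 outputs).

import numpy as np

def feed_forward(image, wij, wjk):
    # hidden layer: append the bias input, weight and sum each row, then squash
    hidden_out = sigmoid(np.dot(wij, np.append(image, 1.0)))
    # output layer: append the hidden bias and repeat
    output_out = sigmoid(np.dot(wjk, np.append(hidden_out, 1.0)))
    # the index of the largest output neuron is the predicted digit
    return np.argmax(output_out)

print(feed_forward(test_input[0], x, y))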
11,231
Given the following text description, write Python code to implement the functionality described below step by step Description: Goal Step1: Load some data I'm going to work with the data from the combined data sets. The analysis for this data set is in analysis\Cf072115_to_Cf072215b. The one limitation here is that this data has already cut out the fission chamber neighbors. det_df without fission chamber neighbors Step2: I am going to add in a new optional input parameter in bicorr.load_det_df that will let you provide this det_df without fission chamber neighbors directly. Try it out. Step3: singles_hist.npz Step4: Load bhp_nn for all pairs I'm going to skip a few steps in order to save memory. This data was produced in analysis_build_bhp_nn_by_pair_1_ns.ipynb and is stored in datap\bhp_nn_by_pair_1ns.npz. Load it now, as explained in the notebook. Step5: The fission chamber neighbors have already been removed Step6: Specify energy range Step7: Calculate sums Singles- set up singles_df I will store this in a pandas dataframe. Columns Step8: Singles- calculate sums Step9: Doubles- set up det_df Step10: Doubles- Calculate sums Step11: Perform the correction Now I am going to loop through all pairs and calculate $W$. Loop through each pair Identify $i$, $j$ Fetch $S_i$, $S_j$ Calculate $W$ Propagate error for $W_{err}$ Store in det_df Add W, W_err columns to det_df Step12: Loop through det_df, store singles rates Fill the S and S_err values for each channel in each detector pair. Step13: This is much "tighter" than the raw counts. Functionalize Write functions to perform all of these calculations. Demo them here. The functions are in a new script called bicorr_sums.py. You have to specify emin, emax. Step14: Data you have to have loaded Step15: Expand, fill det_df Step16: Condense into angle bins Step17: Put all of this into one function Returns
Python Code: import os import sys import matplotlib.pyplot as plt import numpy as np import imageio import pandas as pd import seaborn as sns sns.set(style='ticks') sys.path.append('../scripts/') import bicorr as bicorr import bicorr_e as bicorr_e import bicorr_plot as bicorr_plot import bicorr_sums as bicorr_sums import bicorr_math as bicorr_math %load_ext autoreload %autoreload 2 Explanation: Goal: Correct for singles rate with $W$ calculation In order to correct for differences in detection efficiencies and solid angles, we will divide all of the doubles rates by the singles rates of the two detectors as follows: $ W_{i,j} = \frac{D_{i,j}}{S_i*S_j}$ This requires calculating $S_i$ and $S_j$ from the cced files. I need to rewrite my analysis from the beginning, or write another function that parses the cced file. In this file, I will import the singles and bicorr data and calculate all $D_{i,j}$, $S_i$, $S_j$, and $W_{i,j}$. End of explanation det_df = bicorr.load_det_df('../meas_info/det_df_pairs_angles.csv') pair_is = bicorr.generate_pair_is(det_df,ignore_fc_neighbors_flag=True) det_df = det_df.loc[pair_is].reset_index().rename(columns={'index':'index_og'}).copy() det_df.head() Explanation: Load some data I'm going to work with the data from the combined data sets. The analysis for this data set is in analysis\Cf072115_to_Cf072215b. The one limitation here is that this data has already cut out the fission chamber neighbors. det_df without fission chamber neighbors End of explanation det_df = bicorr.load_det_df('../meas_info/det_df_pairs_angles.csv', remove_fc_neighbors=True) det_df.head() chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists() dict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df) num_fissions = 2194651200.00 Explanation: I am going to add in a new optional input parameter in bicorr.load_det_df that will let you provide this det_df without fission chamber neighbors directly. Try it out. End of explanation singles_hist, dt_bin_edges_sh, dict_det_to_index, dict_index_to_det = bicorr.load_singles_hist(filepath='../analysis/Cf072115_to_Cf072215b/datap',plot_flag=True,show_flag=True) Explanation: singles_hist.npz End of explanation npzfile = np.load('../analysis/Cf072115_to_Cf072215b/datap/bhp_nn_by_pair_1ns.npz') pair_is = npzfile['pair_is'] bhp_nn_pos = npzfile['bhp_nn_pos'] bhp_nn_neg = npzfile['bhp_nn_neg'] dt_bin_edges = npzfile['dt_bin_edges'] Explanation: Load bhp_nn for all pairs I'm going to skip a few steps in order to save memory. This data was produced in analysis_build_bhp_nn_by_pair_1_ns.ipynb and is stored in datap\bhp_nn_by_pair_1ns.npz. Load it now, as explained in the notebook. End of explanation len(pair_is) Explanation: The fission chamber neighbors have already been removed End of explanation emin = 0.62 emax = 12 Explanation: Specify energy range End of explanation singles_df = pd.DataFrame.from_dict(dict_index_to_det,orient='index',dtype=np.int8).rename(columns={0:'ch'}) chIgnore = [1,17,33] singles_df = singles_df[~singles_df['ch'].isin(chIgnore)].copy() singles_df['Sp']= 0.0 singles_df['Sn']= 0.0 singles_df['Sd']= 0.0 singles_df['Sd_err'] = 0.0 Explanation: Calculate sums Singles- set up singles_df I will store this in a pandas dataframe. 
Columns: Channel number Sp - Singles counts, positive Sn - Singles counts, negative Sd - Singles counts, br-subtracted Sd_err - Singles counts, br-subtracted, err End of explanation for index in singles_df.index.values: Sp, Sn, Sd, Sd_err = bicorr.calc_n_sum_br(singles_hist, dt_bin_edges_sh, index, emin=emin, emax=emax) singles_df.loc[index,'Sp'] = Sp singles_df.loc[index,'Sn'] = Sn singles_df.loc[index,'Sd'] = Sd singles_df.loc[index,'Sd_err'] = Sd_err singles_df.head() bicorr_plot.Sd_vs_angle_all(singles_df) Explanation: Singles- calculate sums End of explanation det_df.head() det_df['Cp'] = 0.0 det_df['Cn'] = 0.0 det_df['Cd'] = 0.0 det_df['Cd_err'] = 0.0 det_df['Np'] = 0.0 det_df['Nn'] = 0.0 det_df['Nd'] = 0.0 det_df['Nd_err'] = 0.0 det_df.head() Explanation: Doubles- set up det_df End of explanation for index in det_df.index.values: Cp, Cn, Cd, err_Cd = bicorr.calc_nn_sum_br(bhp_nn_pos[index,:,:], bhp_nn_neg[index,:,:], dt_bin_edges, emin=emin, emax=emax) det_df.loc[index,'Cp'] = Cp det_df.loc[index,'Cn'] = Cn det_df.loc[index,'Cd'] = Cd det_df.loc[index,'Cd_err'] = err_Cd Np, Nn, Nd, err_Nd = bicorr.calc_nn_sum_br(bhp_nn_pos[index,:,:], bhp_nn_neg[index,:,:], dt_bin_edges, emin=emin, emax=emax, norm_factor = num_fissions) det_df.loc[index,'Np'] = Np det_df.loc[index,'Nn'] = Nn det_df.loc[index,'Nd'] = Nd det_df.loc[index,'Nd_err'] = err_Nd det_df.head() bicorr_plot.counts_vs_angle_all(det_df, normalized=True) Explanation: Doubles- Calculate sums End of explanation det_df['Sd1'] = 0.0 det_df['Sd1_err'] = 0.0 det_df['Sd2'] = 0.0 det_df['Sd2_err'] = 0.0 det_df['W'] = 0.0 det_df['W_err'] = 0.0 det_df.head() singles_df.head() Explanation: Perform the correction Now I am going to loop through all pairs and calculate $W$. Loop through each pair Identify $i$, $j$ Fetch $S_i$, $S_j$ Calculate $W$ Propagate error for $W_{err}$ Store in det_df Add W, W_err columns to det_df End of explanation # Fill S columns in det_df for index in singles_df.index.values: ch = singles_df.loc[index,'ch'] d1_indices = (det_df[det_df['d1'] == ch]).index.tolist() d2_indices = (det_df[det_df['d2'] == ch]).index.tolist() det_df.loc[d1_indices,'Sd1'] = singles_df.loc[index,'Sd'] det_df.loc[d1_indices,'Sd1_err'] = singles_df.loc[index,'Sd_err'] det_df.loc[d2_indices,'Sd2'] = singles_df.loc[index,'Sd'] det_df.loc[d2_indices,'Sd2_err'] = singles_df.loc[index,'Sd_err'] # Calculate W, W_err from S columns det_df['W'] = det_df['Cd']/(det_df['Sd1']*det_df['Sd2']) det_df['W_err'] = det_df['W'] * np.sqrt((det_df['Cd_err']/det_df['Cd'])**2 + (det_df['Sd1_err']/det_df['Sd1'])**2 + (det_df['Sd2_err']/det_df['Sd2'])**2) det_df.head() bicorr_plot.W_vs_angle_all(det_df) Explanation: Loop through det_df, store singles rates Fill the S and S_err values for each channel in each detector pair. End of explanation emin = 0.62 emax = 12 Explanation: This is much "tighter" than the raw counts. Functionalize Write functions to perform all of these calculations. Demo them here. The functions are in a new script called bicorr_sums.py. You have to specify emin, emax. 
End of explanation singles_df = bicorr_sums.init_singles_df(dict_index_to_det) singles_df.head() singles_df = bicorr_sums.fill_singles_df(dict_index_to_det, singles_hist, dt_bin_edges_sh, emin, emax) singles_df.head() bicorr_plot.Sd_vs_angle_all(singles_df) Explanation: Data you have to have loaded: det_df dict_index_to_det singles_hist dt_bin_edges_sh bhp_nn_pos bhp_nn_neg dt_bin_edges emin emax num_fissions angle_bin_edges Produce and fill singles_df: End of explanation det_df.head() det_df = bicorr_sums.init_det_df_sums(det_df, t_flag = True) det_df = bicorr_sums.fill_det_df_singles_sums(det_df, singles_df) det_df = bicorr_sums.fill_det_df_doubles_t_sums(det_df, bhp_nn_pos, bhp_nn_neg, dt_bin_edges, emin, emax) det_df = bicorr_sums.calc_det_df_W(det_df) det_df.head() Explanation: Expand, fill det_df End of explanation angle_bin_edges = np.arange(8,190,10) by_angle_df = bicorr_sums.condense_det_df_by_angle(det_df,angle_bin_edges) by_angle_df.head() bicorr_plot.W_vs_angle(det_df, by_angle_df, save_flag=False) Explanation: Condense into angle bins End of explanation angle_bin_edges = np.arange(8,190,10) singles_df, det_df, by_angle_df = bicorr_sums.perform_W_calcs(det_df, dict_index_to_det, singles_hist, dt_bin_edges_sh, bhp_nn_pos, bhp_nn_neg, dt_bin_edges, num_fissions, emin, emax, angle_bin_edges) det_df.head() bicorr_plot.W_vs_angle(det_df, by_angle_df, save_flag = False) Explanation: Put all of this into one function Returns: singles_df, det_df, by_angle_df End of explanation
11,232
Given the following text description, write Python code to implement the functionality described below step by step Description: Air-Standard Brayton Cycle Example Imports Step1: Definitions Step2: Problem Statement An ideal air-standard Brayton cycle operates at steady state with compressor inlet conditions of 300.0 K and 1.0 bar and a fixed turbine inlet temperature of 1700.0 K and a compressor pressure ratio of 8.0. For the cycle, determine the net work developed per unit mass flowing, in kJ/kg determine the thermal efficiency plot the net work developed per unit mass flowing, in kJ/kg, as a function of the compressor pressure ratio from 2.0 to 50.0 plot the thermal efficiency as a function of the compressor pressure ratio from 2.0 to 50.0 Discuss any trends you find in parts 3 and 4 Solution 1. the net work developed per unit mass flowing The ideal Brayton cycle is made of 4 processes Step3: Summarizing the states, | State | T | p | h | s | |-------|---------------------------|---------------------------|---------------------------|---------------------------| | 1 | 300.00 K | 1.00 bar | 426.30 kJ/kg | 3.89 kJ/(K kg) | | 2 | 540.13 K | 8.00 bar | 670.65 kJ/kg | 3.89 kJ/(K kg) | | 3 | 1700.00 K | 8.00 bar | 2007.09 kJ/kg | 5.19 kJ/(K kg) | | 4 | 1029.42 K | 1.00 bar | 1206.17 kJ/kg | 5.19 kJ/(K kg) | Plotting the p-v and T-s diagrams of the cycle, Step4: Then, the net work is calculated by Step5: <div class="alert alert-success"> **Answer Step6: <div class="alert alert-success"> **Answer
Python Code: from thermostate import State, Q_, units from thermostate.plotting import IdealGas import numpy as np %matplotlib inline import matplotlib.pyplot as plt Explanation: Air-Standard Brayton Cycle Example Imports End of explanation substance = 'air' p_1 = Q_(1.0, 'bar') T_1 = Q_(300.0, 'K') T_3 = Q_(1700.0, 'K') p2_p1 = Q_(8.0, 'dimensionless') p_low = Q_(2.0, 'dimensionless') p_high = Q_(50.0, 'dimensionless') Explanation: Definitions End of explanation st_1 = State(substance, T=T_1, p=p_1) h_1 = st_1.h.to('kJ/kg') s_1 = st_1.s.to('kJ/(kg*K)') s_2 = s_1 p_2 = p_1*p2_p1 st_2 = State(substance, p=p_2, s=s_2) h_2 = st_2.h.to('kJ/kg') T_2 = st_2.T p_3 = p_2 st_3 = State(substance, p=p_3, T=T_3) h_3 = st_3.h.to('kJ/kg') s_3 = st_3.s.to('kJ/(kg*K)') s_4 = s_3 p_4 = p_1 st_4 = State(substance, p=p_4, s=s_4) h_4 = st_4.h.to('kJ/kg') T_4 = st_4.T Explanation: Problem Statement An ideal air-standard Brayton cycle operates at steady state with compressor inlet conditions of 300.0 K and 1.0 bar and a fixed turbine inlet temperature of 1700.0 K and a compressor pressure ratio of 8.0. For the cycle, determine the net work developed per unit mass flowing, in kJ/kg determine the thermal efficiency plot the net work developed per unit mass flowing, in kJ/kg, as a function of the compressor pressure ratio from 2.0 to 50.0 plot the thermal efficiency as a function of the compressor pressure ratio from 2.0 to 50.0 Discuss any trends you find in parts 3 and 4 Solution 1. the net work developed per unit mass flowing The ideal Brayton cycle is made of 4 processes: 1. Isentropic compression 2. Isobaric heat input 3. Isentropic expansion 4. Isobaric heat rejection The following properties are used to fix the four states: State | Property 1 | Property 2 :-----:|:-----:|:-----: 1|$$T_1$$|$$p_1$$ 2|$$p_2$$|$$s_2=s_1$$ 3|$$p_3=p_2$$|$$T_3$$ 4|$$s_4=s_3$$|$$p_4=p_1$$ In the ideal Brayton cycle, work occurs in the isentropic compression and expansion. Therefore, the works are $$ \begin{aligned} \frac{\dot{W}_c}{\dot{m}} &= h_1 - h_2 & \frac{\dot{W}_t}{\dot{m}} &= h_3 - h_4 \end{aligned} $$ First, fixing the four states End of explanation Brayton = IdealGas(substance, ('s', 'T'), ('v', 'p')) Brayton.add_process(st_1, st_2, 'isentropic') Brayton.add_process(st_2, st_3, 'isobaric') Brayton.add_process(st_3, st_4, 'isentropic') Brayton.add_process(st_4, st_1, 'isobaric') Explanation: Summarizing the states, | State | T | p | h | s | |-------|---------------------------|---------------------------|---------------------------|---------------------------| | 1 | 300.00 K | 1.00 bar | 426.30 kJ/kg | 3.89 kJ/(K kg) | | 2 | 540.13 K | 8.00 bar | 670.65 kJ/kg | 3.89 kJ/(K kg) | | 3 | 1700.00 K | 8.00 bar | 2007.09 kJ/kg | 5.19 kJ/(K kg) | | 4 | 1029.42 K | 1.00 bar | 1206.17 kJ/kg | 5.19 kJ/(K kg) | Plotting the p-v and T-s diagrams of the cycle, End of explanation W_c = h_1 - h_2 W_t = h_3 - h_4 W_net = W_c + W_t Explanation: Then, the net work is calculated by End of explanation Q_23 = h_3 - h_2 eta = W_net/Q_23 Explanation: <div class="alert alert-success"> **Answer:** The works are $\dot{W}_c/\dot{m} =$ -244.35 kJ/kg, $\dot{W}_t/\dot{m} =$ 800.92 kJ/kg, and $\dot{W}_{net}/\dot{m} =$ 556.57 kJ/kg </div> 2. 
the thermal efficiency End of explanation p_range = np.linspace(p_low, p_high, 50) eta_l = np.zeros(shape=p_range.shape) * units.dimensionless W_net_l = np.zeros(shape=p_range.shape) * units.kJ / units.kg for i, p_ratio in enumerate(p_range): s_2 = s_1 p_2 = p_1*p_ratio st_2 = State(substance, p=p_2, s=s_2) h_2 = st_2.h.to('kJ/kg') T_2 = st_2.T p_3 = p_2 st_3 = State(substance, p=p_3, T=T_3) h_3 = st_3.h.to('kJ/kg') s_3 = st_3.s.to('kJ/(kg*K)') s_4 = s_3 p_4 = p_1 st_4 = State(substance, p=p_4, s=s_4) h_4 = st_4.h.to('kJ/kg') T_4 = st_4.T W_c = h_1 - h_2 W_t = h_3 - h_4 W_net = W_c + W_t W_net_l[i] = W_net Q_23 = h_3 - h_2 eta = W_net/Q_23 eta_l[i] = eta fig, work_ax = plt.subplots() work_ax.plot(p_range, W_net_l, label='Net work per unit mass flowing', color='C0') eta_ax = work_ax.twinx() eta_ax.plot(p_range, eta_l, label='Thermal efficiency', color='C1') work_ax.set_xlabel('Pressure ratio $p_2/p_1$') work_ax.set_ylabel('Net work per unit mass flowing (kJ/kg)') eta_ax.set_ylabel('Thermal efficiency') lines, labels = work_ax.get_legend_handles_labels() lines2, labels2 = eta_ax.get_legend_handles_labels() work_ax.legend(lines + lines2, labels + labels2, loc='best'); Explanation: <div class="alert alert-success"> **Answer:** The thermal efficiency is $\eta =$ 0.42 = 41.65% </div> 3. and 4. plot the net work per unit mass flowing and thermal efficiency End of explanation
11,233
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocean MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Is Required Step9: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required Step10: 2.2. Eos Functional Temp Is Required Step11: 2.3. Eos Functional Salt Is Required Step12: 2.4. Eos Functional Depth Is Required Step13: 2.5. Ocean Freezing Point Is Required Step14: 2.6. Ocean Specific Heat Is Required Step15: 2.7. Ocean Reference Density Is Required Step16: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required Step17: 3.2. Type Is Required Step18: 3.3. Ocean Smoothing Is Required Step19: 3.4. Source Is Required Step20: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required Step21: 4.2. River Mouth Is Required Step22: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required Step23: 5.2. Code Version Is Required Step24: 5.3. Code Languages Is Required Step25: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required Step26: 6.2. 
Canonical Horizontal Resolution Is Required Step27: 6.3. Range Horizontal Resolution Is Required Step28: 6.4. Number Of Horizontal Gridpoints Is Required Step29: 6.5. Number Of Vertical Levels Is Required Step30: 6.6. Is Adaptive Grid Is Required Step31: 6.7. Thickness Level 1 Is Required Step32: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required Step33: 7.2. Global Mean Metrics Used Is Required Step34: 7.3. Regional Metrics Used Is Required Step35: 7.4. Trend Metrics Used Is Required Step36: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required Step37: 8.2. Scheme Is Required Step38: 8.3. Consistency Properties Is Required Step39: 8.4. Corrected Conserved Prognostic Variables Is Required Step40: 8.5. Was Flux Correction Used Is Required Step41: 9. Grid Ocean grid 9.1. Overview Is Required Step42: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required Step43: 10.2. Partial Steps Is Required Step44: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required Step45: 11.2. Staggering Is Required Step46: 11.3. Scheme Is Required Step47: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required Step48: 12.2. Diurnal Cycle Is Required Step49: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required Step50: 13.2. Time Step Is Required Step51: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required Step52: 14.2. Scheme Is Required Step53: 14.3. Time Step Is Required Step54: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required Step55: 15.2. Time Step Is Required Step56: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required Step57: 17. Advection Ocean advection 17.1. Overview Is Required Step58: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required Step59: 18.2. Scheme Name Is Required Step60: 18.3. ALE Is Required Step61: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required Step62: 19.2. Flux Limiter Is Required Step63: 19.3. Effective Order Is Required Step64: 19.4. Name Is Required Step65: 19.5. Passive Tracers Is Required Step66: 19.6. Passive Tracers Advection Is Required Step67: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required Step68: 20.2. Flux Limiter Is Required Step69: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required Step70: 21.2. Scheme Is Required Step71: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required Step72: 22.2. Order Is Required Step73: 22.3. Discretisation Is Required Step74: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required Step75: 23.2. Constant Coefficient Is Required Step76: 23.3. Variable Coefficient Is Required Step77: 23.4. Coeff Background Is Required Step78: 23.5. Coeff Backscatter Is Required Step79: 24. 
Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required Step80: 24.2. Submesoscale Mixing Is Required Step81: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required Step82: 25.2. Order Is Required Step83: 25.3. Discretisation Is Required Step84: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required Step85: 26.2. Constant Coefficient Is Required Step86: 26.3. Variable Coefficient Is Required Step87: 26.4. Coeff Background Is Required Step88: 26.5. Coeff Backscatter Is Required Step89: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required Step90: 27.2. Constant Val Is Required Step91: 27.3. Flux Type Is Required Step92: 27.4. Added Diffusivity Is Required Step93: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required Step94: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required Step95: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required Step96: 30.2. Closure Order Is Required Step97: 30.3. Constant Is Required Step98: 30.4. Background Is Required Step99: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required Step100: 31.2. Closure Order Is Required Step101: 31.3. Constant Is Required Step102: 31.4. Background Is Required Step103: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required Step104: 32.2. Tide Induced Mixing Is Required Step105: 32.3. Double Diffusion Is Required Step106: 32.4. Shear Mixing Is Required Step107: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required Step108: 33.2. Constant Is Required Step109: 33.3. Profile Is Required Step110: 33.4. Background Is Required Step111: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required Step112: 34.2. Constant Is Required Step113: 34.3. Profile Is Required Step114: 34.4. Background Is Required Step115: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required Step116: 35.2. Scheme Is Required Step117: 35.3. Embeded Seaice Is Required Step118: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required Step119: 36.2. Type Of Bbl Is Required Step120: 36.3. Lateral Mixing Coef Is Required Step121: 36.4. Sill Overflow Is Required Step122: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required Step123: 37.2. Surface Pressure Is Required Step124: 37.3. Momentum Flux Correction Is Required Step125: 37.4. Tracers Flux Correction Is Required Step126: 37.5. Wave Effects Is Required Step127: 37.6. River Runoff Budget Is Required Step128: 37.7. Geothermal Heating Is Required Step129: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required Step130: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required Step131: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required Step132: 40.2. Ocean Colour Is Required Step133: 40.3. Extinction Depth Is Required Step134: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required Step135: 41.2. From Sea Ice Is Required Step136: 41.3. Forced Mode Restoring Is Required
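Note on the fill-in pattern (an illustrative aside, not part of the generated notebook): every property cell in the code below pairs a pre-filled DOC.set_id(...) call, which must not be edited, with a placeholder DOC.set_value(...) call that the author replaces. For ENUM properties the replacement string must match one of the listed choices verbatim; BOOLEAN, INTEGER and FLOAT properties take plain Python values; STRING properties take free text. The sketch below shows hypothetical completed calls — the values are assumptions chosen for illustration, not the actual settings of this sandbox model.
# Hypothetical completed set_value calls (assumed example values only):
# DOC.set_value("Primitive equations")    # ENUM: copy one of the listed choices exactly
# DOC.set_value(True)                     # BOOLEAN: bare Python boolean
# DOC.set_value(75)                       # INTEGER/FLOAT: plain number (e.g. number of vertical levels)
# DOC.set_value("ORCA025 tripolar grid")  # STRING: free-text description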
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-3', 'ocean') Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: TEST-INSTITUTE-3 Source ID: SANDBOX-3 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:46 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
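# Illustrative aside (assumed wording, not generated by ES-DOC): a free-text metrics property such as
# the regional-metrics cell above is typically completed with a short prose list, reusing the kinds of
# examples its description gives, e.g. DOC.set_value("THC strength, AABW formation, regional mean SST").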
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. 
Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. 
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Advection Ocean advection 17.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.5. 
Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! 
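# Illustrative aside (assumption, not part of the source notebook): yes/no properties such as the
# mesoscale-closure question above are BOOLEAN cells; they are completed with a bare Python boolean,
# e.g. DOC.set_value(True) if the scheme includes a mesoscale closure, or DOC.set_value(False) if not.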
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27.2. Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.4. 
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation
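For illustration, this is how one of the template cells above might look once completed; a minimal sketch in which the property ID is taken verbatim from the cell above and the description string is only a placeholder, not the value of any particular model.
# Hypothetical completion of property 41.3 (Forced Mode Restoring)
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# Free-text STRING property describing the surface salinity restoring used in forced (OMIP) mode
DOC.set_value("Sea surface salinity restored toward a monthly climatology (placeholder description)")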
11,234
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Landice MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Grid 4. Glaciers 5. Ice 6. Ice --&gt; Mass Balance 7. Ice --&gt; Mass Balance --&gt; Basal 8. Ice --&gt; Mass Balance --&gt; Frontal 9. Ice --&gt; Dynamics 1. Key Properties Land ice key properties 1.1. Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Ice Albedo Is Required Step7: 1.4. Atmospheric Coupling Variables Is Required Step8: 1.5. Oceanic Coupling Variables Is Required Step9: 1.6. Prognostic Variables Is Required Step10: 2. Key Properties --&gt; Software Properties Software properties of land ice code 2.1. Repository Is Required Step11: 2.2. Code Version Is Required Step12: 2.3. Code Languages Is Required Step13: 3. Grid Land ice grid 3.1. Overview Is Required Step14: 3.2. Adaptive Grid Is Required Step15: 3.3. Base Resolution Is Required Step16: 3.4. Resolution Limit Is Required Step17: 3.5. Projection Is Required Step18: 4. Glaciers Land ice glaciers 4.1. Overview Is Required Step19: 4.2. Description Is Required Step20: 4.3. Dynamic Areal Extent Is Required Step21: 5. Ice Ice sheet and ice shelf 5.1. Overview Is Required Step22: 5.2. Grounding Line Method Is Required Step23: 5.3. Ice Sheet Is Required Step24: 5.4. Ice Shelf Is Required Step25: 6. Ice --&gt; Mass Balance Description of the surface mass balance treatment 6.1. Surface Mass Balance Is Required Step26: 7. Ice --&gt; Mass Balance --&gt; Basal Description of basal melting 7.1. Bedrock Is Required Step27: 7.2. Ocean Is Required Step28: 8. Ice --&gt; Mass Balance --&gt; Frontal Description of claving/melting from the ice shelf front 8.1. Calving Is Required Step29: 8.2. Melting Is Required Step30: 9. Ice --&gt; Dynamics ** 9.1. Description Is Required Step31: 9.2. Approximation Is Required Step32: 9.3. Adaptive Timestep Is Required Step33: 9.4. Timestep Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-1', 'landice') Explanation: ES-DOC CMIP6 Model Properties - Landice MIP Era: CMIP6 Institute: DWD Source ID: SANDBOX-1 Topic: Landice Sub-Topics: Glaciers, Ice. Properties: 30 (21 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:57 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Grid 4. Glaciers 5. Ice 6. Ice --&gt; Mass Balance 7. Ice --&gt; Mass Balance --&gt; Basal 8. Ice --&gt; Mass Balance --&gt; Frontal 9. Ice --&gt; Dynamics 1. Key Properties Land ice key properties 1.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.ice_albedo') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "function of ice age" # "function of ice density" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Ice Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify how ice albedo is modelled End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. Atmospheric Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the atmosphere and ice (e.g. orography, ice mass) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Oceanic Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the ocean and ice End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ice velocity" # "ice thickness" # "ice temperature" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which variables are prognostically calculated in the ice model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of land ice code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Grid Land ice grid 3.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land ice scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is an adative grid being used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.base_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Base Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The base resolution (in metres), before any adaption End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.resolution_limit') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Resolution Limit Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If an adaptive grid is being used, what is the limit of the resolution (in metres) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.projection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.5. Projection Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The projection of the land ice grid (e.g. 
albers_equal_area) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Glaciers Land ice glaciers 4.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of glaciers in the land ice scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of glaciers, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 4.3. Dynamic Areal Extent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does the model include a dynamic glacial extent? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Ice Ice sheet and ice shelf 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the ice sheet and ice shelf in the land ice scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.grounding_line_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "grounding line prescribed" # "flux prescribed (Schoof)" # "fixed grid size" # "moving grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 5.2. Grounding Line Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_sheet') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.3. Ice Sheet Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice sheets simulated? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_shelf') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.4. Ice Shelf Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice shelves simulated? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Ice --&gt; Mass Balance Description of the surface mass balance treatment 6.1. Surface Mass Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Ice --&gt; Mass Balance --&gt; Basal Description of basal melting 7.1. Bedrock Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over bedrock End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Ice --&gt; Mass Balance --&gt; Frontal Description of claving/melting from the ice shelf front 8.1. Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of calving from the front of the ice shelf End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Melting Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of melting from the front of the ice shelf End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Ice --&gt; Dynamics ** 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description if ice sheet and ice shelf dynamics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.approximation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SIA" # "SAA" # "full stokes" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.2. Approximation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Approximation type used in modelling ice dynamics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 9.3. Adaptive Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there an adaptive time scheme for the ice scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.4. Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep. End of explanation
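As a worked illustration of the non-string property types above, the sketch below fills the BOOLEAN and INTEGER dynamics properties; the IDs come from the cells above, while False and 3600 are placeholder values rather than the settings of any real ice sheet model.
# Hypothetical completion of properties 9.3 (Adaptive Timestep) and 9.4 (Timestep)
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
DOC.set_value(False)    # BOOLEAN: no adaptive time scheme in this placeholder example
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
DOC.set_value(3600)     # INTEGER: representative timestep in seconds (placeholder)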
11,235
Given the following text description, write Python code to implement the functionality described below step by step Description: Neighborhood Structures in the ArcGIS Spatial Statistics Library Spatial Weights Matrix On-the-fly Neighborhood Iterators [GA Table] Contructing PySAL Spatial Weights Spatial Weight Matrix File Stores the spatial weights so they do not have to be re-calculated for each analysis. In row-compressed format. Little endian byte encoded. Requires a unique long/short field to identify each features. Can NOT be the OID/FID. Construction Step1: Distance-Based Options INPUTS Step2: Example Step3: k-Nearest Neighbors Options INPUTS Step4: Example Step5: Delaunay Triangulation Options INPUTS Step6: Polygon Contiguity Options <a id="poly_options"></a> ``` INPUTS Step7: *Example Step8: Example Step9: On-the-fly Neighborhood Iterators [GA Table] Reads centroids of input features into spatial tree structure. Distance Based Queries. Scalable Step10: Example Step11: Example Step15: Contructing PySAL Spatial Weights Convert masterID to orderID when using ssdo.obtainData (SWM File, Polygon Contiguity) Data is already in orderID when using ssdo.obtainDataGA (Distance Based) Methods in next cell can be imported from pysal2ArcGIS.py Step16: Converting Spatial Weight Matrix Formats (e.g. .swm, .gwt, *.gal) Follow directions at the PySAL-ArcGIS-Toolbox Git Repository [https Step17: Calling MaxP Regions Using SWM Based on Rook Contiguity, No Row Standardization Step18: Calling MaxP Regions Using Rook Contiguity, No Row Standardization Step19: Identical results because the random seed was set to 100 and they have the same spatial neighborhood Calling MaxP Regions Using Fixed Distance 250000, Hyrbid to Assure at least 2 Neighbors
Python Code: import Weights as WEIGHTS import os as OS inputFC = r'../data/CA_Polygons.shp' fullFC = OS.path.abspath(inputFC) fullPath, fcName = OS.path.split(fullFC) masterField = "MYID" Explanation: Neighborhood Structures in the ArcGIS Spatial Statistics Library Spatial Weights Matrix On-the-fly Neighborhood Iterators [GA Table] Contructing PySAL Spatial Weights Spatial Weight Matrix File Stores the spatial weights so they do not have to be re-calculated for each analysis. In row-compressed format. Little endian byte encoded. Requires a unique long/short field to identify each features. Can NOT be the OID/FID. Construction End of explanation swmFile = OS.path.join(fullPath, "fixed250k.swm") fixedSWM = WEIGHTS.distance2SWM(fullFC, swmFile, masterField, threshold = 250000) Explanation: Distance-Based Options INPUTS: inputFC (str): path to the input feature class swmFile (str): path to the SWM file. masterField (str): field in table that serves as the mapping. fixed (boolean): fixed (1) or inverse (0) distance? concept: {str, EUCLIDEAN}: EUCLIDEAN or MANHATTAN exponent {float, 1.0}: distance decay threshold {float, None}: distance threshold kNeighs (int): number of neighbors to return rowStandard {bool, True}: row standardize weights? Example: Fixed Distance End of explanation swmFile = OS.path.join(fullPath, "inv2_250k.swm") fixedSWM = WEIGHTS.distance2SWM(fullFC, swmFile, masterField, fixed = False, exponent = 2.0, threshold = 250000) Explanation: Example: Inverse Distance Squared End of explanation swmFile = OS.path.join(fullPath, "knn8.swm") fixedSWM = WEIGHTS.kNearest2SWM(fullFC, swmFile, masterField, kNeighs = 8) Explanation: k-Nearest Neighbors Options INPUTS: inputFC (str): path to the input feature class swmFile (str): path to the SWM file. masterField (str): field in table that serves as the mapping. concept: {str, EUCLIDEAN}: EUCLIDEAN or MANHATTAN kNeighs {int, 1}: number of neighbors to return rowStandard {bool, True}: row standardize weights? Example: 8-nearest neighbors End of explanation swmFile = OS.path.join(fullPath, "fixed250k_knn8.swm") fixedSWM = WEIGHTS.distance2SWM(fullFC, swmFile, masterField, kNeighs = 8, threshold = 250000) Explanation: Example: Fixed Distance - k-nearest neighbor hybrid [i.e. at least k neighbors but may have more...] End of explanation swmFile = OS.path.join(fullPath, "delaunay.swm") fixedSWM = WEIGHTS.delaunay2SWM(fullFC, swmFile, masterField) Explanation: Delaunay Triangulation Options INPUTS: inputFC (str): path to the input feature class swmFile (str): path to the SWM file. masterField (str): field in table that serves as the mapping. rowStandard {bool, True}: row standardize weights? Example: delaunay End of explanation swmFile = OS.path.join(fullPath, "rook_bin.swm") WEIGHTS.polygon2SWM(inputFC, swmFile, masterField, rowStandard = False) Explanation: Polygon Contiguity Options <a id="poly_options"></a> ``` INPUTS: inputFC (str): path to the input feature class swmFile (str): path to the SWM file. masterField (str): field in table that serves as the mapping. concept: {str, EUCLIDEAN}: EUCLIDEAN or MANHATTAN kNeighs {int, 0}: number of neighbors to return (1) rowStandard {bool, True}: row standardize weights? contiguityType {str, Rook}: {Rook = Edges Only, Queen = Edges/Vertices} NOTES: (1) kNeighs is an option often used when you know there are polygon features that are not contiguous (e.g. islands). A kNeighs value of 2 will assure that ALL features have at least 2 neighbors. 
If a polygon is determined to only touch a single other polygon, then a nearest neighbor search based on true centroids are used to find the additional neighbor. ``` Example: Rook [Binary] End of explanation swmFile = OS.path.join(fullPath, "queen.swm") WEIGHTS.polygon2SWM(inputFC, swmFile, masterField, contiguityType = "QUEEN") Explanation: *Example: Queen Contiguity [Row Standardized] End of explanation swmFile = OS.path.join(fullPath, "hybrid.swm") WEIGHTS.polygon2SWM(inputFC, swmFile, masterField, kNeighs = 4) Explanation: Example: Queen Contiguity - KNN Hybrid [Prevents Islands w/ no Neighbors] (1) End of explanation import SSDataObject as SSDO inputFC = r'../data/CA_Polygons.shp' ssdo = SSDO.SSDataObject(inputFC) uniqueIDField = ssdo.oidName ssdo.obtainData(uniqueIDField, requireSearch = True) Explanation: On-the-fly Neighborhood Iterators [GA Table] Reads centroids of input features into spatial tree structure. Distance Based Queries. Scalable: In-memory/disk-space swap for large data. Requires a unique long/short field to identify each features. Can be the OID/FID. Uses requireSearch = True when using ssdo.obtainData Pre-Example: Load the Data into GA Version of SSDataObject End of explanation import arcgisscripting as ARC import WeightsUtilities as WU import gapy as GAPY gaSearch = GAPY.ga_nsearch(ssdo.gaTable) concept, gaConcept = WU.validateDistanceMethod('EUCLIDEAN', ssdo.spatialRef) gaSearch.init_nearest(0.0, 4, gaConcept) neighSearch = ARC._ss.NeighborSearch(ssdo.gaTable, gaSearch) for i in range(len(neighSearch)): neighOrderIDs = neighSearch[i] if i < 5: print(neighOrderIDs) import arcgisscripting as ARC import WeightsUtilities as WU import gapy as GAPY import SSUtilities as UTILS inputGrid = r'D:\Data\UC\UC17\Island\Dykstra\Dykstra.gdb\emerge' ssdo = SSDO.SSDataObject(inputGrid) ssdo.obtainData(ssdo.oidName, requireSearch = True) gaSearch = GAPY.ga_nsearch(ssdo.gaTable) concept, gaConcept = WU.validateDistanceMethod('EUCLIDEAN', ssdo.spatialRef) gaSearch.init_nearest(300., 0, gaConcept) neighSearch = ARC._ss.NeighborSearch(ssdo.gaTable, gaSearch) for i in range(len(neighSearch)): neighOrderIDs = neighSearch[i] x0,y0 = ssdo.xyCoords[i] if i < 5: nhs = ", ".join([str(i) for i in neighOrderIDs]) dist = [] for nh in neighOrderIDs: x1,y1 = ssdo.xyCoords[nh] dij = WU.euclideanDistance(x0,y0,x1,y1) dist.append(UTILS.formatValue(dij, "%0.2f")) print("ID {0} has {1} neighs, they are {2}".format(i, len(neighOrderIDs), nhs)) print("The Distances are... {0}".format(", ".join(dist))) Explanation: Example: NeighborSearch - When you only need your Neighbor IDs gaSearch.init_nearest(distance_band, minimum_num_neighs, {"euclidean", "manhattan") End of explanation gaSearch = GAPY.ga_nsearch(ssdo.gaTable) gaSearch.init_nearest(250000, 0, gaConcept) neighSearch = ARC._ss.NeighborWeights(ssdo.gaTable, gaSearch, weight_type = 0, exponent = 2.0) for i in range(len(neighSearch)): neighOrderIDs, neighWeights = neighSearch[i] if i < 3: print(neighOrderIDs) print(neighWeights) Explanation: Example: NeighborWeights - When you need non-uniform spatial weights (E.g. Inverse Distance Squared) NeighborWeights(gaTable, gaSearch, weight_type [0: inverse_distance, 1: fixed_distance], exponent = 1.0, row_standard = True, include_self = False) End of explanation import pysal as PYSAL import WeightsUtilities as WU import SSUtilities as UTILS def swm2Weights(ssdo, swmfile): Converts ArcGIS Sparse Spatial Weights Matrix (*.swm) file to PySAL Sparse Spatial Weights Class. 
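If the neighbor lists produced by the iterator above need to be reused outside the search structure, they can be captured in an ordinary dictionary first; a minimal sketch, assuming ssdo, gaSearch and the imports from the preceding cells are already in place.
# Store each feature's neighbor order IDs in a plain dict for later reuse
neighSearch = ARC._ss.NeighborSearch(ssdo.gaTable, gaSearch)
neighborDict = {}
for i in range(len(neighSearch)):
    neighborDict[i] = list(neighSearch[i])
print("Stored neighbor lists for {0} features".format(len(neighborDict)))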
INPUTS: ssdo (class): instance of SSDataObject [1,2] swmFile (str): full path to swm file NOTES: (1) Data must already be obtained using ssdo.obtainData() (2) The masterField for the swm file and the ssdo object must be the same and may NOT be the OID/FID/ObjectID neighbors = {} weights = {} #### Create SWM Reader Object #### swm = WU.SWMReader(swmfile) #### SWM May NOT be a Subset of the Data #### if ssdo.numObs > swm.numObs: ARCPY.AddIDMessage("ERROR", 842, ssdo.numObs, swm.numObs) raise SystemExit() #### Parse All SWM Records #### for r in UTILS.ssRange(swm.numObs): info = swm.swm.readEntry() masterID, nn, nhs, w, sumUnstandard = info #### Must Have at Least One Neighbor #### if nn: #### Must be in Selection Set (If Exists) #### if masterID in ssdo.master2Order: outNHS = [] outW = [] #### Transform Master ID to Order ID #### orderID = ssdo.master2Order[masterID] #### Neighbors and Weights Adjusted for Selection #### for nhInd, nhVal in enumerate(nhs): try: nhOrder = ssdo.master2Order[nhVal] outNHS.append(nhOrder) weightVal = w[nhInd] if swm.rowStandard: weightVal = weightVal * sumUnstandard[0] outW.append(weightVal) except KeyError: pass #### Add Selected Neighbors/Weights #### if len(outNHS): neighbors[orderID] = outNHS weights[orderID] = outW swm.close() #### Construct PySAL Spatial Weights and Standardize as per SWM #### w = PYSAL.W(neighbors, weights) if swm.rowStandard: w.transform = 'R' return w def poly2Weights(ssdo, contiguityType = "ROOK", rowStandard = True): Uses GP Polygon Neighbor Tool to construct contiguity relationships and stores them in PySAL Sparse Spatial Weights class. INPUTS: ssdo (class): instance of SSDataObject [1] contiguityType {str, ROOK}: ROOK or QUEEN contiguity rowStandard {bool, True}: whether to row standardize the spatial weights NOTES: (1) Data must already be obtained using ssdo.obtainData() or ssdo.obtainDataGA () neighbors = {} weights = {} polyNeighDict = WU.polygonNeighborDict(ssdo.inputFC, ssdo.masterField, contiguityType = contiguityType) for masterID, neighIDs in UTILS.iteritems(polyNeighDict): orderID = ssdo.master2Order[masterID] neighbors[orderID] = [ssdo.master2Order[i] for i in neighIDs] w = PYSAL.W(neighbors) if rowStandard: w.transform = 'R' return w def distance2Weights(ssdo, neighborType = 1, distanceBand = 0.0, numNeighs = 0, distanceType = "euclidean", exponent = 1.0, rowStandard = True, includeSelf = False): Uses ArcGIS Neighborhood Searching Structure to create a PySAL Sparse Spatial Weights Matrix. 
INPUTS: ssdo (class): instance of SSDataObject [1] neighborType {int, 1}: 0 = inverse distance, 1 = fixed distance, 2 = k-nearest-neighbors, 3 = delaunay distanceBand {float, 0.0}: return all neighbors within this distance for inverse/fixed distance numNeighs {int, 0}: number of neighbors for k-nearest-neighbor, can also be used to set a minimum number of neighbors for inverse/fixed distance distanceType {str, euclidean}: manhattan or euclidean distance [2] exponent {float, 1.0}: distance decay factor for inverse distance rowStandard {bool, True}: whether to row standardize the spatial weights includeSelf {bool, False}: whether to return self as a neighbor NOTES: (1) Data must already be obtained using ssdo.obtainDataGA() (2) Chordal Distance is used for GCS Data neighbors = {} weights = {} gaSearch = GAPY.ga_nsearch(ssdo.gaTable) if neighborType == 3: gaSearch.init_delaunay() neighSearch = ARC._ss.NeighborWeights(ssdo.gaTable, gaSearch, weight_type = 1) else: if neighborType == 2: distanceBand = 0.0 weightType = 1 else: weightType = neighborType concept, gaConcept = WU.validateDistanceMethod(distanceType.upper(), ssdo.spatialRef) gaSearch.init_nearest(distanceBand, numNeighs, gaConcept) neighSearch = ARC._ss.NeighborWeights(ssdo.gaTable, gaSearch, weight_type = weightType, exponent = exponent, include_self = includeSelf) for i in range(len(neighSearch)): neighOrderIDs, neighWeights = neighSearch[i] neighbors[i] = neighOrderIDs weights[i] = neighWeights w = PYSAL.W(neighbors, weights) if rowStandard: w.transform = 'R' return w Explanation: Contructing PySAL Spatial Weights Convert masterID to orderID when using ssdo.obtainData (SWM File, Polygon Contiguity) Data is already in orderID when using ssdo.obtainDataGA (Distance Based) Methods in next cell can be imported from pysal2ArcGIS.py End of explanation import WeightConvertor as W_CONVERT swmFile = OS.path.join(fullPath, "queen.swm") galFile = OS.path.join(fullPath, "queen.gal") convert = W_CONVERT.WeightConvertor(swmFile, galFile, inputFC, "MYID", "SWM", "GAL") convert.createOutput() Explanation: Converting Spatial Weight Matrix Formats (e.g. .swm, .gwt, *.gal) Follow directions at the PySAL-ArcGIS-Toolbox Git Repository [https://github.com/Esri/PySAL-ArcGIS-Toolbox] Please make note of the section on Adding a Git Project to your ArcGIS Installation Python Path. 
End of explanation import numpy as NUM NUM.random.seed(100) ssdo = SSDO.SSDataObject(inputFC) uniqueIDField = "MYID" fieldNames = ['PCR2010', 'POP2010', 'PERCNOHS'] ssdo.obtainDataGA(uniqueIDField, fieldNames) df = ssdo.getDataFrame() X = df.as_matrix() swmFile = OS.path.join(fullPath, "rook_bin.swm") w = swm2Weights(ssdo, swmFile) maxp = PYSAL.region.Maxp(w, X[:,0:2], 3000000., floor_variable = X[:,2]) maxpGroups = NUM.empty((ssdo.numObs,), int) for regionID, orderIDs in enumerate(maxp.regions): maxpGroups[orderIDs] = regionID print((regionID, orderIDs)) Explanation: Calling MaxP Regions Using SWM Based on Rook Contiguity, No Row Standardization End of explanation NUM.random.seed(100) w = poly2Weights(ssdo, rowStandard = False) maxp = PYSAL.region.Maxp(w, X[:,0:2], 3000000., floor_variable = X[:,2]) maxpGroups = NUM.empty((ssdo.numObs,), int) for regionID, orderIDs in enumerate(maxp.regions): maxpGroups[orderIDs] = regionID print((regionID, orderIDs)) Explanation: Calling MaxP Regions Using Rook Contiguity, No Row Standardization End of explanation NUM.random.seed(100) w = distance2Weights(ssdo, distanceBand = 250000.0, numNeighs = 2) maxp = PYSAL.region.Maxp(w, X[:,0:2], 3000000., floor_variable = X[:,2]) maxpGroups = NUM.empty((ssdo.numObs,), int) for regionID, orderIDs in enumerate(maxp.regions): maxpGroups[orderIDs] = regionID print((regionID, orderIDs)) Explanation: Identical results because the random seed was set to 100 and they have the same spatial neighborhood Calling MaxP Regions Using Fixed Distance 250000, Hyrbid to Assure at least 2 Neighbors End of explanation
11,236
Given the following text description, write Python code to implement the functionality described below step by step Description: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. Step1: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following Step3: "What are all possible labels?" 0 airplane 1 automobile 2 bird 3 cat 4 deer 5 dog 6 frog 7 horse 8 ship 9 truck "What is the range of values for the image data?" 32 x 32 x 3 x {0..255} "Are the labels in order or random?" random Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x. Step5: One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Step6: Randomize Data As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation. Step7: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step11: Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note Step13: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling Step15: Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option Step17: Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). 
Shortcut option Step19: Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option Step21: Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model Step23: Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following Step25: Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy. Step26: Hyperparameters Tune the following parameters Step27: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. Step28: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. Step30: Checkpoint The model has been saved to disk. Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
Python Code: from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' # Use Floyd's cifar-10 dataset if present floyd_cifar10_location = '/input/cifar-10/python.tar.gz' if isfile(floyd_cifar10_location): tar_gz_path = floyd_cifar10_location else: tar_gz_path = 'cifar-10-python.tar.gz' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(tar_gz_path): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', tar_gz_path, pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open(tar_gz_path) as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) Explanation: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. End of explanation %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 4 sample_id = 9 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id) Explanation: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions. End of explanation def normalize(x): Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data return np.array(x) / 255 tests.test_normalize(normalize) Explanation: "What are all possible labels?" 0 airplane 1 automobile 2 bird 3 cat 4 deer 5 dog 6 frog 7 horse 8 ship 9 truck "What is the range of values for the image data?" 32 x 32 x 3 x {0..255} "Are the labels in order or random?" random Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x. End of explanation def one_hot_encode(x): One hot encode a list of sample labels. 
Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels from sklearn.preprocessing import LabelBinarizer encoder = LabelBinarizer() encoder.fit([i for i in range(10)]) return encoder.transform(x) tests.test_one_hot_encode(one_hot_encode) Explanation: One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. End of explanation # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode) Explanation: Randomize Data As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation. End of explanation import pickle import problem_unittests as tests import helper # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb')) valid_labels Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation import tensorflow as tf def neural_net_image_input(image_shape): Return a Tensor for a batch of image input : image_shape: Shape of the images : return: Tensor for image input. x = tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], name='x') return x def neural_net_label_input(n_classes): Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. y = tf.placeholder(tf.int32, [None, n_classes], name='y') return y def neural_net_keep_prob_input(): Return a Tensor for keep probability : return: Tensor for keep probability. return tf.placeholder(tf.float32, name='keep_prob') tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input) Explanation: Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup. 
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allow for a dynamic size. End of explanation def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor num_units = (int(x_tensor.shape[2]) / conv_strides[0] / pool_strides[0])**2 * conv_num_outputs deviation = 1/np.sqrt(num_units) W = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], int(x_tensor.shape[3]), conv_num_outputs], mean=0.0, stddev=deviation)) b = tf.Variable(tf.truncated_normal([conv_num_outputs], mean=0.0, stddev=deviation)) x_conved = tf.nn.conv2d(x_tensor, W, strides=[1, conv_strides[0], conv_strides[0], 1], padding='SAME') x_biased = tf.nn.bias_add(x_conved, b) x_rected = tf.nn.relu(x_biased) return tf.nn.max_pool(x_rected, ksize=[1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME') tests.test_con_pool(conv2d_maxpool) Explanation: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers. 
End of explanation def flatten(x_tensor): Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). height = int(x_tensor.shape[1]) width = int(x_tensor.shape[2]) depth = int(x_tensor.shape[3]) volume = width * height * depth return tf.reshape(x_tensor, [-1, volume]) tests.test_flatten(flatten) Explanation: Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation def fully_conn(x_tensor, num_outputs): Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. height = int(x_tensor.shape[1]) deviation = 1/np.sqrt(num_outputs) W = tf.Variable(tf.truncated_normal([height, num_outputs], mean=0.0, stddev=deviation)) b = tf.Variable(tf.truncated_normal([num_outputs], mean=0.0, stddev=deviation)) fc1 = tf.add(tf.matmul(x_tensor, W), b) return tf.nn.relu(fc1) tests.test_fully_conn(fully_conn) Explanation: Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation def output(x_tensor, num_outputs): Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. height = int(x_tensor.shape[1]) deviation = 1/np.sqrt(num_outputs) W = tf.Variable(tf.truncated_normal([height, num_outputs], mean=0.0, stddev=deviation)) b = tf.Variable(tf.truncated_normal([num_outputs], mean=0.0, stddev=deviation)) return tf.add(tf.matmul(x_tensor, W), b) tests.test_output(output) Explanation: Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this. End of explanation def conv_net(x, keep_prob): Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. 
: return: Tensor that represents logits # Play around with different number of outputs, kernel size and stride # 32x32x3 conv1 = conv2d_maxpool(x, 16, [5, 5], [1, 1], [2, 2], [2, 2]) # 16x16x16 conv2 = conv2d_maxpool(conv1, 32, [3, 3], [1, 1], [2, 2], [2, 2]) # 8x8x32 conv2f = flatten(conv2) # 2048 fc1 = fully_conn(conv2f, 256) fc1d = tf.nn.dropout(fc1, keep_prob) # 128 logits = output(fc1d, 10) #10 return logits ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net) Explanation: Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Fully Connected Layers Apply an Output Layer Return the output Apply TensorFlow's Dropout to one or more layers in the model using keep_prob. End of explanation def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability}) tests.test_train_nn(train_neural_network) Explanation: Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: Nothing needs to be returned. This function is only optimizing the neural network. 
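Since this function is called once per batch, it can help to see how a batch stream might be produced. The project supplies helper.load_preprocess_training_batch for that purpose; the generator below is only a hedged, standalone sketch of the same idea with made-up array shapes.
import numpy as np
def iterate_minibatches(features, labels, batch_size):
    # Yield successive (features, labels) slices; the last slice may be smaller.
    for start in range(0, len(features), batch_size):
        yield features[start:start + batch_size], labels[start:start + batch_size]
dummy_x = np.random.rand(10, 32, 32, 3)   # stand-in image batch
dummy_y = np.random.rand(10, 10)          # stand-in one-hot labels
for batch_x, batch_y in iterate_minibatches(dummy_x, dummy_y, 4):
    print(batch_x.shape, batch_y.shape)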
End of explanation def print_stats(session, feature_batch, label_batch, cost, accuracy): Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) tr_cost, tr_acc = session.run([cost, accuracy], feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0}) va_cost, va_acc = session.run([cost, accuracy], feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0}) print("Losses: {:>7.1f} {:>7.1f}\tAccuracies: {:>5.3f} {:>5.3f}".format(tr_cost, va_cost, tr_acc, va_acc)) Explanation: Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy. End of explanation # TODO: Tune Parameters epochs = 100 batch_size = 4096 keep_probability = 0.5 Explanation: Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or start overfitting * Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ... * Set keep_probability to the probability of keeping a node using dropout End of explanation print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) Explanation: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. End of explanation save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) # Save Model saver = tf.train.Saver() save_path = saver.save(sess, save_model_path) Explanation: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. 
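One optional refinement when moving to all five batches is to stop training once validation accuracy stops improving. The helper below is an illustrative sketch only, not part of the project code; val_history is assumed to hold one validation accuracy per epoch, most recent last.
def should_stop(val_history, patience=5):
    # Stop if the best accuracy of the last `patience` epochs does not beat the earlier best.
    if len(val_history) <= patience:
        return False
    return max(val_history[-patience:]) <= max(val_history[:-patience])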
End of explanation %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): Test the saved model against the test dataset test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model() Explanation: Checkpoint The model has been saved to disk. Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. End of explanation
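A small hedged aside on the test loop above: averaging per-batch accuracies is exact only when every batch has the same size. A size-weighted mean, sketched below with assumed inputs, removes that caveat when the final batch is shorter than the rest.
def weighted_accuracy(batch_accs, batch_sizes):
    # batch_accs and batch_sizes are parallel lists, one entry per evaluated batch.
    total = sum(batch_sizes)
    return sum(acc * size for acc, size in zip(batch_accs, batch_sizes)) / total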
11,237
Given the following text description, write Python code to implement the functionality described below step by step Description: TensorFlow's Deep MNIST tutorial https Step1: Initiate a tf.session We're going to eventually define a graph which will represent a "dataflow computation". Before we start buiding our graph by creating nodes, we first initial a tf.session. A session allows us to execute graphs. It also allows for the specification of resource allocation (more than one CPU/GPU/machine). The session also holds the values of our intermediate results during training and the values of variables during training. Step2: Variables A Variable is a value that "lives in TensorFlow's computation graph". Step3: Before we can use them, gotta intialize them Step4: Let's add in a regression model. Step5: Specify a corss-entropy loss function. So, the cross-entropy between the target and the softmax activation function applied to the model's prediction. Note that tf.nn.softmax_cross_entropy_with_logits internally applies the softmax on the model's unnormalized model prediction and sums across all classes, and tf.reduce_mean takes the average over these sums. Step6: Train the model Step7: Evaluate the model Step8: Now let's build a multilayer convolutional network Step9: Convolution and pooling
Python Code: #load mnist data from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', one_hot=True) Explanation: TensorFlow's Deep MNIST tutorial https://www.tensorflow.org/get_started/mnist/pros start tf.session define a model define a training loss function train using TensorFlow End of explanation #Start TensorFlow InteractiveSession import tensorflow as tf sess = tf.InteractiveSession() # create placeholder nodes for the input images and target output #x will consist of a 2d tensor floating point numbers. 784 = 28*28 pixels # None indicates the batch size, because we specify 'None' can be a variable size x = tf.placeholder(tf.float32, shape=[None, 784]) #y_ is another 2d tensor, where each row isicallly a one-hot 10 diminsional vector #shape option allows TF to automatically catch bugs due to inconsistent # tensor shapes y_ = tf.placeholder(tf.float32, shape=[None, 10]) Explanation: Initiate a tf.session We're going to eventually define a graph which will represent a "dataflow computation". Before we start buiding our graph by creating nodes, we first initial a tf.session. A session allows us to execute graphs. It also allows for the specification of resource allocation (more than one CPU/GPU/machine). The session also holds the values of our intermediate results during training and the values of variables during training. End of explanation W = tf.Variable(tf.zeros([784,10])) b = tf.Variable(tf.zeros([10])) Explanation: Variables A Variable is a value that "lives in TensorFlow's computation graph". End of explanation sess.run(tf.global_variables_initializer()) Explanation: Before we can use them, gotta intialize them End of explanation #x: input images #W: weight matrix #b: bias y = tf.matmul(x,W) + b Explanation: Let's add in a regression model. End of explanation cross_entropy = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)) Explanation: Specify a corss-entropy loss function. So, the cross-entropy between the target and the softmax activation function applied to the model's prediction. Note that tf.nn.softmax_cross_entropy_with_logits internally applies the softmax on the model's unnormalized model prediction and sums across all classes, and tf.reduce_mean takes the average over these sums. 
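To see concretely what that computes, here is a hedged NumPy re-derivation for a single made-up example; the numbers are arbitrary and only meant to mirror the softmax-then-negative-log-likelihood described above.
import numpy as np
logits = np.array([2.0, 1.0, 0.1])               # unnormalized model outputs (assumed values)
label = np.array([1.0, 0.0, 0.0])                # one-hot target
softmax = np.exp(logits) / np.sum(np.exp(logits))
cross_entropy_by_hand = -np.sum(label * np.log(softmax))
print(cross_entropy_by_hand)                     # roughly 0.417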
End of explanation %%time train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) for _ in range(1000): batch = mnist.train.next_batch(100) train_step.run(feed_dict={x: batch[0], y_: batch[1]}) batch = mnist.train.next_batch(100) batch[1].shape Explanation: Train the model End of explanation #tf.argmax gives an index of the highest entry in a tensor along some axis correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1)) #we can take this list of booleans and calculate the fraction correct accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) #print the accuracy print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels})) Explanation: Evaluate the model End of explanation #because we're gonna need def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial) Explanation: Now let's build a multilayer convolutional network End of explanation def conv2d(x, W): return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') def max_pool_2x2(x): return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') #first convo layer W_conv1 = weight_variable([5, 5, 1, 32]) b_conv1 = bias_variable([32]) #reshape x to a 4d tensor x_image = tf.reshape(x, [-1,28,28,1]) #convolve x_image with the weight tensor, add bias, #apply the ReLU function, and finally max pool h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1) h_pool1 = max_pool_2x2(h_conv1) #second convo layer W_conv2 = weight_variable([5, 5, 32, 64]) b_conv2 = bias_variable([64]) h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2) h_pool2 = max_pool_2x2(h_conv2) # densely connected layer W_fc1 = weight_variable([7 * 7 * 64, 1024]) b_fc1 = bias_variable([1024]) h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64]) h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1) # dropout keep_prob = tf.placeholder(tf.float32) h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob) # readout layer W_fc2 = weight_variable([1024, 10]) b_fc2 = bias_variable([10]) y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2 %% time cross_entropy = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv)) train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) sess.run(tf.global_variables_initializer()) for i in range(20000): batch = mnist.train.next_batch(50) if i%100 == 0: train_accuracy = accuracy.eval(feed_dict={ x:batch[0], y_: batch[1], keep_prob: 1.0}) print("step %d, training accuracy %g"%(i, train_accuracy)) train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5}) print("test accuracy %g"%accuracy.eval(feed_dict={ x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})) Explanation: Convolution and pooling End of explanation
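As a hedged addendum, the layer shapes defined above make it easy to count the trainable parameters of this network by hand; the arithmetic below simply multiplies out the weight and bias shapes used in the cells.
parameter_counts = {
    'W_conv1': 5 * 5 * 1 * 32,    'b_conv1': 32,
    'W_conv2': 5 * 5 * 32 * 64,   'b_conv2': 64,
    'W_fc1':   7 * 7 * 64 * 1024, 'b_fc1':   1024,
    'W_fc2':   1024 * 10,         'b_fc2':   10,
}
print(sum(parameter_counts.values()))  # about 3.27 million parameters, dominated by the first fully connected layer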
11,238
Given the following text description, write Python code to implement the functionality described below step by step Description: Import libraries Step1: Define source paths Step2: Import data Step3: Data exploration Shape, types Step4: Missing values Step5: We want to know what you look for in the opposite sex. Step6: What is your primary goal in participating in this event? Seemed like a fun night out=1, To meet new people=2, To get a date=3, Looking for a serious relationship=4, To say I did it=5, Other=6 Step7: In general, how frequently do you go on dates? Several times a week=1 Twice a week=2 Once a week=3 Twice a month=4 Once a month=5 Several times a year=6 Almost never=7
Python Code: import pandas as pd import numpy as np import matplotlib.pyplot as plt pd.set_option('display.max_columns', None) %matplotlib inline Explanation: Import libraries End of explanation source_path = "/Users/sandrapietrowska/Documents/Trainings/luigi/data_source/" Explanation: Define source paths End of explanation raw_dataset = pd.read_csv(source_path + "Speed_Dating_Data.csv") Explanation: Import data End of explanation raw_dataset.shape raw_dataset.head() raw_dataset.dtypes.value_counts() Explanation: Data exploration Shape, types End of explanation raw_dataset.isnull().sum().head(10) summary = raw_dataset.describe().transpose() print summary.head(15) plt.hist(raw_dataset['age'].dropna()); Explanation: Missing values End of explanation # Attractiveness plt.hist(raw_dataset['attr_o'].dropna()); # Sincere plt.hist(raw_dataset['sinc_o'].dropna()); # Intelligent plt.hist(raw_dataset['intel_o'].dropna()) ; # Fun plt.hist(raw_dataset['fun_o'].dropna()); # Ambitious plt.hist(raw_dataset['amb_o'].dropna()); Explanation: We want to know what you look for in the opposite sex. End of explanation raw_dataset.groupby('date').iid.nunique().sort_values(ascending=False) Explanation: What is your primary goal in participating in this event? Seemed like a fun night out=1, To meet new people=2, To get a date=3, Looking for a serious relationship=4, To say I did it=5, Other=6 End of explanation raw_dataset.groupby('go_out').iid.nunique().sort_values(ascending=False) raw_dataset.groupby('career').iid.nunique().sort_values(ascending=False).head(10) Explanation: In general, how frequently do you go on dates? Several times a week=1 Twice a week=2 Once a week=3 Twice a month=4 Once a month=5 Several times a year=6 Almost never=7 End of explanation
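A hedged follow-up that can make these grouped counts easier to read: the numeric survey codes listed above can be mapped onto their labels with pandas. The column name go_out_label is an arbitrary choice for this sketch.
go_out_labels = {1: 'Several times a week', 2: 'Twice a week', 3: 'Once a week',
                 4: 'Twice a month', 5: 'Once a month', 6: 'Several times a year',
                 7: 'Almost never'}
raw_dataset['go_out_label'] = raw_dataset['go_out'].map(go_out_labels)
raw_dataset.groupby('go_out_label').iid.nunique().sort_values(ascending=False)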
11,239
Given the following text description, write Python code to implement the functionality described below step by step Description: Example to_hdf calls Initialize the simulation with the tardis_example.yml configuration file. Step1: Run the simulation while storing all its iterations to an HDF file. The first parameter is the path where the HDF file should be stored. The second parameter determines which properties will be stored. When its value is 'input', only Input plasma properties will be stored. The third parameter, hdf_last_only, if True will only store the last iteration of the simulation, otherwise every iteration will be stored. Step2: Open the stored HDF file with pandas and print its structure. Step3: Access model.plasma.density of the 9th simulation, which is a one-dimensional array Step4: Scalars are stored in a scalars pandas.Series for every module. For example to access model.t_inner of the 9th iteration of the simulation, one would need to do the following. Note Step5: Breakdown of the various to_hdf methods Every module in TARDIS has its own to_hdf method responsible to store its own data to an HDF file. Plasma The following call will store every plasma property to /tmp/plasma_output.hdf under /parent/plasma Step6: Plasma's to_hdf method can also accept a collection parameter which can specify which types of plasma properties will be stored. For example if we wanted to only store Input plasma properties, we would do the following Step7: Model The following call will store properties of the Radial1DModel to /tmp/model_output.hdf under /model. Additionally, it will automatically call model.plasma.to_hdf, since plasma is also a property of the model. Step8: MontecarloRunner The following call will store properties of the MontecarloRunner to /tmp/runner_output.hdf under /runner
Python Code: from tardis.io.config_reader import Configuration from tardis.model import Radial1DModel from tardis.simulation import Simulation # Must have the tardis_example folder in the working directory. config_fname = 'tardis_example/tardis_example.yml' tardis_config = Configuration.from_yaml(config_fname) model = Radial1DModel(tardis_config) simulation = Simulation(tardis_config) Explanation: Example to_hdf calls Initialize the simulation with the tardis_example.yml configuration file. End of explanation simulation.legacy_run_simulation(model, '/tmp/full_example.hdf', 'full', hdf_last_only=False) Explanation: Run the simulation while storing all its iterations to an HDF file. The first parameter is the path where the HDF file should be stored. The second parameter determines which properties will be stored. When its value is 'input', only Input plasma properties will be stored. The third parameter, hdf_last_only, if True will only store the last iteration of the simulation, otherwise every iteration will be stored. End of explanation import pandas as pd data = pd.HDFStore('/tmp/full_example.hdf') print data Explanation: Open the stored HDF file with pandas and print its structure. End of explanation print data['/simulation9/model/plasma/density'] Explanation: Access model.plasma.density of the 9th simulation, which is a one-dimensional array End of explanation print data['/simulation9/model/scalars']['t_inner'] Explanation: Scalars are stored in a scalars pandas.Series for every module. For example to access model.t_inner of the 9th iteration of the simulation, one would need to do the following. Note: Quantities are always stored as their SI values. End of explanation model.plasma.to_hdf('/tmp/plasma_output.hdf', path='parent') import pandas with pandas.HDFStore('/tmp/plasma_output.hdf') as data: print data Explanation: Breakdown of the various to_hdf methods Every module in TARDIS has its own to_hdf method responsible to store its own data to an HDF file. Plasma The following call will store every plasma property to /tmp/plasma_output.hdf under /parent/plasma End of explanation from tardis.plasma.properties.base import Input model.plasma.to_hdf('/tmp/plasma_input_output.hdf', collection=[Input]) import pandas with pandas.HDFStore('/tmp/plasma_input_output.hdf') as data: print data Explanation: Plasma's to_hdf method can also accept a collection parameter which can specify which types of plasma properties will be stored. For example if we wanted to only store Input plasma properties, we would do the following: End of explanation model.to_hdf('/tmp/model_output.hdf') Explanation: Model The following call will store properties of the Radial1DModel to /tmp/model_output.hdf under /model. Additionally, it will automatically call model.plasma.to_hdf, since plasma is also a property of the model. End of explanation simulation.runner.to_hdf('/tmp/runner_output.hdf') import pandas with pandas.HDFStore('/tmp/runner_output.hdf') as data: print data Explanation: MontecarloRunner The following call will store properties of the MontecarloRunner to /tmp/runner_output.hdf under /runner End of explanation
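As a small hedged extra, pandas.HDFStore can also list every key written by the calls above, which is a quick way to check what ended up in a given file; the path reuses the runner output from the last step.
import pandas
with pandas.HDFStore('/tmp/runner_output.hdf') as store:
    print(store.keys())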
11,240
Given the following text description, write Python code to implement the functionality described below step by step Description: Table of Contents <p><div class="lev1 toc-item"><a href="#Preparations" data-toc-modified-id="Preparations-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Preparations</a></div><div class="lev2 toc-item"><a href="#Get-fulltext-" data-toc-modified-id="Get-fulltext--11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Get fulltext </a></div><div class="lev2 toc-item"><a href="#Segment-source-text" data-toc-modified-id="Segment-source-text-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Segment source text</a></div><div class="lev2 toc-item"><a href="#Read-segments-into-a-variable-" data-toc-modified-id="Read-segments-into-a-variable--13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Read segments into a variable </a></div><div class="lev2 toc-item"><a href="#Tokenising-" data-toc-modified-id="Tokenising--14"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Tokenising </a></div><div class="lev2 toc-item"><a href="#Stemming-/-Lemmatising-" data-toc-modified-id="Stemming-/-Lemmatising--15"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Stemming / Lemmatising </a></div><div class="lev2 toc-item"><a href="#Eliminate-Stopwords-" data-toc-modified-id="Eliminate-Stopwords--16"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Eliminate Stopwords </a></div><div class="lev1 toc-item"><a href="#Characterise-passages Step1: Segment source text<a name="SegmentSourceText"></a> Next, as mentioned above, we want to associate information with only passages of the text, not the text as a whole. Therefore, the text has to be segmented. The one single file is being split into meaningful smaller chunks. What exactly constitutes a meaningful chunk -- a chapter, an article, a paragraph etc. -- cannot be known independently of the text in question and of the research questions. Therefore, it is suggested that the scholar either splits the text manually or inserts some symbols that otherwise do not appear in the text. Then, processing tools can identify these and split the file accordingly. For keeping things neat and orderly, the resulting files should be saved in a directory of their own... Here, I am splitting the file arbitrarily every 80 lines... Note though, that this leads to a rather unusual condition Step2: Read segments into a variable <a name="ReadSegmentsIntoVariable"></a> From the segments, we rebuild our corpus, iterating through them and reading them into another variable (which now stores, technically speaking, a list of strings). Step3: Now we should have 45 strings in the variable corpus to play around with Step4: For a quick impression, let's see the opening 500 characters of an arbitrary one of them Step5: Tokenising <a name="Tokenising"></a> "Tokenising" means splitting the long lines of the input into single words. Since we are dealing with plain latin, we can use the default split method which relies on spaces to identify word boundaries. (In languages like Japanese or scripts like Arabic, this is more difficult.) Note that we do not compensate for words that are hyphenated/split across lines here! Step6: For our examples, let's have a look at (the first 50 words of) an arbitrary one of those segments Step7: Already, we can have a first go at finding the most frequent words for a segment. (For this we use a simple library of functions that we import by the name of 'collections'.) Step8: Perhaps now is a good opportunity for a small excursus. 
What we have printed in the last code is a series of pairs Step9: Looks better now, doesn't it? Stemming / Lemmatising <a name="StemmingLemmatising"></a> Next, since we prefer to count different word forms as one and the same "lemma", we need to do a step called "lemmatisation". In languages like English, that are not strongly inflected, one can get away with "stemming", i.e. just eliminating the ending of words Step10: So, we can again build a dictionary of key-value pairs associating all the lemmata ("values") with their wordforms ("keys") Step11: Again, a quick test Step12: Now we can use this dictionary to build a new list of words, where only lemmatised forms occur Step13: As you can see, the original text is lost now from the data that we are currently working with (unless we add another dimension to our lemmatised variable which can keep the original word form). But let us see if something in the 10 most frequent words has changed Step14: Yes, things have changed Step15: Now let's try and suppress the stopwords in the segments... Step16: With this, we can already create a first "profile" of our first 4 segments
Python Code: bigsourcefile = 'TextProcessing_2017/W0013.orig.txt' # This is the path to our file input = open(bigsourcefile, encoding='utf-8').readlines() # We use a variable 'input' for # keeping its contents. input[:10] # Just for information, # let's see the first 10 lines of the file. Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Preparations" data-toc-modified-id="Preparations-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Preparations</a></div><div class="lev2 toc-item"><a href="#Get-fulltext-" data-toc-modified-id="Get-fulltext--11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Get fulltext </a></div><div class="lev2 toc-item"><a href="#Segment-source-text" data-toc-modified-id="Segment-source-text-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Segment source text</a></div><div class="lev2 toc-item"><a href="#Read-segments-into-a-variable-" data-toc-modified-id="Read-segments-into-a-variable--13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Read segments into a variable </a></div><div class="lev2 toc-item"><a href="#Tokenising-" data-toc-modified-id="Tokenising--14"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Tokenising </a></div><div class="lev2 toc-item"><a href="#Stemming-/-Lemmatising-" data-toc-modified-id="Stemming-/-Lemmatising--15"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Stemming / Lemmatising </a></div><div class="lev2 toc-item"><a href="#Eliminate-Stopwords-" data-toc-modified-id="Eliminate-Stopwords--16"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Eliminate Stopwords </a></div><div class="lev1 toc-item"><a href="#Characterise-passages:-TF/IDF" data-toc-modified-id="Characterise-passages:-TF/IDF-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Characterise passages: TF/IDF</a></div><div class="lev1 toc-item"><a href="#Find-similar-passages:-Clustering" data-toc-modified-id="Find-similar-passages:-Clustering-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Find similar passages: Clustering</a></div><div class="lev1 toc-item"><a href="#Topic-Modelling" data-toc-modified-id="Topic-Modelling-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Topic Modelling</a></div><div class="lev1 toc-item"><a href="#Manual-Annotation" data-toc-modified-id="Manual-Annotation-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Manual Annotation</a></div><div class="lev1 toc-item"><a href="#Cope-with-different-languages" data-toc-modified-id="Cope-with-different-languages-6"><span class="toc-item-num">6&nbsp;&nbsp;</span>Cope with different languages</a></div><div class="lev1 toc-item"><a href="#Further-information" data-toc-modified-id="Further-information-7"><span class="toc-item-num">7&nbsp;&nbsp;</span>Further information</a></div> **Text Processing** This is an introduction to some algorithms used in text analysis. While I cannot define **what questions** a scholar can ask, I can and do describe here **the kind of information** about text that some popular methods deliver. From this, you need to draw on your own research interests and creativity... I will describe methods of finding words that are characteristic for a certain passage ("tf/tdf"), constructing fingerprints for passages that go beyond the most significant words ("word vectors"), group passages according to their similarity ("clustering"), and forming an idea about different contexts being treated in a passage ("topic modelling"). 
Of course, an important resource in text analysis is the hermeneutic interpretation of the scholar herself, so I will present a method of adding manual annotations to the text, and finally I will also say something about possible approaches to working across languages. This page will *not* cover stylistic analyses ("stylometry") and typical neighborship relations between words ("collocation", "word2vec"). Maybe these can be discussed at another occasion and on another page. For many of the steps discussed on this page there are ready-made tools and libraries, often with easy interfaces. But first, it is important to understand what these tools are actually doing and how their results are affected by the selection of parameters (that one can or cannot modify). And second, most of these tools expect the input to be in some particular format, say, a series of plaintext files in their own directory. So, by understanding the process, you should be better prepared to provide your text to the tools in the most productive way. Finally, it is important to be aware of what information has been **lost** at which point in the process. If the research requires so, one can then either look for a different tool or approach to this step (e.g. using an additional dimension in the list of words to keep both original and regularized word forms, or to remember the position of the current token in the original text), or one can compensate for the data loss (e.g. offering a lemmatised search to find occurrences after the analysis returns only normalised word forms)... # Preparations As indicated above, before doing maths, language processing tools normally expect their input to be in a certain format. First of all, you have to have an input in the first place: Therefore, a scholar wishing to experiment with such methods should avail herself of the text that should be studied, as a full transcription. This can be done by transcribing it herself, using transcriptions that are available from elsewhere, or even from OCR. (Although in the latter case, the results depend of course on the quality of the OCR output.) Second, many tools get tripped up when formatting or bibliographical metainformation is included in their input. And since the approaches presented here are not concerned with a digital edition or any other form of true representation of the source, *markup* (e.g. for bold font, heading or note elements) should be *suppressed*. (Other tools accept marked up text and strip the formatting internally.) For another detail regarding these plain text files, we have to make a short excursus, because even with plain text, there are some important aspects to consider: As you surely know, computers understand number only and as you probably also know, the first standards to encode alphanumeric characters, like ASCII, in numbers were designed for teleprinters and the reduced character set of the english language. When more extraordinary characters, like *Umlauts* or *accents* were to be encoded, one had to rely on extra rules, of which - unfortunately - there have been quite a lot. These are called "encodings" and one of the more important set of such rules are the windows encodings (e.g. CP-1252), another one is called Latin-9/ISO 8859-15 (it differs from the older Latin-1 encoding among others by including the Euro sign). 
Maybe you have seen web pages with garbled *Umlauts* or other special characters, then that was probably because your browser interpreted the numbers according to an encoding different from the one that the webpage author used. Anyway, the point here is that there is another standard encompassing virtually all the special signs from all languages and for a few years now, it is also supported quite well by operating systems, programming languages and linguistic tools. This standard is called "Unicode" and the encoding you want to use is called *utf-8*. So when you export or import your texts, try to make sure that this is what is used. ([Here](https://unicode-table.com/) is a webpage with the complete unicode table - it is loaded incrementally, so make sure to scroll down. But on the other hand, it is so extensive that you don't want to scroll through all the table...) Also, you should consider whether or not you can replace *abbreviations* with their expanded versions. While at some points (e.g. when lemmatising), you can associate expansions to abbreviations, the whole processing is easier when words in the text are indeed words, and periods are rather sentence punctuation than abbreviation signs. Of course, this also depends on the effort you can spend on the text... This section describes how the plaintext can further be prepared for analyses: E.g. if you want to process the *distribution* of words in the text, the processing method has to have some notion of different places in the text -- normally you want to manage words not according to their absolute position in the whole work (say, the 6.349th word and the 3.100th), but according to their occurrence in a particular section (say, in the third chapter, without caring too much whether it is in the 13th or in the 643th position in this chapter). So, you partition the text into meaningful segments which you can then label, compare etc. Other preparatory work includes suppressing stopwords (like "the", "is", "of" in english) or making the tools manage different forms of the same word or different historical writings identically. Here is what falls under this category: 1. [Get fulltext](#GetFulltext) 2. [Segment source text](#SegmentSourceText) 3. [Read segments into Variable/List](#ReadSegmentsIntoVariable) 4. [Tokenising](#Tokenising) 5. [Stemming/Lemmatising](#StemmingLemmatising) 6. [Eliminate stopwords](#EliminateStopwords) ## Get fulltext <a name="GetFulltext"></a> For the examples given on this page, I have loaded a plaintext export of Francisco de Vitoria's "Relectiones" from the School of Salamanca's project, available as one single file at this URL: [http://api.salamanca.school/txt/works.W0013.orig]. I have saved this to the file **TextProcessing_2017/W0013.orig.txt**. End of explanation splitLen = 80 # 80 lines per file outputBase = 'TextProcessing_2017/segment' # source/segment.1.txt, source/segment.2.txt, etc. count = 0 # initialise some variables. at = 0 dest = None # this later takes our destination files for line in input: if count % splitLen == 0: if dest: dest.close() dest = open(outputBase + '.' + str(at) + '.txt', encoding='utf-8', mode='w') # 'w' is for writing: here we open the file the current segment is being written to at += 1 dest.write(line.strip()) count += 1 print(str(at - 1) + ' files written.') Explanation: Segment source text<a name="SegmentSourceText"></a> Next, as mentioned above, we want to associate information with only passages of the text, not the text as a whole. 
Therefore, the text has to be segmented. The one single file is being split into meaningful smaller chunks. What exactly constitutes a meaningful chunk -- a chapter, an article, a paragraph etc. -- cannot be known independently of the text in question and of the research questions. Therefore, it is suggested that the scholar either splits the text manually or inserts some symbols that otherwise do not appear in the text. Then, processing tools can identify these and split the file accordingly. For keeping things neat and orderly, the resulting files should be saved in a directory of their own... Here, I am splitting the file arbitrarily every 80 lines... Note though, that this leads to a rather unusual condition: all segments are of (roughly) the same length. When counting words and assessing their relative "importance", if a word occurs twice in a very short passage, this is more telling about the passage than if the passage was very, very long. Later, we will see ways to compensate for the normal variance in passage length. End of explanation import sys import glob import errno path = 'TextProcessing_2017' filename = 'segment.' suffix = '.txt' corpus = [] for i in range(0, at - 1): try: with open(path + '/' + filename + str(i) + suffix, encoding='utf-8') as f: corpus.append(f.read()) f.close() except IOError as exc: if exc.errno != errno.EISDIR: # Do not fail if a directory is found, just ignore it. raise # Propagate other kinds of IOError. Explanation: Read segments into a variable <a name="ReadSegmentsIntoVariable"></a> From the segments, we rebuild our corpus, iterating through them and reading them into another variable (which now stores, technically speaking, a list of strings). End of explanation len(corpus) Explanation: Now we should have 45 strings in the variable corpus to play around with: End of explanation corpus[5][:500] Explanation: For a quick impression, let's see the opening 500 characters of an arbitrary one of them: End of explanation import re tokenised = [] for segment in corpus: tokenised.append(list(filter(None, (word.lower() for word in re.split('\W+', segment))))) Explanation: Tokenising <a name="Tokenising"></a> "Tokenising" means splitting the long lines of the input into single words. Since we are dealing with plain latin, we can use the default split method which relies on spaces to identify word boundaries. (In languages like Japanese or scripts like Arabic, this is more difficult.) Note that we do not compensate for words that are hyphenated/split across lines here! End of explanation print(tokenised[5][:50]) Explanation: For our examples, let's have a look at (the first 50 words of) an arbitrary one of those segments: End of explanation import collections counter = collections.Counter(tokenised[5]) print(counter.most_common(10)) Explanation: Already, we can have a first go at finding the most frequent words for a segment. (For this we use a simple library of functions that we import by the name of 'collections'.): End of explanation import pandas as pd df1 = pd.DataFrame.from_dict(counter, orient='index').reset_index() df2 = df1.rename(columns={'index':'lemma',0:'count'}) df2.sort_values('count',0,False)[:10] Explanation: Perhaps now is a good opportunity for a small excursus. What we have printed in the last code is a series of pairs: Words and their number of occurrences, sorted by the latter. Yet the display looks a bit ugly. With another library called "pandas" (for python data analysis), we can make this more intuitive. 
(Of course, your system must have this library installed in the first place so that we can import it in our code.): End of explanation wordfile_path = 'TextProcessing_2017/wordforms-lat.txt' wordfile = open(wordfile_path, encoding='utf-8') print (wordfile.read()[:59]) wordfile.close; # (The semicolon suppresses the returned object in cell output) Explanation: Looks better now, doesn't it? Stemming / Lemmatising <a name="StemmingLemmatising"></a> Next, since we prefer to count different word forms as one and the same "lemma", we need to do a step called "lemmatisation". In languages like English, that are not strongly inflected, one can get away with "stemming", i.e. just eliminating the ending of words: "wish", "wished", "wishing", "wishes" all can count as instances of "wish*". With Latin this is not so easy: we want to count occurrences of "legum", "leges", "lex" as one and the same word, but if we truncate after "le", we get too many hits that have nothing to do with lex at all. There are a couple of "lemmatising" tools available, we do our own with a dictionary approach... First, we have to have a dictionary which associates all known word forms to their lemma. This also helps us with historical orthography. Suppose from some other context, we have a file "wordforms-lat.txt" at our disposal in the TextProcessing_2017 directory. Its contents looks like this: End of explanation lemma = {} # we build a so-called dictionary for the lookups tempdict = [] wordfile = open(wordfile_path, encoding='utf-8') for line in wordfile.readlines(): tempdict.append(tuple(line.split('>'))) lemma = {k.strip(): v.strip() for k, v in tempdict} wordfile.close; print(str(len(lemma)) + ' wordforms registered.') Explanation: So, we can again build a dictionary of key-value pairs associating all the lemmata ("values") with their wordforms ("keys"): End of explanation lemma['ciuicior'] Explanation: Again, a quick test: Let's see with which basic word the wordform "ciuicior" is associated, or, in other words, what value our lemma variable returns when we query for the key "ciuicior": End of explanation lemmatised = [[lemma[word] if word in lemma else word for word in segment] \ for segment in tokenised] print(lemmatised[5][:50]) Explanation: Now we can use this dictionary to build a new list of words, where only lemmatised forms occur: End of explanation counter2 = collections.Counter(lemmatised[5]) df1 = pd.DataFrame.from_dict(counter2, orient='index').reset_index() df2 = df1.rename(columns={'index':'lemma',0:'count'}) df2.sort_values('count',0,False)[:10] Explanation: As you can see, the original text is lost now from the data that we are currently working with (unless we add another dimension to our lemmatised variable which can keep the original word form). But let us see if something in the 10 most frequent words has changed: End of explanation stopwords_path = 'TextProcessing_2017/stopwords-lat.txt' stopwords = open(stopwords_path, encoding='utf-8').read().splitlines() print(str(len(stopwords)) + ' stopwords, e.g.: ' + str(stopwords[24:54])) Explanation: Yes, things have changed: "esse/sum" has moved to the most frequent place, "non" is now counted among the "nolo" (I am not sure this makes sense, but such is the dictionary of wordforms we have used) and "potestas" has now made it from the eighth to the second place! Eliminate Stopwords <a name="EliminateStopwords"></a> Probably "sum/esse", "non/nolo", "in", "ad" and the like are not really very informative words. 
They are what one calls stopwords, and we have another list of such words that we would rather want to ignore. End of explanation stopped = [[item for item in lemmatised_segment if item not in stopwords] \ for lemmatised_segment in lemmatised] print(stopped[5][:20]) Explanation: Now let's try and suppress the stopwords in the segments... End of explanation counter3 = collections.Counter(stopped[0]) counter4 = collections.Counter(stopped[1]) counter5 = collections.Counter(stopped[2]) counter6 = collections.Counter(stopped[3]) df0_1 = pd.DataFrame.from_dict(counter3, orient='index').reset_index() df0_2 = df0_1.rename(columns={'index':'lemma',0:'count'}) df1_1 = pd.DataFrame.from_dict(counter4, orient='index').reset_index() df1_2 = df1_1.rename(columns={'index':'lemma',0:'count'}) df2_1 = pd.DataFrame.from_dict(counter5, orient='index').reset_index() df2_2 = df2_1.rename(columns={'index':'lemma',0:'count'}) df3_1 = pd.DataFrame.from_dict(counter6, orient='index').reset_index() df3_2 = df3_1.rename(columns={'index':'lemma',0:'count'}) print(' ') print(' Most frequent lemmata in the first text segment') print(df0_2.sort_values(by='count',axis=0,ascending=False)[:10]) print(' ') print(' ') print(' Most frequent lemmata in the second text segment') print(df1_2.sort_values(by='count',axis=0,ascending=False)[:10]) print(' ') print(' ') print(' Most frequent lemmata in the third text segment') print(df2_2.sort_values(by='count',axis=0,ascending=False)[:10]) print(' ') print(' ') print(' Most frequent lemmata in the fourth text segment') print(df3_2.sort_values(by='count',axis=0,ascending=False)[:10]) Explanation: With this, we can already create a first "profile" of our first 4 segments: End of explanation
11,241
Given the following text description, write Python code to implement the functionality described below step by step Description: <small><i>This notebook was prepared by wdonahoe. Source and license info is on GitHub.</i></small> Challenge Notebook Problem Step1: Unit Test The following unit test is expected to fail until you solve the challenge.
Python Code: def group_ordered(list_in): # TODO: Implement me pass Explanation: <small><i>This notebook was prepared by wdonahoe. Source and license info is on GitHub.</i></small> Challenge Notebook Problem: Implement a function that groups identical items based on their order in the list. Constraints Test Cases Algorithm Code Unit Test Solution Notebook Constraints Can we use extra data structures? Yes Test Cases group_ordered([1,2,1,3,2]) -> [1,1,2,2,3] group_ordered(['a','b','a') -> ['a','a','b'] group_ordered([1,1,2,3,4,5,2,1]-> [1,1,1,2,2,3,4,5] group_ordered([]) -> [] group_ordered([1]) -> [1] group_ordered(None) -> None Algorithm Refer to the solution notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code End of explanation # %load test_group_ordered.py from nose.tools import assert_equal class TestGroupOrdered(object): def test_group_ordered(self, func): assert_equal(func(None), None) print('Success: ' + func.__name__ + " None case.") assert_equal(func([]), []) print('Success: ' + func.__name__ + " Empty case.") assert_equal(func([1]), [1]) print('Success: ' + func.__name__ + " Single element case.") assert_equal(func([1, 2, 1, 3, 2]), [1, 1, 2, 2, 3]) assert_equal(func(['a', 'b', 'a']), ['a', 'a', 'b']) assert_equal(func([1, 1, 2, 3, 4, 5, 2, 1]), [1, 1, 1, 2, 2, 3, 4, 5]) assert_equal(func([1, 2, 3, 4, 3, 4]), [1, 2, 3, 3, 4, 4]) print('Success: ' + func.__name__) def main(): test = TestGroupOrdered() test.test_group_ordered(group_ordered) try: test.test_group_ordered(group_ordered_alt) except NameError: # Alternate solutions are only defined # in the solutions file pass if __name__ == '__main__': main() Explanation: Unit Test The following unit test is expected to fail until you solve the challenge. End of explanation
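For readers who want a starting point before opening the solution notebook, here is one possible approach, deliberately given a different name so it does not stand in for the official solution: bucket the items by first appearance and then flatten the buckets in that order.
from collections import OrderedDict
def group_ordered_sketch(list_in):
    # Handles the None and empty cases from the tests above, then groups by first-appearance order.
    if list_in is None:
        return None
    buckets = OrderedDict()
    for item in list_in:
        buckets.setdefault(item, []).append(item)
    return [item for bucket in buckets.values() for item in bucket]
print(group_ordered_sketch([1, 2, 1, 3, 2]))  # [1, 1, 2, 2, 3]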
11,242
Given the following text description, write Python code to implement the functionality described below step by step Description: A Tour of SciKit-Learn + TensorFlow + SkFlow When we talk about Data Science and the Data Science Pipeline, we are typically talking about the management of data flows for a specific purpose - the modeling of some hypothesis. The models that we construct can then be used in Data Products as an engine to create more data and actionable results. Machine learning is the art of training some model by using existing data along with a statistical method to create a parametric representation of a model that fits the data. That’s kind of a mouthful, but what that essentially means is that a machine learning algorithm uses statistical processes to learn from examples, then applies what it has learned to future inputs to predict an outcome. Machine learning can classically be summarized with two methodologies Step1: The datasets that come with Scikit Learn demonstrate the properties of classification and regression algorithms, as well as how the data should fit. They are also small and are easy to train models that work. As such they are ideal for pedagogical purposes. The datasets module also contains functions for loading data from the mldata.org repository as well as for generating random data. Step2: Regressions Regressions are a type of supervised learning algorithm, where, given continuous input data, the object is to fit a function that is able to predict the continuous value of input features. Linear Regression Linear regression fits a linear model (a line in two dimensions) to the data. Step3: Linear Regresion (SkFlow) Step4: Perceptron -> Deep Neural Network (SkFlow) A primitive neural network that learns weights for input vectors and transfers the weights through a network to make a prediction. Step5: Perceptron A primitive neural network that learns weights for input vectors and transfers the weights through a network to make a prediction. Step6: k-Nearest Neighbor Regression Makes predictions by locating similar cases and returning the average majority. Step7: Classification and Regression Trees (CART) Makes splits of the best separation of the data for the predictions being made. Step8: DecisionTree Neural Network with Scikit-Learn + TensorFlow Step9: Random Forest Random forest is an ensemble method that creates a number of decision trees using the CART algorithm, each on a different subset of the data. The general approach to creating the ensemble is bootstrap aggregation of the decision trees (bagging). Step10: AdaBoost Adaptive Boosting (AdaBoost) is an ensemble method that sums the predictions made by multiple decision trees. Additional models are added and trained on instances that were incorrectly predicted (boosting) Step11: Support Vector Machines Uses the SVM algorithm (transforming the problem space into higher dimensions in order to use kernel methods) to make predictions for a linear function. Step12: Regularization Regularization methods decrease the over-fitting of a model by penalizing complexity. These are usually demonstrated on regression algorithms, which is why they are included in this section. 
Ridge Regression Also known as Tikhonov regularization penalizes a least squares regression model on the square of the absolute magnitiude of the coefficients (the L2 norm) Step13: LASSO Least Absolute Shrinkage and Selection Operator (LASSO) penalizes the least squares regression on the absolute magnitude of the coefficients (the L1 norm) Step14: Classification Classification is a supervised machine learning problem where, given labeled input data (with two or more labels), the task is to fit a function that can predict the discrete class of input data. Logistic Regression Fits a logistic model to data and makes predictions about the probability of a categorical event (between 0 and 1). Logistic regressions make predictions between 0 and 1, so in order to classify multiple classes a one-vs-all scheme is used (one model per class, winner-takes-all). Step15: Logistic Regression (SkFlow) Step16: LDA Linear Discriminate Analysis (LDA) fits a conditional probability density function (Gaussian) to the attributes of the classes. The discrimination function is linear. Step17: Naive Bayes Uses Bayes Theorem (with a naive assumption) to model the conditional relationship of each attribute to the class. Step18: k-Nearest Neighbor Makes predictions by locating similar instances via a similarity function or distance and averaging the majority of the most similar. Step19: Decision Trees Decision trees use the CART algorithm to make predictions by making splits that best fit the data. Step20: SVMs Support Vector Machines (SVM) uses points in transformed problem space that separates the classes into groups. Step21: Random Forest Random Forest is an ensemble of decision trees on different subsets of the dataset. The ensemble is created by bootstrap aggregation (bagging). Step22: Clustering Clustering algorithms attempt to find patterns in unlabeled data. They are usually grouped into two main categories Step23: K-Means Clustering Partition N samples into k clusters, where each sample belongs to a cluster to which it has the closest mean of the neighbors. This problem is NP-hard, but there are good estimations. Step24: Affinity Propagation Clustering based on the concept of "message passing" between data points. Unlike clustering algorithms such as k-means or k-medoids, AP does not require the number of clusters to be determined or estimated before running the algorithm. Like k-medoids, AP finds "exemplars", members of the input set that are representative of clusters
Python Code: %matplotlib inline # Things we'll need later import time import numpy as np import matplotlib.pyplot as plt from sklearn.metrics import mean_squared_error as mse from sklearn.metrics import r2_score from sklearn.metrics import classification_report from sklearn import cross_validation as cv # Load the example datasets from sklearn.datasets import load_boston from sklearn.datasets import load_iris from sklearn.datasets import load_diabetes from sklearn.datasets import load_digits from sklearn.datasets import load_linnerud # Boston house prices dataset (reals, regression) boston = load_boston() print "Boston: %i samples %i features" % boston.data.shape # Iris flower dataset (reals, multi-label classification) iris = load_iris() print "Iris: %i samples %i features" % iris.data.shape # Diabetes dataset (reals, regression) diabetes = load_diabetes() print "Diabetes: %i samples %i features" % diabetes.data.shape # Hand-written digit dataset (multi-label classification) digits = load_digits() print "Digits: %i samples %i features" % digits.data.shape # Linnerud psychological and exercise dataset (multivariate regression) linnerud = load_linnerud() print "Linnerud: %i samples %i features" % linnerud.data.shape Explanation: A Tour of SciKit-Learn + TensorFlow + SkFlow When we talk about Data Science and the Data Science Pipeline, we are typically talking about the management of data flows for a specific purpose - the modeling of some hypothesis. The models that we construct can then be used in Data Products as an engine to create more data and actionable results. Machine learning is the art of training some model by using existing data along with a statistical method to create a parametric representation of a model that fits the data. That’s kind of a mouthful, but what that essentially means is that a machine learning algorithm uses statistical processes to learn from examples, then applies what it has learned to future inputs to predict an outcome. Machine learning can classically be summarized with two methodologies: supervised and unsupervised learning. In supervised learning, the “correct answers” are annotated ahead of time and the algorithm tries to fit a decision space based on those answers. In unsupervised learning, algorithms try to group like examples together, inferring similarities via distance metrics. Machine learning allows us to handle new data in a meaningful way, predicting where new data will fit into our models. Scikit-Learn is a powerful machine learning library implemented in Python with numeric and scientific computing powerhouses Numpy, Scipy, and matplotlib for extremely fast analysis of small to medium sized data sets. It is open source, commercially usable and contains many modern machine learning algorithms for classification, regression, clustering, feature extraction, and optimization. For this reason Scikit-Learn is often the first tool in a Data Scientists toolkit for machine learning of incoming data sets. The purpose of this notebook is to serve as an introduction to Machine Learning with Scikit-Learn. We will explore several clustering, classification, and regression algorithms. In particular, we will structure our machine learning models as though we were producing a data product, an actionable model that can be used in larger programs or algorithms; rather than as simply a research or investigation methodology. For more on Scikit-Learn see: Six Reasons why I recommend Scikit-Learn (O’Reilly Radar). 
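One hedged note on the cross_validation module imported above (aliased as cv): it can hold out a test split so that fit statistics are not computed only on training data. The call below uses the older scikit-learn API that this notebook targets; newer releases expose the same function from sklearn.model_selection.
X_train, X_test, y_train, y_test = cv.train_test_split(
    boston.data, boston.target, test_size=0.2, random_state=0)
print(X_train.shape, X_test.shape)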
End of explanation import pandas as pd from pandas.tools.plotting import scatter_matrix df = pd.DataFrame(iris.data) df.columns = iris.feature_names fig = scatter_matrix(df, alpha=0.2, figsize=(16, 10), diagonal='kde') df = pd.DataFrame(diabetes.data) fig = scatter_matrix(df, alpha=0.2, figsize=(16, 10), diagonal='kde') import random plt.figure(1, figsize=(3, 3)) plt.imshow(digits.images[-1], cmap=plt.cm.gray_r, interpolation='nearest') plt.show() Explanation: The datasets that come with Scikit Learn demonstrate the properties of classification and regression algorithms, as well as how the data should fit. They are also small and are easy to train models that work. As such they are ideal for pedagogical purposes. The datasets module also contains functions for loading data from the mldata.org repository as well as for generating random data. End of explanation from sklearn.linear_model import LinearRegression # Fit regression to diabetes dataset model = LinearRegression() model.fit(diabetes.data, diabetes.target) expected = diabetes.target predicted = model.predict(diabetes.data) # Evaluate fit of the model print "Mean Squared Error: %0.3f" % mse(expected, predicted) print "Coefficient of Determination: %0.3f" % r2_score(expected, predicted) Explanation: Regressions Regressions are a type of supervised learning algorithm, where, given continuous input data, the object is to fit a function that is able to predict the continuous value of input features. Linear Regression Linear regression fits a linear model (a line in two dimensions) to the data. End of explanation import skflow model = skflow.TensorFlowLinearRegressor(steps=10000) model.fit(diabetes.data, diabetes.target, logdir='/tmp/skflow/linear-regression/') expected = diabetes.target predicted = model.predict(diabetes.data) # Evaluate fit of the model print "Mean Squared Error: %0.3f" % mse(expected, predicted) print "Coefficient of Determination: %0.3f" % r2_score(expected, predicted) Explanation: Linear Regresion (SkFlow) End of explanation import tensorflow as tf import skflow options = [[1], [10], [20], [25], [30], [40]] for hidden_units in options: print "hidden layers = ", str(hidden_units) def tanh_dnn(X, y): features = skflow.ops.dnn(X, hidden_units=hidden_units, activation=skflow.tf.tanh) return skflow.models.linear_regression(features, y) model = skflow.TensorFlowEstimator(model_fn=tanh_dnn, n_classes=0, steps=1000, learning_rate=0.1, batch_size=100, verbose=2) model.fit(diabetes.data, diabetes.target) expected = diabetes.target predicted = model.predict(diabetes.data) # Evaluate fit of the model print "Mean Squared Error: %0.3f" % mse(expected, predicted) print "Coefficient of Determination: %0.3f" % r2_score(expected, predicted) Explanation: Perceptron -> Deep Neural Network (SkFlow) A primitive neural network that learns weights for input vectors and transfers the weights through a network to make a prediction. End of explanation from sklearn.linear_model import Perceptron model = Perceptron() model.fit(diabetes.data, diabetes.target) expected = diabetes.target predicted = model.predict(diabetes.data) # Evaluate fit of the model print "Mean Squared Error: %0.3f" % mse(expected, predicted) print "Coefficient of Determination: %0.3f" % r2_score(expected, predicted) Explanation: Perceptron A primitive neural network that learns weights for input vectors and transfers the weights through a network to make a prediction. 
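To make the phrase "learns weights for input vectors" concrete, here is a hypothetical, minimal perceptron update rule written out in plain numpy. This is an editorial sketch rather than part of the original notebook, and the toy data, epoch count and learning rate are made-up values.

# Classic perceptron update rule on a tiny, linearly separable toy problem.
import numpy as np

def perceptron_train(X, y, epochs=10, lr=0.1):
    # y is expected to contain the labels -1 and +1
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:   # misclassified sample
                w += lr * yi * xi               # nudge the weights toward it
                b += lr * yi
    return w, b

X_toy = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y_toy = np.array([1, 1, -1, -1])
w, b = perceptron_train(X_toy, y_toy)
toy_predictions = np.sign(X_toy.dot(w) + b)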
End of explanation from sklearn.neighbors import KNeighborsRegressor model = KNeighborsRegressor() model.fit(diabetes.data, diabetes.target) expected = diabetes.target predicted = model.predict(diabetes.data) # Evaluate fit of the model print "Mean Squared Error: %0.3f" % mse(expected, predicted) print "Coefficient of Determination: %0.3f" % r2_score(expected, predicted) Explanation: k-Nearest Neighbor Regression Makes predictions by locating similar cases and returning the average majority. End of explanation from sklearn.tree import DecisionTreeRegressor model = DecisionTreeRegressor() model.fit(diabetes.data, diabetes.target) expected = diabetes.target predicted = model.predict(diabetes.data) # Evaluate fit of the model print "Mean Squared Error: %0.3f" % mse(expected, predicted) print "Coefficient of Determination: %0.3f" % r2_score(expected, predicted) Explanation: Classification and Regression Trees (CART) Makes splits of the best separation of the data for the predictions being made. End of explanation import tensorflow as tf import skflow options = [[1,1], [10, 10], [15, 15], [20,20], [25,25]] for hidden_units in options: print "hidden layers = ", str(hidden_units) def tanh_dnn(X, y): features = skflow.ops.dnn(X, hidden_units=hidden_units, activation=skflow.tf.tanh) return skflow.models.linear_regression(features, y) model = skflow.TensorFlowEstimator(model_fn=tanh_dnn, n_classes=0, steps=5000, learning_rate=0.1, batch_size=100) model.fit(diabetes.data, diabetes.target) expected = diabetes.target predicted = model.predict(diabetes.data) # Evaluate fit of the model print "Mean Squared Error: %0.3f" % mse(expected, predicted) print "Coefficient of Determination: %0.3f" % r2_score(expected, predicted) Explanation: DecisionTree Neural Network with Scikit-Learn + TensorFlow End of explanation from sklearn.ensemble import RandomForestRegressor model = RandomForestRegressor() model.fit(diabetes. model.fit(diabetes.data, diabetes.target, logdir='/tmp/skflow/decision-tree/') expected = diabetes.target predicted = model.predict(diabetes.data) # Evaluate fit of the model print "Mean Squared Error: %0.3f" % mse(expected, predicted) print "Coefficient of Determination: %0.3f" % r2_score(expected, predicted) Explanation: Random Forest Random forest is an ensemble method that creates a number of decision trees using the CART algorithm, each on a different subset of the data. The general approach to creating the ensemble is bootstrap aggregation of the decision trees (bagging). End of explanation from sklearn.ensemble import AdaBoostRegressor model = AdaBoostRegressor() model.fit(diabetes.data, diabetes.target) expected = diabetes.target predicted = model.predict(diabetes.data) # Evaluate fit of the model print "Mean Squared Error: %0.3f" % mse(expected, predicted) print "Coefficient of Determination: %0.3f" % r2_score(expected, predicted) Explanation: AdaBoost Adaptive Boosting (AdaBoost) is an ensemble method that sums the predictions made by multiple decision trees. 
Additional models are added and trained on instances that were incorrectly predicted (boosting) End of explanation from sklearn.svm import SVR model = SVR() model.fit(diabetes.data, diabetes.target) expected = diabetes.target predicted = model.predict(diabetes.data) # Evaluate fit of the model print "Mean Squared Error: %0.3f" % mse(expected, predicted) print "Coefficient of Determination: %0.3f" % r2_score(expected, predicted) Explanation: Support Vector Machines Uses the SVM algorithm (transforming the problem space into higher dimensions in order to use kernel methods) to make predictions for a linear function. End of explanation from sklearn.linear_model import Ridge model = Ridge(alpha=0.1) model.fit(diabetes.data, diabetes.target) expected = diabetes.target predicted = model.predict(diabetes.data) # Evaluate fit of the model print "Mean Squared Error: %0.3f" % mse(expected, predicted) print "Coefficient of Determination: %0.3f" % r2_score(expected, predicted) Explanation: Regularization Regularization methods decrease the over-fitting of a model by penalizing complexity. These are usually demonstrated on regression algorithms, which is why they are included in this section. Ridge Regression Also known as Tikhonov regularization penalizes a least squares regression model on the square of the absolute magnitiude of the coefficients (the L2 norm) End of explanation from sklearn.linear_model import Lasso model = Lasso(alpha=0.1) model.fit(diabetes.data, diabetes.target) expected = diabetes.target predicted = model.predict(diabetes.data) # Evaluate fit of the model print "Mean Squared Error: %0.3f" % mse(expected, predicted) print "Coefficient of Determination: %0.3f" % r2_score(expected, predicted) Explanation: LASSO Least Absolute Shrinkage and Selection Operator (LASSO) penalizes the least squares regression on the absolute magnitude of the coefficients (the L1 norm) End of explanation from sklearn.linear_model import LogisticRegression splits = cv.train_test_split(iris.data, iris.target, test_size=0.2) X_train, X_test, y_train, y_test = splits model = LogisticRegression() model.fit(X_train, y_train) expected = y_test predicted = model.predict(X_test) print classification_report(expected, predicted) Explanation: Classification Classification is a supervised machine learning problem where, given labeled input data (with two or more labels), the task is to fit a function that can predict the discrete class of input data. Logistic Regression Fits a logistic model to data and makes predictions about the probability of a categorical event (between 0 and 1). Logistic regressions make predictions between 0 and 1, so in order to classify multiple classes a one-vs-all scheme is used (one model per class, winner-takes-all). 
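As an aside (an editorial sketch, not a cell from the original notebook): the cells in this part of the tour repeat the same split/fit/predict/report pattern, so it can be handy to wrap that pattern in a small helper. The sketch below assumes the same older scikit-learn version the notebook uses (with the cross_validation module), and the GaussianNB usage example is arbitrary.

# Minimal helper wrapping the repeated split/fit/predict/report pattern for classifiers.
from sklearn import cross_validation as cv
from sklearn.metrics import classification_report
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

def evaluate_classifier(model, data, target, test_size=0.2):
    X_train, X_test, y_train, y_test = cv.train_test_split(data, target, test_size=test_size)
    model.fit(X_train, y_train)
    return classification_report(y_test, model.predict(X_test))

# Example usage with any scikit-learn classifier:
iris_data = load_iris()
report = evaluate_classifier(GaussianNB(), iris_data.data, iris_data.target)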
End of explanation import skflow splits = cv.train_test_split(iris.data, iris.target, test_size=0.2) X_train, X_test, y_train, y_test = splits model = skflow.TensorFlowLinearClassifier(n_classes=3, steps=5000, learning_rate=0.1, batch_size=100) model.fit(X_train, y_train, logdir='/tmp/skflow/logistic-regression/') expected = y_test predicted = model.predict(X_test) print classification_report(expected, predicted) model_path = '/tmp/skflow_models/logistic-regression' model.save(model_path) restored_model = skflow.TensorFlowEstimator.restore(model_path) print predicted == restored_model.predict(X_test) Explanation: Logistic Regression (SkFlow) End of explanation from sklearn.lda import LDA splits = cv.train_test_split(digits.data, digits.target, test_size=0.2) X_train, X_test, y_train, y_test = splits model = LDA() model.fit(X_train, y_train) expected = y_test predicted = model.predict(X_test) print classification_report(expected, predicted) Explanation: LDA Linear Discriminate Analysis (LDA) fits a conditional probability density function (Gaussian) to the attributes of the classes. The discrimination function is linear. End of explanation from sklearn.naive_bayes import GaussianNB splits = cv.train_test_split(iris.data, iris.target, test_size=0.2) X_train, X_test, y_train, y_test = splits model = GaussianNB() model.fit(X_train, y_train) expected = y_test predicted = model.predict(X_test) print classification_report(expected, predicted) Explanation: Naive Bayes Uses Bayes Theorem (with a naive assumption) to model the conditional relationship of each attribute to the class. End of explanation from sklearn.neighbors import KNeighborsClassifier splits = cv.train_test_split(digits.data, digits.target, test_size=0.2) X_train, X_test, y_train, y_test = splits model = KNeighborsClassifier() model.fit(X_train, y_train) expected = y_test predicted = model.predict(X_test) print classification_report(expected, predicted) Explanation: k-Nearest Neighbor Makes predictions by locating similar instances via a similarity function or distance and averaging the majority of the most similar. End of explanation from sklearn.tree import DecisionTreeClassifier splits = cv.train_test_split(iris.data, iris.target, test_size=0.2) X_train, X_test, y_train, y_test = splits model = DecisionTreeClassifier() model.fit(X_train, y_train) expected = y_test predicted = model.predict(X_test) print classification_report(expected, predicted) Explanation: Decision Trees Decision trees use the CART algorithm to make predictions by making splits that best fit the data. End of explanation from sklearn.svm import SVC kernels = ['linear', 'poly', 'rbf'] splits = cv.train_test_split(digits.data, digits.target, test_size=0.2) X_train, X_test, y_train, y_test = splits for kernel in kernels: if kernel != 'poly': model = SVC(kernel=kernel) else: model = SVC(kernel=kernel, degree=3) model.fit(X_train, y_train) expected = y_test predicted = model.predict(X_test) print classification_report(expected, predicted) Explanation: SVMs Support Vector Machines (SVM) uses points in transformed problem space that separates the classes into groups. 
End of explanation from sklearn.ensemble import RandomForestClassifier splits = cv.train_test_split(digits.data, digits.target, test_size=0.2) X_train, X_test, y_train, y_test = splits model = RandomForestClassifier() model.fit(X_train, y_train) expected = y_test predicted = model.predict(X_test) print classification_report(expected, predicted) Explanation: Random Forest Random Forest is an ensemble of decision trees on different subsets of the dataset. The ensemble is created by bootstrap aggregation (bagging). End of explanation from sklearn.datasets import make_circles from sklearn.datasets import make_moons from sklearn.datasets import make_blobs from sklearn.preprocessing import StandardScaler N = 1000 # Number of samples in each cluster # Some colors for later colors = np.array([x for x in 'bgrcmykbgrcmykbgrcmykbgrcmyk']) colors = np.hstack([colors] * 20) circles = make_circles(n_samples=N, factor=.5, noise=.05) moons = make_moons(n_samples=N, noise=.08) blobs = make_blobs(n_samples=N, random_state=9) noise = np.random.rand(N, 2), None # Let's see what the data looks like! fig, axe = plt.subplots(figsize=(18, 4)) for idx, dataset in enumerate((circles, moons, blobs, noise)): X, y = dataset X = StandardScaler().fit_transform(X) plt.subplot(1,4,idx+1) plt.scatter(X[:,0], X[:,1], marker='.') plt.xticks(()) plt.yticks(()) plt.ylabel('$x_1$') plt.xlabel('$x_0$') plt.show() Explanation: Clustering Clustering algorithms attempt to find patterns in unlabeled data. They are usually grouped into two main categories: centroidal (find the centers of clusters) and hierarchical (find clusters of clusters). In order to explore clustering, we'll have to generate some fake datasets to use. End of explanation from sklearn.cluster import MiniBatchKMeans fig, axe = plt.subplots(figsize=(18, 4)) for idx, dataset in enumerate((circles, moons, blobs, noise)): X, y = dataset X = StandardScaler().fit_transform(X) # Fit the model with our algorithm model = MiniBatchKMeans(n_clusters=2) model.fit(X) # Make Predictions predictions = model.predict(X) # Find centers centers = model.cluster_centers_ center_colors = colors[:len(centers)] plt.scatter(centers[:, 0], centers[:, 1], s=100, c=center_colors) plt.subplot(1,4,idx+1) plt.scatter(X[:, 0], X[:, 1], color=colors[predictions].tolist(), s=10) plt.xticks(()) plt.yticks(()) plt.ylabel('$x_1$') plt.xlabel('$x_0$') plt.show() Explanation: K-Means Clustering Partition N samples into k clusters, where each sample belongs to a cluster to which it has the closest mean of the neighbors. This problem is NP-hard, but there are good estimations. End of explanation from sklearn.cluster import AffinityPropagation fig, axe = plt.subplots(figsize=(18, 4)) for idx, dataset in enumerate((circles, moons, blobs, noise)): X, y = dataset X = StandardScaler().fit_transform(X) # Fit the model with our algorithm model = AffinityPropagation(damping=.9, preference=-200) model.fit(X) # Make Predictions predictions = model.predict(X) # Find centers centers = model.cluster_centers_ center_colors = colors[:len(centers)] plt.scatter(centers[:, 0], centers[:, 1], s=100, c=center_colors) plt.subplot(1,4,idx+1) plt.scatter(X[:, 0], X[:, 1], color=colors[predictions].tolist(), s=10) plt.xticks(()) plt.yticks(()) plt.ylabel('$x_1$') plt.xlabel('$x_0$') plt.show() Explanation: Affinity Propagation Clustering based on the concept of "message passing" between data points. 
Unlike clustering algorithms such as k-means or k-medoids, AP does not require the number of clusters to be determined or estimated before running the algorithm. Like k-medoids, AP finds "exemplars", members of the input set that are representative of clusters. End of explanation
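One natural follow-up that the original notebook stops short of (this is an editorial sketch, not one of its cells): when the number of clusters is not known in advance, a quality measure such as the silhouette coefficient can be used to compare candidate values of k on the same kind of blob data generated above.

# Compare candidate cluster counts with the silhouette score.
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X_blobs, _ = make_blobs(n_samples=1000, random_state=9)

for k in [2, 3, 4, 5]:
    model = MiniBatchKMeans(n_clusters=k)
    labels = model.fit_predict(X_blobs)
    score = silhouette_score(X_blobs, labels)
    print("k=%d silhouette=%.3f" % (k, score))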
11,243
Given the following text description, write Python code to implement the functionality described below step by step Description: Styling New in version 0.17.1 <span style="color Step1: Here's a boring example of rendering a DataFrame, without any (visible) styles Step2: Note Step4: The row0_col2 is the identifier for that particular cell. We've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page (you can set the uuid if you'd like to tie together the styling of two DataFrames). When writing style functions, you take care of producing the CSS attribute / value pairs you want. Pandas matches those up with the CSS classes that identify each cell. Let's write a simple style function that will color negative numbers red and positive numbers black. Step5: In this case, the cell's style depends only on it's own value. That means we should use the Styler.applymap method which works elementwise. Step6: Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to be able to reuse your existing knowledge of how to interact with DataFrames. Notice also that our function returned a string containing the CSS attribute and value, separated by a colon just like in a &lt;style&gt; tag. This will be a common theme. Finally, the input shapes matched. Styler.applymap calls the function on each scalar input, and the function returns a scalar output. Now suppose you wanted to highlight the maximum value in each column. We can't use .applymap anymore since that operated elementwise. Instead, we'll turn to .apply which operates columnwise (or rowwise using the axis keyword). Later on we'll see that something like highlight_max is already defined on Styler so you wouldn't need to write this yourself. Step7: In this case the input is a Series, one column at a time. Notice that the output shape of highlight_max matches the input shape, an array with len(s) items. We encourage you to use method chains to build up a style piecewise, before finally rending at the end of the chain. Step8: Above we used Styler.apply to pass in each column one at a time. <span style="background-color Step9: When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index and column labels. Step10: Building Styles Summary Style functions should return strings with one or more CSS attribute Step11: For row and column slicing, any valid indexer to .loc will work. Step12: Only label-based slicing is supported right now, not positional. If your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword. python my_func2 = functools.partial(my_func, subset=42) Finer Control Step13: Use a dictionary to format specific columns. Step14: Or pass in a callable (or dictionary of callables) for more flexible handling. Step15: Builtin Styles Finally, we expect certain styling functions to be common enough that we've included a few "built-in" to the Styler, so you don't have to write them yourself. Step16: You can create "heatmaps" with the background_gradient method. These require matplotlib, and we'll use Seaborn to get a nice colormap. Step17: Styler.background_gradient takes the keyword arguments low and high. Roughly speaking these extend the range of your data by low and high percent so that when we convert the colors, the colormap's entire range isn't used. 
This is useful so that you can actually read the text still. Step18: There's also .highlight_min and .highlight_max. Step19: Use Styler.set_properties when the style doesn't actually depend on the values. Step20: Bar charts You can include "bar charts" in your DataFrame. Step21: New in version 0.20.0 is the ability to customize further the bar chart Step24: The following example aims to give a highlight of the behavior of the new align options Step25: Sharing Styles Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and import it on the second DataFrame with df1.style.set Step26: Notice that you're able share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they've been used upon. Other Options You've seen a few methods for data-driven styling. Styler also provides a few other options for styles that don't depend on the data. precision captions table-wide styles hiding the index or columns Each of these can be specified in two ways Step27: Or through a set_precision method. Step28: Setting the precision only affects the printed number; the full-precision values are always passed to your style functions. You can always use df.round(2).style if you'd prefer to round from the start. Captions Regular table captions can be added in a few ways. Step29: Table Styles The next option you have are "table styles". These are styles that apply to the table as a whole, but don't look at the data. Certain sytlings, including pseudo-selectors like Step30: table_styles should be a list of dictionaries. Each dictionary should have the selector and props keys. The value for selector should be a valid CSS selector. Recall that all the styles are already attached to an id, unique to each Styler. This selector is in addition to that id. The value for props should be a list of tuples of ('attribute', 'value'). table_styles are extremely flexible, but not as fun to type out by hand. We hope to collect some useful ones either in pandas, or preferable in a new package that builds on top the tools here. Hiding the Index or Columns The index can be hidden from rendering by calling Styler.hide_index. Columns can be hidden from rendering by calling Styler.hide_columns and passing in the name of a column, or a slice of columns. Step31: CSS Classes Certain CSS classes are attached to cells. Index and Column names include index_name and level&lt;k&gt; where k is its level in a MultiIndex Index label cells include row_heading row&lt;n&gt; where n is the numeric position of the row level&lt;k&gt; where k is the level in a MultiIndex Column label cells include col_heading col&lt;n&gt; where n is the numeric position of the column level&lt;k&gt; where k is the level in a MultiIndex Blank cells include blank Data cells include data Limitations DataFrame only (use Series.to_frame().style) The index and columns must be unique No large repr, and performance isn't great; this is intended for summary DataFrames You can only style the values, not the index or columns You can only apply styles, you can't insert new HTML entities Some of these will be addressed in the future. Terms Style function Step32: Export to Excel New in version 0.20.0 <span style="color Step33: A screenshot of the output Step34: This next cell writes the custom template. We extend the template html.tpl, which comes with pandas. 
Step35: Now that we've created a template, we need to set up a subclass of Styler that knows about it. Step36: Notice that we include the original loader in our environment's loader. That's because we extend the original template, so the Jinja environment needs to be able to find it. Now we can use that custom styler. Its __init__ takes a DataFrame. Step37: Our custom template accepts a table_title keyword. We can provide the value in the .render method. Step38: For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass. Step39: Here's the template structure Step40: See the template in the GitHub repo for more details.
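Before the code: the description above mentions a number-format pseudo CSS property for Excel export. A minimal, hedged sketch of how that might be applied is given here as an editorial addition; the file name and format string are arbitrary, and an Excel writer engine such as openpyxl is assumed to be installed.

# Sketch: attach an Excel-only number format through the styling API.
import numpy as np
import pandas as pd

df_sketch = pd.DataFrame(np.random.randn(5, 2), columns=['A', 'B'])
(df_sketch.style
    .applymap(lambda v: 'number-format: 0.00%')
    .to_excel('number_format_example.xlsx', engine='openpyxl'))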
Python Code: import matplotlib.pyplot # We have this here to trigger matplotlib's font cache stuff. # This cell is hidden from the output import pandas as pd import numpy as np np.random.seed(24) df = pd.DataFrame({'A': np.linspace(1, 10, 10)}) df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4), columns=list('BCDE'))], axis=1) df.iloc[0, 2] = np.nan Explanation: Styling New in version 0.17.1 <span style="color: red">Provisional: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.</span> This document is written as a Jupyter Notebook, and can be viewed or downloaded here. You can apply conditional formatting, the visual styling of a DataFrame depending on the data within, by using the DataFrame.style property. This is a property that returns a Styler object, which has useful methods for formatting and displaying DataFrames. The styling is accomplished using CSS. You write "style functions" that take scalars, DataFrames or Series, and return like-indexed DataFrames or Series with CSS "attribute: value" pairs for the values. These functions can be incrementally passed to the Styler which collects the styles before rendering. Building Styles Pass your style functions into one of the following methods: Styler.applymap: elementwise Styler.apply: column-/row-/table-wise Both of those methods take a function (and some other keyword arguments) and applies your function to the DataFrame in a certain way. Styler.applymap works through the DataFrame elementwise. Styler.apply passes each column or row into your DataFrame one-at-a-time or the entire table at once, depending on the axis keyword argument. For columnwise use axis=0, rowwise use axis=1, and for the entire table at once use axis=None. For Styler.applymap your function should take a scalar and return a single string with the CSS attribute-value pair. For Styler.apply your function should take a Series or DataFrame (depending on the axis parameter), and return a Series or DataFrame with an identical shape where each value is a string with a CSS attribute-value pair. Let's see some examples. End of explanation df.style Explanation: Here's a boring example of rendering a DataFrame, without any (visible) styles: End of explanation df.style.highlight_null().render().split('\n')[:10] Explanation: Note: The DataFrame.style attribute is a property that returns a Styler object. Styler has a _repr_html_ method defined on it so they are rendered automatically. If you want the actual HTML back for further processing or for writing to file call the .render() method which returns a string. The above output looks very similar to the standard DataFrame HTML representation. But we've done some work behind the scenes to attach CSS classes to each cell. We can view these by calling the .render method. End of explanation def color_negative_red(val): Takes a scalar and returns a string with the css property `'color: red'` for negative strings, black otherwise. color = 'red' if val < 0 else 'black' return 'color: %s' % color Explanation: The row0_col2 is the identifier for that particular cell. We've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page (you can set the uuid if you'd like to tie together the styling of two DataFrames). 
When writing style functions, you take care of producing the CSS attribute / value pairs you want. Pandas matches those up with the CSS classes that identify each cell. Let's write a simple style function that will color negative numbers red and positive numbers black. End of explanation s = df.style.applymap(color_negative_red) s Explanation: In this case, the cell's style depends only on it's own value. That means we should use the Styler.applymap method which works elementwise. End of explanation def highlight_max(s): ''' highlight the maximum in a Series yellow. ''' is_max = s == s.max() return ['background-color: yellow' if v else '' for v in is_max] df.style.apply(highlight_max) Explanation: Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to be able to reuse your existing knowledge of how to interact with DataFrames. Notice also that our function returned a string containing the CSS attribute and value, separated by a colon just like in a &lt;style&gt; tag. This will be a common theme. Finally, the input shapes matched. Styler.applymap calls the function on each scalar input, and the function returns a scalar output. Now suppose you wanted to highlight the maximum value in each column. We can't use .applymap anymore since that operated elementwise. Instead, we'll turn to .apply which operates columnwise (or rowwise using the axis keyword). Later on we'll see that something like highlight_max is already defined on Styler so you wouldn't need to write this yourself. End of explanation df.style.\ applymap(color_negative_red).\ apply(highlight_max) Explanation: In this case the input is a Series, one column at a time. Notice that the output shape of highlight_max matches the input shape, an array with len(s) items. We encourage you to use method chains to build up a style piecewise, before finally rending at the end of the chain. End of explanation def highlight_max(data, color='yellow'): ''' highlight the maximum in a Series or DataFrame ''' attr = 'background-color: {}'.format(color) if data.ndim == 1: # Series from .apply(axis=0) or axis=1 is_max = data == data.max() return [attr if v else '' for v in is_max] else: # from .apply(axis=None) is_max = data == data.max().max() return pd.DataFrame(np.where(is_max, attr, ''), index=data.index, columns=data.columns) Explanation: Above we used Styler.apply to pass in each column one at a time. <span style="background-color: #DEDEBE">Debugging Tip: If you're having trouble writing your style function, try just passing it into <code style="background-color: #DEDEBE">DataFrame.apply</code>. Internally, <code style="background-color: #DEDEBE">Styler.apply</code> uses <code style="background-color: #DEDEBE">DataFrame.apply</code> so the result should be the same.</span> What if you wanted to highlight just the maximum value in the entire table? Use .apply(function, axis=None) to indicate that your function wants the entire table, not one column or row at a time. Let's try that next. We'll rewrite our highlight-max to handle either Series (from .apply(axis=0 or 1)) or DataFrames (from .apply(axis=None)). We'll also allow the color to be adjustable, to demonstrate that .apply, and .applymap pass along keyword arguments. End of explanation df.style.apply(highlight_max, color='darkorange', axis=None) Explanation: When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index and column labels. 
End of explanation df.style.apply(highlight_max, subset=['B', 'C', 'D']) Explanation: Building Styles Summary Style functions should return strings with one or more CSS attribute: value delimited by semicolons. Use Styler.applymap(func) for elementwise styles Styler.apply(func, axis=0) for columnwise styles Styler.apply(func, axis=1) for rowwise styles Styler.apply(func, axis=None) for tablewise styles And crucially the input and output shapes of func must match. If x is the input then func(x).shape == x.shape. Finer Control: Slicing Both Styler.apply, and Styler.applymap accept a subset keyword. This allows you to apply styles to specific rows or columns, without having to code that logic into your style function. The value passed to subset behaves similar to slicing a DataFrame. A scalar is treated as a column label A list (or series or numpy array) A tuple is treated as (row_indexer, column_indexer) Consider using pd.IndexSlice to construct the tuple for the last one. End of explanation df.style.applymap(color_negative_red, subset=pd.IndexSlice[2:5, ['B', 'D']]) Explanation: For row and column slicing, any valid indexer to .loc will work. End of explanation df.style.format("{:.2%}") Explanation: Only label-based slicing is supported right now, not positional. If your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword. python my_func2 = functools.partial(my_func, subset=42) Finer Control: Display Values We distinguish the display value from the actual value in Styler. To control the display value, the text is printed in each cell, use Styler.format. Cells can be formatted according to a format spec string or a callable that takes a single value and returns a string. End of explanation df.style.format({'B': "{:0<4.0f}", 'D': '{:+.2f}'}) Explanation: Use a dictionary to format specific columns. End of explanation df.style.format({"B": lambda x: "±{:.2f}".format(abs(x))}) Explanation: Or pass in a callable (or dictionary of callables) for more flexible handling. End of explanation df.style.highlight_null(null_color='red') Explanation: Builtin Styles Finally, we expect certain styling functions to be common enough that we've included a few "built-in" to the Styler, so you don't have to write them yourself. End of explanation import seaborn as sns cm = sns.light_palette("green", as_cmap=True) s = df.style.background_gradient(cmap=cm) s Explanation: You can create "heatmaps" with the background_gradient method. These require matplotlib, and we'll use Seaborn to get a nice colormap. End of explanation # Uses the full color range df.loc[:4].style.background_gradient(cmap='viridis') # Compress the color range (df.loc[:4] .style .background_gradient(cmap='viridis', low=.5, high=0) .highlight_null('red')) Explanation: Styler.background_gradient takes the keyword arguments low and high. Roughly speaking these extend the range of your data by low and high percent so that when we convert the colors, the colormap's entire range isn't used. This is useful so that you can actually read the text still. End of explanation df.style.highlight_max(axis=0) Explanation: There's also .highlight_min and .highlight_max. End of explanation df.style.set_properties(**{'background-color': 'black', 'color': 'lawngreen', 'border-color': 'white'}) Explanation: Use Styler.set_properties when the style doesn't actually depend on the values. 
End of explanation df.style.bar(subset=['A', 'B'], color='#d65f5f') Explanation: Bar charts You can include "bar charts" in your DataFrame. End of explanation df.style.bar(subset=['A', 'B'], align='mid', color=['#d65f5f', '#5fba7d']) Explanation: New in version 0.20.0 is the ability to customize further the bar chart: You can now have the df.style.bar be centered on zero or midpoint value (in addition to the already existing way of having the min value at the left side of the cell), and you can pass a list of [color_negative, color_positive]. Here's how you can change the above with the new align='mid' option: End of explanation import pandas as pd from IPython.display import HTML # Test series test1 = pd.Series([-100,-60,-30,-20], name='All Negative') test2 = pd.Series([10,20,50,100], name='All Positive') test3 = pd.Series([-10,-5,0,90], name='Both Pos and Neg') head = <table> <thead> <th>Align</th> <th>All Negative</th> <th>All Positive</th> <th>Both Neg and Pos</th> </thead> </tbody> aligns = ['left','zero','mid'] for align in aligns: row = "<tr><th>{}</th>".format(align) for serie in [test1,test2,test3]: s = serie.copy() s.name='' row += "<td>{}</td>".format(s.to_frame().style.bar(align=align, color=['#d65f5f', '#5fba7d'], width=100).render()) #testn['width'] row += '</tr>' head += row head+= </tbody> </table> HTML(head) Explanation: The following example aims to give a highlight of the behavior of the new align options: End of explanation df2 = -df style1 = df.style.applymap(color_negative_red) style1 style2 = df2.style style2.use(style1.export()) style2 Explanation: Sharing Styles Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and import it on the second DataFrame with df1.style.set End of explanation with pd.option_context('display.precision', 2): html = (df.style .applymap(color_negative_red) .apply(highlight_max)) html Explanation: Notice that you're able share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they've been used upon. Other Options You've seen a few methods for data-driven styling. Styler also provides a few other options for styles that don't depend on the data. precision captions table-wide styles hiding the index or columns Each of these can be specified in two ways: A keyword argument to Styler.__init__ A call to one of the .set_ or .hide_ methods, e.g. .set_caption or .hide_columns The best method to use depends on the context. Use the Styler constructor when building many styled DataFrames that should all share the same properties. For interactive use, the.set_ and .hide_ methods are more convenient. Precision You can control the precision of floats using pandas' regular display.precision option. End of explanation df.style\ .applymap(color_negative_red)\ .apply(highlight_max)\ .set_precision(2) Explanation: Or through a set_precision method. End of explanation df.style.set_caption('Colormaps, with a caption.')\ .background_gradient(cmap=cm) Explanation: Setting the precision only affects the printed number; the full-precision values are always passed to your style functions. You can always use df.round(2).style if you'd prefer to round from the start. Captions Regular table captions can be added in a few ways. 
End of explanation from IPython.display import HTML def hover(hover_color="#ffff99"): return dict(selector="tr:hover", props=[("background-color", "%s" % hover_color)]) styles = [ hover(), dict(selector="th", props=[("font-size", "150%"), ("text-align", "center")]), dict(selector="caption", props=[("caption-side", "bottom")]) ] html = (df.style.set_table_styles(styles) .set_caption("Hover to highlight.")) html Explanation: Table Styles The next option you have are "table styles". These are styles that apply to the table as a whole, but don't look at the data. Certain sytlings, including pseudo-selectors like :hover can only be used this way. End of explanation df.style.hide_index() df.style.hide_columns(['C','D']) Explanation: table_styles should be a list of dictionaries. Each dictionary should have the selector and props keys. The value for selector should be a valid CSS selector. Recall that all the styles are already attached to an id, unique to each Styler. This selector is in addition to that id. The value for props should be a list of tuples of ('attribute', 'value'). table_styles are extremely flexible, but not as fun to type out by hand. We hope to collect some useful ones either in pandas, or preferable in a new package that builds on top the tools here. Hiding the Index or Columns The index can be hidden from rendering by calling Styler.hide_index. Columns can be hidden from rendering by calling Styler.hide_columns and passing in the name of a column, or a slice of columns. End of explanation from IPython.html import widgets @widgets.interact def f(h_neg=(0, 359, 1), h_pos=(0, 359), s=(0., 99.9), l=(0., 99.9)): return df.style.background_gradient( cmap=sns.palettes.diverging_palette(h_neg=h_neg, h_pos=h_pos, s=s, l=l, as_cmap=True) ) def magnify(): return [dict(selector="th", props=[("font-size", "4pt")]), dict(selector="td", props=[('padding', "0em 0em")]), dict(selector="th:hover", props=[("font-size", "12pt")]), dict(selector="tr:hover td:hover", props=[('max-width', '200px'), ('font-size', '12pt')]) ] np.random.seed(25) cmap = cmap=sns.diverging_palette(5, 250, as_cmap=True) bigdf = pd.DataFrame(np.random.randn(20, 25)).cumsum() bigdf.style.background_gradient(cmap, axis=1)\ .set_properties(**{'max-width': '80px', 'font-size': '1pt'})\ .set_caption("Hover to magnify")\ .set_precision(2)\ .set_table_styles(magnify()) Explanation: CSS Classes Certain CSS classes are attached to cells. Index and Column names include index_name and level&lt;k&gt; where k is its level in a MultiIndex Index label cells include row_heading row&lt;n&gt; where n is the numeric position of the row level&lt;k&gt; where k is the level in a MultiIndex Column label cells include col_heading col&lt;n&gt; where n is the numeric position of the column level&lt;k&gt; where k is the level in a MultiIndex Blank cells include blank Data cells include data Limitations DataFrame only (use Series.to_frame().style) The index and columns must be unique No large repr, and performance isn't great; this is intended for summary DataFrames You can only style the values, not the index or columns You can only apply styles, you can't insert new HTML entities Some of these will be addressed in the future. Terms Style function: a function that's passed into Styler.apply or Styler.applymap and returns values like 'css attribute: value' Builtin style functions: style functions that are methods on Styler table style: a dictionary with the two keys selector and props. selector is the CSS selector that props will apply to. 
props is a list of (attribute, value) tuples. A list of table styles passed into Styler. Fun stuff Here are a few interesting examples. Styler interacts pretty well with widgets. If you're viewing this online instead of running the notebook yourself, you're missing out on interactively adjusting the color palette. End of explanation df.style.\ applymap(color_negative_red).\ apply(highlight_max).\ to_excel('styled.xlsx', engine='openpyxl') Explanation: Export to Excel New in version 0.20.0 <span style="color: red">Experimental: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.</span> Some support is available for exporting styled DataFrames to Excel worksheets using the OpenPyXL or XlsxWriter engines. CSS2.2 properties handled include: background-color border-style, border-width, border-color and their {top, right, bottom, left variants} color font-family font-style font-weight text-align text-decoration vertical-align white-space: nowrap Only CSS2 named colors and hex colors of the form #rgb or #rrggbb are currently supported. The following pseudo CSS properties are also available to set excel specific style properties: - number-format End of explanation from jinja2 import Environment, ChoiceLoader, FileSystemLoader from IPython.display import HTML from pandas.io.formats.style import Styler %mkdir templates Explanation: A screenshot of the output: Extensibility The core of pandas is, and will remain, its "high-performance, easy-to-use data structures". With that in mind, we hope that DataFrame.style accomplishes two goals Provide an API that is pleasing to use interactively and is "good enough" for many tasks Provide the foundations for dedicated libraries to build on If you build a great library on top of this, let us know and we'll link to it. Subclassing If the default template doesn't quite suit your needs, you can subclass Styler and extend or override the template. We'll show an example of extending the default template to insert a custom header before each table. End of explanation %%file templates/myhtml.tpl {% extends "html.tpl" %} {% block table %} <h1>{{ table_title|default("My Table") }}</h1> {{ super() }} {% endblock table %} Explanation: This next cell writes the custom template. We extend the template html.tpl, which comes with pandas. End of explanation class MyStyler(Styler): env = Environment( loader=ChoiceLoader([ FileSystemLoader("templates"), # contains ours Styler.loader, # the default ]) ) template = env.get_template("myhtml.tpl") Explanation: Now that we've created a template, we need to set up a subclass of Styler that knows about it. End of explanation MyStyler(df) Explanation: Notice that we include the original loader in our environment's loader. That's because we extend the original template, so the Jinja environment needs to be able to find it. Now we can use that custom styler. It's __init__ takes a DataFrame. End of explanation HTML(MyStyler(df).render(table_title="Extending Example")) Explanation: Our custom template accepts a table_title keyword. We can provide the value in the .render method. End of explanation EasyStyler = Styler.from_custom_template("templates", "myhtml.tpl") EasyStyler(df) Explanation: For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass. 
End of explanation with open("template_structure.html") as f: structure = f.read() HTML(structure) Explanation: Here's the template structure: End of explanation # Hack to get the same style in the notebook as the # main site. This is hidden in the docs. from IPython.display import HTML with open("themes/nature_with_gtoc/static/nature.css_t") as f: css = f.read() HTML('<style>{}</style>'.format(css)) Explanation: See the template in the GitHub repo for more details. End of explanation
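To round off this example (an editorial sketch, not one of the original cells): the individual pieces shown above can be chained into a single reusable style and the rendered HTML written to disk. The data, seed and file name below are arbitrary.

# Combine format, bar, highlight_max and a caption in one chained style, then save the HTML.
import numpy as np
import pandas as pd

np.random.seed(24)
df_combined = pd.DataFrame(np.random.randn(10, 4), columns=list('ABCD'))

styled = (df_combined.style
            .format("{:.2f}")
            .bar(subset=['A', 'B'], color='#d65f5f')
            .highlight_max(axis=0)
            .set_caption("Combined example"))

with open("styled_example.html", "w") as f:
    f.write(styled.render())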
11,244
Given the following text description, write Python code to implement the functionality described below step by step Description: Title Step1: Create Data Step2: View All Rows Step3: View Rows Where Age Is Greater Than 20 And City Is San Francisco Step4: View Rows Where Age Is Greater Than 20 or City Is San Francisco
Python Code: # Ignore %load_ext sql %sql sqlite:// %config SqlMagic.feedback = False Explanation: Title: Multiple Conditional Statements Slug: multiple_conditional_statements Summary: Multiple Conditional Statements in SQL. Date: 2017-01-16 12:00 Category: SQL Tags: Basics Authors: Chris Albon Note: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you are not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL; your version might have some differences in syntax. For more, check out Learning SQL by Alan Beaulieu. End of explanation %%sql -- Create a table of criminals CREATE TABLE criminals (pid, name, age, sex, city, minor); INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1); INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'San Francisco', 0); INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Petaluma', 0); Explanation: Create Data End of explanation %%sql -- Select all SELECT * -- From the criminals table FROM criminals Explanation: View All Rows End of explanation %%sql -- Select all unique SELECT distinct * -- From the criminals table FROM criminals -- Where age is greater than 20 and city is San Francisco WHERE age > 20 AND city == 'San Francisco' Explanation: View Rows Where Age Is Greater Than 20 And City Is San Francisco End of explanation %%sql -- Select all unique SELECT distinct * -- From the criminals table FROM criminals -- Where age is greater than 20 or city is San Francisco WHERE age > 20 OR city == 'San Francisco' Explanation: View Rows Where Age Is Greater Than 20 or City Is San Francisco End of explanation
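For completeness, here is an editorial sketch (not part of the original post) running the same AND/OR conditions without the %%sql magic, using only the sqlite3 module from the Python standard library and an in-memory database.

# Same multiple-condition queries via the standard library's sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE criminals (pid, name, age, sex, city, minor)")
cur.executemany(
    "INSERT INTO criminals VALUES (?, ?, ?, ?, ?, ?)",
    [(412, 'James Smith', 15, 'M', 'Santa Rosa', 1),
     (901, 'Gordon Ado', 32, 'F', 'San Francisco', 0),
     (512, 'Bill Byson', 21, 'M', 'Petaluma', 0)])

# AND: both conditions must hold.
cur.execute("SELECT * FROM criminals WHERE age > 20 AND city = 'San Francisco'")
print(cur.fetchall())

# OR: either condition is enough.
cur.execute("SELECT * FROM criminals WHERE age > 20 OR city = 'San Francisco'")
print(cur.fetchall())
conn.close()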
11,245
Given the following text description, write Python code to implement the functionality described below step by step Description: Directory Traversal The common ways of traversing a directory are collected below Step1: os.scandir Returns an iterable of os.DirEntry objects, which can directly tell whether an entry is a file or a directory, among other things. Step2: os.walk Walks a directory tree recursively, and can also follow directories pointed to by symbolic links. Step3: glob.glob If there are many files, glob.iglob can be used instead to improve performance. Step4: pathlib.Path Since Python 3 there is the Path object for representing paths, which makes for very object-oriented code.
Python Code: import os os.listdir('traverse-directories') Explanation: Directory Traversal The common ways of traversing a directory are collected below: os.listdir os.scandir os.walk glob.glob pathlib.Path The directory structure is: ``` traverse-directories sell.txt fuit-shop\ orange.txt apple.txt car\ small-car\ 奔驰.txt big-car\ 拖拉机.txt ``` os.listdir returns a list of the file and directory names in the given directory End of explanation with os.scandir('traverse-directories') as it: for entry in it: print(entry.name,entry.is_file()) Explanation: os.scandir Returns an iterable of os.DirEntry objects, which can directly tell whether an entry is a file or a directory, among other things. End of explanation for root, dirs, files in os.walk("traverse-directories"): for d in dirs: print (os.path.join(root,d)) for f in files: print (os.path.join(root,f)) Explanation: os.walk Walks a directory tree recursively, and can also follow directories pointed to by symbolic links. End of explanation import glob print('普通遍历:') for i in glob.glob('traverse-directories/*'): print(i) print('递归遍历:') for i in glob.glob('traverse-directories/**',recursive=True): print(i) Explanation: glob.glob If there are many files, glob.iglob can be used instead to improve performance. End of explanation from pathlib import Path root = Path('traverse-directories') print("遍历目录方式一:") for child in root.iterdir(): print (child) print("遍历目录方式二:") for i in root.glob('*'): print(i) print("遍历目录方式三(递归):") for i in root.glob('**/*'): print(i) Explanation: pathlib.Path Since Python 3 there is the Path object for representing paths, which makes for very object-oriented code. End of explanation
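As a small follow-up (an editorial sketch, not part of the original note; it assumes the traverse-directories tree shown above actually exists on disk): the same traversal tools compose naturally, for example to total the size of every .txt file in the tree.

# Total the size of all .txt files, first with os.walk, then with pathlib.
import os
from pathlib import Path

total_bytes = 0
for root, dirs, files in os.walk('traverse-directories'):
    for name in files:
        if name.endswith('.txt'):
            total_bytes += os.path.getsize(os.path.join(root, name))
print(total_bytes)

# The pathlib equivalent is a single generator expression.
total_bytes_pathlib = sum(p.stat().st_size for p in Path('traverse-directories').rglob('*.txt'))
print(total_bytes_pathlib)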
11,246
Given the following text description, write Python code to implement the functionality described below step by step Description: Async optimization Loop Bayesian optimization is used to tune parameters for walking robots or other experiments that are not a simple (expensive) function call. Tim Head, February 2017. Reformatted by Holger Nahrstaedt 2020 .. currentmodule Step1: The Setup We will use a simple 1D problem to illustrate the API. This is a little bit artificial as you normally would not use the ask-and-tell interface if you had a function you can call to evaluate the objective. Step2: Our 1D toy problem, this is the function we are trying to minimize Step3: Here a quick plot to visualize what the function looks like Step4: Now we setup the Step5: In a real world use case you would probably go away and use this parameter in your experiment and come back a while later with the result. In this example we can simply evaluate the objective function and report the value back to the optimizer Step6: Like *_minimize() the first few points are random suggestions as there is no data yet with which to fit a surrogate model. Step7: We can now plot the random suggestions and the first model that has been fit Step8: Let us sample a few more points and plot the optimizer again Step9: By using the
Python Code: print(__doc__) import numpy as np np.random.seed(1234) import matplotlib.pyplot as plt Explanation: Async optimization Loop Bayesian optimization is used to tune parameters for walking robots or other experiments that are not a simple (expensive) function call. Tim Head, February 2017. Reformatted by Holger Nahrstaedt 2020 .. currentmodule:: skopt They often follow a pattern a bit like this: ask for a new set of parameters walk to the experiment and program in the new parameters observe the outcome of running the experiment walk back to your laptop and tell the optimizer about the outcome go to step 1 A setup like this is difficult to implement with the _minimize()* function interface. This is why scikit-optimize** has a ask-and-tell interface that you can use when you want to control the execution of the optimization loop. This notebook demonstrates how to use the ask and tell interface. End of explanation from skopt.learning import ExtraTreesRegressor from skopt import Optimizer noise_level = 0.1 Explanation: The Setup We will use a simple 1D problem to illustrate the API. This is a little bit artificial as you normally would not use the ask-and-tell interface if you had a function you can call to evaluate the objective. End of explanation def objective(x, noise_level=noise_level): return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2))\ + np.random.randn() * noise_level Explanation: Our 1D toy problem, this is the function we are trying to minimize End of explanation # Plot f(x) + contours plt.set_cmap("viridis") x = np.linspace(-2, 2, 400).reshape(-1, 1) fx = np.array([objective(x_i, noise_level=0.0) for x_i in x]) plt.plot(x, fx, "r--", label="True (unknown)") plt.fill(np.concatenate([x, x[::-1]]), np.concatenate(([fx_i - 1.9600 * noise_level for fx_i in fx], [fx_i + 1.9600 * noise_level for fx_i in fx[::-1]])), alpha=.2, fc="r", ec="None") plt.legend() plt.grid() plt.show() Explanation: Here a quick plot to visualize what the function looks like: End of explanation opt = Optimizer([(-2.0, 2.0)], "ET", acq_optimizer="sampling") # To obtain a suggestion for the point at which to evaluate the objective # you call the ask() method of opt: next_x = opt.ask() print(next_x) Explanation: Now we setup the :class:Optimizer class. The arguments follow the meaning and naming of the *_minimize() functions. An important difference is that you do not pass the objective function to the optimizer. End of explanation f_val = objective(next_x) opt.tell(next_x, f_val) Explanation: In a real world use case you would probably go away and use this parameter in your experiment and come back a while later with the result. In this example we can simply evaluate the objective function and report the value back to the optimizer: End of explanation for i in range(9): next_x = opt.ask() f_val = objective(next_x) opt.tell(next_x, f_val) Explanation: Like *_minimize() the first few points are random suggestions as there is no data yet with which to fit a surrogate model. End of explanation from skopt.acquisition import gaussian_ei def plot_optimizer(opt, x, fx): model = opt.models[-1] x_model = opt.space.transform(x.tolist()) # Plot true function. 
plt.plot(x, fx, "r--", label="True (unknown)") plt.fill(np.concatenate([x, x[::-1]]), np.concatenate([fx - 1.9600 * noise_level, fx[::-1] + 1.9600 * noise_level]), alpha=.2, fc="r", ec="None") # Plot Model(x) + contours y_pred, sigma = model.predict(x_model, return_std=True) plt.plot(x, y_pred, "g--", label=r"$\mu(x)$") plt.fill(np.concatenate([x, x[::-1]]), np.concatenate([y_pred - 1.9600 * sigma, (y_pred + 1.9600 * sigma)[::-1]]), alpha=.2, fc="g", ec="None") # Plot sampled points plt.plot(opt.Xi, opt.yi, "r.", markersize=8, label="Observations") acq = gaussian_ei(x_model, model, y_opt=np.min(opt.yi)) # shift down to make a better plot acq = 4 * acq - 2 plt.plot(x, acq, "b", label="EI(x)") plt.fill_between(x.ravel(), -2.0, acq.ravel(), alpha=0.3, color='blue') # Adjust plot layout plt.grid() plt.legend(loc='best') plot_optimizer(opt, x, fx) Explanation: We can now plot the random suggestions and the first model that has been fit: End of explanation for i in range(10): next_x = opt.ask() f_val = objective(next_x) opt.tell(next_x, f_val) plot_optimizer(opt, x, fx) Explanation: Let us sample a few more points and plot the optimizer again: End of explanation import pickle with open('my-optimizer.pkl', 'wb') as f: pickle.dump(opt, f) with open('my-optimizer.pkl', 'rb') as f: opt_restored = pickle.load(f) Explanation: By using the :class:Optimizer class directly you get control over the optimization loop. You can also pickle your :class:Optimizer instance if you want to end the process running it and resume it later. This is handy if your experiment takes a very long time and you want to shutdown your computer in the meantime: End of explanation
11,247
Given the following text description, write Python code to implement the functionality described below step by step Description: Outline Glossary 2. Mathematical Groundwork Previous Step1: Import section specific modules
Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline from IPython.display import HTML HTML('../style/course.css') #apply general CSS Explanation: Outline Glossary 2. Mathematical Groundwork Previous: 2.5 Convolution Next: 2.7 Fourier Theorems Import standard modules: End of explanation pass Explanation: Import section specific modules: End of explanation
11,248
Given the following text description, write Python code to implement the functionality described below step by step Description: Imports and configuration We set the path to the config.cfg file using the environment variable 'PYPMJ_CONFIG_FILE'. If you do not have a configuration file yet, please look into the Setting up a configuration file example. Step1: Now we can import pypmj and numpy. Since the parent directory, which contains the pypmj module, is not automatically in our path, we need to append it before. Step2: Load the materials extension. Step3: What this extension is for The materials extension provides access to tabulated and formula-based optical material property data. It can load such data from different data bases, extract additional information such as citations, as well as automatically interpolate, extrapolate and plot the data. Usage The functionality is provided by the class MaterialData. Step4: Note Step5: We show the abilities using the gallium arsenide data set. Step6: Get some metadata Step7: The default for unitOfLength is 1., which defaults to meter. We can get refractive index data (here called $n$-$k$-data, where $n$ is the real and $k$ the imaginary part of the complex refractive index) for specific wavelengths like this Step8: Or for multiple wavelengths values Step9: Or we can get the permittivity. Step10: We can also plot the complete known data to get an overview of the data set. Step11: Or we can plot data in a specific wavelength range, thistime also showing the known (tabulated) points to show case the interpolation.
Python Code: import os os.environ['PYPMJ_CONFIG_FILE'] = '/path/to/your/config.cfg' Explanation: Imports and configuration We set the path to the config.cfg file using the environment variable 'PYPMJ_CONFIG_FILE'. If you do not have a configuration file yet, please look into the Setting up a configuration file example. End of explanation import sys sys.path.append('..') import pypmj as jpy import numpy as np Explanation: Now we can import pypmj and numpy. Since the parent directory, which contains the pypmj module, is not automatically in our path, we need to append it before. End of explanation jpy.load_extension('materials') Explanation: Load the materials extension. End of explanation jpy.MaterialData? Explanation: What this extension is for The materials extension provides access to tabulated and formula-based optical material property data. It can load such data from different data bases, extract additional information such as citations, as well as automatically interpolate, extrapolate and plot the data. Usage The functionality is provided by the class MaterialData. End of explanation jpy.MaterialData.materials.keys() Explanation: Note: The current implementation is a bit inflexible/incomplete and will probably be changed/completed in a future version. There is currently only access to the following materials, although adding more materials is easily done by extending this dict in the materials.py file: End of explanation GaAs = jpy.MaterialData(material = 'gallium_arsenide') Explanation: We show the abilities using the gallium arsenide data set. End of explanation GaAs.getAllInfo() Explanation: Get some metadata: End of explanation wvl = 600.e-9 # = 600nm GaAs.getNKdata(wvl) Explanation: The default for unitOfLength is 1., which defaults to meter. We can get refractive index data (here called $n$-$k$-data, where $n$ is the real and $k$ the imaginary part of the complex refractive index) for specific wavelengths like this End of explanation wvls = np.linspace(600.e-9, 1000.e-9, 6) # = 600nm to 1000nm GaAs.getNKdata(wvls) Explanation: Or for multiple wavelengths values End of explanation GaAs.getPermittivity(wvls) Explanation: Or we can get the permittivity. End of explanation %matplotlib inline import matplotlib.pyplot as plt plt.figure(figsize=(8,6)) GaAs.plotData() Explanation: We can also plot the complete known data to get an overview of the data set. End of explanation plt.figure(figsize=(8,6)) GaAs.plotData(wvlRange=(200.e-9, 1000.e-9), plotKnownValues=True) Explanation: Or we can plot data in a specific wavelength range, thistime also showing the known (tabulated) points to show case the interpolation. End of explanation
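As a generic illustration of what the interpolation above amounts to (plain NumPy, not pypmj, and the tabulated values below are made up for the sketch), linearly interpolating tabulated n and k data and squaring the complex refractive index gives the permittivity of a non-magnetic material:

import numpy as np

wvl_tab = np.array([400e-9, 600e-9, 800e-9, 1000e-9])  # hypothetical tabulated wavelengths in m
n_tab = np.array([4.4, 3.9, 3.7, 3.5])                 # hypothetical real part of the refractive index
k_tab = np.array([2.1, 0.2, 0.09, 0.0])                # hypothetical imaginary part (extinction)

wvls = np.linspace(600e-9, 1000e-9, 6)
n_k = np.interp(wvls, wvl_tab, n_tab) + 1j * np.interp(wvls, wvl_tab, k_tab)
permittivity = n_k ** 2   # epsilon = (n + ik)^2 for a non-magnetic material
print(permittivity)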
11,249
Given the following text description, write Python code to implement the functionality described below step by step Description: Numpy Exercise 2 Imports Step2: Factorial Write a function that computes the factorial of small numbers using np.arange and np.cumprod. Step4: Write a function that computes the factorial of small numbers using a Python loop. Step5: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is
Python Code: import numpy as np %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns Explanation: Numpy Exercise 2 Imports End of explanation def np_fact(n): Compute n! = n*(n-1)*...*1 using Numpy. if n==0: return 1 else: a=np.arange(1.0, (n+1), 1.0) b= a.cumprod() c=max(b) return c print(np_fact(10)) assert np_fact(0)==1 assert np_fact(1)==1 assert np_fact(10)==3628800 assert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800] Explanation: Factorial Write a function that computes the factorial of small numbers using np.arange and np.cumprod. End of explanation def loop_fact(n): Compute n! using a Python for loop. a=[] c=1 if n>0: b= list(range(1,n+1)) # so b is [ 1,2,3,4,...n] for x in b: c=c*x return c else: return 1 print(loop_fact(5)) assert loop_fact(0)==1 assert loop_fact(1)==1 assert loop_fact(10)==3628800 assert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800] Explanation: Write a function that computes the factorial of small numbers using a Python loop. End of explanation %timeit -n1 -r1 np_fact(50) %timeit -n1 -r1 loop_fact(50) Explanation: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is: python %timeit -n1 -r1 function_to_time() End of explanation
11,250
Given the following text description, write Python code to implement the functionality described below step by step Description: Step2: Helpers Step3: Load exekall's ValueDB This notebook is meant for analysing a set of results coming from a test session executed using exekall (potentially via bisector). Since it only relies on TestMetric, it can also be used with ResultBundle instances generated directly in a notebook (e.g. using test_task_placement), without the intervention of any other tool like exekall.
Python Code: def collect_value(db, cls, key_path): Collect objects computed for the exekall subexpression pointed at by ``key_path``, starting from objects of type ``cls``. The path is a list of parameter names that allows locating a node in the graph of an expression. The path from z to x is ['param2', 'param'] x = 3 y = f(param=x) z = g(param2=y) data = {} for froz_val in db.get_by_type(cls): key = get_nested_key(froz_val, key_path) data.setdefault(key, []).append(froz_val.value) return data def collect_metric(db, metric_name, key_path): Collect a given metric and use the exekall's FrozenExprVal that can be found at key_path, starting from the root. See the documentation for exekall's public API: https://lisa-linux-integrated-system-analysis.readthedocs.io/en/master/exekall/public_api.html return { key: [ val.metrics[metric_name].data for val in val_list if metric_name in val.metrics ] for key, val_list in collect_value(db, ResultBundle, key_path).items() } Explanation: Helpers End of explanation # These ValueDB.pickle.xz files can be found in either: # 1) exekall's artifact dir # 2) exported from a bisector's report using: # bisector report myreport.yml.gz -oexport-db=ValueDB.pickle.xz. path_map = { 'juno R0': 'juno-r0.report.ValueDB.pickle.xz', 'juno R2': 'uno-r2.report.ValueDB.pickle.xz', 'TC2': 'TC2.report.ValueDB.pickle.xz', 'hikey 960': 'hikey960.report.ValueDB.pickle.xz', } db_map = {board: ValueDB.from_path(path) for board, path in path_map.items()} # refer to 'self' parameter of the test method, which is a TestBundle for board, db in db_map.items(): print('\n{}:'.format(board)) noisiest_task = collect_metric(db, 'noisiest task', ['self']) data = {} # Index on the TestBundle to make sure to only have one entry per TestBundle for froz_val, metric_list in noisiest_task.items(): if not metric_list: continue assert len(metric_list) == 1 # All noise analysis are done on the same TestBundle, so we can pick any metric = metric_list[0] # test_bundle = froz_val.value # print(froz_val.get_id(qual=False)) exec_time = metric['duration (rel)'].data data.setdefault(metric['comm'], []).append(exec_time) fig, ax = plt.subplots(figsize=(20, 5)) # plt.yscale('log') plt.xscale('log') # Total "noise" time. # Ideally, we would like to get the total noise for each trace, but we only have the noisiest task. # That approximation should keep relative order in values we compute from it. total_noise_time = sum(sum(l) for l in data.values()) # Check how annoying was that comm. Higher number means it contributed to a lot of traces, or that it had very high contribution to a few traces. compute_contrib = lambda exec_time: sum(exec_time) * 100 / total_noise_time def key(item): comm, exec_time = item return compute_contrib(exec_time) for comm, exec_time in sorted(data.items(), key=key, reverse=True): noise_contribution = compute_contrib(exec_time) avg_exec_time = sum(exec_time)/len(exec_time) if noise_contribution < 1: continue series = pd.Series(exec_time) series.plot.hist(ax=ax, label=comm, bins=40) print('{:<15}: avg={:.2f}% contrib={:.2f}%'.format(comm, avg_exec_time, noise_contribution)) # Show the usual 1% undecided threshold. 
Note that some tests may use another threshold, # see RTATestBundle.check_noisy_tasks(noise_threshold_pct=XX) # https://lisa-linux-integrated-system-analysis.readthedocs.io/en/master/kernel_tests.html?highlight=check_noisy_tasks#lisa.tests.base.RTATestBundle.check_noisy_tasks ax.axvline(1, color='red') ax.set_title('noise on {}'.format(board)) ax.set_xlabel('noise exec time (rel)') ax.set_ylabel('#occurences') ax.legend() ax.grid(True) Explanation: Load exekall's ValueDB This notebook is meant for analysing a set of results coming from a test session executed using exekall (potentially via bisector). Since it only relies on TestMetric, it can also be used with ResultBundle instances generated directly in a notebook (e.g. using test_task_placement), without the intervention of any other tool like exekall. End of explanation
11,251
Given the following text description, write Python code to implement the functionality described below step by step Description: !!! D . R . A . F . T !!! Lightness Lightness is defined as the brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting. <a name="back_reference_1"></a><a href="#reference_1">[1]</a> Colour defines the following Lightness computation methods Step1: Note Step2: Note Step3: Wyszecki (1963) Method Wyszecki (1963) recommended the following cube root function to compute Lightness $W$ as a function of the luminance factor $Y$ within the practically important range of $1.0\%<Y<98\%$ Step4: Note Step5: CIE 1976 Method The CIE $L^a^b^$ approximately uniform colourspace defined in 1976 computes the Lightness $L^$ quantity as follows Step6: Note Step7: Fairchild and Wyble (2010) Method Step8: Fairchild and Chen (2011) Method
Python Code: import colour colour.utilities.filter_warnings(True, False) sorted(colour.LIGHTNESS_METHODS.keys()) Explanation: !!! D . R . A . F . T !!! Lightness Lightness is defined as the brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting. <a name="back_reference_1"></a><a href="#reference_1">[1]</a> Colour defines the following Lightness computation methods: End of explanation colour.colorimetry.lightness_Glasser1958(10.08) Explanation: Note: 'Lstar1976' is a convenient aliases for 'CIE 1976'. Glasser, Mckinney, Reilly and Schnelle (1958) Method Glasser, Mckinney, Reilly and Schnelle (1958) described a visually uniform colour coordinate system close to Adams chromatic-value system but where the quintic-parabola function has been replaced with a cube-root function: the Cube-Root Color Coordinate System. Lightness $L$ in the Cube-Root Color Coordinate System is calculated as follows: <a name="back_reference_2"></a><a href="#reference_2">[2]</a> $$ \begin{equation} L=25.29Y^{1/3}-18.38 \end{equation} $$ where $Y$ defines the luminance in domain [0, 100]. The colour.lightness_Glasser1958 definition is used to compute Lightness $L$: End of explanation colour.lightness(10.08, method='Glasser 1958') %matplotlib inline from colour.plotting import * colour_plotting_defaults() # Plotting the "Glasser (1958)" "Lightness" function. single_lightness_function_plot('Glasser 1958') Explanation: Note: Input luminance $Y$ is in domain [0, 100], output Lightness $L$ is in domain [0, 100]. The colour.lightness definition is implemented as a wrapper for various lightness computation methods: End of explanation colour.colorimetry.lightness_Wyszecki1963(10.08) Explanation: Wyszecki (1963) Method Wyszecki (1963) recommended the following cube root function to compute Lightness $W$ as a function of the luminance factor $Y$ within the practically important range of $1.0\%<Y<98\%$: <a name="back_reference_3"></a><a href="#reference_3">[3]</a> $$ \begin{equation} W=25Y^{1/3}-17 \end{equation} $$ The colour.lightness_Wyszecki1963 definition is used to compute Lightness $W$: End of explanation colour.lightness(10.08, method='Wyszecki 1963') # Plotting the "Wyszecki (1963)" "Lightness" function. single_lightness_function_plot('Wyszecki 1963') Explanation: Note: Input luminance $Y$ is in domain [0, 100], output Lightness $W$ is in domain [0, 100]. Using the colour.lightness wrapper definition: End of explanation colour.colorimetry.lightness_CIE1976(10.08) Explanation: CIE 1976 Method The CIE $L^a^b^$ approximately uniform colourspace defined in 1976 computes the Lightness $L^$ quantity as follows: <a name="back_reference_4"></a><a href="#reference_4">[4]</a> $$ \begin{equation} L^=\begin{cases}116\biggl(\cfrac{Y}{Y_n}\biggr)^{1/3}-16 & for\ \cfrac{Y}{Y_n}>\epsilon\ \kappa\biggl(\cfrac{Y}{Y_n}\biggr) & for\ \cfrac{Y}{Y_n}<=\epsilon \end{cases} \end{equation} $$ where $Y_n$ is the reference white luminance. with $$ \begin{equation} \begin{aligned} \epsilon&\ =\begin{cases}0.008856 & Actual\ CIE\ Standard\ 216\ /\ 24389 & Intent\ of\ the\ CIE\ Standard \end{cases}\ \kappa&\ =\begin{cases}903.3 & Actual\ CIE\ Standard\ 24389\ /\ 27 & Intent\ of\ the\ CIE\ Standard \end{cases} \end{aligned} \end{equation} $$ The original $\epsilon$ and $\kappa$ constants values have been shown to exhibit discontinuity at the junction point of the two functions grafted together to create the Lightness $L^*$ function. 
<a name="back_reference_5"></a><a href="#reference_5">[5]</a> Colour uses the rational values instead of the decimal values for these constants. See Also: The CIE $L^a^b^*$ Colourspace notebook for in-depth informations about the CIE $L^a^b^$* colourspace. The colour.lightness_CIE1976 definition is used to compute Lightness $L^*$: End of explanation colour.lightness(10.08) colour.lightness(10.08, method='CIE 1976', Y_n=95) colour.lightness(10.08, method='Lstar1976', Y_n=95) # Plotting the "CIE 1976" "Lightness" function. single_lightness_function_plot('CIE 1976') # Plotting multiple "Lightness" functions for comparison. multi_lightness_function_plot(['CIE 1976', 'Glasser 1958']) Explanation: Note: Input luminance $Y$ and $Y_n$ are in domain [0, 100], output Lightness $L^*$ is in domain [0, 100]. Using the colour.lightness wrapper definition: End of explanation colour.colorimetry.lightness_Fairchild2010(10.08 / 100, 1.836) colour.lightness(10.08 / 100, method='Fairchild 2010', epsilon=1.836) # Plotting the "Fairchild and Wyble (2010)" "Lightness" function. single_lightness_function_plot('Fairchild 2010') # Plotting multiple "Lightness" functions for comparison. multi_lightness_function_plot(['CIE 1976', 'Fairchild 2010']) Explanation: Fairchild and Wyble (2010) Method End of explanation colour.colorimetry.lightness_Fairchild2011(10.08 / 100, 0.710) colour.lightness(10.08 / 100, method='Fairchild 2011', epsilon=0.710) # Plotting the "Fairchild and Chen (2011)" "Lightness" function. single_lightness_function_plot('Fairchild 2011') # Plotting multiple "Lightness" functions for comparison. multi_lightness_function_plot(['CIE 1976', 'Fairchild 2011']) Explanation: Fairchild and Chen (2011) Method End of explanation
11,252
Given the following text description, write Python code to implement the functionality described below step by step Description: Fourier analysis & resonances A great benefit of being able to call rebound from within python is the ability to directly apply sophisticated analysis tools from scipy and other python libraries. Here we will do a simple Fourier analysis of a reduced Solar System consisting of Jupiter and Saturn. Let's begin by setting our units and adding these planets using JPL's horizons database Step1: Now let's set the integrator to whfast, and sacrificing accuracy for speed, set the timestep for the integration to about $10\%$ of Jupiter's orbital period. Step2: The last line (moving to the center of mass frame) is important to take out the linear drift in positions due to the constant COM motion. Without it we would erase some of the signal at low frequencies. Now let's run the integration, storing time series for the two planets' eccentricities (for plotting) and x-positions (for the Fourier analysis). Additionally, we store the mean longitudes and pericenter longitudes (varpi) for reasons that will become clear below. Having some idea of what the secular timescales are in the Solar System, we'll run the integration for $3\times 10^5$ yrs. We choose to collect $10^5$ outputs in order to resolve the planets' orbital periods ($\sim 10$ yrs) in the Fourier spectrum. Step3: Let's see what the eccentricity evolution looks like with matplotlib Step4: Now let's try to analyze the periodicities in this signal. Here we have a uniformly spaced time series, so we could run a Fast Fourier Transform, but as an example of the wider array of tools available through scipy, let's run a Lomb-Scargle periodogram (which allows for non-uniform time series). This could also be used when storing outputs at each timestep using the integrator IAS15 (which uses adaptive and therefore nonuniform timesteps). Let's check for periodicities with periods logarithmically spaced between 10 and $10^5$ yrs. From the documentation, we find that the lombscargle function requires a list of corresponding angular frequencies (ws), and we obtain the appropriate normalization for the plot. To avoid conversions to orbital elements, we analyze the time series of Jupiter's x-position. Step5: We pick out the obvious signal in the eccentricity plot with a period of $\approx 45000$ yrs, which is due to secular interactions between the two planets. There is quite a bit of power aliased into neighboring frequencies due to the short integration duration, with contributions from the second secular timescale, which is out at $\sim 2\times10^5$ yrs and causes a slower, low-amplitude modulation of the eccentricity signal plotted above (we limited the time of integration so that the example runs in a few seconds). Additionally, though it was invisible on the scale of the eccentricity plot above, we clearly see a strong signal at Jupiter's orbital period of about 12 years. But wait! Even on this scale set by the dominant frequencies of the problem, we see an additional blip just below $10^3$ yrs. Such a periodicity is actually visible in the above eccentricity plot if you inspect the thickness of the lines. Let's investigate by narrowing the period range Step6: This is the right timescale to be due to resonant perturbations between giant planets ($\sim 100$ orbits). In fact, Jupiter and Saturn are close to a 5 Step7: Now we construct $\phi_{5 Step8: We see that the resonant angle $\phi_{5
Python Code: import rebound import numpy as np sim = rebound.Simulation() sim.units = ('AU', 'yr', 'Msun') sim.add("Sun") sim.add("Jupiter") sim.add("Saturn") Explanation: Fourier analysis & resonances A great benefit of being able to call rebound from within python is the ability to directly apply sophisticated analysis tools from scipy and other python libraries. Here we will do a simple Fourier analysis of a reduced Solar System consisting of Jupiter and Saturn. Let's begin by setting our units and adding these planets using JPL's horizons database: End of explanation sim.integrator = "whfast" sim.dt = 1. # in years. About 10% of Jupiter's period sim.move_to_com() Explanation: Now let's set the integrator to whfast, and sacrificing accuracy for speed, set the timestep for the integration to about $10\%$ of Jupiter's orbital period. End of explanation Nout = 100000 tmax = 3.e5 Nplanets = 2 x = np.zeros((Nplanets,Nout)) ecc = np.zeros((Nplanets,Nout)) longitude = np.zeros((Nplanets,Nout)) varpi = np.zeros((Nplanets,Nout)) times = np.linspace(0.,tmax,Nout) ps = sim.particles for i,time in enumerate(times): sim.integrate(time) os = sim.calculate_orbits() for j in range(Nplanets): x[j][i] = ps[j+1].x # we use the 0 index in x for Jup and 1 for Sat, but the indices for ps start with the Sun at 0 ecc[j][i] = os[j].e longitude[j][i] = os[j].l varpi[j][i] = os[j].Omega + os[j].omega Explanation: The last line (moving to the center of mass frame) is important to take out the linear drift in positions due to the constant COM motion. Without it we would erase some of the signal at low frequencies. Now let's run the integration, storing time series for the two planets' eccentricities (for plotting) and x-positions (for the Fourier analysis). Additionally, we store the mean longitudes and pericenter longitudes (varpi) for reasons that will become clear below. Having some idea of what the secular timescales are in the Solar System, we'll run the integration for $3\times 10^5$ yrs. We choose to collect $10^5$ outputs in order to resolve the planets' orbital periods ($\sim 10$ yrs) in the Fourier spectrum. End of explanation %matplotlib inline labels = ["Jupiter", "Saturn"] import matplotlib.pyplot as plt fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) plt.plot(times,ecc[0],label=labels[0]) plt.plot(times,ecc[1],label=labels[1]) ax.set_xlabel("Time (yrs)", fontsize=20) ax.set_ylabel("Eccentricity", fontsize=20) ax.tick_params(labelsize=20) plt.legend(); Explanation: Let's see what the eccentricity evolution looks like with matplotlib: End of explanation from scipy import signal Npts = 3000 logPmin = np.log10(10.) logPmax = np.log10(1.e5) Ps = np.logspace(logPmin,logPmax,Npts) ws = np.asarray([2*np.pi/P for P in Ps]) periodogram = signal.lombscargle(times,x[0],ws) fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(Ps,np.sqrt(4*periodogram/Nout)) ax.set_xscale('log') ax.set_xlim([10**logPmin,10**logPmax]) ax.set_ylim([0,0.15]) ax.set_xlabel("Period (yrs)", fontsize=20) ax.set_ylabel("Power", fontsize=20) ax.tick_params(labelsize=20) Explanation: Now let's try to analyze the periodicities in this signal. Here we have a uniformly spaced time series, so we could run a Fast Fourier Transform, but as an example of the wider array of tools available through scipy, let's run a Lomb-Scargle periodogram (which allows for non-uniform time series). This could also be used when storing outputs at each timestep using the integrator IAS15 (which uses adaptive and therefore nonuniform timesteps). 
Let's check for periodicities with periods logarithmically spaced between 10 and $10^5$ yrs. From the documentation, we find that the lombscargle function requires a list of corresponding angular frequencies (ws), and we obtain the appropriate normalization for the plot. To avoid conversions to orbital elements, we analyze the time series of Jupiter's x-position. End of explanation fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(Ps,np.sqrt(4*periodogram/Nout)) ax.set_xscale('log') ax.set_xlim([600,1600]) ax.set_ylim([0,0.003]) ax.set_xlabel("Period (yrs)", fontsize=20) ax.set_ylabel("Power", fontsize=20) ax.tick_params(labelsize=20) Explanation: We pick out the obvious signal in the eccentricity plot with a period of $\approx 45000$ yrs, which is due to secular interactions between the two planets. There is quite a bit of power aliased into neighboring frequencies due to the short integration duration, with contributions from the second secular timescale, which is out at $\sim 2\times10^5$ yrs and causes a slower, low-amplitude modulation of the eccentricity signal plotted above (we limited the time of integration so that the example runs in a few seconds). Additionally, though it was invisible on the scale of the eccentricity plot above, we clearly see a strong signal at Jupiter's orbital period of about 12 years. But wait! Even on this scale set by the dominant frequencies of the problem, we see an additional blip just below $10^3$ yrs. Such a periodicity is actually visible in the above eccentricity plot if you inspect the thickness of the lines. Let's investigate by narrowing the period range: End of explanation def zeroTo360(val): while val < 0: val += 2*np.pi while val > 2*np.pi: val -= 2*np.pi return val*180/np.pi Explanation: This is the right timescale to be due to resonant perturbations between giant planets ($\sim 100$ orbits). In fact, Jupiter and Saturn are close to a 5:2 mean-motion resonance. This is the famous great inequality that Laplace showed was responsible for slight offsets in the predicted positions of the two giant planets. Let's check whether this is in fact responsible for the peak. In this case, we have that the mean longitude of Jupiter $\lambda_J$ cycles approximately 5 times for every 2 of Saturn's ($\lambda_S$). The game is to construct a slowly-varying resonant angle, which here could be $\phi_{5:2} = 5\lambda_S - 2\lambda_J - 3\varpi_J$, where $\varpi_J$ is Jupiter's longitude of pericenter. This last term is a much smaller contribution to the variation of $\phi_{5:2}$ than the first two, but ensures that the coefficients in the resonant angle sum to zero and therefore that the physics do not depend on your choice of coordinates. To see a clear trend, we have to shift each value of $\phi_{5:2}$ into the range $[0,360]$ degrees, so we define a small helper function that does the wrapping and conversion to degrees: End of explanation phi = [zeroTo360(5.*longitude[1][i] - 2.*longitude[0][i] - 3.*varpi[0][i]) for i in range(Nout)] fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(times,phi) ax.set_xlim([0,5.e3]) ax.set_ylim([0,360.]) ax.set_xlabel("time (yrs)", fontsize=20) ax.set_ylabel(r"$\phi_{5:2}$", fontsize=20) ax.tick_params(labelsize=20) Explanation: Now we construct $\phi_{5:2}$ and plot it over the first 5000 yrs. 
End of explanation phi2 = [zeroTo360(2*longitude[1][i] - longitude[0][i] - varpi[0][i]) for i in range(Nout)] fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(times,phi2) ax.set_xlim([0,5.e3]) ax.set_ylim([0,360.]) ax.set_xlabel("time (yrs)", fontsize=20) ax.set_ylabel(r"$\phi_{2:1}$", fontsize=20) ax.tick_params(labelsize=20) Explanation: We see that the resonant angle $\phi_{5:2}$ circulates, but with a long period of $\approx 900$ yrs (compared to the orbital periods of $\sim 10$ yrs), which precisely matches the blip we saw in the Lomb-Scargle periodogram. This is approximately the same oscillation period observed in the Solar System, despite our simplified setup! This resonant angle is able to have a visible effect because its (small) effects build up coherently over many orbits. As a further illustration, other resonance angles like those at the 2:1 will circulate much faster (because Jupiter and Saturn's period ratio is not close to 2). We can easily plot this. Taking one of the 2:1 resonance angles $\phi_{2:1} = 2\lambda_S - \lambda_J - \varpi_J$, End of explanation
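A rough back-of-the-envelope check of the roughly 900 yr circulation period found above (the orbital periods below are standard approximate values, not outputs of this simulation):

import math

P_jup, P_sat = 11.86, 29.46                  # approximate orbital periods in years
n_jup, n_sat = 2 * math.pi / P_jup, 2 * math.pi / P_sat
slow_freq = abs(5 * n_sat - 2 * n_jup)       # d(phi_{5:2})/dt, ignoring the slow varpi term
print(2 * math.pi / slow_freq)               # ~900 yr, consistent with the plot above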
11,253
Given the following text description, write Python code to implement the functionality described below step by step Description: Occasionally, absolutely crazy ideas crop up into my noggin. Recently, I've had two take up residence almost simultaneously, both related to pynads. Haskell Type Signatures Since Pynads is, nominally, a learning exercise for me to understand some concepts in functional programming -- specifically in terms of Haskell -- a little more deeply. I found myself wanting to using Haskell type signatures with some of my functions. The reason why is because I like Haskell's type signatures. They stay out of the way of my actual function signatures. Here's the current way Python 3 annotates functions Step1: That's so much line noise. Like what. Look at that default assignment. Like, I get why the annotations are inlined with the signature. But they're just ugly. Meanwhile, here's a similar Haskell function with the type signature Step2: Nice. We retain that information in a dictionary out of the way. What if we could combine these two things? Step3: I like this. It stays out of the way, it uses a decorator (never enough decorators). Let's checkout the __annotations__ dict Step4: Uh....Well, it didn't fill anything in. What did it do? Well, it attaches the signature to the docstring... Step5: That's...nice. I'm actually perfectly content with this solution currently. But wouldn't it be cool? (this is a phrase that only preceeds michief and trouble). Wouldn't it be cool if that Haskell type was somehow transformed into a Python annotations dictionary and on the other end we'd be able to inspect the annotation and get this Step6: However, this is complicated because what if we had a higher order function? The Haskell type signature looks like this Step7: Oops. But this works
Python Code: def my_func(a: int, b: str = 'hello') -> tuple: return (a, b) my_func(1, 'wut') Explanation: Occasionally, absolutely crazy ideas crop up into my noggin. Recently, I've had two take up residence almost simultaneously, both related to pynads. Haskell Type Signatures Since Pynads is, nominally, a learning exercise for me to understand some concepts in functional programming -- specifically in terms of Haskell -- a little more deeply. I found myself wanting to using Haskell type signatures with some of my functions. The reason why is because I like Haskell's type signatures. They stay out of the way of my actual function signatures. Here's the current way Python 3 annotates functions: End of explanation my_func.__annotations__ Explanation: That's so much line noise. Like what. Look at that default assignment. Like, I get why the annotations are inlined with the signature. But they're just ugly. Meanwhile, here's a similar Haskell function with the type signature: myFunc :: Int -&gt; String -&gt; (Int, String) myFunc a b = (a, b) That type signature is both helpful AND out of the way. However, there's one really nice thing that Python does with these annotations: End of explanation from pynads.utils.decorators import annotate @annotate(type="Int -> String -> (Int, String)") def my_func(a, b='hello'): return (a, b) Explanation: Nice. We retain that information in a dictionary out of the way. What if we could combine these two things? End of explanation my_func.__annotations__ Explanation: I like this. It stays out of the way, it uses a decorator (never enough decorators). Let's checkout the __annotations__ dict: End of explanation print(my_func.__doc__) Explanation: Uh....Well, it didn't fill anything in. What did it do? Well, it attaches the signature to the docstring... End of explanation {'a': 'Int', 'b': 'String', 'returns': '(Int, String)' } Explanation: That's...nice. I'm actually perfectly content with this solution currently. But wouldn't it be cool? (this is a phrase that only preceeds michief and trouble). Wouldn't it be cool if that Haskell type was somehow transformed into a Python annotations dictionary and on the other end we'd be able to inspect the annotation and get this: End of explanation from pynads.do import do, mreturn from pynads import List @do(monad=List) def chessboard(ranks, files): r = yield List(*ranks) f = yield List(*files) mreturn((r,f)) #chessboard('abcdefgh', range(1,9)) Explanation: However, this is complicated because what if we had a higher order function? The Haskell type signature looks like this: map :: (a -&gt; b) -&gt; [a] -&gt; [b] "Take a function of type a to type b, a list of as and return a list of bs." Simple. How is this parsed? What if we put type class restrictions on the type: (Monad m) =&gt; m a -&gt; (a -&gt; b) -&gt; m b Where m is an instance of Monad, take a m a and a function of type a to b and return a m b. What if we have multiple typeclass restrictions: (Monad m, Monoid s) =&gt; m s -&gt; m s -&gt; (s -&gt; s -&gt; m s) -&gt; m s Maybe we're lifting mappend into a monad? Let's also pretend that this isn't a bad way of doing it as well. How do we parse this? What about "existentially types", aka. forall a b. (which is something I've not used, nor understand, but apparently it's useful because reasons). Of course, there are cons with this: Haskell type signatures are too complex to be matched with regular expressions. How do you write a regex for forall a b. ((a, b) -&gt; (Char, Bool))? Parsers can be slow. 
They can be fast. But since this is a library built on understand functional things like applicatives and monads, of course I'd use an applicative/monadic combinatorial parser, which in Python will be slow. So many competing Python typing libraries. mypy seems to have gotten the BDFL nod and in fact, seems to be on track for inclusion to Python 3.5. Is it worth taking a second step and translating the parsed type signature to a type system? __annotations__ are use for things that aren't type annotations. Is this good? Is this bad? I don't know. Well, I think they should be for type signatures, but there's some cool stuff that is done with them. What about typeclass restrictions? Do we create a special entry? How do we handle collisions? Etc. Things to think about. My Broken Do Notation I implemented a rudimentary do-notation esque syntax sourced from Peter Thatcher's post on monads in Python. Here's the thing, it works...except with the List monad, because we need to repeatedly call the bound function and then flatten the list out. Except with this implementation, it hits mreturn and the whole thing blows up. Wah wah wah. But I like this style: End of explanation def chessboard(ranks, files): return List(*ranks) >> (lambda r: List(*files) >> (lambda f: List.unit((r,f)) )) chessboard('abcdefgh', range(1,9)) Explanation: Oops. But this works: End of explanation
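For the simple, non-higher-order case discussed above, mapping such a signature onto a Python annotations dict is straightforward; the toy helper below is ours and deliberately ignores the hard cases (higher-order functions, typeclass constraints, forall):

import inspect

def parse_simple_signature(sig, func):
    """Toy parser: split a flat Haskell-style signature on '->' and zip with parameter names."""
    parts = [part.strip() for part in sig.split('->')]
    annotations = dict(zip(inspect.signature(func).parameters, parts[:-1]))
    annotations['return'] = parts[-1]
    return annotations

def my_func(a, b='hello'):
    return (a, b)

print(parse_simple_signature("Int -> String -> (Int, String)", my_func))
# {'a': 'Int', 'b': 'String', 'return': '(Int, String)'}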
11,254
Given the following text description, write Python code to implement the functionality described below step by step Description: Table of Contents <p><div class="lev1 toc-item"><a href="#ATENÇÃO Step1: Note que a imagem possui 174 linhas e 314 colunas, totalizando mais de 54 mil pixels. A representação do pixel é pelo tipo uint8, isto é, valores de 8 bits sem sinal, de 0 a 255. Note também que a impressão de todos os pixels é feita de forma especial. Se todos os 54 mil pixels tivessem que ser impressos, o resultado da impressão seria proibitivo. Neste caso, quando a imagem (matriz) for muito grande, o NumPy imprime apenas os pixels dos quatro cantos da imagem. Visualização de uma imagem No Adessowiki, a visualização de uma imagem é feita unicamente pela função adshow, que internamente utiliza o pacote PIL já mencionado. O processo de exibição de uma imagem cria uma representação gráfica desta matriz em que os valores do pixel é atribuído a um nível de cinza (imagem monocromática) ou a uma cor particular. Quando o pixel da imagem é uint8, o valor zero é atribuído ao preto e o valor 255 ao branco e gerando um tom de cinza proporcional ao valor do pixel. Veja abaixo a visualização da imagem cookies.tif já lida no trecho de programa anterior. Note que a função adshow possui dois parâmetros, a imagem e um string para ser exibido na legenda da visualização da imagem. Step2: O segundo tipo de imagem que o adshow visualiza é a imagem com pixels do tipo booleano. Como ilustração, faremos uma operação comparando cada pixel da imagem cookies com o valor 128 gerando assim uma nova imagem f_bin onde cada pixel será True ou False dependendo do resultado da comparação. O adshow mapeia os pixels verdadeiros como branco e os pixels falsos como preto Step3: Por fim, além destes dois modos de exibição, o adshow pode também exibir imagens coloridas no formato RGB e tipo de pixel uint8. No NumPy a imagem RGB é representada como três images armazenadas na dimensão profundidade. Neste caso o array tem 3 dimensões e seu shape tem o formato (3,H,W). Step4: Neste curso, por motivos didáticos, o adshow somente visualiza estes 3 tipos de imagens. Qualquer outro tipo de imagem, seja de valores maiores que 255, negativos ou complexos, precisam ser explicitamente convertidos para os valores entre 0 e 255 ou True e False. Maiores informações no uso do adshow podem ser vistas em ia636
Python Code: %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np !ls ../data f = mpimg.imread('../data/cameraman.tif') print('Tamanho de f: ', f.shape) print('Tipo do pixel:', f.dtype) print('Número total de pixels:', f.size) print('Pixels:\n', f) Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#ATENÇÃO:-este-notebook-ainda-não-está-pronto" data-toc-modified-id="ATENÇÃO:-este-notebook-ainda-não-está-pronto-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>ATENÇÃO: este notebook ainda não está pronto</a></div><div class="lev1 toc-item"><a href="#Representação,-Leitura-e-Visualização-de-Imagens" data-toc-modified-id="Representação,-Leitura-e-Visualização-de-Imagens-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Representação, Leitura e Visualização de Imagens</a></div><div class="lev2 toc-item"><a href="#Imagem-como-matriz" data-toc-modified-id="Imagem-como-matriz-21"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Imagem como matriz</a></div><div class="lev2 toc-item"><a href="#Leitura-de-uma-imagem" data-toc-modified-id="Leitura-de-uma-imagem-22"><span class="toc-item-num">2.2&nbsp;&nbsp;</span>Leitura de uma imagem</a></div><div class="lev2 toc-item"><a href="#Visualização-de-uma-imagem" data-toc-modified-id="Visualização-de-uma-imagem-23"><span class="toc-item-num">2.3&nbsp;&nbsp;</span>Visualização de uma imagem</a></div><div class="lev2 toc-item"><a href="#Visualizando-numericamente-uma-pequena-região-de-interesse-da-imagem" data-toc-modified-id="Visualizando-numericamente-uma-pequena-região-de-interesse-da-imagem-24"><span class="toc-item-num">2.4&nbsp;&nbsp;</span>Visualizando numericamente uma pequena região de interesse da imagem</a></div><div class="lev2 toc-item"><a href="#Criando-legenda-da-imagem-com-impressão-de-variáveis" data-toc-modified-id="Criando-legenda-da-imagem-com-impressão-de-variáveis-25"><span class="toc-item-num">2.5&nbsp;&nbsp;</span>Criando legenda da imagem com impressão de variáveis</a></div> # ATENÇÃO: este notebook ainda não está pronto # Representação, Leitura e Visualização de Imagens Uma imagem digital pode ser representada por uma matriz bidimensional, onde os seus elementos são chamados de pixels (abreviatura de *picture elements*). Existem vários pacotes de processamento de imagens onde a imagem é representada por uma estrutura de dados específica. No nosso caso, no Adessowiki iremos utilizar a matriz disponível no *ndarray NumPy*. A vantagem é que todas as operações disponíveis para processamento matricial podem ser utilizados como processamento de imagens. Este é um dos principais objetivos deste curso: como utilizar linguagens de processamento matricial para fazermos processamento de imagens. ## Imagem como matriz Neste curso, uma imagem é definida pelo seu cabeçalho (tamanho da matriz e tipo de pixel) e pelos pixels em si. Estas informações são inerentes ao tipo ``ndarray`` do NumPy. O tamanho da matriz é caracterizado pelas suas dimensões: vertical e horizontal. A dimensão vertical é definida pelo número de linhas (*rows*) ou altura H (*height*) e a dimensão horizontal é definida pelo número de colunas (*cols*) ou largura W (*width*). No NumPy, as dimensões são armazenadas no ``shape`` da matriz como uma tupla (H,W). 
Uma imagem pode ter valores de pixels que podem ser armazenados em vários tipos de dados: a imagem binária tem apenas dois valores possíveis, muitas vezes atribuídos a preto e branco; uma imagem em nível de cinza tem valores inteiros positivos, muitas vezes, de 0 a um valor máximo. É possível ter pixels com valores negativos, com números reais, e até mesmo pixels com valores complexos. Um exemplo de uma imagem com valores de pixel negativos são imagens térmicas com temperaturas negativas. As imagens com pixels que são números reais podem ser encontradas nas imagens que representam uma onda senóide com valores que variam de -1 a +1. As imagens com os valores de pixel complexos podem ser encontrados em algumas transformações da imagem como a Transformada Discreta de Fourier. Como as imagens usualmente possuem centenas de milhares ou milhões de pixels, é importante escolher a menor representação do pixel para economizar o uso da memória do computador e usar a representação que seja mais eficiente para processamento. No Numpy, o tipo do pixel é armazenado no ``dtype`` que pode assumir vários tipos. Os quatro tipos que mais usaremos neste curso são indicados na tabela: ====== =============================== dtype valores ====== =============================== bool True, False uint8 8 bits sem sinal, de 0 a 255 uint16 16 bits sem sinal, de 0 a 65535 int 64 bits com sinal float ponto flutuante ====== =============================== ## Leitura de uma imagem Neste curso iremos trabalhar com imagens criadas sinteticamente e com imagens guardadas em arquivos. A leitura de uma imagem no Adessowiki é feita pelas funções ``adread`` e ``adreadgray`` que utilizam o pacote `http://effbot.org/imagingbook/ PIL` de processamento de imagens. Neste curso não utilizaremos as funções de processamento de imagens do PIL, mas sim utilizaremos as operações matriciais do NumPy. Existem diversas formas de salvar uma imagem em arquivo e utilizaremos as mais comuns: png, jpg, tif. As imagens disponíveis podem ser visualizadas na toolbox ia636 do Adessowiki: `ia636:iaimages`. Veja a seguir um exemplo de leitura de imagem e a impressão de seu cabeçalho e de seus pixels: End of explanation plt.imshow(f,cmap='gray') Explanation: Note que a imagem possui 174 linhas e 314 colunas, totalizando mais de 54 mil pixels. A representação do pixel é pelo tipo uint8, isto é, valores de 8 bits sem sinal, de 0 a 255. Note também que a impressão de todos os pixels é feita de forma especial. Se todos os 54 mil pixels tivessem que ser impressos, o resultado da impressão seria proibitivo. Neste caso, quando a imagem (matriz) for muito grande, o NumPy imprime apenas os pixels dos quatro cantos da imagem. Visualização de uma imagem No Adessowiki, a visualização de uma imagem é feita unicamente pela função adshow, que internamente utiliza o pacote PIL já mencionado. O processo de exibição de uma imagem cria uma representação gráfica desta matriz em que os valores do pixel é atribuído a um nível de cinza (imagem monocromática) ou a uma cor particular. Quando o pixel da imagem é uint8, o valor zero é atribuído ao preto e o valor 255 ao branco e gerando um tom de cinza proporcional ao valor do pixel. Veja abaixo a visualização da imagem cookies.tif já lida no trecho de programa anterior. Note que a função adshow possui dois parâmetros, a imagem e um string para ser exibido na legenda da visualização da imagem. 
End of explanation f_bin = f > 128 print('Tipo do pixel:', f_bin.dtype) plt.imshow(f_bin,cmap='gray') plt.colorbar() print(f_bin.min(), f_bin.max()) f_f = f_bin.astype(np.float) f_i = f_bin.astype(np.int) print(f_f.min(),f_f.max()) print(f_i.min(),f_i.max()) Explanation: O segundo tipo de imagem que o adshow visualiza é a imagem com pixels do tipo booleano. Como ilustração, faremos uma operação comparando cada pixel da imagem cookies com o valor 128 gerando assim uma nova imagem f_bin onde cada pixel será True ou False dependendo do resultado da comparação. O adshow mapeia os pixels verdadeiros como branco e os pixels falsos como preto: End of explanation f_cor = mpimg.imread('../data/boat.tif') print('Dimensões: ', f_cor.shape) print('Tipo do pixel:', f_cor.dtype) plt.imshow(f_cor) f_roi = f_cor[:2,:3,:] print(f_roi) Explanation: Por fim, além destes dois modos de exibição, o adshow pode também exibir imagens coloridas no formato RGB e tipo de pixel uint8. No NumPy a imagem RGB é representada como três images armazenadas na dimensão profundidade. Neste caso o array tem 3 dimensões e seu shape tem o formato (3,H,W). End of explanation f= mpimg.imread('../data/gull.pgm') plt.imshow(f,cmap='gray') g = f[:7,:10] print('g=') print(g) Explanation: Neste curso, por motivos didáticos, o adshow somente visualiza estes 3 tipos de imagens. Qualquer outro tipo de imagem, seja de valores maiores que 255, negativos ou complexos, precisam ser explicitamente convertidos para os valores entre 0 e 255 ou True e False. Maiores informações no uso do adshow podem ser vistas em ia636:adshow. .. note:: Uma das principais causas de erro em processamento de imagens é não prestar atenção no tipo do pixel ou nas dimensões da imagem. Recomenda-se verificar esta informações. Uma função que é bastante útil é a ia636:iaimginfo que foi criada para verificar rapidamente o tipo de pixel, dimensões e os valores mínimo e máximo da imagem. Veja a seguir um exemplo do seu uso nas três imagens processadas anteriormente: import ia636 print 'f: ', ia636.iaimginfo(f) print 'f_bin:', ia636.iaimginfo(f_bin) print 'f_cor:', ia636.iaimginfo(f_cor) Visualizando numericamente uma pequena região de interesse da imagem Para verificar que a imagem lida é composta de valores entre 0 e 255, vamos imprimir numericamente apenas uma pequena região de 7 linhas e 10 colunas do canto superior esquerdo da imagem. Fazemos isto com fatiamento: End of explanation
11,255
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have an array :
Problem: import numpy as np a = np.array([[ 0, 1, 2, 3, 5, 6, 7, 8], [ 4, 5, 6, 7, 5, 3, 2, 5], [ 8, 9, 10, 11, 4, 5, 3, 5]]) low = 1 high = 5 result = a[:, low:high]
11,256
Given the following text description, write Python code to implement the functionality described below step by step Description: Compute sparse inverse solution with mixed norm Step1: Run solver with SURE criterion Step2: Plot dipole activations Step3: Plot residual Step4: Generate stc from dipoles Step5: View in 2D and 3D ("glass" brain like 3D plot) Step6: Morph onto fsaverage brain and view
Python Code: # Author: Alexandre Gramfort <[email protected]> # Daniel Strohmeier <[email protected]> # # License: BSD-3-Clause import numpy as np import mne from mne.datasets import sample from mne.inverse_sparse import mixed_norm, make_stc_from_dipoles from mne.minimum_norm import make_inverse_operator, apply_inverse from mne.viz import (plot_sparse_source_estimates, plot_dipole_locations, plot_dipole_amplitudes) print(__doc__) data_path = sample.data_path() fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' ave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif' cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif' subjects_dir = data_path + '/subjects' # Read noise covariance matrix cov = mne.read_cov(cov_fname) # Handling average file condition = 'Left Auditory' evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0)) evoked.crop(tmin=0, tmax=0.3) # Handling forward solution forward = mne.read_forward_solution(fwd_fname) Explanation: Compute sparse inverse solution with mixed norm: MxNE and irMxNE Runs an (ir)MxNE (L1/L2 :footcite:GramfortEtAl2012 or L0.5/L2 :footcite:StrohmeierEtAl2014 mixed norm) inverse solver. L0.5/L2 is done with irMxNE which allows for sparser source estimates with less amplitude bias due to the non-convexity of the L0.5/L2 mixed norm penalty. End of explanation alpha = "sure" # regularization parameter between 0 and 100 or SURE criterion loose, depth = 0.9, 0.9 # loose orientation & depth weighting n_mxne_iter = 10 # if > 1 use L0.5/L2 reweighted mixed norm solver # if n_mxne_iter > 1 dSPM weighting can be avoided. # Compute dSPM solution to be used as weights in MxNE inverse_operator = make_inverse_operator(evoked.info, forward, cov, depth=depth, fixed=True, use_cps=True) stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. 
/ 9., method='dSPM') # Compute (ir)MxNE inverse solution with dipole output dipoles, residual = mixed_norm( evoked, forward, cov, alpha, loose=loose, depth=depth, maxit=3000, tol=1e-4, active_set_size=10, debias=False, weights=stc_dspm, weights_min=8., n_mxne_iter=n_mxne_iter, return_residual=True, return_as_dipoles=True, verbose=True, random_state=0, # for this dataset we know we should use a high alpha, so avoid some # of the slower (lower) alpha values sure_alpha_grid=np.linspace(100, 40, 10), ) t = 0.083 tidx = evoked.time_as_index(t) for di, dip in enumerate(dipoles, 1): print(f'Dipole #{di} GOF at {1000 * t:0.1f} ms: ' f'{float(dip.gof[tidx]):0.1f}%') Explanation: Run solver with SURE criterion :footcite:DeledalleEtAl2014 End of explanation plot_dipole_amplitudes(dipoles) # Plot dipole location of the strongest dipole with MRI slices idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles]) plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample', subjects_dir=subjects_dir, mode='orthoview', idx='amplitude') # Plot dipole locations of all dipoles with MRI slices for dip in dipoles: plot_dipole_locations(dip, forward['mri_head_t'], 'sample', subjects_dir=subjects_dir, mode='orthoview', idx='amplitude') Explanation: Plot dipole activations End of explanation ylim = dict(eeg=[-10, 10], grad=[-400, 400], mag=[-600, 600]) evoked.pick_types(meg=True, eeg=True, exclude='bads') evoked.plot(ylim=ylim, proj=True, time_unit='s') residual.pick_types(meg=True, eeg=True, exclude='bads') residual.plot(ylim=ylim, proj=True, time_unit='s') Explanation: Plot residual End of explanation stc = make_stc_from_dipoles(dipoles, forward['src']) Explanation: Generate stc from dipoles End of explanation solver = "MxNE" if n_mxne_iter == 1 else "irMxNE" plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1), fig_name="%s (cond %s)" % (solver, condition), opacity=0.1) Explanation: View in 2D and 3D ("glass" brain like 3D plot) End of explanation morph = mne.compute_source_morph(stc, subject_from='sample', subject_to='fsaverage', spacing=None, sparse=True, subjects_dir=subjects_dir) stc_fsaverage = morph.apply(stc) src_fsaverage_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif' src_fsaverage = mne.read_source_spaces(src_fsaverage_fname) plot_sparse_source_estimates(src_fsaverage, stc_fsaverage, bgcolor=(1, 1, 1), fig_name="Morphed %s (cond %s)" % (solver, condition), opacity=0.1) Explanation: Morph onto fsaverage brain and view End of explanation
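One detail worth spelling out from the code above: the lambda2=1./9. passed to apply_inverse follows the common MNE-Python convention lambda2 = 1 / snr**2, with an assumed SNR of 3 for evoked data:

snr = 3.0                 # typical assumption for evoked responses
lambda2 = 1.0 / snr ** 2
print(lambda2)            # 0.111..., i.e. the 1. / 9. used above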
11,257
Given the following text description, write Python code to implement the functionality described below step by step Description: Wayne Nixalo - 4 Jun 2017 Codealong of Practical Deep Learning I Lesson 4 statefarm JNB. My comments are in italics. 6 Jun 2017 NOTE Step1: Setup Batches Step2: Rather than using batches, we could just import all the data into an array to save some processing time. (In mose examples, I'm using the batches, however - just because that's how I happened to start out.) Step3: Re-run sample experiments on full dataset We should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. So let's see how they go - the models in this section are exact copies of the sample notebook models. Single Conv Layer Step4: Interestingly, with no regularization or augmentation, we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results. Data Augmentation Step5: I'm shocked by how good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation. Noted. I'm seeing the same numbers Four Conv/Pooling pairs + Dropout Unfortunately, the results are still very unstable - the validation accuracy jumps from epoch to epoch. Perhaps a deeper model with some dropout would help. Step6: This is looking quite a bit better - the accuracy is similar, but the stability is higher. There's still some way to go however... Imagenet Conv Features Since we have so little data, and it is similar to ImageNet images (full-color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.) NOTE Step7: (Working on getting conv_test_feat. For some reason getting a nameless "MemoryError Step8: BatchNorm Dense layers on pretrained Conv layers Since we've pre-computed the output of the last convolutional layer, we need to create a network that takes that as input, and predicts our 10 classes. Let's try using a simplified version of VGG's dense layers. Step9: NOTE Step10: Looking good! Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model. Pre-computed DataAugmentation + Dropout We'll use our usual data augmentation parameters Step11: We'll use those to create a dataset of convolutional features 5x bigger than the training set. Step12: Let's include the real trianing data as well in its non-augmented form. Step13: Since we've now got a dataset 6x bigger than before, we'll need tocopy our labels 6 times too. Step14: Based on some experiments the previous model works well, with bigger dense layers. Step15: Now we can train the model as usual, with pre-computed augmented data. Step16: Looks good - let's save those weights. 
Step17: Pseudo-Labeling We're going to try using a combination of pseudo labeling and knowledge distillation to allow us to use unlabeled data (ie Step18: ...concatenate them with our training labels... Step19: ...and fine-tune our model using that data. Step20: That's a distinct improvement - even though the validation set isn't very big. This looks encouraging for when we try this on the test set. Step21: Submit We'll find a good clipping amount using the validation set, prior to submitting.
Python Code: import theano import os, sys sys.path.insert(1, os.path.join('utils')) %matplotlib inline from __future__ import print_function, division path = "data/statefarm/" import utils; reload(utils) from utils import * from IPython.display import FileLink # batch_size=32 batch_size=16 Explanation: Wayne Nixalo - 4 Jun 2017 Codealong of Practical Deep Learning I Lesson 4 statefarm JNB. My comments are in italics. 6 Jun 2017 NOTE: notebook incomplete. Unable to generate convolutional-model features on test data: "MemoryError:" Enter State Farm End of explanation batches = get_batches(path + 'train', batch_size=batch_size) val_batches = get_batches(path + 'valid', batch_size=batch_size*2, shuffle=False) # test_batches = get_batches(path + 'test', batch_size=batch_size, shuffle=False) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, trn_filenames, test_filenames) = get_classes(path) Explanation: Setup Batches End of explanation # trn = get_data(path + 'train') # val = get_data(path + 'valid') # save_array(path + 'results/val.dat', val) # save_array(path + 'results/trn.dat', trn) # val = load_array(path + 'results/val.dat') # trn = load_array(path + 'results/trn.dat') Explanation: Rather than using batches, we could just import all the data into an array to save some processing time. (In mose examples, I'm using the batches, however - just because that's how I happened to start out.) End of explanation def conv1(batches): model = Sequential([ BatchNormalization(axis=1, input_shape=(3,224,224)), Convolution2D(32, 3, 3, activation='relu'), BatchNormalization(axis=1), MaxPooling2D((3,3)), Convolution2D(64, 3, 3, activation='relu'), BatchNormalization(axis=1), MaxPooling2D((3,3)), Flatten(), Dense(200, activation='relu'), BatchNormalization(), Dense(10, activation='softmax') ]) model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy']) model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) model.optimizer.lr = 1e-3 model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) return model model = conv1(batches) Explanation: Re-run sample experiments on full dataset We should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. So let's see how they go - the models in this section are exact copies of the sample notebook models. Single Conv Layer End of explanation gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, shear_range=0.1, channel_shift_range=20, width_shift_range=0.1) batches = get_batches(path + 'train', gen_t, batch_size=batch_size) model = conv1(batches) model.optimizer.lr = 1e-4 model.fit_generator(batches, batches.nb_sample, nb_epoch=15, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) Explanation: Interestingly, with no regularization or augmentation, we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results. 
Data Augmentation End of explanation gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, shear_range=0.1, channel_shift_range=20, width_shift_range=0.1) batches = get_batches(path + 'train', gen_t, batch_size=batch_size) model = Sequential([ BatchNormalization(axis=1, input_shape=(3, 224, 224)), Convolution2D(32, 3, 3, activation='relu'), BatchNormalization(axis=1), MaxPooling2D(), Convolution2D(64, 3, 3, activation='relu'), BatchNormalization(axis=1), MaxPooling2D(), Convolution2D(128, 3, 3, activation='relu'), BatchNormalization(axis=1), MaxPooling2D(), Flatten(), Dense(200, activation='relu'), BatchNormalization(), Dropout(0.5), Dense(200, activation='relu'), BatchNormalization(), Dropout(0.5), Dense(10, activation='softmax') ]) model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy']) model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) model.optimizer.lr=1e-3 model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) model.optimizer.lr=1e-5 model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) # os.mkdir(path + 'models') model.save_weights(path + 'models/conv8_prelim.h5') Explanation: I'm shocked by how good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation. Noted. I'm seeing the same numbers Four Conv/Pooling pairs + Dropout Unfortunately, the results are still very unstable - the validation accuracy jumps from epoch to epoch. Perhaps a deeper model with some dropout would help. End of explanation vgg = Vgg16() model = vgg.model last_conv_idx = [i for i, l in enumerate(model.layers) if type(l) is Convolution2D][-1] conv_layers = model.layers[:last_conv_idx + 1] conv_model = Sequential(conv_layers) # ¡ batches shuffle must be set to False when pre-computing features ! batches = get_batches(path + 'train', batch_size=batch_size, shuffle=False) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) conv_feat = conv_model.predict_generator(batches, batches.nb_sample) conv_val_feat = conv_model.predict_generator(val_batches, val_batches.nb_sample) # conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample) save_array(path + 'results/conv_feat.dat', conv_feat) save_array(path + 'results/conv_val_feat.dat', conv_val_feat) # save_array(path + 'results/conv_test_feat.dat', conv_test_feat) conv_feat = load_array(path + 'results/conv_feat.dat') conv_val_feat = load_array(path + 'results/conv_val_feat.dat') # conv_test_feat = load_array(path + 'results/conv_test_feat.dat') conv_val_feat.shape Explanation: This is looking quite a bit better - the accuracy is similar, but the stability is higher. There's still some way to go however... Imagenet Conv Features Since we have so little data, and it is similar to ImageNet images (full-color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. 
(However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.) NOTE: there is a work-around to this, discussed in lecture: add augmented-versions of the data to the dataset first. End of explanation test_batches = get_batches(path + 'test', batch_size=1, shuffle=False, class_mode=None) save_array(path + '/results/conv_test_feat.dat', conv_model.predict_generator(test_batches, test_batches.nb_sample)) save_array(path + 'results/conv_test_feat.dat', conv_test_feat) Explanation: (Working on getting conv_test_feat. For some reason getting a nameless "MemoryError:" every time I run conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample) Update: this doesn't throw an error on the Mac using CPU, however, unable on Linux machine to generate test convolutional features. Throwing "MemoryError" Will see if able to generate predictions on test data through full model. Thought: loading convolutional training and validation features raises memory load from ~2.3 GB to ~10.5 GB.. That's on ~20k imgs. Test data is 80k.. Could the MemoryError be from overloading RAM? But then why is that working just fine on the Mac? Is it an issue with the version of Theano? It's 0.9.0 on both machines... Maybe I should find a way to save generated convolutional test features straight to disk as they're created in batches.. End of explanation def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dropout(p/2), Dense(128, activation='relu'), BatchNormalization(), Dropout(p/2), Dense(128, activation='relu'), BatchNormalization(), Dropout(p), Dense(10, activation='softmax') ] p = 0.8 bn_model = Sequential(get_bn_layers(p)) bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy']) bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=1, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.01 bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=2, validation_data=(conv_val_feat, val_labels)) bn_model.save_weights(path + 'models/conv8.h5') # bn_model.load_weights(path + 'models/conv8.h5') Explanation: BatchNorm Dense layers on pretrained Conv layers Since we've pre-computed the output of the last convolutional layer, we need to create a network that takes that as input, and predicts our 10 classes. Let's try using a simplified version of VGG's dense layers. End of explanation bn_model.optimizer.lr=0.001 bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=4, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.0001 bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=4, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.00001 bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=8, validation_data=(conv_val_feat, val_labels)) Explanation: NOTE: I'm going to leave off the following sections on concatenating DataAugmented versions w/ training data features; and Pseudolabeling, for time. For the massive memory-overhead of concatenating data augmented files/features -- use bcolz to save them and work on it in batches. 
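(One way to act on that bcolz idea, sketched here with the names already defined above — treat it as a rough outline rather than the notebook's actual fix: predict the test-set conv features one batch at a time and append them to an on-disk carray, so the full 80k-image feature array never has to sit in RAM.)
# Sketch: stream test-set conv features to disk in chunks with bcolz.
import bcolz
stream_batches = get_batches(path + 'test', batch_size=64, shuffle=False,
                             class_mode=None)
feat_dir = path + 'results/conv_test_feat_streamed.dat'
n_chunks = int(np.ceil(stream_batches.nb_sample / 64.))
arr = None
for i in range(n_chunks):
    imgs = stream_batches.next()          # class_mode=None -> images only
    feats = conv_model.predict_on_batch(imgs)
    if arr is None:
        arr = bcolz.carray(feats, rootdir=feat_dir, mode='w')
    else:
        arr.append(feats)
arr.flush()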
Sure I'll get experience with that soon.I may train the model w/ dropout below End of explanation gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, shear_range=0.1, channel_shif_range=20, width_shift_range=0.1) da_batches = get_batches(path + 'train', gen_t, batch_size=batch_size, shuffle=False) Explanation: Looking good! Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model. Pre-computed DataAugmentation + Dropout We'll use our usual data augmentation parameters: End of explanation da_conv_feat = conv_model.predict_generator(da_batches, da_batches.nb_smaple*5) save_array(path + 'results/da_conv_feat.dat', da_conv_feat) da_conv_feat = load_array('results/da_conv_feat.dat') Explanation: We'll use those to create a dataset of convolutional features 5x bigger than the training set. End of explanation da_conv_feat = np.concatenate([da_conv_feat, conv_feat]) Explanation: Let's include the real trianing data as well in its non-augmented form. End of explanation da_trn_labels = np.concatenate([trn_labels]*6) Explanation: Since we've now got a dataset 6x bigger than before, we'll need tocopy our labels 6 times too. End of explanation def get_bn_da_layers(p): return [ MaxPooling2D(input_shape = conv_layers[-1].output_shape[1:]), Flatten(), Dropout(p), Dense(256, activation='relu'), BatchNormalization(), Dropout(p), Dense(256, activation='relu'), BatchNormalization(), Dropout(p), Dense(10, activation='softmax') ] p=0.8 bn_model = Sequential(get_bn_da_layers(p)) bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy']) Explanation: Based on some experiments the previous model works well, with bigger dense layers. End of explanation bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=1, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.01 bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=1e-4 bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4, validation_data=(conv_val_feat, val_labels)) Explanation: Now we can train the model as usual, with pre-computed augmented data. End of explanation bn_model.save_weights(path + 'models/da_conv8_1.h5') bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=1, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.01 bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=4, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=1e-4 bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=4, validation_data=(conv_val_feat, val_labels)) bn_model.save_weights(path + 'models/conv8_bn_1.h5') Explanation: Looks good - let's save those weights. End of explanation val_pseudo = bn_model.predict(conv_val_feat, batch_size=batch_size) Explanation: Pseudo-Labeling We're going to try using a combination of psudeo labeling and knowledge distillation to allow us to use unlabeled data (ie: do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabled data, so that we can see that it is working without using the test set. At a layer date we'll try using the test set. To do this, we can simply calculate the predictions of our model... 
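(A quick aside on the "distillation" part, using only names defined above: val_pseudo holds the soft predicted probabilities, and it's keeping them soft — rather than collapsing them to one-hot classes — that carries the teacher model's uncertainty into the fine-tuning below.)
# val_pseudo above already holds *soft* pseudo-labels (full class probabilities).
# The harder alternative would collapse them to one-hot labels, e.g.:
hard_pseudo = np.eye(10)[np.argmax(val_pseudo, axis=1)]
# The following cells stick with the soft version.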
End of explanation comb_pseudo = np.concatenate([trn_labels, val_pseudo]) comb_feat = np.concatenate([trn_labels, conv_val_feat]) comb_pseudo = np.concatenate([da_trn_labels, val_pseudo]) comb_feat = np.concatenate([da_conv_feat, conv_val_feat]) Explanation: ...concatenate them with our training labels... End of explanation bn_model.load_weights(path _ + 'models/da_conv8_1.h5') bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=1, validation_data=(conv_val_feat, val_labels)) bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=1e-5 bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4, validation_data=(conv_val_feat, val_labels)) Explanation: ...and fine-tune our model using that data. End of explanation bn_model.save_weights(path + 'models/bn-ps8.h5') Explanation: That's a distinct improvement - even although the validation set isn't very big. This looks encouraging for when we try this on the test set. End of explanation def do_clip(arr, mx): return np.clip(arr, (1 - mx)/9, mx) val_preds = bn_model.predict(conv_val_feat, batch_size=batch_size) keras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.93)).eval() conb_test_feat = conv_model.predict_generator(test_batches, test_batches.n) conv_test_feat = load_array(path + 'results/conv_test_feat.dat') preds = bn_model.predict(conv_test_feat, batch_size=batch_size*2) subm = do_clip(preds, 0.93) subm_name = path + 'results/subm.gz' classes = sorted(batches.class_indices, key=batches.class_indices.get) submission = pd.DataFrame(subm, columns=classes) submission.insert(0, 'img', [a[4:] for a in test_filenames]) # <-- why a[4:]? # submission.insert(0, 'img', [f[8:] for f in test_filenames]) submission.head() submission.to_csv(subm_name, index=False, compression='gzip') FileLink(subm_name) Explanation: Submit We'll find a good clipping amount using the validation set, prior to submitting. End of explanation
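A closing note on the clipping step: Kaggle's multi-class log-loss punishes confident mistakes, so it pays to scan a few clip ceilings on the validation set before settling on 0.93. A small sketch reusing do_clip and the validation arrays defined above:
# Sketch: report validation log-loss for a few candidate clip values.
val_preds = bn_model.predict(conv_val_feat, batch_size=batch_size)
for mx in (0.90, 0.93, 0.95, 0.98):
    loss = keras.metrics.categorical_crossentropy(
        val_labels, do_clip(val_preds, mx)).eval()
    print(mx, loss.mean())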
11,258
Given the following text description, write Python code to implement the functionality described below step by step Description: TFRecord and tf.Example Learning Objectives Understand the TFRecord format for storing data Understand the tf.Example message type Read and Write a TFRecord file Introduction In this notebook, you create, parse, and use the tf.Example message, and then serialize, write, and read tf.Example messages to and from .tfrecord files. To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. This can also be useful for caching any data-preprocessing. Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. The TFRecord format The TFRecord format is a simple format for storing a sequence of binary records. Protocol buffers are a cross-platform, cross-language library for efficient serialization of structured data. Protocol messages are defined by .proto files, these are often the easiest way to understand a message type. The tf.Example message (or protobuf) is a flexible message type that represents a {"string" Step4: Please ignore any incompatibility warnings and errors. tf.Example Data types for tf.Example Fundamentally, a tf.Example is a {"string" Step5: Note Step6: All proto messages can be serialized to a binary-string using the .SerializeToString method Step7: Creating a tf.Example message Suppose you want to create a tf.Example message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the tf.Example message from a single observation will be the same Step9: Each of these features can be coerced into a tf.Example-compatible type using one of _bytes_feature, _float_feature, _int64_feature. You can then create a tf.Example message from these encoded features Step10: For example, suppose you have a single observation from the dataset, [False, 4, bytes('goat'), 0.9876]. You can create and print the tf.Example message for this observation using create_message(). Each single observation will be written as a Features message as per the above. Note that the tf.Example message is just a wrapper around the Features message Step11: You can parse TFRecords using the standard protocol buffer .FromString method To decode the message use the tf.train.Example.FromString method. Step12: TFRecords format details A TFRecord file contains a sequence of records. The file can only be read sequentially. Each record contains a byte-string, for the data-payload, plus the data-length, and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking. Each record is stored in the following formats Step13: Applied to a tuple of arrays, it returns a dataset of tuples Step14: Use the tf.data.Dataset.map method to apply a function to each element of a Dataset. The mapped function must operate in TensorFlow graph mode—it must operate on and return tf.Tensors. A non-tensor function, like serialize_example, can be wrapped with tf.py_function to make it compatible. Using tf.py_function requires to specify the shape and type information that is otherwise unavailable Step15: Apply this function to each element in the dataset Step16: And write them to a TFRecord file Step17: Reading a TFRecord file You can also read the TFRecord file using the tf.data.TFRecordDataset class. 
More information on consuming TFRecord files using tf.data can be found here. Using TFRecordDatasets can be useful for standardizing input data and optimizing performance. Step18: At this point the dataset contains serialized tf.train.Example messages. When iterated over it returns these as scalar string tensors. Use the .take method to only show the first 10 records. Note Step19: These tensors can be parsed using the function below. Note that the feature_description is necessary here because datasets use graph-execution, and need this description to build their shape and type signature Step20: Alternatively, use tf.parse example to parse the whole batch at once. Apply this function to each item in the dataset using the tf.data.Dataset.map method Step21: Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a tf.Tensor, and the numpy element of this tensor displays the value of the feature Step22: Here, the tf.parse_example function unpacks the tf.Example fields into standard tensors. TFRecord files in Python The tf.io module also contains pure-Python functions for reading and writing TFRecord files. Writing a TFRecord file Next, write the 10,000 observations to the file test.tfrecord. Each observation is converted to a tf.Example message, then written to file. You can then verify that the file test.tfrecord has been created Step23: Reading a TFRecord file These serialized tensors can be easily parsed using tf.train.Example.ParseFromString Step24: Walkthrough Step25: Write the TFRecord file As before, encode the features as types compatible with tf.Example. This stores the raw image string feature, as well as the height, width, depth, and arbitrary label feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use 0 for the cat image, and 1 for the bridge image Step26: Notice that all of the features are now stored in the tf.Example message. Next, functionalize the code above and write the example messages to a file named images.tfrecords Step27: Read the TFRecord file You now have the file—images.tfrecords—and can now iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you will need is the raw image string. Extract it using the getters described above, namely example.features.feature['image_raw'].bytes_list.value[0]. You can also use the labels to determine which record is the cat and which one is the bridge Step28: Recover the images from the TFRecord file
Python Code: # Run the chown command to change the ownership of the repository !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst # You can use any Python source file as a module by executing an import statement in some other Python source file. # The import statement combines two operations; it searches for the named module, then it binds the results of that search # to a name in the local scope. #!pip install --upgrade tensorflow==2.5 import tensorflow as tf import numpy as np import IPython.display as display print("TensorFlow version: ",tf.version.VERSION) Explanation: TFRecord and tf.Example Learning Objectives Understand the TFRecord format for storing data Understand the tf.Example message type Read and Write a TFRecord file Introduction In this notebook, you create, parse, and use the tf.Example message, and then serialize, write, and read tf.Example messages to and from .tfrecord files. To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. This can also be useful for caching any data-preprocessing. Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. The TFRecord format The TFRecord format is a simple format for storing a sequence of binary records. Protocol buffers are a cross-platform, cross-language library for efficient serialization of structured data. Protocol messages are defined by .proto files, these are often the easiest way to understand a message type. The tf.Example message (or protobuf) is a flexible message type that represents a {"string": value} mapping. It is designed for use with TensorFlow and is used throughout the higher-level APIs such as TFX. Note: While useful, these structures are optional. There is no need to convert existing code to use TFRecords, unless you are using tf.data and reading data is still the bottleneck to training. See Data Input Pipeline Performance for dataset performance tips. Load necessary libraries We will start by importing the necessary libraries for this lab. End of explanation # TODO 1a # The following functions can be used to convert a value to a type compatible # with tf.Example. def _bytes_feature(value): Returns a bytes_list from a string / byte. if isinstance(value, type(tf.constant(0))): value = value.numpy() # BytesList won't unpack a string from an EagerTensor. return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value])) def _float_feature(value): Returns a float_list from a float / double. return tf.train.Feature(float_list=tf.train.FloatList(value=[value])) def _int64_feature(value): Returns an int64_list from a bool / enum / int / uint. return tf.train.Feature(int64_list=tf.train.Int64List(value=[value])) Explanation: Please ignore any incompatibility warnings and errors. tf.Example Data types for tf.Example Fundamentally, a tf.Example is a {"string": tf.train.Feature} mapping. The tf.train.Feature message type can accept one of the following three types (See the .proto file for reference). 
Most other generic types can be coerced into one of these: tf.train.BytesList (the following types can be coerced) string byte tf.train.FloatList (the following types can be coerced) float (float32) double (float64) tf.train.Int64List (the following types can be coerced) bool enum int32 uint32 int64 uint64 In order to convert a standard TensorFlow type to a tf.Example-compatible tf.train.Feature, you can use the shortcut functions below. Note that each function takes a scalar input value and returns a tf.train.Feature containing one of the three list types above: End of explanation print(_bytes_feature(b'test_string')) print(_bytes_feature(u'test_bytes'.encode('utf-8'))) print(_float_feature(np.exp(1))) print(_int64_feature(True)) print(_int64_feature(1)) Explanation: Note: To stay simple, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use tf.serialize_tensor to convert tensors to binary-strings. Strings are scalars in tensorflow. Use tf.parse_tensor to convert the binary-string back to a tensor. Below are some examples of how these functions work. Note the varying input types and the standardized output types. If the input type for a function does not match one of the coercible types stated above, the function will raise an exception (e.g. _int64_feature(1.0) will error out, since 1.0 is a float, so should be used with the _float_feature function instead): End of explanation # TODO 1b feature = _float_feature(np.exp(1)) # `SerializeToString()` serializes the message and returns it as a string feature.SerializeToString() Explanation: All proto messages can be serialized to a binary-string using the .SerializeToString method: End of explanation # The number of observations in the dataset. n_observations = int(1e4) # Boolean feature, encoded as False or True. feature0 = np.random.choice([False, True], n_observations) # Integer feature, random from 0 to 4. feature1 = np.random.randint(0, 5, n_observations) # String feature strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat']) feature2 = strings[feature1] # Float feature, from a standard normal distribution feature3 = np.random.randn(n_observations) Explanation: Creating a tf.Example message Suppose you want to create a tf.Example message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the tf.Example message from a single observation will be the same: Within each observation, each value needs to be converted to a tf.train.Feature containing one of the 3 compatible types, using one of the functions above. You create a map (dictionary) from the feature name string to the encoded feature value produced in #1. The map produced in step 2 is converted to a Features message. In this notebook, you will create a dataset using NumPy. This dataset will have 4 features: a boolean feature, False or True with equal probability an integer feature uniformly randomly chosen from [0, 5] a string feature generated from a string table by using the integer feature as an index a float feature from a standard normal distribution Consider a sample consisting of 10,000 independently and identically distributed observations from each of the above distributions: End of explanation def serialize_example(feature0, feature1, feature2, feature3): Creates a tf.Example message ready to be written to a file. # Create a dictionary mapping the feature name to the tf.Example-compatible # data type. 
feature = { 'feature0': _int64_feature(feature0), 'feature1': _int64_feature(feature1), 'feature2': _bytes_feature(feature2), 'feature3': _float_feature(feature3), } # Create a Features message using tf.train.Example. example_proto = tf.train.Example(features=tf.train.Features(feature=feature)) return example_proto.SerializeToString() Explanation: Each of these features can be coerced into a tf.Example-compatible type using one of _bytes_feature, _float_feature, _int64_feature. You can then create a tf.Example message from these encoded features: End of explanation # This is an example observation from the dataset. example_observation = [] serialized_example = serialize_example(False, 4, b'goat', 0.9876) serialized_example Explanation: For example, suppose you have a single observation from the dataset, [False, 4, bytes('goat'), 0.9876]. You can create and print the tf.Example message for this observation using create_message(). Each single observation will be written as a Features message as per the above. Note that the tf.Example message is just a wrapper around the Features message: End of explanation # TODO 1c example_proto = tf.train.Example.FromString(serialized_example) example_proto Explanation: You can parse TFRecords using the standard protocol buffer .FromString method To decode the message use the tf.train.Example.FromString method. End of explanation tf.data.Dataset.from_tensor_slices(feature1) Explanation: TFRecords format details A TFRecord file contains a sequence of records. The file can only be read sequentially. Each record contains a byte-string, for the data-payload, plus the data-length, and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking. Each record is stored in the following formats: uint64 length uint32 masked_crc32_of_length byte data[length] uint32 masked_crc32_of_data The records are concatenated together to produce the file. CRCs are described here, and the mask of a CRC is: masked_crc = ((crc &gt;&gt; 15) | (crc &lt;&lt; 17)) + 0xa282ead8ul Note: There is no requirement to use tf.Example in TFRecord files. tf.Example is just a method of serializing dictionaries to byte-strings. Lines of text, encoded image data, or serialized tensors (using tf.io.serialize_tensor, and tf.io.parse_tensor when loading). See the tf.io module for more options. TFRecord files using tf.data The tf.data module also provides tools for reading and writing data in TensorFlow. Writing a TFRecord file The easiest way to get the data into a dataset is to use the from_tensor_slices method. Applied to an array, it returns a dataset of scalars: End of explanation features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3)) features_dataset # Use `take(1)` to only pull one example from the dataset. for f0,f1,f2,f3 in features_dataset.take(1): print(f0) print(f1) print(f2) print(f3) Explanation: Applied to a tuple of arrays, it returns a dataset of tuples: End of explanation # TODO 2a def tf_serialize_example(f0,f1,f2,f3): tf_string = tf.py_function( serialize_example, (f0,f1,f2,f3), # pass these args to the above function. tf.string) # the return type is `tf.string`. return tf.reshape(tf_string, ()) # The result is a scalar tf_serialize_example(f0,f1,f2,f3) Explanation: Use the tf.data.Dataset.map method to apply a function to each element of a Dataset. The mapped function must operate in TensorFlow graph mode—it must operate on and return tf.Tensors. 
A non-tensor function, like serialize_example, can be wrapped with tf.py_function to make it compatible. Using tf.py_function requires to specify the shape and type information that is otherwise unavailable: End of explanation # TODO 2b # `.map` function maps across the elements of the dataset. serialized_features_dataset = features_dataset.map(tf_serialize_example) serialized_features_dataset def generator(): for features in features_dataset: yield serialize_example(*features) # Create a Dataset whose elements are generated by generator using `.from_generator` function serialized_features_dataset = tf.data.Dataset.from_generator( generator, output_types=tf.string, output_shapes=()) serialized_features_dataset Explanation: Apply this function to each element in the dataset: End of explanation filename = 'test.tfrecord' # `.TFRecordWriter` function writes a dataset to a TFRecord file writer = tf.data.experimental.TFRecordWriter(filename) writer.write(serialized_features_dataset) Explanation: And write them to a TFRecord file: End of explanation # TODO 2c filenames = [filename] raw_dataset = tf.data.TFRecordDataset(filenames) raw_dataset Explanation: Reading a TFRecord file You can also read the TFRecord file using the tf.data.TFRecordDataset class. More information on consuming TFRecord files using tf.data can be found here. Using TFRecordDatasets can be useful for standardizing input data and optimizing performance. End of explanation # Use the `.take` method to pull ten examples from the dataset. for raw_record in raw_dataset.take(10): print(repr(raw_record)) Explanation: At this point the dataset contains serialized tf.train.Example messages. When iterated over it returns these as scalar string tensors. Use the .take method to only show the first 10 records. Note: iterating over a tf.data.Dataset only works with eager execution enabled. End of explanation # Create a description of the features. feature_description = { 'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0), 'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0), 'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''), 'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0), } def _parse_function(example_proto): # Parse the input `tf.Example` proto using the dictionary above. return tf.io.parse_single_example(example_proto, feature_description) Explanation: These tensors can be parsed using the function below. Note that the feature_description is necessary here because datasets use graph-execution, and need this description to build their shape and type signature: End of explanation parsed_dataset = raw_dataset.map(_parse_function) parsed_dataset Explanation: Alternatively, use tf.parse example to parse the whole batch at once. Apply this function to each item in the dataset using the tf.data.Dataset.map method: End of explanation for parsed_record in parsed_dataset.take(10): print(repr(parsed_record)) Explanation: Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a tf.Tensor, and the numpy element of this tensor displays the value of the feature: End of explanation # Write the `tf.Example` observations to the file. 
with tf.io.TFRecordWriter(filename) as writer: for i in range(n_observations): example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i]) writer.write(example) # `du` stands for disk usage and is used to estimate the amount of disk space used by a given file or directory. !du -sh {filename} Explanation: Here, the tf.parse_example function unpacks the tf.Example fields into standard tensors. TFRecord files in Python The tf.io module also contains pure-Python functions for reading and writing TFRecord files. Writing a TFRecord file Next, write the 10,000 observations to the file test.tfrecord. Each observation is converted to a tf.Example message, then written to file. You can then verify that the file test.tfrecord has been created: End of explanation filenames = [filename] raw_dataset = tf.data.TFRecordDataset(filenames) raw_dataset for raw_record in raw_dataset.take(1): example = tf.train.Example() example.ParseFromString(raw_record.numpy()) print(example) Explanation: Reading a TFRecord file These serialized tensors can be easily parsed using tf.train.Example.ParseFromString: End of explanation # Downloads a file from a URL if it not already in the cache using `tf.keras.utils.get_file` function. cat_in_snow = tf.keras.utils.get_file('320px-Felis_catus-cat_on_snow.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg') williamsburg_bridge = tf.keras.utils.get_file('194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg') # Check the image file display.display(display.Image(filename=cat_in_snow)) display.display(display.HTML('Image cc-by: <a "href=https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>')) display.display(display.Image(filename=williamsburg_bridge)) display.display(display.HTML('<a "href=https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>')) Explanation: Walkthrough: Reading and writing image data This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image. This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the image data raw, it can be preprocessed into the TFRecords format, and that can be used in all further processing and modelling. First, let's download this image of a cat in the snow and this photo of the Williamsburg Bridge, NYC under construction. Fetch the images End of explanation image_labels = { cat_in_snow : 0, williamsburg_bridge : 1, } # This is an example, just using the cat image. image_string = open(cat_in_snow, 'rb').read() label = image_labels[cat_in_snow] # Create a dictionary with features that may be relevant. 
def image_example(image_string, label): image_shape = tf.image.decode_jpeg(image_string).shape feature = { 'height': _int64_feature(image_shape[0]), 'width': _int64_feature(image_shape[1]), 'depth': _int64_feature(image_shape[2]), 'label': _int64_feature(label), 'image_raw': _bytes_feature(image_string), } return tf.train.Example(features=tf.train.Features(feature=feature)) for line in str(image_example(image_string, label)).split('\n')[:15]: print(line) print('...') Explanation: Write the TFRecord file As before, encode the features as types compatible with tf.Example. This stores the raw image string feature, as well as the height, width, depth, and arbitrary label feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use 0 for the cat image, and 1 for the bridge image: End of explanation # Write the raw image files to `images.tfrecords`. # First, process the two images into `tf.Example` messages. # Then, write to a `.tfrecords` file. record_file = 'images.tfrecords' with tf.io.TFRecordWriter(record_file) as writer: for filename, label in image_labels.items(): image_string = open(filename, 'rb').read() tf_example = image_example(image_string, label) writer.write(tf_example.SerializeToString()) # `du` stands for disk usage and is used to estimate the amount of disk space used by a given file or directory. !du -sh {record_file} Explanation: Notice that all of the features are now stored in the tf.Example message. Next, functionalize the code above and write the example messages to a file named images.tfrecords: End of explanation raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords') # Create a dictionary describing the features. image_feature_description = { 'height': tf.io.FixedLenFeature([], tf.int64), 'width': tf.io.FixedLenFeature([], tf.int64), 'depth': tf.io.FixedLenFeature([], tf.int64), 'label': tf.io.FixedLenFeature([], tf.int64), 'image_raw': tf.io.FixedLenFeature([], tf.string), } def _parse_image_function(example_proto): # Parse the input tf.Example proto using the dictionary above. return tf.io.parse_single_example(example_proto, image_feature_description) parsed_image_dataset = raw_image_dataset.map(_parse_image_function) parsed_image_dataset Explanation: Read the TFRecord file You now have the file—images.tfrecords—and can now iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you will need is the raw image string. Extract it using the getters described above, namely example.features.feature['image_raw'].bytes_list.value[0]. You can also use the labels to determine which record is the cat and which one is the bridge: End of explanation for image_features in parsed_image_dataset: image_raw = image_features['image_raw'].numpy() display.display(display.Image(data=image_raw)) Explanation: Recover the images from the TFRecord file: End of explanation
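One optional extension to the walkthrough above, shown only as a sketch: TFRecord files can also be written and read with GZIP compression via tf.io.TFRecordOptions, which often shrinks serialized image data noticeably (the exact savings are workload-dependent).
# Sketch: write and read back a GZIP-compressed TFRecord file.
options = tf.io.TFRecordOptions(compression_type='GZIP')
with tf.io.TFRecordWriter('images_gz.tfrecords', options=options) as writer:
    for filename, label in image_labels.items():
        image_string = open(filename, 'rb').read()
        writer.write(image_example(image_string, label).SerializeToString())
compressed_ds = tf.data.TFRecordDataset('images_gz.tfrecords',
                                        compression_type='GZIP')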
11,259
Given the following text description, write Python code to implement the functionality described below step by step Description: Morph volumetric source estimate This example demonstrates how to morph an individual subject's Step1: Setup paths Step2: Compute example data. For reference see ex-inverse-volume. Load data Step3: Get a SourceMorph object for VolSourceEstimate subject_from can typically be inferred from Step4: Apply morph to VolSourceEstimate The morph can be applied to the source estimate data, by giving it as the first argument to the Step5: Convert morphed VolSourceEstimate into NIfTI We can convert our morphed source estimate into a NIfTI volume using Step6: Plot results
Python Code: # Author: Tommy Clausner <[email protected]> # # License: BSD (3-clause) import os import nibabel as nib import mne from mne.datasets import sample, fetch_fsaverage from mne.minimum_norm import apply_inverse, read_inverse_operator from nilearn.plotting import plot_glass_brain print(__doc__) Explanation: Morph volumetric source estimate This example demonstrates how to morph an individual subject's :class:mne.VolSourceEstimate to a common reference space. We achieve this using :class:mne.SourceMorph. Data will be morphed based on an affine transformation and a nonlinear registration method known as Symmetric Diffeomorphic Registration (SDR) by :footcite:AvantsEtAl2008. Transformation is estimated from the subject's anatomical T1 weighted MRI (brain) to FreeSurfer's 'fsaverage' T1 weighted MRI (brain) &lt;https://surfer.nmr.mgh.harvard.edu/fswiki/FsAverage&gt;__. Afterwards the transformation will be applied to the volumetric source estimate. The result will be plotted, showing the fsaverage T1 weighted anatomical MRI, overlaid with the morphed volumetric source estimate. End of explanation sample_dir_raw = sample.data_path() sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample') subjects_dir = os.path.join(sample_dir_raw, 'subjects') fname_evoked = os.path.join(sample_dir, 'sample_audvis-ave.fif') fname_inv = os.path.join(sample_dir, 'sample_audvis-meg-vol-7-meg-inv.fif') fname_t1_fsaverage = os.path.join(subjects_dir, 'fsaverage', 'mri', 'brain.mgz') fetch_fsaverage(subjects_dir) # ensure fsaverage src exists fname_src_fsaverage = subjects_dir + '/fsaverage/bem/fsaverage-vol-5-src.fif' Explanation: Setup paths End of explanation evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0)) inverse_operator = read_inverse_operator(fname_inv) # Apply inverse operator stc = apply_inverse(evoked, inverse_operator, 1.0 / 3.0 ** 2, "dSPM") # To save time stc.crop(0.09, 0.09) Explanation: Compute example data. For reference see ex-inverse-volume. Load data: End of explanation src_fs = mne.read_source_spaces(fname_src_fsaverage) morph = mne.compute_source_morph( inverse_operator['src'], subject_from='sample', subjects_dir=subjects_dir, niter_affine=[10, 10, 5], niter_sdr=[10, 10, 5], # just for speed src_to=src_fs, verbose=True) Explanation: Get a SourceMorph object for VolSourceEstimate subject_from can typically be inferred from :class:src &lt;mne.SourceSpaces&gt;, and subject_to is set to 'fsaverage' by default. subjects_dir can be None when set in the environment. In that case SourceMorph can be initialized taking src as only argument. See :class:mne.SourceMorph for more details. The default parameter setting for zooms will cause the reference volumes to be resliced before computing the transform. A value of '5' would cause the function to reslice to an isotropic voxel size of 5 mm. The higher this value the less accurate but faster the computation will be. The recommended way to use this is to morph to a specific destination source space so that different subject_from morphs will go to the same space.` A standard usage for volumetric data reads: End of explanation stc_fsaverage = morph.apply(stc) Explanation: Apply morph to VolSourceEstimate The morph can be applied to the source estimate data, by giving it as the first argument to the :meth:morph.apply() &lt;mne.SourceMorph.apply&gt; method. 
<div class="alert alert-info"><h4>Note</h4><p>Volumetric morphing is much slower than surface morphing because the volume for each time point is individually resampled and SDR morphed. The :meth:`mne.SourceMorph.compute_vol_morph_mat` method can be used to compute an equivalent sparse matrix representation by computing the transformation for each source point individually. This generally takes a few minutes to compute, but can be :meth:`saved <mne.SourceMorph.save>` to disk and be reused. The resulting sparse matrix operation is very fast (about 400× faster) to :meth:`apply <mne.SourceMorph.apply>`. This approach is more efficient when the number of time points to be morphed exceeds the number of source space points, which is generally in the thousands. This can easily occur when morphing many time points and multiple conditions.</p></div> End of explanation # Create mri-resolution volume of results img_fsaverage = morph.apply(stc, mri_resolution=2, output='nifti1') Explanation: Convert morphed VolSourceEstimate into NIfTI We can convert our morphed source estimate into a NIfTI volume using :meth:morph.apply(..., output='nifti1') &lt;mne.SourceMorph.apply&gt;. End of explanation # Load fsaverage anatomical image t1_fsaverage = nib.load(fname_t1_fsaverage) # Plot glass brain (change to plot_anat to display an overlaid anatomical T1) display = plot_glass_brain(t1_fsaverage, title='subject results to fsaverage', draw_cross=False, annotate=True) # Add functional data as overlay display.add_overlay(img_fsaverage, alpha=0.75) Explanation: Plot results End of explanation
11,260
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook shows how to plot the XRD patterns for the two polymorphs of CsCl ($Pm\overline{3}m$ and $Fm\overline{3}m$). You can also use matgenie.py's diffraction command to plot an XRD pattern from a structure file.
Step1: $\alpha$-CsCl ($Pm\overline{3}m$)
Let's start with the typical $\alpha$ form of CsCl.
Step2: Compare it with the experimental XRD pattern below.
Step3: $\beta$-CsCl ($Fm\overline{3}m$)
Let's now look at the $\beta$ (high-temperature) form of CsCl.
Step4: Compare it with the experimental XRD pattern below.
Python Code: # Set up some imports that we will need from pymatgen import Lattice, Structure from pymatgen.analysis.diffraction.xrd import XRDCalculator from IPython.display import Image, display %matplotlib inline Explanation: Introduction This notebook shows how to plot an XRD plot for the two polymorphs of CsCl ($Pm\overline{3}m$ and $Fm\overline{3}m$). You can also use matgenie.py's diffraction command to plot an XRD pattern from a structure file. End of explanation # Create CsCl structure a = 4.209 #Angstrom latt = Lattice.cubic(a) structure = Structure(latt, ["Cs", "Cl"], [[0, 0, 0], [0.5, 0.5, 0.5]]) c = XRDCalculator() c.show_xrd_plot(structure) Explanation: $\alpha$-CsCl ($Pm\overline{3}m$) Let's start with the typical $\alpha$ form of CsCl. End of explanation display(Image(filename=('./PDF - alpha CsCl.png'))) Explanation: Compare it with the experimental XRD pattern below. End of explanation # Create CsCl structure a = 6.923 #Angstrom latt = Lattice.cubic(a) structure = Structure(latt, ["Cs", "Cs", "Cs", "Cs", "Cl", "Cl", "Cl", "Cl"], [[0, 0, 0], [0.5, 0.5, 0], [0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0.5], [0, 0, 0.5], [0, 0.5, 0], [0.5, 0, 0]]) c.show_xrd_plot(structure) Explanation: $\beta$-CsCl ($Fm\overline{3}m$) Let's now look at the $\beta$ (high-temperature) form of CsCl. End of explanation display(Image(filename=('./PDF - beta CsCl.png'))) Explanation: Compare it with the experimental XRD pattern below. End of explanation
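If you want the peak list itself rather than only the rendered figure — for example to tabulate 2-theta positions against the experimental cards above — XRDCalculator can return the pattern data directly. A brief sketch; the attribute names follow recent pymatgen and may differ slightly in older releases.
# Sketch: pull calculated peak positions, intensities and hkl assignments.
pattern = c.get_pattern(structure)
for two_theta, intensity, hkls in zip(pattern.x, pattern.y, pattern.hkls):
    print(two_theta, intensity, hkls)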
11,261
Given the following text description, write Python code to implement the functionality described below step by step Description: Visualization in Depth With Bokeh For details on bokeh, see http Step1: Objectives Describe how to create interactive visualizations using bokeh Running example and visualization goals How to approach a new system Hover Widgets Study bokeh as a system Function specification - what it provides programmers Design (which has client and server parts) How to Learn a New System Steps - Find an example close to what you want - Create an environment that runs the example - Abstract the key concepts of how it works - Transform the example into what you want Running Example - Biological Data Step2: Desired visualization - Scatterplot of rate vs. yield - Hover shows the evolutionary "line" - Widgets can specify color (and legend) for values of line Step 1 Step3: Step 1a Step4: What colors are possible to use? Check out bokeh.palettes Step5: Exercise Step6: Bokeh tools Tools can be specified and positioned when the Figure is created. The interaction workflow is (a) select a tool (identified by vertical blue line), (b) perform gesture for tool. Step7: Synthesizing Bokeh Concepts (Classes) Figure - Created using the figure() - Controls the size of the plot - Allows other elements to be added - Has properties for title, x-axis label, y-axis label Glyph - Mark that's added to the plot - circle, line, polygon - Created using Figure methods plot.circle(df['rate'], df['yield'], color=color, legend=line) Tool - Provides user interactions with the graph using gestures - Created using a separate constructor ( Adding a Hover Tool Based on our knowledge of Bokeh concepts, is a Tool associated with Figure or Glyph? Which classes will be involved in hovering Step8: Now add ad-hoc data Step9: Exercise
Python Code: from bokeh.plotting import figure, output_file, show, output_notebook, vplot import random import numpy as np import pandas as pd output_notebook() # Use so see output in the Jupyter notebook import bokeh bokeh.__version__ Explanation: Visualization in Depth With Bokeh For details on bokeh, see http://bokeh.pydata.org/en/latest/docs/user_guide.html#userguide Setup End of explanation from IPython.display import Image Image(filename='biological_data.png') df_bio = pd.read_csv("biological_data.csv") df_bio.head() Explanation: Objectives Describe how to create interactive visualizations using bokeh Running example and visualization goals How to approach a new system Hover Widgets Study bokeh as a system Function specification - what it provides programmers Design (which has client and server parts) How to Learn a New System Steps - Find an example close to what you want - Create an environment that runs the example - Abstract the key concepts of how it works - Transform the example into what you want Running Example - Biological Data End of explanation plot = figure(plot_width=400, plot_height=400) plot.circle(df_bio['rate'], df_bio['yield']) plot.xaxis.axis_label = 'rate' plot.yaxis.axis_label = 'yield' show(plot) Explanation: Desired visualization - Scatterplot of rate vs. yield - Hover shows the evolutionary "line" - Widgets can specify color (and legend) for values of line Step 1: Find something close End of explanation # What are the possible colors df_bio['line'].unique() # Generate a plot with a different color for each line colors = {'HA': 'red', 'HR': 'green', 'UA': 'blue', 'WT': 'purple'} plot = figure(plot_width=700, plot_height=800) plot.title.text = 'Phenotypes for evolutionary lines.' for line in list(colors.keys()): df = df_bio[df_bio.line == line] color = colors[line] plot.circle(df['rate'], df['yield'], color=color, legend=line) plot.legend.location = "top_right" show(plot) Explanation: Step 1a: Distinguish "evolutionary lines" by color Let's distinguish the lines with colors. First, how many lines are there? End of explanation import bokeh.palettes as palettes print palettes.__doc__ #palettes.magma(4) Explanation: What colors are possible to use? Check out bokeh.palettes End of explanation # Generate the colors dictionary # Fill this in.... # Plot with the generated palette # Fill this in ... Explanation: Exercise: Handle colors for the plot for an arbitrary number of evolutionary lines. (Hint: construct the colors dictionary using the values of 'line' and a palette.) End of explanation TOOLS = 'box_zoom,box_select,resize,reset' plot = figure(plot_width=200, plot_height=200, title=None, tools=TOOLS) plot.scatter(range(10), range(10)) show(plot) from bokeh.models import HoverTool, BoxSelectTool TOOLS = [HoverTool(), BoxSelectTool()] plot = figure(plot_width=200, plot_height=200, title=None, tools=TOOLS) show(plot) Explanation: Bokeh tools Tools can be specified and positioned when the Figure is created. The interaction workflow is (a) select a tool (identified by vertical blue line), (b) perform gesture for tool. 
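Positioning is controlled on the figure itself — a small sketch (toolbar_location is standard bokeh API; the exact set of available tool names varies between releases):
# Sketch: restrict the toolbar to a few tools and place it below the plot.
p_below = figure(plot_width=200, plot_height=200, tools='pan,box_zoom,reset',
                 toolbar_location='below')
p_below.scatter(range(10), range(10))
show(p_below)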
End of explanation from bokeh.plotting import figure, output_file, show from bokeh.models import HoverTool, BoxSelectTool output_file("toolbar.html") TOOLS = [BoxSelectTool(), HoverTool()] p = figure(plot_width=400, plot_height=400, title=None, tools=TOOLS) p.circle([1, 2, 3, 4, 5], [2, 5, 8, 2, 7], size=10) show(p) Explanation: Synthesizing Bokeh Concepts (Classes) Figure - Created using the figure() - Controls the size of the plot - Allows other elements to be added - Has properties for title, x-axis label, y-axis label Glyph - Mark that's added to the plot - circle, line, polygon - Created using Figure methods plot.circle(df['rate'], df['yield'], color=color, legend=line) Tool - Provides user interactions with the graph using gestures - Created using a separate constructor ( Adding a Hover Tool Based on our knowledge of Bokeh concepts, is a Tool associated with Figure or Glyph? Which classes will be involved in hovering: - Plot & Tool only - Glyph only - Tool and Glyph Start with some examples. First, simple hovering. End of explanation from bokeh.plotting import figure, output_file, show, ColumnDataSource from bokeh.models import HoverTool output_file("toolbar.html") hover = HoverTool( tooltips=[ ("index", "$index"), ("(x,y)", "(@x, @y)"), ("desc", "@desc"), ] ) p = figure(plot_width=400, plot_height=400, tools=[hover], title="Mouse over the dots") source = ColumnDataSource( data={ 'x': [1, 2, 3, 4, 5], 'y': [2, 5, 8, 2, 7], 'desc': ['A', 'b', 'C', 'd', 'E'], } ) p.circle('x', 'y', size=20, source=source) show(p) Explanation: Now add ad-hoc data End of explanation Image(filename='BokehArchitecture.png') Explanation: Exercise: Plot the biological data with colors and a hover that shows the evolutionary line. Bokeh Widgets See widget.py and my_app.py Bokeh Server End of explanation
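For reference, one way the exercise could be approached — a sketch, not the only solution: push the dataframe through a ColumnDataSource so the hover tool can read the 'line' column for every glyph, and reuse the colors dictionary from earlier for the per-line colors.
# Sketch: colored scatter of the biological data with a hover showing 'line'.
source = ColumnDataSource(data=dict(x=df_bio['rate'], y=df_bio['yield'],
                                    line=df_bio['line'],
                                    color=[colors[l] for l in df_bio['line']]))
hover = HoverTool(tooltips=[('line', '@line'), ('(rate, yield)', '(@x, @y)')])
p = figure(plot_width=700, plot_height=500, tools=[hover])
p.circle('x', 'y', color='color', size=8, source=source)
show(p)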
11,262
Given the following text description, write Python code to implement the functionality described below step by step Description: GRID_LRT Testbed Notebook 1. Setting up the Environment The GRID LOFAR TOOLS have several infrastructure requirements. They are as follows Step1: This should give a confirmation of that your LOFAR ASTRON credentials were properly read Step2: Next, we'll use the test srm.txt to show off our staging chops Step3: You can now re-run the cell below to check the current status of your staging request Step4: You can also check the status of the srms two different ways (with srmls and with gfal) Step5: 3. srm lists Step6: The above commands show that you can load a set of srm links into a srmlist object, and it will also hold the LTA location and the OBSID. Each srmlist object can hold only one OBSID and one LTA location, and makes checks on each append Step7: You can create generators that transform all srm links into either gsiftp links (for use with globustools) or http links (for use with wget). Additionally gfal links can be made. These links can be used with the (old) staging scripts as well as to check the status of (sara and poznan) files. Step8: The gsiftp links are used with the globus-url-copy and uberftp -ls tools. The http links can be downloaded with wget the gfal links can be fed into the state_all script which returns the status of the files on the LTA. Finally Step9: This tool can be used to batch create tokens such as in section 4b) 4. Tokens! 4. a) The manual way Next we'll interface with PiCaS and start making tokens for our Observation Step10: We can also manually create a Token with an automatic attachment Step11: We can also list the views and the tokens from each view Step12: You can set all tokens in a view to a Status, say 'locked'. This automatically locks the tokens!! Step13: Finally, you can create your own view. Views collect tokens that satisfy a certain boolean expression (where the token is referenced as 'doc' For example Step14: Now we can delete all tokens in this view easily! Step15: On the login node, you sholdn't lock tokens, that's responsibility of the launcher script. After the jobs finish, you can iterate over the 'error' view and reset the tokens if you wish. This makes re-running failed jobs easy, You just have to re-submit the jdl to the Workload Manager! 4b) The automatic way! When you need to create tokens in bulk, you can do so using a .yaml file and a python dictionary. Now introducing Token Sets Step16: You can use a dictionary to automatically create a set of tokens. Each item in the dictionary will be its own token. The contents of the dict will be attached as a file (We use srm.txt Step17: Now when you look at the database, six tokens exist, each with a respective srm.txt file attached to it. They're in the 'todo' state since they were just created but you can change that with the 'th' object Step18: Now let's delete these guys and try to make more complex tokens!
Python Code: import os import GRID_LRT print(GRID_LRT.__file__) import subprocess from GRID_LRT.get_picas_credentials import picas_cred from GRID_LRT.Staging import stage_all_LTA from GRID_LRT.Staging import state_all from GRID_LRT.Staging import stager_access from GRID_LRT.Staging.srmlist import srmlist from GRID_LRT import token pc=picas_cred() Explanation: GRID_LRT Testbed Notebook 1. Setting up the Environment The GRID LOFAR TOOLS have several infrastructure requirements. They are as follows: ASTRON LOFAR staging credentials PiCaS database access Valid GRID proxy Here, we'll test that all of the above are enabled and work: End of explanation print(pc.user) print(pc.database) Explanation: This should give a confirmation of that your LOFAR ASTRON credentials were properly read: 2017-12-04 17:15:29.097902 stager_access: Parsing user credentials from /home/apmechev/.awe/Environment.cfg 2017-12-04 17:15:29.097973 stager_access: Creating proxy Next, we check that your PiCaS User and Database are set properly. You can also verify your password End of explanation test_srm_file='/home/apmechev/t/GRID_LRT/GRID_LRT/tests/srm_50_sara.txt' os.path.exists(test_srm_file) with open(test_srm_file,'r') as f: file_contents = f.read() print(file_contents.split()[0:3]) stageID=stage_all_LTA.main(test_srm_file) # NOTE! You (oll get two emails every time you do this! print(stageID) Explanation: Next, we'll use the test srm.txt to show off our staging chops: Stage the test srm.txt file. You'll get a StageID that you can use later. 2. Staging files: End of explanation print(stage_all_LTA.get_stage_status(stageID)) #crashes (py2.7?) #The code below can also show you a more detailed status statuses=stager_access.get_progress() print(statuses) statuses=stage_all_LTA.get_stage_status(stageID) ## When the staging completes, your stageID magically disappears from the database # Neat, huh? if not statuses: print("Staging status no longer in LTA Database") #This happens because bad programming else: print("Staging request "+str(stageID)+" has status: "+str(statuses)) Explanation: You can now re-run the cell below to check the current status of your staging request: End of explanation print(state_all.__file__) staged_status = state_all.main(test_srm_file) #Only works for Sara and Poznan files! #You can also supress the printing of statuses staged_status1 = state_all.main(test_srm_file, printout=False) Explanation: You can also check the status of the srms two different ways (with srmls and with gfal) End of explanation test_srm_file='/home/apmechev/t/GRID_LRT/GRID_LRT/tests/srm_50_sara.txt' s_list=srmlist() #Empty list of srms with open(test_srm_file,'r') as f: for i in f.read().split(): s_list.append(i) print(s_list.OBSID) print(s_list.LTA_location) print(len(s_list)) #len works as with a normal list Explanation: 3. srm lists: A dedicated class exists to handle lists of srmfiles. This class is a child of the python 'list' class and thus has all the capabilites of a list with some bells and whistles. It contains as properties the OBSID and LTA location of the files. 
Additionally, it can create generators that convert the srm:// links to gsiftp:// links, as well as staging links (Ones that can be fed into the state_all.py script) End of explanation juelich_srm=str("srm://lofar-srm.fz-juelich.de:8443/pnfs/"+ "fz-juelich.de/data/lofar/ops/projects/lc7_012/583139/L583139_SB000_uv.MS_900c9fcf.tar") try: s_list.append(juelich_srm) except AttributeError as e: print("Should return Different OBSID than previous items:\n"+str(e)) Explanation: The above commands show that you can load a set of srm links into a srmlist object, and it will also hold the LTA location and the OBSID. Each srmlist object can hold only one OBSID and one LTA location, and makes checks on each append: End of explanation gsi_generator=s_list.gsi_links() g_list=[] for i in gsi_generator: g_list.append(i) h_list=[] for i in s_list.http_links(): h_list.append(i) stage_list=[] for i in s_list.gfal_links(): stage_list.append(i) print("four different links for the same file:") print("") print(s_list[0]) print("") print(g_list[-1]) #for some reason list is backwards?? print("") print(h_list[-1]) print("") print(stage_list[-1]) Explanation: You can create generators that transform all srm links into either gsiftp links (for use with globustools) or http links (for use with wget). Additionally gfal links can be made. These links can be used with the (old) staging scripts as well as to check the status of (sara and poznan) files. End of explanation from GRID_LRT.Staging.srmlist import slice_dicts d_10=slice_dicts(s_list.sbn_dict()) print(d_10.keys()) # Will show the 'names' of the chunks of 10 (the starting SB) print("") print("d_10['140'] =") print(d_10['140']) #10 srms here print("") print("d_10['150'] =") print(d_10['150']) #1 srm here print("") print("type(d_10['100']) = "+str(type(d_10['100']))) #the dict values are srmlist() themselves! print("d_10['100'].OBSID = "+d_10['100'].OBSID) d_50=slice_dicts(s_list.sbn_dict(),50) print(d_50.keys()) # Will show the 'names' of the chunks of 50 (the starting SB)\ print("") Explanation: The gsiftp links are used with the globus-url-copy and uberftp -ls tools. The http links can be downloaded with wget the gfal links can be fed into the state_all script which returns the status of the files on the LTA. Finally: If you need to split your srmlist in a set of equally-sized chunks, this can be done with srmlist.slice_dicts. This is useful when creating jobs that run on multiple files at the same time (for example dppconcat, or even losoto steps!) End of explanation uname = os.environ['USER'] th = token.TokenHandler(t_type="jupyter_demo_"+uname, uname=pc.user, pwd=pc.password, dbn='sksp_dev') #Create the overview_view (has the number of todo, done, error, running, [...] tokens) th.add_overview_view() #Add the satus views (By default 'todo', 'locked', 'done', 'error') th.add_status_views() #Manually create a token: manual_keys = {'manual_key':'manual_value','manual_int':1024} man_token_1 = th.create_token(keys=manual_keys, append="manual") #will return the id of the manual token print('manual_token_ID = ' + man_token_1) Explanation: This tool can be used to batch create tokens such as in section 4b) 4. Tokens! 4. a) The manual way Next we'll interface with PiCaS and start making tokens for our Observation: here we need a string to link all the tokens in one Observation. 
We'll use the string 'demo_'+username in the sksp_dev database End of explanation manual_keys = {'manual_key':'manual_value','manual_int':0} man_token_2 = th.create_token(keys=manual_keys, append="manual_with_attach", attach=[open(test_srm_file),'srm_at_token_create.txt']) ##We can also attach files after the token's been created: th.add_attachment(man_token_2, open(test_srm_file), 'srm_added_later.txt') #Double check that both files were attached. Returns a list of filenames: man_2_attachies = th.list_attachments(man_token_2) print("The two attached files are: "+str(man_2_attachies)) # We can also of course download attachments: saved_attach=th.get_attachment(man_token_2,man_2_attachies[0],savename=man_2_attachies[0]) print("") print('The attachemnt '+str(man_2_attachies[0])+" was saved at "+saved_attach) assert(os.path.exists(saved_attach)) os.remove(saved_attach) assert(not os.path.exists(saved_attach)) Explanation: We can also manually create a Token with an automatic attachment: End of explanation print(th.views.keys()) #the views member of th is a dictionary of views locked_tokens = th.list_tokens_from_view('locked') print(type(locked_tokens)) #It's not a list!! print("There are "+str(len(locked_tokens))+" 'locked' tokens") todo_tokens = th.list_tokens_from_view('todo') # It's not a list because it procedurally pings CouchDB, ~generator #Use the help below to browse how it works!! ##help(todo_tokens) print("There are "+str(len(todo_tokens))+" 'todo' tokens") print("") print("They are:") for i in todo_tokens: print("CouchDB token keys: "+str(i.keys()),"Token ID: "+i.id) Explanation: We can also list the views and the tokens from each view: End of explanation print('Lock status of the token: '+str(th.database[man_token_2]['lock'])+".") print('Scrub count of the token: '+str(th.database[man_token_2]['scrub_count'])+".") print("There are "+str(len(th.list_tokens_from_view('todo')))+" 'todo' tokens") print("There are "+str(len(th.list_tokens_from_view('locked')))+" 'locked' tokens") print("") print("Setting status to locked for all todo tokens") th.set_view_to_status(view_name='todo',status='locked') #Sets all todo tokens to "locked" todo_tokens = th.list_tokens_from_view('todo') print("") print("There are "+str(len(todo_tokens))+" 'todo' tokens") ### No more todo tokens! locked_tokens = th.list_tokens_from_view('locked') print("There are "+str(len(locked_tokens))+" 'locked' tokens") ##Now they're all locked! print('Lock status of the token: '+str(th.database[man_token_2]['lock'])+".") #You can reset all tokens from a view back to 'todo'. This increments the scrub_count field resetted_tokens=th.reset_tokens('locked') print("") print("Resetting the locked tokens") print('Scrub count of the token: '+str(th.database[man_token_2]['scrub_count'])+".") print("There are "+str(len(th.list_tokens_from_view('todo')))+" 'todo' tokens") print("There are "+str(len(th.list_tokens_from_view('locked')))+" 'locked' tokens") Explanation: You can set all tokens in a view to a Status, say 'locked'. This automatically locks the tokens!! End of explanation th.add_view(v_name="demo_view",cond='doc.manual_int == 0 ') #Only one of our tokens has manual_int==0 print(th.views['demo_view']) #new view is here! assert(len(th.list_tokens_from_view('demo_view'))==1) print("There is "+str(len(th.list_tokens_from_view('demo_view')))+" tokens in the demo_view") #Creating 2 more tokens for this view. If append isn't changed, the id is the same, so #new tokens won't be created! 
But you can imagine a loop will make creation easy right? _ = th.create_token(keys=manual_keys, append="manual_with_attach_1", attach=[open(test_srm_file),'srm_at_token_create.txt']) _ = th.create_token(keys=manual_keys, append="manual_with_attach_2", attach=[open(test_srm_file),'srm_at_token_create.txt']) print("There are "+str(len(th.list_tokens_from_view('demo_view')))+" tokens in the demo_view") assert(len(th.list_tokens_from_view('demo_view'))==3) Explanation: Finally, you can create your own view. Views collect tokens that satisfy a certain boolean expression (where the token is referenced as 'doc' For example: The todo view satsifies: 'doc.lock == 0 &amp;&amp; doc.done == 0 ' The locked view satisfies: 'doc.lock &gt; 0 &amp;&amp; doc.done == 0 ' The done view satsifies: 'doc.status == "done" ' End of explanation th.delete_tokens('demo_view') assert(len(th.list_tokens_from_view('demo_view'))==0) print("There are "+str(len(th.list_tokens_from_view('demo_view')))+" tokens in the demo_view") # You can also clear all the views from the database th.clear_all_views() # And you can remove the head document (will no longer be visible in the dropdown) th.purge_tokens() Explanation: Now we can delete all tokens in this view easily! End of explanation th = token.TokenHandler(t_type="jupyter_demo_"+uname, uname=pc.user, pwd=pc.password, dbn='sksp_unittest') th.add_overview_view() th.add_status_views() #Re-creating the documents we purged above ts=token.TokenSet(th=th) #You need a Token_handler object to create tokensets (token sets can only be of one 'type') #(TokenHandler manages the authentification, views and token_type selection) config_file='/home/apmechev/t/GRID_LRT/config/tutorial.cfg' #This config file contains Token and sandbox creation instructions #Remember this guy from Step 3? We'll now use him to create tokens automatically print(type(d_10)) d_10.keys() d_10['140'] Explanation: On the login node, you sholdn't lock tokens, that's responsibility of the launcher script. After the jobs finish, you can iterate over the 'error' view and reset the tokens if you wish. This makes re-running failed jobs easy, You just have to re-submit the jdl to the Workload Manager! 4b) The automatic way! When you need to create tokens in bulk, you can do so using a .yaml file and a python dictionary. Now introducing Token Sets: Just an easy way to create tokens from a dictionary using a yaml file! End of explanation ts.create_dict_tokens(iterable=d_10, key_name='start_SB',file_upload='srm.txt') #This will create tokens, making the iterable key as 'start_SB' field of each token Explanation: You can use a dictionary to automatically create a set of tokens. Each item in the dictionary will be its own token. The contents of the dict will be attached as a file (We use srm.txt: It's the list of links to download on the worker node) End of explanation for t in th.list_tokens_from_view('todo'): print("Token ID is "+t['key']+" AND start_SB Value is " +th.database[t['key']]['start_SB']) tokens=ts.tokens print(th.database[ts.tokens[0]]) Explanation: Now when you look at the database, six tokens exist, each with a respective srm.txt file attached to it. They're in the 'todo' state since they were just created but you can change that with the 'th' object End of explanation th.delete_tokens('error') ts=token.TokenSet(th=th, tok_config=config_file) #Now we'll use the YAML configuration file to create more fields! 
#Slicing our list of srms to have one token per Dict Item d_1=slice_dicts(s_list.sbn_dict(),1) ts.create_dict_tokens(iterable=d_1, key_name='STARTSB',file_upload='srm.txt') #Let's make some more Tokens this time #You can also see which tokens are in the database: ts.tokens[0:10] #And look at each one's fields individually. Notice there's more fields than before! print(ts.th.database[ts.tokens[33]]) ts.add_keys_to_list('OBSID',s_list.OBSID) ts.add_keys_to_list('PIPELINE','tutorial1') #ts.add_attach_to_list('/home/apmechev/test/GRID_LRT/parsets/Pre-Facet-Calibrator-1.parset',name='Pre-Facet-Calibrator-1.parset') #These come from the config file #Let's delete these tokens for now ts.th.delete_tokens('todo') Explanation: Now let's delete these guys and try to make more complex tokens! End of explanation
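As a side note on the chunking used above: the grouping that slice_dicts performs (blocks of files keyed by the starting subband) can be sketched without any GRID infrastructure. The toy function below is only an approximation of that behaviour, assuming every filename carries an _SBnnn_ tag; the real implementation lives in GRID_LRT.Staging.srmlist and works on srmlist objects rather than plain filenames.

import re
from collections import OrderedDict

def toy_slice_dicts(filenames, stride=10):
    """Simplified stand-in for GRID_LRT.Staging.srmlist.slice_dicts (illustration only)."""
    def subband(name):
        return int(re.search(r"_SB(\d+)_", name).group(1))
    ordered = sorted(filenames, key=subband)
    chunks = OrderedDict()
    for i in range(0, len(ordered), stride):
        block = ordered[i:i + stride]
        chunks[str(subband(block[0]))] = block  # key is the starting subband, like d_10 above
    return chunks

# tiny synthetic example: 25 subbands grouped into blocks of 10
names = ["L123456_SB{:03d}_uv.MS".format(n) for n in range(100, 125)]
groups = toy_slice_dicts(names, stride=10)
print(list(groups.keys()))   # ['100', '110', '120']
print(len(groups['120']))    # the last block holds the remaining 5 files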
Given the following text description, write Python code to implement the functionality described below step by step Description: Self-Driving Car Engineer Nanodegree Deep Learning Project Step1: Step 1 Step2: Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include Step3: Step 2 Step4: Model Architecture Train, Validate and Test the Model Step5: A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting. Step6: Evaluate the Model Evaluate the performance of the model on the test set. Step7: Step 3 Step8: Predict the Sign Type for Each Image Step9: Analyze Performance Step10: Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image. tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids. Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability Step11: Project Writeup Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file. Note
Python Code: # Load pickled data import pickle import cv2 # for grayscale and normalize # TODO: Fill this in based on where you saved the training and testing data training_file ='traffic-signs-data/train.p' validation_file='traffic-signs-data/valid.p' testing_file = 'traffic-signs-data/test.p' with open(training_file, mode='rb') as f: train = pickle.load(f) with open(validation_file, mode='rb') as f: valid = pickle.load(f) with open(testing_file, mode='rb') as f: test = pickle.load(f) X_trainLd, y_trainLd = train['features'], train['labels'] X_validLd, y_validLd = valid['features'], valid['labels'] X_test, y_test = test['features'], test['labels'] #X_trainLd=X_trainLd.astype(float) #y_trainLd=y_trainLd.astype(float) #X_validLd=X_validLd.astype(float) #y_validLd=y_validLd.astype(float) print("Xtrain shape : "+str(X_trainLd.shape)) print("ytrain shape : "+str(y_trainLd.shape)) print("ytrain shape : "+str(y_trainLd.shape)) print("label : "+str(y_trainLd[22])) from sklearn.model_selection import train_test_split Explanation: Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition Classifier In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project. The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file. Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Step 0: Load The Data End of explanation ### Replace each question mark with the appropriate value. ### Use python, pandas or numpy methods rather than hard coding the results import numpy as np # TODO: Number of training examples n_train = X_trainLd.shape[0] # TODO: Number of validation examples n_validation = X_validLd.shape[0] # TODO: Number of testing examples. n_test = X_test.shape[0] # TODO: What's the shape of an traffic sign image? image_shape = X_trainLd.shape[1:4] # TODO: How many unique classes/labels there are in the dataset. 
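# (Added note, not part of the original template): the class count can also be derived directly
# from the labels, e.g. n_classes = len(np.unique(y_trainLd)), which should agree with the 43
# sign classes listed in signnames.csv rather than relying on a hard-coded value.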
#n_classes = n_train+n_validation+n_test -- this doesn't seem correct 43 in excel file n_classes = 43 print("Number of training examples =", n_train) print("Number of testing examples =", n_test) print("Image data shape =", image_shape) print("Number of classes =", n_classes) Explanation: Step 1: Dataset Summary & Exploration The pickled data is a dictionary with 4 key/value pairs: 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels). 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id. 'sizes' is a list containing tuples, (width, height) representing the original width and height the image. 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas End of explanation import random ### Data exploration visualization code goes here. ### Feel free to use as many code cells as needed. import matplotlib.pyplot as plt # Visualizations will be shown in the notebook. %matplotlib inline index = random.randint(0, len(X_trainLd)) image = X_trainLd[100] #squeeze : Remove single-dimensional entries from the shape of an array. image = image.astype(float) #normalise def normit(img): size = img.shape[2] imagenorm = cv2.normalize(img, dst =image_shape, alpha=0, beta=25, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8UC1) image = img.astype(float) norm = (image-128.0)/128.0 return norm temp = normit(image) plt.figure(figsize=(1,1)) plt.imshow(temp.squeeze()) Explanation: Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python. NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others? End of explanation ### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include ### converting to grayscale, etc. ### Feel free to use as many code cells as needed. 
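### (Added note, not part of the original template): the cell below makes float copies of the
### train/validation/test images and passes each one through the normit() helper defined above,
### which returns the (pixel - 128.0)/128.0 scaling suggested in the project notes; the
### cv2.normalize call inside normit() is evaluated but its result is never used.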
import cv2 from sklearn.utils import shuffle print("Test") ## xtrain grey_X_train = np.zeros(shape=[X_trainLd.shape[0],X_trainLd.shape[1],X_trainLd.shape[2]]) norm_X_train = np.zeros(shape=[X_trainLd.shape[0],X_trainLd.shape[1],X_trainLd.shape[2],3]) norm_X_train = norm_X_train.astype(float) X_train, y_train = shuffle(X_trainLd, y_trainLd) shuff_X_train, shuff_y_train =X_train, y_train X_valid, y_valid = X_validLd, y_validLd i=0 for p in X_train: t = normit(p) norm_X_train[i] = t i=i+1 print("after normalise") ##validate norm_X_valid = np.zeros(shape=[X_validLd.shape[0],X_validLd.shape[1],X_validLd.shape[2],3]) norm_X_valid=norm_X_valid.astype(float) i=0 for v in X_valid: tv = normit(v) #tempv = tv.reshape(32,32,1) norm_X_valid[i] = tv #print(norm_X_valid[i]) i=i+1 ##test norm_X_test = np.zeros(shape=[X_test.shape[0],X_test.shape[1],X_test.shape[2],3]) norm_X_test=X_test.astype(float) i=0 for z in X_test: tt = normit(z) norm_X_test[i] = tt i=i+1 print("fin") image22 = norm_X_train[110] #squeeze : Remove single-dimensional entries from the shape of an array imageb4 = X_train[110] imaget=norm_X_test[100] plt.figure(figsize=(1,1)) plt.imshow(imaget.squeeze()) Explanation: Step 2: Design and Test a Model Architecture Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset. The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem: Neural network architecture (is the network over or underfitting?) Play around preprocessing techniques (normalization, rgb to grayscale, etc) Number of examples per label (some have more than others). Generate fake data. Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. End of explanation ### Define your architecture here. ### Feel free to use as many code cells as needed. import tensorflow as tf EPOCHS = 25 BATCH_SIZE = 128 #SMcM change to 256 from 128 #X_train=X_train.astype(float) X_train=norm_X_train #print(X_train[20]) #X_train=shuff_X_train #X_valid=norm_X_valid from tensorflow.contrib.layers import flatten def LeNet(x): # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer mu = 0.0 sigma = 0.1 #SMcM changed from 0.1 to 0.2 # SOLUTION: Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6. 
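    # (Added note): with 'VALID' padding and stride 1 the convolution output size is
    # (input - filter)/stride + 1 = (32 - 5)/1 + 1 = 28, which is where the 28x28x6 shape above
    # comes from; the 2x2 max pooling a few lines down then halves it to 14x14x6.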
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5,3, 6), mean = mu, stddev = sigma)) #SMcM depth cahnged to 3 conv1_b = tf.Variable(tf.zeros(6)) conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b #try same should be better (padding) # SOLUTION: Activation. conv1 = tf.nn.relu(conv1) #conv1 = tf.nn.relu(conv1) #SMcM add an extra relu # SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6. conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') # SOLUTION: Layer 2: Convolutional. Output = 10x10x16. conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma)) conv2_b = tf.Variable(tf.zeros(16)) conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b # SOLUTION: Activation. conv2 = tf.nn.relu(conv2) # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16. conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') # SOLUTION: Flatten. Input = 5x5x16. Output = 400. fc0 = flatten(conv2) # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120. fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma)) fc1_b = tf.Variable(tf.zeros(120)) fc1 = tf.matmul(fc0, fc1_W) + fc1_b # SOLUTION: Activation. fc1 = tf.nn.relu(fc1) # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84. fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma)) fc2_b = tf.Variable(tf.zeros(84)) fc2 = tf.matmul(fc1, fc2_W) + fc2_b # SOLUTION: Activation. fc2 = tf.nn.relu(fc2) # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43. fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma)) fc3_b = tf.Variable(tf.zeros(43)) logits = tf.matmul(fc2, fc3_W) + fc3_b return logits print("model") image22 = X_train[110] #squeeze : Remove single-dimensional entries from the shape of an array print(norm_X_train.shape) print(X_train.shape) plt.figure(figsize=(1,1)) plt.imshow(image22.squeeze()) #print(image22) Explanation: Model Architecture Train, Validate and Test the Model End of explanation ### Train your model here. ### Calculate and report the accuracy on the training and validation set. ### Once a final model architecture is selected, ### the accuracy on the test set should be calculated and reported as well. ### Feel free to use as many code cells as needed. 
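### (Added note, not part of the original template): the cell below builds the TF1-style training
### graph (input/label placeholders, one-hot encoding of the 43 classes, a softmax cross-entropy
### loss over the LeNet logits and an Adam optimizer) and then runs mini-batch training for up to
### EPOCHS epochs, stopping early once the validation accuracy exceeds 0.944.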
#Features and Labels x = tf.placeholder(tf.float32, (None, 32, 32, 3)) y = tf.placeholder(tf.int32, (None)) one_hot_y = tf.one_hot(y, 43) print("start") #Training Pipeline rate = 0.0025 # SMCM decreased rate to .0008 from 0.001 logits = LeNet(x) cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits) loss_operation = tf.reduce_mean(cross_entropy) optimizer = tf.train.AdamOptimizer(learning_rate = rate) training_operation = optimizer.minimize(loss_operation) correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1)) accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) saver = tf.train.Saver() #Model Evaluation def evaluate(X_data, y_data): num_examples = len(X_data) total_accuracy = 0 sess = tf.get_default_session() for offset in range(0, num_examples, BATCH_SIZE): batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE] accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y}) total_accuracy += (accuracy * len(batch_x)) return total_accuracy / num_examples #Train the Model with tf.Session() as sess: sess.run(tf.global_variables_initializer()) num_examples = len(X_train) print("Training...") print() for i in range(EPOCHS): X_train, y_train = shuffle(X_train, y_train) for offset in range(0, num_examples, BATCH_SIZE): end = offset + BATCH_SIZE batch_x, batch_y = X_train[offset:end], y_train[offset:end] sess.run(training_operation, feed_dict={x: batch_x, y: batch_y}) validation_accuracy = evaluate(norm_X_valid, y_valid) print("EPOCH {} ...".format(i+1)) print("Validation Accuracy = {:.3f}".format(validation_accuracy)) print() if (validation_accuracy) > .944 : break saver.save(sess, './lenet') print("Model saved") Explanation: A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting. End of explanation with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) test_accuracy = evaluate(norm_X_test, y_test) print("Test Accuracy = {:.3f}".format(test_accuracy)) Explanation: Evaluate the Model evaluate the performance of the model on the test set. End of explanation ### Load the images and plot them here. ### Feel free to use as many code cells as needed. ### Load the images and plot them here. ### Feel free to use as many code cells as needed. import numpy as np import random ### Data exploration visualization code goes here. ### Feel free to use as many code cells as needed. import matplotlib.pyplot as plt import os import tensorflow as tf import cv2 #os.listdir("path") file5 =['GTSRB5/00015b.bmp','GTSRB5/02329b.bmp','GTSRB5/03363b.bmp','GTSRB5/05312b.bmp','GTSRB5/03978b.bmp'] #img5 = np.empty(5, dtype=object) img5=[] img5 = np.zeros(shape=[5, 32, 32, 3]) img5 = img5.astype(float) label5 = [2,2,34,1,35] i=0 # Load an color image in grayscale for file in file5: temp = cv2.imread(file,3) img5[i] =temp i=i+1 print(img5[1].shape) #file1 = cv2.imread(file1,3) #print(file1) #print(img5.shape) temp=img5[1] i=0 for p in img5: t = normit(p) img5[i] = t i=i+1 for img in img5: plt.figure(figsize=(1,1)) plt.imshow(img.squeeze()) Explanation: Step 3: Test a Model on New Images To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type. 
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images End of explanation ### Run the predictions here and use the model to output the prediction for each image. ### Make sure to pre-process the images with the same pre-processing pipeline used earlier. ### Feel free to use as many code cells as needed. with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) #test_accuracy = evaluate(img5, label5) #print("Test Accuracy = {:.3f}".format(test_accuracy)) lmax = tf.argmax(logits, 1) sm = sess.run(lmax,feed_dict={x: img5}) print("The Predictions are") print ( sm) print("The Labels are :") print(label5) print("Guide 3 = Speed limit (60km/h) 35 = Ahead Only 17 = No entry 4 =Speed limit (70km/h) 9=No passing") Explanation: Predict the Sign Type for Each Image End of explanation ### Calculate the accuracy for these 5 new images. ### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images. with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) test_accuracy = evaluate(img5, label5)*100 print("Test Accuracy = {:.2f}%".format(test_accuracy)) Explanation: Analyze Performance End of explanation ### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. ### Feel free to use as many code cells as needed. with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) softmax = tf.nn.softmax(logits) top = tf.nn.top_k(softmax,5) sm = sess.run(top,feed_dict={x:img5}) print (sm) Explanation: Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image. tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids. Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tk.nn.top_k is used to choose the three classes with the highest probability: ``` (5, 6) array a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]]) ``` Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces: TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32)) Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices. 
End of explanation ### Visualize your network's feature maps here. ### Feel free to use as many code cells as needed. # image_input: the test image being fed into the network to produce the feature maps # tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer # activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output # plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1): # Here make sure to preprocess your image_input in a way your network expects # with size, normalization, ect if needed # image_input = # Note: x should be the same name as your network's tensorflow data placeholder variable # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function activation = tf_activation.eval(session=sess,feed_dict={x : image_input}) featuremaps = activation.shape[3] plt.figure(plt_num, figsize=(15,15)) for featuremap in range(featuremaps): plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number if activation_min != -1 & activation_max != -1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray") elif activation_max != -1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray") elif activation_min !=-1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray") else: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray") Explanation: Project Writeup Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file. Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission. Step 4 (Optional): Visualize the Neural Network's State with Test Images This Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. 
The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the LeNet lab's feature maps looked like for it's second convolutional layer you could enter conv2 as the tf_activation variable. For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. <figure> <img src="visualize_cnn.png" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your output should look something like this (above)</p> </figcaption> </figure> <p></p> End of explanation
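One practical caveat with the helper above: conv1 and conv2 are local variables inside LeNet, so they are not reachable as tf_activation without modifying the model function. A minimal, self-contained sketch of the same idea (run one convolution on a random image and plot its feature maps) is shown below; it uses an arbitrary untrained filter bank, not the trained network, and is only meant to illustrate the mechanics.

import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt

# a tiny one-layer graph just to have activations to look at
demo_x = tf.placeholder(tf.float32, (None, 32, 32, 3))
demo_W = tf.Variable(tf.truncated_normal((5, 5, 3, 6), stddev=0.1))
demo_conv = tf.nn.relu(tf.nn.conv2d(demo_x, demo_W, strides=[1, 1, 1, 1], padding='VALID'))

fake_image = np.random.rand(1, 32, 32, 3).astype(np.float32)

with tf.Session() as demo_sess:
    demo_sess.run(tf.global_variables_initializer())
    maps = demo_sess.run(demo_conv, feed_dict={demo_x: fake_image})

plt.figure(figsize=(8, 2))
for k in range(maps.shape[3]):
    plt.subplot(1, maps.shape[3], k + 1)
    plt.imshow(maps[0, :, :, k], cmap="gray")
    plt.title("map {}".format(k))
plt.show()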
Given the following text description, write Python code to implement the functionality described below step by step Description: Plotting the cloudsat groundtrack on a modis raster This notebook is my solution to Assignment 16, satellite groundtrack assigned on Day 26 Environment requires Step1: 1. Read in the groundtrack data Step2: 2. use the modis corner lats and lons to clip the cloudsat lats and lons to the same region Step3: Find all the cloudsat points that are between the min/max by constructing a logical True/False vector. As with matlab, this vector can be used as an index to pick out those points at the indices where it evaluates to True. Also as in matlab, if a logical vector is passed to a numpy function like sum, the True values are cast to 1 and the false values are cast to 0, so summing a logical vector tells you the number of true values. Step4: 3. Reproject MYD021KM channel 1 to a lambert azimuthal projection If we are on OSX we can run the a301utils.modis_to_h5 script to turn the h5 level 1b files into a pyresample projected file for channel 1 by running python using the os.system command. If we are on windows, a301utils.modis_to_h5 needs to be run in the pyre environment in a separate shell Step5: read in the chan1; read in the basemap argument string and turn it into a dictionary of basemap arguments using json.loads Step6: write the groundtrack out for future use
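Before the notebook code that follows, here is a tiny standalone illustration of the logical True/False clipping idea from steps 2 and 3, using made-up coordinates rather than the real CloudSat track:

import numpy as np

# made-up ground-track coordinates and a bounding box
lons = np.array([-130.0, -125.5, -122.0, -118.3, -115.0])
lats = np.array([47.0, 48.5, 50.1, 51.7, 53.2])
min_lon, max_lon, min_lat, max_lat = -126.0, -117.0, 48.0, 52.0

lon_hit = np.logical_and(lons > min_lon, lons < max_lon)
lat_hit = np.logical_and(lats > min_lat, lats < max_lat)
in_box = np.logical_and(lon_hit, lat_hit)

# True counts as 1 when summed, so this is the number of points inside the box
print("{} of {} points fall inside the box".format(np.sum(in_box), len(in_box)))
print(lons[in_box], lats[in_box])  # logical indexing keeps only those points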
Python Code: from a301utils.a301_readfile import download from a301lib.cloudsat import get_geo import glob import os from pathlib import Path import sys import json import numpy as np import h5py from matplotlib import pyplot as plt from mpl_toolkits.basemap import Basemap rad_file='MYD021KM.A2006303.2220.006.2012078143305.h5' geom_file='MYD03.A2006303.2220.006.2012078135515.h5' lidar_file='2006303212128_02702_CS_2B-GEOPROF-LIDAR_GRANULE_P2_R04_E02.h5' download(rad_file) download(geom_file) download(lidar_file) Explanation: Plotting the cloudsat groundtrack on a modis raster This notebook is my solution to Assignment 16, satellite groundtrack assigned on Day 26 Environment requires: h5py, matplotlib, pyresample, requests, basemap End of explanation lats,lons,date_times,prof_times,dem_elevation=get_geo(lidar_file) Explanation: 1. Read in the groundtrack data End of explanation from a301utils.modismeta_read import parseMeta metadict=parseMeta(rad_file) corner_keys = ['min_lon','max_lon','min_lat','max_lat'] min_lon,max_lon,min_lat,max_lat=[metadict[key] for key in corner_keys] Explanation: 2. use the modis corner lats and lons to clip the cloudsat lats and lons to the same region End of explanation lon_hit=np.logical_and(lons>min_lon,lons<max_lon) lat_hit = np.logical_and(lats>min_lat,lats< max_lat) in_box=np.logical_and(lon_hit,lat_hit) print("ground track has {} points, we've selected {}".format(len(lon_hit),np.sum(in_box)) ) box_lons,box_lats=lons[in_box],lats[in_box] Explanation: Find all the cloudsat points that are between the min/max by construting a logical True/False vector. As with matlab, this vector can be used as an index to pick out those points at the indices where it evaluates to True. Also as in matlab if a logical vector is passed to a numpy function like sum, the True values are cast to 1 and the false values are cast to 0, so summing a logical vector tells you the number of true values. End of explanation from a301lib.modis_reproject import make_projectname reproject_name=make_projectname(rad_file) reproject_path = Path(reproject_name) if reproject_path.exists(): print('using reprojected h5 file {}'.format(reproject_name)) else: #need to create reproject.h5 for channel 1 channels='-c 1 4 3 31' template='python -m a301utils.modis_to_h5 {} {} {}' command=template.format(rad_file,geom_file,channels) if 'win' in sys.platform[:3]: print('platform is {}, need to run modis_to_h5.py in new environment' .format(sys.platform)) print('open an msys terminal and run \n{}\n'.format(command)) else: #osx, so presample is available print('running \n{}\n'.format(command)) out=os.system(command) the_size=reproject_path.stat().st_size print('generated reproject file for 4 channels, size is {} bytes'.format(the_size)) Explanation: 3. Reproject MYD021KM channel 1 to a lambert azimuthal projection If we are on OSX we can run the a301utils.modis_to_h5 script to turn the h5 level 1b files into a pyresample projected file for channel 1 by running python using the os.system command If we are on windows, a201utils.modis_to_h5 needs to be run in the pyre environment in a separate shell End of explanation with h5py.File(reproject_name,'r') as h5_file: basemap_args=json.loads(h5_file.attrs['basemap_args']) chan1=h5_file['channels']['1'][...] 
geo_string = h5_file.attrs['geotiff_args'] geotiff_args = json.loads(geo_string) print('basemap_args: \n{}\n'.format(basemap_args)) print('geotiff_args: \n{}\n'.format(geotiff_args)) %matplotlib inline from matplotlib import cm from matplotlib.colors import Normalize cmap=cm.autumn #see http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps cmap.set_over('w') cmap.set_under('b',alpha=0.2) cmap.set_bad('0.75') #75% grey plt.close('all') fig,ax = plt.subplots(1,1,figsize=(14,14)) # # set up the Basemap object # basemap_args['ax']=ax basemap_args['resolution']='c' bmap = Basemap(**basemap_args) # # transform the ground track lons/lats to x/y # cloudsatx,cloudsaty=bmap(box_lons,box_lats) # # plot as blue circles # bmap.plot(cloudsatx,cloudsaty,'bo') # # now plot channel 1 # num_meridians=180 num_parallels = 90 col = bmap.imshow(chan1, origin='upper',cmap=cmap, vmin=0, vmax=0.4) lon_sep, lat_sep = 5,5 parallels = np.arange(-90, 90, lat_sep) meridians = np.arange(0, 360, lon_sep) bmap.drawparallels(parallels, labels=[1, 0, 0, 0], fontsize=10, latmax=90) bmap.drawmeridians(meridians, labels=[0, 0, 0, 1], fontsize=10, latmax=90) bmap.drawcoastlines() colorbar=fig.colorbar(col, shrink=0.5, pad=0.05,extend='both') colorbar.set_label('channel1 reflectivity',rotation=-90,verticalalignment='bottom') _=ax.set(title='vancouver') Explanation: read in the chan1 read in the basemap argument string and turn it into a dictionary of basemap arguments using json.loads End of explanation groundtrack_name = reproject_name.replace('reproject','groundtrack') print('writing groundtrack to {}'.format(groundtrack_name)) box_times=date_times[in_box] # # h5 files can't store dates, but they can store floating point # seconds since 1970, which is called POSIX timestamp # timestamps = [item.timestamp() for item in box_times] timestamps= np.array(timestamps) with h5py.File(groundtrack_name,'w') as groundfile: groundfile.attrs['cloudsat_filename']=lidar_file groundfile.attrs['modis_filename']=rad_file groundfile.attrs['reproject_filename']=reproject_name dset=groundfile.create_dataset('cloudsat_lons',box_lons.shape,box_lons.dtype) dset[...] = box_lons[...] dset.attrs['long_name']='cloudsat longitude' dset.attrs['units']='degrees East' dset=groundfile.create_dataset('cloudsat_lats',box_lats.shape,box_lats.dtype) dset[...] = box_lats[...] dset.attrs['long_name']='cloudsat latitude' dset.attrs['units']='degrees North' dset= groundfile.create_dataset('cloudsat_times',timestamps.shape,timestamps.dtype) dset[...] = timestamps[...] dset.attrs['long_name']='cloudsat UTC datetime timestamp' dset.attrs['units']='seconds since Jan. 1, 1970' Explanation: write the groundtrack out for future use End of explanation
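Since the groundtrack file stores times as POSIX timestamps (HDF5 has no native datetime type), reading it back later means converting float seconds into datetime objects again. A small sketch of that round trip is below; it keeps the datetimes naive, which is an assumption on my part about how the timestamps were written.

import numpy as np
from datetime import datetime

times = [datetime(2006, 10, 30, 21, 21, 28), datetime(2006, 10, 30, 21, 22, 28)]

# writing: datetime -> float seconds since 1970 (what goes into the h5 dataset)
stamps = np.array([t.timestamp() for t in times])
print(stamps)

# reading back: float seconds -> datetime again
recovered = [datetime.fromtimestamp(s) for s in stamps]
print(recovered[0])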
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Toplevel MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required Step7: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required Step8: 3.2. CMIP3 Parent Is Required Step9: 3.3. CMIP5 Parent Is Required Step10: 3.4. Previous Name Is Required Step11: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required Step12: 4.2. Code Version Is Required Step13: 4.3. Code Languages Is Required Step14: 4.4. Components Structure Is Required Step15: 4.5. Coupler Is Required Step16: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required Step17: 5.2. Atmosphere Double Flux Is Required Step18: 5.3. Atmosphere Fluxes Calculation Grid Is Required Step19: 5.4. Atmosphere Relative Winds Is Required Step20: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required Step21: 6.2. Global Mean Metrics Used Is Required Step22: 6.3. Regional Metrics Used Is Required Step23: 6.4. Trend Metrics Used Is Required Step24: 6.5. Energy Balance Is Required Step25: 6.6. Fresh Water Balance Is Required Step26: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required Step27: 7.2. Atmos Ocean Interface Is Required Step28: 7.3. Atmos Land Interface Is Required Step29: 7.4. Atmos Sea-ice Interface Is Required Step30: 7.5. Ocean Seaice Interface Is Required Step31: 7.6. 
Land Ocean Interface Is Required Step32: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required Step33: 8.2. Atmos Ocean Interface Is Required Step34: 8.3. Atmos Land Interface Is Required Step35: 8.4. Atmos Sea-ice Interface Is Required Step36: 8.5. Ocean Seaice Interface Is Required Step37: 8.6. Runoff Is Required Step38: 8.7. Iceberg Calving Is Required Step39: 8.8. Endoreic Basins Is Required Step40: 8.9. Snow Accumulation Is Required Step41: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required Step42: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required Step43: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. Overview Is Required Step44: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required Step45: 12.2. Additional Information Is Required Step46: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required Step47: 13.2. Additional Information Is Required Step48: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required Step49: 14.2. Additional Information Is Required Step50: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required Step51: 15.2. Additional Information Is Required Step52: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required Step53: 16.2. Additional Information Is Required Step54: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required Step55: 17.2. Equivalence Concentration Is Required Step56: 17.3. Additional Information Is Required Step57: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required Step58: 18.2. Additional Information Is Required Step59: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required Step60: 19.2. Additional Information Is Required Step61: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required Step62: 20.2. Additional Information Is Required Step63: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required Step64: 21.2. Additional Information Is Required Step65: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required Step66: 22.2. Aerosol Effect On Ice Clouds Is Required Step67: 22.3. Additional Information Is Required Step68: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required Step69: 23.2. Aerosol Effect On Ice Clouds Is Required Step70: 23.3. RFaci From Sulfate Only Is Required Step71: 23.4. Additional Information Is Required Step72: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required Step73: 24.2. Additional Information Is Required Step74: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. 
Provision Is Required Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step76: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required Step77: 25.4. Additional Information Is Required Step78: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step80: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required Step81: 26.4. Additional Information Is Required Step82: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required Step83: 27.2. Additional Information Is Required Step84: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required Step85: 28.2. Crop Change Only Is Required Step86: 28.3. Additional Information Is Required Step87: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required Step88: 29.2. Additional Information Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-ll', 'toplevel') Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: MOHC Source ID: HADGEM3-GC31-LL Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:14 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.4. 
Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. 
Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation
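For readers unfamiliar with the ES-DOC workflow, every cell above follows the same pattern: select a property with DOC.set_id and then record a value with DOC.set_value. The lines below are an illustrative sketch only -- the values shown are hypothetical placeholders, not the documented HADGEM3-GC31-LL configuration, and the real entries must come from the model's own documentation before publishing.
# Illustrative sketch -- hypothetical placeholder values, NOT the actual HADGEM3-GC31-LL configuration
# STRING property: free-text value
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
DOC.set_value("2017")  # placeholder; replace with the real release year
# ENUM property: the value must be one of the valid choices listed for that property
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
DOC.set_value("irradiance")  # placeholder choice taken from the property's valid list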
11,266
Given the following text description, write Python code to implement the functionality described below step by step Description: <p><font size="6"><b> Case study Step1: Some of the data files that are available from AirBase were included in the data folder Step2: Processing a single file We will start with processing one of the downloaded files (BETR8010000800100hour.1-1-1990.31-12-2012). Looking at the data, you will see it does not look like a nice csv file Step3: So we will need to do some manual processing. Just reading the tab-delimited data Step4: The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names. <div class="alert alert-success"> <b>EXERCISE</b> Step5: For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data). <div class="alert alert-success"> <b>EXERCISE</b> Step6: Now, we want to reshape it Step7: Our final data is now a time series. In pandas, this means that the index is a DatetimeIndex Step9: Processing a collection of files We now have seen the code steps to process one of the files. We have however multiple files for the different stations with the same structure. Therefore, to not have to repeat the actual code, let's make a function from the steps we have seen above. <div class="alert alert-success"> <b>EXERCISE</b> Step10: Test the function on the data file from above Step11: We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe. <div class="alert alert-success"> <b>EXERCISE</b> Step12: <div class="alert alert-success"> <b>EXERCISE</b> Step13: Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file. Step14: Working with time series data We processed the individual data files above, and saved it to a csv file airbase_data.csv. Let's import the file here (if you didn't finish the above exercises, a version of the dataset is also available in data/airbase_data.csv) Step15: We only use the data from 1999 onwards Step16: Som first exploration with the typical functions Step17: Quickly visualizing the data Step18: This does not say too much .. We can select part of the data (eg the latest 500 data points) Step19: Exercises <div class="alert alert-warning"> <b>REMINDER</b> Step20: <div class="alert alert-success"> <b>QUESTION</b> Step21: <div class="alert alert-success"> <b>QUESTION</b> Step22: <div class="alert alert-success"> <b>QUESTION</b> Step23: Combination with groupby resample can actually be seen as a specific kind of groupby. E.g. taking annual means with data.resample('A', 'mean') is equivalent to data.groupby(data.index.year).mean() (only the result of resample still has a DatetimeIndex). Step24: But, groupby is more flexible and can also do resamples that do not result in a new continuous time series, e.g. by grouping by the hour of the day to get the diurnal cycle. <div class="alert alert-success"> <b>QUESTION</b> Step25: 2. Now, we can calculate the mean of each month over the different years Step26: 3. 
plot the typical monthly profile of the different stations Step27: <div class="alert alert-success"> <b>QUESTION</b> Step28: <div class="alert alert-success"> <b>QUESTION</b> Step29: <div class="alert alert-success"> <b>QUESTION</b> Step30: <div class="alert alert-success"> <b>QUESTION</b> Step31: <div class="alert alert-success"> <b>QUESTION</b> Step32: Add a column indicating week/weekend Step33: <div class="alert alert-success"> <b>QUESTION</b> Step34: An alternative method using groupby and unstack Step35: <div class="alert alert-success"> <b>QUESTION</b> Step36: <div class="alert alert-success"> <b>QUESTION</b>
Python Code: from IPython.display import HTML HTML('<iframe src=http://www.eea.europa.eu/data-and-maps/data/airbase-the-european-air-quality-database-8#tab-data-by-country width=900 height=350></iframe>') Explanation: <p><font size="6"><b> Case study: air quality data of European monitoring stations (AirBase)</b></font></p> <br> AirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe. © 2016, Joris Van den Bossche and Stijn Van Hoey (&#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons AirBase is the European air quality database maintained by the European Environment Agency (EEA). It contains air quality monitoring data and information submitted by participating countries throughout Europe. The air quality database consists of a multi-annual time series of air quality measurement data and statistics for a number of air pollutants. End of explanation %matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt pd.options.display.max_rows = 8 Explanation: Some of the data files that are available from AirBase were included in the data folder: the hourly concentrations of nitrogen dioxide (NO2) for 4 different measurement stations: FR04037 (PARIS 13eme): urban background site at Square de Choisy FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia BETR802: urban traffic site in Antwerp, Belgium BETN029: rural background site in Houtem, Belgium See http://www.eea.europa.eu/themes/air/interactive/no2 End of explanation with open("data/BETR8010000800100hour.1-1-1990.31-12-2012") as f: print(f.readline()) Explanation: Processing a single file We will start with processing one of the downloaded files (BETR8010000800100hour.1-1-1990.31-12-2012). Looking at the data, you will see it does not look like a nice csv file: End of explanation data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None) data.head() Explanation: So we will need to do some manual processing. Just reading the tab-delimited data: End of explanation # Column names: list consisting of 'date' and then intertwined the hour of the day and 'flag' hours = ["{:02d}".format(i) for i in range(24)] column_names = ['date'] + [item for pair in zip(hours, ['flag']*24) for item in pair] # %load snippets/07 - Case study - air quality data7.py # %load snippets/07 - Case study - air quality data8.py Explanation: The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names. 
<div class="alert alert-success"> <b>EXERCISE</b>: <br><br> Clean up this dataframe by using more options of `read_csv` (see its [docstring](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)) <ul> <li>specify the correct delimiter</li> <li>specify that the values of -999 and -9999 should be regarded as NaN</li> <li>specify are own column names (for how the column names are made up, see See http://stackoverflow.com/questions/6356041/python-intertwining-two-lists) </ul> </div> End of explanation flag_columns = [col for col in data.columns if 'flag' in col] # we can now use this list to drop these columns # %load snippets/07 - Case study - air quality data10.py data.head() Explanation: For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data). <div class="alert alert-success"> <b>EXERCISE</b>: <br><br> Drop all 'flag' columns ('flag1', 'flag2', ...) End of explanation # %load snippets/07 - Case study - air quality data12.py # %load snippets/07 - Case study - air quality data13.py # %load snippets/07 - Case study - air quality data14.py # %load snippets/07 - Case study - air quality data15.py # %load snippets/07 - Case study - air quality data16.py data_stacked.head() Explanation: Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. Here we have a wide and long dataframe, and want to make this a long, narrow timeseries. <div class="alert alert-info"> <b>REMEMBER</b>: <ul> <li>Recap: reshaping your data with [`stack` and `unstack`](./pandas_07_reshaping_data.ipynb)</li> </ul> <img src="img/schema-stack.svg" width=70%> </div> <div class="alert alert-success"> <b>EXERCISE</b>: <br><br> Reshape the dataframe to a timeseries. The end result should look like:<br><br> <div class='center'> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>BETR801</th> </tr> </thead> <tbody> <tr> <th>1990-01-02 09:00:00</th> <td>48.0</td> </tr> <tr> <th>1990-01-02 12:00:00</th> <td>48.0</td> </tr> <tr> <th>1990-01-02 13:00:00</th> <td>50.0</td> </tr> <tr> <th>1990-01-02 14:00:00</th> <td>55.0</td> </tr> <tr> <th>...</th> <td>...</td> </tr> <tr> <th>2012-12-31 20:00:00</th> <td>16.5</td> </tr> <tr> <th>2012-12-31 21:00:00</th> <td>14.5</td> </tr> <tr> <th>2012-12-31 22:00:00</th> <td>16.5</td> </tr> <tr> <th>2012-12-31 23:00:00</th> <td>15.0</td> </tr> </tbody> </table> <p style="text-align:center">170794 rows × 1 columns</p> </div> <ul> <li>Reshape the dataframe so that each row consists of one observation for one date + hour combination</li> <li>When you have the date and hour values as two columns, combine these columns into a datetime (tip: string columns can be summed to concatenate the strings) and remove the original columns</li> <li>Set the new datetime values as the index, and remove the original columns with date and hour values</li> </ul> **NOTE**: This is an advanced exercise. Do not spend too much time on it and don't hesitate to look at the solutions. </div> End of explanation data_stacked.index data_stacked.plot() Explanation: Our final data is now a time series. In pandas, this means that the index is a DatetimeIndex: End of explanation def read_airbase_file(filename, station): Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. ... return ... 
# %load snippets/07 - Case study - air quality data21.py Explanation: Processing a collection of files We now have seen the code steps to process one of the files. We have however multiple files for the different stations with the same structure. Therefore, to not have to repeat the actual code, let's make a function from the steps we have seen above. <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Write a function `read_airbase_file(filename, station)`, using the above steps the read in and process the data, and that returns a processed timeseries.</li> </ul> </div> End of explanation filename = "data/BETR8010000800100hour.1-1-1990.31-12-2012" station = filename.split("/")[-1][:7] station test = read_airbase_file(filename, station) test.head() Explanation: Test the function on the data file from above: End of explanation import glob # %load snippets/07 - Case study - air quality data33.py Explanation: We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe. <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Use the `glob.glob` function to list all 4 AirBase data files that are included in the 'data' directory, and call the result `data_files`.</li> </ul> </div> End of explanation # %load snippets/07 - Case study - air quality data34.py # %load snippets/07 - Case study - air quality data35.py combined_data.head() Explanation: <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Loop over the data files, read and process the file using our defined function, and append the dataframe to a list.</li> <li>Combine the the different DataFrames in the list into a single DataFrame where the different columns are the different stations. Call the result `combined_data`.</li> </ul> </div> End of explanation combined_data.to_csv("airbase_data.csv") Explanation: Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file. End of explanation alldata = pd.read_csv('airbase_data.csv', index_col=0, parse_dates=True) Explanation: Working with time series data We processed the individual data files above, and saved it to a csv file airbase_data.csv. Let's import the file here (if you didn't finish the above exercises, a version of the dataset is also available in data/airbase_data.csv): End of explanation data = alldata['1999':].copy() Explanation: We only use the data from 1999 onwards: End of explanation data.head() # tail() data.info() data.describe(percentiles=[0.1, 0.5, 0.9]) Explanation: Som first exploration with the typical functions: End of explanation data.plot(kind='box', ylim=[0,250]) data['BETR801'].plot(kind='hist', bins=50) data.plot(figsize=(12,6)) Explanation: Quickly visualizing the data End of explanation data[-500:].plot(figsize=(12,6)) Explanation: This does not say too much .. We can select part of the data (eg the latest 500 data points): End of explanation # %load snippets/07 - Case study - air quality data50.py # %load snippets/07 - Case study - air quality data51.py Explanation: Exercises <div class="alert alert-warning"> <b>REMINDER</b>: <br><br> Take a look at the [Timeseries notebook](05 - Time series data.ipynb) when you require more info about... 
<ul> <li>`resample`</li> <li>string indexing of DateTimeIndex</li> </ul><br><br> </div> <div class="alert alert-success"> <b>QUESTION</b>: plot the monthly mean and median concentration of the 'FR04037' station for the years 2009-2012 </div> End of explanation # %load snippets/07 - Case study - air quality data52.py # %load snippets/07 - Case study - air quality data53.py Explanation: <div class="alert alert-success"> <b>QUESTION</b>: plot the monthly mininum and maximum daily concentration of the 'BETR801' station </div> End of explanation # %load snippets/07 - Case study - air quality data54.py Explanation: <div class="alert alert-success"> <b>QUESTION</b>: make a bar plot of the mean of the stations in year of 2012 </div> End of explanation # %load snippets/07 - Case study - air quality data55.py Explanation: <div class="alert alert-success"> <b>QUESTION</b>: The evolution of the yearly averages with, and the overall mean of all stations (indicate the overall mean with a thicker black line)? </div> End of explanation data.groupby(data.index.year).mean().plot() Explanation: Combination with groupby resample can actually be seen as a specific kind of groupby. E.g. taking annual means with data.resample('A', 'mean') is equivalent to data.groupby(data.index.year).mean() (only the result of resample still has a DatetimeIndex). End of explanation # %load snippets/07 - Case study - air quality data57.py Explanation: But, groupby is more flexible and can also do resamples that do not result in a new continuous time series, e.g. by grouping by the hour of the day to get the diurnal cycle. <div class="alert alert-success"> <b>QUESTION</b>: how does the *typical monthly profile* look like for the different stations? </div> 1. add a column to the dataframe that indicates the month (integer value of 1 to 12): End of explanation # %load snippets/07 - Case study - air quality data58.py Explanation: 2. Now, we can calculate the mean of each month over the different years: End of explanation # %load snippets/07 - Case study - air quality data59.py data = data.drop('month', axis=1, errors='ignore') Explanation: 3. plot the typical monthly profile of the different stations: End of explanation df2011 = data['2011'].dropna() # %load snippets/07 - Case study - air quality data64.py # %load snippets/07 - Case study - air quality data65.py Explanation: <div class="alert alert-success"> <b>QUESTION</b>: plot the weekly 95% percentiles of the concentration in 'BETR801' and 'BETN029' for 2011 </div> End of explanation # %load snippets/07 - Case study - air quality data66.py Explanation: <div class="alert alert-success"> <b>QUESTION</b>: The typical diurnal profile for the different stations? </div> End of explanation # %load snippets/07 - Case study - air quality data67.py # %load snippets/07 - Case study - air quality data68.py # %load snippets/07 - Case study - air quality data69.py # %load snippets/07 - Case study - air quality data70.py Explanation: <div class="alert alert-success"> <b>QUESTION</b>: What are the number of exceedances of hourly values above the European limit 200 µg/m3 for each year/station? </div> End of explanation # %load snippets/07 - Case study - air quality data72.py # %load snippets/07 - Case study - air quality data73.py # %load snippets/07 - Case study - air quality data74.py Explanation: <div class="alert alert-success"> <b>QUESTION</b>: And are there exceedances of the yearly limit value of 40 µg/m3 since 200 ? </div> End of explanation data.index.weekday? 
# %load snippets/07 - Case study - air quality data76.py Explanation: <div class="alert alert-success"> <b>QUESTION</b>: What is the difference in the typical diurnal profile between week and weekend days? (and visualise it) </div> End of explanation # %load snippets/07 - Case study - air quality data77.py # %load snippets/07 - Case study - air quality data78.py # %load snippets/07 - Case study - air quality data79.py # %load snippets/07 - Case study - air quality data80.py # %load snippets/07 - Case study - air quality data81.py Explanation: Add a column indicating week/weekend End of explanation # %load snippets/07 - Case study - air quality data82.py # %load snippets/07 - Case study - air quality data83.py # %load snippets/07 - Case study - air quality data84.py Explanation: <div class="alert alert-success"> <b>QUESTION</b>: Visualize the typical week profile for the different stations as boxplots (where the values in one boxplot are the daily means for the different weeks for a certain weekday). </div> Tip: the boxplot method of a DataFrame expects the data for the different boxes in different columns). For this, you can either use pivot_table as a combination of groupby and unstack End of explanation # %load snippets/07 - Case study - air quality data85.py Explanation: An alternative method using groupby and unstack: End of explanation # %load snippets/07 - Case study - air quality data86.py Explanation: <div class="alert alert-success"> <b>QUESTION</b>: The maximum daily 8 hour mean should be below 100 µg/m³. What are the number of exceedances of this limit for each year/station? </div> Tip: have a look at the rolling method to perform moving window operations. Note: this is not an actual limit for NO2, but a nice exercise to introduce the rolling method. Other pollutans, such as 03 have actually such kind of limit values. End of explanation # %load snippets/07 - Case study - air quality data87.py # %load snippets/07 - Case study - air quality data88.py Explanation: <div class="alert alert-success"> <b>QUESTION</b>: Calculate the correlation between the different stations </div> End of explanation
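The %load lines above pull in the course's own solution snippets, which are not reproduced in this dump. Purely as an illustration of one possible approach (not the official snippet), the 8-hour rolling-mean exceedance exercise could be tackled along these lines, assuming the combined station DataFrame data built earlier in the notebook:

import pandas as pd

# daily maximum of the 8-hour rolling mean, per station
rolling_8h = data.rolling(8).mean()
daily_max_8h = rolling_8h.resample('D').max()

# number of days per year on which the 100 ug/m3 level is exceeded
exceedances = (daily_max_8h > 100).groupby(daily_max_8h.index.year).sum()
print(exceedances)

The same groupby-by-year pattern also answers the earlier 200 µg/m³ hourly exceedance question when applied to data directly.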
11,267
Given the following text description, write Python code to implement the functionality described below step by step Description: Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out Step1: Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise Step2: Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use tf.variable_scope, you use a with statement Step3: Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise Step4: Hyperparameters Step5: Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True). Exercise Step6: Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. 
For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth) The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images. Exercise Step7: Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). We can do something similar with the discriminator. All the variables in the discriminator start with discriminator. Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list. Exercise Step8: Training Step9: Training loss Here we'll check out the training losses for the generator and discriminator. Step10: Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make. Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3. Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
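Before the full notebook code that follows, here is a compact, illustrative-only sketch of the two formulas just described, the leaky ReLU f(x) = max(alpha*x, x) and the label-smoothed targets for real images; the helper names are ad-hoc choices and not part of the original notebook:

import tensorflow as tf

def leaky_relu(x, alpha=0.01):
    # f(x) = max(alpha * x, x): a small slope for negative inputs instead of zero
    return tf.maximum(alpha * x, x)

def smoothed_real_loss(d_logits_real, smooth=0.1):
    # real images are labelled (1 - smooth) rather than 1.0 (label smoothing)
    labels = tf.ones_like(d_logits_real) * (1 - smooth)
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=labels))

Recent TensorFlow releases also ship tf.nn.leaky_relu, which can replace the manual tf.maximum pattern.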
Python Code: %matplotlib inline import pickle as pkl import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data') Explanation: Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out: Pix2Pix CycleGAN A whole list The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator. The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator. The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow. End of explanation def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real') inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z') return inputs_real, inputs_z Explanation: Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively. End of explanation def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): ''' Build the generator network. Arguments --------- z : Input tensor for the generator out_dim : Shape of the generator output n_units : Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out: ''' with tf.variable_scope('Generator', reuse=reuse): # finish this # Hidden layer h1 = tf.layers.dense(z, n_units, activation=None) # Leaky ReLU h1 = tf.maximum(h1 * alpha, h1) # Logits and tanh output logits = tf.layers.dense(h1, out_dim, activation=None) out = tf.tanh(logits) return out Explanation: Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. 
A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use tf.variable_scope, you use a with statement: python with tf.variable_scope('scope_name', reuse=False): # code here Here's more from the TensorFlow documentation to get another look at using tf.variable_scope. Leaky ReLU TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x: $$ f(x) = max(\alpha * x, x) $$ Tanh Output The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1. Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope. End of explanation def discriminator(x, n_units=128, reuse=False, alpha=0.01): ''' Build the discriminator network. Arguments --------- x : Input tensor for the discriminator n_units: Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out, logits: ''' with tf.variable_scope('Discriminator', reuse=reuse): # finish this # Hidden layer h1 = tf.layers.dense(x, n_units, activation=None) # Leaky ReLU h1 = tf.maximum(h1 * alpha, h1) logits = tf.layers.dense(h1, 1, activation=None) out = tf.sigmoid(logits) return out, logits Explanation: Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope. 
End of explanation # Size of input image to discriminator input_size = 784 # 28x28 MNIST images flattened # Size of latent vector to generator z_size = 100 # Sizes of hidden layers in generator and discriminator g_hidden_size = 128 d_hidden_size = 128 # Leak factor for leaky ReLU alpha = 0.01 # Label smoothing smooth = 0.1 Explanation: Hyperparameters End of explanation tf.reset_default_graph() # Create our input placeholders input_real, input_z = model_inputs(input_size, z_size) # Generator network here g_model = generator(input_z, input_size) # g_model is the generator output # Disriminator network here d_model_real, d_logits_real = discriminator(input_real) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True) Explanation: Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True). Exercise: Build the network from the functions you defined earlier. End of explanation # Calculate losses d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth))) d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_logits_real))) d_loss = d_loss_real + d_loss_fake g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake))) Explanation: Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth) The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. 
The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images. Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator. End of explanation # Optimizers learning_rate = 0.002 # Get the trainable_variables, split into G and D parts t_vars = tf.trainable_variables() g_vars = [var for var in t_vars if var.name.startswith('Generator')] d_vars = [var for var in t_vars if var.name.startswith('Discriminator')] d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars) Explanation: Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). We can do something similar with the discriminator. All the variables in the discriminator start with discriminator. Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list. Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately. 
End of explanation batch_size = 100 epochs = 100 samples = [] losses = [] saver = tf.train.Saver(var_list = g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images, reshape and rescale to pass to D batch_images = batch[0].reshape((batch_size, 784)) batch_images = batch_images*2 - 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z}) # At the end of each epoch, get the losses and print them out train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images}) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) # Sample from generator as we're training for viewing afterwards sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha), feed_dict={input_z: sample_z}) samples.append(gen_samples) saver.save(sess, './checkpoints/generator.ckpt') # Save training generator samples with open('train_samples.pkl', 'wb') as f: pkl.dump(samples, f) Explanation: Training End of explanation %matplotlib inline import matplotlib.pyplot as plt fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator') plt.plot(losses.T[1], label='Generator') plt.title("Training Losses") plt.legend() Explanation: Training loss Here we'll check out the training losses for the generator and discriminator. End of explanation def view_samples(epoch, samples): fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) im = ax.imshow(img.reshape((28,28)), cmap='Greys_r') return fig, axes # Load samples from generator taken while training with open('train_samples.pkl', 'rb') as f: samples = pkl.load(f) Explanation: Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. End of explanation _ = view_samples(-1, samples) Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make. End of explanation rows, cols = 10, 6 fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True) for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes): for img, ax in zip(sample[::int(len(sample)/cols)], ax_row): ax.imshow(img.reshape((28,28)), cmap='Greys_r') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! 
End of explanation saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha), feed_dict={input_z: sample_z}) view_samples(0, [gen_samples]) Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3. Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! End of explanation
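One optional usage note on the sampling cell above (not part of the original notebook): seeding NumPy before drawing the latent vectors makes the generated sample grid reproducible across runs.

import numpy as np

np.random.seed(42)                                     # any fixed seed works
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
# feed sample_z to the restored generator exactly as in the cell above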
11,268
Given the following text description, write Python code to implement the functionality described below step by step Description: Using LAMMPS with iPython and Jupyter LAMMPS can be run interactively using iPython easily. This tutorial shows how to set this up. Installation Download the latest version of LAMMPS into a folder (we will calls this $LAMMPS_DIR from now on) Compile LAMMPS as a shared library and enable exceptions and PNG support bash cd $LAMMPS_DIR/src make yes-molecule python Make.py -m mpi -png -s exceptions -a file make mode=shlib auto Create a python virtualenv bash virtualenv testing source testing/bin/activate Inside the virtualenv install the lammps package (testing) cd $LAMMPS_DIR/python (testing) python install.py (testing) cd # move to your working directory Install jupyter and ipython in the virtualenv bash (testing) pip install ipython jupyter Run jupyter notebook bash (testing) jupyter notebook Example Step1: Queries about LAMMPS simulation Step2: Working with LAMMPS Variables Step3: Accessing Atom data
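Before running the notebook code, a minimal smoke test can confirm that the shared-library build and the Python package installed above are wired up correctly. This sketch uses only the low-level lammps wrapper class and assumes the installation steps succeeded:

from lammps import lammps

lmp = lammps()             # fails here if the shared library cannot be loaded
lmp.command("units lj")    # issue a single LAMMPS command as a quick check
print(lmp.get_natoms())    # an empty simulation reports 0 atoms
lmp.close()

If this runs cleanly, the IPyLammps wrapper used in the rest of the tutorial should work as well.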
Python Code: from lammps import IPyLammps L = IPyLammps() # 2d circle of particles inside a box with LJ walls import math b = 0 x = 50 y = 20 d = 20 # careful not to slam into wall too hard v = 0.3 w = 0.08 L.units("lj") L.dimension(2) L.atom_style("bond") L.boundary("f f p") L.lattice("hex", 0.85) L.region("box", "block", 0, x, 0, y, -0.5, 0.5) L.create_box(1, "box", "bond/types", 1, "extra/bond/per/atom", 6) L.region("circle", "sphere", d/2.0+1.0, d/2.0/math.sqrt(3.0)+1, 0.0, d/2.0) L.create_atoms(1, "region", "circle") L.mass(1, 1.0) L.velocity("all create 0.5 87287 loop geom") L.velocity("all set", v, w, 0, "sum yes") L.pair_style("lj/cut", 2.5) L.pair_coeff(1, 1, 10.0, 1.0, 2.5) L.bond_style("harmonic") L.bond_coeff(1, 10.0, 1.2) L.create_bonds("all", "all", 1, 1.0, 1.5) L.neighbor(0.3, "bin") L.neigh_modify("delay", 0, "every", 1, "check yes") L.fix(1, "all", "nve") L.fix(2, "all wall/lj93 xlo 0.0 1 1 2.5 xhi", x, "1 1 2.5") L.fix(3, "all wall/lj93 ylo 0.0 1 1 2.5 yhi", y, "1 1 2.5") L.image(zoom=1.8) L.thermo_style("custom step temp epair press") L.thermo(100) output = L.run(40000) L.image(zoom=1.8) Explanation: Using LAMMPS with iPython and Jupyter LAMMPS can be run interactively using iPython easily. This tutorial shows how to set this up. Installation Download the latest version of LAMMPS into a folder (we will calls this $LAMMPS_DIR from now on) Compile LAMMPS as a shared library and enable exceptions and PNG support bash cd $LAMMPS_DIR/src make yes-molecule python Make.py -m mpi -png -s exceptions -a file make mode=shlib auto Create a python virtualenv bash virtualenv testing source testing/bin/activate Inside the virtualenv install the lammps package (testing) cd $LAMMPS_DIR/python (testing) python install.py (testing) cd # move to your working directory Install jupyter and ipython in the virtualenv bash (testing) pip install ipython jupyter Run jupyter notebook bash (testing) jupyter notebook Example End of explanation L.system L.system.natoms L.system.nbonds L.system.nbondtypes L.communication L.fixes L.computes L.dumps L.groups Explanation: Queries about LAMMPS simulation End of explanation L.variable("a index 2") L.variables L.variable("t equal temp") L.variables import sys if sys.version_info < (3, 0): # In Python 2 'print' is a restricted keyword, which is why you have to use the lmp_print function instead. x = float(L.lmp_print('"${a}"')) else: # In Python 3 the print function can be redefined. # x = float(L.print('"${a}"')") # To avoid a syntax error in Python 2 executions of this notebook, this line is packed into an eval statement x = float(eval("L.print('\"${a}\"')")) x L.variables['t'].value L.eval("v_t/2.0") L.variable("b index a b c") L.variables['b'].value L.eval("v_b") L.variables['b'].definition L.variable("i loop 10") L.variables['i'].value L.next("i") L.variables['i'].value L.eval("ke") Explanation: Working with LAMMPS Variables End of explanation L.atoms[0] [x for x in dir(L.atoms[0]) if not x.startswith('__')] L.atoms[0].position L.atoms[0].id L.atoms[0].velocity L.atoms[0].force L.atoms[0].type Explanation: Accessing Atom data End of explanation
11,269
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook covers using metrics to analyze the 'accuracy' of prophet models. In this notebook, we will extend the previous example (http Step1: Read in the data Read the data in from the retail sales CSV file in the examples folder then set the index to the 'date' column. We are also parsing dates in the data file. Step2: Prepare for Prophet As explained in previous prophet posts, for prophet to work, we need to change the names of these columns to 'ds' and 'y'. Step3: Let's rename the columns as required by fbprophet. Additioinally, fbprophet doesn't like the index to be a datetime...it wants to see 'ds' as a non-index column, so we won't set an index differnetly than the integer index. Step4: Now's a good time to take a look at your data. Plot the data using pandas' plot function Step5: Running Prophet Now, let's set prophet up to begin modeling our data using our promotions dataframe as part of the forecast Note Step6: We've instantiated the model, now we need to build some future dates to forecast into. Step7: To forecast this future data, we need to run it through Prophet's model. Step8: The resulting forecast dataframe contains quite a bit of data, but we really only care about a few columns. First, let's look at the full dataframe Step9: We really only want to look at yhat, yhat_lower and yhat_upper, so we can do that with Step10: Plotting Prophet results Prophet has a plotting mechanism called plot. This plot functionality draws the original data (black dots), the model (blue line) and the error of the forecast (shaded blue area). Step11: Personally, I'm not a fan of this visualization but I'm not going to build my own...you can see how I do that here Step12: Now that we have our model, let's take a look at how it compares to our actual values using a few different metrics - R-Squared and Mean Squared Error (MSE). To do this, we need to build a combined dataframe with yhat from the forecasts and the original 'y' values from the data. Step13: You can see from the above, that the last part of the dataframe has "NaN" for 'y'...that's fine because we are only concerend about checking the forecast values versus the actual values so we can drop these "NaN" values. Step14: Now let's take a look at our R-Squared value Step15: An r-squared value of 0.99 is amazing (and probably too good to be true, which tells me this data is most likely overfit). Step16: That's a large MSE value...and confirms my suspicion that this data is overfit and won't likely hold up well into the future. Remember...for MSE, closer to zero is better. Now...let's see what the Mean Absolute Error (MAE) looks like. Step17: Not good. Not good at all. BUT...the purpose of this particular post is to show some usage of R-Squared, MAE and MSE's as metrics and I think we've done that. I can tell you from experience that part of the problem with this particular data is that its monthly and there aren't that many data points to start with (only 72 data points...not ideal for modeling). Another approach for metrics While writing this post, I came across ML Metrics (https Step18: Same value for MAE as before...which is a good sign for this new metrics library. Let's take a look at a few more. Here's the Absolute Error (pointwise...shows the error of each date's predicted value vs actual value) Step19: Let's look at Root Mean Square Error
Python Code: import pandas as pd import numpy as np from fbprophet import Prophet import matplotlib.pyplot as plt from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error %matplotlib inline plt.rcParams['figure.figsize']=(20,10) plt.style.use('ggplot') Explanation: This notebook covers using metrics to analyze the 'accuracy' of prophet models. In this notebook, we will extend the previous example (http://pythondata.com/forecasting-time-series-data-prophet-part-3/). Import necessary libraries End of explanation sales_df = pd.read_csv('../examples/retail_sales.csv', index_col='date', parse_dates=True) sales_df.head() Explanation: Read in the data Read the data in from the retail sales CSV file in the examples folder then set the index to the 'date' column. We are also parsing dates in the data file. End of explanation df = sales_df.reset_index() df.head() Explanation: Prepare for Prophet As explained in previous prophet posts, for prophet to work, we need to change the names of these columns to 'ds' and 'y'. End of explanation df=df.rename(columns={'date':'ds', 'sales':'y'}) df.head() Explanation: Let's rename the columns as required by fbprophet. Additioinally, fbprophet doesn't like the index to be a datetime...it wants to see 'ds' as a non-index column, so we won't set an index differnetly than the integer index. End of explanation df.set_index('ds').y.plot() Explanation: Now's a good time to take a look at your data. Plot the data using pandas' plot function End of explanation model = Prophet(weekly_seasonality=True) model.fit(df); Explanation: Running Prophet Now, let's set prophet up to begin modeling our data using our promotions dataframe as part of the forecast Note: Since we are using monthly data, you'll see a message from Prophet saying Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this. This is OK since we are workign with monthly data but you can disable it by using weekly_seasonality=True in the instantiation of Prophet. End of explanation future = model.make_future_dataframe(periods=24, freq = 'm') future.tail() Explanation: We've instantiated the model, now we need to build some future dates to forecast into. End of explanation forecast = model.predict(future) Explanation: To forecast this future data, we need to run it through Prophet's model. End of explanation forecast.tail() Explanation: The resulting forecast dataframe contains quite a bit of data, but we really only care about a few columns. First, let's look at the full dataframe: End of explanation forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail() Explanation: We really only want to look at yhat, yhat_lower and yhat_upper, so we can do that with: End of explanation model.plot(forecast); Explanation: Plotting Prophet results Prophet has a plotting mechanism called plot. This plot functionality draws the original data (black dots), the model (blue line) and the error of the forecast (shaded blue area). End of explanation model.plot_components(forecast); Explanation: Personally, I'm not a fan of this visualization but I'm not going to build my own...you can see how I do that here: https://github.com/urgedata/pythondata/blob/master/fbprophet/fbprophet_part_one.ipynb. Additionally, prophet let's us take a at the components of our model, including the holidays. This component plot is an important plot as it lets you see the components of your model including the trend and seasonality (identified in the yearly pane). 
End of explanation metric_df = forecast.set_index('ds')[['yhat']].join(df.set_index('ds').y).reset_index() metric_df.tail() Explanation: Now that we have our model, let's take a look at how it compares to our actual values using a few different metrics - R-Squared and Mean Squared Error (MSE). To do this, we need to build a combined dataframe with yhat from the forecasts and the original 'y' values from the data. End of explanation metric_df.dropna(inplace=True) metric_df.tail() Explanation: You can see from the above, that the last part of the dataframe has "NaN" for 'y'...that's fine because we are only concerend about checking the forecast values versus the actual values so we can drop these "NaN" values. End of explanation r2_score(metric_df.y, metric_df.yhat) Explanation: Now let's take a look at our R-Squared value End of explanation mean_squared_error(metric_df.y, metric_df.yhat) Explanation: An r-squared value of 0.99 is amazing (and probably too good to be true, which tells me this data is most likely overfit). End of explanation mean_absolute_error(metric_df.y, metric_df.yhat) Explanation: That's a large MSE value...and confirms my suspicion that this data is overfit and won't likely hold up well into the future. Remember...for MSE, closer to zero is better. Now...let's see what the Mean Absolute Error (MAE) looks like. End of explanation import ml_metrics as metrics metrics.mae(metric_df.y, metric_df.yhat) Explanation: Not good. Not good at all. BUT...the purpose of this particular post is to show some usage of R-Squared, MAE and MSE's as metrics and I think we've done that. I can tell you from experience that part of the problem with this particular data is that its monthly and there aren't that many data points to start with (only 72 data points...not ideal for modeling). Another approach for metrics While writing this post, I came across ML Metrics (https://github.com/benhamner/Metrics), which provides 17 metrics for Python (python version here --> https://github.com/benhamner/Metrics/tree/master/Python). Let's give it a go and see what these metrics show us. End of explanation metrics.ae(metric_df.y, metric_df.yhat) Explanation: Same value for MAE as before...which is a good sign for this new metrics library. Let's take a look at a few more. Here's the Absolute Error (pointwise...shows the error of each date's predicted value vs actual value) End of explanation metrics.rmse(metric_df.y, metric_df.yhat) Explanation: Let's look at Root Mean Square Error End of explanation
11,270
Given the following text description, write Python code to implement the functionality described below step by step Description: Training and Inference Module We modularized commonly used codes for training and inference in the module (or mod for short) package. This package provides intermediate-level and high-level interface for executing predefined networks. Basic Usage Preliminary In this tutorial, we will use a simple multilayer perception for 10 classes and a synthetic dataset. Step1: Create Module The most widely used module class is Module, which wraps a Symbol and one or more Executors. We construct a module by specify symbol Step2: Train, Predict, and Evaluate Modules provide high-level APIs for training, predicting and evaluating. To fit a module, simply call the fit function with some DataIters. Step3: To predict with a module, simply call predict() with a DataIter. It will collect and return all the prediction results. Step4: Another convenient API for prediction in the case where the prediction results might be too large to fit in the memory is iter_predict Step5: If we do not need the prediction outputs, but just need to evaluate on a test set, we can call the score() function with a DataIter and a EvalMetric Step6: Save and Load We can save the module parameters in each training epoch by using a checkpoint callback. Step7: To load the saved module parameters, call the load_checkpoint function. It load the Symbol and the associated parameters. We can then set the loaded parameters into the module. Step8: Or if we just want to resume training from a saved checkpoint, instead of calling set_params(), we can directly call fit(), passing the loaded parameters, so that fit() knows to start from those parameters instead of initializing from random. We also set the begin_epoch so that so that fit() knows we are resuming from a previous saved epoch. Step9: Module as a computation "machine" We already seen how to module for basic training and inference. Now we are going to show a more flexiable usage of module. A module represents a computation component. The design purpose of a module is that it abstract a computation “machine”, that accpets Symbol programs and data, and then we can run forward, backward, update parameters, etc. We aim to make the APIs easy and flexible to use, especially in the case when we need to use imperative API to work with multiple modules (e.g. stochastic depth network). A module has several states Step10: Beside the operations, a module provides a lot of useful information. basic names
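As a quick cross-check on the metrics used below (an illustrative addition, not part of the original notebook), MAE and RMSE can also be computed directly with NumPy from the joined metric_df that the notebook builds, and should agree with the sklearn and ml_metrics values:

import numpy as np

err = metric_df.y - metric_df.yhat     # metric_df is constructed further down
mae = np.abs(err).mean()
rmse = np.sqrt((err ** 2).mean())
print(mae, rmse)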
Python Code: import mxnet as mx from data_iter import SyntheticData # mlp net = mx.sym.Variable('data') net = mx.sym.FullyConnected(net, name='fc1', num_hidden=64) net = mx.sym.Activation(net, name='relu1', act_type="relu") net = mx.sym.FullyConnected(net, name='fc2', num_hidden=10) net = mx.sym.SoftmaxOutput(net, name='softmax') # synthetic 10 classes dataset with 128 dimension data = SyntheticData(10, 128) mx.viz.plot_network(net) Explanation: Training and Inference Module We modularized commonly used codes for training and inference in the module (or mod for short) package. This package provides intermediate-level and high-level interface for executing predefined networks. Basic Usage Preliminary In this tutorial, we will use a simple multilayer perception for 10 classes and a synthetic dataset. End of explanation mod = mx.mod.Module(symbol=net, context=mx.cpu(), data_names=['data'], label_names=['softmax_label']) Explanation: Create Module The most widely used module class is Module, which wraps a Symbol and one or more Executors. We construct a module by specify symbol : the network Symbol context : the device (or a list of devices) for execution data_names : the list of data variable names label_names : the list of label variable names One can refer to data.ipynb for more explanations about the last two arguments. Here we have only one data named data, and one label, with the name softmax_label, which is automatically named for us following the name softmax we specified for the SoftmaxOutput operator. End of explanation # @@@ AUTOTEST_OUTPUT_IGNORED_CELL import logging logging.basicConfig(level=logging.INFO) batch_size=32 mod.fit(data.get_iter(batch_size), eval_data=data.get_iter(batch_size), optimizer='sgd', optimizer_params={'learning_rate':0.1}, eval_metric='acc', num_epoch=5) Explanation: Train, Predict, and Evaluate Modules provide high-level APIs for training, predicting and evaluating. To fit a module, simply call the fit function with some DataIters. End of explanation y = mod.predict(data.get_iter(batch_size)) 'shape of predict: %s' % (y.shape,) Explanation: To predict with a module, simply call predict() with a DataIter. It will collect and return all the prediction results. End of explanation # @@@ AUTOTEST_OUTPUT_IGNORED_CELL for preds, i_batch, batch in mod.iter_predict(data.get_iter(batch_size)): pred_label = preds[0].asnumpy().argmax(axis=1) label = batch.label[0].asnumpy().astype('int32') print('batch %d, accuracy %f' % (i_batch, float(sum(pred_label==label))/len(label))) Explanation: Another convenient API for prediction in the case where the prediction results might be too large to fit in the memory is iter_predict: End of explanation # @@@ AUTOTEST_OUTPUT_IGNORED_CELL mod.score(data.get_iter(batch_size), ['mse', 'acc']) Explanation: If we do not need the prediction outputs, but just need to evaluate on a test set, we can call the score() function with a DataIter and a EvalMetric: End of explanation # @@@ AUTOTEST_OUTPUT_IGNORED_CELL # construct a callback function to save checkpoints model_prefix = 'mx_mlp' checkpoint = mx.callback.do_checkpoint(model_prefix) mod = mx.mod.Module(symbol=net) mod.fit(data.get_iter(batch_size), num_epoch=5, epoch_end_callback=checkpoint) Explanation: Save and Load We can save the module parameters in each training epoch by using a checkpoint callback. 
End of explanation sym, arg_params, aux_params = mx.model.load_checkpoint(model_prefix, 3) print(sym.tojson() == net.tojson()) # assign the loaded parameters to the module mod.set_params(arg_params, aux_params) Explanation: To load the saved module parameters, call the load_checkpoint function. It load the Symbol and the associated parameters. We can then set the loaded parameters into the module. End of explanation # @@@ AUTOTEST_OUTPUT_IGNORED_CELL mod = mx.mod.Module(symbol=sym) mod.fit(data.get_iter(batch_size), num_epoch=5, arg_params=arg_params, aux_params=aux_params, begin_epoch=3) Explanation: Or if we just want to resume training from a saved checkpoint, instead of calling set_params(), we can directly call fit(), passing the loaded parameters, so that fit() knows to start from those parameters instead of initializing from random. We also set the begin_epoch so that so that fit() knows we are resuming from a previous saved epoch. End of explanation # @@@ AUTOTEST_OUTPUT_IGNORED_CELL # initial state mod = mx.mod.Module(symbol=net) # bind, tell the module the data and label shapes, so # that memory could be allocated on the devices for computation train_iter = data.get_iter(batch_size) mod.bind(data_shapes=train_iter.provide_data, label_shapes=train_iter.provide_label) # init parameters mod.init_params(initializer=mx.init.Xavier(magnitude=2.)) # init optimizer mod.init_optimizer(optimizer='sgd', optimizer_params=(('learning_rate', 0.1), )) # use accuracy as the metric metric = mx.metric.create('acc') # train one epoch, i.e. going over the data iter one pass for batch in train_iter: mod.forward(batch, is_train=True) # compute predictions mod.update_metric(metric, batch.label) # accumulate prediction accuracy mod.backward() # compute gradients mod.update() # update parameters using SGD # training accuracy print(metric.get()) Explanation: Module as a computation "machine" We already seen how to module for basic training and inference. Now we are going to show a more flexiable usage of module. A module represents a computation component. The design purpose of a module is that it abstract a computation “machine”, that accpets Symbol programs and data, and then we can run forward, backward, update parameters, etc. We aim to make the APIs easy and flexible to use, especially in the case when we need to use imperative API to work with multiple modules (e.g. stochastic depth network). A module has several states: - Initial state. Memory is not allocated yet, not ready for computation yet. - Binded. Shapes for inputs, outputs, and parameters are all known, memory allocated, ready for computation. - Parameter initialized. For modules with parameters, doing computation before initializing the parameters might result in undefined outputs.| - Optimizer installed. An optimizer can be installed to a module. After this, the parameters of the module can be updated according to the optimizer after gradients are computed (forward-backward). The following codes implement a simplified fit(). Here we used other components including initializer, optimizer, and metric, which are explained in other notebooks. End of explanation print((mod.data_shapes, mod.label_shapes, mod.output_shapes)) print(mod.get_params()) Explanation: Beside the operations, a module provides a lot of useful information. basic names: - data_names: list of string indicating the names of the required data. - output_names: list of string indicating the names of the outputs. 
state information - binded: bool, indicating whether the memory buffers needed for computation has been allocated. - for_training: whether the module is binded for training (if binded). - params_initialized: bool, indicating whether the parameters of this modules has been initialized. - optimizer_initialized: bool, indicating whether an optimizer is defined and initialized. - inputs_need_grad: bool, indicating whether gradients with respect to the input data is needed. Might be useful when implementing composition of modules. input/output information - data_shapes: a list of (name, shape). In theory, since the memory is allocated, we could directly provide the data arrays. But in the case of data parallelization, the data arrays might not be of the same shape as viewed from the external world. - label_shapes: a list of (name, shape). This might be [] if the module does not need labels (e.g. it does not contains a loss function at the top), or a module is not binded for training. - output_shapes: a list of (name, shape) for outputs of the module. parameters (for modules with parameters) - get_params(): return a tuple (arg_params, aux_params). Each of those is a dictionary of name to NDArray mapping. Those NDArray always lives on CPU. The actual parameters used for computing might live on other devices (GPUs), this function will retrieve (a copy of) the latest parameters. - get_outputs(): get outputs of the previous forward operation. - get_input_grads(): get the gradients with respect to the inputs computed in the previous backward operation. End of explanation
11,271
Given the following text description, write Python code to implement the functionality described below step by step Description: Source of the materials Step1: The 'catch' is that you have to work with SeqRecord objects (see Chapter 4), which contain a Seq object (Chapter 3) plus annotation like an identifier and description. Note that when dealing with very large FASTA or FASTQ files, the overhead of working with all these objects can make scripts too slow. In this case consider the low-level SimpleFastaParser and FastqGeneralIterator parsers which return just a tuple of strings for each record (see Section 5.6) 5.1 Parsing or Reading Sequences The workhorse function Bio.SeqIO.parse() is used to read in sequence data as SeqRecord objects. This function expects two arguments Step2: The above example is repeated from the introduction in Section 2.4, and will load the orchid DNA sequences in the FASTA format file ls_orchid.fasta. If instead you wanted to load a GenBank format file like ls_orchid.gbk then all you need to do is change the filename and the format string Step3: Similarly, if you wanted to read in a file in another file format, then assuming Bio.SeqIO.parse() supports it you would just need to change the format string as appropriate, for example 'swiss' for SwissProt files or 'embl' for EMBL text files. There is a full listing on the wiki page (http Step4: There are more examples using SeqIO.parse() in a list comprehension like this in Section 20.2 (e.g. for plotting sequence lengths or GC%). 5.1.2 Iterating over the records in a sequence file In the above examples, we have usually used a for loop to iterate over all the records one by one. You can use the for loop with all sorts of Python objects (including lists, tuples and strings) which support the iteration interface. The object returned by Bio.SeqIO is actually an iterator which returns SeqRecord objects. You get to see each record in turn, but once and only once. The plus point is that an iterator can save you memory when dealing with large files. Instead of using a for loop, can also use the next() function on an iterator to step through the entries, like this Step5: Note that if you try to use next() and there are no more results, you'll get the special StopIteration exception. One special case to consider is when your sequence files have multiple records, but you only want the first one. In this situation the following code is very concise Step6: A word of warning here -- using the next() function like this will silently ignore any additional records in the file. If your files have one and only one record, like some of the online examples later in this chapter, or a GenBank file for a single chromosome, then use the new Bio.SeqIO.read() function instead. This will check there are no extra unexpected records present. 5.1.3 Getting a list of the records in a sequence file In the previous section we talked about the fact that Bio.SeqIO.parse() gives you a SeqRecord iterator, and that you get the records one by one. Very often you need to be able to access the records in any order. The Python list data type is perfect for this, and we can turn the record iterator into a list of SeqRecord objects using the built-in Python function list() like so Step7: You can of course still use a for loop with a list of SeqRecord objects. 
Using a list is much more flexible than an iterator (for example, you can determine the number of records from the length of the list), but does need more memory because it will hold all the records in memory at once. 5.1.4 Extracting data The SeqRecord object and its annotation structures are described more fully in Chapter 4. As an example of how annotations are stored, we'll look at the output from parsing the first record in the GenBank file ls_orchid.gbk. Step8: This gives a human readable summary of most of the annotation data for the SeqRecord. For this example we're going to use the .annotations attribute which is just a Python dictionary. The contents of this annotations dictionary were shown when we printed the record above. You can also print them out directly Step9: In general, 'organism' is used for the scientific name (in Latin, e.g. Arabidopsis thaliana), while 'source' will often be the common name (e.g. thale cress). In this example, as is often the case, the two fields are identical. Now let's go through all the records, building up a list of the species each orchid sequence is from Step10: Another way of writing this code is to use a list comprehension Step11: Great. That was pretty easy because GenBank files are annotated in a standardised way. Now, let's suppose you wanted to extract a list of the species from a FASTA file, rather than the GenBank file. The bad news is you will have to write some code to extract the data you want from the record's description line - if the information is in the file in the first place! Our example FASTA format file ls_orchid.fasta starts like this Step12: The concise alternative using list comprehensions would be Step13: In general, extracting information from the FASTA description line is not very nice. If you can get your sequences in a well annotated file format like GenBank or EMBL, then this sort of annotation information is much easier to deal with. 5.1.5 Modifying data In the previus section, we demostrated how to extract data from a SeqRecord. Another common task is to alter this data. The attributes of a SeqRecord can be modified directly, for example Step14: Note, if you want to change the way FASTA is output when written to a file (see Section 5.5), then you should modify both the id and description attributes. To ensure the correct behaviour, it is best to include the id plus a space at the start of the desired description Step15: 5.2 Parsing sequences from compressed files In the previous section, we looked at parsing sequence data from a file. Instead of using a filename, you can give Bio.SeqIO a handle (see Section 24.1), and in this section we'll use handles to parse sequence from compressed files. As you'll have seen above, we can use Bio.SeqIO.read() or Bio.SeqIO.parse() with a filename - for instance this quick example calculates the total length of the sequences in a multiple record GenBank file using a generator expression Step16: Here we use a file handle instead, using the \verb|with| statement to close the handle automatically Step17: Or, the old fashioned way where you manually close the handle Step18: Now, suppose we have a gzip compressed file instead? These are very commonly used on Linux. 
We can use Python's gzip module to open the compressed file for reading - which gives us a handle object Step19: Similarly if we had a bzip2 compressed file Step20: There is a gzip (GNU Zip) variant called BGZF (Blocked GNU Zip Format), which can be treated like an ordinary gzip file for reading, but has advantages for random access later which we'll talk about later in Section 5.4.4. 5.3 Parsing sequences from the net In the previous sections, we looked at parsing sequence data from a file (using a filename or handle), and from compressed files (using a handle). Here we'll use Bio.SeqIO with another type of handle, a network connection, to download and parse sequences from the internet. Note that just because you can download sequence data and parse it into a SeqRecord object in one go doesn't mean this is a good idea. In general, you should probably download sequences once and save them to a file for reuse. 5.3.1 Parsing GenBank records from the net Section 9.6 talks about the Entrez EFetch interface in more detail, but for now let's just connect to NCBI and get a few Opuntia (prickly-pear) sequences from GenBank using their GI numbers. First of all, let's fetch just one record. If you don't care about the annotations and features downloading a FASTA file is a good choice as these are compact. Now remember, when you expect the handle to contain one and only one record, use the Bio.SeqIO.read() function Step21: The NCBI will also let you ask for the file in other formats, in particular as a GenBank file. Until Easter 2009, the Entrez EFetch API let you use ``genbank'' as the return type, however the NCBI now insist on using the official return types of 'gb' (or 'gp' for proteins) as described on EFetch for Sequence and other Molecular Biology Databases. As a result, in Biopython 1.50 onwards, we support “gb” as an alias for “genbank” in Bio.SeqIO. Step22: Notice this time we have three features. Now let's fetch several records. This time the handle contains multiple records, so we must use the Bio.SeqIO.parse() function Step23: See Chapter 9 for more about the Bio.Entrez module, and make sure to read about the NCBI guidelines for using Entrez (Section 9.1). 5.3.2 Parsing SwissProt sequences from the net Now let's use a handle to download a SwissProt file from ExPASy, something covered in more depth in Chapter 10. As mentioned above, when you expect the handle to contain one and only one record, use the Bio.SeqIO.read() function Step24: 5.4 5Sequence files as dictionaries We're now going to introduce three related functions in the \verb|Bio.SeqIO| module which allow dictionary like random access to a multi-sequence file. There is a trade off here between flexibility and memory usage. In summary Step25: There is just one required argument for Bio.SeqIO.to_dict(), a list or generator giving SeqRecord objects. Here we have just used the output from the SeqIO.parse function. As the name suggests, this returns a Python dictionary. Since this variable orchid_dict is an ordinary Python dictionary, we can look at all of the keys we have available Step26: Under Python 3 the dictionary methods like ".keys()" and ".values()" are iterators rather than lists. If you really want to, you can even look at all the records at once Step27: We can access a single SeqRecord object via the keys and manipulate the object as normal Step28: So, it is very easy to create an in memory 'database' of our GenBank records. Next we'll try this for the FASTA file instead. 
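Before moving on, here is a compact recap of that dictionary pattern for the GenBank file; the record identifier shown is one of the entries in the example file:

from Bio import SeqIO

# Build an in-memory dictionary keyed by each record's .id, then use normal dict access
orchid_dict = SeqIO.to_dict(SeqIO.parse("ls_orchid.gbk", "genbank"))
print(len(orchid_dict))
seq_record = orchid_dict["Z78475.1"]
print(seq_record.description)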
Note that those of you with prior Python experience should all be able to construct a dictionary like this 'by hand'. However, typical dictionary construction methods will not deal with the case of repeated keys very nicely. Using the Bio.SeqIO.to_dict() will explicitly check for duplicate keys, and raise an exception if any are found. 5.4.1.1 pecifying the dictionary keys Using the same code as above, but for the FASTA file instead Step30: You should recognise these strings from when we parsed the FASTA file earlier in Section 2.4.1. Suppose you would rather have something else as the keys - like the accession numbers. This brings us nicely to SeqIO.to_dict()'s optional argument key_function, which lets you define what to use as the dictionary key for your records. First you must write your own function to return the key you want (as a string) when given a SeqRecord object. In general, the details of function will depend on the sort of input records you are dealing with. But for our orchids, we can just split up the record's identifier using the 'pipe' character (the vertical line) and return the fourth entry (field three) Step31: Then we can give this function to the SeqIO.to_dict() function to use in building the dictionary Step32: Finally, as desired, the new dictionary keys Step33: Not to complicated, I hope! 5.4.1.2 Indexing a dictionary using the SEGUID checksum To give another example of working with dictionaries of SeqRecord objects, we'll use the SEGUID checksum function. This is a relatively recent checksum, and collisions should be very rare (i.e. two different sequences with the same checksum), an improvement on the CRC64 checksum. Once again, working with the orchids GenBank file Step34: Now, recall the Bio.SeqIO.to_dict() function's key_function argument expects a function which turns a SeqRecord into a string. We can't use the seguid() function directly because it expects to be given a Seq object (or a string). However, we can use Python's lambda feature to create a 'one off' function to give to Bio.SeqIO.to_dict() instead Step35: That should have retrieved the record Z78532.1, the second entry in the file. 5.4.2 Sequence files as Dictionaries - Indexed files As the previous couple of examples tried to illustrate, using Bio.SeqIO.to_dict() is very flexible. However, because it holds everything in memory, the size of file you can work with is limited by your computer's RAM. In general, this will only work on small to medium files. For larger files you should consider Bio.SeqIO.index(), which works a little differently. Although it still returns a dictionary like object, this does not keep everything in memory. Instead, it just records where each record is within the file -- when you ask for a particular record, it then parses it on demand. As an example, let's use the same GenBank file as before Step36: Note that Bio.SeqIO.index() won’t take a handle, but only a filename. There are good reasons for this, but it is a little technical. The second argument is the file format (a lower case string as used in the other Bio.SeqIO functions). You can use many other simple file formats, including FASTA and FASTQ files (see the example in Section 20.1.11). However, alignment formats like PHYLIP or Clustal are not supported. Finally as an optional argument you can supply a key function. Here is the same example using the FASTA file - all we change is the filename and the format name Step38: 5.4.2.1 Specifying the dictionary keys Suppose you want to use the same keys as before? 
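One possible shape for such a key function is sketched here; the next sentences walk through the reasoning behind it:

def get_acc(identifier):
    """Return the accession from a FASTA identifier string.

    e.g. "gi|2765613|emb|Z78488.1|PTZ78488" -> "Z78488.1"
    """
    parts = identifier.split("|")
    assert len(parts) == 5 and parts[0] == "gi" and parts[2] == "emb"
    return parts[3]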
Much like with the Bio.SeqIO.to_dict() example in Section 5.4.1.1, you’ll need to write a tiny function to map from the FASTA identifier (as a string) to the key you want Step39: Then we can give this function to the Bio.SeqIO.index() function to use in building the dictionary Step40: Easy when you know how? 5.4.2.2 Getting the raw data for a record The dictionary-like object from Bio.SeqIO.index() gives you each entry as a SeqRecord object. However, it is sometimes useful to be able to get the original raw data straight from the file. For this use the get_raw() method which takes a single argument (the record identifier) and returns a string (extracted from the file without modification). A motivating example is extracting a subset of a records from a large file where either Bio.SeqIO.write() does not (yet) support the output file format (e.g. the plain text SwissProt file format) or where you need to preserve the text exactly (e.g. GenBank or EMBL output from Biopython does not yet preserve every last bit of annotation). Let's suppose you have download the whole of UniProt in the plain text SwissPort file format from their FTP site (ftp Step41: Note with Python 3 onwards, we have to open the file for writing in binary mode because the get_raw() method returns bytes strings. There is a longer example in Section 20.1.5 using the SeqIO.index() function to sort a large sequence file (without loading everything into memory at once). 5.4.3 Sequence files as Dictionaries - Database indexed files Biopython 1.57 introduced an alternative, Bio.SeqIO.index_db(), which can work on even extremely large files since it stores the record information as a file on disk (using an SQLite3 database) rather than in memory. Also, you can index multiple files together (providing all the record identifiers are unique). The Bio.SeqIO.index() function takes three required arguments Step42: Unless you care about viruses, that’s a lot of data to download just for this example - so let’s download just the first four chunks (about 25MB each compressed), and decompress them (taking in all about 1GB of space) Step43: Now, in Python, index these GenBank files as follows Step44: Indexing the full set of virus GenBank files took about ten minutes on my machine, just the first four files took about a minute or so. However, once done, repeating this will reload the index file gbvrl.idx in a fraction of a second. You can use the index as a read only Python dictionary - without having to worry about which file the sequence comes from, e.g. Step45: 5.4.3.1 Getting the raw data for a record Just as with the Bio.SeqIO.index() function discussed above in Section 5.4.2.2, the dictionary like object also lets you get at the raw bytes of each record Step46: 5.4.4 Indexing compressed files Very often when you are indexing a sequence file it can be quite large - so you may want to compress it on disk. Unfortunately efficient random access is difficult with the more common file formats like gzip and bzip2. In this setting, BGZF (Blocked GNU Zip Format) can be very helpful. This is a variant of gzip (and can be decompressed using standard gzip tools) popularised by the BAM file format, samtools, and tabix. To create a BGZF compressed file you can use the command line tool bgzip which comes with samtools. In our examples we use a filename extension .bgz, so they can be distinguished from normal gzipped files (named .gz). You can also use the Bio.bgzf module to read and write BGZF files from within Python. 
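For example, reading such a file back might look like this sketch, assuming ls_orchid.gbk.bgz has been produced with bgzip as shown a little further down:

from Bio import SeqIO, bgzf

# Open the BGZF file through Bio.bgzf in text mode and parse it as usual
handle = bgzf.open("ls_orchid.gbk.bgz", "r")
print(sum(len(rec) for rec in SeqIO.parse(handle, "genbank")))
handle.close()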
The Bio.SeqIO.index() and Bio.SeqIO.index_db() can both be used with BGZF compressed files. For example, if you started with an uncompressed GenBank file Step47: You could compress this (while keeping the original file) at the command line using the following command Step48: or Step49: The SeqIO indexing automatically detects the BGZF compression. Note that you can't use the same index file for the uncompressed and compressed files. 5.4.5 Discussion So, which of these methods should you use and why? It depends on what you are trying to do (and how much data you are dealing with). However, in general picking Bio.SeqIO.index() is a good starting point. If you are dealing with millions of records, multiple files, or repeated analyses, then look at Bio.SeqIO.index_db(). Reasons to choose Bio.SeqIO.to_dict() over either Bio.SeqIO.index() or Bio.SeqIO.index_db() boil down to a need for flexibility despite its high memory needs. The advantage of storing the SeqRecord objects in memory is they can be changed, added to, or removed at will. In addition to the downside of high memory consumption, indexing can also take longer because all the records must be fully parsed. Both Bio.SeqIO.index() and Bio.SeqIO.index_db() only parse records on demand. When indexing, they scan the file once looking for the start of each record and do as little work as possible to extract the identifier. Reasons to choose Bio.SeqIO.index() over Bio.SeqIO.index_db() include Step50: Now we have a list of SeqRecord objects, we'll write them to a FASTA format file Step51: And if you open this file in your favourite text editor it should look like this Step52: Still, that is a little bit complicated. So, because file conversion is such a common task, there is a helper function letting you replace that with just Step53: The Bio.SeqIO.convert() function will take handles or filenames. Watch out though - if the output file already exists, it will overwrite it! To find out more, see the built in help Step54: In principle, just by changing the filenames and the format names, this code could be used to convert between any file formats available in Biopython. However, writing some formats requires information (e.g. quality scores) which other files formats don’t contain. For example, while you can turn a FASTQ file into a FASTA file, you can’t do the reverse. See also Sections 20.1.9 and 20.1.10 in the cookbook chapter which looks at inter-converting between different FASTQ formats. Finally, as an added incentive for using the Bio.SeqIO.convert() function (on top of the fact your code will be shorter), doing it this way may also be faster! The reason for this is the convert function can take advantage of several file format specific optimisations and tricks. 5.5.3 Converting a file of sequences to their reverse complements Suppose you had a file of nucleotide sequences, and you wanted to turn it into a file containing their reverse complement sequences. This time a little bit of work is required to transform the SeqRecord objects we get from our input file into something suitable for saving to our output file. To start with, we’ll use Bio.SeqIO.parse() to load some nucleotide sequences from a file, then print out their reverse complements using the Seq object’s built in .reverse_complement() method (see Section 3.6) Step55: Now, if we want to save these reverse complements to a file, we’ll need to make SeqRecord objects. 
We can use the SeqRecord object’s built in .reverse_complement() method (see Section 4.9) but we must decide how to name our new records. This is an excellent place to demonstrate the power of list comprehensions which make a list in memory Step56: Now list comprehensions have a nice trick up their sleeves, you can add a conditional statement Step57: That would create an in memory list of reverse complement records where the sequence length was under 700 base pairs. However, we can do exactly the same with a generator expression - but with the advantage that this does not create a list of all the records in memory at once Step58: As a complete example Step59: There is a related example in Section 20.1.3, translating each record in a FASTA file from nucleotides to amino acids. 5.5.4 Getting your SeqRecord objects as formatted strings Suppose that you don't really want to write your records to a file or handle - instead you want a string containing the records in a particular file format. The Bio.SeqIO interface is based on handles, but Python has a useful built in module which provides a string based handle. For an example of how you might use this, let's load in a bunch of SeqRecord objects from our orchids GenBank file, and create a string containing the records in FASTA format Step60: This isn’t entirely straightforward the first time you see it! On the bright side, for the special case where you would like a string containing a single record in a particular file format, use the the SeqRecord class’ format() method (see Section 4.6). Note that although we don’t encourage it, you can use the format() method to write to a file, for example something like this Step61: While this style of code will work for a simple sequential file format like FASTA or the simple tab separated format used here, it will not work for more complex or interlaced file formats. This is why we still recommend using Bio.SeqIO.write(), as in the following example Step62: Making a single call to SeqIO.write(...) is also much quicker than multiple calls to the SeqRecord.format(...) method. 5.6 Low level FASTA and FASTQ parsers Working with the low-level SimpleFastaParser or FastqGeneralIterator is often more practical than Bio.SeqIO.parse when dealing with large high-throughput FASTA or FASTQ sequencing files where speed matters. As noted in the introduction to this chapter, the file-format neutral Bio.SeqIO interface has the overhead of creating many objects even for simple formats like FASTA. When parsing FASTA files, internally Bio.SeqIO.parse() calls the low-level SimpleFastaParser with the file handle. You can use this directly - it iterates over the file handle returning each record as a tuple of two strings, the title line (everything after the > character) and the sequence (as a plain string) Step63: As long as you don’t care about line wrapping (and you probably don’t for short read high-througput data), then outputing FASTA format from these strings is also very fast Step64: Likewise, when parsing FASTQ files, internally Bio.SeqIO.parse() calls the low-level FastqGeneralIterator with the file handle. If you don’t need the quality scores turned into integers, or can work with them as ASCII strings this is ideal Step65: There are more examples of this in the Cookbook (Chapter 20), including how to output FASTQ efficiently from strings using this code snippet
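(For reference, that snippet, which also appears at the end of the worked answer below, boils down to a single formatted write; here title, seq and qual are the three strings yielded for each record and out_handle is a handle already opened for writing.)

out_handle.write("@%s\n%s\n+\n%s\n" % (title, seq, qual))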
Python Code: from Bio import SeqIO help(SeqIO) Explanation: Source of the materials: Biopython Tutorial and Cookbook (adapted) Sequence Input/Output In this notebook we'll discuss in more detail the Bio.SeqIO module, which was briefly introduced before. This aims to provide a simple interface for working with assorted sequence file formats in a uniform way. See also the Bio.SeqIO wiki page (http://biopython.org/wiki/SeqIO), and the built in documentation (also online): End of explanation # we show the first 3 only from Bio import SeqIO for i, seq_record in enumerate(SeqIO.parse("data/ls_orchid.fasta", "fasta")): print(seq_record.id) print(repr(seq_record.seq)) print(len(seq_record)) if i == 2: break Explanation: The 'catch' is that you have to work with SeqRecord objects (see Chapter 4), which contain a Seq object (Chapter 3) plus annotation like an identifier and description. Note that when dealing with very large FASTA or FASTQ files, the overhead of working with all these objects can make scripts too slow. In this case consider the low-level SimpleFastaParser and FastqGeneralIterator parsers which return just a tuple of strings for each record (see Section 5.6) 5.1 Parsing or Reading Sequences The workhorse function Bio.SeqIO.parse() is used to read in sequence data as SeqRecord objects. This function expects two arguments: The first argument is a handle to read the data from, or a filename. A handle is typically a file opened for reading, but could be the output from a command line program, or data downloaded from the internet (see Section 5.3). See Section 24.1 for more about handles. The second argument is a lower case string specifying sequence format -- we don't try and guess the file format for you! See http://biopython.org/wiki/SeqIO for a full listing of supported formats. The Bio.SeqIO.parse() function returns an iterator which gives SeqRecord objects. Iterators are typically used in a for loop as shown below. Sometimes you'll find yourself dealing with files which contain only a single record. For this situation use the function Bio.SeqIO.read() which takes the same arguments. Provided there is one and only one record in the file, this is returned as a SeqRecord object. Otherwise an exception is raised. 5.1.1 Reading Sequence Files In general Bio.SeqIO.parse() is used to read in sequence files as SeqRecord objects, and is typically used with a for loop like this: End of explanation #we show the frist 3 from Bio import SeqIO for i, seq_record in enumerate(SeqIO.parse("data/ls_orchid.gbk", "genbank")): print(seq_record.id) print(seq_record.seq) print(len(seq_record)) if i == 2: break Explanation: The above example is repeated from the introduction in Section 2.4, and will load the orchid DNA sequences in the FASTA format file ls_orchid.fasta. If instead you wanted to load a GenBank format file like ls_orchid.gbk then all you need to do is change the filename and the format string: End of explanation from Bio import SeqIO identifiers=[seq_record.id for seq_record in SeqIO.parse("data/ls_orchid.gbk", "genbank")][:10] # ten only identifiers Explanation: Similarly, if you wanted to read in a file in another file format, then assuming Bio.SeqIO.parse() supports it you would just need to change the format string as appropriate, for example 'swiss' for SwissProt files or 'embl' for EMBL text files. There is a full listing on the wiki page (http://biopython.org/wiki/SeqIO) and in the built in documentation (also online). 
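For instance, only the format string needs to change to read an EMBL text file; the filename below is a placeholder:

from Bio import SeqIO

# "my_sequences.embl" is hypothetical - any EMBL flat file would do here
for record in SeqIO.parse("my_sequences.embl", "embl"):
    print(record.id, len(record))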
Another very common way to use a Python iterator is within a list comprehension (or a generator expression). For example, if all you wanted to extract from the file was a list of the record identifiers we can easily do this with the following list comprehension: End of explanation record_iterator = SeqIO.parse("data/ls_orchid.fasta", "fasta") first_record = next(record_iterator) print(first_record.id) print(first_record.description) second_record = next(record_iterator) print(second_record.id) print(second_record.description) Explanation: There are more examples using SeqIO.parse() in a list comprehension like this in Section 20.2 (e.g. for plotting sequence lengths or GC%). 5.1.2 Iterating over the records in a sequence file In the above examples, we have usually used a for loop to iterate over all the records one by one. You can use the for loop with all sorts of Python objects (including lists, tuples and strings) which support the iteration interface. The object returned by Bio.SeqIO is actually an iterator which returns SeqRecord objects. You get to see each record in turn, but once and only once. The plus point is that an iterator can save you memory when dealing with large files. Instead of using a for loop, can also use the next() function on an iterator to step through the entries, like this: End of explanation from Bio import SeqIO next(SeqIO.parse("data/ls_orchid.gbk", "genbank")) Explanation: Note that if you try to use next() and there are no more results, you'll get the special StopIteration exception. One special case to consider is when your sequence files have multiple records, but you only want the first one. In this situation the following code is very concise: End of explanation from Bio import SeqIO records = list(SeqIO.parse("data/ls_orchid.gbk", "genbank")) print("Found %i records" % len(records)) print("The last record") last_record = records[-1] #using Python's list tricks print(last_record.id) print(repr(last_record.seq)) print(len(last_record)) print("The first record") first_record = records[0] #remember, Python counts from zero print(first_record.id) print(repr(first_record.seq)) print(len(first_record)) Explanation: A word of warning here -- using the next() function like this will silently ignore any additional records in the file. If your files have one and only one record, like some of the online examples later in this chapter, or a GenBank file for a single chromosome, then use the new Bio.SeqIO.read() function instead. This will check there are no extra unexpected records present. 5.1.3 Getting a list of the records in a sequence file In the previous section we talked about the fact that Bio.SeqIO.parse() gives you a SeqRecord iterator, and that you get the records one by one. Very often you need to be able to access the records in any order. The Python list data type is perfect for this, and we can turn the record iterator into a list of SeqRecord objects using the built-in Python function list() like so: End of explanation from Bio import SeqIO record_iterator = SeqIO.parse("data/ls_orchid.gbk", "genbank") first_record = next(record_iterator) print(first_record) Explanation: You can of course still use a for loop with a list of SeqRecord objects. Using a list is much more flexible than an iterator (for example, you can determine the number of records from the length of the list), but does need more memory because it will hold all the records in memory at once. 
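As an aside on the single-record case mentioned earlier, Bio.SeqIO.read() can be sketched like this, with a hypothetical file expected to contain exactly one record:

from Bio import SeqIO

# SeqIO.read() raises a ValueError if the file holds zero or more than one record
record = SeqIO.read("single_record.gbk", "genbank")
print(record.id, len(record))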
5.1.4 Extracting data The SeqRecord object and its annotation structures are described more fully in Chapter 4. As an example of how annotations are stored, we'll look at the output from parsing the first record in the GenBank file ls_orchid.gbk. End of explanation print(first_record.annotations["source"]) print(first_record.annotations["organism"]) Explanation: This gives a human readable summary of most of the annotation data for the SeqRecord. For this example we're going to use the .annotations attribute which is just a Python dictionary. The contents of this annotations dictionary were shown when we printed the record above. You can also print them out directly: print(first_record.annotations) Like any Python dictionary, you can easily get a list of the keys: print(first_record.annotations.keys()) or values: print(first_record.annotations.values()) In general, the annotation values are strings, or lists of strings. One special case is any references in the file get stored as reference objects. Suppose you wanted to extract a list of the species from the ls_orchid.gbk GenBank file. The information we want, Cypripedium irapeanum, is held in the annotations dictionary under 'source' and 'organism', which we can access like this: End of explanation from Bio import SeqIO all_species = [] for seq_record in SeqIO.parse("data/ls_orchid.gbk", "genbank"): all_species.append(seq_record.annotations["organism"]) print(all_species[:10]) # we print only 10 Explanation: In general, 'organism' is used for the scientific name (in Latin, e.g. Arabidopsis thaliana), while 'source' will often be the common name (e.g. thale cress). In this example, as is often the case, the two fields are identical. Now let's go through all the records, building up a list of the species each orchid sequence is from: End of explanation from Bio import SeqIO all_species = [seq_record.annotations["organism"] for seq_record in SeqIO.parse("data/ls_orchid.gbk", "genbank") ] print(all_species[:10]) Explanation: Another way of writing this code is to use a list comprehension: End of explanation from Bio import SeqIO all_species = [] for seq_record in SeqIO.parse("data/ls_orchid.fasta", "fasta"): all_species.append(seq_record.description.split()[1]) print(all_species[:10]) Explanation: Great. That was pretty easy because GenBank files are annotated in a standardised way. Now, let's suppose you wanted to extract a list of the species from a FASTA file, rather than the GenBank file. The bad news is you will have to write some code to extract the data you want from the record's description line - if the information is in the file in the first place! Our example FASTA format file ls_orchid.fasta starts like this: &gt;gi|2765658|emb|Z78533.1|CIZ78533 C.irapeanum 5.8S rRNA gene and ITS1 and ITS2 DNA CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGATGAGACCGTGGAATAAACGATCGAGTG AATCCGGAGGACCGGTGTACTCAGCTCACCGGGGGCATTGCTCCCGTGGTGACCCTGATTTGTTGTTGGG ... You can check by hand, but for every record the species name is in the description line as the second word. This means if we break up each record's .description at the spaces, then the species is there as field number one (field zero is the record identifier). 
That means we can do this: End of explanation from Bio import SeqIO all_species == [ seq_record.description.split()[1] for seq_record in SeqIO.parse("data/ls_orchid.fasta", "fasta")] print(all_species[:10]) Explanation: The concise alternative using list comprehensions would be: End of explanation from Bio import SeqIO record_iterator = SeqIO.parse("data/ls_orchid.fasta", "fasta") first_record = next(record_iterator) first_record.id first_record.id = "new_id" first_record.id Explanation: In general, extracting information from the FASTA description line is not very nice. If you can get your sequences in a well annotated file format like GenBank or EMBL, then this sort of annotation information is much easier to deal with. 5.1.5 Modifying data In the previus section, we demostrated how to extract data from a SeqRecord. Another common task is to alter this data. The attributes of a SeqRecord can be modified directly, for example: End of explanation from Bio import SeqIO record_iterator = SeqIO.parse("data/ls_orchid.fasta", "fasta") first_record = next(record_iterator) first_record.id = "new_id" first_record.description = first_record.id + " " + "desired new description" print(first_record.format("fasta")[:200]) Explanation: Note, if you want to change the way FASTA is output when written to a file (see Section 5.5), then you should modify both the id and description attributes. To ensure the correct behaviour, it is best to include the id plus a space at the start of the desired description: End of explanation from Bio import SeqIO print(sum(len(r) for r in SeqIO.parse("data/ls_orchid.gbk", "gb"))) Explanation: 5.2 Parsing sequences from compressed files In the previous section, we looked at parsing sequence data from a file. Instead of using a filename, you can give Bio.SeqIO a handle (see Section 24.1), and in this section we'll use handles to parse sequence from compressed files. As you'll have seen above, we can use Bio.SeqIO.read() or Bio.SeqIO.parse() with a filename - for instance this quick example calculates the total length of the sequences in a multiple record GenBank file using a generator expression: End of explanation from Bio import SeqIO with open("data/ls_orchid.gbk") as handle: print(sum(len(r) for r in SeqIO.parse(handle, "gb"))) Explanation: Here we use a file handle instead, using the \verb|with| statement to close the handle automatically: End of explanation from Bio import SeqIO handle = open("data/ls_orchid.gbk") print(sum(len(r) for r in SeqIO.parse(handle, "gb"))) handle.close() Explanation: Or, the old fashioned way where you manually close the handle: End of explanation import gzip from Bio import SeqIO with gzip.open("data/ls_orchid.gbk.gz", "rt") as handle: print(sum(len(r) for r in SeqIO.parse(handle, "gb"))) Explanation: Now, suppose we have a gzip compressed file instead? These are very commonly used on Linux. 
We can use Python's gzip module to open the compressed file for reading - which gives us a handle object: End of explanation import bz2 from Bio import SeqIO with bz2.open("data/ls_orchid.gbk.bz2", "rt") as handle: print(sum(len(r) for r in SeqIO.parse(handle, "gb"))) Explanation: Similarly if we had a bzip2 compressed file: End of explanation from Bio import Entrez from Bio import SeqIO Entrez.email = "[email protected]" with Entrez.efetch( db="nucleotide", rettype="fasta", retmode="text", id="6273291" ) as handle: seq_record = SeqIO.read(handle, "fasta") print("%s with %i features" % (seq_record.id, len(seq_record.features))) Explanation: There is a gzip (GNU Zip) variant called BGZF (Blocked GNU Zip Format), which can be treated like an ordinary gzip file for reading, but has advantages for random access later which we'll talk about later in Section 5.4.4. 5.3 Parsing sequences from the net In the previous sections, we looked at parsing sequence data from a file (using a filename or handle), and from compressed files (using a handle). Here we'll use Bio.SeqIO with another type of handle, a network connection, to download and parse sequences from the internet. Note that just because you can download sequence data and parse it into a SeqRecord object in one go doesn't mean this is a good idea. In general, you should probably download sequences once and save them to a file for reuse. 5.3.1 Parsing GenBank records from the net Section 9.6 talks about the Entrez EFetch interface in more detail, but for now let's just connect to NCBI and get a few Opuntia (prickly-pear) sequences from GenBank using their GI numbers. First of all, let's fetch just one record. If you don't care about the annotations and features downloading a FASTA file is a good choice as these are compact. Now remember, when you expect the handle to contain one and only one record, use the Bio.SeqIO.read() function: End of explanation from Bio import Entrez from Bio import SeqIO Entrez.email = "[email protected]" with Entrez.efetch( db="nucleotide", rettype="gb", retmode="text", id="6273291" ) as handle: seq_record = SeqIO.read(handle, "gb") # using "gb" as an alias for "genbank" print("%s with %i features" % (seq_record.id, len(seq_record.features))) Explanation: The NCBI will also let you ask for the file in other formats, in particular as a GenBank file. Until Easter 2009, the Entrez EFetch API let you use ``genbank'' as the return type, however the NCBI now insist on using the official return types of 'gb' (or 'gp' for proteins) as described on EFetch for Sequence and other Molecular Biology Databases. As a result, in Biopython 1.50 onwards, we support “gb” as an alias for “genbank” in Bio.SeqIO. End of explanation from Bio import Entrez from Bio import SeqIO Entrez.email = "[email protected]" with Entrez.efetch( db="nucleotide", rettype="gb", retmode="text", id="6273291,6273290,6273289" ) as handle: for seq_record in SeqIO.parse(handle, "gb"): print("%s %s..." % (seq_record.id, seq_record.description[:50])) print( "Sequence length %i, %i features, from: %s" % ( len(seq_record), len(seq_record.features), seq_record.annotations["source"], ) ) Explanation: Notice this time we have three features. Now let's fetch several records. 
This time the handle contains multiple records, so we must use the Bio.SeqIO.parse() function: End of explanation from Bio import ExPASy from Bio import SeqIO with ExPASy.get_sprot_raw("O23729") as handle: seq_record = SeqIO.read(handle, "swiss") print(seq_record.id) print(seq_record.name) print(seq_record.description) print(repr(seq_record.seq)) print("Length %i" % len(seq_record)) print(seq_record.annotations["keywords"]) Explanation: See Chapter 9 for more about the Bio.Entrez module, and make sure to read about the NCBI guidelines for using Entrez (Section 9.1). 5.3.2 Parsing SwissProt sequences from the net Now let's use a handle to download a SwissProt file from ExPASy, something covered in more depth in Chapter 10. As mentioned above, when you expect the handle to contain one and only one record, use the Bio.SeqIO.read() function: End of explanation from Bio import SeqIO orchid_dict = SeqIO.to_dict(SeqIO.parse("data/ls_orchid.gbk", "genbank")) Explanation: 5.4 5Sequence files as dictionaries We're now going to introduce three related functions in the \verb|Bio.SeqIO| module which allow dictionary like random access to a multi-sequence file. There is a trade off here between flexibility and memory usage. In summary: Bio.SeqIO.to_dict() is the most flexible but also the most memory demanding option (see Section 5.4.1). This is basically a helper function to build a normal Python dictionary with each entry held as a SeqRecord object in memory, allowing you to modify the records. Bio.SeqIO.index() is a useful middle ground, acting like a read only dictionary and parsing sequences into SeqRecord objects on demand (see Section 5.4.2). Bio.SeqIO.index_db() also acts like a read only dictionary but stores the identifiers and file offsets in a file on disk (as an SQLite3 database), meaning it has very low memory requirements (see Section 5.4.3), but will be a little bit slower. See the discussion for an broad overview (Section 5.4.5). 5.4.1 Sequence files as Dictionaries -- In memory The next thing that we'll do with our ubiquitous orchid files is to show how to index them and access them like a database using the Python dictionary data type (like a hash in Perl). This is very useful for moderately large files where you only need to access certain elements of the file, and makes for a nice quick 'n dirty database. For dealing with larger files where memory becomes a problem, see Section 5.4.2 below. You can use the function Bio.SeqIO.to_dict() to make a SeqRecord dictionary (in memory). By default this will use each record's identifier (i.e. the .id attribute) as the key. Let's try this using our GenBank file: End of explanation len(orchid_dict) list(orchid_dict.keys())[:10] #ten only Explanation: There is just one required argument for Bio.SeqIO.to_dict(), a list or generator giving SeqRecord objects. Here we have just used the output from the SeqIO.parse function. As the name suggests, this returns a Python dictionary. Since this variable orchid_dict is an ordinary Python dictionary, we can look at all of the keys we have available: End of explanation list(orchid_dict.values())[:5] # Ok not all at once... Explanation: Under Python 3 the dictionary methods like ".keys()" and ".values()" are iterators rather than lists. 
If you really want to, you can even look at all the records at once: End of explanation seq_record = orchid_dict["Z78475.1"] print(seq_record.description) seq_record.seq Explanation: We can access a single SeqRecord object via the keys and manipulate the object as normal: End of explanation from Bio import SeqIO orchid_dict = SeqIO.to_dict(SeqIO.parse("data/ls_orchid.fasta", "fasta")) print(list(orchid_dict.keys())[:10]) Explanation: So, it is very easy to create an in memory 'database' of our GenBank records. Next we'll try this for the FASTA file instead. Note that those of you with prior Python experience should all be able to construct a dictionary like this 'by hand'. However, typical dictionary construction methods will not deal with the case of repeated keys very nicely. Using the Bio.SeqIO.to_dict() will explicitly check for duplicate keys, and raise an exception if any are found. 5.4.1.1 pecifying the dictionary keys Using the same code as above, but for the FASTA file instead: End of explanation def get_accession(record): "Given a SeqRecord, return the accession number as a string. e.g. "gi|2765613|emb|Z78488.1|PTZ78488" -> "Z78488.1" parts = record.id.split("|") assert len(parts) == 5 and parts[0] == "gi" and parts[2] == "emb" return parts[3] Explanation: You should recognise these strings from when we parsed the FASTA file earlier in Section 2.4.1. Suppose you would rather have something else as the keys - like the accession numbers. This brings us nicely to SeqIO.to_dict()'s optional argument key_function, which lets you define what to use as the dictionary key for your records. First you must write your own function to return the key you want (as a string) when given a SeqRecord object. In general, the details of function will depend on the sort of input records you are dealing with. But for our orchids, we can just split up the record's identifier using the 'pipe' character (the vertical line) and return the fourth entry (field three): End of explanation from Bio import SeqIO orchid_dict = SeqIO.to_dict(SeqIO.parse("data/ls_orchid.fasta", "fasta"), key_function=get_accession) print(orchid_dict.keys()) Explanation: Then we can give this function to the SeqIO.to_dict() function to use in building the dictionary: End of explanation print(list(orchid_dict.keys())[:10]) Explanation: Finally, as desired, the new dictionary keys: End of explanation from Bio import SeqIO from Bio.SeqUtils.CheckSum import seguid for i, record in enumerate(SeqIO.parse("data/ls_orchid.gbk", "genbank")): print(record.id, seguid(record.seq)) if i == 4: # OK, 5 is enough! break Explanation: Not to complicated, I hope! 5.4.1.2 Indexing a dictionary using the SEGUID checksum To give another example of working with dictionaries of SeqRecord objects, we'll use the SEGUID checksum function. This is a relatively recent checksum, and collisions should be very rare (i.e. two different sequences with the same checksum), an improvement on the CRC64 checksum. Once again, working with the orchids GenBank file: End of explanation from Bio import SeqIO from Bio.SeqUtils.CheckSum import seguid seguid_dict = SeqIO.to_dict(SeqIO.parse("data/ls_orchid.gbk", "genbank"), lambda rec : seguid(rec.seq)) record = seguid_dict["MN/s0q9zDoCVEEc+k/IFwCNF2pY"] print(record.id) print(record.description) Explanation: Now, recall the Bio.SeqIO.to_dict() function's key_function argument expects a function which turns a SeqRecord into a string. 
We can't use the seguid() function directly because it expects to be given a Seq object (or a string). However, we can use Python's lambda feature to create a 'one off' function to give to Bio.SeqIO.to_dict() instead: End of explanation from Bio import SeqIO orchid_dict = SeqIO.index("data/ls_orchid.gbk", "genbank") len(orchid_dict) print(list(orchid_dict.keys())) seq_record = orchid_dict["Z78475.1"] print(seq_record.description) seq_record.seq orchid_dict.close() Explanation: That should have retrieved the record Z78532.1, the second entry in the file. 5.4.2 Sequence files as Dictionaries - Indexed files As the previous couple of examples tried to illustrate, using Bio.SeqIO.to_dict() is very flexible. However, because it holds everything in memory, the size of file you can work with is limited by your computer's RAM. In general, this will only work on small to medium files. For larger files you should consider Bio.SeqIO.index(), which works a little differently. Although it still returns a dictionary like object, this does not keep everything in memory. Instead, it just records where each record is within the file -- when you ask for a particular record, it then parses it on demand. As an example, let's use the same GenBank file as before: End of explanation from Bio import SeqIO orchid_dict = SeqIO.index("data/ls_orchid.fasta", "fasta") len(orchid_dict) print(list(orchid_dict.keys())[:10]) Explanation: Note that Bio.SeqIO.index() won’t take a handle, but only a filename. There are good reasons for this, but it is a little technical. The second argument is the file format (a lower case string as used in the other Bio.SeqIO functions). You can use many other simple file formats, including FASTA and FASTQ files (see the example in Section 20.1.11). However, alignment formats like PHYLIP or Clustal are not supported. Finally as an optional argument you can supply a key function. Here is the same example using the FASTA file - all we change is the filename and the format name: End of explanation def get_acc(identifier): "Given a SeqRecord identifier string, return the accession number as a string. e.g. "gi|2765613|emb|Z78488.1|PTZ78488" -> "Z78488.1" parts = identifier.split("|") assert len(parts) == 5 and parts[0] == "gi" and parts[2] == "emb" return parts[3] Explanation: 5.4.2.1 Specifying the dictionary keys Suppose you want to use the same keys as before? Much like with the Bio.SeqIO.to_dict() example in Section 5.4.1.1, you’ll need to write a tiny function to map from the FASTA identifier (as a string) to the key you want: End of explanation from Bio import SeqIO orchid_dict = SeqIO.index("data/ls_orchid.fasta", "fasta", key_function=get_acc) print(list(orchid_dict.keys())) Explanation: Then we can give this function to the Bio.SeqIO.index() function to use in building the dictionary: End of explanation #Use this to download the file !wget -c ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/complete/uniprot_sprot.dat.gz -O data/uniprot_sprot.dat.gz !gzip -d data/uniprot_sprot.dat.gz from Bio import SeqIO uniprot = SeqIO.index("data/uniprot_sprot.dat", "swiss") with open("selected.dat", "wb") as out_handle: for acc in ["P33487", "P19801", "P13689", "Q8JZQ5", "Q9TRC7"]: out_handle.write(uniprot.get_raw(acc)) Explanation: Easy when you know how? 5.4.2.2 Getting the raw data for a record The dictionary-like object from Bio.SeqIO.index() gives you each entry as a SeqRecord object. 
However, it is sometimes useful to be able to get the original raw data straight from the file. For this use the get_raw() method which takes a single argument (the record identifier) and returns a string (extracted from the file without modification). A motivating example is extracting a subset of a records from a large file where either Bio.SeqIO.write() does not (yet) support the output file format (e.g. the plain text SwissProt file format) or where you need to preserve the text exactly (e.g. GenBank or EMBL output from Biopython does not yet preserve every last bit of annotation). Let's suppose you have download the whole of UniProt in the plain text SwissPort file format from their FTP site (ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/complete/uniprot_sprot.dat.gz - Careful big download) and uncompressed it as the file uniprot_sprot.dat, and you want to extract just a few records from it: End of explanation # For illustration only, see reduced example below $ rsync -avP ("ftp.ncbi.nih.gov::genbank/gbvrl*.seq.gz") $ gunzip gbvrl*.seq.gz Explanation: Note with Python 3 onwards, we have to open the file for writing in binary mode because the get_raw() method returns bytes strings. There is a longer example in Section 20.1.5 using the SeqIO.index() function to sort a large sequence file (without loading everything into memory at once). 5.4.3 Sequence files as Dictionaries - Database indexed files Biopython 1.57 introduced an alternative, Bio.SeqIO.index_db(), which can work on even extremely large files since it stores the record information as a file on disk (using an SQLite3 database) rather than in memory. Also, you can index multiple files together (providing all the record identifiers are unique). The Bio.SeqIO.index() function takes three required arguments: Index filename, we suggest using something ending .idx. This index file is actually an SQLite3 database. List of sequence filenames to index (or a single filename) File format (lower case string as used in the rest of the SeqIO module). As an example, consider the GenBank flat file releases from the NCBI FTP site, ftp://ftp.ncbi.nih.gov/genbank/, which are gzip compressed GenBank files. As of GenBank release 210, there are 38 files making up the viral sequences, gbvrl1.seq, ..., gbvrl16.seq, talking about 8GB on disk once decompressed, and containing in total early two million records. 
If you were interested in the viruses, you could download all the virus files from the command line very easily with the rsync command, and then decompress them with gunzip: End of explanation # Reduced example, download only the first four chunks $ curl -O ftp://ftp.ncbi.nih.gov/genbank/gbvrl1.seq.gz $ curl -O ftp://ftp.ncbi.nih.gov/genbank/gbvrl2.seq.gz $ curl -O ftp://ftp.ncbi.nih.gov/genbank/gbvrl3.seq.gz $ curl -O ftp://ftp.ncbi.nih.gov/genbank/gbvrl4.seq.gz $ gunzip gbvrl*.seq.gz Explanation: Unless you care about viruses, that’s a lot of data to download just for this example - so let’s download just the first four chunks (about 25MB each compressed), and decompress them (taking in all about 1GB of space): End of explanation #this will download the files - Currently there are more than 16, but we will do only 4 import os for i in range(1, 5): os.system('wget ftp://ftp.ncbi.nih.gov/genbank/gbvrl%i.seq.gz -O data/gbvrl%i.seq.gz' % (i, i)) os.system('gzip -d data/gbvrl%i.seq.gz' % i) files = ["data/gbvrl%i.seq" % i for i in range(1, 5)] gb_vrl = SeqIO.index_db("data/gbvrl.idx", files, "genbank") print("%i sequences indexed" % len(gb_vrl)) Explanation: Now, in Python, index these GenBank files as follows: End of explanation print(gb_vrl["AB811634.1"].description) Explanation: Indexing the full set of virus GenBank files took about ten minutes on my machine, just the first four files took about a minute or so. However, once done, repeating this will reload the index file gbvrl.idx in a fraction of a second. You can use the index as a read only Python dictionary - without having to worry about which file the sequence comes from, e.g. End of explanation print(gb_vrl.get_raw("AB811634.1")) Explanation: 5.4.3.1 Getting the raw data for a record Just as with the Bio.SeqIO.index() function discussed above in Section 5.4.2.2, the dictionary like object also lets you get at the raw bytes of each record: End of explanation from Bio import SeqIO orchid_dict = SeqIO.index("data/ls_orchid.gbk", "genbank") len(orchid_dict) orchid_dict.close() Explanation: 5.4.4 Indexing compressed files Very often when you are indexing a sequence file it can be quite large - so you may want to compress it on disk. Unfortunately efficient random access is difficult with the more common file formats like gzip and bzip2. In this setting, BGZF (Blocked GNU Zip Format) can be very helpful. This is a variant of gzip (and can be decompressed using standard gzip tools) popularised by the BAM file format, samtools, and tabix. To create a BGZF compressed file you can use the command line tool bgzip which comes with samtools. In our examples we use a filename extension .bgz, so they can be distinguished from normal gzipped files (named .gz). You can also use the Bio.bgzf module to read and write BGZF files from within Python. The Bio.SeqIO.index() and Bio.SeqIO.index_db() can both be used with BGZF compressed files. 
For example, if you started with an uncompressed GenBank file: End of explanation from Bio import SeqIO orchid_dict = SeqIO.index("data/ls_orchid.gbk.bgz", "genbank") len(orchid_dict) orchid_dict.close() Explanation: You could compress this (while keeping the original file) at the command line using the following command: $ bgzip -c ls_orchid.gbk &gt; ls_orchid.gbk.bgz You can use the compressed file in exactly the same way: End of explanation from Bio import SeqIO orchid_dict = SeqIO.index_db("data/ls_orchid.gbk.bgz.idx", "data/ls_orchid.gbk.bgz", "genbank") len(orchid_dict) orchid_dict.close() Explanation: or End of explanation from Bio.Seq import Seq from Bio.SeqRecord import SeqRecord rec1 = SeqRecord( Seq( "MMYQQGCFAGGTVLRLAKDLAENNRGARVLVVCSEITAVTFRGPSETHLDSMVGQALFGD" \ +"GAGAVIVGSDPDLSVERPLYELVWTGATLLPDSEGAIDGHLREVGLTFHLLKDVPGLISK" \ +"NIEKSLKEAFTPLGISDWNSTFWIAHPGGPAILDQVEAKLGLKEEKMRATREVLSEYGNM" \ +"SSAC", ), id="gi|14150838|gb|AAK54648.1|AF376133_1", description="chalcone synthase [Cucumis sativus]") rec2 = SeqRecord( Seq( "YPDYYFRITNREHKAELKEKFQRMCDKSMIKKRYMYLTEEILKENPSMCEYMAPSLDARQ" \ +"DMVVVEIPKLGKEAAVKAIKEWGQ", ), id="gi|13919613|gb|AAK33142.1|", description="chalcone synthase [Fragaria vesca subsp. bracteata]") rec3 = SeqRecord( Seq( "MVTVEEFRRAQCAEGPATVMAIGTATPSNCVDQSTYPDYYFRITNSEHKVELKEKFKRMC" \ +"EKSMIKKRYMHLTEEILKENPNICAYMAPSLDARQDIVVVEVPKLGKEAAQKAIKEWGQP" \ +"KSKITHLVFCTTSGVDMPGCDYQLTKLLGLRPSVKRFMMYQQGCFAGGTVLRMAKDLAEN" \ +"NKGARVLVVCSEITAVTFRGPNDTHLDSLVGQALFGDGAAAVIIGSDPIPEVERPLFELV" \ +"SAAQTLLPDSEGAIDGHLREVGLTFHLLKDVPGLISKNIEKSLVEAFQPLGISDWNSLFW" \ +"IAHPGGPAILDQVELKLGLKQEKLKATRKVLSNYGNMSSACVLFILDEMRKASAKEGLGT" \ +"TGEGLEWGVLFGFGPGLTVETVVLHSVAT", ), id="gi|13925890|gb|AAK49457.1|", description="chalcone synthase [Nicotiana tabacum]") my_records = [rec1, rec2, rec3] Explanation: The SeqIO indexing automatically detects the BGZF compression. Note that you can't use the same index file for the uncompressed and compressed files. 5.4.5 Discussion So, which of these methods should you use and why? It depends on what you are trying to do (and how much data you are dealing with). However, in general picking Bio.SeqIO.index() is a good starting point. If you are dealing with millions of records, multiple files, or repeated analyses, then look at Bio.SeqIO.index_db(). Reasons to choose Bio.SeqIO.to_dict() over either Bio.SeqIO.index() or Bio.SeqIO.index_db() boil down to a need for flexibility despite its high memory needs. The advantage of storing the SeqRecord objects in memory is they can be changed, added to, or removed at will. In addition to the downside of high memory consumption, indexing can also take longer because all the records must be fully parsed. Both Bio.SeqIO.index() and Bio.SeqIO.index_db() only parse records on demand. When indexing, they scan the file once looking for the start of each record and do as little work as possible to extract the identifier. Reasons to choose Bio.SeqIO.index() over Bio.SeqIO.index_db() include: Faster to build the index (more noticeable in simple file formats) Slightly faster access as SeqRecord objects (but the difference is only really noticeable for simple to parse file formats). Can use any immutable Python object as the dictionary keys (e.g. a tuple of strings, or a frozen set) not just strings. Don't need to worry about the index database being out of date if the sequence file being indexed has changed. 
Reasons to choose Bio.SeqIO.index_db() over Bio.SeqIO.index() include: Not memory limited - this is already important with files from second generation sequencing where 10s of millions of sequences are common, and using Bio.SeqIO.index() can require more than 4GB of RAM and therefore a 64bit version of Python. Because the index is kept on disk, it can be reused. Although building the index database file takes longer, if you have a script which will be rerun on the same datafiles in future, this could save time in the long run. Indexing multiple files together The get_raw() method can be much faster, since for most file formats the length of each record is stored as well as its offset. 5.5 Writing sequence files We've talked about using Bio.SeqIO.parse() for sequence input (reading files), and now we'll look at Bio.SeqIO.write() which is for sequence output (writing files). This is a function taking three arguments: some SeqRecord objects, a handle or filename to write to, and a sequence format. Here is an example, where we start by creating a few SeqRecord objects the hard way (by hand, rather than by loading them from a file): End of explanation SeqIO.write(my_records, "data/my_example.faa", "fasta") Explanation: Now we have a list of SeqRecord objects, we'll write them to a FASTA format file: End of explanation from Bio import SeqIO records = SeqIO.parse("data/ls_orchid.gbk", "genbank") count = SeqIO.write(records, "data/my_example.fasta", "fasta") print("Converted %i records" % count) Explanation: And if you open this file in your favourite text editor it should look like this: &gt;gi|14150838|gb|AAK54648.1|AF376133_1 chalcone synthase [Cucumis sativus] MMYQQGCFAGGTVLRLAKDLAENNRGARVLVVCSEITAVTFRGPSETHLDSMVGQALFGD GAGAVIVGSDPDLSVERPLYELVWTGATLLPDSEGAIDGHLREVGLTFHLLKDVPGLISK NIEKSLKEAFTPLGISDWNSTFWIAHPGGPAILDQVEAKLGLKEEKMRATREVLSEYGNM SSAC &gt;gi|13919613|gb|AAK33142.1| chalcone synthase [Fragaria vesca subsp. bracteata] YPDYYFRITNREHKAELKEKFQRMCDKSMIKKRYMYLTEEILKENPSMCEYMAPSLDARQ DMVVVEIPKLGKEAAVKAIKEWGQ &gt;gi|13925890|gb|AAK49457.1| chalcone synthase [Nicotiana tabacum] MVTVEEFRRAQCAEGPATVMAIGTATPSNCVDQSTYPDYYFRITNSEHKVELKEKFKRMC EKSMIKKRYMHLTEEILKENPNICAYMAPSLDARQDIVVVEVPKLGKEAAQKAIKEWGQP KSKITHLVFCTTSGVDMPGCDYQLTKLLGLRPSVKRFMMYQQGCFAGGTVLRMAKDLAEN NKGARVLVVCSEITAVTFRGPNDTHLDSLVGQALFGDGAAAVIIGSDPIPEVERPLFELV SAAQTLLPDSEGAIDGHLREVGLTFHLLKDVPGLISKNIEKSLVEAFQPLGISDWNSLFW IAHPGGPAILDQVELKLGLKQEKLKATRKVLSNYGNMSSACVLFILDEMRKASAKEGLGT TGEGLEWGVLFGFGPGLTVETVVLHSVAT Suppose you wanted to know how many records the Bio.SeqIO.write() function wrote to the handle? If your records were in a list you could just use len(my_records), however you can't do that when your records come from a generator/iterator. TheBio.SeqIO.write() function returns the number of SeqRecord objects written to the file. Note - If you tell the Bio.SeqIO.write() function to write to a file that already exists, the old file will be overwritten without any warning. 5.5.1 Round trips Some people like their parsers to be 'round-tripable', meaning if you read in a file and write it back out again it is unchanged. This requires that the parser must extract enough information to reproduce the original file exactly. Bio.SeqIO does not aim to do this. As a trivial example, any line wrapping of the sequence data in FASTA files is allowed. 
An identical SeqRecord would be given from parsing the following two examples which differ only in their line breaks: &gt;YAL068C-7235.2170 Putative promoter sequence TACGAGAATAATTTCTCATCATCCAGCTTTAACACAAAATTCGCACAGTTTTCGTTAAGA GAACTTAACATTTTCTTATGACGTAAATGAAGTTTATATATAAATTTCCTTTTTATTGGA &gt;YAL068C-7235.2170 Putative promoter sequence TACGAGAATAATTTCTCATCATCCAGCTTTAACACAAAATTCGCA CAGTTTTCGTTAAGAGAACTTAACATTTTCTTATGACGTAAATGA AGTTTATATATAAATTTCCTTTTTATTGGA To make a round-tripable FASTA parser you would need to keep track of where the sequence line breaks occurred, and this extra information is usually pointless. Instead Biopython uses a default line wrapping of $60$ characters on output. The same problem with white space applies in many other file formats too. Another issue in some cases is that Biopython does not (yet) preserve every last bit of annotation (e.g. GenBank and EMBL). Occasionally preserving the original layout (with any quirks it may have) is important. See Section 5.4.2.2 about the get_raw() method of the Bio.SeqIO.index() dictionary-like object for one potential solution. 5.5.2 Converting between sequence file formats In previous example we used a list of SeqRecord objects as input to the Bio.SeqIO.write() function, but it will also accept a SeqRecord iterator like we get from Bio.SeqIO.parse() - this lets us do file conversion by combining these two functions. For this example we'll read in the GenBank format file and write it out in FASTA format: End of explanation from Bio import SeqIO count = SeqIO.convert("data/ls_orchid.gbk", "genbank", "data/my_example.fasta", "fasta") print("Converted %i records" % count) Explanation: Still, that is a little bit complicated. So, because file conversion is such a common task, there is a helper function letting you replace that with just: End of explanation from Bio import SeqIO help(SeqIO.convert) Explanation: The Bio.SeqIO.convert() function will take handles or filenames. Watch out though - if the output file already exists, it will overwrite it! To find out more, see the built in help: End of explanation from Bio import SeqIO for i, record in enumerate(SeqIO.parse("data/ls_orchid.gbk", "genbank")): print(record.id) print(record.seq.reverse_complement()) if i == 2: # 3 is enough break Explanation: In principle, just by changing the filenames and the format names, this code could be used to convert between any file formats available in Biopython. However, writing some formats requires information (e.g. quality scores) which other files formats don’t contain. For example, while you can turn a FASTQ file into a FASTA file, you can’t do the reverse. See also Sections 20.1.9 and 20.1.10 in the cookbook chapter which looks at inter-converting between different FASTQ formats. Finally, as an added incentive for using the Bio.SeqIO.convert() function (on top of the fact your code will be shorter), doing it this way may also be faster! The reason for this is the convert function can take advantage of several file format specific optimisations and tricks. 5.5.3 Converting a file of sequences to their reverse complements Suppose you had a file of nucleotide sequences, and you wanted to turn it into a file containing their reverse complement sequences. This time a little bit of work is required to transform the SeqRecord objects we get from our input file into something suitable for saving to our output file. 
To start with, we’ll use Bio.SeqIO.parse() to load some nucleotide sequences from a file, then print out their reverse complements using the Seq object’s built in .reverse_complement() method (see Section 3.6): End of explanation from Bio import SeqIO records = [rec.reverse_complement(id="rc_"+rec.id, description = "reverse complement") \ for rec in SeqIO.parse("data/ls_orchid.fasta", "fasta")] len(records) Explanation: Now, if we want to save these reverse complements to a file, we’ll need to make SeqRecord objects. We can use the SeqRecord object’s built in .reverse_complement() method (see Section 4.9) but we must decide how to name our new records. This is an excellent place to demonstrate the power of list comprehensions which make a list in memory: End of explanation records = [rec.reverse_complement(id="rc_"+rec.id, description = "reverse complement") \ for rec in SeqIO.parse("data/ls_orchid.fasta", "fasta") if len(rec)<700] len(records) Explanation: Now list comprehensions have a nice trick up their sleeves, you can add a conditional statement: End of explanation records = (rec.reverse_complement(id="rc_"+rec.id, description = "reverse complement") \ for rec in SeqIO.parse("data/ls_orchid.fasta", "fasta") if len(rec)<700) Explanation: That would create an in memory list of reverse complement records where the sequence length was under 700 base pairs. However, we can do exactly the same with a generator expression - but with the advantage that this does not create a list of all the records in memory at once: End of explanation from Bio import SeqIO records = (rec.reverse_complement(id="rc_"+rec.id, description = "reverse complement") \ for rec in SeqIO.parse("data/ls_orchid.fasta", "fasta") if len(rec)<700) SeqIO.write(records, "data/rev_comp.fasta", "fasta") Explanation: As a complete example: End of explanation from Bio import SeqIO from io import StringIO records = SeqIO.parse("data/ls_orchid.gbk", "genbank") out_handle = StringIO() SeqIO.write(records, out_handle, "fasta") fasta_data = out_handle.getvalue() print(fasta_data[:500]) Explanation: There is a related example in Section 20.1.3, translating each record in a FASTA file from nucleotides to amino acids. 5.5.4 Getting your SeqRecord objects as formatted strings Suppose that you don't really want to write your records to a file or handle - instead you want a string containing the records in a particular file format. The Bio.SeqIO interface is based on handles, but Python has a useful built in module which provides a string based handle. For an example of how you might use this, let's load in a bunch of SeqRecord objects from our orchids GenBank file, and create a string containing the records in FASTA format: End of explanation from Bio import SeqIO with open("data/ls_orchid_long.tab", "w") as out_handle: for record in SeqIO.parse("data/ls_orchid.gbk", "genbank"): if len(record) > 100: out_handle.write(record.format("tab")) Explanation: This isn’t entirely straightforward the first time you see it! On the bright side, for the special case where you would like a string containing a single record in a particular file format, use the the SeqRecord class’ format() method (see Section 4.6). 
Note that although we don’t encourage it, you can use the format() method to write to a file, for example something like this: End of explanation from Bio import SeqIO records = (rec for rec in SeqIO.parse("data/ls_orchid.gbk", "genbank") if len(rec) > 100) SeqIO.write(records, "data/ls_orchid.tab", "tab") Explanation: While this style of code will work for a simple sequential file format like FASTA or the simple tab separated format used here, it will not work for more complex or interlaced file formats. This is why we still recommend using Bio.SeqIO.write(), as in the following example: End of explanation from Bio.SeqIO.FastaIO import SimpleFastaParser count = 0 total_len = 0 with open("data/ls_orchid.fasta") as in_handle: for title, seq in SimpleFastaParser(in_handle): count += 1 total_len += len(seq) print("%i records with total sequence length %i" % (count, total_len)) Explanation: Making a single call to SeqIO.write(...) is also much quicker than multiple calls to the SeqRecord.format(...) method. 5.6 Low level FASTA and FASTQ parsers Working with the low-level SimpleFastaParser or FastqGeneralIterator is often more practical than Bio.SeqIO.parse when dealing with large high-throughput FASTA or FASTQ sequencing files where speed matters. As noted in the introduction to this chapter, the file-format neutral Bio.SeqIO interface has the overhead of creating many objects even for simple formats like FASTA. When parsing FASTA files, internally Bio.SeqIO.parse() calls the low-level SimpleFastaParser with the file handle. You can use this directly - it iterates over the file handle returning each record as a tuple of two strings, the title line (everything after the > character) and the sequence (as a plain string): End of explanation out_handle.write(">%s\n%s\n" % (title, seq)) Explanation: As long as you don’t care about line wrapping (and you probably don’t for short read high-througput data), then outputing FASTA format from these strings is also very fast: End of explanation from Bio.SeqIO.QualityIO import FastqGeneralIterator count = 0 total_len = 0 with open("data/example.fastq") as in_handle: for title, seq, qual in FastqGeneralIterator(in_handle): count += 1 total_len += len(seq) print("%i records with total sequence length %i" % (count, total_len)) Explanation: Likewise, when parsing FASTQ files, internally Bio.SeqIO.parse() calls the low-level FastqGeneralIterator with the file handle. If you don’t need the quality scores turned into integers, or can work with them as ASCII strings this is ideal: End of explanation out_handle.write("@%s\n%s\n+\n%s\n" % (title, seq, qual)) Explanation: There are more examples of this in the Cookbook (Chapter 20), including how to output FASTQ efficiently from strings using this code snippet: End of explanation
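The two bare out_handle.write(...) lines above assume a handle that was opened elsewhere. A minimal self-contained sketch of the same idea, converting FASTQ to FASTA with the low-level parser, is given below; the file names are placeholders rather than files shipped with the tutorial.

from Bio.SeqIO.QualityIO import FastqGeneralIterator

count = 0
with open("data/example.fastq") as in_handle, open("data/example_converted.fasta", "w") as out_handle:
    for title, seq, qual in FastqGeneralIterator(in_handle):
        # The quality string is parsed but deliberately ignored; FASTA only keeps
        # the title line and the sequence.
        out_handle.write(">%s\n%s\n" % (title, seq))
        count += 1
print("Converted %i records" % count)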
11,272
Given the following text description, write Python code to implement the functionality described below step by step Description: 2.0. Co-author Graphs Social network analysis is a popular method in the field of historical network research. By far the most accessible source of information about social networks in science are bibliographic metadata Step1: We can see how many records were parsed by taking the len() of the metadata. Step2: Create the graph Tethne's graph-building methods create NetworkX Graph objects. Step3: We can see how many authors (nodes) and edges are present by using the .order() and .size() methods, respectively. Step4: YIKES! That's really big. Of course, for analytic purposes this shouldn't scare us. But it would be nice to make a visualization, and that's an awfully large graph to lay out in the time available during the class. So we'll be a bit more choosy. The coauthors() function accepts a few optional arguments. min_weight sets the minimum number of papers that two authors must publish together for an edge to be created between them. Step5: That has a fairly significant impact on the overall size of the graph. Step6: The resulting graph looks something like this Step7: Closeness Centrality Now let's try closeness centrality. Recall that Step8: Betweenness Centrality Betweenness centrality is often interpreted as a measure of "power" in social networks -- the extent to which an individual controls information flow across the graph. Recall Step9: Change over time We can use Tethne to look at how graphs evolve. To do this, we use the slice() method. slice() is an iterator, which means that you can iterate over it (e.g. use it in a for-loop). It yields subsets of the metadata, and we can use those build a graph for each time period. By default, it returns a subset for each year. Step10: The resulting graph is pretty sparse. Sometimes it's helpful to use a sliding time-window Step11: This looks a bit better Step12: International collaboration We can use the information in the authorAddress attribute of each record to look at collaboration between scientists in different countries. Note that the address is a bit "messy" -- it was written for human use, not for computers. Step13: So we define a procedure to extract the country from each address. Note that the country seems to come at the end, so... * We split on commas and take the last element; * We have to strip the '.' at the end; * Note that USA addresses have state zip and country in the last position; Step14: Let's make a graph by hand. Step15: Step16: With whom does the Netherlands collaborate? Step17: Has collaboration between Netherlands and specific neighbors changed over time? Step18: How about with itself? I.e. is Netherlands becoming more or less externally collaborative over time? Step19: How central is the Netherlands in terms of international collaboration? Step20: Below Step21: I wonder if that upward trend has anything to do with the overall number of authors or number of papers in each time period. Step22: That's an interesting pattern -- the average number of authors in each period steadily increases, while the number of papers decreases. That certainly might explain the additional clustering! Let's combine these two values, so that we'll have $ \frac{N_{Authors}}{N_{Papers}} * \frac{1}{N_{Papers}} = \frac{N_{Authors}}{N_{Papers}^2} $ And compare that to the average clustering coefficient.
Python Code: from tethne.readers import wos metadata = wos.read('../data/Baldwin/PlantPhysiology', streaming=True, index_fields=['date'], index_features=['authors']) Explanation: 2.0. Co-author Graphs Social network analysis is a popular method in the field of historical network research. By far the most accessible source of information about social networks in science are bibliographic metadata: we can examine relationships among published authors by modeling coauthorship as a graph. Of course, using bibliographic metadata alone has some drawbacks; for one, it renders many participants in scientific production invisible by focusing only on those who "merit" authorship -- laboratory technicians and other support staff, for example, aren't usually included as authors. Nevertheless, the sheer volume of available data makes bibliographic metadata an attractive place to start. In this notebook, we'll use metadata collected from the Web of Science database to build a coauthor graph. For background, reading, and resources (including instructions on how to download WoS metadata in the correct format), please see this page. Loading data I have provided some sample WoS metadata in the data directory. Tethne (the Python package that we will use to parse these metadata) can parse multiple files at once; we just pass the path to the folder that contains our metadata files. This set of metadata is from several years of the journal Plant Physiology -- so this is an artificially myopic sample of texts, but it works just fine for demonstration purposes. End of explanation len(metadata) Explanation: We can see how many records were parsed by taking the len() of the metadata. End of explanation from tethne import coauthors graph = coauthors(metadata, edge_attrs=['date']) # Just pass the metadata! Explanation: Create the graph Tethne's graph-building methods create NetworkX Graph objects. End of explanation graph.order(), graph.size(), nx.number_connected_components(graph) Explanation: We can see how many authors (nodes) and edges are present by using the .order() and .size() methods, respectively. End of explanation graph = coauthors(metadata, min_weight=2., edge_attrs=['date']) graph.order(), graph.size(), nx.number_connected_components(graph) Explanation: YIKES! That's really big. Of course, for analytic purposes this shouldn't scare us. But it would be nice to make a visualization, and that's an awfully large graph to lay out in the time available during the class. So we'll be a bit more choosy. The coauthors() function accepts a few optional arguments. min_weight sets the minimum number of papers that two authors must publish together for an edge to be created between them. End of explanation nx.write_graphml(graph, 'coauthors.graphml') Explanation: That has a fairly significant impact on the overall size of the graph. End of explanation degree_data = pd.DataFrame(columns=['Surname', 'Forename', 'Degree']) i = 0 for (surname, forename), d in nx.degree(graph).items(): degree_data.loc[i] = [surname, forename, d] i += 1 plt.hist(degree_data.Degree, bins=np.arange(0, 70, 1)) plt.ylabel('Number of nodes') plt.xlabel('Degree') plt.show() # np.argsort() gives us row indices in ascending order of # value. [::-1] reverses the array, so that the values are # descending. sort_indices = np.argsort(degree_data.Degree)[::-1] # Here are the nodes with the highest degree. 
for i in sort_indices[:5]: # (just the top 5) print degree_data.loc[i] print '-' * 25 Explanation: The resulting graph looks something like this: There is a large connected component in the upper left, and then a whole host of smaller connected components. Central authors Depending on our research question, we may be interested in identifying the most "important" or "central" nodes in the coauthor graph. Networkx has a whole bunch of algorithms that you can use to analyze your graph. Please take a look through the list; there are quite a few useful functions in there! Degree Centrality Let's evaluate the degree-centrality of the nodes in our network. End of explanation closeness_data = pd.DataFrame(columns=['Surname', 'Forename', 'Closeness']) i = 0 for (surname, forename), d in nx.closeness_centrality(graph).items(): closeness_data.loc[i] = [surname, forename, d] i += 1 plt.hist(closeness_data.Closeness) plt.ylabel('Number of nodes') plt.xlabel('Closeness Centrality') plt.show() # And we can get the nodes with the highest closeness... sort_indices = np.argsort(closeness_data.Closeness)[::-1] # Here are the nodes with the highest degree. for i in sort_indices[:5]: # (just the top 5) print closeness_data.loc[i] print '-' * 25 Explanation: Closeness Centrality Now let's try closeness centrality. Recall that: $ C(x) = \frac{1}{\sum_y d(y, x)} $ Where $d(y, x)$ is the distance between the nodes $y$ and $x$. $d(y, x)$ can be any function that calculates a distance value for two nodes. $y$ is all of the other nodes in the network (i.e. that are not $x$) reachable from $x$. By default, NetworkX uses shortest path length as the distance parameter, and normalizes closeness based on the size of the connected component (i.e. the number of possible paths). End of explanation betweenness_data = pd.DataFrame(columns=['Surname', 'Forename', 'Betweenness']) i = 0 for (surname, forename), d in nx.betweenness_centrality(graph).items(): betweenness_data.loc[i] = [surname, forename, d] i += 1 plt.hist(betweenness_data.Betweenness) # It's a pretty lopsided distribution, so a log scale makes it # easier to see. plt.yscale('log') plt.ylabel('Number of nodes') plt.xlabel('Betweenness Centrality') plt.show() Explanation: Betweenness Centrality Betweenness centrality is often interpreted as a measure of "power" in social networks -- the extent to which an individual controls information flow across the graph. Recall: $ g(x) = \sum_{s \neq x \neq t} \frac{\sigma_{st}(x)}{\sigma_{st}} $ Where $\sigma_{st}$ is the total number of shortest paths from node $s$ to node $t$, and $\sigma_{st}(x)$ is the number of those paths that pass through node $x$. Note that betweenness centrality is fairly computationally expensive, since we have to calculate all of the shortest paths for all of the nodes in the network. So this may take a minute or two. End of explanation graphs = [] years = [] for year, subset in metadata.slice(feature_name='authors'): graph = coauthors(subset) graphs.append(graph) years.append(year) print '\r', year, for year, graph in zip(years, graphs): print year, graph.order(), graph.size(), nx.number_connected_components(graph) nx.write_graphml(graphs[0], 'coauthors_1999.graphml') Explanation: Change over time We can use Tethne to look at how graphs evolve. To do this, we use the slice() method. slice() is an iterator, which means that you can iterate over it (e.g. use it in a for-loop). It yields subsets of the metadata, and we can use those build a graph for each time period. 
By default, it returns a subset for each year. End of explanation graphs = [] years = [] for year, subset in metadata.slice(window_size=3, feature_name='authors'): graph = coauthors(subset) graphs.append(graph) years.append(year) print '\r', year, for year, graph in zip(years, graphs): print year, graph.order(), graph.size(), nx.number_connected_components(graph) nx.write_graphml(graphs[0], 'coauthors_1999-2001.graphml') Explanation: The resulting graph is pretty sparse. Sometimes it's helpful to use a sliding time-window: select subsets of several years, that overlap. This "smooths" things out. We'll try 3 years, since that's a pretty typical funding cycle. End of explanation focal_author = ('FERNIE', 'ALISDAIR R') fernie_data = pd.DataFrame(columns=['Year', 'Degree']) i = 0 for year, graph in zip(years, graphs): degree = nx.degree(graph) # If the focal author is not in the graph for this time-period, then # we will assign a closeness of 0.0. focal_degree = degree.get(focal_author, 0.0) fernie_data.loc[i] = [year, focal_degree] i += 1 plt.scatter(fernie_data.Year, fernie_data.Degree) plt.ylabel('Degree Centrality') plt.xlabel('Year') fernie_closeness = pd.DataFrame(columns=['Year', 'Closeness']) i = 0 for year, subset in metadata.slice(window_size=3): graph = coauthors(subset, min_weight=2.) if focal_author in graph.nodes(): focal_closeness = nx.algorithms.closeness_centrality(graph, u=focal_author) else: focal_closeness = 0.0 fernie_closeness.loc[i] = [year, focal_closeness] print '\r', year, i += 1 plt.scatter(fernie_closeness.Year, fernie_closeness.Closeness) plt.ylabel('Closeness Centrality') plt.xlabel('Year') plt.show() Explanation: This looks a bit better: Following a specific node. End of explanation metadata[5].authorAddress metadata[205].authorAddress len(metadata) Explanation: International collaboration We can use the information in the authorAddress attribute of each record to look at collaboration between scientists in different countries. Note that the address is a bit "messy" -- it was written for human use, not for computers. End of explanation def extract_country(address): country = address.split(',')[-1].strip().replace('.', '') if country.endswith('USA'): return u'USA' return country print metadata[205].authorAddress[0] print extract_country(metadata[205].authorAddress[0]) Explanation: So we define a procedure to extract the country from each address. Note that the country seems to come at the end, so... * We split on commas and take the last element; * We have to strip the '.' at the end; * Note that USA addresses have state zip and country in the last position; End of explanation from collections import Counter from itertools import combinations node_counts = Counter() edge_counts = Counter() for paper in metadata: if not hasattr(paper, 'authorAddress'): continue addresses = getattr(paper, 'authorAddress', []) if not type(addresses) is list: addresses = [addresses] countries = [extract_country(address) for address in addresses] # Combinations is pretty cool. It will give us all of the # possible combinations of countries in this paper. for u, v in combinations(countries, 2): edge_key = tuple(sorted([u, v])) edge_counts[edge_key] += 1. for u in set(countries): node_counts[u] += 1. international = nx.Graph() for u, count in node_counts.items(): international.add_node(u, weight=count) for (u, v), count in edge_counts.items(): if count > 1. 
and u != v: international.add_edge(u, v, weight=count) nx.adjacency_matrix(international).todense() plt.imshow(nx.adjacency_matrix(international).todense(), interpolation='none') print international.order(), \ international.size(), \ nx.number_connected_components(international) nx.write_graphml(international, 'international.graphml') Explanation: Let's make a graph by hand. End of explanation def international_collaboration(subset): node_counts = Counter() edge_counts = Counter() for paper in subset: if not hasattr(paper, 'authorAddress'): continue addresses = getattr(paper, 'authorAddress', []) if not type(addresses) is list: addresses = [addresses] countries = [extract_country(address) for address in addresses] # Combinations is pretty cool. It will give us all of the # possible combinations of countries in this paper. for u, v in combinations(countries, 2): edge_key = tuple(sorted([u, v])) edge_counts[edge_key] += 1. for u in set(countries): node_counts[u] += 1. graph = nx.Graph() for u, count in node_counts.items(): graph.add_node(u, weight=count) for (u, v), count in edge_counts.items(): if count > 1.: graph.add_edge(u, v, weight=count) return graph years = [] graphs = [] for year, subset in metadata.slice(window_size=3): graph = international_collaboration(subset) graphs.append(graph) years.append(year) print year, graph.order(), graph.size(), Explanation: End of explanation netherlands_data = pd.DataFrame(columns=['Year', 'Neighbor', 'Collaboration']) i = 0 for year, graph in zip(years, graphs): if 'Netherlands' not in graph.nodes(): continue counts = Counter() for neighbor in graph.neighbors('Netherlands'): counts[neighbor] += graph['Netherlands'][neighbor]['weight'] N_all = sum(counts.values()) for neighbor, count in counts.items(): netherlands_data.loc[i] = [year, neighbor, count/N_all] i += 1 print '\r', year, grouped = netherlands_data.groupby('Neighbor') collaboration_means = grouped.Collaboration.mean() collaboration_std = grouped.Collaboration.std() grouped.Collaboration.describe() positions = np.arange(len(collaboration_means)) plt.bar(positions, collaboration_means.values, yerr=collaboration_std, alpha=0.5) plt.xticks(positions + 0.4, collaboration_means.keys(), rotation=90) plt.ylabel('Proportion of collaborations (with standard error)') plt.show() Explanation: With whom does the Netherlands collaborate? End of explanation collaboration_england = netherlands_data[netherlands_data.Neighbor == 'England'] plt.scatter(collaboration_england.Year, collaboration_england.Collaboration) plt.show() collaboration_usa = netherlands_data[netherlands_data.Neighbor == 'USA'] plt.scatter(collaboration_usa.Year, collaboration_usa.Collaboration) plt.show() Explanation: Has collaboration between Netherlands and specific neighbors changed over time? End of explanation collaboration_self = netherlands_data[netherlands_data.Neighbor == 'Netherlands'] plt.scatter(collaboration_self.Year, collaboration_self.Collaboration) plt.show() Explanation: How about with itself? I.e. is Netherlands becoming more or less externally collaborative over time? 
End of explanation netherlands_centrality = pd.DataFrame(columns=['Year', 'Centrality']) i = 0 for year, graph in zip(years, graphs): if 'Netherlands' not in graph.nodes(): continue centrality = nx.closeness_centrality(graph, u='Netherlands') netherlands_centrality.loc[i] = [year, centrality] i += 1 plt.scatter(netherlands_centrality.Year, netherlands_centrality.Centrality) Explanation: How central is the Netherlands in terms of international collaboration? End of explanation clustering_data = pd.DataFrame(columns=['Year', 'Clustering']) i = 0 # zip() zips two lists together into a list of 2-tuples. for year, graph in zip(years, graphs): clustering_data.loc[i] = [year, nx.algorithms.average_clustering(graph)] i += 1 plt.scatter(clustering_data.Year, clustering_data.Clustering) plt.ylabel('Average Clustering Coefficient') plt.xlabel('Year') plt.show() Explanation: Below: to be completed Connectivity over time Ok, we have a graph for each time-period. Let's look at some whole-graph parameters. How about average clustering. $$ C = \frac{1}{n} \sum_{v \in G} c_v $$ and $$ c_u = \frac{2 T}{deg(u)(deg(u) - 1)} $$ where $T(u)$ is the number of triangles through node $u$, and $deg(u)$ is the degree of $u$. End of explanation authorship_data = pd.DataFrame(columns=['Year', 'AverageNumAuthors', 'NumPapers']) i = 0 for year, subset in metadata.slice(window_size=3): avg_no_authors = np.mean([len(paper.authors) for paper in subset]) authorship_data.loc[i] = [year, avg_no_authors, len(subset)] i += 1 plt.scatter(authorship_data.Year, authorship_data.AverageNumAuthors, c='green') plt.legend(loc=4) plt.ylabel('Average Number of Authors (green)') ax = plt.gca() ax2 = plt.twinx() ax2.scatter(authorship_data.Year, authorship_data.NumPapers) plt.ylabel('Number of Papers (blue)') plt.xlabel('Year') plt.legend(loc='best') plt.show() Explanation: I wonder if that upward trend has anything to do with the overall number of authors or number of papers in each time period. End of explanation plt.scatter(authorship_data.AverageNumAuthors/authorship_data.NumPapers, clustering_data.Clustering) xlim(0.002, 0.006) plt.xlabel('$\\frac{N_{Authors}}{N_{Papers}^2}$', size=24) plt.ylabel('Average Clustering Coefficient') plt.show() from scipy.stats import linregress authorship = authorship_data.AverageNumAuthors/authorship_data.NumPapers Beta, Beta0, r, p, stderr = linregress(authorship, clustering_data.Clustering) plt.scatter(authorship_data.AverageNumAuthors/authorship_data.NumPapers, clustering_data.Clustering) X = np.arange(authorship.min(), authorship.max(), 0.0001) plt.plot(X, Beta0 + Beta*X) xlim(0.002, 0.006) plt.xlabel('$\\frac{N_{Authors}}{N_{Papers}^2}$', size=24) plt.ylabel('Average Clustering Coefficient') plt.show() Y_hat = Beta0 + Beta*authorship residuals = clustering_data.Clustering - Y_hat plt.scatter(clustering_data.Year, residuals) Explanation: That's an interesting pattern -- the average number of authors in each period steadily increases, while the number of papers decreases. That certainly might explain the additional clustering! Let's combine these two values, so that we'll have $ \frac{N_{Authors}}{N_{Papers}} * \frac{1}{N_{Papers}} = \frac{N_{Authors}}{N_{Papers}^2} $ And compare that to the average clustering coefficient. End of explanation
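As an editorial aside, the notebook's own snippets use Python 2 print statements and an older NetworkX API. The stand-alone toy example below is not part of the original notebook; it shows the same three centrality measures on a small hand-built graph with current NetworkX and Python 3, so the calls can be tried without the Web of Science data or Tethne.

import networkx as nx

toy = nx.Graph()
toy.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("D", "E")])

print(dict(toy.degree()))              # raw degree of each node
print(nx.closeness_centrality(toy))    # 1 / sum of shortest-path distances, normalised
print(nx.betweenness_centrality(toy))  # share of shortest paths passing through each node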
11,273
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="https Step1: Data We use the "Howell" dataset, which consists of measurements of height, weight, age and sex, of a certain foraging tribe, collected by Nancy Howell. Step2: Empirical mean and std. Step3: Model We use the following model for the heights (in cm) Step4: Posterior samples. Step5: posterior marginals. Step6: Laplace approximation See the documentation Optimization Step7: Posterior samples. Step8: Extract 2d joint posterior The Gaussian approximation is over transformed parameters. Step9: We can sample from the posterior, which return results in the original parameterization. Step10: Variational inference We use $q(\mu,\sigma) = N(\mu|m,s) Ga(\sigma|a,b)$ Step11: Extract Variational parameters. Step12: Posterior samples Step13: MCMC
Python Code: import numpy as np np.set_printoptions(precision=3) import matplotlib.pyplot as plt import math import os import warnings import pandas as pd # from scipy.interpolate import BSpline # from scipy.stats import gaussian_kde !mkdir figures !pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro import jax print("jax version {}".format(jax.__version__)) print("jax backend {}".format(jax.lib.xla_bridge.get_backend().platform)) import jax.numpy as jnp from jax import random, vmap rng_key = random.PRNGKey(0) rng_key, rng_key_ = random.split(rng_key) import numpyro import numpyro.distributions as dist from numpyro.distributions import constraints from numpyro.distributions.transforms import AffineTransform from numpyro.diagnostics import hpdi, print_summary from numpyro.infer import Predictive from numpyro.infer import MCMC, NUTS from numpyro.infer import SVI, Trace_ELBO, init_to_value from numpyro.infer.autoguide import AutoLaplaceApproximation import numpyro.optim as optim !pip install arviz import arviz as az Explanation: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/gaussian_param_inf_1d_numpyro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Inference for the parameters of a 1d Gaussian using a non-conjugate prior We illustrate various inference methods using the example in sec 4.3 ("Gaussian model of height") of Statistical Rethinking ed 2. This requires computing $p(\mu,\sigma|D)$ using a Gaussian likelihood but a non-conjugate prior. The numpyro code is from Du Phan's site. End of explanation # url = 'https://github.com/fehiepsi/rethinking-numpyro/tree/master/data/Howell1.csv?raw=True' url = "https://raw.githubusercontent.com/fehiepsi/rethinking-numpyro/master/data/Howell1.csv" Howell1 = pd.read_csv(url, sep=";") d = Howell1 d.info() d.head() # get data for adults d2 = d[d.age >= 18] N = len(d2) ndx = jax.random.permutation(rng_key, N) data = d2.height.values[ndx] N = 20 # take a subset of the 354 samples data = data[:N] Explanation: Data We use the "Howell" dataset, which consists of measurements of height, weight, age and sex, of a certain foraging tribe, collected by Nancy Howell. End of explanation print(len(data)) print(np.mean(data)) print(np.std(data)) Explanation: Empirical mean and std. 
End of explanation mu_prior = dist.Normal(178, 20) sigma_prior = dist.Uniform(0, 50) mu_range = [150, 160] sigma_range = [4, 14] ngrid = 100 plot_square = False mu_list = jnp.linspace(start=mu_range[0], stop=mu_range[1], num=ngrid) sigma_list = jnp.linspace(start=sigma_range[0], stop=sigma_range[1], num=ngrid) mesh = jnp.meshgrid(mu_list, sigma_list) print([mesh[0].shape, mesh[1].shape]) print(mesh[0].reshape(-1).shape) post = {"mu": mesh[0].reshape(-1), "sigma": mesh[1].reshape(-1)} post["LL"] = vmap(lambda mu, sigma: jnp.sum(dist.Normal(mu, sigma).log_prob(data)))(post["mu"], post["sigma"]) logprob_mu = mu_prior.log_prob(post["mu"]) logprob_sigma = sigma_prior.log_prob(post["sigma"]) post["prob"] = post["LL"] + logprob_mu + logprob_sigma post["prob"] = jnp.exp(post["prob"] - jnp.max(post["prob"])) prob = post["prob"] / jnp.sum(post["prob"]) # normalize over the grid prob2d = prob.reshape(ngrid, ngrid) prob_mu = jnp.sum(prob2d, axis=0) prob_sigma = jnp.sum(prob2d, axis=1) plt.figure() plt.plot(mu_list, prob_mu, label="mu") plt.legend() plt.savefig("figures/gauss_params_1d_post_grid_marginal_mu.pdf", dpi=300) plt.show() plt.figure() plt.plot(sigma_list, prob_sigma, label="sigma") plt.legend() plt.savefig("figures/gauss_params_1d_post_grid_marginal_sigma.pdf", dpi=300) plt.show() plt.contour( post["mu"].reshape(ngrid, ngrid), post["sigma"].reshape(ngrid, ngrid), post["prob"].reshape(ngrid, ngrid), ) plt.xlabel(r"$\mu$") plt.ylabel(r"$\sigma$") if plot_square: plt.axis("square") plt.savefig("figures/gauss_params_1d_post_grid_contours.pdf", dpi=300) plt.show() plt.imshow( post["prob"].reshape(ngrid, ngrid), origin="lower", extent=(mu_range[0], mu_range[1], sigma_range[0], sigma_range[1]), aspect="auto", ) plt.xlabel(r"$\mu$") plt.ylabel(r"$\sigma$") if plot_square: plt.axis("square") plt.savefig("figures/gauss_params_1d_post_grid_heatmap.pdf", dpi=300) plt.show() Explanation: Model We use the following model for the heights (in cm): $$ \begin{align} h_i &\sim N(\mu,\sigma) \ \mu &\sim N(178, 20) \ \sigma &\sim U(0,50) \end{align} $$ The prior for $\mu$ has a mean 178cm, since that is the height of Richard McElreath, the author of the "Statisical Rethinking" book. The standard deviation is 20, so that 90\% of people lie in the range 138--218. The prior for $\sigma$ has a lower bound of 0 (since it must be positive), and an upper bound of 50, so that the interval $[\mu-\sigma, \mu+\sigma]$ has width 100cm, which seems sufficiently large to capture human heights. Note that this is not a conjugate prior, so we will just approximate the posterior. But since there are just 2 unknowns, this will be easy. Grid posterior End of explanation nsamples = 5000 # int(1e4) sample_rows = dist.Categorical(probs=prob).sample(random.PRNGKey(0), (nsamples,)) sample_mu = post["mu"][sample_rows] sample_sigma = post["sigma"][sample_rows] samples = {"mu": sample_mu, "sigma": sample_sigma} print_summary(samples, 0.95, False) plt.scatter(samples["mu"], samples["sigma"], s=64, alpha=0.1, edgecolor="none") plt.xlim(mu_range[0], mu_range[1]) plt.ylim(sigma_range[0], sigma_range[1]) plt.xlabel(r"$\mu$") plt.ylabel(r"$\sigma$") plt.axis("square") plt.show() az.plot_kde(samples["mu"], samples["sigma"]) plt.xlim(mu_range[0], mu_range[1]) plt.ylim(sigma_range[0], sigma_range[1]) plt.xlabel(r"$\mu$") plt.ylabel(r"$\sigma$") if plot_square: plt.axis("square") plt.savefig("figures/gauss_params_1d_post_grid.pdf", dpi=300) plt.show() Explanation: Posterior samples. 
End of explanation print(hpdi(samples["mu"], 0.95)) print(hpdi(samples["sigma"], 0.95)) fig, ax = plt.subplots() az.plot_kde(samples["mu"], ax=ax, label=r"$\mu$") fig, ax = plt.subplots() az.plot_kde(samples["sigma"], ax=ax, label=r"$\sigma$") Explanation: posterior marginals. End of explanation def model(data): mu = numpyro.sample("mu", mu_prior) sigma = numpyro.sample("sigma", sigma_prior) numpyro.sample("height", dist.Normal(mu, sigma), obs=data) guide = AutoLaplaceApproximation(model) svi = SVI(model, guide, optim.Adam(1), Trace_ELBO(), data=data) svi_result = svi.run(random.PRNGKey(0), 2000) plt.figure() plt.plot(svi_result.losses) start = {"mu": data.mean(), "sigma": data.std()} guide = AutoLaplaceApproximation(model, init_loc_fn=init_to_value(values=start)) svi = SVI(model, guide, optim.Adam(0.1), Trace_ELBO(), data=data) svi_result = svi.run(random.PRNGKey(0), 2000) plt.figure() plt.plot(svi_result.losses) Explanation: Laplace approximation See the documentation Optimization End of explanation samples = guide.sample_posterior(random.PRNGKey(1), svi_result.params, (nsamples,)) print_summary(samples, 0.95, False) plt.scatter(samples["mu"], samples["sigma"], s=64, alpha=0.1, edgecolor="none") plt.xlim(mu_range[0], mu_range[1]) plt.ylim(sigma_range[0], sigma_range[1]) plt.xlabel(r"$\mu$") plt.ylabel(r"$\sigma$") plt.show() az.plot_kde(samples["mu"], samples["sigma"]) plt.xlim(mu_range[0], mu_range[1]) plt.ylim(sigma_range[0], sigma_range[1]) plt.xlabel(r"$\mu$") plt.ylabel(r"$\sigma$") if plot_square: plt.axis("square") plt.savefig("figures/gauss_params_1d_post_laplace.pdf", dpi=300) plt.show() print(hpdi(samples["mu"], 0.95)) print(hpdi(samples["sigma"], 0.95)) fig, ax = plt.subplots() az.plot_kde(samples["mu"], ax=ax, label=r"$\mu$") fig, ax = plt.subplots() az.plot_kde(samples["sigma"], ax=ax, label=r"$\sigma$") Explanation: Posterior samples. End of explanation post = guide.get_posterior(svi_result.params) print(post.mean) print(post.covariance_matrix) def logit(p): return jnp.log(p / (1 - p)) def sigmoid(a): return 1 / (1 + jnp.exp(-a)) scale = 50 print(logit(7.7 / scale)) print(sigmoid(-1.7) * scale) unconstrained_samples = post.sample(rng_key, sample_shape=(nsamples,)) constrained_samples = guide._unpack_and_constrain(unconstrained_samples, svi_result.params) print(unconstrained_samples.shape) print(jnp.mean(unconstrained_samples, axis=0)) print(jnp.mean(constrained_samples["mu"], axis=0)) print(jnp.mean(constrained_samples["sigma"], axis=0)) Explanation: Extract 2d joint posterior The Gaussian approximation is over transformed parameters. End of explanation samples = guide.sample_posterior(random.PRNGKey(1), params, (nsamples,)) x = jnp.stack(list(samples.values()), axis=0) print(x.shape) print("mean of ssamples\n", jnp.mean(x, axis=1)) vcov = jnp.cov(x) print("cov of samples\n", vcov) # variance-covariance matrix # correlation matrix R = vcov / jnp.sqrt(jnp.outer(jnp.diagonal(vcov), jnp.diagonal(vcov))) print("corr of samples\n", R) Explanation: We can sample from the posterior, which return results in the original parameterization. 
End of explanation def guide(data): data_mean = jnp.mean(data) data_std = jnp.std(data) m = numpyro.param("m", data_mean) s = numpyro.param("s", 10, constraint=constraints.positive) a = numpyro.param("a", data_std, constraint=constraints.positive) b = numpyro.param("b", 1, constraint=constraints.positive) mu = numpyro.sample("mu", dist.Normal(m, s)) sigma = numpyro.sample("sigma", dist.Gamma(a, b)) optimizer = numpyro.optim.Momentum(step_size=0.001, mass=0.1) svi = SVI(model, guide, optimizer, loss=Trace_ELBO()) nsteps = 2000 svi_result = svi.run(rng_key_, nsteps, data=data) print(svi_result.params) print(svi_result.losses.shape) plt.plot(svi_result.losses) plt.title("ELBO") plt.xlabel("step") plt.ylabel("loss"); Explanation: Variational inference We use $q(\mu,\sigma) = N(\mu|m,s) Ga(\sigma|a,b)$ End of explanation print(svi_result.params) a = np.array(svi_result.params["a"]) b = np.array(svi_result.params["b"]) m = np.array(svi_result.params["m"]) s = np.array(svi_result.params["s"]) print("empirical mean", jnp.mean(data)) print("empirical std", jnp.std(data)) print(r"posterior mean and std of $\mu$") post_mean = dist.Normal(m, s) print([post_mean.mean, jnp.sqrt(post_mean.variance)]) print(r"posterior mean and std of unconstrained $\sigma$") post_sigma = dist.Gamma(a, b) print([post_sigma.mean, jnp.sqrt(post_sigma.variance)]) Explanation: Extract Variational parameters. End of explanation predictive = Predictive(guide, params=svi_result.params, num_samples=nsamples) samples = predictive(rng_key, data) print_summary(samples, 0.95, False) plt.scatter(samples["mu"], samples["sigma"], s=64, alpha=0.1, edgecolor="none") plt.xlim(mu_range[0], mu_range[1]) plt.ylim(sigma_range[0], sigma_range[1]) plt.xlabel(r"$\mu$") plt.ylabel(r"$\sigma$") plt.show() az.plot_kde(samples["mu"], samples["sigma"]) plt.xlim(mu_range[0], mu_range[1]) plt.ylim(sigma_range[0], sigma_range[1]) plt.xlabel(r"$\mu$") plt.ylabel(r"$\sigma$") if plot_square: plt.axis("square") plt.savefig("figures/gauss_params_1d_post_vi.pdf", dpi=300) plt.show() print(hpdi(samples["mu"], 0.95)) print(hpdi(samples["sigma"], 0.95)) fig, ax = plt.subplots() az.plot_kde(samples["mu"], ax=ax, label=r"$\mu$") fig, ax = plt.subplots() az.plot_kde(samples["sigma"], ax=ax, label=r"$\sigma$") Explanation: Posterior samples End of explanation conditioned_model = numpyro.handlers.condition(model, {"data": data}) nuts_kernel = NUTS(conditioned_model) mcmc = MCMC(nuts_kernel, num_warmup=100, num_samples=nsamples) mcmc.run(rng_key_, data) mcmc.print_summary() samples = mcmc.get_samples() print_summary(samples, 0.95, False) plt.scatter(samples["mu"], samples["sigma"], s=64, alpha=0.1, edgecolor="none") plt.xlim(mu_range[0], mu_range[1]) plt.ylim(sigma_range[0], sigma_range[1]) plt.xlabel(r"$\mu$") plt.ylabel(r"$\sigma$") plt.show() az.plot_kde(samples["mu"], samples["sigma"]) plt.xlim(mu_range[0], mu_range[1]) plt.ylim(sigma_range[0], sigma_range[1]) plt.xlabel(r"$\mu$") plt.ylabel(r"$\sigma$") if plot_square: plt.axis("square") plt.savefig("figures/gauss_params_1d_post_mcmc.pdf", dpi=300) plt.show() Explanation: MCMC End of explanation
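For readers without JAX or NumPyro installed, the grid-approximation step can be reproduced with plain NumPy and SciPy. The sketch below is not from the original notebook: the simulated heights merely stand in for the Howell subset, and the uniform prior on sigma is omitted because it only adds a constant over this grid.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
heights = rng.normal(155.0, 7.0, size=20)      # stand-in for the 20 Howell heights

mus = np.linspace(150, 160, 100)
sigmas = np.linspace(4, 14, 100)
M, S = np.meshgrid(mus, sigmas)

# log likelihood summed over the data, plus the log prior for mu ~ N(178, 20)
log_post = norm.logpdf(heights[:, None, None], M, S).sum(axis=0) + norm.logpdf(M, 178, 20)
post = np.exp(log_post - log_post.max())
post /= post.sum()                              # normalise over the grid

print("grid posterior mean of mu:", (post * M).sum())
print("grid posterior mean of sigma:", (post * S).sum())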
11,274
Given the following text description, write Python code to implement the functionality described below step by step Description: Graph Construction and feature engineering Load library Step1: Load data Step2: Graph generation & analysis Build proper edge array Step3: Generate a MultiDigraph with networkx and edge array Step4: Features Engineering Step5: <hr> Features description Roman (I-XXI) Step6: <hr> Add degree in and degree out [II] [III] Step7: <hr> Add unique predecessors and unique successors (must be < degree_in and out) [IV][V] Step8: <hr> Add mean ether value going in the node [VI] Write a function Step9: <hr> Add mean ether value going out the node [VII] Step10: <hr> Add std ether value going in the node [VIII] Step11: <hr> Add std ether value going out the node [IX] Step12: <hr> Add the ratio of the number of incoming transactions to the number of unique timestamps for those transactions [X] Step13: <hr> Add the ratio of the number of outgoing transactions to the number of unique timestamps for those transactions [XI] Step14: <hr> the incoming transaction frequency for the user (#in transactions / max date - min date) [XII] Step15: <hr> the outgoing transaction frequency for the user (#out transactions / max date - min date) [XIII] Step16: <hr> ether balance [XIV] Step18: <hr> Average Velocity In [XV] Step20: <hr> Average Velocity Out [XVI] Step22: <hr> Std Velocity In [XVII] Step24: <hr> Std Velocity Out [XVIII] Step26: <hr> Average Acceleration In [XIX] Step28: <hr> Average Velocity Out [XX] Step29: <hr> Getting Rogue nodes Step30: <hr> Min path to a rogue node [&alpha;] Step31: <hr> Min path from a rogue node [&beta;] Step32: <hr> Amount of ether flown from node to the closest rogue [&delta;] <hr> Amount of ether flown to node from the closest rogue [&epsilon;]
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd import networkx as nx import pygraphviz as pgv import pydot as pyd from networkx.drawing.nx_agraph import graphviz_layout from networkx.drawing.nx_agraph import write_dot Explanation: Graph Construction and feature engineering Load library End of explanation %%time edges = pd.read_csv('../data/edges.csv').drop('Unnamed: 0',1) nodes = pd.read_csv('../data/nodes.csv').drop('Unnamed: 0',1) rogues = pd.read_csv('../data/rogues.csv') Explanation: Load data End of explanation %%time #Simple way (non parallel computing) edge_array = [] for i in range(0,1000000): edge_array.append((edges['from'][i],edges['to'][i],{'value':edges['value'][i],'time':edges['timestamp'][i],'hash':edges['hash'][i]})) Explanation: Graph generation & analysis Build proper edge array End of explanation %%time TG=nx.MultiDiGraph() TG.add_weighted_edges_from(edge_array) %%time # Network Characteristics print 'Number of nodes:', TG.number_of_nodes() print 'Number of edges:', TG.number_of_edges() print 'Number of connected components:', nx.number_connected_components(TG.to_undirected()) # Degree degree_sequence = TG.degree().values() degree_out_sequence = TG.out_degree().values() degree_in_sequence = TG.in_degree().values() print "Min degree ", np.min(degree_sequence) print "Max degree ", np.max(degree_sequence) print "Median degree ", np.median(degree_sequence) print "Mean degree ", np.mean(degree_sequence) print "Min degree IN", np.min(degree_in_sequence) print "Max degree IN", np.max(degree_in_sequence) print "Median degree IN", np.median(degree_in_sequence) print "Mean degree IN", np.mean(degree_in_sequence) print "Min degree OUT", np.min(degree_out_sequence) print "Max degree OUT", np.max(degree_out_sequence) print "Median degree OUT", np.median(degree_out_sequence) print "Mean degree OUT", np.mean(degree_out_sequence) %%time # Degree distribution y=nx.degree_histogram(TG) plt.figure(1) plt.loglog(y,'b-',marker='o') plt.ylabel("Frequency") plt.xlabel("Degree") plt.draw() plt.show() Explanation: Generate a MultiDigraph with networkx and edge array End of explanation #New dataframe for feature engineering df = pd.DataFrame() df['nodes']=TG.nodes() Explanation: Features Engineering End of explanation df['total_degree']=df['nodes'].map(lambda x: TG.degree(x)) Explanation: <hr> Features description Roman (I-XXI) : Node intrinsec charateristics feature set Arabic (1-5) : Neighbors behaviours feature set Greek (&alpha;-&omega;) : External data features | # |Description |Variable | |---|---|---| | I |Degree | total_degree | | II |Degree in |degree_in | | III|Degree out | degree_out | | IV |Number of unique predecessors | unique_predecessors | | V |Number of unique successors | unique_successors | | VI|Mean ether amount in incoming transactions |mean_value_in | | VII |Mean ether amount in outgoing transactions |mean_value_out | | VIII|Std ether amount in incoming transactions | std_value_in | | IX|Std ether amount in outgoing transactions | std_value_out | | X |Ratio of the number of incoming transactions to the number of unique timestamps | ratio_in_timestamp | | XI |Ratio of the number of outgoing transactions to the number of unique timestamps | ratio_out_timestamp | | XII| Frequency of incoming transactions |frequency_in | | XIII |Frequency of outgoing transactions |frequency_out | | XIV |Ether balance of the node |balance | | XVI|Average velocity in | mean_velocity_out | | XVII|Average velocity out| mean_velocity_out | | 
XVIII|Std velocity in | std_velocity_in | | XIX|Std velocity out | std_velocity_out | | XX|Average acceleration in | mean_acceleration_in | | XXI|Average acceleration out | mean_acceleration_out | | &alpha; |Min path to a rogue node | min_path_to_rogue | | &beta; |Min path from a rogue node | min_path_from_rogue | | &delta; |Amount of ether on the min path to a rogue node | amount_to_rogue | | &epsilon; |Amount of ether on the min path from a rogue node | amount_from_rogue | | 1 |Average neighbours velocity | | | 2 |Average neighbours acceleration | - | <hr> Add total degree [I] End of explanation df['degree_in']=df['nodes'].map(lambda x: TG.in_degree(x)) df['degree_out']=df['nodes'].map(lambda x: TG.out_degree(x)) Explanation: <hr> Add degree in and degree out [II] [III] End of explanation df['unique_successors']=df['nodes'].map(lambda x: len((TG.successors(x)))) df['unique_predecessors']=df['nodes'].map(lambda x: len((TG.predecessors(x)))) Explanation: <hr> Add unique predecessors and unique successors (must be < degree_in and out) [IV][V] End of explanation def get_mean_value_in(node): ''' Return the mean value of all the in transactions of a given node ''' #Get the in edges list edges = TG.in_edges_iter(node, keys=False, data=True) #Build a list of all the values of the in edges list values=[] for edge in edges: values.append(float(edge[2]['weight']['value'])) #Compute the mean of this list mean = np.average(values) return mean %%time #Add the feature df['mean_value_in']=df['nodes'].map(lambda x: get_mean_value_in(x)) Explanation: <hr> Add mean ether value going in the node [VI] Write a function End of explanation #Write a function def get_mean_value_out(node): ''' Return the mean value of all the out transactions of a given node ''' #Get the out edges list edges = TG.out_edges_iter(node, keys=False, data=True) #Build a list of all the values of the out edges list values=[] for edge in edges: values.append(float(edge[2]['weight']['value'])) #Compute the mean of this list mean = np.average(values) return mean %%time #Add the feature df['mean_value_out']=df['nodes'].map(lambda x: get_mean_value_out(x)) Explanation: <hr> Add mean ether value going out the node [VII] End of explanation #Write a function def get_std_value_in(node): ''' Return the std value of all the in transactions of a given node ''' #Get the in edges list edges = TG.in_edges_iter(node, keys=False, data=True) #Build a list of all the values of the in edges list values=[] for edge in edges: values.append(float(edge[2]['weight']['value'])) #Compute the std of this list std = np.std(values) return std %%time #Add the feature df['std_value_in']=df['nodes'].map(lambda x: get_std_value_in(x)) Explanation: <hr> Add std ether value going in the node [VIII] End of explanation #Write a function def get_std_value_out(node): ''' Return the std value of all the out transactions of a given node ''' #Get the out edges list edges = TG.out_edges_iter(node, keys=False, data=True) #Build a list of all the values of the out edges list values=[] for edge in edges: values.append(float(edge[2]['weight']['value'])) #Compute the std of this list std = np.std(values) return std %%time #Add the feature df['std_value_out']=df['nodes'].map(lambda x: get_std_value_out(x)) Explanation: <hr> Add std ether value going out the node [IX] End of explanation #Write a function def get_ratio_in_timestamp(node): ''' Return the ratio between the number of incoming transaction to the number of unique timestamp for these transactions ''' #Get the list of incoming 
transactions edges = TG.in_edges(node,keys=False, data=True) #Build the list of timestamps timestamps=[] for edge in edges: timestamps.append(edge[2]['weight']['time']) #Compute the ratio unique_time = float(len(np.unique(timestamps))) transactions = float(len(edges)) if unique_time !=0: ratio = transactions / unique_time else: ratio = np.nan return ratio %%time #Add the feature df['ratio_in_timestamp']=df['nodes'].map(lambda x: get_ratio_in_timestamp(x)) Explanation: <hr> Add the ratio of the number of incoming transactions to the number of unique timestamps for those transactions [X] End of explanation #Write a function def get_ratio_out_timestamp(node): ''' Return the ratio between the number of incoming transaction to the number of unique timestamp for these transactions ''' #Get the list of outgoing transactions edges = TG.out_edges(node,keys=False, data=True) #Build the list of timestamps timestamps=[] for edge in edges: timestamps.append(edge[2]['weight']['time']) #Compute the ratio unique_time = float(len(np.unique(timestamps))) transactions = float(len(edges)) if unique_time !=0: ratio = transactions / unique_time else: ratio = np.nan return ratio %%time #Add the feature df['ratio_out_timestamp']=df['nodes'].map(lambda x: get_ratio_out_timestamp(x)) Explanation: <hr> Add the ratio of the number of outgoing transactions to the number of unique timestamps for those transactions [XI] End of explanation #write function def get_in_frequency(node): ''' Return the incoming transaction frequency for the user (#in transactions / max date - min date) ''' #Get the list of incoming transactions edges = TG.in_edges(node,keys=False, data=True) #Build the list of timestamps timestamps=[] for edge in edges: timestamps.append(edge[2]['weight']['time']) #Build the delta in seconds date = pd.to_datetime(pd.Series(timestamps)) dt = date.max()-date.min() #deltaseconds = dt.item().total_seconds() if dt.total_seconds()!=0: ratio = len(edges)/dt.total_seconds() else: ratio = np.nan return ratio %%time #Add the feature df['frequency_in']=df['nodes'].map(lambda x: get_in_frequency(x)) Explanation: <hr> the incoming transaction frequency for the user (#in transactions / max date - min date) [XII] End of explanation #write function def get_out_frequency(node): ''' Return the outgoing transaction frequency for the user (#in transactions / max date - min date) ''' #Get the list of incoming transactions edges = TG.out_edges(node,keys=False, data=True) #Build the list of timestamps timestamps=[] for edge in edges: timestamps.append(edge[2]['weight']['time']) #Build the delta in seconds date = pd.to_datetime(pd.Series(timestamps)) dt = date.max()-date.min() #deltaseconds = dt.item().total_seconds() if dt.total_seconds()!=0: ratio = len(edges)/dt.total_seconds() else: ratio = np.nan return ratio %%time #Add the feature df['frequency_out']=df['nodes'].map(lambda x: get_out_frequency(x)) Explanation: <hr> the outgoing transaction frequency for the user (#out transactions / max date - min date) [XIII] End of explanation #write function def get_balance(node): ''' Return the balance (in wei) of a given node ''' #Get edges in and edges out edges_in = TG.in_edges(node,keys=False, data=True) edges_out = TG.out_edges(node,keys=False, data=True) #Build value in array and value out array values_in=[] for edge in edges_in: values_in.append(float(edge[2]['weight']['value'])) values_out=[] for edge in edges_out: values_out.append(float(edge[2]['weight']['value'])) #Compute balance balance = np.sum(values_in)-np.sum(values_out) 
return balance

%%time
#Add the feature
df['balance'] = df['nodes'].map(lambda x: get_balance(x))

Explanation: <hr> ether balance [XIV]
End of explanation

#write function
def get_mean_velocity_in(node):
    """Return the average ether velocity incoming into the node in wei/s"""
    #Get edges in collection
    edges_in = TG.in_edges(node, keys=False, data=True)
    values_in = []
    timestamps = []
    #Collect values and timestamps
    for edge in edges_in:
        values_in.append(float(edge[2]['weight']['value']))
        timestamps.append(edge[2]['weight']['time'])
    #Create Velocity list
    velocities = []
    #Convert date str to datetime
    dates = pd.to_datetime(pd.Series(timestamps))
    #Build the velocity array
    for i in range(1, (len(edges_in) - 1)):
        if dates[i+1] != dates[i-1]:
            velocity = np.absolute(values_in[i+1] - values_in[i-1]) / (dates[i+1] - dates[i-1]).total_seconds()
            velocities.append(velocity)
    #Return the velocities average
    return np.average(np.absolute(velocities))

%%time
#Add the feature
df['mean_velocity_in'] = df['nodes'].map(lambda x: get_mean_velocity_in(x))

Explanation: <hr> Average Velocity In [XV]
End of explanation

#write function
def get_mean_velocity_out(node):
    """Return the average ether velocity outgoing from the node in wei/s"""
    #Get edges out collection
    edges_out = TG.out_edges(node, keys=False, data=True)
    values_out = []
    timestamps = []
    #Collect values and timestamps
    for edge in edges_out:
        values_out.append(float(edge[2]['weight']['value']))
        timestamps.append(edge[2]['weight']['time'])
    #Create Velocity list
    velocities = []
    #Convert date str to datetime
    dates = pd.to_datetime(pd.Series(timestamps))
    #Build the velocity array
    for i in range(1, (len(edges_out) - 1)):
        if dates[i+1] != dates[i-1]:
            velocity = np.absolute(values_out[i+1] - values_out[i-1]) / (dates[i+1] - dates[i-1]).total_seconds()
            velocities.append(velocity)
    #Return the velocities average
    return np.average(np.absolute(velocities))

%%time
#Add the feature
df['mean_velocity_out'] = df['nodes'].map(lambda x: get_mean_velocity_out(x))

Explanation: <hr> Average Velocity Out [XVI]
End of explanation

#write function
def get_std_velocity_in(node):
    """Return the std of the ether velocity incoming into the node in wei/s"""
    #Get edges in collection
    edges_in = TG.in_edges(node, keys=False, data=True)
    values_in = []
    timestamps = []
    #Collect values and timestamps
    for edge in edges_in:
        values_in.append(float(edge[2]['weight']['value']))
        timestamps.append(edge[2]['weight']['time'])
    #Create Velocity list
    velocities = []
    #Convert date str to datetime
    dates = pd.to_datetime(pd.Series(timestamps))
    #Build the velocity array (start at 1 so that index i-1 is valid, matching the mean version)
    for i in range(1, (len(edges_in) - 1)):
        if dates[i+1] != dates[i-1]:
            velocity = np.absolute(values_in[i+1] - values_in[i-1]) / (dates[i+1] - dates[i-1]).total_seconds()
            velocities.append(velocity)
    #Return the velocities standard deviation
    return np.std(np.absolute(velocities))

%%time
#Add the feature
df['std_velocity_in'] = df['nodes'].map(lambda x: get_std_velocity_in(x))

Explanation: <hr> Std Velocity In [XVII]
End of explanation

#write function
def get_std_velocity_out(node):
    """Return the std of the ether velocity outgoing from the node in wei/s"""
    #Get edges out collection
    edges_out = TG.out_edges(node, keys=False, data=True)
    values_out = []
    timestamps = []
    #Collect values and timestamps
    for edge in edges_out:
        values_out.append(float(edge[2]['weight']['value']))
        timestamps.append(edge[2]['weight']['time'])
    #Create Velocity list
    velocities = []
    #Convert date str to datetime
    dates = pd.to_datetime(pd.Series(timestamps))
    #Build the velocity array (start at 1, and test the same pair of dates used in the division)
    for i in range(1, (len(edges_out) - 1)):
        if dates[i+1] != dates[i-1]:
            velocity = np.absolute(values_out[i+1] - values_out[i-1]) / (dates[i+1] - dates[i-1]).total_seconds()
            velocities.append(velocity)
    #Return the velocities standard deviation
    return np.std(np.absolute(velocities))

%%time
#Add the feature
df['std_velocity_out'] = df['nodes'].map(lambda x: get_std_velocity_out(x))

Explanation: <hr> Std Velocity Out [XVIII]
End of explanation

#write function
def get_mean_acceleration_in(node):
    """Return the average ether acceleration incoming into the node in wei/s^2"""
    #Get edges in collection
    edges_in = TG.in_edges(node, keys=False, data=True)
    values_in = []
    timestamps = []
    #Collect values and timestamps
    for edge in edges_in:
        values_in.append(float(edge[2]['weight']['value']))
        timestamps.append(edge[2]['weight']['time'])
    #Create Velocity list
    velocities = []
    #Convert date str to datetime
    dates = pd.to_datetime(pd.Series(timestamps))
    #Build the velocity array
    for i in range(1, (len(edges_in) - 1)):
        if dates[i+1] != dates[i-1]:
            velocity = np.absolute(values_in[i+1] - values_in[i-1]) / (dates[i+1] - dates[i-1]).total_seconds()
            velocities.append(velocity)
    #Make sure we have abs ...
    velocities = np.absolute(velocities)
    #Velocities range from 1 to N-1 (no 0 and N)
    #Accelerations range from 2 to N-2
    #Build the acceleration array
    accelerations = []
    for i in range(1, (len(velocities) - 1)):
        if dates[i+1] != dates[i-1]:
            acceleration = np.absolute(velocities[i+1] - velocities[i-1]) / (dates[i+1] - dates[i-1]).total_seconds()
            accelerations.append(acceleration)
    #Return the accelerations average
    return np.average(np.absolute(accelerations))

%%time
#Add the feature
df['mean_acceleration_in'] = df['nodes'].map(lambda x: get_mean_acceleration_in(x))

Explanation: <hr> Average Acceleration In [XIX]
End of explanation

#write function
def get_mean_acceleration_out(node):
    """Return the average ether acceleration outgoing from the node in wei/s^2"""
    #Get edges out collection
    edges_out = TG.out_edges(node, keys=False, data=True)
    values_out = []
    timestamps = []
    #Collect values and timestamps
    for edge in edges_out:
        values_out.append(float(edge[2]['weight']['value']))
        timestamps.append(edge[2]['weight']['time'])
    #Create Velocity list
    velocities = []
    #Convert date str to datetime
    dates = pd.to_datetime(pd.Series(timestamps))
    #Build the velocity array
    for i in range(1, (len(edges_out) - 1)):
        if dates[i+1] != dates[i-1]:
            velocity = np.absolute(values_out[i+1] - values_out[i-1]) / (dates[i+1] - dates[i-1]).total_seconds()
            velocities.append(velocity)
    #Make sure we have abs ...
    velocities = np.absolute(velocities)
    #Velocities range from 1 to N-1 (no 0 and N)
    #Accelerations range from 2 to N-2
    #Build the acceleration array
    accelerations = []
    for i in range(1, (len(velocities) - 1)):
        if dates[i+1] != dates[i-1]:
            acceleration = np.absolute(velocities[i+1] - velocities[i-1]) / (dates[i+1] - dates[i-1]).total_seconds()
            accelerations.append(acceleration)
    #Return the accelerations average
    return np.average(np.absolute(accelerations))

%%time
#Add the feature
df['mean_acceleration_out'] = df['nodes'].map(lambda x: get_mean_acceleration_out(x))

Explanation: <hr> Average Acceleration Out [XX]
End of explanation

rogues = pd.read_csv("../data/rogues.csv")
rogues_id = np.array(rogues['id'])
fake_rogues = ['0x223294182093bfc6b11e8ef5722d496f066036c2',
               '0xec1ebac9da3430213281c80fa6d46378341a96ae',
               '0xe6447ae67346b5fb7ebd65ebfc4c7e6521b21f8a']

Explanation: <hr> Getting Rogue nodes
End of explanation

#write function
def min_path_to_rogue(node, rogues):
    #Length of the shortest directed path from the node to any rogue node
    paths_lengths = []
    for rogue in rogues:
        if nx.has_path(TG, node, rogue):
            paths_lengths.append(len(nx.shortest_path(TG, node, rogue)))
    if len(paths_lengths) != 0:
        return np.min(paths_lengths)
    else:
        return np.nan

%%time
#Add the feature
df['min_path_to_rogue'] = df['nodes'].map(lambda x: min_path_to_rogue(x, fake_rogues))

Explanation: <hr> Min path to a rogue node [&alpha;]
End of explanation

#write function
def min_path_from_rogue(node, rogues):
    #Length of the shortest directed path from any rogue node to the node
    paths_lengths = []
    for rogue in rogues:
        if nx.has_path(TG, rogue, node):
            paths_lengths.append(len(nx.shortest_path(TG, rogue, node)))
    if len(paths_lengths) != 0:
        return np.min(paths_lengths)
    else:
        return np.nan

%%time
#Add the feature
df['min_path_from_rogue'] = df['nodes'].map(lambda x: min_path_from_rogue(x, fake_rogues))

Explanation: <hr> Min path from a rogue node [&beta;]
End of explanation

df.tail(100)
len(nx.shortest_path(TG, source="0x61c9d63697b8ea3c387ccb3e693f02d4f597b763", target="0xbaecc9abb79a87eb7e7d5942ffe42c34e8c8abc7"))
len(nx.shortest_path(TG, source="0xbaecc9abb79a87eb7e7d5942ffe42c34e8c8abc7", target="0x61c9d63697b8ea3c387ccb3e693f02d4f597b763"))

Explanation: <hr> Amount of ether flown from node to the closest rogue [&delta;] <hr> Amount of ether flown to node from the closest rogue [&epsilon;]
End of explanation
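The [&delta;] and [&epsilon;] features above are named but not implemented. A minimal sketch of one possible implementation is shown below; it assumes, as in the cells above, that TG is the directed transaction multigraph with transferred values stored under edge[2]['weight']['value'], and the helper name get_flow_to_closest_rogue as well as the choice of summing edge values along the shortest path are illustrative assumptions rather than the author's definition.

#Illustrative sketch: total transferred value along the shortest path to the nearest reachable rogue
def get_flow_to_closest_rogue(node, rogues):
    best_len, best_flow = None, np.nan
    for rogue in rogues:
        if node != rogue and nx.has_path(TG, node, rogue):
            path = nx.shortest_path(TG, node, rogue)
            if best_len is None or len(path) < best_len:
                #Sum the values on the edges traversed (first parallel edge of the multigraph)
                flow = 0.0
                for u, v in zip(path[:-1], path[1:]):
                    edge_data = list(TG.get_edge_data(u, v).values())[0]
                    flow += float(edge_data['weight']['value'])
                best_len, best_flow = len(path), flow
    return best_flow

The [&epsilon;] variant (flow from the closest rogue to the node) would be the same sketch with the source and target arguments swapped.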
11,275
Given the following text description, write Python code to implement the functionality described below step by step Description: Before you leave, the Elves in accounting just need you to fix your expense report (your puzzle input); apparently, something isn't quite adding up. Specifically, they need you to find the two entries that sum to 2020 and then multiply those two numbers together. For example, suppose your expense report contained the following Step1: The Elves in accounting are thankful for your help; one of them even offers you a starfish coin they had left over from a past vacation. They offer you a second one if you can find three numbers in your expense report that meet the same criteria. Using the above example again, the three entries that sum to 2020 are 979, 366, and 675. Multiplying them together produces the answer, 241861950. In your expense report, what is the product of the three entries that sum to 2020?
Python Code: for a,b in itertools.permutations(list(map(int, data)), 2): if a+b == 2020: print(a*b) break Explanation: Before you leave, the Elves in accounting just need you to fix your expense report (your puzzle input); apparently, something isn't quite adding up. Specifically, they need you to find the two entries that sum to 2020 and then multiply those two numbers together. For example, suppose your expense report contained the following: 1721 979 366 299 675 1456 In this list, the two entries that sum to 2020 are 1721 and 299. Multiplying them together produces 1721 * 299 = 514579, so the correct answer is 514579. Of course, your expense report is much larger. Find the two entries that sum to 2020; what do you get if you multiply them together? End of explanation for a,b,c in itertools.permutations(list(map(int, data)), 3): if a+b+c == 2020: print(a*b*c) break Explanation: The Elves in accounting are thankful for your help; one of them even offers you a starfish coin they had left over from a past vacation. They offer you a second one if you can find three numbers in your expense report that meet the same criteria. Using the above example again, the three entries that sum to 2020 are 979, 366, and 675. Multiplying them together produces the answer, 241861950. In your expense report, what is the product of the three entries that sum to 2020? End of explanation
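As a side note, permutations enumerates every ordering of the same pair or triple, so the equivalent search with itertools.combinations does strictly less work; the sketch below assumes, as the cells above do, that data holds the expense-report lines already read in.

import itertools

entries = list(map(int, data))
# Unordered pairs are enough, since a+b == b+a
for a, b in itertools.combinations(entries, 2):
    if a + b == 2020:
        print(a * b)
        break
# Same idea for the three-entry variant
for a, b, c in itertools.combinations(entries, 3):
    if a + b + c == 2020:
        print(a * b * c)
        break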
11,276
Given the following text description, write Python code to implement the functionality described below step by step Description: k-Nearest Neighbor (kNN) exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. The kNN classifier consists of two stages Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps Step2: Inline Question #1 Step3: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5 Step5: You should expect to see a slightly better performance than with k = 1. Step6: Cross-validation We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
Python Code: # Run some setup code for this notebook. import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt # This is a bit of magic to make matplotlib figures appear inline in the notebook # rather than in a new window. %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # Some more magic so that the notebook will reload external python modules; # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 # Load the raw CIFAR-10 data. cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # As a sanity check, we print out the size of the training and test data. print 'Training data shape: ', X_train.shape print 'Training labels shape: ', y_train.shape print 'Test data shape: ', X_test.shape print 'Test labels shape: ', y_test.shape # Visualize some examples from the dataset. # We show a few examples of training images from each class. classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] num_classes = len(classes) samples_per_class = 7 for y, cls in enumerate(classes): idxs = np.flatnonzero(y_train == y) idxs = np.random.choice(idxs, samples_per_class, replace=False) for i, idx in enumerate(idxs): plt_idx = i * num_classes + y + 1 plt.subplot(samples_per_class, num_classes, plt_idx) plt.imshow(X_train[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls) plt.show() # Subsample the data for more efficient code execution in this exercise num_training = 5000 mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] num_test = 500 mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] # Reshape the image data into rows X_train = np.reshape(X_train, (X_train.shape[0], -1)) X_test = np.reshape(X_test, (X_test.shape[0], -1)) print X_train.shape, X_test.shape from cs231n.classifiers import KNearestNeighbor # Create a kNN classifier instance. # Remember that training a kNN classifier is a noop: # the Classifier simply remembers the data and does no further processing classifier = KNearestNeighbor() classifier.train(X_train, y_train) Explanation: k-Nearest Neighbor (kNN) exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. The kNN classifier consists of two stages: During training, the classifier takes the training data and simply remembers it During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples The value of k is cross-validated In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code. End of explanation # Open cs231n/classifiers/k_nearest_neighbor.py and implement # compute_distances_two_loops. # Test your implementation: dists = classifier.compute_distances_two_loops(X_test) print dists.shape # We can visualize the distance matrix: each row is a single test example and # its distances to training examples plt.imshow(dists, interpolation='none') plt.show() Explanation: We would now like to classify the test data with the kNN classifier. 
Recall that we can break down this process into two steps: First we must compute the distances between all test examples and all train examples. Given these distances, for each test example we find the k nearest examples and have them vote for the label Lets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example. First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time. End of explanation # Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). y_test_pred = classifier.predict_labels(dists, k=1) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.) What in the data is the cause behind the distinctly bright rows? What causes the columns? Your Answer: fill this in. End of explanation y_test_pred = classifier.predict_labels(dists, k=5) num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) Explanation: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5: End of explanation # Now lets speed up distance matrix computation by using partial vectorization # with one loop. Implement the function compute_distances_one_loop and run the # code below: dists_one = classifier.compute_distances_one_loop(X_test) # To ensure that our vectorized implementation is correct, we make sure that it # agrees with the naive implementation. There are many ways to decide whether # two matrices are similar; one of the simplest is the Frobenius norm. In case # you haven't seen it before, the Frobenius norm of two matrices is the square # root of the squared sum of differences of all elements; in other words, reshape # the matrices into vectors and compute the Euclidean distance between them. difference = np.linalg.norm(dists - dists_one, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Now implement the fully vectorized version inside compute_distances_no_loops # and run the code dists_two = classifier.compute_distances_no_loops(X_test) # check that the distance matrix agrees with the one we computed before: difference = np.linalg.norm(dists - dists_two, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Let's compare how fast the implementations are def time_function(f, *args): Call a function f with args and return the time (in seconds) that it took to execute. 
import time tic = time.time() f(*args) toc = time.time() return toc - tic two_loop_time = time_function(classifier.compute_distances_two_loops, X_test) print 'Two loop version took %f seconds' % two_loop_time one_loop_time = time_function(classifier.compute_distances_one_loop, X_test) print 'One loop version took %f seconds' % one_loop_time no_loop_time = time_function(classifier.compute_distances_no_loops, X_test) print 'No loop version took %f seconds' % no_loop_time # you should see significantly faster performance with the fully vectorized implementation Explanation: You should expect to see a slightly better performance than with k = 1. End of explanation num_folds = 5 k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100] X_train_folds = [] y_train_folds = [] ################################################################################ # TODO: # # Split up the training data into folds. After splitting, X_train_folds and # # y_train_folds should each be lists of length num_folds, where # # y_train_folds[i] is the label vector for the points in X_train_folds[i]. # # Hint: Look up the numpy array_split function. # ################################################################################ X_train_folds = np.array_split(X_train, num_folds) y_train_folds = np.array_split(y_train, num_folds) ################################################################################ # END OF YOUR CODE # ################################################################################ # A dictionary holding the accuracies for different values of k that we find # when running cross-validation. After running cross-validation, # k_to_accuracies[k] should be a list of length num_folds giving the different # accuracy values that we found when using that value of k. k_to_accuracies = {} ################################################################################ # TODO: # # Perform k-fold cross validation to find the best value of k. For each # # possible value of k, run the k-nearest-neighbor algorithm num_folds times, # # where in each case you use all but one of the folds as training data and the # # last fold as a validation set. Store the accuracies for all fold and all # # values of k in the k_to_accuracies dictionary. 
# ################################################################################ for k in k_choices: k_to_accuracies[k] = [] for i in range(num_folds): train_indices = [j for j in range(num_folds) if j != i] validate_index = i X_for_train = np.vstack([X_train_folds[j] for j in train_indices]) y_for_train = np.hstack([y_train_folds[j] for j in train_indices]) X_for_validate = X_train_folds[validate_index] y_for_validate = y_train_folds[validate_index] classifier = KNearestNeighbor() classifier.train(X_for_train, y_for_train) validate_dists = classifier.compute_distances_no_loops(X_for_validate) y_validate_pred = classifier.predict_labels(validate_dists, k=k) num_correct = np.sum(y_validate_pred == y_for_validate) accuracy = float(num_correct) / len(y_for_validate) k_to_accuracies[k].append(accuracy) ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out the computed accuracies for k in sorted(k_to_accuracies): for accuracy in k_to_accuracies[k]: print 'k = %d, accuracy = %f' % (k, accuracy) # plot the raw observations for k in k_choices: accuracies = k_to_accuracies[k] plt.scatter([k] * len(accuracies), accuracies) # plot the trend line with error bars that correspond to standard deviation accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())]) accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())]) plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std) plt.title('Cross-validation on k') plt.xlabel('k') plt.ylabel('Cross-validation accuracy') plt.show() # Based on the cross-validation results above, choose the best value for k, # retrain the classifier using all the training data, and test it on the test # data. You should be able to get above 28% accuracy on the test data. mean_accuracies = [np.mean(k_to_accuracies[k]) for k in sorted(k_to_accuracies)] best_k = k_choices[np.argmax(mean_accuracies)] print('Best k is: %d' % best_k) classifier = KNearestNeighbor() classifier.train(X_train, y_train) y_test_pred = classifier.predict(X_test, k=best_k) # Compute and display the accuracy num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) Explanation: Cross-validation We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. End of explanation
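For reference, the fully vectorized distance computation discussed above is usually built on the expansion ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2. The sketch below shows one way to write it with NumPy broadcasting; it is an illustration of the idea, not the assignment's reference solution.

def compute_distances_no_loops_sketch(X_train, X_test):
    # Squared norms of every test and training row
    test_sq = np.sum(X_test ** 2, axis=1).reshape(-1, 1)    # shape (num_test, 1)
    train_sq = np.sum(X_train ** 2, axis=1).reshape(1, -1)  # shape (1, num_train)
    # Cross term via a single matrix multiplication
    cross = X_test.dot(X_train.T)                           # shape (num_test, num_train)
    # Clip tiny negatives caused by floating point error before taking the square root
    return np.sqrt(np.maximum(test_sq - 2 * cross + train_sq, 0.0))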
11,277
Given the following text description, write Python code to implement the functionality described below step by step Description: Ants in Space! An introduction to the code in beam_paco__gtoc5 Luís F. Simões 2017-04 <h1 id="tocheading">Table of Contents</h1> <div id="toc"></div> Step1: Taking a look at our Python environment. Step2: Solving a TSPLIB problem with P-ACO In this section we show the steps to solve a Travelling Salesman Problem (TSP) instance with the Population-based Ant Colony Optimization (P-ACO) algorithm. We'll use a TSP instance downloaded from the TSPLIB Step3: Load each city's (x, y) coordinates. Step4: Calculate distances matrix. Step5: Instantiate the TSP "path handler" with this distances matrix, and P-ACO, with its default parameters. Step6: Solve it. Step7: Continue refining the solution for a few more generations. Step8: Let's see what we found. Step9: Basic steps for assembling GTOC5 trajectories The two primary functions for assembling a GTOC5 trajectory are here mission_to_1st_asteroid() and add_asteroid(). The first initializes the mission's data structure with the details of the Earth launch leg, that takes the spacecraft towards the mission's first asteroid. Subsequently, via multiple calls to add_asteroid(), the mission is extended with additional exploration targets. Each call to add_asteroid() creates a rendezvous leg towards the specified asteroid, immediately followed by a flyby of that same asteroid, and so increases the mission's overall score by 1. Here's an example of a mission that launches towards asteroid 1712, and moves next to asteroid 4893. The True value returned by add_asteroid() indicates that a feasible transfer leg was found, and asteroid 4893 was therefore added to the mission. Step10: We can evaluate this trajectory with respect to its score (number of asteroids fully explored), final mass (in kg), and time of flight (here converted from days to years). Step11: An aggregation of the mission's mass and time costs can be obtained with resource_rating(). It measures the extent to which the mass and time budgets available for the mission have been depleted by the trajectory. It produces a value of 1.0 at the start of the mission, and a value of 0.0 when the mission has exhausted its 3500 kg of available mass, or its maximum duration of 15 years. Step12: As the score increments discretely by 1.0 with each added asteroid, and the resource rating evaluates mass and time available in a range of [0, 1], both can be combined to give a single-objective evaluation of the trajectory, that should be maximized Step13: Calling seq(), we can see either the full sequence of asteroids visited in each leg, or just the distinct asteroids visited in the mission. In this example, we see that the mission starts on Earth (id 0), performs a rendezvous with asteroid 1712, followed by a flyby of the same asteroid, and then repeats the pattern at asteroid 4893. Step14: The trajectory data structure built by mission_to_1st_asteroid() and add_asteroid() is a list of tuples summarizing the evolution of the spacecraft's state. It provides the minimal sufficient information from which a more detailed view can be reproduced, if so desired. Each tuple contains Step15: Epochs are given here as Modified Julian Dates (MJD), and can be converted as Step16: Greedy search In this section we perform a Greedy search for a GTOC5 trajectory. We'll start by going to asteroid 1712. Then, and at every following step, we attempt to create legs towards all still available asteroids. 
Among equally-scored alternatives, we greedily pick the one with highest resource rating to adopt into the trajectory, and continue from there. Search stops when no feasible legs are found that can take us to another asteroid. This will happen either because no solutions were found that would allow for a leg to be created, or because adding a found solution would require the spacecraft to exceed the mission's mass or time budgets. Step17: Greedy search gave us a trajectory that is able to visit 14 distinct asteroids. However, by the 14th, the spacecraft finds itself unable to find a viable target to fly to next, even though it has 84.8 kg of mass still available (the spacecraft itself weighs 500 kg, so the mission cannot go below that value), and 2 years remain in its 15 year mission. Phasing indicators A big disadvantage of the approach followed above is the high computational cost of deciding which asteroid to go to next. It entails the optimization of up to 7075 legs, only to then pick a single one and discard all the other results. An alternative is to use one of the indicators available in gtoc5/phasing.py. They can provide an indication of how likely a specific asteroid is to be an easily reachable target. Step18: We use here the (improved) orbital phasing indicator to rate destinations with respect to the estimated ΔV of hypothetical legs that would depart from dep_ast, at epoch dep_t, towards each possible asteroid, arriving there within leg_dT days. We don't know exactly how long the transfer time chosen by add_asteroid() would be, but we take leg_dT=125 days as reference transfer time. Step19: Below are the 5 asteroids the indicator estimates would be most easily reachable. As we've seen above in the results from the greedy search, asteroid 4893, here the 2nd best rated alternative, would indeed be the target reachable with lowest ΔV. Step20: The indicator is however not infallible. If we attempt to go from asteroid 1712 towards each of these asteroids, we find that none of them are actually reachable, except for 4893! Still, the indicator allows us to narrow our focus considerably. Step21: Armed with the indicator, we can reimplement the greedy search, so it will only optimize legs towards a number of top rated alternatives, and then proceed with the best out of those. Step22: We were able to find another score 14 trajectory, but this time it took us ~1 second, whereas before it was taking us 2 and a half minutes. Finding a GTOC5 trajectory of score 17 with Beam Search Step23: Generate the Table of Contents
Python Code: # https://esa.github.io/pykep/ # https://github.com/esa/pykep # https://pypi.python.org/pypi/pykep/ import PyKEP as pk import numpy as np from tqdm import tqdm, trange import matplotlib.pylab as plt %matplotlib inline import seaborn as sns plt.rcParams['figure.figsize'] = 10, 8 from gtoc5 import * from gtoc5.multiobjective import * from gtoc5.phasing import * from paco import * from paco_traj import * from experiments import * from experiments__paco import * Explanation: Ants in Space! An introduction to the code in beam_paco__gtoc5 Luís F. Simões 2017-04 <h1 id="tocheading">Table of Contents</h1> <div id="toc"></div> End of explanation %load_ext watermark %watermark -v -m -p PyKEP,numpy,scipy,tqdm,pandas,matplotlib,seaborn # https://github.com/rasbt/watermark Explanation: Taking a look at our Python environment. End of explanation from urllib.request import urlretrieve import gzip urlretrieve('http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/tsp/eil101.tsp.gz', filename='eil101.tsp.gz'); Explanation: Solving a TSPLIB problem with P-ACO In this section we show the steps to solve a Travelling Salesman Problem (TSP) instance with the Population-based Ant Colony Optimization (P-ACO) algorithm. We'll use a TSP instance downloaded from the TSPLIB: eil101 (symmetric, 101 cities, total distance in best known solution: 629). End of explanation with gzip.open('eil101.tsp.gz') as f: xy_locs = np.loadtxt(f, skiprows=6, usecols=(1,2), comments='EOF', dtype=np.int) nr_cities = len(xy_locs) xy_locs[:5] Explanation: Load each city's (x, y) coordinates. End of explanation distances = np.zeros((nr_cities, nr_cities)) for a in range(nr_cities): for b in range(a, nr_cities): distances[a,b] = distances[b,a] = np.linalg.norm(xy_locs[a] - xy_locs[b]) distances[:4, :4] Explanation: Calculate distances matrix. End of explanation rng, seed = initialize_rng(seed=None) print('Seed:', seed) path_handler = tsp_path(distances, random_state=rng) aco = paco(path_handler.nr_nodes, path_handler, random_state=rng) Explanation: Instantiate the TSP "path handler" with this distances matrix, and P-ACO, with its default parameters. End of explanation %time (quality, best) = aco.solve(nr_generations=100) quality Explanation: Solve it. End of explanation %time (quality, best) = aco.solve(nr_generations=400, reinitialize=False) quality Explanation: Continue refining the solution for a few more generations. End of explanation xy = np.vstack([xy_locs[best], xy_locs[best][0]]) # to connect back to the start line, = plt.plot(xy[:,0], xy[:,1], 'go-') Explanation: Let's see what we found. End of explanation t = mission_to_1st_asteroid(1712) add_asteroid(t, 4893) Explanation: Basic steps for assembling GTOC5 trajectories The two primary functions for assembling a GTOC5 trajectory are here mission_to_1st_asteroid() and add_asteroid(). The first initializes the mission's data structure with the details of the Earth launch leg, that takes the spacecraft towards the mission's first asteroid. Subsequently, via multiple calls to add_asteroid(), the mission is extended with additional exploration targets. Each call to add_asteroid() creates a rendezvous leg towards the specified asteroid, immediately followed by a flyby of that same asteroid, and so increases the mission's overall score by 1. Here's an example of a mission that launches towards asteroid 1712, and moves next to asteroid 4893. 
The True value returned by add_asteroid() indicates that a feasible transfer leg was found, and asteroid 4893 was therefore added to the mission. End of explanation score(t), final_mass(t), tof(t) * DAY2YEAR Explanation: We can evaluate this trajectory with respect to its score (number of asteroids fully explored), final mass (in kg), and time of flight (here converted from days to years). End of explanation resource_rating(t) Explanation: An aggregation of the mission's mass and time costs can be obtained with resource_rating(). It measures the extent to which the mass and time budgets available for the mission have been depleted by the trajectory. It produces a value of 1.0 at the start of the mission, and a value of 0.0 when the mission has exhausted its 3500 kg of available mass, or its maximum duration of 15 years. End of explanation score(t) + resource_rating(t) Explanation: As the score increments discretely by 1.0 with each added asteroid, and the resource rating evaluates mass and time available in a range of [0, 1], both can be combined to give a single-objective evaluation of the trajectory, that should be maximized: End of explanation print(seq(t)) print(seq(t, incl_flyby=False)) Explanation: Calling seq(), we can see either the full sequence of asteroids visited in each leg, or just the distinct asteroids visited in the mission. In this example, we see that the mission starts on Earth (id 0), performs a rendezvous with asteroid 1712, followed by a flyby of the same asteroid, and then repeats the pattern at asteroid 4893. End of explanation t[-1] Explanation: The trajectory data structure built by mission_to_1st_asteroid() and add_asteroid() is a list of tuples summarizing the evolution of the spacecraft's state. It provides the minimal sufficient information from which a more detailed view can be reproduced, if so desired. Each tuple contains: asteroid ID spacecraft mass epoch the leg's $\Delta T$ the leg's $\Delta V$ The mass and epoch values correspond to the state at the given asteroid, at the end of a rendezvous or self-fly-by leg, after deploying the corresponding payload. The $\Delta T$ and $\Delta V$ values refer to that leg that just ended. End of explanation pk.epoch(t[-1][2], 'mjd') Explanation: Epochs are given here as Modified Julian Dates (MJD), and can be converted as: End of explanation import os from copy import copy def greedy_step(traj): traj_asts = set(seq(traj, incl_flyby=False)) progress_bar_args = dict(leave=False, file=os.sys.stdout, desc='attempting score %d' % (score(traj)+1)) extended = [] for a in trange(len(asteroids), **progress_bar_args): if a in traj_asts: continue tt = copy(traj) if add_asteroid(tt, next_ast=a, use_cache=False): extended.append(tt) return max(extended, key=resource_rating, default=[]) # measure time taken at one level to attempt legs towards all asteroids (that aren't already in the traj.) %time _ = greedy_step(mission_to_1st_asteroid(1712)) def greedy_search(first_ast): t = mission_to_1st_asteroid(first_ast) while True: tt = greedy_step(t) if tt == []: # no more asteroids could be added return t t = tt %time T = greedy_search(first_ast=1712) score(T), resource_rating(T), final_mass(T), tof(T) * DAY2YEAR print(seq(T, incl_flyby=False)) Explanation: Greedy search In this section we perform a Greedy search for a GTOC5 trajectory. We'll start by going to asteroid 1712. Then, and at every following step, we attempt to create legs towards all still available asteroids. 
Among equally-scored alternatives, we greedily pick the one with highest resource rating to adopt into the trajectory, and continue from there. Search stops when no feasible legs are found that can take us to another asteroid. This will happen either because no solutions were found that would allow for a leg to be created, or because adding a found solution would require the spacecraft to exceed the mission's mass or time budgets. End of explanation t = mission_to_1st_asteroid(1712) Explanation: Greedy search gave us a trajectory that is able to visit 14 distinct asteroids. However, by the 14th, the spacecraft finds itself unable to find a viable target to fly to next, even though it has 84.8 kg of mass still available (the spacecraft itself weighs 500 kg, so the mission cannot go below that value), and 2 years remain in its 15 year mission. Phasing indicators A big disadvantage of the approach followed above is the high computational cost of deciding which asteroid to go to next. It entails the optimization of up to 7075 legs, only to then pick a single one and discard all the other results. An alternative is to use one of the indicators available in gtoc5/phasing.py. They can provide an indication of how likely a specific asteroid is to be an easily reachable target. End of explanation r = rate__orbital_2(dep_ast=t[-1][0], dep_t=t[-1][2], leg_dT=125) r[seq(t)] = np.inf # (exclude bodies already visited) Explanation: We use here the (improved) orbital phasing indicator to rate destinations with respect to the estimated ΔV of hypothetical legs that would depart from dep_ast, at epoch dep_t, towards each possible asteroid, arriving there within leg_dT days. We don't know exactly how long the transfer time chosen by add_asteroid() would be, but we take leg_dT=125 days as reference transfer time. End of explanation r.argsort()[:5] Explanation: Below are the 5 asteroids the indicator estimates would be most easily reachable. As we've seen above in the results from the greedy search, asteroid 4893, here the 2nd best rated alternative, would indeed be the target reachable with lowest ΔV. End of explanation [add_asteroid(copy(t), a) for a in r.argsort()[:5]] Explanation: The indicator is however not infallible. If we attempt to go from asteroid 1712 towards each of these asteroids, we find that none of them are actually reachable, except for 4893! Still, the indicator allows us to narrow our focus considerably. End of explanation def narrowed_greedy_step(traj, top=10): traj_asts = set(seq(traj, incl_flyby=False)) extended = [] ratings = rate__orbital_2(dep_ast=traj[-1][0], dep_t=traj[-1][2], leg_dT=125) for a in ratings.argsort()[:top]: if a in traj_asts: continue tt = copy(traj) if add_asteroid(tt, next_ast=a, use_cache=False): extended.append(tt) return max(extended, key=resource_rating, default=[]) def narrowed_greedy_search(first_ast, **kwargs): t = mission_to_1st_asteroid(first_ast) while True: tt = narrowed_greedy_step(t, **kwargs) if tt == []: # no more asteroids could be added return t t = tt # measure time taken at one level to attempt legs towards the best `top` asteroids %time _ = narrowed_greedy_step(mission_to_1st_asteroid(1712), top=10) %time T = narrowed_greedy_search(first_ast=1712, top=10) score(T), resource_rating(T), final_mass(T), tof(T) * DAY2YEAR print(seq(T, incl_flyby=False)) Explanation: Armed with the indicator, we can reimplement the greedy search, so it will only optimize legs towards a number of top rated alternatives, and then proceed with the best out of those. 
End of explanation gtoc_ph = init__path_handler(multiobj_evals=True) # configuring Beam P-ACO to behave as a deterministic multi-objective Beam Search _args = { 'beam_width': 20, 'branch_factor': 250, 'alpha': 0.0, # 0.0: no pheromones used 'beta': 1.0, 'prob_greedy': 1.0, # 1.0: deterministic, greedy branching decisions } bpaco = beam_paco_pareto(nr_nodes=len(asteroids), path_handler=gtoc_ph, random_state=None, **_args) # start the search # given we're running the algoritm in deterministic mode, we execute it for a single generation %time best_pf = bpaco.solve(nr_generations=1) # being this a `_pareto` class, .best returns a Pareto front # pick the first solution from the Pareto front best_eval, best = best_pf[0] # Evaluation of the best found solution # (score, mass consumed, time of flight) best_eval # sequence of asteroids visited (0 is the Earth) print(seq(best, incl_flyby=False)) # mission data structure, up to the full scoring of the first two asteroids best[:5] Explanation: We were able to find another score 14 trajectory, but this time it took us ~1 second, whereas before it was taking us 2 and a half minutes. Finding a GTOC5 trajectory of score 17 with Beam Search End of explanation %%javascript $.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js') // https://github.com/kmahelona/ipython_notebook_goodies Explanation: Generate the Table of Contents End of explanation
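Since each entry of the mission data structure is the tuple (asteroid ID, spacecraft mass, epoch, leg's Delta T, leg's Delta V) described above, it can be handy to view a trajectory as a small table. The sketch below only assumes pandas is available and that best is a trajectory in that format; the column names are descriptive labels chosen here, not names used by the gtoc5 code.

import pandas as pd

# One row per leg of the mission, in flight order
legs = pd.DataFrame(best, columns=['asteroid', 'mass_kg', 'epoch_mjd', 'leg_dT_days', 'leg_dV'])
legs.head()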
11,278
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook downloads and cleans the SCOTUS subnetwork data. It can be modified to create any jurisdiction subnetwork and also the federal appelate subnetwork. You have to modify the two paths in the cell below for your own computer. - repo_directory is the path to the cloned github repo - data_dir is the path to the data directory - I suggest putting this outside the code repo and not on dropbox since there these files can start to get large (order 10s of GBs for the text data). This code is a little jenky and subject to change. outline import code set up the data directory folder and subfolders download data from CourtListener and SCDB clean the network case metadata and edgelist make the network with metadata and save it as a graphml file set up the NLP data (you can skip this for the purpose of network analysis) Step1: network_name is the subnetwork you want to work with. It can be either a single jurisdiction (scotus, ca1, etc) or a collection of jurisdiction (such as the federal appellate courts). Currently the federal appellate courts are implemented as 'federal'. network_name is used in the make_network_data.py file. You can modify the get_courts function in this file to create other collections of courts. Step2: set up the data directory Step3: data download get opinion and cluster files from CourtListener opinions/cluster files are saved in data_dir/raw/court/ Step4: get the master edgelist from CL master edgelist is saved in data_dir/raw/ Step5: download scdb data from SCDB scdb data is saved in data_dir/scdb Step6: network data make the case metadata and edgelist add the raw case metadata data frame to the raw/ folder remove cases missing scdb ids remove detroit lumber case get edgelist of cases within desired subnetwork save case metadata and edgelist to the experiment_dir/ Step7: make graph creates the network with the desired case metadata and saves it as a .graphml file in experiment_dir/ Step8: NLP data make case text files grabs the opinion text for each case in the network and saves them as a text file in experiment_dir/textfiles/ Step9: make tf-idf matrix creates the tf-idf matrix for the corpus of cases in the network and saves them to subnet_dir + 'nlp/' Step10: Load network
Python Code: # modify these for your own computer repo_directory = '/Users/iaincarmichael/Dropbox/Research/law/law-net/' data_dir = '/Users/iaincarmichael/data/courtlistener/' Explanation: This notebook downloads and cleans the SCOTUS subnetwork data. It can be modified to create any jurisdiction subnetwork and also the federal appelate subnetwork. You have to modify the two paths in the cell below for your own computer. - repo_directory is the path to the cloned github repo - data_dir is the path to the data directory - I suggest putting this outside the code repo and not on dropbox since there these files can start to get large (order 10s of GBs for the text data). This code is a little jenky and subject to change. outline import code set up the data directory folder and subfolders download data from CourtListener and SCDB clean the network case metadata and edgelist make the network with metadata and save it as a graphml file set up the NLP data (you can skip this for the purpose of network analysis) End of explanation # which network to download data for network_name = 'scotus' # 'federal', 'ca1', etc import sys # graph package import igraph as ig # our code sys.path.append(repo_directory + 'code/') from setup_data_dir import setup_data_dir, make_subnetwork_directory from pipeline.download_data import download_bulk_resource, download_master_edgelist, download_scdb from helpful_functions import case_info sys.path.append(repo_directory + 'vertex_metrics_experiment/code/') from make_network_data import * from make_graph import make_graph from bag_of_words import make_tf_idf # some sub directories that get used raw_dir = data_dir + 'raw/' subnet_dir = data_dir + network_name + '/' text_dir = subnet_dir + 'textfiles/' # jupyter notebook settings %load_ext autoreload %autoreload 2 %matplotlib inline Explanation: network_name is the subnetwork you want to work with. It can be either a single jurisdiction (scotus, ca1, etc) or a collection of jurisdiction (such as the federal appellate courts). Currently the federal appellate courts are implemented as 'federal'. network_name is used in the make_network_data.py file. You can modify the get_courts function in this file to create other collections of courts. 
End of explanation setup_data_dir(data_dir) make_subnetwork_directory(data_dir, network_name) Explanation: set up the data directory End of explanation download_op_and_cl_files(data_dir, network_name) Explanation: data download get opinion and cluster files from CourtListener opinions/cluster files are saved in data_dir/raw/court/ End of explanation download_master_edgelist(data_dir) Explanation: get the master edgelist from CL master edgelist is saved in data_dir/raw/ End of explanation download_scdb(data_dir) Explanation: download scdb data from SCDB scdb data is saved in data_dir/scdb End of explanation # create the raw case metadata data frame in the raw/ folder make_subnetwork_raw_case_metadata(data_dir, network_name) # create clean case metadata and edgelist from raw data clean_metadata_and_edgelist(data_dir, network_name) Explanation: network data make the case metadata and edgelist add the raw case metadata data frame to the raw/ folder remove cases missing scdb ids remove detroit lumber case get edgelist of cases within desired subnetwork save case metadata and edgelist to the experiment_dir/ End of explanation make_graph(subnet_dir, network_name) Explanation: make graph creates the network with the desired case metadata and saves it as a .graphml file in experiment_dir/ End of explanation # make the textfiles for give court make_network_textfiles(data_dir, network_name) Explanation: NLP data make case text files grabs the opinion text for each case in the network and saves them as a text file in experiment_dir/textfiles/ End of explanation make_tf_idf(text_dir, subnet_dir + 'nlp/') Explanation: make tf-idf matrix creates the tf-idf matrix for the corpus of cases in the network and saves them to subnet_dir + 'nlp/' End of explanation # load the graph G = ig.Graph.Read_GraphML(subnet_dir + network_name +'_network.graphml') G.summary() Explanation: Load network End of explanation
11,279
Given the following text description, write Python code to implement the functionality described below step by step Description: Car Evaluation using Decision trees and Random Forests <hr> Decision tree learning Decision tree classifiers are attractive models of Machine Learning as they emphasize on interpretability. Like the name decision tree suggests, we can think of this model as breaking down our data by making decisions based on asking a series of questions. Let's consider the following example where we use a decision tree to decide upon an activity on a particular day Step1: Next we load our dataset into a pandas dataframe. We specify the column names before-hand. Step2: Since the target class labels are strings, we'll have to convert in a format that our classifier would understand. For this we would use the LabelEncoder class of Scikit-Learn module 'preprocessing'. This converts our class labels into [1,2,3,4] where the integers would correspond to the respective class. Step3: We define our features. Next we convert the dataset into binarized form. This makes the raw textual data which contains categorized values like vhigh etc and converts them into a dummy format better understandable to our classifier. We use the pd.get_dummies function to do it for us. Step4: Now our data looks like this Step5: We split the dataset into 60% training and 40% testing sets. Step6: Finding the Feature importances with forests of trees This examples shows the use of forests of trees to evaluate the importance of features on an artificial classification task. The red bars are the feature importances of the forest, along with their inter-trees variability. Step7: Plot the top 5 feature importances of the forest Step8: Next we implement our DecisionTreeClassifier Step9: As you can see, we achieved an accuracy of about 75% on the test set. Cross validation for Decision Trees Step10: Thus the overall accuracy of 70% helps us to better understand our classifier. It is comparatively low, but still far better than random guessing at 50%. One reason that the accuracy is a bit on the low sized might be the small size of the dataset. Several other factors might affect the accuracy. A nice feature in scikit-learn is that it allows us to export the decision tree as a .dot file after training, which we can visualize using the GraphViz program. This program is freely available at http Step11: After we have installed GraphViz on our computer, we can convert the tree.dot file into a PNG file by executing the following command from the command line in the location where we saved the tree.dot file
Python Code: import os from sklearn.tree import DecisionTreeClassifier, export_graphviz import pandas as pd import numpy as np from sklearn.cross_validation import train_test_split from sklearn import cross_validation, metrics from sklearn.ensemble import RandomForestClassifier from time import time from sklearn import preprocessing from sklearn.pipeline import Pipeline from sklearn.metrics import roc_auc_score , classification_report from sklearn.grid_search import GridSearchCV from sklearn.pipeline import Pipeline from sklearn.metrics import precision_score, recall_score, accuracy_score, classification_report Explanation: Car Evaluation using Decision trees and Random Forests <hr> Decision tree learning Decision tree classifiers are attractive models of Machine Learning as they emphasize on interpretability. Like the name decision tree suggests, we can think of this model as breaking down our data by making decisions based on asking a series of questions. Let's consider the following example where we use a decision tree to decide upon an activity on a particular day: <img src="images/decisiontrees.png"> [Image source] The decision tree model learns a series of questions to infer the class labels of the samples according to the features in our training set. Although the preceding figure illustrated the concept of a decision tree based on categorical variables, the same concept applies to our features. Using the decision algorithm, we start at the tree root and split the data on the feature that results in the largest information gain (IG). In an iterative process, we can then repeat this splitting procedure at each child node until the leaves are pure. This means that the samples at each node all belong to the same class. In practice, this can result in a very deep tree with many nodes, which can easily lead to overfitting. Thus, we typically want to prune the tree by setting a limit for the maximal depth of the tree. Parameters in decision trees One of the most important features for a decision tree is the stopping criterion. As a tree is built, the final few decisions can often be somewhat random and rely on only a small number of samples to make their decision. Using such specific nodes can result in overfitting of the training data. Instead, a stopping criterion can be used to ensure that the decision tree does not reach this exactness. Instead of using a stopping criterion, the tree could be created in full and then pruned. This pruning process removes nodes that do not provide much information to the overall process. The decision tree implementation in scikit-learn provides a method to stop the building of a tree using the following options: min_samples_split: This specifies how many samples are needed in order to create a new node in the decision tree min_samples_leaf: This specifies how many samples must be resulting from a node for it to stay The first dictates whether a decision node will be created, while the second dictates whether a decision node will be kept. Another parameter for decision tress is the criterion for creating a decision. Gini impurity and Information gain are two popular ones: Gini impurity: This is a measure of how often a decision node would incorrectly predict a sample's class Information gain: This uses information-theory-based entropy to indicate how much extra information is gained by the decision node Building a decision tree Decision trees can build complex decision boundaries by dividing the feature space into rectangles. 
Generally Decision trees with lower heights are preferred over deeper trees, since the deeper the decision tree, the more complex the decision boundary becomes, which can easily result in overfitting. Using scikit-learn, we will now train a decision tree to evaluation the condition of a car. Dataset To implement our Decision Tree Classifier we will use The Car Evaluation Database. It contains examples with the structural information removed, i.e., it directly relates CAR to the six input attributes: buying, maint, doors, persons, lug_boot, safety. Basically, we have to build a classifier to classify a car as 'Unacceptable', 'Acceptable', 'Good' and 'Very Good' based on the attributes. The different attributes values are given as follows: buying: vhigh, high, med, low. maint: vhigh, high, med, low. doors: 2, 3, 4, 5, more. persons: 2, 4, more. lug_boot: small, med, big. safety: low, med, high. You can download the dataset from here : https://archive.ics.uci.edu/ml/datasets/Car+Evaluation Once downloaded, we can move on with the code. Firstly, we manage our imports: End of explanation # read .csv from provided dataset csv_filename="car.data" # df=pd.read_csv(csv_filename,index_col=0) df=pd.read_csv(csv_filename, names=["Buying", "Maintenance" , "Doors" , "Persons" , "Lug-Boot" , "Safety", "Class"]) df.head() Explanation: Next we load our dataset into a pandas dataframe. We specify the column names before-hand. End of explanation #Convert car-class labels to numbers le = preprocessing.LabelEncoder() df['Class'] = le.fit_transform(df.Class) df['Class'].unique() features = list(df.columns) features.remove('Class') Explanation: Since the target class labels are strings, we'll have to convert in a format that our classifier would understand. For this we would use the LabelEncoder class of Scikit-Learn module 'preprocessing'. This converts our class labels into [1,2,3,4] where the integers would correspond to the respective class. End of explanation for f in features: #Get binarized columns df[f] = pd.get_dummies(df[f]) Explanation: We define our features. Next we convert the dataset into binarized form. This makes the raw textual data which contains categorized values like vhigh etc and converts them into a dummy format better understandable to our classifier. We use the pd.get_dummies function to do it for us. End of explanation df.head() X = df[features] y = df['Class'] Explanation: Now our data looks like this: End of explanation # split dataset to 60% training and 40% testing X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0) from sklearn.preprocessing import StandardScaler sc = StandardScaler() sc.fit(X_train) X_train_std = sc.transform(X_train) X_test_std = sc.transform(X_test) Explanation: We split the dataset into 60% training and 40% testing sets. End of explanation %matplotlib inline import numpy as np import matplotlib.pyplot as plt from sklearn.ensemble import ExtraTreesClassifier # Build a classification task using 3 informative features # Build a forest and compute the feature importances forest = ExtraTreesClassifier(n_estimators=250, random_state=0) forest.fit(X, y) importances = forest.feature_importances_ std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0) indices = np.argsort(importances)[::-1] # Print the feature ranking print("Feature ranking:") for f in range(X.shape[1]): print("%d. 
feature %d - %s (%f) " % (f + 1, indices[f], features[indices[f]], importances[indices[f]])) # Plot the feature importances of the forest plt.figure(num=None, figsize=(14, 10), dpi=80, facecolor='w', edgecolor='k') plt.title("Feature importances") plt.bar(range(X.shape[1]), importances[indices], color="r", yerr=std[indices], align="center") plt.xticks(range(X.shape[1]), indices) plt.xlim([-1, X.shape[1]]) plt.show() for f in range(5): print("%d. feature %d - %s (%f)" % (f + 1, indices[f], features[indices[f]] ,importances[indices[f]])) best_features = [] for i in indices[:5]: best_features.append(features[i]) Explanation: Finding the Feature importances with forests of trees This examples shows the use of forests of trees to evaluate the importance of features on an artificial classification task. The red bars are the feature importances of the forest, along with their inter-trees variability. End of explanation plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k') plt.title("Feature importances") plt.bar(range(5), importances[indices][:5], color="r", yerr=std[indices][:5], align="center") plt.xticks(range(5), best_features) plt.xlim([-1, 5]) plt.show() Explanation: Plot the top 5 feature importances of the forest End of explanation t0=time() print ("DecisionTree") dt = DecisionTreeClassifier(min_samples_split=20,random_state=99) # dt = DecisionTreeClassifier(min_samples_split=20,max_depth=5,random_state=99) clf_dt=dt.fit(X_train_std,y_train) print ("Acurracy: ", clf_dt.score(X_test_std,y_test)) t1=time() print ("time elapsed: ", t1-t0) Explanation: Next we implement our DecisionTreeClassifier: End of explanation tt0=time() print ("cross result========") scores = cross_validation.cross_val_score(dt, X, y, cv=3) print (scores) print (scores.mean()) tt1=time() print ("time elapsed: ", tt1-tt0) Explanation: As you can see, we achieved an accuracy of about 75% on the test set. Cross validation for Decision Trees: End of explanation from sklearn.tree import export_graphviz export_graphviz(tree, out_file='tree.dot', feature_names=['Persons', 'Safety']) Explanation: Thus the overall accuracy of 70% helps us to better understand our classifier. It is comparatively low, but still far better than random guessing at 50%. One reason that the accuracy is a bit on the low sized might be the small size of the dataset. Several other factors might affect the accuracy. A nice feature in scikit-learn is that it allows us to export the decision tree as a .dot file after training, which we can visualize using the GraphViz program. This program is freely available at http://www.graphviz.org and supported by Linux, Windows, and Mac OS X. 
End of explanation t2=time() print ("RandomForest") rf = RandomForestClassifier(n_estimators=100,n_jobs=-1) clf_rf = rf.fit(X_train,y_train) print ("Acurracy: ", clf_rf.score(X_test,y_test)) t3=time() print ("time elapsed: ", t3-t2) tt2=time() print ("cross result========") scores = cross_validation.cross_val_score(rf, X, y, cv=3) print (scores) print (scores.mean()) tt3=time() print ("time elapsed: ", tt3-tt2) Explanation: After we have installed GraphViz on our computer, we can convert the tree.dot file into a PNG file by executing the following command from the command line in the location where we saved the tree.dot file: dot -Tpng tree.dot -o tree.png <img src="images/tree.png"> Combining weak to strong learners via random forests Random forests have gained huge popularity in applications of machine learning during the last decade due to their good classification performance, scalability, and ease of use. Intuitively, a random forest can be considered as an ensemble of decision trees. The idea behind ensemble learning is to combine weak learners to build a more robust model, a strong learner, that has a better generalization error and is less susceptible to overfitting. The random forest algorithm can be summarized in four simple steps: Draw a random bootstrap sample of size n (randomly choose n samples from the training set with replacement). Grow a decision tree from the bootstrap sample. At each node: Randomly select d features without replacement. Split the node using the feature that provides the best split according to the objective function, for instance, by maximizing the information gain. Repeat the steps 1 to 2 k times. Aggregate the prediction by each tree to assign the class label by majority vote. There is a slight modification in step 2 when we are training the individual decision trees: instead of evaluating all features to determine the best split at each node, we only consider a random subset of those. Although random forests don't offer the same level of interpretability as decision trees, a big advantage of random forests is that we don't have to worry so much about choosing good hyperparameter values. We typically don't need to prune the random forest since the ensemble model is quite robust to noise from the individual decision trees. The only parameter that we really need to care about in practice is the number of trees k (step 3) that we choose for the random forest. Typically, the larger the number of trees, the better the performance of the random forest classifier at the expense of an increased computational cost. Via the sample size n of the bootstrap sample, we control the bias-variance tradeoff of the random forest. By choosing a larger value for n, we decrease the randomness and thus the forest is more likely to overfit. On the other hand, we can reduce the degree of overfitting by choosing smaller values for n at the expense of the model performance. In most implementations, including the RandomForestClassifier implementation in scikit-learn, the sample size of the bootstrap sample is chosen to be equal to the number of samples in the original training set, which usually provides a good bias-variance tradeoff. For the number of features d at each split, we want to choose a value that is smaller than the total number of features in the training set. A reasonable default that is used in scikit-learn and other implementations is d = sqroot(m), where m is the number of features in the training set. 
Parameters in Random forests The Random forest implementation in scikit-learn is called RandomForestClassifier, and it has a number of parameters. As Random forests use many instances of DecisionTreeClassifier, they share many of the same parameters such as the criterion (Gini Impurity or Entropy/Information Gain), max_features, and min_samples_split. Also, there are some new parameters that are used in the ensemble process: n_estimators: This dictates how many decision trees should be built. A higher value will take longer to run, but will (probably) result in a higher accuracy. oob_score: If true, the method is tested using samples that aren't in the random subsamples chosen for training the decision trees. n_jobs: This specifies the number of cores to use when training the decision trees in parallel. The scikit-learn package uses a library called Joblib for in-built parallelization. This parameter dictates how many cores to use. By default, only a single core is used—if you have more cores, you can increase this, or set it to -1 to use all cores. Applying Random forests Random forests in scikit-learn use the estimator interface, allowing us to use almost the exact same code as before to do cross fold validation: End of explanation
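One practical caveat, added here as a sketch rather than taken from the original notebook: newer scikit-learn releases removed the cross_validation module used above in favour of model_selection, so the equivalent cross-validation call becomes:
from sklearn.model_selection import cross_val_score
scores = cross_val_score(rf, X, y, cv=3)
print(scores, scores.mean())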
11,280
Given the following text description, write Python code to implement the functionality described below step by step Description: Title Step1: Create some text Step2: Apply regex
Python Code: # Load regex package import re Explanation: Title: Match A Word Slug: match_a_word Summary: Match A Word Date: 2016-05-01 12:00 Category: Regex Tags: Basics Authors: Chris Albon Based on: Regular Expressions Cookbook Preliminaries End of explanation # Create a variable containing a text string text = 'The quick brown fox jumped over the lazy brown bear.' Explanation: Create some text End of explanation # Find any word of three letters re.findall(r'\b...\b', text) Explanation: Apply regex End of explanation
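A small optional extension of the recipe above, added as a sketch: because the dot in the pattern also matches digits and punctuation, a slightly stricter way to find three-letter words is to require word characters explicitly:
re.findall(r'\b\w{3}\b', text)
For the sample sentence above this should return something like ['The', 'fox', 'the'].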
11,281
Given the following text description, write Python code to implement the functionality described below step by step Description: COVID19 Exposure Notification System Risk Simulator [email protected], [email protected] (broken link) Last update Step1: Infectiousness vs time since onset of symptoms (TOST) Let $\Delta=T^e - T^s$ be the time between when Alice got exposed to Bob and when Bob first showed symptoms. Let $f_{\rm{inf}}(\Delta)$ be the infectiousness. Gaussian approximation We use the Gaussian approximation from the following paper. Risk scoring in contact tracing apps, Mark Briers, Marcos Charalambides, Christophe Fraser, Chris Holmes, Radka Jersakova, James Lomax, and Tom Lovett. 26 July 2020 $$ f_{inf}(\Delta) = \exp\left( -\frac{ (\Delta - \mu)^2 }{2 \sigma^2} \right) $$ where $\mu=-0.3$ , $\sigma=2.75$ (units of days). We plot this below. Step2: Skew-logistic distribution In Ferretti et al 2020, they note that the infectiousness profile varies depending on the incubation period. We model this as shown below. Step3: Dose vs distance Briers (2020) propose the following quadratic model $$ g(d) = \min(1, D^2_{\min}/d^2) $$ They set $D^2_{\min}=1$ based on argument of the physics of droplet spread. Step4: Wilson (2020) use a physical simulator of droplet spread. We fit a cubic spline to their Monte Carlo simulation. Results are shown below. Step5: Bluetooth simulator Step6: Noisy simulation Step7: Probability of getting infected This depends on 3 factors Step8: Risk score Step9: Attenuation Step10: Infectiousness levels Here is a figure from Wilson et al, "Quantifying SARS-CoV-2 infection risk within the Google/Apple exposure notification framework to inform quarantine recommendations". Colors are the 6 transmission levels supported by GAEN v1.1. <img src="https Step11: Probabilistic risk Step12: Risk score plots Step13: True risk curve vs approximation Step14: ROC plots Step15: ROC with noise Step16: Interactive
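As a quick worked check of the quadratic dose model quoted above (added for clarity, using the stated $D_{\min}=1$): at $d = 2$ m the dose factor is $\min(1, 1/2^2) = 0.25$, while for any $d \le 1$ m it saturates at 1, so the model only starts discounting an exposure once the contact is further than one metre.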
Python Code: import itertools import numpy as np import matplotlib.pyplot as plt import scipy.stats import pandas as pd from collections import namedtuple from enum import Enum, IntEnum from dataclasses import dataclass import matplotlib.cm as cm import sklearn from sklearn import metrics # Configure plot style sheet plt.style.use('fivethirtyeight') plt.rcParams['axes.titlesize'] = 'medium' # can take 'large', 'x-large' plt.rcParams['axes.labelsize'] = 'medium' import jax import jax.numpy as jnp Explanation: COVID19 Exposure Notification System Risk Simulator [email protected], [email protected] (broken link) Last update: 22 August 2020 References We base our approach on these papers Quantifying SARS-CoV-2-infection risk withing the Apple/Google exposure notification framework to inform quarantine recommendations, Amanda Wilson, Nathan Aviles, Paloma Beamer, Zsombor Szabo, Kacey Ernst, Joanna Masel. July 2020 The timing of COVID-19 transmission, Luca Ferretti et al, Sept. 2020 Risk scoring in contact tracing apps, Mark Briers, Marcos Charalambides, Christophe Fraser, Chris Holmes, Radka Jersakova, James Lomax, and Tom Lovett. 26 July 2020 End of explanation def infectiousness_gaussian(deltas): mu = -0.3; s = 2.75; ps = np.exp(- np.power(deltas-mu,2) / (2*s*s)) return ps deltas = np.arange(-10, 10, 0.1) ps = infectiousness_gaussian(deltas) plt.figure(); plt.plot(deltas, ps) plt.xlabel('days since symptom onset'); plt.ylabel('infectiousness'); Explanation: Infectiousness vs time since onset of symptoms (TOST) Let $\Delta=T^e - T^s$ be the time between when Alice got exposed to Bob and when Bob first showed symptoms. Let $f_{\rm{inf}}(\Delta)$ be the infectiousness. Gaussian approximation We use the Gaussian approximation from the following paper. Risk scoring in contact tracing apps, Mark Briers, Marcos Charalambides, Christophe Fraser, Chris Holmes, Radka Jersakova, James Lomax, and Tom Lovett. 26 July 2020 $$ f_{inf}(\Delta) = \exp\left( -\frac{ (\Delta - \mu)^2 }{2 \sigma^2} \right) $$ where $\mu=-0.3$ , $\sigma=2.75$ (units of days). We plot this below. 
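A short numeric check of the Gaussian infectiousness curve, added for clarity: the curve peaks at 1 when $\Delta = \mu = -0.3$ days, and one standard deviation away from the peak (2.75 days on either side) it falls to $\exp(-1/2) \approx 0.61$, which matches the broad profile plotted here.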
End of explanation def skew_logistic_scaled(x, alpha, mu, sigma): return scipy.stats.genlogistic.pdf(x, alpha, loc=mu, scale=sigma) def ptost_conditional(ts, incubation): mu = -4 sigma = 1.85 alpha = 5.85 tau = 5.42 fpos = skew_logistic_scaled(ts, alpha, mu, sigma) #fneg = skew_logistic_scaled(ts, alpha, mu, sigma*incubation/tau) # error in paper fneg = skew_logistic_scaled(ts*tau/incubation, alpha, mu, sigma) ps = fpos neg = np.where(ts < 0) ps[neg] = fneg[neg] ps = ps/np.max(ps) return ps def incubation_dist(t): mu = 1.621 sig = 0.418 rv = scipy.stats.lognorm(sig, scale=np.exp(mu)) return rv.pdf(t) def ptost_uncond(tost_times): #p(t) = sum_{k=1}^14 p(incubation=k) ptost(t | k) / max_t( ptost(t|k) ) incub_times = np.arange(1, 14, 1) incub_probs = incubation_dist(incub_times) tost_probs = np.zeros_like(tost_times, dtype=float) for k, incub in enumerate(incub_times): ps = ptost_conditional(tost_times, incub) tost_probs += incub_probs[k] * ps #tost_probs = tost_probs/np.max(tost_probs) return tost_probs infectiousness_curve_times = np.arange(-14, 14+1, 0.1) infectiousness_curve_vals = ptost_uncond(infectiousness_curve_times) def infectiousness_skew_logistic(delta): return np.interp(delta, infectiousness_curve_times, infectiousness_curve_vals) print(infectiousness_skew_logistic(5)) print(infectiousness_skew_logistic(np.array([5]))) tost = np.arange(-10, 10, 0.1) incubs = np.array([3, 5.5, 9]) #https://matplotlib.org/3.1.1/tutorials/colors/colors.html colors = ['tab:blue', 'tab:purple', 'tab:red'] plt.figure() for i, incub in enumerate(incubs): ps = ptost_conditional(tost, incub) #ps = ps/np.max(ps) name = 'incubation = {:0.2f}'.format(incub) plt.plot(tost, ps, label=name, color=colors[i]) ps = ptost_uncond(tost) ps = [infectiousness_skew_logistic(t) for t in tost] qs = infectiousness_skew_logistic(tost) assert np.allclose(ps, qs) plt.plot(tost, ps, label='avg', color='k') plt.legend() plt.xlabel('days since onset of symptoms') plt.ylabel('prob(transmission)') Explanation: Skew-logistic distribution In Ferretti et al 2020, they note that the infectiousness profile varies depending on the incubation period. We model this as shown below. End of explanation def dose_curve_quadratic(d, Dmin=1): Dmin = 1 m = np.power(Dmin,2)/np.power(d, 2) return np.minimum(1, m) d = np.linspace(0, 5, 100) p = dose_curve_quadratic(d) plt.figure() plt.plot(d, p) plt.xlabel('distance (meters)'); plt.ylabel('dose') Explanation: Dose vs distance Briers (2020) propose the following quadratic model $$ g(d) = \min(1, D^2_{\min}/d^2) $$ They set $D^2_{\min}=1$ based on argument of the physics of droplet spread. 
End of explanation # from scipy.interpolate import splev, splrep # def dose_curve_spline_fit(): # url = "https://raw.githubusercontent.com/probml/covid19/master/WilsonMasel/stelios-dose-data-scaled.csv" # df = pd.read_csv(url) # distances = df['distance'].to_numpy() # doses = df['dose'].to_numpy() # ndx1 = (distances <= 1) # ndx2 = (distances > 1) # x = distances[ndx1] # y = doses[ndx1] # spline1 = splrep(x, y) # (t, c, k), contains knots, coefficients, degree # x = distances[ndx2] # y = doses[ndx2] # spline2 = splrep(x, y) # return spline1, spline2 # def dose_curve_spline(x, spline1, spline2): # if np.isscalar(x): # x = np.array([x]) # scalar = True # else: # scalar = False # n = len(x) # ndx = np.where(x <= 1) # y1 = splev(x, spline1) # y2 = splev(x, spline2) # y = np.zeros(n) # y[x <= 1] = y1[x <= 1] # y[x > 1] = y2[x > 1] # if scalar: # y = y[0] # return y # spline1, spline2 = dose_curve_spline_fit() # d = np.linspace(0, 5, 100) # p = dose_curve_spline(d, spline1, spline2) # plt.figure() # plt.plot(d, p) # plt.xlabel('distance (meters)'); # plt.ylabel('dose') # plt.yscale('log') Explanation: Wilson (2020) use a physical simulator of droplet spread. We fit a cubic spline to their Monte Carlo simulation. Results are shown below. End of explanation # Lovett paper https://arxiv.org/abs/2007.05057 # Lognormal noise model # mu = slope * log(distance) + intercept' # log(-rssi) ~ N(mu, sigma) # E log(-R) = slope*log(D) + inter # D = exp( (log(-R) - inter) / slope) # attenuation = tx - rx - rssi def atten_to_dist(atten, params): rssi = params.tx - (atten + params.correction) return np.exp((np.log(-rssi) - params.intercept)/params.slope) def dist_to_atten(distance, params): mu = params.intercept + params.slope * np.log(distance) rssi = -np.exp(mu) atten = params.tx - (rssi + params.correction) return atten def dist_to_atten_sample_lognormal(distances, params): if params.sigma == 0: return dist_to_atten(distances, params) N = len(distances) mus = params.intercept + params.slope * np.log(distances) rssi = -scipy.stats.lognorm(s=params.sigma, scale=np.exp(mus)).rvs() atten = params.tx - (rssi + params.correction) return atten # We use regression parameters from Fig 4 of # Lovett paper https://arxiv.org/abs/2007.05057 # estimated from H0H1 data @dataclass class BleParams: slope: float = 0.21 intercept: float = 3.92 sigma: float = np.sqrt(0.33) tx: float = 0.0 correction: float=2.398 name: str = 'briers-lognormal' ble_params = BleParams() attens = np.arange(40, 90) distances = atten_to_dist(attens, ble_params) fig, axs = plt.subplots(1,1) axs = np.reshape(axs, (1,)) ax = axs[0] ax.plot(attens, distances) ax.set_xlabel('attenuation (dB)') ax.set_ylabel('distance (m)') np.sqrt(0.33) Explanation: Bluetooth simulator End of explanation ble_params_mle = BleParams(sigma=np.sqrt(0.33), name = 'briers-mle') ble_params_low_noise = BleParams(sigma=0.01, name = 'briers-low-noise') ble_params_no_noise = BleParams(sigma=0, name = 'briers-no-noise') ble_params_list = [ble_params_no_noise, ble_params_low_noise, ble_params_mle] distances = [] nrep = 100 for d in range(1, 10+1): for i in range(nrep): distances.append(d) distances = np.array(distances) fig, axs = plt.subplots(1,3, figsize=(15,5)) axs = np.reshape(axs, (3,)) for i, params in enumerate(ble_params_list): mu = dist_to_atten(distances, params) np.random.seed(0) attens = dist_to_atten_sample_lognormal(distances, params) ax = axs[i] ax.plot(mu, '-', linewidth=3) ax.plot(attens, '.') ax.set_ylabel('attenuation (dB)') ax.set_xlabel('sample') 
ax.set_title(params.name) #fname = '../Figures/bluetoothSamples_{}.png'.format(params.name) #plt.savefig(fname) Explanation: Noisy simulation End of explanation @dataclass class Exposure: duration: float = np.nan distance: float = np.nan atten: float = np.nan days_exp: int = np.nan # days since exposure days_sym: int = np.nan # days since symptom onset @dataclass class ModelParams: ble_params: BleParams = ble_params distance_fun: str = 'quadratic' # quadratic or spline Dmin: float = 1 infectiousness_fun: str = 'skew-logistic' # gaussian or skew-logistic beta: float = 1e-3 params = ModelParams() def compute_dose(expo, params): if not np.isnan(expo.atten): dist = atten_to_dist(expo.atten, params.ble_params) else: dist = expo.distance if params.distance_fun == 'quadratic': fd = dose_curve_quadratic(dist, params.Dmin) elif params.distance_fun == 'spline': fd = dose_curve_spline(dist) else: fd = 1 if not np.isnan(expo.days_sym): if params.infectiousness_fun == 'gaussian': finf = infectiousness_gaussian(expo.days_sym) elif params.infectiousness_fun == 'skew-logistic': finf = infectiousness_skew_logistic(expo.days_sym) else: finf = 1 else: finf = 1 dose = expo.duration * fd * finf return dose def prob_infection(expo, params): dose = compute_dose(expo, params) return 1-np.exp(-params.beta * dose) def prob_infections(exposures, params): dose = 0 for expo in exposures: dose += compute_dose(expo, params) return 1-np.exp(-params.beta * dose) def prob_infection_batch(attenuations, durations, symptom_days, params, distances=None): if distances is None: distances = atten_to_dist(attenuations, params.ble_params) if params.distance_fun == 'quadratic': fd = dose_curve_quadratic(distances) elif params.distance_fun == 'spline': fd = dose_curve_spline(distances) if params.infectiousness_fun == 'gaussian': finf = infectiousness_gaussian(symptom_days) elif params.infectiousness_fun == 'skew-logistic': finf = infectiousness_skew_logistic(symptom_days) doses = durations * fd * finf return 1-np.exp(-params.beta * doses) distances = np.array([0.8, 0.9, 1.0, 1.1, 1.2, 1.5, 2.0]) dur = 8*60 ps = [] for dist in distances: expo = Exposure(distance=dist, duration=dur, days_sym = 0) p = prob_infection(expo, params) ps.append(p) qs = prob_infection_batch(None, dur, 0, params, distances) assert np.allclose(ps, qs) print(ps) Explanation: Probability of getting infected This depends on 3 factors: how long was the exposure, how far, and how infectious was the transmitter. 
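# Added descriptive note (a sketch, not part of the original notebook): atten_to_dist and
# dist_to_atten are exact inverses of one another, and randomness only enters through
# dist_to_atten_sample_lognormal. A quick consistency check along those lines:
# assert np.isclose(atten_to_dist(dist_to_atten(2.0, ble_params), ble_params), 2.0)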
End of explanation # https://enconfig.storage.googleapis.com/enconfig_fixed.html @dataclass class RiskConfig: ble_thresholds: np.array = np.array([]) ble_weights: np.array = np.array([]) inf_levels: np.array = np.array([]) inf_weights: np.array = np.array([]) name: str = '' beta: float = 3.1 * 1e-6 # Wilson table 1 Explanation: Risk score End of explanation config_swiss = RiskConfig(ble_thresholds = np.array([53, 60]), ble_weights = np.array([1.0, 0.5, 0.0]), name= 'Switzerland') config_germany = RiskConfig(ble_thresholds = np.array([55, 63]), ble_weights = np.array([1.0, 0.5, 0.0]), name= 'Germany') config_ireland = RiskConfig(ble_thresholds = np.array([56, 62]), ble_weights = np.array([1.0, 1.0, 0.0]), name= 'Ireland') config_wilson = RiskConfig(ble_thresholds = np.array([50, 70]), ble_weights = np.array([2.39, 0.6, 0.06]), name= 'Arizona') config_list = [config_swiss, config_germany, config_ireland, config_wilson] attens = np.arange(40, 80) distances = atten_to_dist(attens, ble_params) fig, axs = plt.subplots(1,1, figsize=(8,8)) axs = np.reshape(axs, (1,)) ax = axs[0] ax.plot(attens, distances) ax.set_xlabel('attenuation (dB)') ax.set_ylabel('distance (m)') names = [config.name for config in config_list] colors = ['r', 'g', 'b', 'k'] handles = [] for i, config in enumerate(config_list): for j, thresh in enumerate(config.ble_thresholds): dist = atten_to_dist(thresh, ble_params) handle = ax.vlines(thresh, 0, dist, color=colors[i]) ax.hlines(dist, np.min(attens), thresh, color=colors[i]) if j==0: handles.append(handle) ax.legend(handles, names) plt.show() def attenuation_score(atten, thresholds, weights): bin = np.digitize(atten, thresholds) watten = weights[bin] return watten def attenuation_score_batch(attenuations, thresholds, weights): attenuations = np.atleast_1d(attenuations) labels = np.digitize(attenuations, thresholds) vecs = jax.nn.one_hot(labels, num_classes = len(weights)) tmp = jnp.multiply(weights, vecs) scores = jnp.sum(tmp, 1) return scores thresholds = np.array([50, 70]) weights = np.array([2.39, 0.6, 0.06]) attens = np.array([40, 52, 66, 99]) buckets = np.digitize(attens, thresholds) print(buckets) ps = [attenuation_score(a, thresholds, weights) for a in attens] qs = attenuation_score_batch(attens, thresholds, weights) assert np.allclose(ps, qs) rs = np.array([attenuation_score_batch(a, thresholds, weights) for a in attens]).flatten() assert np.allclose(ps, rs) Explanation: Attenuation End of explanation def make_infectiousness_params_v1(): inf_pre = np.zeros((9), dtype=int) inf_post = np.zeros((5), dtype=int) inf_mid = np.array([1, 3, 4, 5, 6, 6, 6, 6, 5, 4, 3, 2, 2, 1, 1]) inf_levels = np.concatenate((inf_pre, inf_mid, inf_post)) inf_weights = np.array([0, 10**1, 10**1.2, 10**1.4, 10**1.6, 10**1.8, 10**2]) return inf_levels, inf_weights def make_infectiousness_params_v2(): inf_pre = np.zeros((9), dtype=int) inf_post = np.zeros((5), dtype=int) inf_mid6 = np.array([1, 3, 4, 5, 6, 6, 6, 6, 5, 4, 3, 2, 2, 1, 1]) inf_mid = np.ones_like(inf_mid6) ndx = (inf_mid6 >= 5) inf_mid[ndx] = 2 #inf_mid = np.array([1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1]) inf_levels = np.concatenate((inf_pre, inf_mid, inf_post)) inf_weights = np.array([0, 10**1.6, 10**2]) return inf_levels, inf_weights def infectiousness_score(days_since_symptoms, inf_levels, inf_weights): # days_since_symptoms can be -14..14 i = days_since_symptoms+14 level = inf_levels[i] return inf_weights[level] def infectiousness_score_batch(symptom_days, inf_levels, inf_weights): # symptom_days is an array of ints in 
-14..14 symptom_days = np.atleast_1d(symptom_days) inf_labels = inf_levels[symptom_days + 14] inf_vecs = jax.nn.one_hot(inf_labels, num_classes = len(inf_weights)) tmp = jnp.multiply(inf_weights, inf_vecs) scores = jnp.sum(tmp, 1) return scores # debugging ts = np.arange(0, 5, 1) levels, weights = make_infectiousness_params_v1() ps = [infectiousness_score(t, levels, weights) for t in ts] qs = infectiousness_score_batch(ts, levels, weights) assert np.allclose(ps, qs) qs = [infectiousness_score_batch(t, levels, weights) for t in ts] qs2 = np.array(qs).flatten() print(ps) print(qs) print(qs2) assert np.allclose(ps, qs2) ts = np.arange(-14, 14+1, 1) levels, weights = make_infectiousness_params_v1() ps = infectiousness_score_batch(ts, levels, weights) import matplotlib.cm as cm cmap = cm.get_cmap('jet') colors = [cmap(c/7) for c in levels] plt.figure() plt.bar(ts, ps, color = colors) plt.xlabel('days since symptom onset'); plt.ylabel('transmission risk'); levels, weights = make_infectiousness_params_v2() ps = infectiousness_score_batch(ts, levels, weights) palette =['black', 'green', 'red'] colors = [palette[c] for c in levels] plt.figure() plt.bar(ts, ps, color = colors) plt.xlabel('days since symptom onset'); plt.ylabel('transmission risk'); Explanation: Infectiousness levels Here is a figure from Wilson et al, "Quantifying SARS-CoV-2 infection risk within the Google/Apple exposure notification framework to inform quarantine recommendations". Colors are the 6 transmission levels supported by GAEN v1.1. <img src="https://github.com/probml/covid19/blob/master/Figures/infectiousness-TOST-MaselFig5A.png?raw=true"> End of explanation def risk_score(expo, config): winf = infectiousness_score(expo.days_sym, config.inf_levels, config.inf_weights) watten = attenuation_score(expo.atten, config.ble_thresholds, config.ble_weights) return expo.duration * watten * winf def prob_risk_score(expo, config): r = risk_score(expo, config) return 1-np.exp(-config.beta * r) # Same interface as # prob_infection_batch(attenuations, durations, symptom_days, params) def prob_risk_score_batch(attenuations, durations, symptom_days, config): attenuations = np.atleast_1d(attenuations) durations = np.atleast_1d(durations) symptom_days = np.atleast_1d(symptom_days) winf = infectiousness_score_batch(symptom_days, config.inf_levels, config.inf_weights) watten = attenuation_score_batch(attenuations, config.ble_thresholds, config.ble_weights) risks = durations * watten * winf return 1-np.exp(-config.beta * risks) Explanation: Probabilistic risk End of explanation levels, weights = make_infectiousness_params_v1() config_wilsonv1 = RiskConfig(ble_thresholds = np.array([50, 70]), ble_weights = np.array([2.39, 0.6, 0.06]), inf_weights = weights, inf_levels = levels, name = 'thresh2_inf6') levels, weights = make_infectiousness_params_v2() config_wilsonv2 = RiskConfig(ble_thresholds = np.array([50, 70]), ble_weights = np.array([2.39, 0.6, 0.06]), inf_weights = weights, inf_levels = levels, name = 'thresh2_inf2') levels, weights = make_infectiousness_params_v1() config_wilsonv3 = RiskConfig(ble_thresholds = np.array([50, 60, 70]), # made up ble_weights = np.array([2.39, 0.6, 0.06]), # made up inf_weights = weights, inf_levels = levels, name = 'thresh3_inf6') levels, weights = make_infectiousness_params_v2() config_wilsonv4 = RiskConfig(ble_thresholds = np.array([50, 60, 70]), # made up ble_weights = np.array([2.39, 0.6, 0.2, 0.06]), # made up inf_weights = weights, inf_levels = levels, name = 'thresh3_inf2') attens = 50 # 
np.linspace(40, 80, 3, endpoint=True) symptoms = np.arange(0,5) durations = 80 ps = prob_infection_batch(attens, durations, symptoms, params) print(ps) qs = prob_risk_score_batch(attens, durations, symptoms, config_wilsonv1) print(qs) Explanation: Risk score plots End of explanation def plot_risk_vs_symptoms_and_durations(attens, durations, symptoms, config, params): ndur = len(durations) if ndur==4: fig, axs = plt.subplots(2,2, figsize=(15,15), sharex=True, sharey=True) axs = np.reshape(axs, (4,)) elif ndur==2: fig, axs = plt.subplots(1,2, figsize=(15,10), sharex=True, sharey=True) axs = np.reshape(axs, (2,)) elif ndur==1: fig, axs = plt.subplots(1,1) axs = np.reshape(axs, (1,)) else: print('unknown figure layout') return cmap = cm.get_cmap('plasma') nattens = len(attens) colors = [cmap(c/nattens) for c in range(nattens)] for i, dur in enumerate(durations): ax = axs[i] labels = [] handles = [] for j, atten in enumerate(attens): ps = prob_infection_batch(atten, dur, symptoms, params) qs = prob_risk_score_batch(atten, dur, symptoms, config) label = 'atten={}'.format(atten) labels.append(label) h = ax.plot(symptoms, ps, '-', color=colors[j], label=label) handles.append(h) ax.plot(symptoms, qs, ':', linewidth=3, color=colors[j]) ax.set_yscale('log') ax.set_title('config = {}, dur = {}, atten = {} to {}'.format( config.name, dur, np.min(attens), np.max(attens))) #ax.legend(handles, labels) ax.set_xlabel('days since symptom onset') ax.set_ylabel('prob. infection') def plot_risk_vs_symptoms(attens, dur, symptoms, config, params, ax): cmap = cm.get_cmap('plasma') nattens = len(attens) colors = [cmap(c/nattens) for c in range(nattens)] for j, atten in enumerate(attens): ps = prob_infection_batch(atten, dur, symptoms, params) qs = prob_risk_score_batch(atten, dur, symptoms, config) label = 'atten={}'.format(atten) h = ax.plot(symptoms, ps, '-', color=colors[j], label=label) ax.plot(symptoms, qs, ':', linewidth=3, color=colors[j]) ax.set_yscale('log') ax.set_title('{}, dur={}, A={}:{}'.format( config.name, dur, np.min(attens), np.max(attens))) ax.set_xlabel('days since symptom onset') ax.set_ylabel('prob. 
infection') attens = np.linspace(40, 80, 10, endpoint=True) symptoms = np.arange(-12,12) #durations = np.array([15,1*60,4*60,8*60]) duration = 15 config_list = [config_wilsonv1, config_wilsonv2, config_wilsonv3, config_wilsonv4] fig, axs = plt.subplots(2,2, figsize=(12,12), sharex=True, sharey=True) axs = np.reshape(axs, (4,)) for i, config in enumerate(config_list): plot_risk_vs_symptoms(attens, duration, symptoms, config, params, axs[i]) Explanation: True risk curve vs approximation End of explanation # compute min acceptible probability of infection atten = dist_to_atten(2, ble_params) expo = Exposure(atten=atten, duration=15, days_sym = 0) pthresh = prob_infection(expo, params) print(pthresh) expo = Exposure(atten=atten, duration=15, days_sym = 5) pthresh = prob_infection(expo, params) print(pthresh) def make_curves_batch(attens, durations, symptoms, config, params): vals = itertools.product(durations, attens, symptoms) X = np.vstack([np.array(v) for v in vals]) durations_grid = X[:,0] attens_grid = X[:,1] sym_grid = np.array(X[:,2], dtype=int) ps = prob_infection_batch(attens_grid, durations_grid, sym_grid, params) qs = prob_risk_score_batch(attens_grid, durations_grid, sym_grid, config) return ps, qs def make_curves_batch_noise(attens, durations, symptoms, config, params, ble_params): distances = atten_to_dist(attens, ble_params) attens = dist_to_atten_sample_lognormal(distances, ble_params) vals = itertools.product(durations, attens, symptoms) X = np.vstack([np.array(v) for v in vals]) durations_grid = X[:,0] attens_grid = X[:,1] sym_grid = np.array(X[:,2], dtype=int) ps = prob_infection_batch(attens_grid, durations_grid, sym_grid, params) qs = prob_risk_score_batch(attens_grid, durations_grid, sym_grid, config) return ps, qs import itertools attens = np.linspace(40, 80, 10, endpoint=True) symptoms = np.arange(-5, 10) # must be int durations = np.linspace(5, 1*60, 10, endpoint=True) config_list = [config_wilsonv1, config_wilsonv2, config_wilsonv3, config_wilsonv4] #config_list = [config_wilsonv1] fig, axs = plt.subplots(2,2, figsize=(8,8), sharex=True, sharey=True) axs = np.reshape(axs, (4,)) for i, config in enumerate(config_list): ps, qs = make_curves_batch(attens, durations, symptoms, config, params) yhat = (ps > pthresh) fpr, tpr, thresholds = metrics.roc_curve(yhat, qs) auc = metrics.auc(fpr, tpr) frac_pos = np.sum(yhat)/len(yhat) #print(frac_pos) ax = axs[i] ax.plot(fpr, tpr) ax.set_title('AUC={:0.2f}, config={}'.format(auc, config.name)) ax.set_xlabel('FPR') ax.set_ylabel('TPR') print(ble_params) Explanation: ROC plots End of explanation import itertools n = 10 attens = np.linspace(40, 80, 10, endpoint=True) symptoms = np.arange(-5, 10) # must be int durations = np.linspace(5, 1*60, 10, endpoint=True) config_list = [config_wilsonv1, config_wilsonv2, config_wilsonv3, config_wilsonv4] ble_params_roc = ble_params_mle fig, axs = plt.subplots(2,2, figsize=(8,8), sharex=True, sharey=True) axs = np.reshape(axs, (4,)) for i, config in enumerate(config_list): tprs = [] aucs = [] median_fpr = np.linspace(0, 1, 100) np.random.seed(1041) for j in range(n): ps, qs = make_curves_batch_noise(attens, durations, symptoms, config, params, ble_params_roc) yhat = (ps > pthresh) fpr, tpr, threshold = metrics.roc_curve(yhat, qs) auc = metrics.auc(fpr, tpr) frac_pos = np.sum(yhat)/len(yhat) interp_tpr = np.interp(median_fpr, fpr, tpr) interp_tpr[0] = 0.0 tprs.append(interp_tpr) aucs.append(auc) if j % 10 == 0: ax = axs[i] ax.plot(fpr, tpr, color='blue', lw=1, alpha=0.1) ax.set_xlabel('FPR') 
ax.set_ylabel('TPR') ax.plot([0, 1], [0, 1], linestyle='--', lw=2, color='grey', alpha=0.8) median_tpr = np.median(tprs, axis=0) median_tpr[-1] = 1.0 median_auc = metrics.auc(median_fpr, median_tpr) auc_lo = np.quantile(aucs, 0.025, axis=0) auc_hi = np.quantile(aucs, 0.975, axis=0) std_auc = np.std(aucs) ax.plot(median_fpr, median_tpr, color='red', label=r'Median ROC (AUC = %0.2f $\pm$ %0.2f)' % (median_auc, std_auc), lw=2, alpha=0.8) ax.set_title('AUC={:0.2f}-{:0.2f}, config={}'.format(auc_lo, auc_hi, config.name)) tprs_hi = np.quantile(tprs, 0.025, axis=0) tprs_lo = np.quantile(tprs, 0.975, axis=0) ax.fill_between(median_fpr, tprs_lo, tprs_hi, color='grey', alpha=0.4, label=r'$\pm$ 1 std. dev.') # ax.legend(loc="lower right") Explanation: ROC with noise End of explanation #@title # Simulation configuration #@markdown Make selections and then **click the Play button on the left top #@markdown corner**. #@markdown --- #@markdown ## Attenuation #@markdown ### Functional form #@markdown Default = quadratic - not implemented yet (need to simulate distances). distance_fun = 'spline' #@param ['quadratic', 'spline'] {type:"string"} #@markdown ### Quadratic parameters #@markdown Any value is for D_min is possible, but for any value outwith [0, 5] #@markdown this is a straight line (default = 1). Dmin = 1 #@param {type:"slider", min:0.5, max:5, step:0.1} def prob_infection_batch(attenuations, durations, symptom_days, params, Dmin, distances=None): if distances is None: distances = atten_to_dist(attenuations, params.ble_params) if params.distance_fun == 'quadratic': fd = dose_curve_quadratic(distances, Dmin) elif params.distance_fun == 'spline': fd = dose_curve_spline(distances) if params.infectiousness_fun == 'gaussian': finf = infectiousness_gaussian(symptom_days) elif params.infectiousness_fun == 'skew-logistic': finf = infectiousness_skew_logistic(symptom_days) doses = durations * fd * finf return 1-np.exp(-params.beta * doses) def make_curves_batch(attens, durations, symptoms, config, params, Dmin): vals = itertools.product(durations, attens, symptoms) X = np.vstack([np.array(v) for v in vals]) durations_grid = X[:,0] attens_grid = X[:,1] sym_grid = np.array(X[:,2], dtype=int) ps = prob_infection_batch(attens_grid, durations_grid, sym_grid, params, Dmin) qs = prob_risk_score_batch(attens_grid, durations_grid, sym_grid, config) return ps, qs #@markdown ### Noise in converson of attenuation to distance #@markdown Default = 0.01 sigma = 0.01 #@param {type:"slider", min:0, max:0.05, step:0.001} ble_params = BleParams(sigma = sigma) #@markdown --- #@markdown ## Infectiousness #@markdown ### Functional form #@markdown Default = skew-logistic infectiousness_fun = 'skew-logistic' #@param ['skew-logistic', 'gaussian'] {type:"string"} params = ModelParams(distance_fun = 'quadratic', infectiousness_fun = infectiousness_fun) #@title # Express Notification configuration #@markdown Make selections and then **click the Play button on the left top #@markdown corner**. #@markdown --- #@markdown ## Attenuation weight (%) immediate = 129 #@param {type:"slider", min:0, max:200, step:1} near = 80 #@param {type:"slider", min:0, max:200, step:1} medium = 27 #@param {type:"slider", min:0, max:200, step:1} other = 0 #@param {type:"slider", min:0, max:200, step:1} atten_weights = [immediate, near, medium, other] #@markdown --- #@markdown ## Attenuation thresholds (dB) #@markdown Please make sure that each successive threshold is equal to or less #@markdown than the previous. 
immediate_near = 60 #@param {type:"slider", min:0, max:255, step:1} near_medium = 70 #@param {type:"slider", min:0, max:255, step:1} medium_far = 80 #@param {type:"slider", min:0, max:255, step:1} atten_thresholds = [immediate_near, near_medium, medium_far] #@markdown --- #@markdown ## Infectiousness weight (%) standard = 44 #@param {type:"slider", min:0, max:250, step:1} high= 75 #@param {type:"slider", min:0, max:250, step:1} inf_weights = [0, standard, high] #@markdown --- #@markdown ## Symptom onset (days since onset) minus_14 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} minus_13 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} minus_12 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} minus_11 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} minus_10 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} minus_09 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} minus_08 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} minus_07 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} minus_06 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} minus_05 = 'Standard' #@param ['Drop', 'Standard', 'High'] {type:"string"} minus_04 = 'Standard' #@param ['Drop', 'Standard', 'High'] {type:"string"} minus_03 = 'Standard' #@param ['Drop', 'Standard', 'High'] {type:"string"} minus_02 = 'High' #@param ['Drop', 'Standard', 'High'] {type:"string"} minus_01 = 'High' #@param ['Drop', 'Standard', 'High'] {type:"string"} day_zero = 'High' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_01 = 'High' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_02 = 'High' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_03 = 'High' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_04 = 'Standard' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_05 = 'Standard' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_06 = 'Standard' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_07 = 'Standard' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_08 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_09 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_10 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_11 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_12 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_13 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_14 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} plus_15 = 'Drop' #@param ['Drop', 'Standard', 'High'] {type:"string"} symptom_onset = [minus_14, minus_13, minus_12, minus_11, minus_10, minus_09, minus_08, minus_07, minus_06, minus_05, minus_04, minus_03, minus_02, minus_01, day_zero, plus_01, plus_02, plus_03, plus_04, plus_05, plus_06, plus_07, plus_08, plus_09, plus_10, plus_11, plus_12, plus_13, plus_14] import ipywidgets as widgets from ipywidgets import interact, interactive, fixed, interact_manual from IPython.display import display, clear_output def infectiousness_levels(): levels = [] for index, level in enumerate(symptom_onset): if level == 'Drop': levels.append(0) elif level == 'Standard': levels.append(1) else: levels.append(2) return levels # https://enconfig.storage.googleapis.com/enconfig_fixed.html levels = [0] * 10 + [1, 0, 2, 1] + [2] * 4 + [1] * 6 + [0, 1, 0, 0, 1, 0] config_default = RiskConfig(ble_thresholds = np.array([30, 50, 60]), ble_weights = np.array([150, 100, 
50, 0]), inf_weights = np.array([0, 100, 100]), inf_levels = np.array(levels), name = 'Default') config_custom = RiskConfig(ble_thresholds = np.array(atten_thresholds), ble_weights = np.array(atten_weights), inf_weights = np.array(inf_weights), inf_levels = np.array(infectiousness_levels()), name = 'Custom') levels, weights = make_infectiousness_params_v2() config_swiss = RiskConfig(ble_thresholds = np.array([53, 60]), ble_weights = np.array([1.0, 0.5, 0.0]), inf_weights = weights, inf_levels = levels, name= 'Switzerland') config_germany = RiskConfig(ble_thresholds = np.array([55, 63]), ble_weights = np.array([1.0, 0.5, 0.0]), inf_weights = weights, inf_levels = levels, name= 'Germany') config_ireland = RiskConfig(ble_thresholds = np.array([56, 62]), ble_weights = np.array([1.0, 1.0, 0.0]), inf_weights = weights, inf_levels = levels, name= 'Ireland') config_arizona = RiskConfig(ble_thresholds = np.array([50, 70]), ble_weights = np.array([2.39, 0.6, 0.06]), inf_weights = weights, inf_levels = levels, name = 'Arizona') ble_thresholds = {} ble_thresholds['swiss'] = np.array([53, 60]) ble_thresholds['germany'] = np.array([55, 63]) ble_thresholds['ireland'] = np.array([56, 62]) ble_thresholds['arizona'] = np.array([50, 70]) threshold_mean = np.mean(np.array([ble_thresholds[k] for k in ble_thresholds]), axis=0) ble_weights = {} ble_weights['swiss'] = np.array([1.0, 0.5, 0.0]) ble_weights['germany'] = np.array([1.0, 0.5, 0.0]) ble_weights['ireland'] = np.array([1.0, 1.0, 0.0]) ble_weights['arizona'] = np.array([2.39, 0.6, 0.06]) weight_mean = np.mean(np.array([ble_weights[k] for k in ble_weights]), axis=0) config_mean = RiskConfig(ble_thresholds = threshold_mean, ble_weights = weight_mean, inf_weights = weights, inf_levels = levels, name= 'Mean') attens = np.linspace(40, 80, 10, endpoint=True) symptoms = np.arange(-5, 10) # must be int durations = np.linspace(5, 1*60, 10, endpoint=True) config_list = [config_swiss, config_germany, config_ireland, config_arizona, config_mean, config_default, config_custom] # Plot fig, axs = plt.subplots(4,4, figsize=(24,14), sharex=True, sharey=True) for i, config in enumerate(config_list): ps, qs = make_curves_batch(attens, durations, symptoms, config, params, Dmin) yhat = (ps > pthresh) # ROC fpr, tpr, thresholds = metrics.roc_curve(yhat, qs) auc = metrics.auc(fpr, tpr) ax = axs[int(i / 2),2 if i % 2 else 0] fpr_array = np.array(fpr) idx = [np.abs(fpr_array - 0.1).argmin()] ax.plot(fpr, tpr, '-go' if config.name=='Custom' else '-bo', markevery=idx) ax.text(fpr[idx] + 0.03, tpr[idx] - 0.07, ['risk = %.3f' % thresholds[i] for i in idx][0]) ax.set_title('AUC={:0.2f}, config={}'.format(auc, config.name)) ax.set_xlabel('FPR') ax.set_ylabel('TPR') # PR precision, recall, thresholds = metrics.precision_recall_curve(yhat, qs) auc = metrics.auc(recall, precision) frac_pos = np.sum(yhat)/len(yhat) ax = axs[int(i / 2),3 if i % 2 else 1] ax.plot(recall, precision, color='orange' if config.name=='Custom' else 'red') ax.hlines(frac_pos, 0, 1, linestyles='dashed') ax.text(0, frac_pos + 0.03, 'prevalence = %.2f' % frac_pos) ax.set_title('AUC={:0.2f}, config={}'.format(auc, config.name)) ax.set_xlabel('Recall') ax.set_ylabel('Precision') Explanation: Interactive End of explanation
11,282
Given the following text description, write Python code to implement the functionality described below step by step Description: 2A.algo - Réflexions autour du voyage de commerce (TSP) Le problème du voyageur de commerce consiste à trouver le plus court chemin passant par toutes les villes. On parle aussi de circuit hamiltonien qui consiste à trouver le plus court chemin passant par tous les noeuds d'un graphe. Le notebook explore quelques solutions approchées et intuitives. Ce problème est NP-complet à savoir qu'il n'existe pas d'algorithme qui permette de trouver la solution avec un coût polynômiale. C'est aussi un problème différent du plus court chemin dans un graphe qui consiste à trouver le plus court chemin reliant deux noeuds d'un graphe (mais pas forcément tous les noeuds de ce graphe). Step1: Un parcours aléatoire de tous les noeuds de graphe donnera quelque chose de très éloigné de la solution optimale Step2: La première constation est que le chemin ne peut pas être optimal car des arcs se croisent. On en déduit qu'une façon d'améliorer ce chemin est de décroiser certaines parties. On peut par exemple choisir deux points au hasard, retourner la partie du chemin au milieu de ces deux points et voir si la longueur du chemin s'en trouve diminuée. On peut également parcourir toutes les paires de noeuds possibles. C'est ce qui est implémenté ci-dessous. Step3: Voilà qui est mieux. Maintenant, supposons que nous faisons une erreur lors du calcul de la distance Step4: Jusque ici, tout concorde. Le chemin est plus court en ce sens qu'il oublie délibérément l'arc de bouclage que l'algorithme a tendance à choisir grand. Pour gagner du temps de calcul, un développeur se dit que le noeud de départ peut être constant. Après tout, le chemin est une boucle, elle passera toujours par le premier noeud. Qu'il soit en première position ne change rien et puis inverser une moitié, c'est équivalent à inverser l'autre moitié. On fait donc juste une modification Step5: Le résultat attendu n'est pas celui qu'on observe. Est-ce une erreur d'implémentation ou une erreur de raisonnement ? J'étais pourtant sûr que mon raisonnement était correct et j'aurais tort d'en douter. C'est une erreur d'implémentation. Lorsqu'onfor j in range(i+2,len(ordre)) Step6: Pas parfait mais conforme à nos attentes (les miennes en tout cas) ! Soit dit en passant, la première version de l'algorithme laissait déjà le dernier noeud inchangé. La solution n'est pas parfaite en ce sens que visuellement, on voit que certaines partie du chemin pourraient être facilement améliorées. Mais si la solution était parfaite en toute circonstance, nous aurions trouvé un algorithme à temps polynômial ce qui est impossible. Dans notre cas, l'algorithme produit toujours la même solution car il parcourt les noeuds toujours dans le même sens. Un peu d'aléa devrait l'aider à trouver de meilleures solutions après quelques essais. Step7: Ca a l'air de marcher un peu mieux mais quelques aberrations car l'aléatoire n'est pas un parcours systématique de toutes les pairs. Par conséquent, il peut rester des croisements Step8: Pour éviter cela, on peut imposer un nombre d'itérations minimum et recommencer plusieurs à partir d'ordre initiaux aléatoires
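Before the code, a compact added sketch (in English, for clarity) of the segment-reversal move that the notebook builds on: reverse the part of the tour between positions i and j, and keep the new tour only if it is shorter.
def reverse_segment(ordre, i, j):
    # hypothetical helper, equivalent to the copy-and-reverse step used in the notebook
    return ordre[:i] + ordre[i:j][::-1] + ordre[j:]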
Python Code: %matplotlib inline import random n = 30 x = [ random.random() for _ in range(n) ] y = [ random.random() for _ in range(n) ] import matplotlib.pyplot as plt plt.plot(x,y,"o") Explanation: 2A.algo - Réflexions autour du voyage de commerce (TSP) Le problème du voyageur de commerce consiste à trouver le plus court chemin passant par toutes les villes. On parle aussi de circuit hamiltonien qui consiste à trouver le plus court chemin passant par tous les noeuds d'un graphe. Le notebook explore quelques solutions approchées et intuitives. Ce problème est NP-complet à savoir qu'il n'existe pas d'algorithme qui permette de trouver la solution avec un coût polynômiale. C'est aussi un problème différent du plus court chemin dans un graphe qui consiste à trouver le plus court chemin reliant deux noeuds d'un graphe (mais pas forcément tous les noeuds de ce graphe). End of explanation plt.plot(x + [ x[0] ], y + [ y[0] ], "o-") Explanation: Un parcours aléatoire de tous les noeuds de graphe donnera quelque chose de très éloigné de la solution optimale : End of explanation def longueur (x,y, ordre): i = ordre[-1] x0,y0 = x[i], y[i] d = 0 for o in ordre: x1,y1 = x[o], y[o] d += (x0-x1)**2 + (y0-y1)**2 x0,y0 = x1,y1 return d ordre = list(range(len(x))) print("longueur initiale", longueur(x,y,ordre)) def permutation(x,y,ordre): d = longueur(x,y,ordre) d0 = d+1 it = 1 while d < d0 : it += 1 print("iteration",it, "d=",d) d0 = d for i in range(0,len(ordre)-1) : for j in range(i+2,len(ordre)): r = ordre[i:j].copy() r.reverse() ordre2 = ordre[:i] + r + ordre[j:] t = longueur(x,y,ordre2) if t < d : d = t ordre = ordre2 return ordre ordre = permutation (x,y,list(range(len(x)))) print("longueur min", longueur(x,y,ordre)) xo = [ x[o] for o in ordre + [ordre[0]]] yo = [ y[o] for o in ordre + [ordre[0]]] plt.plot(xo,yo, "o-") Explanation: La première constation est que le chemin ne peut pas être optimal car des arcs se croisent. On en déduit qu'une façon d'améliorer ce chemin est de décroiser certaines parties. On peut par exemple choisir deux points au hasard, retourner la partie du chemin au milieu de ces deux points et voir si la longueur du chemin s'en trouve diminuée. On peut également parcourir toutes les paires de noeuds possibles. C'est ce qui est implémenté ci-dessous. End of explanation def longueur (x,y, ordre): # on change cette fonction d = 0 for i in range(1,len(ordre)): n = ordre[i-1] o = ordre[i] x0,y0 = x[n], y[n] x1,y1 = x[o], y[o] d += (x0-x1)**2 + (y0-y1)**2 return d ordre = list(range(len(x))) print("longueur initiale", longueur(x,y,ordre)) ordre = permutation (x,y,list(range(len(x)))) print("longueur min", longueur(x,y,ordre)) xo = [ x[o] for o in ordre] yo = [ y[o] for o in ordre] plt.plot(xo,yo, "o-") Explanation: Voilà qui est mieux. Maintenant, supposons que nous faisons une erreur lors du calcul de la distance : nous oublions le dernier arc qui boucle le chemin du dernier noeud au premier. 
End of explanation def longueur (x,y, ordre): i = ordre[-1] x0,y0 = x[i], y[i] d = 0 for o in ordre: x1,y1 = x[o], y[o] d += (x0-x1)**2 + (y0-y1)**2 x0,y0 = x1,y1 return d ordre = list(range(len(x))) print("longueur initiale", longueur(x,y,ordre)) def permutation(x,y,ordre): d = longueur(x,y,ordre) d0 = d+1 it = 1 while d < d0 : it += 1 print("iteration",it, "d=",d, "ordre[0]", ordre[0]) d0 = d for i in range(1,len(ordre)-1) : # on part de 1 et plus de 0, on est sûr que le premier noeud ne bouge pas for j in range(i+2,len(ordre)): r = ordre[i:j].copy() r.reverse() ordre2 = ordre[:i] + r + ordre[j:] t = longueur(x,y,ordre2) if t < d : d = t ordre = ordre2 return ordre ordre = permutation (x,y,list(range(len(x)))) print("longueur min", longueur(x,y,ordre)) xo = [ x[o] for o in ordre + [ordre[0]]] yo = [ y[o] for o in ordre + [ordre[0]]] plt.plot(xo,yo, "o-") plt.text(xo[0],yo[0],"0",color="r",weight="bold",size="x-large") plt.text(xo[-2],yo[-2],"N-1",color="r",weight="bold",size="x-large") Explanation: Jusque ici, tout concorde. Le chemin est plus court en ce sens qu'il oublie délibérément l'arc de bouclage que l'algorithme a tendance à choisir grand. Pour gagner du temps de calcul, un développeur se dit que le noeud de départ peut être constant. Après tout, le chemin est une boucle, elle passera toujours par le premier noeud. Qu'il soit en première position ne change rien et puis inverser une moitié, c'est équivalent à inverser l'autre moitié. On fait donc juste une modification : End of explanation ordre = list(range(len(x))) print("longueur initiale", longueur(x,y,ordre)) def permutation(x,y,ordre): d = longueur(x,y,ordre) d0 = d+1 it = 1 while d < d0 : it += 1 print("iteration",it, "d=",d, "ordre[0]", ordre[0]) d0 = d for i in range(1,len(ordre)-1) : # on part de 1 et plus de 0, on est sûr que le premier noeud ne bouge pas for j in range(i+2,len(ordre)+ 1): # correction ! r = ordre[i:j].copy() r.reverse() ordre2 = ordre[:i] + r + ordre[j:] t = longueur(x,y,ordre2) if t < d : d = t ordre = ordre2 return ordre ordre = permutation (x,y,list(range(len(x)))) print("longueur min", longueur(x,y,ordre)) xo = [ x[o] for o in ordre + [ordre[0]]] yo = [ y[o] for o in ordre + [ordre[0]]] plt.plot(xo,yo, "o-") plt.text(xo[0],yo[0],"0",color="r",weight="bold",size="x-large") plt.text(xo[-2],yo[-2],"N-1",color="r",weight="bold",size="x-large") Explanation: Le résultat attendu n'est pas celui qu'on observe. Est-ce une erreur d'implémentation ou une erreur de raisonnement ? J'étais pourtant sûr que mon raisonnement était correct et j'aurais tort d'en douter. C'est une erreur d'implémentation. Lorsqu'onfor j in range(i+2,len(ordre)): et r = ordre[i:j].copy(), on écrit que j va de i+2 inclus à len(ordre) exclu. Puis lorsqu'on écrit ordre[i:j], l'indice j est exclu ! Autrement dit, dans cette implémentation, le premier noeud et le dernier noeud ne bougeront jamais ! On s'empresse de corriger cela. 
End of explanation ordre = list(range(len(x))) print("longueur initiale", longueur(x,y,ordre)) def permutation_rnd(x,y,ordre): d = longueur(x,y,ordre) d0 = d+1 it = 1 while d < d0 : it += 1 print("iteration",it, "d=",d, "ordre[0]", ordre[0]) d0 = d for i in range(1,len(ordre)-1) : for j in range(i+2,len(ordre)+ 1): k = random.randint(1,len(ordre)-1) l = random.randint(k+1,len(ordre)) r = ordre[k:l].copy() r.reverse() ordre2 = ordre[:k] + r + ordre[l:] t = longueur(x,y,ordre2) if t < d : d = t ordre = ordre2 return ordre ordre = permutation_rnd (x,y,list(range(len(x)))) print("longueur min", longueur(x,y,ordre)) xo = [ x[o] for o in ordre + [ordre[0]]] yo = [ y[o] for o in ordre + [ordre[0]]] plt.plot(xo,yo, "o-") plt.text(xo[0],yo[0],"0",color="r",weight="bold",size="x-large") plt.text(xo[-2],yo[-2],"N-1",color="r",weight="bold",size="x-large") Explanation: Pas parfait mais conforme à nos attentes (les miennes en tout cas) ! Soit dit en passant, la première version de l'algorithme laissait déjà le dernier noeud inchangé. La solution n'est pas parfaite en ce sens que visuellement, on voit que certaines partie du chemin pourraient être facilement améliorées. Mais si la solution était parfaite en toute circonstance, nous aurions trouvé un algorithme à temps polynômial ce qui est impossible. Dans notre cas, l'algorithme produit toujours la même solution car il parcourt les noeuds toujours dans le même sens. Un peu d'aléa devrait l'aider à trouver de meilleures solutions après quelques essais. End of explanation ordre = permutation_rnd (x,y,list(range(len(x)))) print("longueur min", longueur(x,y,ordre)) xo = [ x[o] for o in ordre + [ordre[0]]] yo = [ y[o] for o in ordre + [ordre[0]]] plt.plot(xo,yo, "o-") plt.text(xo[0],yo[0],"0",color="r",weight="bold",size="x-large") plt.text(xo[-2],yo[-2],"N-1",color="r",weight="bold",size="x-large") Explanation: Ca a l'air de marcher un peu mieux mais quelques aberrations car l'aléatoire n'est pas un parcours systématique de toutes les pairs. Par conséquent, il peut rester des croisements : End of explanation ordre = list(range(len(x))) print("longueur initiale", longueur(x,y,ordre)) def permutation_rnd(x,y,ordre,miniter): d = longueur(x,y,ordre) d0 = d+1 it = 1 while d < d0 or it < miniter : it += 1 d0 = d for i in range(1,len(ordre)-1) : for j in range(i+2,len(ordre)+ 1): k = random.randint(1,len(ordre)-1) l = random.randint(k+1,len(ordre)) r = ordre[k:l].copy() r.reverse() ordre2 = ordre[:k] + r + ordre[l:] t = longueur(x,y,ordre2) if t < d : d = t ordre = ordre2 return ordre def n_permutation(x,y, miniter): ordre = list(range(len(x))) bordre = ordre.copy() d0 = longueur(x,y,ordre) for i in range(0,20): print("iteration",i, "d=",d0) random.shuffle(ordre) ordre = permutation_rnd (x,y,ordre, 20) d = longueur(x,y,ordre) if d < d0 : d0 = d bordre = ordre.copy() return bordre ordre = n_permutation (x,y, 20) print("longueur min", longueur(x,y,ordre)) xo = [ x[o] for o in ordre + [ordre[0]]] yo = [ y[o] for o in ordre + [ordre[0]]] plt.plot(xo,yo, "o-") plt.text(xo[0],yo[0],"0",color="r",weight="bold",size="x-large") plt.text(xo[-2],yo[-2],"N-1",color="r",weight="bold",size="x-large") Explanation: Pour éviter cela, on peut imposer un nombre d'itérations minimum et recommencer plusieurs à partir d'ordre initiaux aléatoires : End of explanation
11,283
Given the following text description, write Python code to implement the functionality described below step by step Description: ================================= Decoding sensor space data (MVPA) ================================= Decoding, a.k.a MVPA or supervised machine learning, is applied to MEG data in sensor space. Here the classifier is applied to every time point. Step1: Set parameters Step2: Temporal decoding We'll use a Logistic Regression for a binary classification as machine learning model. Step3: Temporal Generalization This runs the analysis used in [1] and further detailed in [2] The idea is to fit the models on each time instant and see how it generalizes to any other time point.
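For orientation, a minimal added sketch of the time-resolved decoding pattern that the code below follows (it mirrors the objects used there rather than introducing new ones):
clf = make_pipeline(StandardScaler(), LogisticRegression())
time_decod = SlidingEstimator(clf, scoring='roc_auc')
scores = cross_val_multiscore(time_decod, X, y, cv=5).mean(axis=0)  # one AUC per time point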
Python Code: import numpy as np import matplotlib.pyplot as plt from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LogisticRegression import mne from mne.datasets import sample from mne.decoding import (SlidingEstimator, GeneralizingEstimator, cross_val_multiscore, LinearModel, get_coef) data_path = sample.data_path() plt.close('all') # sphinx_gallery_thumbnail_number = 4 Explanation: ================================= Decoding sensor space data (MVPA) ================================= Decoding, a.k.a MVPA or supervised machine learning, is applied to MEG data in sensor space. Here the classifier is applied to every time point. End of explanation raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' tmin, tmax = -0.200, 0.500 event_id = dict(audio_left=1, visual_left=3) # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname, preload=True) # The subsequent decoding analyses only capture evoked responses, so we can # low-pass the MEG data. Usually a value more like 40 Hz would be used, # but here low-pass at 20 so we can more heavily decimate, and allow # the examlpe to run faster. raw.filter(None, 20., fir_design='firwin') events = mne.find_events(raw, 'STI 014') # Set up pick list: EEG + MEG - bad channels (modify to your needs) raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True, exclude='bads') # Read epochs epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks, baseline=(None, 0.), preload=True, reject=dict(grad=4000e-13, eog=150e-6), decim=10) epochs.pick_types(meg=True, exclude='bads') Explanation: Set parameters End of explanation # We will train the classifier on all left visual vs auditory trials on MEG X = epochs.get_data() # MEG signals: n_epochs, n_channels, n_times y = epochs.events[:, 2] # target: Audio left or right clf = make_pipeline(StandardScaler(), LogisticRegression()) time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc') scores = cross_val_multiscore(time_decod, X, y, cv=5, n_jobs=1) # Mean scores across cross-validation splits scores = np.mean(scores, axis=0) # Plot fig, ax = plt.subplots() ax.plot(epochs.times, scores, label='score') ax.axhline(.5, color='k', linestyle='--', label='chance') ax.set_xlabel('Times') ax.set_ylabel('AUC') # Area Under the Curve ax.legend() ax.axvline(.0, color='k', linestyle='-') ax.set_title('Sensor space decoding') plt.show() # You can retrieve the spatial filters and spatial patterns if you explicitly # use a LinearModel clf = make_pipeline(StandardScaler(), LinearModel(LogisticRegression())) time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc') time_decod.fit(X, y) coef = get_coef(time_decod, 'patterns_', inverse_transform=True) evoked = mne.EvokedArray(coef, epochs.info, tmin=epochs.times[0]) joint_kwargs = dict(ts_args=dict(time_unit='s'), topomap_args=dict(time_unit='s')) evoked.plot_joint(times=np.arange(0., .500, .100), title='patterns', **joint_kwargs) Explanation: Temporal decoding We'll use a Logistic Regression for a binary classification as machine learning model. 
End of explanation # define the Temporal Generalization object time_gen = GeneralizingEstimator(clf, n_jobs=1, scoring='roc_auc') scores = cross_val_multiscore(time_gen, X, y, cv=5, n_jobs=1) # Mean scores across cross-validation splits scores = np.mean(scores, axis=0) # Plot the diagonal (it's exactly the same as the time-by-time decoding above) fig, ax = plt.subplots() ax.plot(epochs.times, np.diag(scores), label='score') ax.axhline(.5, color='k', linestyle='--', label='chance') ax.set_xlabel('Times') ax.set_ylabel('AUC') ax.legend() ax.axvline(.0, color='k', linestyle='-') ax.set_title('Decoding MEG sensors over time') plt.show() # Plot the full matrix fig, ax = plt.subplots(1, 1) im = ax.imshow(scores, interpolation='lanczos', origin='lower', cmap='RdBu_r', extent=epochs.times[[0, -1, 0, -1]], vmin=0., vmax=1.) ax.set_xlabel('Testing Time (s)') ax.set_ylabel('Training Time (s)') ax.set_title('Temporal Generalization') ax.axvline(0, color='k') ax.axhline(0, color='k') plt.colorbar(im, ax=ax) plt.show() Explanation: Temporal Generalization This runs the analysis used in [1] and further detailed in [2] The idea is to fit the models on each time instant and see how it generalizes to any other time point. End of explanation
11,284
Given the following text description, write Python code to implement the functionality described below step by step Description: Contact Trajectories Sometimes you're interested in how contacts evolve in a trajectory, frame-by-frame. Contact Map Explorer provides the ContactTrajectory class for this purpose. We'll look at this using a trajectory of a specific inhibitor during its binding process to GSK3B. This system is also studied in the notebook on contact concurrences (with very similar initial discussion). Step1: First, we'll use MDTraj's atom selection language to split out the protein and the ligand, which has residue name YYG in the input files. We're only interested in contacts between the protein and the ligand (not contacts within the protein). We'll also only look at heavy atom contacts. Step2: Making an accessing a contact trajectory Contact trajectories have the same keyword arguments as other contact objects Step3: Once the ContactTrajectory has been made, contacts for individual frames can be accessed either by taking the index of the ContactTrajectory itself, or by getting the list of contact (e.g., all the residue contacts frame-by-frame) and selecting the frame of interest. Step4: Advanced Python indexing is also allowed. In this example, note how the most common partners for YYG change! This is also what we see in the contact concurrences example. Step5: We can easily turn the ContactTrajectory into ContactFrequency Step6: Rolling Contact Frequencies A ContactTrajectory keeps all the time-dependent information about the contacts, whereas a ContactFrequency, as plotted above, loses all of it. What about something in between? For this, we have a RollingContactFrequency, which acts like a rolling average. It creates a contact frequency over a certain window of frames, with a certain step size between each window. This can be created either with the RollingContactFrequency object, or, more easily, with the ContactTrajectory.rolling_frequency() method. Step7: Now we'll plot each windowed frequency, and we will see the transition as some contacts fade out and others grow in.
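For orientation, a minimal added sketch of the two core calls described above, mirroring the code that follows:
contacts = ContactTrajectory(traj, query=yyg, haystack=protein)
rolling = contacts.rolling_frequency(window_size=30, step=14)  # an iterable of windowed frequencies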
Python Code: from __future__ import print_function %matplotlib inline import matplotlib.pyplot as plt import numpy as np from contact_map import ContactTrajectory, RollingContactFrequency import mdtraj as md traj = md.load("data/gsk3b_example.h5") print(traj) # to see number of frames; size of system Explanation: Contact Trajectories Sometimes you're interested in how contacts evolve in a trajectory, frame-by-frame. Contact Map Explorer provides the ContactTrajectory class for this purpose. We'll look at this using a trajectory of a specific inhibitor during its binding process to GSK3B. This system is also studied in the notebook on contact concurrences (with very similar initial discussion). End of explanation topology = traj.topology yyg = topology.select('resname YYG and element != "H"') protein = topology.select('protein and element != "H"') Explanation: First, we'll use MDTraj's atom selection language to split out the protein and the ligand, which has residue name YYG in the input files. We're only interested in contacts between the protein and the ligand (not contacts within the protein). We'll also only look at heavy atom contacts. End of explanation contacts = ContactTrajectory(traj, query=yyg, haystack=protein) Explanation: Making an accessing a contact trajectory Contact trajectories have the same keyword arguments as other contact objects End of explanation contacts[0].residue_contacts.most_common() contacts.residue_contacts[0].most_common() Explanation: Once the ContactTrajectory has been made, contacts for individual frames can be accessed either by taking the index of the ContactTrajectory itself, or by getting the list of contact (e.g., all the residue contacts frame-by-frame) and selecting the frame of interest. End of explanation for contact in contacts[50:80:4]: print(contact.residue_contacts.most_common()[:3]) Explanation: Advanced Python indexing is also allowed. In this example, note how the most common partners for YYG change! This is also what we see in the contact concurrences example. End of explanation freq = contacts.contact_frequency() fig, ax = plt.subplots(figsize=(5.5,5)) freq.residue_contacts.plot_axes(ax=ax) Explanation: We can easily turn the ContactTrajectory into ContactFrequency: End of explanation RollingContactFrequency(contacts, width=30, step=14) rolling_frequencies = contacts.rolling_frequency(window_size=30, step=14) rolling_frequencies Explanation: Rolling Contact Frequencies A ContactTrajectory keeps all the time-dependent information about the contacts, whereas a ContactFrequency, as plotted above, loses all of it. What about something in between? For this, we have a RollingContactFrequency, which acts like a rolling average. It creates a contact frequency over a certain window of frames, with a certain step size between each window. This can be created either with the RollingContactFrequency object, or, more easily, with the ContactTrajectory.rolling_frequency() method. End of explanation fig, axs = plt.subplots(3, 2, figsize=(12, 10)) for ax, freq in zip(axs.flatten(), rolling_frequencies): freq.residue_contacts.plot_axes(ax=ax) Explanation: Now we'll plot each windowed frequency, and we will see the transition as some contacts fade out and others grow in. End of explanation
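To make the window/step bookkeeping of rolling_frequency concrete, the toy sketch below reproduces the idea with plain NumPy on fake data (the 0/1 contact indicator, the trajectory length and the exact boundary handling are assumptions for illustration only; the internals of RollingContactFrequency may differ in detail): average a per-frame contact indicator over windows of 30 frames, stepping forward 14 frames each time, the same window_size/step values used above.
import numpy as np

rng = np.random.RandomState(0)
n_frames, width, step = 101, 30, 14
in_contact = rng.rand(n_frames) < 0.4        # fake per-frame 0/1 contact flag

window_starts = range(0, n_frames - width + 1, step)
rolling_freq = [in_contact[s:s + width].mean() for s in window_starts]
print(['%.2f' % f for f in rolling_freq])    # one frequency value per window
Each window therefore reports the fraction of its frames in which the contact was present, which is what the panels plotted above show for every residue pair.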
11,285
Given the following text description, write Python code to implement the functionality described below step by step Description: 1. Open your dataset up using pandas in a Jupyter notebook Step1: 3. Do a .columns to get a feel for your data Step2: 4. Do a .head() to get a feel for your data Step3: 4. Write down 12 questions to ask your data, or 12 things to hunt for in the data Find all the agencies that house a NASA Lab facility. Count the number of each agencies that house a Nasa Lab facility. Find the different centers that run a NASA lab, and their count. List the status of the lab facilities and their count? Find how many active labs are operating today. Which of the US states has the highest number of NASA Lab facilities ? Of all the states that has a NASA lab, which state has the lowest number in the country ? Group them by agencies. Group them by centers, and discover how many active labs are operating under each of them. Find all the cities with a Lab facility. Group them and find how many active labs are operating in each city. Group the states and find how many active labs are operating in each city. Find all the point of contacts. Find how many lab facilities each one handles. Find the current status of lab facilities under each point of contacts. Find the year in which they have made the last update. Find the number of centers that begin between 1960 - 1969, 1970 - 1979, 1980 - 1989, 1990 - 1999, and 2000 and beyond 1.Find all the agencies that house a NASA Lab facility. Step4: 2. Find the different centers that run a NASA lab, and their count. Step5: 3. List the status of the lab facilities and their count? Step6: 4. Find how many active labs are operating today. Step7: 5. Which of the US states has the highest number of NASA Lab facilities ? Step8: 6. Which of the US states has the lowest number of NASA Lab facilities ? Step9: 7. Group them by agencies, and count the centers operating under each agency. Step10: 8. Group them by centers, and discover how many active labs are operating under each of them. Step11: 9. Find all the cities with a Lab facility. Group them and find how many active labs are operating in each city. Step12: 10. Find all the cities with a Lab facility. Group them and find how many active labs are operating in each city. Step13: 11. Group the states and find how many active labs are operating in each city Step14: 12. Find all the point of contacts. Find how many lab facilities each one handles. Step15: 13.Find all the point of contacts. Find various status of lab facilities. Step16: 14. Sort by the year in which they have made the last update. Step17: 15. Find all the centers that begin between 1960 - 1969, 1970 - 1979, 1980 - 1989, 1990 - 1999, and 2000 and beyond Step18: Graph 1 - NASA Lab Center Distribution in Different States Step19: Graph 2 NASA Lab Facilities Distribution by Agencies Step20: Graph 3 NASA Lab Facilities Started Over the Last 60 Years
Python Code: df Explanation: 1. Open your dataset up using pandas in a Jupyter notebook End of explanation df.columns Explanation: 3. Do a .columns to get a feel for your data End of explanation df.head() Explanation: 4. Do a .head() to get a feel for your data End of explanation df['Agency'].value_counts() print("NASA runs a total of 397 lab facilities, Nasa Intelsat runs 17, Department of Defence(DOD) runs 7 labs, Department of Energy runs 12 labs, Raytheon runs 5, and Orbital Sciences Corporation (osc) runs only one NASA lab facility") Explanation: 4. Write down 12 questions to ask your data, or 12 things to hunt for in the data Find all the agencies that house a NASA Lab facility. Count the number of each agencies that house a Nasa Lab facility. Find the different centers that run a NASA lab, and their count. List the status of the lab facilities and their count? Find how many active labs are operating today. Which of the US states has the highest number of NASA Lab facilities ? Of all the states that has a NASA lab, which state has the lowest number in the country ? Group them by agencies. Group them by centers, and discover how many active labs are operating under each of them. Find all the cities with a Lab facility. Group them and find how many active labs are operating in each city. Group the states and find how many active labs are operating in each city. Find all the point of contacts. Find how many lab facilities each one handles. Find the current status of lab facilities under each point of contacts. Find the year in which they have made the last update. Find the number of centers that begin between 1960 - 1969, 1970 - 1979, 1980 - 1989, 1990 - 1999, and 2000 and beyond 1.Find all the agencies that house a NASA Lab facility. End of explanation df['Center'].value_counts() Explanation: 2. Find the different centers that run a NASA lab, and their count. End of explanation df['Status'].value_counts() Explanation: 3. List the status of the lab facilities and their count? End of explanation print("There are 388 active lab facilities in the US") Explanation: 4. Find how many active labs are operating today. End of explanation df['State'].value_counts() print("Alabama has the highest number of NASA Lab facilities in the country.") Explanation: 5. Which of the US states has the highest number of NASA Lab facilities ? End of explanation print("Alabama has the highest number of NASA Lab facilities in the country") Explanation: 6. Which of the US states has the lowest number of NASA Lab facilities ? End of explanation df.groupby("Agency")['Center'].value_counts() Explanation: 7. Group them by agencies, and count the centers operating under each agency. End of explanation df.groupby("Agency")['Status'].value_counts() Explanation: 8. Group them by centers, and discover how many active labs are operating under each of them. End of explanation df['City'].value_counts() Explanation: 9. Find all the cities with a Lab facility. Group them and find how many active labs are operating in each city. End of explanation df.groupby("City")['Status'].value_counts() Explanation: 10. Find all the cities with a Lab facility. Group them and find how many active labs are operating in each city. End of explanation state= df.groupby("State")['Status'].value_counts() state Explanation: 11. Group the states and find how many active labs are operating in each city End of explanation df['Contact'].value_counts() Explanation: 12. Find all the point of contacts. Find how many lab facilities each one handles. 
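(A quick aside before the next step; the tiny frame below is made up, since only the column names are known here: the per-contact counts from value_counts() can equivalently be produced with groupby(...).size(), which is convenient when you want to chain further aggregations.)
import pandas as pd

toy = pd.DataFrame({'Contact': ['A. Smith', 'A. Smith', 'B. Jones'],
                    'Status': ['Active', 'Inactive', 'Active']})
print(toy['Contact'].value_counts())
print(toy.groupby('Contact').size().sort_values(ascending=False))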
End of explanation df.groupby("Contact")['Status'].value_counts() Explanation: 13.Find all the point of contacts. Find various status of lab facilities. End of explanation for x in (df['Last Update']): if isinstance(x, str): print(x) dropped = df[df['Last Update'] != "300 E St, SW"] for x in (dropped['Last Update']): if isinstance(x, str): print(x) df['Last Update'].str[:5] df dropped.sort_values(by='Last Update',ascending=False).head(3) Explanation: 14. Sort by the year in which they have made the last update. End of explanation year60 = df[(df['Occupied'] >= 1960) & (df['Occupied'] <=1969)] year60 year60['Occupied'].value_counts() print("There are", year60['Occupied'].value_counts().sum(), "Lab facilities started between 1960-1969") year70 = df[(df['Occupied'] >= 1970) & (df['Occupied'] <=1979)] year70 year70['Occupied'].value_counts() print("There are", year70['Occupied'].value_counts().sum(), "Lab facilities started between 1970-1979") year80 = df[(df['Occupied'] >= 1980) & (df['Occupied'] <=1989)] year80 year80['Occupied'].value_counts() print("There are", year80['Occupied'].value_counts().sum(), "Lab facilities started between 1980-1989") year90 = df[(df['Occupied'] >= 1990) & (df['Occupied'] <=1999)] year90 year90['Occupied'].value_counts() print("There are", year90['Occupied'].value_counts().sum(), "Lab facilities started between 1990-1999") year00 = df[(df['Occupied'] >= 2000) & (df['Occupied'] <=2009)] year00 year00['Occupied'].value_counts() print("There are", year00['Occupied'].value_counts().sum(), "Lab facilities started between 2000-2009") year10 = df[(df['Occupied'] >= 2010) & (df['Occupied'] <=2017)] year10 year10['Occupied'].value_counts() year10['Facility'] print("There are only", year10['Occupied'].value_counts().sum(), "Lab facilities started since 2010") Explanation: 15. Find all the centers that begin between 1960 - 1969, 1970 - 1979, 1980 - 1989, 1990 - 1999, and 2000 and beyond End of explanation df['State'].value_counts().plot(kind='bar', x='State') Explanation: Graph 1 - NASA Lab Center Distribution in Different States End of explanation df['Agency'].value_counts().plot(kind='bar', x='Agency') Explanation: Graph 2 NASA Lab Facilities Distribution by Agencies End of explanation df = pd.read_excel("year-num-nasa.xlsx") df df['Number'].plot(kind='pie', labels=df['Year']) Explanation: Graph 3 NASA Lab Facilities Started Over the Last 60 Years End of explanation
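As a follow-up to the decade breakdown above, here is a compact alternative sketched on made-up years (the 'Occupied' column name is taken from the exercise; the random values, bin edges and labels are illustrative assumptions): pd.cut can produce all the decade counts in a single pass instead of six separate boolean masks.
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
toy = pd.DataFrame({'Occupied': rng.randint(1960, 2017, size=50)})

bins = [1960, 1970, 1980, 1990, 2000, 2010, 2018]
labels = ['1960s', '1970s', '1980s', '1990s', '2000s', '2010+']
decade_counts = (pd.cut(toy['Occupied'], bins=bins, labels=labels, right=False)
                   .value_counts()
                   .sort_index())
print(decade_counts)
Applied to the real dataframe, the same pattern would give the year/number inputs used for the pie chart in Graph 3 directly.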
11,286
Given the following text description, write Python code to implement the functionality described below step by step Description: Advanced Step1: Let's get started with some basic imports Step2: And then we'll build a synthetic "dataset" and initialize a new bundle with those data Step3: solver_times parameter and options Step4: The logic for solver times is generally only used internally within b.run_solver (for optimizers and samplers which require a forward-model to be computed). However, it is useful (in order to diagnose any issues, for example) to be able to see how the combination of solver_times, times, compute_times/compute_phases, mask_enabled, and mask_phases will be interpretted within PHOEBE during b.run_solver. See also Step5: Additionally, we can pass solver to b.run_compute to have the forward-model computed as it would be within the solver itself (this just calls run_compute with the compute option referenced by the solver and with the parsed solver_times). Below we'll go through each of the scenarios listed above and demonstrate how that logic changes the times at which the forward model will be computed within b.run_solver (with the cost-function interpolating between the resulting forward-model and the observations as necessary). The messages regarding the internal choice of logic for solver_times will be exposed at the 'info' level of the logger. We'll leave that off here to avoid the noise of the logger messages from run_compute calls, but you can uncomment the following line to see those messages. Step6: solver_times = 'times' without phase_mask enabled Step7: with phase_mask enabled Step8: solver_times = 'compute_times' without phase_mask enabled Step9: with phase_mask enabled and time-independent hierarchy Step10: with phase_mask enabled and time-dependent hierarchy Step11: In the case where we have a time-dependent system b.run_solver will fail with an error from b.run_checks_solver if compute_times does not fully encompass the dataset times. Step12: This will always be the case when providing compute_phases when the dataset times cover more than a single cycle. Here we'll follow the advice from the error and provide compute_times instead. Step13: Now we'll just flip the constraint back for the remaining examples Step14: solver_times = 'auto' solver_times='auto' determines the times array under both conditions (solver_times='times' and solver_times='compute_times') and ultimately chooses whichever of the two is shorter. To see this, we'll stick with the no-mask, time-independent case and change the length of compute_phases to show the switch to the shorter of the available options. compute_times shorter Step15: times shorter
Python Code: #!pip install -I "phoebe>=2.4,<2.5" Explanation: Advanced: solver_times Setup Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab). End of explanation import phoebe import numpy as np import matplotlib.pyplot as plt Explanation: Let's get started with some basic imports End of explanation b = phoebe.default_binary() b.add_dataset('lc', times=phoebe.linspace(0,5,1001)) b.add_compute('ellc', compute='ellc01') b.set_value_all('ld_mode', 'lookup') b.run_compute('ellc01') times = b.get_value('times@model') fluxes = b.get_value('fluxes@model') sigmas = np.ones_like(times) * 0.01 b = phoebe.default_binary() b.add_dataset('lc', compute_phases=phoebe.linspace(0,1,101), times=times, fluxes=fluxes, sigmas=sigmas) b.add_compute('ellc', compute='ellc01') b.set_value_all('ld_mode', 'lookup') b.add_solver('optimizer.nelder_mead', compute='ellc01', fit_parameters=['teff'], solver='nm_solver') Explanation: And then we'll build a synthetic "dataset" and initialize a new bundle with those data End of explanation print(b.filter(qualifier='solver_times')) print(b.get_parameter(qualifier='solver_times', dataset='lc01').choices) Explanation: solver_times parameter and options End of explanation help(b.parse_solver_times) Explanation: The logic for solver times is generally only used internally within b.run_solver (for optimizers and samplers which require a forward-model to be computed). However, it is useful (in order to diagnose any issues, for example) to be able to see how the combination of solver_times, times, compute_times/compute_phases, mask_enabled, and mask_phases will be interpretted within PHOEBE during b.run_solver. See also: * Advanced: mask_phases * Advanced: Compute Times & Phases To access the underlying times that would be used, we can call b.parse_solver_times. Let's first look at the docstring (also available from the link above): End of explanation #logger = phoebe.logger('info') Explanation: Additionally, we can pass solver to b.run_compute to have the forward-model computed as it would be within the solver itself (this just calls run_compute with the compute option referenced by the solver and with the parsed solver_times). Below we'll go through each of the scenarios listed above and demonstrate how that logic changes the times at which the forward model will be computed within b.run_solver (with the cost-function interpolating between the resulting forward-model and the observations as necessary). The messages regarding the internal choice of logic for solver_times will be exposed at the 'info' level of the logger. We'll leave that off here to avoid the noise of the logger messages from run_compute calls, but you can uncomment the following line to see those messages. 
End of explanation b.set_value('solver_times', 'times') b.set_value('compute_phases', phoebe.linspace(0,1,101)) b.set_value('mask_enabled', False) b.set_value('dperdt', 0.0) dataset_times = b.get_value('times', context='dataset') _ = plt.plot(times, np.ones_like(times)*1, 'k.') compute_times = b.get_value('compute_times', context='dataset') _ = plt.plot(compute_times, np.ones_like(compute_times)*2, 'b.') solver_times = b.parse_solver_times() print(solver_times) _ = plt.plot(solver_times['lc01'], np.ones_like(solver_times['lc01'])*3, 'g.') b.run_compute(solver='nm_solver') _ = b.plot(show=True) Explanation: solver_times = 'times' without phase_mask enabled End of explanation b.set_value('solver_times', 'times') b.set_value('compute_phases', phoebe.linspace(0,1,101)) b.set_value('mask_enabled', True) b.set_value('mask_phases', [(-0.1, 0.1), (0.45,0.55)]) b.set_value('dperdt', 0.0) dataset_times = b.get_value('times', context='dataset') _ = plt.plot(times, np.ones_like(times)*1, 'k.') compute_times = b.get_value('compute_times', context='dataset') _ = plt.plot(compute_times, np.ones_like(compute_times)*2, 'b.') solver_times = b.parse_solver_times() _ = plt.plot(solver_times['lc01'], np.ones_like(solver_times['lc01'])*3, 'g.') b.run_compute(solver='nm_solver') _ = b.plot(show=True) Explanation: with phase_mask enabled End of explanation b.set_value('solver_times', 'compute_times') b.set_value('compute_phases', phoebe.linspace(0,1,101)) b.set_value('mask_enabled', False) b.set_value('dperdt', 0.0) dataset_times = b.get_value('times', context='dataset') _ = plt.plot(times, np.ones_like(times)*1, 'k.') compute_times = b.get_value('compute_times', context='dataset') _ = plt.plot(compute_times, np.ones_like(compute_times)*2, 'b.') solver_times = b.parse_solver_times() _ = plt.plot(solver_times['lc01'], np.ones_like(solver_times['lc01'])*3, 'g.') b.run_compute(solver='nm_solver') _ = b.plot(show=True) Explanation: solver_times = 'compute_times' without phase_mask enabled End of explanation b.set_value('solver_times', 'compute_times') b.set_value('compute_phases', phoebe.linspace(0,1,101)) b.set_value('mask_enabled', True) b.set_value('mask_phases', [(-0.1, 0.1), (0.45,0.55)]) b.set_value('dperdt', 0.0) dataset_times = b.get_value('times', context='dataset') _ = plt.plot(times, np.ones_like(times)*1, 'k.') compute_times = b.get_value('compute_times', context='dataset') _ = plt.plot(compute_times, np.ones_like(compute_times)*2, 'b.') solver_times = b.parse_solver_times() _ = plt.plot(solver_times['lc01'], np.ones_like(solver_times['lc01'])*3, 'g.') b.run_compute(solver='nm_solver') _ = b.plot(show=True) Explanation: with phase_mask enabled and time-independent hierarchy End of explanation b.set_value('solver_times', 'compute_times') b.set_value('compute_phases', phoebe.linspace(0,1,101)) b.set_value('mask_enabled', True) b.set_value('mask_phases', [(-0.1, 0.1), (0.45,0.55)]) b.set_value('dperdt', 0.1) print(b.hierarchy.is_time_dependent()) Explanation: with phase_mask enabled and time-dependent hierarchy End of explanation print(b.run_checks_solver()) Explanation: In the case where we have a time-dependent system b.run_solver will fail with an error from b.run_checks_solver if compute_times does not fully encompass the dataset times. 
End of explanation b.flip_constraint('compute_times', solve_for='compute_phases') b.set_value('compute_times', phoebe.linspace(0,5,501)) print(b.run_checks_solver()) dataset_times = b.get_value('times', context='dataset') _ = plt.plot(times, np.ones_like(times)*1, 'k.') compute_times = b.get_value('compute_times', context='dataset') _ = plt.plot(compute_times, np.ones_like(compute_times)*2, 'b.') solver_times = b.parse_solver_times() _ = plt.plot(solver_times['lc01'], np.ones_like(solver_times['lc01'])*3, 'g.') b.run_compute(solver='nm_solver') _ = b.plot(show=True) Explanation: This will always be the case when providing compute_phases when the dataset times cover more than a single cycle. Here we'll follow the advice from the error and provide compute_times instead. End of explanation _ = b.flip_constraint('compute_phases', solve_for='compute_times') Explanation: Now we'll just flip the constraint back for the remaining examples End of explanation b.set_value('solver_times', 'auto') b.set_value('compute_phases', phoebe.linspace(0,1,101)) b.set_value('mask_enabled', False) b.set_value('dperdt', 0.0) dataset_times = b.get_value('times', context='dataset') _ = plt.plot(times, np.ones_like(times)*1, 'k.') compute_times = b.get_value('compute_times', context='dataset') _ = plt.plot(compute_times, np.ones_like(compute_times)*2, 'b.') solver_times = b.parse_solver_times() _ = plt.plot(solver_times['lc01'], np.ones_like(solver_times['lc01'])*3, 'g.') b.run_compute(solver='nm_solver') _ = b.plot(show=True) Explanation: solver_times = 'auto' solver_times='auto' determines the times array under both conditions (solver_times='times' and solver_times='compute_times') and ultimately chooses whichever of the two is shorter. To see this, we'll stick with the no-mask, time-independent case and change the length of compute_phases to show the switch to the shorter of the available options. compute_times shorter End of explanation b.set_value('solver_times', 'auto') b.set_value('compute_phases', phoebe.linspace(0,1,2001)) b.set_value('mask_enabled', False) b.set_value('dperdt', 0.0) dataset_times = b.get_value('times', context='dataset') _ = plt.plot(times, np.ones_like(times)*1, 'k.') compute_times = b.get_value('compute_times', context='dataset') _ = plt.plot(compute_times, np.ones_like(compute_times)*2, 'b.') solver_times = b.parse_solver_times() _ = plt.plot(solver_times['lc01'], np.ones_like(solver_times['lc01'])*3, 'g.') b.run_compute(solver='nm_solver') _ = b.plot(show=True) Explanation: times shorter End of explanation
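To summarize the 'auto' behaviour demonstrated above in plain Python (this is only a paraphrase of the documented rule, build both candidate arrays and keep the shorter one, not PHOEBE's actual implementation; the helper name and the example arrays are invented for illustration):
import numpy as np

def pick_auto_times(candidate_dataset_times, candidate_compute_times):
    candidates = [np.asarray(candidate_dataset_times),
                  np.asarray(candidate_compute_times)]
    return min(candidates, key=len)              # the shorter array wins

dataset_times = np.linspace(0, 5, 1001)          # stand-ins for the arrays above
compute_times = np.linspace(0, 1, 101)
print(len(pick_auto_times(dataset_times, compute_times)))   # -> 101
This matches the two cases shown above: with 101 compute phases the compute times are the shorter choice, while with 2001 compute phases the 1001 dataset times win instead.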
11,287
Given the following text description, write Python code to implement the functionality described below step by step
Description:
 Oracle and Python with oracledb
This is an example of how to query Oracle from Python.
Setup and prerequisites
This is how you can set up an Oracle instance for testing using a docker image for oracle-xe:
1. run oracle xe on a container from the gvenzl dockerhub repo https://github.com/gvenzl/oci-oracle-xe
Step1: oracledb integration with Pandas
Step2: Use of bind variables
Step3: Basic visualization
Python Code: # connect to Oracle using oracledb
# !pip install oracledb
import oracledb
db_user = 'scott'
db_connect_string = 'localhost:1521/XEPDB1'
db_pass = 'tiger'
# To avoid storing connection passwords use getpass or db_config
# db_connect_string = 'dbserver:1521/orcl.mydomain.com'
# import getpass
# db_pass = getpass.getpass()
ora_conn = oracledb.connect(user=db_user, password=db_pass, dsn=db_connect_string)
# open a cursor, run a query and fetch the results
cursor = ora_conn.cursor()
cursor.execute('select ename, sal from emp')
res = cursor.fetchall()
cursor.close()
print(res)
Explanation: Oracle and Python with oracledb This is an example of how to query Oracle from Python Setup and prerequisites This is how you can set up an Oracle instance for testing using a docker image for oracle-xe 1. run oracle xe on a container from gvenzl dockerhub repo https://github.com/gvenzl/oci-oracle-xe docker run -d --name mydb1 -e ORACLE_PASSWORD=oracle -p 1521:1521 gvenzl/oracle-xe:latest # or use :slim wait till the DB is started, check logs at: docker logs -f mydb1 2. Install the scott/tiger schema with the emp table in PDB xepdb1: docker exec -it mydb1 /bin/bash sed -e s=SCOTT/tiger=SCOTT/tiger@xepdb1= -e s/OFF/ON/ /opt/oracle/product/21c/dbhomeXE/rdbms/admin/utlsampl.sql > script.sql sqlplus system/oracle@xepdb1 <<EOF @script.sql EOF exit oracledb library: This uses oracledb to connect to Oracle, so no need to install the Oracle client. Note: oracledb can also work with the Oracle client as cx_Oracle did, see documentation for details. Query Oracle from Python using the oracledb library End of explanation
import pandas as pd
# query Oracle using ora_conn and put the result into a pandas DataFrame
df_ora = pd.read_sql('select * from emp', con=ora_conn)
df_ora
Explanation: oracledb integration with Pandas End of explanation
df_ora = pd.read_sql('select * from emp where empno=:myempno', params={"myempno":7839}, con=ora_conn)
df_ora
Explanation: Use of bind variables End of explanation
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
df_ora = pd.read_sql('select ename "Name", sal "Salary" from emp', con=ora_conn)
ora_conn.close()
df_ora.plot(x='Name', y='Salary', title='Salary details, from Oracle demo table',
            figsize=(10, 6), kind='bar', color='blue');
Explanation: Basic visualization End of explanation
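As a small follow-on sketch (same scott/tiger demo schema and DSN as above; it will only run if the Docker database from the setup steps is reachable, and the salary threshold is an arbitrary illustration), connections and cursors can also be used as context managers so they are closed automatically, with bind variables passed as a dict:
import oracledb

with oracledb.connect(user='scott', password='tiger',
                      dsn='localhost:1521/XEPDB1') as conn:
    with conn.cursor() as cur:
        # named bind variable, supplied separately from the SQL text
        cur.execute('select ename, sal from emp where sal > :minsal',
                    {'minsal': 2000})
        for ename, sal in cur:       # cursors are iterable row by row
            print(ename, sal)
Keeping the SQL text constant and passing values as binds also lets the database reuse the parsed statement, the same reasoning as in the :myempno example above.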
11,288
Given the following text description, write Python code to implement the functionality described below step by step Description: <!-- 29/10 Archivos de texto. Operaciones con cadenas de caracteres. String formaters. CLASE DE LABORATORIO (Gonzalo) --> Manejo de strings Con los strings podemos hacer muchas operaciones Step1: String formating Existen varias formas crear, concatenar e imprimir strings Step2: Pero si no queremos imprimir ese último Enter lo que tenemos que hacer es poner una coma al final de la línea Step3: ¿Y si lo que quiero es imprimir un número pegado al string? Step4: String formating Pero en algunas ocasiones vamos a tener que crear springs más complejos y agregarle una coma no va a ser suficiente, o tal vez queremos crearlos pero no para imprimirlos, por lo que no podríamos usar la función print. <br> En esos casos, podemos usar la función format Step5: Format lo que hace es reemplazar las llaves con los parámetros que le pasen Step6: Aunque en realidad los números no son obligatorios Step7: Pero la ventaja de usar los números es que podemos imprimir ese parámetro varias veces, y no necesariamente en el órden que figura Step8: Incluso, se pueden usar parámetros nombrados Step9: Incluso, si en lugar de pasarle cada uno de los parámetros le pasamos un diccionario usando el operador ** Step10: Incluso, si lo que le pasamos es una lista, podemos acceder a una posición en particular Step11: Incluso puedo alinear el texto que pongo usando los dos puntos ( Step12: Pueden ver más ejemplos en la documentación oficial de Python<br> También se puese usar el signo % para construir un string, aunque no suele quedar tan claro el código Step13: Y también podemos separar y combinar strings Step14: Unicodes Los strings ocupan 1 byte en memoria, por lo que sólo se pueden representar 256 caractéres distintos; pero, si queremos representar los caracteres de todos los idiomas, 255 caracteres no son suficientes. Debido a esto, es que surgieron distintas codificaciones de los archivos, como pueden latin-1 (iso-8859-1), utf-8, etc.<br> Y si bien en un principio esto fue una solución, la verdad es que con el tiempo trajo mucho problemas por no saber cómo interpretar cada letra. <br> Para solucionar este problema es que Python introdujo en la versión 2.0 los caracteres de tipo unicode que pasaron a ocupar 2 bytes, por lo que ahora se pueden representar 65.536 todos los caracteres necesarios. <br> En Python 3 todos los strings pasaron a ser del tipo Unicode. Persistencia de datos Pero todo lo que vimos por el momento se guarda en memoria dinámica, por lo que al apagar la computadora, o simplemente con cerrar el programa y volver a abrirlo perdimos todos los datos que nos teníamos. La alternativa para esto siguen siendo los archivos. Apertura de archivos Al igual que en C, en Python en el mismo momento que abrimos el archivo, se lo asignamos a uno físico y elegimos el modo de apertura, que si no le indicamos nada, tomará por defecto el de lectura. <br> El modo de apertura puede ser cualquier combinación de Step15: La única condición que tenemos para usar este método es que el archivo lo hayamos abierto en modo lectura. Step16: Otra primitiva que podemos usar es readline, que al igual que read, también puede recibir un parámetro que indique la cantidad máxima de bytes a leer. Si no se le pasa ningún parámetro, lee toda la línea. 
Step17: Pero no es necesario que leamos de a una sola línea, sino que también podemos leer todas las líneas del archivo y guardarlas en una lista haciendo uso de la primitiva readlines. Step18: Sin embargo, la forma más Pythonic de leer el archivo por líneas es usando la estructura for y quedaría casi como lo diríamos en castellano Step19: Escritura de archivos Para escribir en un archivo podemos usar las las primitivas write(string) y writelines(lista_strings), que la primera es para escribir una cadena de caracteres y la segunda para escribir una lista de strings, uno a continuación del otro. Es importante destacar que en ningún caso se escribe algún carácter que no figure en los strings, como por ejemplo, caracteres de fin de línea. <br> El uso de writelines es equivalente a recorrer la lista y hacerle un write a cada elemento. <br> Pero el costo de escribir algo en el disco es mucho mayor a escribirlo en memoria por lo que, al igual que en C, se usa un buffer, que no es más que una porción de memoria para ir guardando en forma temporal los datos y cuando alcanzan un tamaño considerable se lo manda a escribir al disco. Otra forma de asegurarse que se haga la escritura es usando la primitiva flush, la cual guarda en el disco el contenido del buffer y lo vacía. <br> Step20: ¿Y qué pasa si le quiero agregar algunas líneas a este archivo? Step21: Otra forma de asegurarse que se escriba lo que hay en el disco es cerrándolo. Moverse en un archivo Al igual que en los archivos binarios de Pascal, en Python también podemos saltar a distintas posiciones mediante la primitiva seek(pos) la cual recibe, como mínimo un parámetro que indica la posición a la que nos queremos mover. Opcionalmente puede recibir un segundo parámetro Step22: Y así como podemos movernos en un archivo, también podemos averiguar nuestra posición usando la primitiva tell(). Step23: ¿Cómo recorrer todo un archivo? Cuando llegamos al final de un archivo de texto usando la función read o readline Python no arroja ningún valor, pero tampoco retorna ningún caracter, por lo que podríamos usar eso como condición de corte Step24: Como pueden ver, todas las líneas hasta la 22 (que es la última linea del arhcivo) tienen una longitud mayor a 0; incluso las 5, 10 y 19 que aparentemente no tienen ningún caracter. Eso es así ya que siempre tienen por lo menos uno, que es el Enter o \n. <br> Otra cosa a tener en cuenta es que, por más que intentamos leer más allá del fin de archivo, en ningún momento el interprete nos lanzó una excepción. <br> Por lo tanto, si no sabemos la longitud del archivo como era este caso, podríamos usar esta información para darnos cuenta cuándo dejar de leer Step25: Aunque Python también nos ofrece otra forma de recorer un archivo, y es usando una de las estructuras que ya conocemos Step26: O, incluso, usar enumerate para saber qué línea estoy leyendo
Python Code: cadena_caracteres = "Hola mundo" print dir(cadena_caracteres) Explanation: <!-- 29/10 Archivos de texto. Operaciones con cadenas de caracteres. String formaters. CLASE DE LABORATORIO (Gonzalo) --> Manejo de strings Con los strings podemos hacer muchas operaciones: End of explanation print 'Hola mundo' print 'Pero el print también imprime un Enter al terminar la línea' Explanation: String formating Existen varias formas crear, concatenar e imprimir strings: ¿Cómo imprimir un string? Para imprimir un string sólo es necesario usar la palabra print: End of explanation print 'Pero al imprimir con la coma al final', print 'cambia el enter por un espacio' print 'También puedo escribir lo mismo' ' en dos partes' print 'Lo que puedo usar ' \ 'cuando un string es muy largo' \ 'si le agrego una contrabarra' Explanation: Pero si no queremos imprimir ese último Enter lo que tenemos que hacer es poner una coma al final de la línea: End of explanation print 'Entonces tengo que ponerlo después de la coma:', 5 print 'Al que también le agrega la coma para separarlo' print 'También puedo ponerlo en el medio:\nHoy es', 29, 'de Octubre' Explanation: ¿Y si lo que quiero es imprimir un número pegado al string? End of explanation print str.format.__doc__ Explanation: String formating Pero en algunas ocasiones vamos a tener que crear springs más complejos y agregarle una coma no va a ser suficiente, o tal vez queremos crearlos pero no para imprimirlos, por lo que no podríamos usar la función print. <br> En esos casos, podemos usar la función format End of explanation print 'El nombre del jugador número {0} es {1}'.format(10, 'Lionel Messi') Explanation: Format lo que hace es reemplazar las llaves con los parámetros que le pasen: End of explanation print 'El nombre del jugador número {} es {}'.format(10, 'Lionel Messi') Explanation: Aunque en realidad los números no son obligatorios: End of explanation print '{0}{1}{0}'.format('abra', 'cad') Explanation: Pero la ventaja de usar los números es que podemos imprimir ese parámetro varias veces, y no necesariamente en el órden que figura: End of explanation print 'La nota del alumno {padron} - {nombre} es un {nota}.'. 
\ format(padron=123, nombre='Carlos Sanchez', nota=8) Explanation: Incluso, se pueden usar parámetros nombrados: End of explanation alumno = { 'padron': 123, 'nombre': 'Carlos Sanchez', 'nota': 8 } print 'La nota del alumno {padron} - {nombre} es un {nota}.'.\ format(**alumno) Explanation: Incluso, si en lugar de pasarle cada uno de los parámetros le pasamos un diccionario usando el operador ** End of explanation alumno = { 'padron': 123, 'nombre': 'Carlos Sanchez', 'tps': [8, 9] } print 'La nota de los tps de {nombre} son {tps[0]} y {tps[1]}.'.\ format(**alumno) Explanation: Incluso, si lo que le pasamos es una lista, podemos acceder a una posición en particular: End of explanation print 'Imprimo un texto alineado a la |{:<20}| de 20 posiciones'.format( 'izquierda') print 'Imprimo un texto alineado a la |{:>20}| de 20 posiciones'.format( 'derecha') print 'Imprimo un texto |{:^20}| de 20 posiciones'.format('centrado') print 'Relleno |{:#<20}| con #'.format('izquierda') print 'Relleno |{:#>20}| con #'.format('derecha') print 'Relleno |{:#^20}| con #'.format('centrado') Explanation: Incluso puedo alinear el texto que pongo usando los dos puntos (:) End of explanation cadena_caracteres = 'Hola mundo' print '"{0}" cambia a "{1}" con title'.format(cadena_caracteres, cadena_caracteres.title()) print '"{0}" cambia a "{1}" con lower'.format(cadena_caracteres, cadena_caracteres.lower()) print '"{0}" cambia a "{1} con upper"'.format(cadena_caracteres, cadena_caracteres.upper()) print '"{0}" cambia a "{1}" con capitalize'.format(cadena_caracteres, cadena_caracteres.capitalize()) print '"{0}" cambia a "{1}" cuando reemplazamos las o por 0'.format(cadena_caracteres, cadena_caracteres.replace('o', '0')) x = 'mi string' y = x.replace('i', 'AA') print x, y print id(x) x += 'Hola mundo' print id(x) Explanation: Pueden ver más ejemplos en la documentación oficial de Python<br> También se puese usar el signo % para construir un string, aunque no suele quedar tan claro el código: Funciones de los strings También existen varias funciones que podemos usar cuando trabajamos con strings: End of explanation print "Hola mundo".split() print "Hola mundo".split('o') print "Hola mundo".split('mu') print ''.join(['Hola', 'mundo']) print ' '.join(['Hola', 'mundo']) var = '#separador#'.join(['Hola', 'mundo']) print var padron, nombre, nota = '12321,nom bekr,4'.split(',') Explanation: Y también podemos separar y combinar strings: End of explanation arch = open("ejemplo.txt") cadena = arch.read(15) print "# Imprimo los primeros 15 caracteres del archivo. Tiene que ser 'Python was crea'" print cadena print "# Leo otros 7 caracteres y dejo el cursor del archivo en la siguiente posición. Tiene que ser 'ted in '" cadena = arch.read(7) print cadena print "# Ahora leo el resto del archivo." cadena = arch.read() print cadena print '# Cierro el archivo' arch.close() Explanation: Unicodes Los strings ocupan 1 byte en memoria, por lo que sólo se pueden representar 256 caractéres distintos; pero, si queremos representar los caracteres de todos los idiomas, 255 caracteres no son suficientes. Debido a esto, es que surgieron distintas codificaciones de los archivos, como pueden latin-1 (iso-8859-1), utf-8, etc.<br> Y si bien en un principio esto fue una solución, la verdad es que con el tiempo trajo mucho problemas por no saber cómo interpretar cada letra. 
<br> Para solucionar este problema es que Python introdujo en la versión 2.0 los caracteres de tipo unicode que pasaron a ocupar 2 bytes, por lo que ahora se pueden representar 65.536 todos los caracteres necesarios. <br> En Python 3 todos los strings pasaron a ser del tipo Unicode. Persistencia de datos Pero todo lo que vimos por el momento se guarda en memoria dinámica, por lo que al apagar la computadora, o simplemente con cerrar el programa y volver a abrirlo perdimos todos los datos que nos teníamos. La alternativa para esto siguen siendo los archivos. Apertura de archivos Al igual que en C, en Python en el mismo momento que abrimos el archivo, se lo asignamos a uno físico y elegimos el modo de apertura, que si no le indicamos nada, tomará por defecto el de lectura. <br> El modo de apertura puede ser cualquier combinación de: 'r' Lectura: el archivo debe existir. Similar al reset de Pascal. 'w' Escritura: no es necesario que el archivo exista, pero si existe lo sobre escribe. Similar al rewrite de Pascal. 'a' Append: Solo agrega al final y no es necesario que el archivo exista. Similar al append de Pascal. 't' Texto: Archivo de texto 'b' Binario: Archivo binario '+' *Permite lectura y escrituras simultáneas La primitiva del lenguaje para abrir y asignar un archivo es open, la cual puede recibir uno o dos parámetros. El primero es obligatorio, y corresponde a la ubicación relativa o absoluta del archivo físico. El segundo parámetro indica el modo de apertura y es opcional. Si no se lo pasamos asumirá que lo queremos abrir en modo Lectura. <br> Supongamos que estamos usando el intérprete en un escenario en el que solo tenemos un archivo que se llama f2.txt y queremos trabajar con los archivos f1.txt y f2.txt. <br> ```Python Lanza una excepción de IOError por no existir el archivo e intentar abrirlo en modo lectura. file1 = open("f1.txt") Traceback (most recent call last): File "<stdin>", line 1, in <module> IOError: [Errno 2] No such file or directory: 'f1.txt' Intento abrir el archivo f1.txt, pero en modo escritura, por lo crea y no falla. Si hubiera existido, lo hubiera truncado y creado vacío. file1 = open("f1.txt", "w") Abro el archivo f2.txt en modo lectura sin problemas, ya que éste si existe. file2 = open("f2.txt") ``` Cerrar un archivo Para cerrar un archivo solo tenemos que indicarlo poniendo la variable seguida de un punto y la primitiva close. La única restricción es que la variable sea de tipo archivo, si cerramos un archivo cerrado este sigue cerrado; y si cerramos uno abierto, el mismo cambia de estado. ```Python file2 = open("f2.txt") # Abro el archivo en modo lectura file2.close() # Cierro el archivo ``` Lectura de archivos Supongamos que tenemos un archivo llamado ejemplo.txt y tiene el siguiente texto: Python was created in the early 1990s by Guido van Rossum at Stichting Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands as a successor of a language called ABC. Guido remains Python's principal author, although it includes many contributions from others. In 1995, Guido continued his work on Python at the Corporation for National Research Initiatives (CNRI, see http://www.cnri.reston.va.us) in Reston, Virginia where he released several versions of the software. In May 2000, Guido and the Python core development team moved to BeOpen.com to form the BeOpen PythonLabs team. In October of the same year, the PythonLabs team moved to Digital Creations (now Zope Corporation, see http://www.zope.com). 
In 2001, the Python Software Foundation (PSF, see http://www.python.org/psf/) was formed, a non-profit organization created specifically to own Python-related Intellectual Property. Zope Corporation is a sponsoring member of the PSF. All Python releases are Open Source (see http://www.opensource.org for the Open Source Definition). Historically, most, but not all, Python releases have also been GPL-compatible. Para leer un archivo podemos usar la primitiva read, la cual puede recibir un parámetro que indique la cantidad de caracteres a leer. Si no se pasa ese parámetro el intérprete leerá todo el archivo y lo retornará. End of explanation arch2 = open("ejemplo2.txt", "w") arch2.read() # Y si intentamos con un append? arch3 = open("ejemplo1.txt", "a") arch3.read() Explanation: La única condición que tenemos para usar este método es que el archivo lo hayamos abierto en modo lectura. End of explanation arch = open("ejemplo.txt") linea = arch.readline() # Notar que también imprime el Enter o \n print linea linea = arch.readline(7) # La segunda línea es 'Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands' print linea arch.close() Explanation: Otra primitiva que podemos usar es readline, que al igual que read, también puede recibir un parámetro que indique la cantidad máxima de bytes a leer. Si no se le pasa ningún parámetro, lee toda la línea. End of explanation arch = open("ejemplo.txt") lineas = arch.readlines() print lineas arch.close() Explanation: Pero no es necesario que leamos de a una sola línea, sino que también podemos leer todas las líneas del archivo y guardarlas en una lista haciendo uso de la primitiva readlines. End of explanation arch = open("ejemplo.txt") for linea in arch: print len(linea) arch.close() Explanation: Sin embargo, la forma más Pythonic de leer el archivo por líneas es usando la estructura for y quedaría casi como lo diríamos en castellano: "Para cada línea del archivo. <br> Por ejemplo, si queremos imprimir la cantidad de caracteres de cada línea podríamos hacer: End of explanation arch2 = open("ejemplo2.txt", "w") arch2.write("Es la primer cadena") arch2.write("Seguida de la segunda con un fin de linea\n") arch2.writelines(["1. Primero de la lista sin fin de línea. ", "2. Segundo string con fin de línea.\n", "3. Tercero con\\n.\n", "4. y último."]) arch2.flush() arch2.close() arch2 = open("ejemplo2.txt", "r+a") strfile = arch2.read() print strfile Explanation: Escritura de archivos Para escribir en un archivo podemos usar las las primitivas write(string) y writelines(lista_strings), que la primera es para escribir una cadena de caracteres y la segunda para escribir una lista de strings, uno a continuación del otro. Es importante destacar que en ningún caso se escribe algún carácter que no figure en los strings, como por ejemplo, caracteres de fin de línea. <br> El uso de writelines es equivalente a recorrer la lista y hacerle un write a cada elemento. <br> Pero el costo de escribir algo en el disco es mucho mayor a escribirlo en memoria por lo que, al igual que en C, se usa un buffer, que no es más que una porción de memoria para ir guardando en forma temporal los datos y cuando alcanzan un tamaño considerable se lo manda a escribir al disco. Otra forma de asegurarse que se haga la escritura es usando la primitiva flush, la cual guarda en el disco el contenido del buffer y lo vacía. 
<br> End of explanation arch2.write("Esto lo estoy agregando.\n.") arch2.writelines("Y estas dos líneas también con un \\n al final\n de cada una.\n") arch2.flush() arch2 = open("ejemplo2.txt", "r") # El open hace que me mueva a la primer posición del archivo. print arch2.read() arch2.close() Explanation: ¿Y qué pasa si le quiero agregar algunas líneas a este archivo? End of explanation arch = open("ejemplo.txt") arch.seek(30) # Voy a la posición número 30 del archivo print arch.read(7) # Debería salir 'y 1990s' arch.seek(-5,1) # Me muevo 5 posiciones para atrás desde mi posición actual. print arch.read(7) # Debería imprimir '1990s b' arch.seek(-12,2) # Me muevo a la posición número 12, comenzando a contar desde el final. print arch.read(10) # Debería imprimir 'compatible' arch.close() Explanation: Otra forma de asegurarse que se escriba lo que hay en el disco es cerrándolo. Moverse en un archivo Al igual que en los archivos binarios de Pascal, en Python también podemos saltar a distintas posiciones mediante la primitiva seek(pos) la cual recibe, como mínimo un parámetro que indica la posición a la que nos queremos mover. Opcionalmente puede recibir un segundo parámetro: * 0: La posición es desde el inicio del archivo y debe ser mayor o igual a 0 * 1: La posición es relativa a la posición actual; puede ser positiva o negativa * 2: La posición es desde el final del archivo, por lo que debe ser negativa End of explanation arch = open("ejemplo.txt") arch.seek(30) print arch.tell() # Debería imprimir 30 arch.seek(-5,1) # Retrocedo 5 posiciones print arch.tell() # Debería imprimir 25 arch.seek(-12,2) # Voy a 12 posiciones antes del fin de archivo print arch.tell() # Debería imprimir 1132 print arch.read(10) # Leo 10 caracteres print arch.tell() # Debería imprimir 1142 Explanation: Y así como podemos movernos en un archivo, también podemos averiguar nuestra posición usando la primitiva tell(). End of explanation arch = open("ejemplo.txt") # El archivo ejemplo.txt tiene 22 líneas, por lo que # si quiero imprimirlo completo anteponiendo el # número de línea y la cantidad de caracteres # puedo hacer: for x in range(1, 25): linea = arch.readline() print '{:2}[{:02}] - {}'.format(x, len(linea), linea) arch.close() Explanation: ¿Cómo recorrer todo un archivo? Cuando llegamos al final de un archivo de texto usando la función read o readline Python no arroja ningún valor, pero tampoco retorna ningún caracter, por lo que podríamos usar eso como condición de corte: End of explanation arch = open("ejemplo.txt") # Si no sabemos la cantidad de líneas que tiene # el archivo que queremos recorrer podemos hacer: linea = arch.readline() x = 0 while linea: # Es decir, mientras me devuelva algo # distinto al sting vacío x += 1 print '{:2}[{:02}] - {}'.format(x, len(linea), linea) linea = arch.readline() arch.close() Explanation: Como pueden ver, todas las líneas hasta la 22 (que es la última linea del arhcivo) tienen una longitud mayor a 0; incluso las 5, 10 y 19 que aparentemente no tienen ningún caracter. Eso es así ya que siempre tienen por lo menos uno, que es el Enter o \n. <br> Otra cosa a tener en cuenta es que, por más que intentamos leer más allá del fin de archivo, en ningún momento el interprete nos lanzó una excepción. 
<br> Por lo tanto, si no sabemos la longitud del archivo como era este caso, podríamos usar esta información para darnos cuenta cuándo dejar de leer: End of explanation arch = open("ejemplo.txt") # Si no sabemos la cantidad de líneas que tiene # el archivo que queremos recorrer podemos hacer: x = 0 for linea in arch: x += 1 print '{:2}[{:02}] - {}'.format(x, len(linea), linea) arch.close() Explanation: Aunque Python también nos ofrece otra forma de recorer un archivo, y es usando una de las estructuras que ya conocemos: for End of explanation arch = open("ejemplo.txt") # Si no sabemos la cantidad de líneas que tiene # el archivo que queremos recorrer podemos hacer: # Usando enumerate y comenzando en 1 for x, linea in enumerate(arch, 1): print '{:2}[{:02}] - {}'.format(x, len(linea), linea) arch.close() Explanation: O, incluso, usar enumerate para saber qué línea estoy leyendo: End of explanation
11,289
Given the following text description, write Python code to implement the functionality described below step by step Description: Title Student Names Learning Goals (Why are we asking you to do this?) Core Assignment Step1: Experimenting with this simulation A Forest With Thunderbolt & Lightning, Very Very Frightening Step2: A Forest Where You Show Trees No Mercy
Python Code: # Some setup to load outside web page elements from IPython.display import IFrame # When you execute this cell, it'll reset the simulationb IFrame("http://ncase.me/simulating/model?local=forest/0_growth&play=0&edit=1", width=800, height=400) Explanation: Title Student Names Learning Goals (Why are we asking you to do this?) Core Assignment End of explanation IFrame("http://ncase.me/simulating/model?local=forest/1_fire&play=1&edit=1", width=800, height=450) Explanation: Experimenting with this simulation A Forest With Thunderbolt & Lightning, Very Very Frightening End of explanation IFrame("http://ncase.me/simulating/model?local=forest/2_firebreak&edit=1&paused=1", width=800, height=450) Explanation: A Forest Where You Show Trees No Mercy End of explanation
11,290
Given the following text description, write Python code to implement the functionality described below step by step Description: Welcome to OnSSET Jupyter Interface¶ This page will guide you through the a simplified version of the OnSSET code, as well as the various parameters that can be set to generate any scenario of interest. The code is split up into blocks indicating main steps of the electrification analysis. Thus the blocks shall be executed sequesntially. Note! Online vizualization does not work if tool runs offline. Here you can choose the country of the analysis, as well as the modelling period. Step1: Step 1 and 2. GIS data collection and processing GIS data collection and processing is a demanding and time consuming process. The necessary layers should be prepared and calibarated properly for the model to work. In this case pre-made data will be used in a form of .csv files avaialble <a href="https Step2: Step 3a. Enter country specific data (Social) These are values that vary per country. They should be changed accordingly to better reflect the selected country's current and expected development. Step3: Step 3b. Enter country specific data (Energy Access Target) Step4: Step 3c. Enter country specific data (Preparation - Calibration) The cell below contains the procedures to prepare the geospatial data and make it ready to process a scenario. This includes setting grid penalties, calculating wind capacity factors and estimating current population and electricity access a values. The most important part is to set the actual electricity access rate, and then to adjust the other parameters to let the software which settlements are electrified and which not. Step5: Then you should set the parameters that decide whether or not a settlement is grid-connected, and run the block to see what the result is. This will need to be repeated until a satisfactory value is reached! Step6: Step 3d. Enter country specific data (Technology specifications & costs) The cell below contains all the information that is used to calculate the levelised costs for all the technologies, including grid. These should be updated to reflect the most accurate values. The following values can be provided by KTH dESA, based on OSeMOSYS, the open source optimization model for long-run integrated assessment and energy planning. These are the general parameters for calculating LCOE for all technologies Step7: These are the technology parameters for extending the grid Step8: These are the technology parameters for the diesel technologies Step9: These are the technology parameters for PV Step10: These are the technology parameters for hydro and wind Step11: Step 4. Estimating the LCoE per technology under various demand profiles Every technology yields a different Levelized Cost for electricity production (LCoE) based on specific characteristics such as the population size and resource availability and/or cost. To illustrate, the cost of providing electricity in a low populated, isolated location (far from grid and roads) will probably be a more demanding (thus expensive) task than a high populated urban settlement. Here is an example of how the different technologies perform under the followinga assumptions Step12: Step 5. Calculate technology costs for every settlement in the country Based on the previous calculation this piece of code identifies the LCoE that every technology can provide, for each single populated settlement of the selected country. Step13: Step 6. 
Grid extensions - The electrification algorithm This cell takes all the currently grid-connected points in the country, and looks at the points within a certain distance from them, to see if it is more ecnomical to connect them to the grid, or to use one of the non-grid technologies calculate above. Once more points are connected to the grid, the process is repeated, so that new points close to those points might also be connected. This is repeated until there are no new points to connect to the grid. The onle value that needs to be entered is the additional cost paid when extending the grid, to strengthen the previous sections of grid. It is given asa ratio of the original cost of that grid section Step14: Then this code runs the analysis. Step15: Step 7 - Results, Summaries and Visualization With all the calculations and grid-extensions complete, this cell gets the final results on which technology was chosen for each point, how much capacity needs to be installed and what it will cost. Then the summaries are generated, to show the overall requirements for the country. The only values that can be changed here are some capacity factor values for different technologies. Step16: Mapping of electrification results This code generates two maps
Python Code: country = 'Malawi' #Dependencies from IPython.display import display, Markdown, HTML import seaborn as sns import matplotlib.pylab as plt import folium import branca.colormap as cm import json %matplotlib inline %run onsset.py Explanation: Welcome to OnSSET Jupyter Interface¶ This page will guide you through the a simplified version of the OnSSET code, as well as the various parameters that can be set to generate any scenario of interest. The code is split up into blocks indicating main steps of the electrification analysis. Thus the blocks shall be executed sequesntially. Note! Online vizualization does not work if tool runs offline. Here you can choose the country of the analysis, as well as the modelling period. End of explanation o = SettlementProcessor('{}.csv'.format(country)) display(Markdown('### A random sampling from the input file for {}'.format(country))) o.df[['Country','X','Y','Pop','GridDistPlan','NightLights','TravelHours','GHI', 'WindVel','Hydropower','HydropowerDist']].sample(7) Explanation: Step 1 and 2. GIS data collection and processing GIS data collection and processing is a demanding and time consuming process. The necessary layers should be prepared and calibarated properly for the model to work. In this case pre-made data will be used in a form of .csv files avaialble <a href="https://drive.google.com/drive/folders/0B4H9lfvb9fHKUFNUUHM5UTJ6WW8" target="_blank">here</a>. You can see an example for the selected country below. End of explanation pop_actual = 17000000 pop_future = 26000000 urban_current = 0.574966829 urban_future = 0.633673829 num_people_per_hh_rural = 4.6 num_people_per_hh_urban = 4.4 Explanation: Step 3a. Enter country specific data (Social) These are values that vary per country. They should be changed accordingly to better reflect the selected country's current and expected development. End of explanation energy_per_hh_rural = 50 # in kWh/household/year (examples are 22, 224, 695, 1800, 2195) energy_per_hh_urban = 1200 o.condition_df() o.grid_penalties() o.calc_wind_cfs() ignore = o.calibrate_pop_and_urban(pop_actual, pop_future, urban_current, urban_future, 100) Explanation: Step 3b. Enter country specific data (Energy Access Target) End of explanation elec_actual = 0.11 Explanation: Step 3c. Enter country specific data (Preparation - Calibration) The cell below contains the procedures to prepare the geospatial data and make it ready to process a scenario. This includes setting grid penalties, calculating wind capacity factors and estimating current population and electricity access a values. The most important part is to set the actual electricity access rate, and then to adjust the other parameters to let the software which settlements are electrified and which not. End of explanation # Set the minimum night light intensity, below which it is assumed there is no electricity access. min_night_lights = 10 # In addition to the above, one of the below conditions must be reached to consider a settlement eelctrified. 
pop_cutoff = 10000 max_grid_dist = 10 # in km max_road_dist = 5 # in km o.df[SET_ELEC_CURRENT] = o.df.apply(lambda row: 1 if row[SET_NIGHT_LIGHTS] > min_night_lights and (row[SET_POP_CALIB] > pop_cutoff or row[SET_GRID_DIST_CURRENT] < max_grid_dist or row[SET_ROAD_DIST] < max_road_dist) else 0, axis=1) o.df.loc[o.df[SET_ELEC_CURRENT] == 1, SET_NEW_CONNECTIONS] = o.df[SET_POP_FUTURE] - o.df[SET_POP_CALIB] o.df.loc[o.df[SET_ELEC_CURRENT] == 0, SET_NEW_CONNECTIONS] = o.df[SET_POP_FUTURE] o.df.loc[o.df[SET_NEW_CONNECTIONS] < 0, SET_NEW_CONNECTIONS] = 0 elec_modelled = o.df.loc[o.df[SET_ELEC_CURRENT] == 1, SET_POP_CALIB].sum() / pop_actual display(Markdown('### The modelled electrification rate is {:.2f}, compared to the actual value of {:.2f}. \ If this is acceptable, you can continue.'.format(elec_modelled, elec_actual))) display(Markdown('### A random sampling from the input file for {}, showing some newly calculated columns' .format(country))) o.df[[SET_X_DEG,SET_Y_DEG, SET_POP_FUTURE, SET_URBAN, SET_ELEC_CURRENT, SET_WINDCF, SET_GRID_PENALTY]].sample(5) Explanation: Then you should set the parameters that decide whether or not a settlement is grid-connected, and run the block to see what the result is. This will need to be repeated until a satisfactory value is reached! End of explanation Technology.set_default_values(start_year=2015, end_year=2030, discount_rate=0.08, grid_cell_area=1, mv_line_cost=9000, lv_line_cost=5000, mv_line_capacity=50, lv_line_capacity=10, lv_line_max_length=30, hv_line_cost=53000, mv_line_max_length=50, hv_lv_transformer_cost=5000, mv_increase_rate=0.1) Explanation: Step 3d. Enter country specific data (Technology specifications & costs) The cell below contains all the information that is used to calculate the levelised costs for all the technologies, including grid. These should be updated to reflect the most accurate values. The following values can be provided by KTH dESA, based on OSeMOSYS, the open source optimization model for long-run integrated assessment and energy planning. 
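All of these technology inputs feed a levelised-cost calculation. As a point of reference, the generic LCOE formula they plug into can be sketched in a few lines; this is a textbook illustration under simplified assumptions, not the internals of the Technology class (which, per the parameters below, also handles T&D lines, connection costs, fuel and losses), and the example numbers are simply borrowed from the PV parameters used in this notebook.

def simple_lcoe(capital_cost, om_frac, annual_energy_kwh, discount_rate, lifetime_years):
    # Levelised cost = discounted lifetime costs / discounted lifetime energy
    years = range(1, lifetime_years + 1)
    disc_costs = capital_cost + sum(capital_cost * om_frac / (1 + discount_rate) ** t for t in years)
    disc_energy = sum(annual_energy_kwh / (1 + discount_rate) ** t for t in years)
    return disc_costs / disc_energy

# Roughly the mini-grid PV case: 4300 USD/kW capital, 1.5% O&M, ~1500 kWh per kW and year,
# 8% discount rate, 20-year lifetime
print(round(simple_lcoe(4300, 0.015, 1500, 0.08, 20), 3))  # roughly 0.33-0.34 USD/kWh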
These are the general parameters for calculating LCOE for all technologies: End of explanation max_grid_extension_dist = 50 grid_price = 0.05 grid_calc = Technology(om_of_td_lines=0.03, distribution_losses=0.1, connection_cost_per_hh=125, base_to_peak_load_ratio=0.5, capacity_factor=1, tech_life=30, grid_capacity_investment=2000, grid_price=grid_price) Explanation: These are the technology parameters for extending the grid: End of explanation diesel_price = 0.5 mg_diesel_calc = Technology(om_of_td_lines=0.03, distribution_losses=0.05, connection_cost_per_hh=100, base_to_peak_load_ratio=0.5, capacity_factor=0.7, tech_life=15, om_costs=0.1, efficiency=0.33, capital_cost=721, diesel_price=diesel_price, diesel_truck_consumption=33.7, diesel_truck_volume=15000) sa_diesel_calc = Technology(base_to_peak_load_ratio=0.5, capacity_factor=0.7, tech_life=10, om_costs=0.1, capital_cost=938, diesel_price=diesel_price, standalone=True, efficiency=0.28, diesel_truck_consumption=14, diesel_truck_volume=300) Explanation: These are the technology parameters for the diesel technologies: End of explanation mg_pv_calc = Technology(om_of_td_lines=0.03, distribution_losses=0.05, connection_cost_per_hh=100, base_to_peak_load_ratio=0.9, tech_life=20, om_costs=0.015, capital_cost=4300) sa_pv_calc = Technology(base_to_peak_load_ratio=0.9, tech_life=15, om_costs=0.012, capital_cost=5500, standalone=True) Explanation: These are the technology parameters for PV: End of explanation mg_hydro_calc = Technology(om_of_td_lines=0.03, distribution_losses=0.05, connection_cost_per_hh=100, base_to_peak_load_ratio=1, capacity_factor=0.5, tech_life=30, capital_cost=5000, om_costs=0.02) mg_wind_calc = Technology(om_of_td_lines=0.03, distribution_losses=0.05, connection_cost_per_hh=100, base_to_peak_load_ratio=0.75, capital_cost=3000, om_costs=0.02, tech_life=20) Explanation: These are the technology parameters for hydro and wind: End of explanation grid_lcoes_rural = grid_calc.get_grid_table(energy_per_hh_rural, num_people_per_hh_rural, max_grid_extension_dist) grid_lcoes_urban = grid_calc.get_grid_table(energy_per_hh_urban, num_people_per_hh_urban, max_grid_extension_dist) display(Markdown('### Example of LCoE variation (in USD/kWh) per technology depending on number of people residing a settlement')) lcoe_eg_people = [10, 500, 1000, 2000, 5000, 10000] lcoe_eg_people_display = ['{} people'.format(p) for p in lcoe_eg_people] lcoe_sample = pd.DataFrame(columns=['grid', 'sa_diesel','sa_pv','mg_diesel','mg_pv','mg_wind','mg_hydro'], index=lcoe_eg_people_display) lcoe_sample['grid'] = [grid_calc.get_lcoe(energy_per_hh_urban, p, num_people_per_hh_urban, additional_mv_line_length=20) for p in lcoe_eg_people] lcoe_sample['mg_wind'] = [mg_wind_calc.get_lcoe(energy_per_hh_urban, p, num_people_per_hh_urban, capacity_factor = 0.4) for p in lcoe_eg_people] lcoe_sample['mg_hydro'] = [mg_hydro_calc.get_lcoe(energy_per_hh_urban, p, num_people_per_hh_urban, mv_line_length=4) for p in lcoe_eg_people] lcoe_sample['mg_pv'] = [mg_pv_calc.get_lcoe(energy_per_hh_urban, p, num_people_per_hh_urban, capacity_factor = 1500/8760) for p in lcoe_eg_people] lcoe_sample['sa_pv'] = [sa_pv_calc.get_lcoe(energy_per_hh_urban, p, num_people_per_hh_urban, capacity_factor = 1500/8760) for p in lcoe_eg_people] lcoe_sample['mg_diesel'] = [mg_diesel_calc.get_lcoe(energy_per_hh_urban, p, num_people_per_hh_urban, travel_hours=10) for p in lcoe_eg_people] lcoe_sample['sa_diesel'] = [sa_diesel_calc.get_lcoe(energy_per_hh_urban, p, num_people_per_hh_urban, travel_hours=10) for 
p in lcoe_eg_people] lcoe_sample.head(10) Explanation: Step 4. Estimating the LCoE per technology under various demand profiles Every technology yields a different Levelized Cost for electricity production (LCoE) based on specific characteristics such as the population size and resource availability and/or cost. To illustrate, the cost of providing electricity in a low populated, isolated location (far from grid and roads) will probably be a more demanding (thus expensive) task than a high populated urban settlement. Here is an example of how the different technologies perform under the following assumptions: - Distance from the National Electricity grid: 20 km - Global Horizontal Irradiation: 1500 kWh/m2/year - Hydro Availability: Positive - Diesel price: 0.345 USD/liter ### Note: this block takes a bit of time End of explanation o.set_scenario_variables(energy_per_hh_rural, energy_per_hh_urban, num_people_per_hh_rural, num_people_per_hh_urban) o.calculate_off_grid_lcoes(mg_hydro_calc, mg_wind_calc, mg_pv_calc, sa_pv_calc, mg_diesel_calc, sa_diesel_calc) display(Markdown('### A selection of LCoEs achieved for a sample of settlements')) o.df[[SET_LCOE_MG_HYDRO, SET_LCOE_MG_PV, SET_LCOE_SA_PV, SET_LCOE_MG_DIESEL, SET_LCOE_SA_DIESEL, SET_LCOE_MG_WIND]].sample(7) Explanation: Step 5. Calculate technology costs for every settlement in the country Based on the previous calculation this piece of code identifies the LCoE that every technology can provide, for each single populated settlement of the selected country. End of explanation existing_grid_cost_ratio = 0.1 Explanation: Step 6. Grid extensions - The electrification algorithm This cell takes all the currently grid-connected points in the country, and looks at the points within a certain distance from them, to see if it is more economical to connect them to the grid, or to use one of the non-grid technologies calculated above. Once more points are connected to the grid, the process is repeated, so that new points close to those points might also be connected. This is repeated until there are no new points to connect to the grid. The only value that needs to be entered is the additional cost paid when extending the grid, to strengthen the previous sections of grid. It is given as a ratio of the original cost of that grid section End of explanation o.run_elec(grid_lcoes_rural, grid_lcoes_urban, grid_price, existing_grid_cost_ratio, max_grid_extension_dist) elec2015 = o.df[SET_ELEC_CURRENT].sum() elec2030 = o.df.loc[o.df[SET_LCOE_GRID] < 99, SET_LCOE_GRID].count() display(Markdown('### The algorithm found {} new settlements to connect to the grid'.format(elec2030 - elec2015))) Explanation: Then this code runs the analysis.
End of explanation o.results_columns(mg_hydro_calc, mg_wind_calc, mg_pv_calc, sa_pv_calc, mg_diesel_calc, sa_diesel_calc, grid_calc) population_ = 'population_' new_connections_ = 'new_connections_' capacity_ = 'capacity_' investments_ = 'investment_' rows = [] techs = [SET_LCOE_GRID,SET_LCOE_SA_DIESEL,SET_LCOE_SA_PV,SET_LCOE_MG_DIESEL, SET_LCOE_MG_PV,SET_LCOE_MG_WIND,SET_LCOE_MG_HYDRO] colors = ['#73B2FF','#EDD100','#EDA800','#1F6600','#98E600','#70A800','#1FA800'] techs_colors = dict(zip(techs, colors)) rows.extend([population_ + t for t in techs]) rows.extend([new_connections_ + t for t in techs]) rows.extend([capacity_ + t for t in techs]) rows.extend([investments_ + t for t in techs]) summary = pd.Series(index=rows) for t in techs: summary.loc[population_ + t] = o.df.loc[o.df[SET_MIN_OVERALL] == t, SET_POP_FUTURE].sum() summary.loc[new_connections_ + t] = o.df.loc[o.df[SET_MIN_OVERALL] == t, SET_NEW_CONNECTIONS].sum() summary.loc[capacity_ + t] = o.df.loc[o.df[SET_MIN_OVERALL] == t, SET_NEW_CAPACITY].sum() summary.loc[investments_ + t] = o.df.loc[o.df[SET_MIN_OVERALL] == t, SET_INVESTMENT_COST].sum() display(Markdown('### Summaries \nThese are the summaized results for full electrification of the selected country by 2030')) index = techs + ['Total'] columns = ['Population', 'New connections', 'Capacity (kW)', 'Investments (million USD)'] summary_table = pd.DataFrame(index=index, columns=columns) summary_table[columns[0]] = summary.iloc[0:7].astype(int).tolist() + [int(summary.iloc[0:7].sum())] summary_table[columns[1]] = summary.iloc[7:14].astype(int).tolist() + [int(summary.iloc[7:14].sum())] summary_table[columns[2]] = summary.iloc[14:21].astype(int).tolist() + [int(summary.iloc[14:21].sum())] summary_table[columns[3]] = [round(x/1e4)/1e2 for x in summary.iloc[21:28].astype(float).tolist()] + [round(summary.iloc[21:28].sum()/1e4)/1e2] summary_table.head(10) summary_plot=summary_table.drop(labels='Total',axis=0) fig_size = [30, 30] font_size = 15 plt.rcParams["figure.figsize"] = fig_size f, axarr = plt.subplots(2, 2) fig_size = [30, 30] font_size = 15 plt.rcParams["figure.figsize"] = fig_size sns.barplot(x=summary_plot.index.tolist(), y=columns[0], data=summary_plot, ax=axarr[0, 0], palette=colors) axarr[0, 0].set_ylabel(columns[0], fontsize=2*font_size) axarr[0, 0].tick_params(labelsize=font_size) sns.barplot(x=summary_plot.index.tolist(), y=columns[1], data=summary_plot, ax=axarr[0, 1], palette=colors) axarr[0, 1].set_ylabel(columns[1], fontsize=2*font_size) axarr[0, 1].tick_params(labelsize=font_size) sns.barplot(x=summary_plot.index.tolist(), y=columns[2], data=summary_plot, ax=axarr[1, 0], palette=colors) axarr[1, 0].set_ylabel(columns[2], fontsize=2*font_size) axarr[1, 0].tick_params(labelsize=font_size) sns.barplot(x=summary_plot.index.tolist(), y=columns[3], data=summary_plot, ax=axarr[1, 1], palette=colors) axarr[1, 1].set_ylabel(columns[3], fontsize=2*font_size) axarr[1, 1].tick_params(labelsize=font_size) Explanation: Step 7 - Results, Summaries and Visualization With all the calculations and grid-extensions complete, this cell gets the final results on which technology was chosen for each point, how much capacity needs to be installed and what it will cost. Then the summaries are generated, to show the overall requirements for the country. The only values that can be changed here are some capacity factor values for different technologies. 
End of explanation x_ave = o.df[SET_X_DEG].mean() y_ave = o.df[SET_Y_DEG].mean() lcoe_ave = o.df[SET_MIN_OVERALL_LCOE].median() map_tech = folium.Map(location=[y_ave,x_ave], zoom_start=6) map_lcoe = folium.Map(location=[y_ave,x_ave], zoom_start=6) for index, row in o.df.iterrows(): tech_color = techs_colors[(row[SET_MIN_OVERALL])] folium.CircleMarker([row[SET_Y_DEG], row[SET_X_DEG]], radius=5000,#cell_size*300*(row['LCOE']/lcoe_ave)**2, #popup='LCOE: {0:.3f} USD/kWh'.format(row['LCOE']), color=tech_color, fill_color=tech_color, ).add_to(map_tech) lcoe_colors = {0.1: '#edf8fb',0.2: '#ccece6',0.3: '#99d8c9',0.4: '#66c2a4',0.5: '#2ca25f',0.6: '#006d2c'} for index, row in o.df.iterrows(): lcoe = row[SET_MIN_OVERALL_LCOE] if lcoe > 0.6: lcoe = 0.6 lcoe_color = lcoe_colors[ceil(lcoe*10)/10] folium.CircleMarker([row[SET_Y_DEG], row[SET_X_DEG]], radius=5000,#cell_size*300*(row['LCOE']/lcoe_ave)**2, #popup='LCOE: {0:.3f} USD/kWh'.format(row['LCOE']), color=lcoe_color, fill_color=lcoe_color, ).add_to(map_lcoe) grid_lines = [] for item in json.load(open('grid_planned.json'))['features']: grid_lines.append(item['geometry']['paths'][0]) for item in json.load(open('grid_existing.json'))['features']: grid_lines.append(item['geometry']['paths'][0]) for line in grid_lines: folium.PolyLine(line, color='#656464', weight=2.5, opacity=0.9, latlon=False).add_to(map_tech) folium.PolyLine(line, color='#656464', weight=2.5, opacity=0.9, latlon=False).add_to(map_lcoe) try: os.makedirs('maps') except FileExistsError: pass map_tech_output = 'maps/map_{}{}_tech.html'.format(country, energy_per_hh_urban) map_tech.save(map_tech_output) map_lcoe_output = 'maps/map_{}{}_lcoe.html'.format(country, energy_per_hh_urban) map_lcoe.save(map_lcoe_output) display(Markdown('<a href="{}" target="_blank">Map of technology split</a>'.format(map_tech_output))) display(Markdown('Colour coding for technology split:')) display(HTML('<font color="{}">&bull;Grid</font>&nbsp;&nbsp;&nbsp;<font color="{}">&bull;SA Diesel</font>&nbsp;&nbsp;&nbsp;\ <font color="{}">&bull;SA PV</font>&nbsp;&nbsp;&nbsp;<font color="{}">&bull;MG Diesel</font>&nbsp;&nbsp;&nbsp;\ <font color="{}">&bull;MG PV</font>&nbsp;&nbsp;&nbsp;<font color="{}">&bull;Wind</font>&nbsp;&nbsp;&nbsp;\ <font color="{}">&bull;Hydro</font>'.format(techs_colors[SET_LCOE_GRID], techs_colors[SET_LCOE_SA_DIESEL], techs_colors[SET_LCOE_SA_PV], techs_colors[SET_LCOE_MG_DIESEL], techs_colors[SET_LCOE_MG_PV], techs_colors[SET_LCOE_MG_WIND], techs_colors[SET_LCOE_MG_HYDRO]))) display(Markdown('<a href="{}" target="_blank">Map of electricity cost</a>'.format(map_lcoe_output))) display(Markdown('Colour coding for LCOE, in USD/kWh')) cm.LinearColormap(['#edf8fb','#ccece6','#99d8c9','#66c2a4','#2ca25f','#006d2c'], index=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6], vmin=0, vmax=0.6) try: os.makedirs('csv_out') except FileExistsError: pass o.df.to_csv('csv_out/{}{}.csv'.format(country, energy_per_hh_urban), index=False) Explanation: Mapping of electrification results This code generates two maps: - one showing the spread of technologies - one showing the cost of electricity at each point They can be accessed using the links below. End of explanation
11,291
Given the following text description, write Python code to implement the functionality described below step by step Description: Migrating from Spark to BigQuery via Dataproc -- Part 4 Part 1 Step1: Load data into BigQuery Step2: BigQuery queries We can replace much of the initial exploratory code by SQL statements. Step3: Ooops. There are no column headers. Let's fix this. Step4: Spark analysis Replace Spark analysis by BigQuery SQL Step5: Spark SQL to BigQuery Pretty clean translation Step6: Write out report Copy the output to GCS so that we can safely delete the AI Platform Notebooks instance.
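The cells below drive BigQuery through the %%bigquery cell magic. As a side note, the same queries can also be issued from plain Python with the google-cloud-bigquery client, which is convenient once the logic moves out of a notebook. The sketch below assumes the sparktobq.kdd_cup table created further down already exists.

from google.cloud import bigquery

client = bigquery.Client()  # picks up the notebook's default project and credentials
sql = """
    SELECT protocol_type, COUNT(*) AS cnt
    FROM sparktobq.kdd_cup
    GROUP BY protocol_type
    ORDER BY cnt
"""
df = client.query(sql).to_dataframe()  # needs pandas plus a BigQuery dataframe backend installed
print(df)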
Python Code: # Catch-up cell. Run if you did not do previous notebooks of this sequence !wget http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz BUCKET='cloud-training-demos-ml' # CHANGE !gsutil cp kdd* gs://$BUCKET/ Explanation: Migrating from Spark to BigQuery via Dataproc -- Part 4 Part 1: The original Spark code, now running on Dataproc (lift-and-shift). Part 2: Replace HDFS by Google Cloud Storage. This enables job-specific-clusters. (cloud-native) Part 3: Automate everything, so that we can run in a job-specific cluster. (cloud-optimized) Part 4: Load CSV into BigQuery, use BigQuery. (modernize) Part 5: Using Cloud Functions, launch analysis every time there is a new file in the bucket. (serverless) Catch-up cell End of explanation !bq mk sparktobq BUCKET='cloud-training-demos-ml' # CHANGE !bq --location=US load --autodetect --source_format=CSV sparktobq.kdd_cup_raw gs://$BUCKET/kddcup.data_10_percent.gz Explanation: Load data into BigQuery End of explanation %%bigquery SELECT * FROM sparktobq.kdd_cup_raw LIMIT 5 Explanation: BigQuery queries We can replace much of the initial exploratory code by SQL statements. End of explanation %%bigquery CREATE OR REPLACE TABLE sparktobq.kdd_cup AS SELECT int64_field_0 AS duration, string_field_1 AS protocol_type, string_field_2 AS service, string_field_3 AS flag, int64_field_4 AS src_bytes, int64_field_5 AS dst_bytes, int64_field_6 AS wrong_fragment, int64_field_7 AS urgent, int64_field_8 AS hot, int64_field_9 AS num_failed_logins, int64_field_11 AS num_compromised, int64_field_13 AS su_attempted, int64_field_14 AS num_root, int64_field_15 AS num_file_creations, string_field_41 AS label FROM sparktobq.kdd_cup_raw %%bigquery SELECT * FROM sparktobq.kdd_cup LIMIT 5 Explanation: Ooops. There are no column headers. Let's fix this. End of explanation %%bigquery connections_by_protocol SELECT COUNT(*) AS count FROM sparktobq.kdd_cup GROUP BY protocol_type ORDER by count ASC connections_by_protocol Explanation: Spark analysis Replace Spark analysis by BigQuery SQL End of explanation %%bigquery attack_stats SELECT protocol_type, CASE label WHEN 'normal.' THEN 'no attack' ELSE 'attack' END AS state, COUNT(*) as total_freq, ROUND(AVG(src_bytes), 2) as mean_src_bytes, ROUND(AVG(dst_bytes), 2) as mean_dst_bytes, ROUND(AVG(duration), 2) as mean_duration, SUM(num_failed_logins) as total_failed_logins, SUM(num_compromised) as total_compromised, SUM(num_file_creations) as total_file_creations, SUM(su_attempted) as total_root_attempts, SUM(num_root) as total_root_acceses FROM sparktobq.kdd_cup GROUP BY protocol_type, state ORDER BY 3 DESC %matplotlib inline ax = attack_stats.plot.bar(x='protocol_type', subplots=True, figsize=(10,25)) Explanation: Spark SQL to BigQuery Pretty clean translation End of explanation import google.cloud.storage as gcs # save locally ax[0].get_figure().savefig('report.png'); connections_by_protocol.to_csv("connections_by_protocol.csv") # upload to GCS bucket = gcs.Client().get_bucket(BUCKET) for blob in bucket.list_blobs(prefix='sparktobq/'): blob.delete() for fname in ['report.png', 'connections_by_protocol.csv']: bucket.blob('sparktobq/{}'.format(fname)).upload_from_filename(fname) Explanation: Write out report Copy the output to GCS so that we can safely delete the AI Platform Notebooks instance. End of explanation
11,292
Given the following text description, write Python code to implement the functionality described below step by step Description: Minimal Plugin setup Plugins are dedicated analyzers that check the tags of one object at a time. For explanation purposes only, we make here a plugin that reports fountains; it is not looking for issues in the data. Step1: Each plugin is a class inheriting from Plugin. The class must define Step2: Each plugin should come with a unitary test. It checks that the plugin does the expected job. You must call the plugin methods with various OSM object definitions. For each one you must check whether the plugin returns an Osmose issue or not. self.check_err(plugin_return, expect_issue) asserts the return of the plugin is valid and matches the optional expect_issue. assert not plugin_return asserts the plugin returns nothing Step3: To run the analysis we need an execution context. Each country or area has an entry in the file osmose_config.py. Parameters from the configuration can be used in the plugin, e.g. self.father.config.options.get("phone_code"). Step4: The plugins are run by the analyzer "sax". The result can be fetched by Jupyter and displayed. By default it is in the Osmose-QA XML format. The CSV and GeoJSON formats are for debugging only and have partial content.
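To make the pattern concrete before diving into the code, here is a sketch of a second plugin written against the same small API that the Fountain example below uses (def_class for the issue metadata, node/way/relation returning a dict with class and text). It flags fountains that have no name tag; the item and level values are reused from the example and are illustrative only.

from modules.OsmoseTranslation import T_
from plugins.Plugin import Plugin

class UnnamedFountain(Plugin):
    # Sketch only: report fountains carrying no name tag
    def init(self, logger):
        Plugin.init(self, logger)
        self.errors[1] = self.def_class(item = 2020, level = 3, tags = ['tag', 'fix:survey'],
            title = T_('Fountain without a name'),
            detail = T_('''A fountain was found without a name tag; a survey could add one.'''))

    def node(self, data, tags):
        if tags.get("amenity") == "fountain" and "name" not in tags:
            return {"class": 1, "text": T_("No name tag on this fountain")}

    def way(self, data, tags, nds):
        return self.node(data, tags)

    def relation(self, data, tags, members):
        return self.node(data, tags)

A matching unit test would assert that a node with {"amenity": "fountain"} and no name produces a class 1 issue, while one that also carries a name produces nothing, exactly as shown for the Fountain plugin below.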
Python Code: %cd "/opt/osmose-backend/" Explanation: Minimal Plugin setup Plugins are dedicated analyzer to check the tags of one object at the time. For explanation purpose only, we just here make a plugin that report fountains, it is not looking for issue in the data. End of explanation from modules.OsmoseTranslation import T_ from plugins.Plugin import Plugin class Fountain(Plugin): def init(self, logger): Plugin.init(self, logger) # Define meta-information about the produced Osmose issues self.errors[1] = self.def_class(item = 2020, level = 3, tags = ['tag', 'fix:survey'], title = T_('Fountain here'), detail = T_( '''Nice report of Fountain.''')) def node(self, data, tags): if tags.get("amenity") == "fountain": # When we found OSM object with tag amenity=fountain, # return Osmose issue of class 1, with name tag issue subtitle return {"class": 1, "text": T_("Name: {0}", tags.get("name"))} def way(self, data, tags, nds): # Same check as node return self.node(data, tags) def relation(self, data, tags, members): # Same check as node return self.node(data, tags) Explanation: Each plugin is a class inheriting from Plugin. The class must define: * An init(self, logger) * A method for each type of object to check node, way or relation. Method can call each other to factorize the code. But the method are optional. init() Should define the class of issue. It is meta-information about the produced issues. See general documentation for details. node(), way() and relation() The tags argument is a dictionary of OSM tags. The method should return a dictionary or an array of dictionary of Osmose-QA issues. See general documentation for details. But the returned dictionary must contain: * class: the same id from the definition * subclass optional: to make issue unique if required * text optional: detail about the issue * fix optional: fix suggestion End of explanation from plugins.Plugin import TestPluginCommon class Test(TestPluginCommon): def test(self): # Instantiate and initialize the Plugin a = Fountain(None) a.init(None) # Assert the OSM object with tag amenity=fountain # returns an Osmose issue of class 1 self.check_err( a.node(None, {"amenity": "fountain"}), {'class': 1} ) # Assert the plugin return nothing for OSM object with tag natural=peak assert not a.node(None, {"natural": "peak"}) # Run the test Test().test() # Returns nothing where it is OK, else error. Explanation: Each plugin should come with unitary test. It checks that the plugin done the expect job. You must call the plugin method with various OSM object definition. For each you must check if the plugin return an Osmose issue or not. self.check_err(plugin_return, expect_issue) assert the return of the plugin is valid and match the optional expect_issue. assert not plugin_return assert the plugin return nothing End of explanation import osmose_config as config country_conf = config.config['monaco'] country_conf.init() country_conf.analyser_options Explanation: To run the analyze we need a context of execution. Each country or area have a entry in the file osmose_config.py. Parameters from the configuration can be used in the plugin, eg. self.father.config.options.get("phone_code"). End of explanation from analysers.analyser_sax import Analyser_Sax from modules.jupyter import * csv = run(country_conf, Analyser_Sax, plugin = Fountain, format = 'csv') print_csv(csv) geojson = run(country_conf, Analyser_Sax, plugin = Fountain, format = 'geojson') print_geojson(geojson) Explanation: The plugins are run by the analyzer "sax". 
The result can be fetched by Jupyter and displayed. By default it is in the Osmose-QA XML format. The CSV and GeoJSON formats are for debugging only and have partial content. End of explanation
11,293
Given the following text description, write Python code to implement the functionality described below step by step Description: Forecasting, updating datasets, and the "news" In this notebook, we describe how to use Statsmodels to compute the impacts of updated or revised datasets on out-of-sample forecasts or in-sample estimates of missing data. We follow the approach of the "Nowcasting" literature (see references at the end), by using a state space model to compute the "news" and impacts of incoming data. Note Step1: Forecasting exercises often start with a fixed set of historical data that is used for model selection and parameter estimation. Then, the fitted selected model (or models) can be used to create out-of-sample forecasts. Most of the time, this is not the end of the story. As new data comes in, you may need to evaluate your forecast errors, possibly update your models, and create updated out-of-sample forecasts. This is sometimes called a "real-time" forecasting exercise (by contrast, a pseudo real-time exercise is one in which you simulate this procedure). If all that matters is minimizing some loss function based on forecast errors (like MSE), then when new data comes in you may just want to completely redo model selection, parameter estimation and out-of-sample forecasting, using the updated datapoints. If you do this, your new forecasts will have changed for two reasons Step2: Step 1 Step3: To construct forecasts, we first estimate the parameters of the model. This returns a results object that we will be able to use produce forecasts. Step4: Creating the forecasts from the results object res is easy - you can just call the forecast method with the number of forecasts you want to construct. In this case, we'll construct four out-of-sample forecasts. Step5: For the AR(1) model, it is also easy to manually construct the forecasts. Denoting the last observed variable as $y_T$ and the $h$-step-ahead forecast as $y_{T+h|T}$, we have Step6: Step 2 Step7: To compute forecasts based on our updated dataset, we will create an updated results object res_post using the append method, to append on our new observation to the previous dataset. Note that by default, the append method does not re-estimate the parameters of the model. This is exactly what we want here, since we want to isolate the effect on the forecasts of the new information only. Step8: In this case, the forecast error is quite large - inflation was more than 10 percentage points below the AR(1) models' forecast. (This was largely because of large swings in oil prices around the global financial crisis). To analyse this in more depth, we can use Statsmodels to isolate the effect of the new information - or the "news" - on our forecasts. This means that we do not yet want to change our model or re-estimate the parameters. Instead, we will use the news method that is available in the results objects of state space models. Computing the news in Statsmodels always requires a previous results object or dataset, and an updated results object or dataset. Here we will use the original results object res_pre as the previous results and the res_post results object that we just created as the updated results. Once we have previous and updated results objects or datasets, we can compute the news by calling the news method. Here, we will call res_pre.news, and the first argument will be the updated results, res_post (however, if you have two results objects, the news method could can be called on either one). 
In addition to specifying the comparison object or dataset as the first argument, there are a variety of other arguments that are accepted. The most important specify the "impact periods" that you want to consider. These "impact periods" correspond to the forecasted periods of interest; i.e. these dates specify with periods will have forecast revisions decomposed. To specify the impact periods, you must pass two of start, end, and periods (similar to the Pandas date_range method). If your time series was a Pandas object with an associated date or period index, then you can pass dates as values for start and end, as we do below. Step9: The variable news is an object of the class NewsResults, and it contains details about the updates to the data in res_post compared to res_pre, the new information in the updated dataset, and the impact that the new information had on the forecasts in the period between start and end. One easy way to summarize the results are with the summary method. Step10: Summary output Step11: Multivariate example Step12: To show how this works, we'll imagine that it is April 14, 2017, which is the data of the March 2017 CPI release. So that we can show the effect of multiple updates at once, we'll assume that we haven't updated our data since the end of January, so that Step13: We chose this particular example because in March 2017, core CPI prices fell for the first time since 2010, and this information may be useful in forecast core PCE prices for that month. The graph below shows the CPI and PCE price data as it would have been observed on April 14th$^\dagger$. $\dagger$ This statement is not entirely true, because both the CPI and PCE price indexes can be revised to a certain extent after the fact. As a result, the series that we're pulling are not exactly like those observed on April 14, 2017. This could be fixed by pulling the archived data from ALFRED instead of FRED, but the data we have is good enough for this tutorial. Step14: To perform the exercise, we first construct and fit a DynamicFactor model. Specifically Step15: With the fitted model in hand, we now construct the news and impacts associated with observing the CPI for March 2017. The updated data is for February 2017 and part of March 2017, and we'll examining the impacts on both March and April. In the univariate example, we first created an updated results object, and then passed that to the news method. Here, we're creating the news by directly passing the updated dataset. Notice that Step16: Note Step17: Because we have multiple variables, by default the summary only shows the news from updated data along and the total impacts. From the first table, we can see that our updated dataset contains three new data points, with most of the "news" from these data coming from the very low reading in March 2017. The second table shows that these three datapoints substantially impacted the estimate for PCE in March 2017 (which was not yet observed). This estimate revised down by nearly 1.5 percentage points. The updated data also impacted the forecasts in the first out-of-sample month, April 2017. After incorporating the new data, the model's forecasts for CPI and PCE inflation in that month revised down 0.29 and 0.17 percentage point, respectively. While these tables show the "news" and the total impacts, they do not show how much of each impact was caused by each updated datapoint. To see that information, we need to look at the details tables. 
One way to see the details tables is to pass include_details=True to the summary method. To avoid repeating the tables above, however, we'll just call the summary_details method directly.
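Before moving on to the code, note that the univariate example boils down to three identities - news = y_{T+1} - phi*y_T, weights = (1, phi, phi^2, phi^3), and impact = news x weight - and the arithmetic can be sanity-checked with a few lines of plain NumPy. The numbers below are made up for illustration; in the notebook phi comes from the fitted AR(1).

import numpy as np

phi, y_T, y_T1 = 0.6, 2.0, -1.0           # illustrative values only

news = y_T1 - phi * y_T                    # forecast error of the new observation
weights = np.array([1, phi, phi**2, phi**3])
impacts = news * weights                   # revisions of the h = 1..4 forecasts

previous = phi ** np.arange(1, 5) * y_T    # y_{T+h|T}
updated = phi ** np.arange(0, 4) * y_T1    # y_{T+h|T+1}
assert np.allclose(impacts, updated - previous)
print(news, impacts)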
Python Code: %matplotlib inline import numpy as np import pandas as pd import statsmodels.api as sm import matplotlib.pyplot as plt macrodata = sm.datasets.macrodata.load_pandas().data macrodata.index = pd.period_range('1959Q1', '2009Q3', freq='Q') Explanation: Forecasting, updating datasets, and the "news" In this notebook, we describe how to use Statsmodels to compute the impacts of updated or revised datasets on out-of-sample forecasts or in-sample estimates of missing data. We follow the approach of the "Nowcasting" literature (see references at the end), by using a state space model to compute the "news" and impacts of incoming data. Note: this notebook applies to Statsmodels v0.12+. In addition, it only applies to the state space models or related classes, which are: sm.tsa.statespace.ExponentialSmoothing, sm.tsa.arima.ARIMA, sm.tsa.SARIMAX, sm.tsa.UnobservedComponents, sm.tsa.VARMAX, and sm.tsa.DynamicFactor. End of explanation # De-mean the inflation series y = macrodata['infl'] - macrodata['infl'].mean() Explanation: Forecasting exercises often start with a fixed set of historical data that is used for model selection and parameter estimation. Then, the fitted selected model (or models) can be used to create out-of-sample forecasts. Most of the time, this is not the end of the story. As new data comes in, you may need to evaluate your forecast errors, possibly update your models, and create updated out-of-sample forecasts. This is sometimes called a "real-time" forecasting exercise (by contrast, a pseudo real-time exercise is one in which you simulate this procedure). If all that matters is minimizing some loss function based on forecast errors (like MSE), then when new data comes in you may just want to completely redo model selection, parameter estimation and out-of-sample forecasting, using the updated datapoints. If you do this, your new forecasts will have changed for two reasons: You have received new data that gives you new information Your forecasting model or the estimated parameters are different In this notebook, we focus on methods for isolating the first effect. The way we do this comes from the so-called "nowcasting" literature, and in particular Bańbura, Giannone, and Reichlin (2011), Bańbura and Modugno (2014), and Bańbura et al. (2014). They describe this exercise as computing the "news", and we follow them in using this language in Statsmodels. These methods are perhaps most useful with multivariate models, since there multiple variables may update at the same time, and it is not immediately obvious what forecast change was created by what updated variable. However, they can still be useful for thinking about forecast revisions in univariate models. We will therefore start with the simpler univariate case to explain how things work, and then move to the multivariate case afterwards. Note on revisions: the framework that we are using is designed to decompose changes to forecasts from newly observed datapoints. It can also take into account revisions to previously published datapoints, but it does not decompose them separately. Instead, it only shows the aggregate effect of "revisions". Note on exog data: the framework that we are using only decomposes changes to forecasts from newly observed datapoints for modeled variables. These are the "left-hand-side" variables that in Statsmodels are given in the endog arguments. This framework does not decompose or account for changes to unmodeled "right-hand-side" variables, like those included in the exog argument. 
Simple univariate example: AR(1) We will begin with a simple autoregressive model, an AR(1): $$y_t = \phi y_{t-1} + \varepsilon_t$$ The parameter $\phi$ captures the persistence of the series We will use this model to forecast inflation. To make it simpler to describe the forecast updates in this notebook, we will work with inflation data that has been de-meaned, but it is straightforward in practice to augment the model with a mean term. End of explanation y_pre = y.iloc[:-5] y_pre.plot(figsize=(15, 3), title='Inflation'); Explanation: Step 1: fitting the model on the available dataset Here, we'll simulate an out-of-sample exercise, by constructing and fitting our model using all of the data except the last five observations. We'll assume that we haven't observed these values yet, and then in subsequent steps we'll add them back into the analysis. End of explanation mod_pre = sm.tsa.arima.ARIMA(y_pre, order=(1, 0, 0), trend='n') res_pre = mod_pre.fit() print(res_pre.summary()) Explanation: To construct forecasts, we first estimate the parameters of the model. This returns a results object that we will be able to use produce forecasts. End of explanation # Compute the forecasts forecasts_pre = res_pre.forecast(4) # Plot the last 3 years of data and the four out-of-sample forecasts y_pre.iloc[-12:].plot(figsize=(15, 3), label='Data', legend=True) forecasts_pre.plot(label='Forecast', legend=True); Explanation: Creating the forecasts from the results object res is easy - you can just call the forecast method with the number of forecasts you want to construct. In this case, we'll construct four out-of-sample forecasts. End of explanation # Get the estimated AR(1) coefficient phi_hat = res_pre.params[0] # Get the last observed value of the variable y_T = y_pre.iloc[-1] # Directly compute the forecasts at the horizons h=1,2,3,4 manual_forecasts = pd.Series([phi_hat * y_T, phi_hat**2 * y_T, phi_hat**3 * y_T, phi_hat**4 * y_T], index=forecasts_pre.index) # We'll print the two to double-check that they're the same print(pd.concat([forecasts_pre, manual_forecasts], axis=1)) Explanation: For the AR(1) model, it is also easy to manually construct the forecasts. Denoting the last observed variable as $y_T$ and the $h$-step-ahead forecast as $y_{T+h|T}$, we have: $$y_{T+h|T} = \hat \phi^h y_T$$ Where $\hat \phi$ is our estimated value for the AR(1) coefficient. From the summary output above, we can see that this is the first parameter of the model, which we can access from the params attribute of the results object. End of explanation # Get the next observation after the "pre" dataset y_update = y.iloc[-5:-4] # Print the forecast error print('Forecast error: %.2f' % (y_update.iloc[0] - forecasts_pre.iloc[0])) Explanation: Step 2: computing the "news" from a new observation Suppose that time has passed, and we have now received another observation. Our dataset is now larger, and we can evaluate our forecast error and produce updated forecasts for the subsequent quarters. 
End of explanation # Create a new results object by passing the new observations to the `append` method res_post = res_pre.append(y_update) # Since we now know the value for 2008Q3, we will only use `res_post` to # produce forecasts for 2008Q4 through 2009Q2 forecasts_post = pd.concat([y_update, res_post.forecast('2009Q2')]) print(forecasts_post) Explanation: To compute forecasts based on our updated dataset, we will create an updated results object res_post using the append method, to append on our new observation to the previous dataset. Note that by default, the append method does not re-estimate the parameters of the model. This is exactly what we want here, since we want to isolate the effect on the forecasts of the new information only. End of explanation # Compute the impact of the news on the four periods that we previously # forecasted: 2008Q3 through 2009Q2 news = res_pre.news(res_post, start='2008Q3', end='2009Q2') # Note: one alternative way to specify these impact dates is # `start='2008Q3', periods=4` Explanation: In this case, the forecast error is quite large - inflation was more than 10 percentage points below the AR(1) models' forecast. (This was largely because of large swings in oil prices around the global financial crisis). To analyse this in more depth, we can use Statsmodels to isolate the effect of the new information - or the "news" - on our forecasts. This means that we do not yet want to change our model or re-estimate the parameters. Instead, we will use the news method that is available in the results objects of state space models. Computing the news in Statsmodels always requires a previous results object or dataset, and an updated results object or dataset. Here we will use the original results object res_pre as the previous results and the res_post results object that we just created as the updated results. Once we have previous and updated results objects or datasets, we can compute the news by calling the news method. Here, we will call res_pre.news, and the first argument will be the updated results, res_post (however, if you have two results objects, the news method could can be called on either one). In addition to specifying the comparison object or dataset as the first argument, there are a variety of other arguments that are accepted. The most important specify the "impact periods" that you want to consider. These "impact periods" correspond to the forecasted periods of interest; i.e. these dates specify with periods will have forecast revisions decomposed. To specify the impact periods, you must pass two of start, end, and periods (similar to the Pandas date_range method). If your time series was a Pandas object with an associated date or period index, then you can pass dates as values for start and end, as we do below. End of explanation print(news.summary()) Explanation: The variable news is an object of the class NewsResults, and it contains details about the updates to the data in res_post compared to res_pre, the new information in the updated dataset, and the impact that the new information had on the forecasts in the period between start and end. One easy way to summarize the results are with the summary method. 
End of explanation # Print the news, computed by the `news` method print(news.news) # Manually compute the news print() print((y_update.iloc[0] - phi_hat * y_pre.iloc[-1]).round(6)) # Print the total impacts, computed by the `news` method # (Note: news.total_impacts = news.revision_impacts + news.update_impacts, but # here there are no data revisions, so total and update impacts are the same) print(news.total_impacts) # Manually compute the impacts print() print(forecasts_post - forecasts_pre) # Print the weights, computed by the `news` method print(news.weights) # Manually compute the weights print() print(np.array([1, phi_hat, phi_hat**2, phi_hat**3]).round(6)) Explanation: Summary output: the default summary for this news results object printed four tables: Summary of the model and datasets Details of the news from updated data Summary of the impacts of the new information on the forecasts between start='2008Q3' and end='2009Q2' Details of how the updated data led to the impacts on the forecasts between start='2008Q3' and end='2009Q2' These are described in more detail below. Notes: There are a number of arguments that can be passed to the summary method to control this output. Check the documentation / docstring for details. Table (4), showing details of the updates and impacts, can become quite large if the model is multivariate, there are multiple updates, or a large number of impact dates are selected. It is only shown by default for univariate models. First table: summary of the model and datasets The first table, above, shows: The type of model from which the forecasts were made. Here this is an ARIMA model, since an AR(1) is a special case of an ARIMA(p,d,q) model. The date and time at which the analysis was computed. The original sample period, which here corresponds to y_pre The endpoint of the updated sample period, which here is the last date in y_post Second table: the news from updated data This table simply shows the forecasts from the previous results for observations that were updated in the updated sample. Notes: Our updated dataset y_post did not contain any revisions to previously observed datapoints. If it had, there would be an additional table showing the previous and updated values of each such revision. Third table: summary of the impacts of the new information Columns: The third table, above, shows: The previous forecast for each of the impact dates, in the "estimate (prev)" column The impact that the new information (the "news") had on the forecasts for each of the impact dates, in the "impact of news" column The updated forecast for each of the impact dates, in the "estimate (new)" column Notes: In multivariate models, this table contains additional columns describing the relevant impacted variable for each row. Our updated dataset y_post did not contain any revisions to previously observed datapoints. If it had, there would be additional columns in this table showing the impact of those revisions on the forecasts for the impact dates. Note that estimate (new) = estimate (prev) + impact of news This table can be accessed independently using the summary_impacts method. In our example: Notice that in our example, the table shows the values that we computed earlier: The "estimate (prev)" column is identical to the forecasts from our previous model, contained in the forecasts_pre variable. 
The "estimate (new)" column is identical to our forecasts_post variable, which contains the observed value for 2008Q3 and the forecasts from the updated model for 2008Q4 - 2009Q2. Fourth table: details of updates and their impacts The fourth table, above, shows how each new observation translated into specific impacts at each impact date. Columns: The first three columns table described the relevant update (an "updated" is a new observation): The first column ("update date") shows the date of the variable that was updated. The second column ("forecast (prev)") shows the value that would have been forecasted for the update variable at the update date based on the previous results / dataset. The third column ("observed") shows the actual observed value of that updated variable / update date in the updated results / dataset. The last four columns described the impact of a given update (an impact is a changed forecast within the "impact periods"). The fourth column ("impact date") gives the date at which the given update made an impact. The fifth column ("news") shows the "news" associated with the given update (this is the same for each impact of a given update, but is just not sparsified by default) The sixth column ("weight") describes the weight that the "news" from the given update has on the impacted variable at the impact date. In general, weights will be different between each "updated variable" / "update date" / "impacted variable" / "impact date" combination. The seventh column ("impact") shows the impact that the given update had on the given "impacted variable" / "impact date". Notes: In multivariate models, this table contains additional columns to show the relevant variable that was updated and variable that was impacted for each row. Here, there is only one variable ("infl"), so those columns are suppressed to save space. By default, the updates in this table are "sparsified" with blanks, to avoid repeating the same values for "update date", "forecast (prev)", and "observed" for each row of the table. This behavior can be overridden using the sparsify argument. Note that impact = news * weight. This table can be accessed independently using the summary_details method. In our example: For the update to 2008Q3 and impact date 2008Q3, the weight is equal to 1. This is because we only have one variable, and once we have incorporated the data for 2008Q3, there is no no remaining ambiguity about the "forecast" for this date. Thus all of the "news" about this variable at 2008Q3 passes through to the "forecast" directly. Addendum: manually computing the news, weights, and impacts For this simple example with a univariate model, it is straightforward to compute all of the values shown above by hand. First, recall the formula for forecasting $y_{T+h|T} = \phi^h y_T$, and note that it follows that we also have $y_{T+h|T+1} = \phi^h y_{T+1}$. Finally, note that $y_{T|T+1} = y_T$, because if we know the value of the observations through $T+1$, we know the value of $y_T$. News: The "news" is nothing more than the forecast error associated with one of the new observations. So the news associated with observation $T+1$ is: $$n_{T+1} = y_{T+1} - y_{T+1|T} = Y_{T+1} - \phi Y_T$$ Impacts: The impact of the news is the difference between the updated and previous forecasts, $i_h \equiv y_{T+h|T+1} - y_{T+h|T}$. The previous forecasts for $h=1, \dots, 4$ are: $\begin{pmatrix} \phi y_T & \phi^2 y_T & \phi^3 y_T & \phi^4 y_T \end{pmatrix}'$. 
The updated forecasts for $h=1, \dots, 4$ are: $\begin{pmatrix} y_{T+1} & \phi y_{T+1} & \phi^2 y_{T+1} & \phi^3 y_{T+1} \end{pmatrix}'$. The impacts are therefore: $${ i_h }{h=1}^4 = \begin{pmatrix} y{T+1} - \phi y_T \ \phi (Y_{T+1} - \phi y_T) \ \phi^2 (Y_{T+1} - \phi y_T) \ \phi^3 (Y_{T+1} - \phi y_T) \end{pmatrix}$$ Weights: To compute the weights, we just need to note that it is immediate that we can rewrite the impacts in terms of the forecast errors, $n_{T+1}$. $${ i_h }{h=1}^4 = \begin{pmatrix} 1 \ \phi \ \phi^2 \ \phi^3 \end{pmatrix} n{T+1}$$ The weights are then simply $w = \begin{pmatrix} 1 \ \phi \ \phi^2 \ \phi^3 \end{pmatrix}$ We can check that this is what the news method has computed. End of explanation import pandas_datareader as pdr levels = pdr.get_data_fred(['PCEPILFE', 'CPILFESL'], start='1999', end='2019').to_period('M') infl = np.log(levels).diff().iloc[1:] * 1200 infl.columns = ['PCE', 'CPI'] # Remove two outliers and de-mean the series infl['PCE'].loc['2001-09':'2001-10'] = np.nan Explanation: Multivariate example: dynamic factor In this example, we'll consider forecasting monthly core price inflation based on the Personal Consumption Expenditures (PCE) price index and the Consumer Price Index (CPI), using a Dynamic Factor model. Both of these measures track prices in the US economy and are based on similar source data, but they have a number of definitional differences. Nonetheless, they track each other relatively well, so modeling them jointly using a single dynamic factor seems reasonable. One reason that this kind of approach can be useful is that the CPI is released earlier in the month than the PCE. One the CPI is released, therefore, we can update our dynamic factor model with that additional datapoint, and obtain an improved forecast for that month's PCE release. A more involved version of this kind of analysis is available in Knotek and Zaman (2017). We start by downloading the core CPI and PCE price index data from FRED, converting them to annualized monthly inflation rates, removing two outliers, and de-meaning each series (the dynamic factor model does not End of explanation # Previous dataset runs through 2017-02 y_pre = infl.loc[:'2017-01'].copy() const_pre = np.ones(len(y_pre)) print(y_pre.tail()) # For the updated dataset, we'll just add in the # CPI value for 2017-03 y_post = infl.loc[:'2017-03'].copy() y_post.loc['2017-03', 'PCE'] = np.nan const_post = np.ones(len(y_post)) # Notice the missing value for PCE in 2017-03 print(y_post.tail()) Explanation: To show how this works, we'll imagine that it is April 14, 2017, which is the data of the March 2017 CPI release. So that we can show the effect of multiple updates at once, we'll assume that we haven't updated our data since the end of January, so that: Our previous dataset will consist of all values for the PCE and CPI through January 2017 Our updated dataset will additionally incorporate the CPI for February and March 2017 and the PCE data for February 2017. But it will not yet the PCE (the March 2017 PCE price index was not released until May 1, 2017). End of explanation # Plot the updated dataset fig, ax = plt.subplots(figsize=(15, 3)) y_post.plot(ax=ax) ax.hlines(0, '2009', '2017-06', linewidth=1.0) ax.set_xlim('2009', '2017-06'); Explanation: We chose this particular example because in March 2017, core CPI prices fell for the first time since 2010, and this information may be useful in forecast core PCE prices for that month. 
The graph below shows the CPI and PCE price data as it would have been observed on April 14th$^\dagger$. $\dagger$ This statement is not entirely true, because both the CPI and PCE price indexes can be revised to a certain extent after the fact. As a result, the series that we're pulling are not exactly like those observed on April 14, 2017. This could be fixed by pulling the archived data from ALFRED instead of FRED, but the data we have is good enough for this tutorial. End of explanation mod_pre = sm.tsa.DynamicFactor(y_pre, exog=const_pre, k_factors=1, factor_order=6) res_pre = mod_pre.fit() print(res_pre.summary()) Explanation: To perform the exercise, we first construct and fit a DynamicFactor model. Specifically: We are using a single dynamic factor (k_factors=1) We are modeling the factor's dynamics with an AR(6) model (factor_order=6) We have included a vector of ones as an exogenous variable (exog=const_pre), because the inflation series we are working with are not mean-zero. End of explanation # Create the news results # Note const_post_plus1 = np.ones(len(y_post) + 1) news = res_pre.news(y_post, exog=const_post_plus1, start='2017-03', end='2017-04') Explanation: With the fitted model in hand, we now construct the news and impacts associated with observing the CPI for March 2017. The updated data is for February 2017 and part of March 2017, and we'll examining the impacts on both March and April. In the univariate example, we first created an updated results object, and then passed that to the news method. Here, we're creating the news by directly passing the updated dataset. Notice that: y_post contains the entire updated dataset (not just the new datapoints) We also had to pass an updated exog array. This array must cover both: The entire period associated with y_post Any additional datapoints after the end of y_post through the last impact date, specified by end Here, y_post ends in March 2017, so we needed our exog to extend one more period, to April 2017. End of explanation # Show the summary of the news results print(news.summary()) Explanation: Note: In the univariate example, above, we first constructed a new results object, and then passed that to the news method. We could have done that here too, although there is an extra step required. Since we are requesting an impact for a period beyond the end of y_post, we would still need to pass the additional value for the exog variable during that period to news: python res_post = res_pre.apply(y_post, exog=const_post) news = res_pre.news(res_post, exog=[1.], start='2017-03', end='2017-04') Now that we have computed the news, printing summary is a convenient way to see the results. End of explanation print(news.summary_details()) Explanation: Because we have multiple variables, by default the summary only shows the news from updated data along and the total impacts. From the first table, we can see that our updated dataset contains three new data points, with most of the "news" from these data coming from the very low reading in March 2017. The second table shows that these three datapoints substantially impacted the estimate for PCE in March 2017 (which was not yet observed). This estimate revised down by nearly 1.5 percentage points. The updated data also impacted the forecasts in the first out-of-sample month, April 2017. After incorporating the new data, the model's forecasts for CPI and PCE inflation in that month revised down 0.29 and 0.17 percentage point, respectively. 
While these tables show the "news" and the total impacts, they do not show how much of each impact was caused by each updated datapoint. To see that information, we need to look at the details tables. One way to see the details tables is to pass include_details=True to the summary method. To avoid repeating the tables above, however, we'll just call the summary_details method directly. End of explanation
11,294
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have a numpy array of different numpy arrays and I want to make a deep copy of the arrays. I found out the following:
Problem: import numpy as np pairs = [(2, 3), (3, 4), (4, 5)] array_of_arrays = np.array([np.arange(a*b).reshape(a,b) for (a, b) in pairs]) import copy result = copy.deepcopy(array_of_arrays)
11,295
Given the following text description, write Python code to implement the functionality described below step by step Description: Intro to ConPy Step1: Loading the simulation data If the code in the above cell ran without any errors, we're good to go. Let's load the stimulation data. It is stored as an MNE-Python Epochs file. To load it, you must use the mne.read_epochs function. To see how it works, you need to take a look at the documentation for this function. You can call up the documentation of any function by appending a ? to the function name, like so Step2: The documentation shows us that mne.read_epochs takes one required parameter (fname) and three optional parameters (proj, preload and verbose). You can recognize optional parameters by the fact that they have a default value assigned to them. In this exercise, you can always leave the optional parameters as they are, unless explicitly instructed to change them. So, the only parameter of interest right now is the fname parameter, which must be a string containing the path and filename of the simulated data, namely Step3: "Epochs" are snippets of MEG sensor data. In this simulation, all sensors are gradiometers. There are two epochs, appoximately 10 second in length Step4: In the epochs plot, you can use the scrolling function of your mouse/trackpad to browse through the channels. The vertical dashed line indicates where one epoch ends and the next one begins. Question 1 Step5: If you were to name one frequency at which the sources are sending out a signal, what would that frequency be? Fill in the answer below. We'll use it in the upcoming tasks Step6: Question 2 Step7: Take a look at the topomap corresponding to the frequency band that contains the frequency at which the sources are sending out their signal. How many sources do you think I simulated? Fill in your answer below Step8: Question 3 Step9: If you examine the CSD matrix closely, you can already spot which sources are coherent with each other. Sssshhh! we'll look at it in more detail later. For now, let's compute the DICS beamformer! The next functions to call are mne.beamformer.make_dics and mne.beamformer.apply_dics_csd. Lets examine them more closely. mne.beamformer.make_dics This function will create the DICS beamformer weights. These weights are spatial filters Step10: For this exercise, we use a very sparse source grid (the yellow dots in the plot). This grid is enough for our purposes and our computations will run quickly. For real studies, I recommend a much denser grid. Another thing you'll need for the DICS beamformer is an Info object. This object contains information about the location of the MEG sensors and so forth. The epochs object provides one as epochs.info. Try running print(epochs.info) to check it out. Now you should have everything you need to create the DICS beamformer weights using the mne.beamformer.make_dics function. Store the result in the variable filters Step11: mne.beamformer.apply_dics_csd With the DICS filters computed, making a cortical power map is straightforward. The mne.beamformer.apply_dics_csd will do it for you. The only new thing here is that this function will return two things (up to now, all our functions only returned one thing!). Don't panick. The Python syntax for dealing with it is like this Step12: Use the mouse/trackpad to rotate the brain around. Can you find the sources on the cortex? Even though I've simulated them as dipole sources, they show more as "blobs" in the power map. 
This is called spatial leaking and is due to various inaccuracies and limitations of the DICS beamformer filters. Question 4 Step13: You may need to rotate the brain around to find the seed point. It should be drawn as a white sphere. Up to now, we've been using all data. However, we know our sources are only coherent during the second part. Executing the cell below will split the data into a "rest" and "task" part. Step14: To estimate connectivity for just the epochs_task part, we need to compute the CSD matrix on only this data. You've computed a CSD matrix before, so rince and repeat Step15: Now you are ready to compute one-to-all connectivity using DICS. It will take two lines of Python code. First, you'll need to use the conpy.one_to_all_connectivity_pairs function to compute the list of connectivity pairs. Then, you can use the conpy.dics_connectivity function to perform the connectivity estimation. Check the documentation for both functions (remember Step16: To visualize the connectivity result, we can create a cortical map, where the value at each source point is the coherence between the source point and the seed region. The con_task object defines a .make_stc() method that will do just that. Take a look at its documentation and store the map in the coherence_task variable Step17: Which source points seem to be in coherence with the seed point? Double-click on the text-cell below to edit it and write down your answer. Double-click here to edit this text cell. Pressing CTRL+Enter will transform it back into formatted text. Congratulations! You have now answered the original 4 questions. If you have some time left, you may continue below to explore all-to-all connectivity. If you examine the coherence map, you'll find that the regions surrounding the seed point are coherent with the seed point. This is not because there are active coherent sources there, but because of the spatial leakage. You will always find this coherence. Make a one-to-all coherence map like you did above, but this time for the epochs_rest data (in which none of the sources are coherent). Store the connectivity in the con_rest variable and the coherence map in the coherence_rest variable Step18: See? You'll find that also when no coherent sources are active, there is an area of coherence surrounding the seed region. This will be a major problem when attempting to estimate all-to-all connectivity. One way to deal with the spatial leakage problem is to make a contrast between the "task" and "rest" segments. Since the coherence due to spatial leakage is the same for both segments, it should cancel out. Connectivity objects, like con_task and con_rest have support for common math operators like +, -, * and /. Creating a constract between the two object is therefore as simple as con_task - con_rest. Im the cell below, make a new coherence map of the contrast and store it in the coherence_contrast variable Step19: If all went well, you'll see that the coherence due to spatial leakage has disappeared from the coherence map. All-to-all connectivity Use the conpy.all_to_all_connectivity_pairs function to compute the connectivity pairs in an all-to-all manner. Then, use the conpy.dics_connectivity function like before to create a connectivity object for the "task" and a connectivity object for the "rest" data segments. Store them in the all_to_all_task and all_to_all_rest variables. You'll notice that computing all-to-all connectivity takes a while... 
Finally, create the contrast between them and store it in the all_to_all_contrast variable. Step20: How to visualize this all-to-all connectivity? This is a question worth pondering a bit. But for this exercise, we can get away with producing a coherence map like we did with the one-to-all connectivity. The value of the coherence map is, for each source point, the sum of the coherence of all connections from and to the source point. Executing the cell below will plot this coherence map. Can you spot the connectivity between the sources?
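The exercise scaffold below leaves most cells as "write your code here" placeholders. As a compact, hedged sketch of how the source-localization half (Questions 1-3) can be answered with the MNE-Python functions named above: the file names come from the exercise itself, the 12 Hz value is only a placeholder for whatever peak your own PSD plots reveal, and all optional arguments are left at their defaults.

import mne

epochs = mne.read_epochs('data/simulated-data-epo.fif')

# Question 1: find the frequency at which the simulated sources oscillate
epochs.plot_psd()                       # read the peak off this plot
source_frequency = 12.0                 # Hz -- placeholder, substitute your own answer

# Question 2: which sensors (and roughly which cortical regions) carry that frequency
epochs.plot_psd_topomap()

# Question 3: localize the sources with a DICS beamformer
csd = mne.time_frequency.csd_morlet(epochs, frequencies=[source_frequency])
fwd = mne.read_forward_solution('data/simulated-data-fwd.fif')
filters = mne.beamformer.make_dics(epochs.info, fwd, csd)
power_map, freqs = mne.beamformer.apply_dics_csd(csd, filters)
power_map.plot(hemi='both')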
Python Code: # Don't worry about warnings in this exercise, as they can be distracting. import warnings warnings.simplefilter('ignore') # Import the required Python modules import mne import conpy import surfer # Import and configure the 3D graphics backend from mayavi import mlab mlab.init_notebook('png') # Tell MNE-Python to be quiet. The normal barrage of information will only distract us. Only display errors. mne.set_log_level('ERROR') # Configure the plotting interface to display plots in their own windows. # The % sign makes it a "magic" command: a command ment for the notebook environment, rather than a command for Python. %matplotlib notebook # Tell MNE-Python and PySurfer where to find the brain model import os os.environ['SUBJECTS_DIR'] = 'data/subjects' # Let's test plotting a brain (this is handled by the PySurfer package) surfer.Brain('sample', hemi='both', surf='pial') Explanation: Intro to ConPy: functional connectivity estimation of MEG signals Welcome to this introductory tutorial for the ConPy package. Together with MNE-Python, we can use it to perform functional connectivity estimation of MEG data. This tutorial was written to be used as an exercise during my lectures. In lieu of my lecture, you can read this paper to get the theoretical background you need to understand the concepts we will be dealing with in this exercise. Ok, let's get started! I have similated some data for you. It's already stored on the virtual server you are talking to right now. In this simulation, a placed a couple of dipole sources on the cortex, sending out a signal in a narrow frequency band. During the first part of the recording, these sources are incoherent with each other. During the second part, some of the sources become coherent with each other. Your task is to find out: At which frequency are the sources sending out a signal? How many dipole sources did I place on the cortex? Where are the sources located on the cortex? Which sources are coherent in the second part of the recording? We will use MNE-Python and ConPy to aswer the above questions. Loading the required Python modules and configuring the environment Executing the code cell below will load the required Python modules and configure some things. If all goes well, you'll be rewarded with a plot of a brain: End of explanation mne.read_epochs? Explanation: Loading the simulation data If the code in the above cell ran without any errors, we're good to go. Let's load the stimulation data. It is stored as an MNE-Python Epochs file. To load it, you must use the mne.read_epochs function. To see how it works, you need to take a look at the documentation for this function. You can call up the documentation of any function by appending a ? to the function name, like so: End of explanation # Write your Python code here # If your code in the above cell is correct, executing this cell print some information about the data print(epochs) Explanation: The documentation shows us that mne.read_epochs takes one required parameter (fname) and three optional parameters (proj, preload and verbose). You can recognize optional parameters by the fact that they have a default value assigned to them. In this exercise, you can always leave the optional parameters as they are, unless explicitly instructed to change them. So, the only parameter of interest right now is the fname parameter, which must be a string containing the path and filename of the simulated data, namely: 'data/simulated-data-epo.fif'. 
Go ahead and call the mne.read_epochs function to load the stimulated data. Store the result in a variable called epochs: End of explanation # The semicolon at the end prevents the image from being included in this notebook epochs.plot(); Explanation: "Epochs" are snippets of MEG sensor data. In this simulation, all sensors are gradiometers. There are two epochs, appoximately 10 second in length: one epoch corresponding to the (simulated) subject "at rest" and one epoch corresponding to the subject performing some task. Most objects we'll be working with today have a plot method. For example, the cell below will plot the epochs object: End of explanation # Write here the code to plot the PSD of the MEG signal Explanation: In the epochs plot, you can use the scrolling function of your mouse/trackpad to browse through the channels. The vertical dashed line indicates where one epoch ends and the next one begins. Question 1: At which frequency are the sources sending out a signal? To find out, let's plot the power spectal density (PSD) of the signal. The PSD is computed by applying a Fourier transform to the data of each MEG sensor. We can use the plot_psd method of the epochs object to show it to us. By default, it will show us the average PSD across the sensors, which is good enough for our purposes. Check the documentation of the epochs.plot_psd method to see what parameters are required (remember: you are free to ignore the optional parameters). End of explanation # Fill in the source frequency, in Hertz source_frequency = ### Explanation: If you were to name one frequency at which the sources are sending out a signal, what would that frequency be? Fill in the answer below. We'll use it in the upcoming tasks: End of explanation # Write here the code to plot some PSD topomaps Explanation: Question 2: How many sources did I simulate? Ok, so now we know the frequency at which to look for sources. How many dipole sources did I use in the simulation? To find out, we must look at which sensors have the most activity at the frequency of the sources. The plot_psd_topomap method of the epochs object can do that for us. If you call it with the default parameters, it will plot so called "topomaps" for the following frequency bands: |Name | Frequency band |------|--------------- |Delta | 0-4 Hz |Theta | 4-8 Hz |Alpha | 8-12 Hz |Beta | 12-30 Hz |Gamma | 30-45 Hz Try it now: take a look at the documentation for the epochs.plot_psd_topomap method and plot some topomaps: End of explanation number_of_sources = ### Explanation: Take a look at the topomap corresponding to the frequency band that contains the frequency at which the sources are sending out their signal. How many sources do you think I simulated? Fill in your answer below: End of explanation # Write here the code to construct a CSD matrix # If the code in the cell above is correct, executing this cell will plot the CSD matrix csd.plot()[0] Explanation: Question 3: Where are the sources located on the cortex? Looking at the topomaps will give you a rough location of the sources, but let's be more exact. We will now use a DICS beamformer to localize the sources on the cortex. To construct a DICS beamformer, we must first estimate the cross-spectral density (CSD) between all sensors. You can use the mne.time_frequency.csd_morlet function to do so. Go check its documentation. You will find that one of the parameters is a list of frequencies at which to compute the CSD. 
Use a list containing a single frequency: the answer to Question 1 that you stored earlier in the source_frequency variable. In Python code, the list can be written like this: [source_frequency]. Store the result of mne.time_frequency.csd_morlet in a variable called csd. End of explanation # Write your code to read the forward solution here # If the code in the above cell is correct, executing this cell will plot the source grid fwd['src'].plot(trans='data/simulated-data-trans.fif') Explanation: If you examine the CSD matrix closely, you can already spot which sources are coherent with each other. Sssshhh! we'll look at it in more detail later. For now, let's compute the DICS beamformer! The next functions to call are mne.beamformer.make_dics and mne.beamformer.apply_dics_csd. Lets examine them more closely. mne.beamformer.make_dics This function will create the DICS beamformer weights. These weights are spatial filters: each filter will only pass activity for one specific location on the cortex, at one specific frequency(-band). In order to do this, we'll need a leadfield: a model that simulates how signals on the cortex manifest as magnetic fields as measured by the sensors. MNE-Python calls them "forward solutions". Luckily we have one lying around: the 'data/simulated-data-fwd.fif' file contains one. You can load it with the mne.read_forward_solution function. Take a look at the documentation for that function and load the forward solution in the variable fwd: End of explanation # Write your code to compute the DICS filters here # If the code in the above cell is correct, executing this cell will print some information about the filters print('Filters have been computed for %d points on the cortex at %d frequency.' % (filters['weights'].shape[1], filters['weights'].shape[0])) print('At each point, there are %d source dipoles (XYZ)' % filters['n_orient']) Explanation: For this exercise, we use a very sparse source grid (the yellow dots in the plot). This grid is enough for our purposes and our computations will run quickly. For real studies, I recommend a much denser grid. Another thing you'll need for the DICS beamformer is an Info object. This object contains information about the location of the MEG sensors and so forth. The epochs object provides one as epochs.info. Try running print(epochs.info) to check it out. Now you should have everything you need to create the DICS beamformer weights using the mne.beamformer.make_dics function. Store the result in the variable filters: End of explanation # Write your code to compute the power map here # If the code in the above cell is correct, executing the cell will plot the power map power_map.plot(hemi='both', smoothing_steps=20); Explanation: mne.beamformer.apply_dics_csd With the DICS filters computed, making a cortical power map is straightforward. The mne.beamformer.apply_dics_csd will do it for you. The only new thing here is that this function will return two things (up to now, all our functions only returned one thing!). Don't panick. The Python syntax for dealing with it is like this: python power_map, frequencies = mne.beamformer.apply_dics_csd(...) See? It returns both the power_map that we'll visualize in a minute, and a list of frequencies for which the power map is defined. 
Go read the documentation for mne.beamformer.apply_dics_csd and make the powermap: End of explanation # Write your code to find the seed point here # If the code in the above cell is correct, executing this cell will plot the seed point on the power map brain = power_map.plot(hemi='both', smoothing_steps=20) # Plot power map # We need to find out on which hemisphere the seed point lies lh_verts, rh_verts = power_map.vertices if seed_point < len(lh_verts): # Seed point is on the left hemisphere brain.add_foci(lh_verts[seed_point], coords_as_verts=True, hemi='lh') else: # Seed point is on the right hemisphere brain.add_foci(rh_verts[seed_point - len(lh_verts)], coords_as_verts=True, hemi='rh') Explanation: Use the mouse/trackpad to rotate the brain around. Can you find the sources on the cortex? Even though I've simulated them as dipole sources, they show more as "blobs" in the power map. This is called spatial leaking and is due to various inaccuracies and limitations of the DICS beamformer filters. Question 4: Which sources are coherent in the second part of the recording? The simulated recording consists of two parts (=epochs): during the first epoch, our simulated subject is at rest and the sources are not coherent. During the second epoch, our simulated subject is performing a task that causes some of the sources to become coherent. It's finally time for some connectivity estimation! We'll first tackle one-to-all connectivity, as it is much easier to visualize and the results are less messy. Afterward, we'll move on to all-to-all connectivity. One-to-all connectivity estimation For this, we must first define a "seed region": one of the source points for which we will estimate coherence with all other source points. A common choice is to use the power map to find the source point with the most power. To find this point, you can use the .argmax() method of the power_map.data object. This is a method that all data arrays have. It will return the index of the maximum element in the array, which in the case of our power_map.data array will be the source point with the maximum power. Go find your seed point and store it in the variable seed_point: End of explanation # Splitting the data is not hard to do. epochs_rest = epochs['rest'] epochs_task = epochs['task'] Explanation: You may need to rotate the brain around to find the seed point. It should be drawn as a white sphere. Up to now, we've been using all data. However, we know our sources are only coherent during the second part. Executing the cell below will split the data into a "rest" and "task" part. End of explanation # Write your code here to compute the CSD on the epochs_task data # If the code in the above cell is correct, executing this cell will plot the CSD matrix csd_task.plot()[0] Explanation: To estimate connectivity for just the epochs_task part, we need to compute the CSD matrix on only this data. You've computed a CSD matrix before, so rince and repeat: compute the CSD on just the epochs_task data and store it in the csd_task variable: End of explanation # Write your code here to compute one-to-all connectivity for the "task" data # If the code in the above cell is correct, executing this cell will print some information about the connectivity print(con_task) Explanation: Now you are ready to compute one-to-all connectivity using DICS. It will take two lines of Python code. First, you'll need to use the conpy.one_to_all_connectivity_pairs function to compute the list of connectivity pairs. 
Then, you can use the conpy.dics_connectivity function to perform the connectivity estimation. Check the documentation for both functions (remember: you can leave all optional parameters as they are) and store your connectivity result in the con_task variable: End of explanation # Write your code here to compute the coherence map for the epochs_task data # If the code in the above cell is correct, executing this cell will plot the coherence map brain = coherence_task.plot(hemi='both', smoothing_steps=20) lh_verts, rh_verts = coherence_task.vertices if seed_point < len(lh_verts): # Seed point is on the left hemisphere brain.add_foci(lh_verts[seed_point], coords_as_verts=True, hemi='lh') else: # Seed point is on the right hemisphere brain.add_foci(rh_verts[seed_point - len(lh_verts)], coords_as_verts=True, hemi='rh') Explanation: To visualize the connectivity result, we can create a cortical map, where the value at each source point is the coherence between the source point and the seed region. The con_task object defines a .make_stc() method that will do just that. Take a look at its documentation and store the map in the coherence_task variable: End of explanation # Write your code here to compute connectivity for the epochs_rest data and make a coherence map # If the code in the above cell is correct, executing this cell will plot the coherence map brain = coherence_rest.plot(hemi='both', smoothing_steps=20) lh_verts, rh_verts = coherence_rest.vertices if seed_point < len(lh_verts): # Seed point is on the left hemisphere brain.add_foci(lh_verts[seed_point], coords_as_verts=True, hemi='lh') else: # Seed point is on the right hemisphere brain.add_foci(rh_verts[seed_point - len(lh_verts)], coords_as_verts=True, hemi='rh') Explanation: Which source points seem to be in coherence with the seed point? Double-click on the text-cell below to edit it and write down your answer. Double-click here to edit this text cell. Pressing CTRL+Enter will transform it back into formatted text. Congratulations! You have now answered the original 4 questions. If you have some time left, you may continue below to explore all-to-all connectivity. If you examine the coherence map, you'll find that the regions surrounding the seed point are coherent with the seed point. This is not because there are active coherent sources there, but because of the spatial leakage. You will always find this coherence. Make a one-to-all coherence map like you did above, but this time for the epochs_rest data (in which none of the sources are coherent). Store the connectivity in the con_rest variable and the coherence map in the coherence_rest variable: End of explanation # Write your code here to compute a contrast between the "task" and "rest" connectivity and make a coherence map # If the code in the above cell is correct, executing this cell will plot the coherence map brain = coherence_contrast.plot(hemi='both', smoothing_steps=20) lh_verts, rh_verts = coherence_contrast.vertices if seed_point < len(lh_verts): # Seed point is on the left hemisphere brain.add_foci(lh_verts[seed_point], coords_as_verts=True, hemi='lh') else: # Seed point is on the right hemisphere brain.add_foci(rh_verts[seed_point - len(lh_verts)], coords_as_verts=True, hemi='rh') Explanation: See? You'll find that also when no coherent sources are active, there is an area of coherence surrounding the seed region. This will be a major problem when attempting to estimate all-to-all connectivity. 
One way to deal with the spatial leakage problem is to make a contrast between the "task" and "rest" segments. Since the coherence due to spatial leakage is the same for both segments, it should cancel out. Connectivity objects, like con_task and con_rest, have support for common math operators like +, -, * and /. Creating a contrast between the two objects is therefore as simple as con_task - con_rest. In the cell below, make a new coherence map of the contrast and store it in the coherence_contrast variable: End of explanation # Write your code to produce all-to-all connectivity estimates for the "rest" and "task" segments # and the contrast between them. # If the code in the above cell is correct, executing this cell will print some information about the connectivity print(all_to_all_contrast) Explanation: If all went well, you'll see that the coherence due to spatial leakage has disappeared from the coherence map. All-to-all connectivity Use the conpy.all_to_all_connectivity_pairs function to compute the connectivity pairs in an all-to-all manner. Then, use the conpy.dics_connectivity function like before to create a connectivity object for the "task" and a connectivity object for the "rest" data segments. Store them in the all_to_all_task and all_to_all_rest variables. You'll notice that computing all-to-all connectivity takes a while... Finally, create the contrast between them and store it in the all_to_all_contrast variable. End of explanation # This cell will plot the coherence map all_to_all_coherence = all_to_all_contrast.make_stc() all_to_all_coherence.plot(hemi='both', smoothing_steps=20); Explanation: How to visualize this all-to-all connectivity? This is a question worth pondering a bit. But for this exercise, we can get away with producing a coherence map like we did with the one-to-all connectivity. The value of the coherence map is, for each source point, the sum of the coherence of all connections from and to the source point. Executing the cell below will plot this coherence map. Can you spot the connectivity between the sources? End of explanation
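To round off this record, here is a hedged sketch of the connectivity half of the exercise (Question 4 and the all-to-all extension). It reuses the variables built up in the cells above (epochs_task, epochs_rest, fwd, power_map, source_frequency) and calls the conpy functions exactly as they are named in the text; the argument order and keywords of the conpy calls are assumptions, so check conpy's own documentation before relying on them.

import mne
import conpy

# CSD matrices for the two halves of the recording
csd_task = mne.time_frequency.csd_morlet(epochs_task, frequencies=[source_frequency])
csd_rest = mne.time_frequency.csd_morlet(epochs_rest, frequencies=[source_frequency])

# One-to-all: coherence between the seed point and every other source point
seed_point = power_map.data.argmax()
pairs = conpy.one_to_all_connectivity_pairs(fwd, seed_point)   # signature assumed
con_task = conpy.dics_connectivity(pairs, fwd, csd_task)       # signature assumed
con_rest = conpy.dics_connectivity(pairs, fwd, csd_rest)
coherence_contrast = (con_task - con_rest).make_stc()          # leakage cancels in the contrast
coherence_contrast.plot(hemi='both')

# All-to-all is the same idea with far more pairs, hence much slower
all_pairs = conpy.all_to_all_connectivity_pairs(fwd)
all_to_all_contrast = (conpy.dics_connectivity(all_pairs, fwd, csd_task)
                       - conpy.dics_connectivity(all_pairs, fwd, csd_rest))
all_to_all_contrast.make_stc().plot(hemi='both')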
11,296
Given the following text description, write Python code to implement the functionality described below step by step Description: Run mcode on the adjacency list for your toy graph, with vwp=0.8 Load the Krogan et al. network edge-list data as a Pandas data frame Step1: Make an igraph graph and print its summary Step2: Run mcode on your graph with vwp=0.1
Python Code: edge_list = Explanation: Run mcode on the adjacency list for your toy graph, with vwp=0.8 Load the Krogan et al. network edge-list data as a Pandas data frame End of explanation krogan_graph = krogan_graph.summary() Explanation: Make an igraph graph and print its summary End of explanation res = Explanation: Run mcode on your graph with vwp=0.1 End of explanation
11,297
Given the following text description, write Python code to implement the functionality described below step by step Description: Unsupervised Anomaly Detection Anomaly detection detects data points in data that does not fit well with the rest of data. In this notebook we demonstrate how to do anomaly detection for 1-D data using Chronos's dbscan detector, autoencoder detector and threshold detector. For demonstration, we use the publicly available cluster trace data cluster-trace-v2018 of Alibaba Open Cluster Trace Program. You can find the dataset introduction <a href="https Step1: Below are some example records of the data Step2: Data pre-processing Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different dataset. For the machine_usage data, the pre-processing contains 2 parts Step3: Anomaly Detection by DBScan Detector DBScanDetector uses DBSCAN clustering for anomaly detection. The DBSCAN algorithm tries to cluster the points and label the points that do not belong to any clusters as -1. It thus detects outliers detection in the input time series. DBScanDetector assigns anomaly score 1 to anomaly samples, and 0 to normal samples. Step4: Draw anomalies in line chart. Step5: Anomaly Detection by AutoEncoder Detector AEDetector is unsupervised anomaly detector. It builds an autoencoder network, try to fit the model to the input data, and calcuates the reconstruction error. The samples with larger reconstruction errors are more likely the anomalies. Step6: Draw anomalies in line chart. Step7: Anomaly Detection by Threshold Detector ThresholdDetector is a simple anomaly detector that detectes anomalies based on threshold. The target value for anomaly testing can be either 1) the sample value itself or 2) the difference between the forecasted value and the actual value. In this notebook we demostrate the first type. The thresold can be set by user or esitmated from the train data accoring to anomaly ratio and statistical distributions. Step8: Draw anomalies in line chart.
Python Code: import os import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline df_1932 = pd.read_csv("m_1932.csv", header=None, usecols=[1,2,3], names=["time_step", "cpu_usage","mem_usage"]) Explanation: Unsupervised Anomaly Detection Anomaly detection detects data points in data that does not fit well with the rest of data. In this notebook we demonstrate how to do anomaly detection for 1-D data using Chronos's dbscan detector, autoencoder detector and threshold detector. For demonstration, we use the publicly available cluster trace data cluster-trace-v2018 of Alibaba Open Cluster Trace Program. You can find the dataset introduction <a href="https://github.com/alibaba/clusterdata/blob/master/cluster-trace-v2018/trace_2018.md" target="_blank">here</a>. In particular, we use machine usage data to demonstrate anomaly detection, you can download the separate data file directly with <a href="http://clusterdata2018pubcn.oss-cn-beijing.aliyuncs.com/machine_usage.tar.gz" target="_blank">machine_usage</a>. Download raw dataset and load into dataframe Now we download the dataset and load it into a pandas dataframe.Steps are as below: * First, download the raw data <a href="http://clusterdata2018pubcn.oss-cn-beijing.aliyuncs.com/machine_usage.tar.gz" target="_blank">machine_usage</a>. Or run the script get_data.sh to download the raw data.It will download the resource usage of each machine from m_1932 to m_2085. * Second, run grep m_1932 machine_usage.csv &gt; m_1932.csv to extract records of machine 1932. Or run extract_data.sh.We use machine 1932 as an example in this notebook.You can choose any machines in the similar way. * Finally, use pandas to load m_1932.csv into a dataframe as shown below. End of explanation df_1932.head() df_1932.sort_values(by="time_step", inplace=True) df_1932.reset_index(inplace=True) df_1932.plot(y="cpu_usage", x="time_step", figsize=(16,6),title="cpu_usage of machine 1932") Explanation: Below are some example records of the data End of explanation df_1932["time_step"] = pd.to_datetime(df_1932["time_step"], unit='s', origin=pd.Timestamp('2018-01-01')) from bigdl.chronos.data import TSDataset tsdata = TSDataset.from_pandas(df_1932, dt_col="time_step", target_col="cpu_usage") df = tsdata.resample(interval='1min', merge_mode="mean")\ .impute(mode="last")\ .to_pandas() df['cpu_usage'].plot(figsize=(16,6)) Explanation: Data pre-processing Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different dataset. For the machine_usage data, the pre-processing contains 2 parts: 1. Convert the time step in seconds to timestamp starting from 2018-01-01 2. Generate a built-in TSDataset to resample the average of cpu_usage in minutes and impute missing data End of explanation from bigdl.chronos.detector.anomaly import DBScanDetector ad = DBScanDetector(eps=0.1, min_samples=6) ad.fit(df['cpu_usage'].to_numpy()) anomaly_scores = ad.score() anomaly_indexes = ad.anomaly_indexes() print("The anomaly scores are:", anomaly_scores) print("The anomaly indexes are:", anomaly_indexes) Explanation: Anomaly Detection by DBScan Detector DBScanDetector uses DBSCAN clustering for anomaly detection. The DBSCAN algorithm tries to cluster the points and label the points that do not belong to any clusters as -1. It thus detects outliers detection in the input time series. DBScanDetector assigns anomaly score 1 to anomaly samples, and 0 to normal samples. 
End of explanation plt.figure(figsize=(16,6)) plt.plot(df.time_step, df.cpu_usage, label='cpu_usage') plt.scatter(df.time_step[anomaly_indexes], df.cpu_usage[anomaly_indexes], color='red', label='anomalies value') plt.title('the anomalies value') plt.xlabel('datetime') plt.legend(loc='upper left') plt.show() Explanation: Draw anomalies in line chart. End of explanation from bigdl.chronos.detector.anomaly import AEDetector ad = AEDetector(roll_len=10, ratio=0.05) ad.fit(df['cpu_usage'].to_numpy()) anomaly_scores = ad.score() anomaly_indexes = ad.anomaly_indexes() print("The anomaly scores are:", anomaly_scores) print("The anomaly indexes are:", anomaly_indexes) Explanation: Anomaly Detection by AutoEncoder Detector AEDetector is unsupervised anomaly detector. It builds an autoencoder network, try to fit the model to the input data, and calcuates the reconstruction error. The samples with larger reconstruction errors are more likely the anomalies. End of explanation plt.figure(figsize=(16,6)) plt.plot(df.time_step, df.cpu_usage, label='cpu_usage') plt.scatter(df.time_step[anomaly_indexes], df.cpu_usage[anomaly_indexes], color='red', label='anomalies value') plt.title('the anomalies value') plt.xlabel('datetime') plt.legend(loc='upper left') plt.show() Explanation: Draw anomalies in line chart. End of explanation from bigdl.chronos.detector.anomaly import ThresholdDetector thd=ThresholdDetector() thd.set_params(threshold=(20, 80)) thd.fit(df['cpu_usage'].to_numpy()) anomaly_scores = thd.score() anomaly_indexes = thd.anomaly_indexes() print("The anomaly scores are:", anomaly_scores) print("The anomaly indexes are:", anomaly_indexes) Explanation: Anomaly Detection by Threshold Detector ThresholdDetector is a simple anomaly detector that detectes anomalies based on threshold. The target value for anomaly testing can be either 1) the sample value itself or 2) the difference between the forecasted value and the actual value. In this notebook we demostrate the first type. The thresold can be set by user or esitmated from the train data accoring to anomaly ratio and statistical distributions. End of explanation plt.figure(figsize=(16,6)) plt.plot(df.time_step, df.cpu_usage, label='cpu_usage') plt.scatter(df.time_step[anomaly_indexes], df.cpu_usage[anomaly_indexes], color='red', label='anomalies value') plt.title('the anomalies value') plt.xlabel('datetime') plt.legend(loc='upper left') plt.show() Explanation: Draw anomalies in line chart. End of explanation
11,298
Given the following text description, write Python code to implement the functionality described below step by step Description: SpectralAnalysis showcase I've written a library for regular plotting and curve fitting that is much, much nicer than the FittingRoutines module. In this notebook I'll be showing off (I guess in a README-esque way) of how all the functions should be called as I've written them. The first notable thing is a class Model, which will house the functions used to fit our sets of data. The advantage of doing it this way is that necessary variables will be detected, and reported back to the user to remind them what is required. To create a Model instance, we give it a string name Step1: It'll then remind you to set a function, because as it stands it's only nominally a function. We can either define our own function within the notebook, or use one of the preset functions I've written. To start off, let's make it a straight line with the equation $$ y = mx + c $$ Step2: The instance will automagically detect what variables are required for the function, providing the function you gave it is a one-liner (only has return) just because of the way python works. In this case, we need the gradient and offset values which we can set by sending a dictionary along its way Step3: That's all we need to do to define a Model instance! Let's test it out by creating some fake linear data. Step4: Now I've also written a function that will package some xy data in the correct format for curve fitting. This is done by using the FormatData() function. Step5: df is now a pandas dataframe, where x is stored as the index and y is a column called "Y Range", just for internal consistency. Now we're ready to fit the data by calling the function FitModel. The input parameters for this is a reference to the dataframe holding the target data, as well as a reference to the Model instance. What it returns are Step6: And it worked! Now we can call the plotting interface to show us the results. For this, I've written a function called PlotData(). The dataframe input is formatted such that x is the index, and will work with up to 11 columns. A keyword Interface will also let you choose between using matplotlib (good for static plots, saves space and time) and bokeh (interactive and pretty, slow). Another interesting argument is Labels, which is a dictionary for specifying the axes labels and whatnot. If it is left unspecified, it will just plot without labels. Step7: A more advanced function, such as a Gaussian is shown below. The procedure I went through is exactly the same as above. The one thing I did differently was include boundary conditions for the curve fitting. This is done by calling the Model.SetBounds() method, where the input is shown below. Another thing demonstrated below is the use of the Spectrum class. I've written it to store and reference data, but admittedly haven't gone very far with it beyond storing the spectra as an attribute. Step8: Custom functions With the Model class, it is possible to write your own objective function, i.e. with convolutions, combinations etc. Let's start with the convolution of a Gaussian and a Boltzmann. I want to fit only the temperature and the total amplitude, but none of the other stuff. This is something I did for the $T_1$ methyl data, where the impulsive reservoir is fixed while the statistical reservoir grows with excitation energy. 
For this, I'm going to use a routine I've written that will do the convolution of two arrays, while returning the convolution result in the same dimensions as the input using a 1D interpolation. Step9: So now we'll setup a new instance of Model, and we'll call it the "Triplet Model" Step10: The instance method automatically detects which variables are actually required, making it quite trivial to set up new model functions to fit anything you want (in theory anyway).
Python Code: testmodel = Model("Test") # set up a test model with linear fit Explanation: SpectralAnalysis showcase I've written a library for regular plotting and curve fitting that is much, much nicer than the FittingRoutines module. In this notebook I'll be showing off (I guess in a README-esque way) of how all the functions should be called as I've written them. The first notable thing is a class Model, which will house the functions used to fit our sets of data. The advantage of doing it this way is that necessary variables will be detected, and reported back to the user to remind them what is required. To create a Model instance, we give it a string name: End of explanation testmodel.SetFunction(Linear) Explanation: It'll then remind you to set a function, because as it stands it's only nominally a function. We can either define our own function within the notebook, or use one of the preset functions I've written. To start off, let's make it a straight line with the equation $$ y = mx + c $$ End of explanation testmodel.SetVariables({"Gradient": 5., "Offset": 2.}) # Dictionary with variables Explanation: The instance will automagically detect what variables are required for the function, providing the function you gave it is a one-liner (only has return) just because of the way python works. In this case, we need the gradient and offset values which we can set by sending a dictionary along its way End of explanation x = np.linspace(0,10,20) Noise = np.random.rand(20) y = Linear(x, 6., 3.) + Noise # Generate some data to fit to Explanation: That's all we need to do to define a Model instance! Let's test it out by creating some fake linear data. End of explanation df = FormatData(x, y) Explanation: Now I've also written a function that will package some xy data in the correct format for curve fitting. This is done by using the FormatData() function. End of explanation popt, report, fits, pcov = FitModel(df, testmodel) Explanation: df is now a pandas dataframe, where x is stored as the index and y is a column called "Y Range", just for internal consistency. Now we're ready to fit the data by calling the function FitModel. The input parameters for this is a reference to the dataframe holding the target data, as well as a reference to the Model instance. What it returns are: Optimised parameters Fit report Fitted curves, including the original data Covariance matrix End of explanation Labels = {"X Label": "Durr", "Y Label": "Hurr", "Title": "Durr vs. Hurr", "X Limits": [0, 5], "Y Limits": [0, 20],} PlotData(fits, Interface="pyplot", Labels=Labels) Explanation: And it worked! Now we can call the plotting interface to show us the results. For this, I've written a function called PlotData(). The dataframe input is formatted such that x is the index, and will work with up to 11 columns. A keyword Interface will also let you choose between using matplotlib (good for static plots, saves space and time) and bokeh (interactive and pretty, slow). Another interesting argument is Labels, which is a dictionary for specifying the axes labels and whatnot. If it is left unspecified, it will just plot without labels. 
End of explanation FC063b = Spectrum("./FC063b_sub_KER.dat") FC063b.PlotAll() GaussianModel = Model("Gaussian") GaussianModel.SetFunction(GaussianFunction) GaussianModel.SetVariables({"Amplitude": 50., "Centre": 2500., "Width": 300.}) Boundaries = ([0., 2400., 200.], [1e3, 2900., 500.]) GaussianModel.SetBounds(Boundaries) popt, report, fits, cov = FitModel(FC063b.Data, GaussianModel) PlotData(fits) Explanation: A more advanced function, such as a Gaussian is shown below. The procedure I went through is exactly the same as above. The one thing I did differently was include boundary conditions for the curve fitting. This is done by calling the Model.SetBounds() method, where the input is shown below. Another thing demonstrated below is the use of the Spectrum class. I've written it to store and reference data, but admittedly haven't gone very far with it beyond storing the spectra as an attribute. End of explanation def ConvolveGB(x, A, T): return A * ConvolveArrays(GaussianFunction(x, 1., 2300., 250.), # None of the parameters are floated BoltzmannFunction(x, 1., T), # Only the temperature! x) Explanation: Custom functions With the Model class, it is possible to write your own objective function, i.e. with convolutions, combinations etc. Let's start with the convolution of a Gaussian and a Boltzmann. I want to fit only the temperature and the total amplitude, but none of the other stuff. This is something I did for the $T_1$ methyl data, where the impulsive reservoir is fixed while the statistical reservoir grows with excitation energy. For this, I'm going to use a routine I've written that will do the convolution of two arrays, while returning the convolution result in the same dimensions as the input using a 1D interpolation. End of explanation TripletModel = Model("Triplet Convolution") TripletModel.SetFunction(ConvolveGB) Explanation: So now we'll setup a new instance of Model, and we'll call it the "Triplet Model": End of explanation TripletModel.SetVariables({"A": 100., "T": 10.}) TripletModel.SetBounds(([0., 0.], [800., 10.])) popt, report, fits, cov = FitModel(FC063b.Data, TripletModel) PlotData(fits) np.diagonal(cov) Explanation: The instance method automatically detects which variables are actually required, making it quite trivial to set up new model functions to fit anything you want (in theory anyway). End of explanation
11,299
Given the following text description, write Python code to implement the functionality described below step by step Description: Whiskey Data This data set contains data on a small number of whiskies Step1: Summaries Shown below are the following charts Step2: Some Analysis Here we use the sci-kit decision tree regression tool to predict the price of a whiskey given its age, rating and ABV value. We transform the output for plotting purposes, but note that the tooltips give the original data Step3: Simple Linked Charts
Python Code: import pandas as pd from numpy import log, abs, sign, sqrt import ibmcognitive ibmcognitive.brunel.set_brunel_service_url("http://localhost:8080/BrunelServices") data = pd.read_csv("data/whiskey.csv") print('Data on whiskies:', ', '.join(data.columns)) Explanation: Whiskey Data This data set contains data on a small number of whiskies End of explanation brunel x(country, category) color(rating) treemap label(name:3) tooltip(#all) style('.label {font-size:7pt}') legends(none):: width=900, height=600 brunel bubble color(rating:red) sort(rating) size(abv) label(name:6) tooltip(#all) filter(price, category) :: height=500 %%brunel line x(age) y(rating) mean(rating) label(country) split(country) using(interpolate) bin(age:8) color(#selection) legends(none) | treemap x(category) interaction(select) size(#count) color(#selection) legends(none) sort(#count:ascending) bin(category:9) tooltip(country) list(country) label(#count) style('.labels .label {font-size:14px}') :: width=900 %%brunel bubble label(country:3) bin(country) size(#count) color(#selection) sort(#count) interaction(select) tooltip(name) list(name) legends(none) | x(abv) y(rating) color(#count:blue) legends(none) bin(abv:8) bin(rating:5) style('symbol:rect; stroke:none; size:100%') at(0,10,70,100) interaction(select) label(#selection) list(#selection) at(60,15,100,100) tooltip(rating, abv,#count) legends(none) | bar label(brand:70) list(brand) at(0,0, 100, 10) axes(none) color(#selection) legends(none) interaction(filter) :: width=900, height=600 Explanation: Summaries Shown below are the following charts: A treemap display for each whiskey, broken down by country and category. The cells are colored by the rating, with lower-rated whiskies in blue, and higher-rated in reds. Missing data for ratings show as black. A filtered chart allowing you to select whiskeys based on price and category A line chart showing the relationship between age and rating. A simple treemap of categories is linked to this chart A bubble chart of countries linked to a heatmap of alcohol level (ABV) by rating End of explanation from sklearn import tree D = data[['Name', 'ABV', 'Age', 'Rating', 'Price']].dropna() X = D[ ['ABV', 'Age', 'Rating'] ] y = D['Price'] clf = tree.DecisionTreeRegressor(min_samples_leaf=4) clf.fit(X, y) D['Predicted'] = clf.predict(X) f = D['Predicted'] - D['Price'] D['Diff'] = sqrt(abs(f)) * sign(f) D['LPrice'] = log(y) %brunel y(diff) x(LPrice) tooltip(name, price, predicted, rating) color(rating) :: width=700 Explanation: Some Analysis Here we use the sci-kit decision tree regression tool to predict the price of a whiskey given its age, rating and ABV value. We transform the output for plotting purposes, but note that the tooltips give the original data End of explanation %%brunel bar x(country) y(#count) | bar color(category) y(#count) polar stack label(category) legends(none) :: width=900, height=300 Explanation: Simple Linked Charts End of explanation