text
stringlengths
2.5k
6.39M
kind
stringclasses
3 values
# CNTK 201A Part A: CIFAR-10 Data Loader This tutorial will show how to prepare image data sets for use with deep learning algorithms in CNTK. The CIFAR-10 dataset (http://www.cs.toronto.edu/~kriz/cifar.html) is a popular dataset for image classification, collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. It is a labeled subset of the [80 million tiny images](http://people.csail.mit.edu/torralba/tinyimages/) dataset. The CIFAR-10 dataset is not included in the CNTK distribution but can be easily downloaded and converted to CNTK-supported format CNTK 201A tutorial is divided into two parts: - Part A: Familiarizes you with the CIFAR-10 data and converts them into CNTK supported format. This data will be used later in the tutorial for image classification tasks. - Part B: We will introduce image understanding tutorials. If you are curious about how well computers can perform on CIFAR-10 today, Rodrigo Benenson maintains a [blog](http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#43494641522d3130) on the state-of-the-art performance of various algorithms. ``` from __future__ import print_function from PIL import Image import getopt import numpy as np import pickle as cp import os import shutil import struct import sys import tarfile import xml.etree.cElementTree as et import xml.dom.minidom try: from urllib.request import urlretrieve except ImportError: from urllib import urlretrieve # Config matplotlib for inline plotting %matplotlib inline ``` ## Data download The CIFAR-10 dataset consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images. The 10 classes are: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. ``` # CIFAR Image data imgSize = 32 numFeature = imgSize * imgSize * 3 ``` We first setup a few helper functions to download the CIFAR data. The archive contains the files data_batch_1, data_batch_2, ..., data_batch_5, as well as test_batch. Each of these files is a Python "pickled" object produced with cPickle. To prepare the input data for use in CNTK we use three oprations: > `readBatch`: Unpack the pickle files > `loadData`: Compose the data into single train and test objects > `saveTxt`: As the name suggests, saves the label and the features into text files for both training and testing. ``` def readBatch(src): with open(src, 'rb') as f: if sys.version_info[0] < 3: d = cp.load(f) else: d = cp.load(f, encoding='latin1') data = d['data'] feat = data res = np.hstack((feat, np.reshape(d['labels'], (len(d['labels']), 1)))) return res.astype(np.int) def loadData(src): print ('Downloading ' + src) fname, h = urlretrieve(src, './delete.me') print ('Done.') try: print ('Extracting files...') with tarfile.open(fname) as tar: tar.extractall() print ('Done.') print ('Preparing train set...') trn = np.empty((0, numFeature + 1), dtype=np.int) for i in range(5): batchName = './cifar-10-batches-py/data_batch_{0}'.format(i + 1) trn = np.vstack((trn, readBatch(batchName))) print ('Done.') print ('Preparing test set...') tst = readBatch('./cifar-10-batches-py/test_batch') print ('Done.') finally: os.remove(fname) return (trn, tst) def saveTxt(filename, ndarray): with open(filename, 'w') as f: labels = list(map(' '.join, np.eye(10, dtype=np.uint).astype(str))) for row in ndarray: row_str = row.astype(str) label_str = labels[row[-1]] feature_str = ' '.join(row_str[:-1]) f.write('|labels {} |features {}\n'.format(label_str, feature_str)) ``` In addition to saving the images in the text format, we would save the images in PNG format. In addition we also compute the mean of the image. `saveImage` and `saveMean` are two functions used for this purpose. ``` def saveImage(fname, data, label, mapFile, regrFile, pad, **key_parms): # data in CIFAR-10 dataset is in CHW format. pixData = data.reshape((3, imgSize, imgSize)) if ('mean' in key_parms): key_parms['mean'] += pixData if pad > 0: pixData = np.pad(pixData, ((0, 0), (pad, pad), (pad, pad)), mode='constant', constant_values=128) img = Image.new('RGB', (imgSize + 2 * pad, imgSize + 2 * pad)) pixels = img.load() for x in range(img.size[0]): for y in range(img.size[1]): pixels[x, y] = (pixData[0][y][x], pixData[1][y][x], pixData[2][y][x]) img.save(fname) mapFile.write("%s\t%d\n" % (fname, label)) # compute per channel mean and store for regression example channelMean = np.mean(pixData, axis=(1,2)) regrFile.write("|regrLabels\t%f\t%f\t%f\n" % (channelMean[0]/255.0, channelMean[1]/255.0, channelMean[2]/255.0)) def saveMean(fname, data): root = et.Element('opencv_storage') et.SubElement(root, 'Channel').text = '3' et.SubElement(root, 'Row').text = str(imgSize) et.SubElement(root, 'Col').text = str(imgSize) meanImg = et.SubElement(root, 'MeanImg', type_id='opencv-matrix') et.SubElement(meanImg, 'rows').text = '1' et.SubElement(meanImg, 'cols').text = str(imgSize * imgSize * 3) et.SubElement(meanImg, 'dt').text = 'f' et.SubElement(meanImg, 'data').text = ' '.join(['%e' % n for n in np.reshape(data, (imgSize * imgSize * 3))]) tree = et.ElementTree(root) tree.write(fname) x = xml.dom.minidom.parse(fname) with open(fname, 'w') as f: f.write(x.toprettyxml(indent = ' ')) ``` `saveTrainImages` and `saveTestImages` are simple wrapper functions to iterate through the data set. ``` def saveTrainImages(filename, foldername): if not os.path.exists(foldername): os.makedirs(foldername) data = {} dataMean = np.zeros((3, imgSize, imgSize)) # mean is in CHW format. with open('train_map.txt', 'w') as mapFile: with open('train_regrLabels.txt', 'w') as regrFile: for ifile in range(1, 6): with open(os.path.join('./cifar-10-batches-py', 'data_batch_' + str(ifile)), 'rb') as f: if sys.version_info[0] < 3: data = cp.load(f) else: data = cp.load(f, encoding='latin1') for i in range(10000): fname = os.path.join(os.path.abspath(foldername), ('%05d.png' % (i + (ifile - 1) * 10000))) saveImage(fname, data['data'][i, :], data['labels'][i], mapFile, regrFile, 4, mean=dataMean) dataMean = dataMean / (50 * 1000) saveMean('CIFAR-10_mean.xml', dataMean) def saveTestImages(filename, foldername): if not os.path.exists(foldername): os.makedirs(foldername) with open('test_map.txt', 'w') as mapFile: with open('test_regrLabels.txt', 'w') as regrFile: with open(os.path.join('./cifar-10-batches-py', 'test_batch'), 'rb') as f: if sys.version_info[0] < 3: data = cp.load(f) else: data = cp.load(f, encoding='latin1') for i in range(10000): fname = os.path.join(os.path.abspath(foldername), ('%05d.png' % i)) saveImage(fname, data['data'][i, :], data['labels'][i], mapFile, regrFile, 0) # URLs for the train image and labels data url_cifar_data = 'http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz' # Paths for saving the text files data_dir = './data/CIFAR-10/' train_filename = data_dir + '/Train_cntk_text.txt' test_filename = data_dir + '/Test_cntk_text.txt' train_img_directory = data_dir + '/Train' test_img_directory = data_dir + '/Test' root_dir = os.getcwd() if not os.path.exists(data_dir): os.makedirs(data_dir) try: os.chdir(data_dir) trn, tst= loadData('http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz') print ('Writing train text file...') saveTxt(r'./Train_cntk_text.txt', trn) print ('Done.') print ('Writing test text file...') saveTxt(r'./Test_cntk_text.txt', tst) print ('Done.') print ('Converting train data to png images...') saveTrainImages(r'./Train_cntk_text.txt', 'train') print ('Done.') print ('Converting test data to png images...') saveTestImages(r'./Test_cntk_text.txt', 'test') print ('Done.') finally: os.chdir("../..") ```
github_jupyter
``` #|hide #|skip ! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab ``` # Tabular training > How to use the tabular application in fastai To illustrate the tabular application, we will use the example of the [Adult dataset](https://archive.ics.uci.edu/ml/datasets/Adult) where we have to predict if a person is earning more or less than $50k per year using some general data. ``` from fastai.tabular.all import * ``` We can download a sample of this dataset with the usual `untar_data` command: ``` path = untar_data(URLs.ADULT_SAMPLE) path.ls() ``` Then we can have a look at how the data is structured: ``` df = pd.read_csv(path/'adult.csv') df.head() ``` Some of the columns are continuous (like age) and we will treat them as float numbers we can feed our model directly. Others are categorical (like workclass or education) and we will convert them to a unique index that we will feed to embedding layers. We can specify our categorical and continuous column names, as well as the name of the dependent variable in `TabularDataLoaders` factory methods: ``` dls = TabularDataLoaders.from_csv(path/'adult.csv', path=path, y_names="salary", cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race'], cont_names = ['age', 'fnlwgt', 'education-num'], procs = [Categorify, FillMissing, Normalize]) ``` The last part is the list of pre-processors we apply to our data: - `Categorify` is going to take every categorical variable and make a map from integer to unique categories, then replace the values by the corresponding index. - `FillMissing` will fill the missing values in the continuous variables by the median of existing values (you can choose a specific value if you prefer) - `Normalize` will normalize the continuous variables (subtract the mean and divide by the std) To further expose what's going on below the surface, let's rewrite this utilizing `fastai`'s `TabularPandas` class. We will need to make one adjustment, which is defining how we want to split our data. By default the factory method above used a random 80/20 split, so we will do the same: ``` splits = RandomSplitter(valid_pct=0.2)(range_of(df)) to = TabularPandas(df, procs=[Categorify, FillMissing,Normalize], cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race'], cont_names = ['age', 'fnlwgt', 'education-num'], y_names='salary', splits=splits) ``` Once we build our `TabularPandas` object, our data is completely preprocessed as seen below: ``` to.xs.iloc[:2] ``` Now we can build our `DataLoaders` again: ``` dls = to.dataloaders(bs=64) ``` > Later we will explore why using `TabularPandas` to preprocess will be valuable. The `show_batch` method works like for every other application: ``` dls.show_batch() ``` We can define a model using the `tabular_learner` method. When we define our model, `fastai` will try to infer the loss function based on our `y_names` earlier. **Note**: Sometimes with tabular data, your `y`'s may be encoded (such as 0 and 1). In such a case you should explicitly pass `y_block = CategoryBlock` in your constructor so `fastai` won't presume you are doing regression. ``` learn = tabular_learner(dls, metrics=accuracy) ``` And we can train that model with the `fit_one_cycle` method (the `fine_tune` method won't be useful here since we don't have a pretrained model). ``` learn.fit_one_cycle(1) ``` We can then have a look at some predictions: ``` learn.show_results() ``` Or use the predict method on a row: ``` row, clas, probs = learn.predict(df.iloc[0]) row.show() clas, probs ``` To get prediction on a new dataframe, you can use the `test_dl` method of the `DataLoaders`. That dataframe does not need to have the dependent variable in its column. ``` test_df = df.copy() test_df.drop(['salary'], axis=1, inplace=True) dl = learn.dls.test_dl(test_df) ``` Then `Learner.get_preds` will give you the predictions: ``` learn.get_preds(dl=dl) ``` > Note: Since machine learning models can't magically understand categories it was never trained on, the data should reflect this. If there are different missing values in your test data you should address this before training ## `fastai` with Other Libraries As mentioned earlier, `TabularPandas` is a powerful and easy preprocessing tool for tabular data. Integration with libraries such as Random Forests and XGBoost requires only one extra step, that the `.dataloaders` call did for us. Let's look at our `to` again. Its values are stored in a `DataFrame` like object, where we can extract the `cats`, `conts,` `xs` and `ys` if we want to: ``` to.xs[:3] ``` Now that everything is encoded, you can then send this off to XGBoost or Random Forests by extracting the train and validation sets and their values: ``` X_train, y_train = to.train.xs, to.train.ys.values.ravel() X_test, y_test = to.valid.xs, to.valid.ys.values.ravel() ``` And now we can directly send this in!
github_jupyter
# Enron email data set exploration ``` # Get better looking pictures %config InlineBackend.figure_format = 'retina' df = pd.read_feather('enron.feather') df = df.sort_values(['Date']) df.tail(5) ``` ## Email traffic over time Group the data set by `Date` and `MailID`, which will get you an index that collects all of the unique mail IDs per date. Then reset the index so that those date and mail identifiers become columns and then select for just those columns; we don't actually care about the counts created by the `groupby` (that was just to get the index). Create a histogram that shows the amount of traffic per day. Then specifically for email sent from `richard.shapiro` and then `john.lavorato`. Because some dates are set improperly (to 1980), filter for dates greater than January 1, 1999. ## Received emails Count the number of messages received per user and then sort in reverse order. Make a bar chart showing the top 30 email recipients. ## Sent emails Make a bar chart indicating the top 30 mail senders. This is more complicated than the received emails because a single person can email multiple people in a single email. So, group by `From` and `MailID`, convert the index back to columns and then group again by `From` and get the count. ## Email heatmap Given a list of Enron employees, compute a heat map that indicates how much email traffic went between each pair of employees. The heat map is not symmetric because Susan sending mail to Xue is not the same thing as Xue sending mail to Susan. The first step is to group the data frame by `From` and `To` columns in order to get the number of emails from person $i$ to person $j$. Then, create a 2D numpy matrix, $C$, of integers and set $C_{i,j}$ to the count of person $i$ to person $j$. Using matplotlib, `ax.imshow(C, cmap='GnBu', vmax=4000)`, show the heat map and add tick labels at 45 degrees for the X axis. Set the labels to the appropriate names. Draw the number of emails in the appropriate cells of the heat map, for all values greater than zero. Please note that when you draw text using `ax.text()`, the coordinates are X,Y whereas the coordinates in the $C$ matrix are row,column so you will have to flip the coordinates. ``` people = ['jeff.skilling', 'kenneth.lay', 'louise.kitchen', 'tana.jones', 'sara.shackleton', 'vince.kaminski', 'sally.beck', 'john.lavorato', 'mark.taylor', 'greg.whalley', 'jeff.dasovich', 'steven.kean', 'chris.germany', 'mike.mcconnell', 'benjamin.rogers', 'j.kaminski', 'stanley.horton', 'a..shankman', 'richard.shapiro'] ``` ## Build graph and compute rankings From the data frame, create a graph data structure using networkx. Create an edge from node A to node B if there is an email from A to B in the data frame. Although we do know the total number of emails between people, let's keep it simple and use simply a weight of 1 as the edge label. See networkx method `add_edge()`. 1. Using networkx, compute the pagerank between all nodes. Get the data into a data frame, sort in reverse order, and display the top 15 users from the data frame. 2. Compute the centrality for the nodes of the graph. The documentation says that centrality is "*the fraction of nodes it is connected to.*" I use `DataFrame.from_dict` to convert the dictionaries returned from the various networkx methods to data frames. ### Node PageRank ### Centrality ### Plotting graph subsets The email graph is way too large to display the whole thing and get any meaningful information out. However, we can look at subsets of the graph such as the neighbors of a specific node. To visualize it we can use different strategies to layout the nodes. In this case, we will use two different layout strategies: *spring* and *kamada-kawai*. According to [Wikipedia](https://en.wikipedia.org/wiki/Force-directed_graph_drawing), these force directed layout strategies have the characteristic: "*...the edges tend to have uniform length (because of the spring forces), and nodes that are not connected by an edge tend to be drawn further apart...*". Use networkx `ego_graph()` method to get a radius=1 neighborhood around `jeff.skilling` and draw the spring graph with a plot that is 20x20 inch so we can see details. Then, draw the same subgraph again using the kamada-kawai layout strategy. Finally, get the neighborhood around kenneth.lay and draw kamada-kawai.
github_jupyter
# ONNX Runtime: Tutorial for STVM execution provider This notebook shows a simple example for model inference with STVM EP. #### Tutorial Roadmap: 1. Prerequistes 2. Accuracy check for STVM EP 3. Configuration options ## 1. Prerequistes Make sure that you have installed all the necessary dependencies described in the corresponding paragraph of the documentation. Also, make sure you have the `tvm` and `onnxruntime-stvm` packages in your pip environment. If you are using `PYTHONPATH` variable expansion, make sure it contains the following paths: `<path_to_msft_onnxrt>/onnxruntime/cmake/external/tvm_update/python` and `<path_to_msft_onnxrt>/onnxruntime/build/Linux/Release`. ### Common import These packages can be delivered from standard `pip`. ``` import onnx import numpy as np from typing import List, AnyStr from onnx import ModelProto, helper, checker, mapping ``` ### Specialized import It is better to collect these packages from source code in order to clearly understand what is available to you right now. ``` import tvm.testing from tvm.contrib.download import download_testdata import onnxruntime.providers.stvm # nessesary to register tvm_onnx_import_and_compile and others ``` ### Helper functions for working with ONNX ModelProto This set of helper functions allows you to recognize the meta information of the models. This information is needed for more versatile processing of ONNX models. ``` def get_onnx_input_names(model: ModelProto) -> List[AnyStr]: inputs = [node.name for node in model.graph.input] initializer = [node.name for node in model.graph.initializer] inputs = list(set(inputs) - set(initializer)) return sorted(inputs) def get_onnx_output_names(model: ModelProto) -> List[AnyStr]: return [node.name for node in model.graph.output] def get_onnx_input_types(model: ModelProto) -> List[np.dtype]: input_names = get_onnx_input_names(model) return [ mapping.TENSOR_TYPE_TO_NP_TYPE[node.type.tensor_type.elem_type] for node in sorted(model.graph.input, key=lambda node: node.name) if node.name in input_names ] def get_onnx_input_shapes(model: ModelProto) -> List[List[int]]: input_names = get_onnx_input_names(model) return [ [dv.dim_value for dv in node.type.tensor_type.shape.dim] for node in sorted(model.graph.input, key=lambda node: node.name) if node.name in input_names ] def get_random_model_inputs(model: ModelProto) -> List[np.ndarray]: input_shapes = get_onnx_input_shapes(model) input_types = get_onnx_input_types(model) assert len(input_types) == len(input_shapes) inputs = [np.random.uniform(size=shape).astype(dtype) for shape, dtype in zip(input_shapes, input_types)] return inputs ``` ### Wrapper helper functions for Inference Wrapper helper functions for running model inference using ONNX Runtime EP. ``` def get_onnxruntime_output(model: ModelProto, inputs: List, provider_name: AnyStr) -> np.ndarray: output_names = get_onnx_output_names(model) input_names = get_onnx_input_names(model) assert len(input_names) == len(inputs) input_dict = {input_name: input_value for input_name, input_value in zip(input_names, inputs)} inference_session = onnxruntime.InferenceSession(model.SerializeToString(), providers=[provider_name]) output = inference_session.run(output_names, input_dict) # Unpack output if there's only a single value. if len(output) == 1: output = output[0] return output def get_cpu_onnxruntime_output(model: ModelProto, inputs: List) -> np.ndarray: return get_onnxruntime_output(model, inputs, "CPUExecutionProvider") def get_stvm_onnxruntime_output(model: ModelProto, inputs: List) -> np.ndarray: return get_onnxruntime_output(model, inputs, "StvmExecutionProvider") ``` ### Helper function for checking accuracy This function uses the TVM API to compare two output tensors. The tensor obtained using the `CPUExecutionProvider` is used as a reference. If a mismatch is found between tensors, an appropriate exception will be thrown. ``` def verify_with_ort_with_inputs( model, inputs, out_shape=None, opset=None, freeze_params=False, dtype="float32", rtol=1e-5, atol=1e-5, opt_level=1, ): if opset is not None: model.opset_import[0].version = opset ort_out = get_cpu_onnxruntime_output(model, inputs) stvm_out = get_stvm_onnxruntime_output(model, inputs) for stvm_val, ort_val in zip(stvm_out, ort_out): tvm.testing.assert_allclose(ort_val, stvm_val, rtol=rtol, atol=atol) assert ort_val.dtype == stvm_val.dtype ``` ### Helper functions for download models These functions use the TVM API to download models from the ONNX Model Zoo. ``` BASE_MODEL_URL = "https://github.com/onnx/models/raw/master/" MODEL_URL_COLLECTION = { "ResNet50-v1": "vision/classification/resnet/model/resnet50-v1-7.onnx", "ResNet50-v2": "vision/classification/resnet/model/resnet50-v2-7.onnx", "SqueezeNet-v1.1": "vision/classification/squeezenet/model/squeezenet1.1-7.onnx", "SqueezeNet-v1.0": "vision/classification/squeezenet/model/squeezenet1.0-7.onnx", "Inception-v1": "vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-7.onnx", "Inception-v2": "vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-7.onnx", } def get_model_url(model_name): return BASE_MODEL_URL + MODEL_URL_COLLECTION[model_name] def get_name_from_url(url): return url[url.rfind("/") + 1 :].strip() def find_of_download(model_name): model_url = get_model_url(model_name) model_file_name = get_name_from_url(model_url) return download_testdata(model_url, model_file_name, module="models") ``` ## 2. Accuracy check for STVM EP This section will check the accuracy. The check will be to compare the output tensors for `CPUExecutionProvider` and `STVMExecutionProvider`. See the description of `verify_with_ort_with_inputs` function used above. ### Check for simple architectures ``` def get_two_input_model(op_name: AnyStr) -> ModelProto: dtype = "float32" in_shape = [1, 2, 3, 3] in_type = mapping.NP_TYPE_TO_TENSOR_TYPE[np.dtype(dtype)] out_shape = in_shape out_type = in_type layer = helper.make_node(op_name, ["in1", "in2"], ["out"]) graph = helper.make_graph( [layer], "two_input_test", inputs=[ helper.make_tensor_value_info("in1", in_type, in_shape), helper.make_tensor_value_info("in2", in_type, in_shape), ], outputs=[ helper.make_tensor_value_info( "out", out_type, out_shape ) ], ) model = helper.make_model(graph, producer_name="two_input_test") checker.check_model(model, full_check=True) return model onnx_model = get_two_input_model("Add") inputs = get_random_model_inputs(onnx_model) verify_with_ort_with_inputs(onnx_model, inputs) print("****************** Success! ******************") ``` ### Check for DNN architectures ``` def get_onnx_model(model_name): model_path = find_of_download(model_name) onnx_model = onnx.load(model_path) return onnx_model model_name = "ResNet50-v1" onnx_model = get_onnx_model(model_name) inputs = get_random_model_inputs(onnx_model) verify_with_ort_with_inputs(onnx_model, inputs) print("****************** Success! ******************") ``` ## 3. Configuration options This section shows how you can configure STVM EP using custom options. For more details on the options used, see the corresponding section of the documentation. ``` provider_name = "StvmExecutionProvider" provider_options = dict(target="llvm -mtriple=x86_64-linux-gnu", target_host="llvm -mtriple=x86_64-linux-gnu", opt_level=3, freeze_weights=True, tuning_file_path="", tuning_type="Ansor", ) model_name = "ResNet50-v1" onnx_model = get_onnx_model(model_name) input_dict = {input_name: input_value for input_name, input_value in zip(get_onnx_input_names(onnx_model), get_random_model_inputs(onnx_model))} output_names = get_onnx_output_names(onnx_model) stvm_session = onnxruntime.InferenceSession(onnx_model.SerializeToString(), providers=[provider_name], provider_options=[provider_options] ) output = stvm_session.run(output_names, input_dict)[0] print(f"****************** Output shape: {output.shape} ******************") ```
github_jupyter
# Wave (.wav) to Zero Crossing. This is an attempt to produce synthetic ZC (Zero Crossing) from FS (Full Scan) files. All parts are calculated in the time domain to mimic true ZC. FFT is not used (maybe with the exception of the internal implementation of the Butterworth filter). Current status: Seems to work well for "easy files", but not for mixed and low amplitude recordings. I don't know why... The resulting plot is both embedded in this notebook and as separate files: 'zc_in_time_domain_test_1.png' and 'zc_in_time_domain_test_2.png'. Sources in information/inspiration: - http://users.lmi.net/corben/fileform.htm#Anabat%20File%20Formats - https://stackoverflow.com/questions/3843017/efficiently-detect-sign-changes-in-python - https://github.com/riggsd/zcant/blob/master/zcant/conversion.py ``` %matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.io.wavfile as wf import scipy.signal #import sounddevice # Settings. #sound_file = '../data_in/Mdau_TE384.wav' sound_file = '../data_in/Ppip_TE384.wav' #sound_file = '../data_in/Myotis-Plecotus-Eptesicus_TE384.wav' cutoff_freq_hz = 18000 zc_divratio = 4 # Debug settings. play_sound = False debug = False # Read the sound file. (sampling_freq, signal_int16) = wf.read(sound_file, 'rb') print('Sampling freq in file: ' + str(sampling_freq) + ' Hz.') print(str(len(signal_int16)) + ' samples.') #if play_sound: # sounddevice.play(signal_int16, sampling_freq) # sounddevice.wait() # Check if TE, Time Expansion. if '_TE' in sound_file: sampling_freq *= 10 print('Sampling freq: ' + str(sampling_freq) + ' Hz.') # Signed int16 to [-1.0, 1.0]. signal = np.array(signal_int16) / 32768 # Noise level. RMS, root-mean-square. noise_level = np.sqrt(np.mean(np.square(signal))) print(noise_level) # Filter. Butterworth. nyquist = 0.5 * sampling_freq low = cutoff_freq_hz / nyquist filter_order = 9 b, a = scipy.signal.butter(filter_order, [low], btype='highpass') #signal= scipy.signal.lfilter(b, a, signal) signal= scipy.signal.filtfilt(b, a, signal) # Add hysteresis around zero to remove noise. signal[(signal < noise_level) & (signal > -noise_level)] = 0.0 # Check where zero crossings may occur. sign_diff_array = np.diff(np.sign(signal)) # Extract positive zero passings and interpolate where it occurs. index_array = [] old_index = None for index, value in enumerate(sign_diff_array): if value in [2., 1., 0.]: # Check for raising signal level. if value == 2.: # From negative directly to positive. Calculate interpolated index. x_adjust = signal[index] / (signal[index] - signal[index+1]) index_array.append(index + x_adjust) old_index = None elif (value == 1.) and (old_index is None): # From negative to zero. old_index = index elif (value == 1.) and (old_index is not None): # From zero to positive. Calculate interpolated index. x_adjust = signal[old_index] / (signal[old_index] - signal[index+1]) index_array.append(old_index + x_adjust) old_index = None else: # Falling signal level. old_index = None print(len(index_array)) if debug: print(index_array[:100]) zero_crossings = index_array[::zc_divratio] print(len(zero_crossings)) # Prepare lists. freqs = [] times = [] for index, zero_crossing in enumerate(zero_crossings[0:-1]): freq = zero_crossings[index+1] - zero_crossings[index] freq_hz = sampling_freq * zc_divratio / freq if freq_hz >= cutoff_freq_hz: freqs.append(freq_hz) times.append(zero_crossing) print(len(freqs)) # Prepare arrays for plotting. freq_array_khz = np.array(freqs) / 1000.0 time_array_s = np.array(times) / sampling_freq time_array_compact = range(0, len(times)) if debug: print(len(freq_array_khz)) print(freq_array_khz[:100]) print(time_array_s[:100]) # Plot two diagrams, normal and compressed time. fig, (ax1, ax2) = plt.subplots(2,1, figsize=(16, 5), dpi=150, #facecolor='w', #edgecolor='k', ) # ax1. ax1.scatter(time_array_s, freq_array_khz, s=1, c='navy', alpha=0.5) ax1.set_title('File: ' + sound_file) ax1.set_ylim((0,120)) ax1.minorticks_on() ax1.grid(which='major', linestyle='-', linewidth='0.5', alpha=0.6) ax1.grid(which='minor', linestyle='-', linewidth='0.5', alpha=0.3) ax1.tick_params(which='both', top='off', left='off', right='off', bottom='off') # ax2. ax2.scatter(time_array_compact, freq_array_khz, s=1, c='navy', alpha=0.5) ax2.set_ylim((0,120)) ax2.minorticks_on() ax2.grid(which='major', linestyle='-', linewidth='0.5', alpha=0.6) ax2.grid(which='minor', linestyle='-', linewidth='0.5', alpha=0.3) ax2.tick_params(which='both', top='off', left='off', right='off', bottom='off') plt.tight_layout() fig.savefig('zc_in_time_domain_test.png') #fig.savefig('zc_in_time_domain_test_1.png') #fig.savefig('zc_in_time_domain_test_2.png') plt.show() ```
github_jupyter
# Binned Likelihood Tutorial The detection, flux determination, and spectral modeling of Fermi LAT sources is accomplished by a maximum likelihood optimization technique as described in the [Cicerone](https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Likelihood/) (see also, e.g., [Abdo, A. A. et al. 2009, ApJS, 183, 46](http://adsabs.harvard.edu/abs/2009ApJS..183...46A)). To illustrate how to use the Likelihood software, this tutorial gives a step-by-step description for performing a binned likelihood analysis. ## Binned vs Unbinned Likelihood Binned likelihood analysis is the preferred method for most types of LAT analysis (see [Cicerone](https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Likelihood/)). However, when analyzing data over short time periods (with few events), it is better to use the **unbinned** analysis. To perform an unbinned likelihood analysis, see the [Unbinned Likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood_tutorial.html) tutorial. **Additional references**: * [SciTools References](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/references.html) * Descriptions of available [Spectral and Spatial Models](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/source_models.html) * Examples of [XML Model Definitions for Likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#xmlModelDefinitions): * [Power Law](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#powerlaw) * [Broken Power Law](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#brokenPowerLaw) * [Broken Power Law 2](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#powerLaw2) * [Log Parabola](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#logParabola) * [Exponential Cutoff](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#expCutoff) * [BPL Exponential Cutoff](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#bplExpCutoff) * [Gaussian](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#gaussian) * [Constant Value](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#constantValue) * [File Function](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#fileFunction) * [Band Function](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#bandFunction) * [PL Super Exponential Cutoff](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#plSuperExpCutoff) # Prerequisites You will need an **event** data file, a **spacecraft** data file (also referred to as the "pointing and livetime history" file), and the current **background models** (available for [download](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html)). They are also found in code cells below. You may choose to select your own data files, or to use the files provided within this tutorial. Custom data sets may be retrieved from the [Lat Data Server](http://fermi.gsfc.nasa.gov/cgi-bin/ssc/LAT/LATDataQuery.cgi). # Outline 1. **Make Subselections from the Event Data** Since there is computational overhead for each event associated with each diffuse component, it is useful to filter out any events that are not within the extraction region used for the analysis. 2. **Make Counts Maps from the Event Files** By making simple FITS images, we can inspect our data and pick out obvious sources. 3. **Download the latest diffuse models** The recommended models for a normal point source analysis are `gll_iem_v07.fits` (a very large file) and `iso_P8R3_SOURCE_V2_v1.txt`. All of the background models along with a description of the models are available [here](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html). 4. **Create a Source Model XML File** The source model XML file contains the various sources and their model parameters to be fit using the **gtlike** tool. 5. **Create a 3D Counts Cube** The binned counts cube is used to reduce computation requirements in regions with large numbers of events. 6. **Compute Livetimes** Precomputing the livetime for the dataset speeds up the exposure calculation. 7. **Compute Exposure Cube** This accounts for exposure as a function of energy, based on the cuts made. The exposure map must be recomputed if any change is made to the data selection or binning. 8. **Compute Source Maps** Here the exposure calculation is applied to each of the sources described in the model. 9. **Perform the Likelihood Fit** Fitting the data to the model provides flux, errors, spectral indices, and other information. 10. **Create a Model Map** This can be compared to the counts map to verify the quality of the fit and to make a residual map. # 1. Make subselections from the event data For this case we will use two years of LAT Pass 8 data. This is a longer data set than is described in the [Extract LAT Data](../DataSelection/1.ExtractLATData.ipynb) tutorial. >**NOTE**: The ROI used by the binned likelihood analysis is defined by the 3D counts map boundary. The region selection used in the data extraction step, which is conical, must fully contain the 3D counts map spatial boundary, which is square. Selection of data: Search Center (RA, DEC) =(193.98, -5.82) Radius = 15 degrees Start Time (MET) = 239557417 seconds (2008-08-04 T15:43:37) Stop Time (MET) = 302572802 seconds (2010-08-04 T00:00:00) Minimum Energy = 100 MeV Maximum Energy = 500000 MeV This two-year dataset generates numerous data files. We provide the user with the original event data files and the accompanying spacecraft file: * L181126210218F4F0ED2738_PH00.fits (5.0 MB) * L181126210218F4F0ED2738_PH01.fits (10.5 MB) * L181126210218F4F0ED2738_PH02.fits (6.5 MB) * L181126210218F4F0ED2738_PH03.fits (9.2 MB) * L181126210218F4F0ED2738_PH04.fits (7.4 MB) * L181126210218F4F0ED2738_PH05.fits (6.2 MB) * L181126210218F4F0ED2738_PH06.fits (4.5 MB) * L181126210218F4F0ED2738_SC00.fits (256 MB spacecraft file) ``` !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH00.fits !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH01.fits !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH02.fits !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH03.fits !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH04.fits !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH05.fits !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH06.fits !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_SC00.fits !mkdir ./data !mv *.fits ./data !ls ./data ``` In order to combine the two events files for your analysis, you must first generate a text file listing the events files to be included. If you do not wish to download all the individual files, you can skip to the next step and retrieve the combined, filtered event file. However, you will need the spacecraft file to complete the analysis, so you should retrieve that now. To generate the file list, type: ``` !ls ./data/*_PH* > ./data/binned_events.txt ``` When analyzing point sources, it is recommended that you include events with high probability of being photons. To do this, you should use **gtselect** to cut on the event class, keeping only the SOURCE class events (event class 128, or as recommended in the Cicerone). In addition, since we do not wish to cut on any of the three event types (conversion type, PSF, or EDISP), we will use `evtype=3` (which corresponds to standard analysis in Pass 7). Note that `INDEF` is the default for evtype in gtselect. ```bash gtselect evclass=128 evtype=3 ``` Be aware that `evclass` and `evtype` are hidden parameters. So, to use them, you must type them on the command line. The text file you made (`binned_events.txt`) will be used in place of the input fits filename when running gtselect. The syntax requires that you use an @ before the filename to indicate that this is a text file input rather than a fits file. We perform a selection to the data we want to analyze. For this example, we consider the source class photons within our 15 degree region of interest (ROI) centered on the blazar 3C 279. For some of the selections that we made with the data server and don't want to modify, we can use "INDEF" to instruct the tool to read those values from the data file header. Here, we are only filtering on event class (not on event type) and applying a zenith cut, so many of the parameters are designated as "INDEF". We apply the **gtselect** tool to the data file as follows: ``` %%bash gtselect evclass=128 evtype=3 @./data/binned_events.txt ./data/3C279_binned_filtered.fits INDEF INDEF INDEF INDEF INDEF 100 500000 90 ``` In the last step we also selected the energy range and the maximum zenith angle value (90 degrees) as suggested in Cicerone and recommended by the LAT instrument team. The Earth's limb is a strong source of background gamma rays and we can filter them out with a zenith-angle cut. The use of "zmax" in calculating the exposure allows for a more selective method than just using the ROI cuts in controlling the Earth limb contamination. The filtered data from the above steps are provided [here](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_filtered.fits). After the data selection is made, we need to select the good time intervals in which the satellite was working in standard data taking mode and the data quality was good. For this task we use **gtmktime** to select GTIs by filtering on information provided in the spacecraft file. The current **gtmktime** filter expression recommended by the LAT team in the Cicerone is: ``` (DATA_QUAL>0)&&(LAT_CONFIG==1) ``` This excludes time periods when some spacecraft event has affected the quality of the data; it ensures the LAT instrument was in normal science data-taking mode. Here is an example of running **gtmktime** for our analysis of the region surrounding 3C 279. ``` %%bash gtmktime @./data/L181126210218F4F0ED2738_SC00.fits (DATA_QUAL>0)&&(LAT_CONFIG==1) no ./data/3C279_binned_filtered.fits ./data/3C279_binned_gti.fits ``` The data file with all the cuts described above is provided in this [link](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_gti.fits). A more detailed discussion of data selection can be found in the [Data Preparation](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data_preparation.html) analysis thread. To view the DSS keywords in a given extension of a data file, use the **gtvcut** tool and review the data cuts on the EVENTS extension. This provides a listing of the keywords reflecting each cut applied to the data file and their values, including the entire list of GTIs. (Use the option `suppress_gtis=no` to view the entire list.) ``` %%bash gtvcut suppress_gtis=no ./data/3C279_binned_gti.fits EVENTS ``` Here you can see the event class and event type, the location and radius of the data selection, as well as the energy range in MeV, the zenith angle cut, and the fact that the time cuts to be used in the exposure calculation are defined by the GTI table. Various Fermitools will be unable to run if you have multiple copies of a particular DSS keyword. This can happen if the position used in extracting the data from the data server is different than the position used with **gtselect**. It is wise to review the keywords for duplicates before proceeding. If you do have keyword duplication, it is advisable to regenerate the data file with consistent cuts. # 2. Make a counts map from the event data Next, we create a counts map of the ROI, summed over photon energies, in order to identify candidate sources and to ensure that the field looks sensible as a simple sanity check. For creating the counts map, we will use the [gtbin](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtbin.txt) tool with the option "CMAP" (no spacecraft file is necessary for this step). Then we will view the output file, as shown below: ``` %%bash gtbin CMAP ./data/3C279_binned_gti.fits ./data/3C279_binned_cmap.fits NONE 150 150 0.2 CEL 193.98 -5.82 0.0 AIT ``` We chose an ROI of 15 degrees, corresponding to 30 degrees in diameter. Since we want a pixel size of 0.2 degrees/pixel, then we must select 30/0.2=150 pixels for the size of the x and y axes. The last command launches the visualization tool _ds9_ and produces a display of the generated [counts](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_cmap.fits) map. <img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/images/BinnedLikelihood/3C279_binned_counts_map.png'> You can see several strong sources and a number of weaker sources in this map. Mousing over the positions of these sources shows that two of them are likely 3C 279 and 3C 273. It is important to inspect your data prior to proceeding to verify that the contents are as you expect. A malformed data query or improper data selection can generate a non-circular region, or a file with zero events. By inspecting your data prior to analysis, you have an opportunity to detect such issues early in the analysis. A more detailed discussion of data exploration can be found in the [Explore LAT Data](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/explore_latdata.html) analysis thread. # 3. Create a 3-D (binned) counts map Since the counts map shows the expected data, you are ready to prepare your data set for analysis. For binned likelihood analysis, the data input is a three-dimensional counts map with an energy axis, called a counts cube. The gtbin tool performs this task as well by using the `CCUBE` option. <img src="https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/images/BinnedLikelihood/square_in_circle.png"> The binning of the counts map determines the binning of the exposure calculation. The likelihood analysis may lose accuracy if the energy bins are not sufficiently narrow to accommodate more rapid variations in the effective area with decreasing energy below a few hundred MeV. For a typical analysis, ten logarithmically spaced bins per decade in energy are recommended. The analysis is less sensitive to the spatial binning and 0.2 deg bins are a reasonable standard. This counts cube is a square binned region that must fit within the circular acceptance cone defined during the data extraction step, and visible in the counts map above. To find the maximum size of the region your data will support, find the side of a square that can be fully inscribed within your circular acceptance region (multiply the radius of the acceptance cone by sqrt(2)). For this example, the maximum length for a side is 21.21 degrees. To create the counts cube, we run [gtbin](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtbin.txt) as follows: ``` %%bash gtbin CCUBE ./data/3C279_binned_gti.fits ./data/3C279_binned_ccube.fits NONE 100 100 0.2 CEL 193.98 -5.82 0.0 AIT LOG 100 500000 37 ``` [gtbin](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtbin.txt) takes the following as parameters: * Type of output file (CCUBE|CMAP|LC|PHA1|PHA2|HEALPIX) * Event data file name * Output file name * Spacecraft data file name * Size of the X axis in pixels * Size of the Y axis in pixels * Image scale (in degrees/pixel) * Coordindate system (CEL - celestial; GAL - galactic) (pick CEL or GAL) * First coordinate of image center in degrees (RA or galactic l) * Second coordinate of image center in degrees (DEC or galactic b) * Rotation angle of image axis, in degrees * Projection method (AIT|ARC|CAR|GLS|MER|NCP|SIN|STG|TAN) * Algorithm for defining energy bins (FILE|LIN|LOG) * Start value for first energy bin in MeV * Stop value for last energy bin in MeV * Number of logarithmically uniform energy bins The counts cube generated in this step is provided [here](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_ccube.fits). If you open the file with _ds9_, you see that it is made up of 37 images, one for each logarithmic energy bin. By playing through these images, it is easy to see how the PSF of the LAT changes with energy. You can also see that changing energy cuts could be helpful when trying to optimize the localization or spectral information for specific sources. Be sure to verify that there are no black corners on your counts cube. These corners correspond to regions with no data and will cause errors in your exposure calculations. # 4. Download the latest diffuse model files When you use the current Galactic diffuse emission model ([`gll_iem_v07.fits`](https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/aux/4fgl/gll_iem_v07.fits)) in a likelihood analysis, you also want to use the corresponding model for the extragalactic isotropic diffuse emission, which includes the residual cosmic-ray background. The recommended isotropic model for point source analysis is [`iso_P8R3_SOURCE_V2_v1.txt`](https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/aux/4fgl/iso_P8R3_SOURCE_V2_v1.txt). All the Pass 8 background models have been included in the Fermitools distribution, in the `$(FERMI_DIR)/refdata/fermi/galdiffuse/` directory. If you use that path in your model, you should not have to download the diffuse models individually. >**NOTE**: Keep in mind that the isotropic model needs to agree with both the event class and event type selections you are using in your analysis. The iso_P8R3_SOURCE_V2_v1.txt isotropic spectrum is valid only for the latest response functions and only for data sets with front + back events combined. All of the most up-to-date background models along with a description of the models are available [here](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html). # 5. Create a source model XML file The [gtlike](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtlike.txt) tool reads the source model from an XML file. The model file contains your best guess at the locations and spectral forms for the sources in your data. A source model can be created using the [model editor](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/modeleditor.txt) tool, by using the user contributed tool `make4FGLxml.py` (available at the [user-contributed tools](https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/) page), or by editing the file directly within a text editor. Here we cannot use the same source model that was used to analyze six months of data in the Unbinned Likelihood tutorial, as the 2-year data set contains many more significant sources and will not converge. Instead, we will use the 4FGL catalog to define our source model by running `make4FGLxml.py`. To run the script, you will need to download the current LAT catalog file and place it in your working directory: ``` !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/make4FGLxml.py !wget https://fermi.gsfc.nasa.gov/ssc/data/access/lat/8yr_catalog/gll_psc_v18.fit !mv make4FGLxml.py gll_psc_v18.fit ./data !python ./data/make4FGLxml.py ./data/gll_psc_v18.fit ./data/3C279_binned_gti.fits -o ./data/3C279_input_model.xml ``` Note that we are using a high level of significance so that we only fit the brightest sources, and we have forced the extended sources to be modeled as point sources. It is also necessary to specify the entire path to location of the diffuse model on your system. Clearly, the simple 4-source model we used for the 6-month [Unbinned Likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood_tutorial.html) analysis would have been too simplistic. This XML file uses the spectral model from the 4FGL catalog analysis for each source. (The catalog file is available at the [LAT 8-yr Catalog page](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/8yr_catalog/).) However, that analysis used a subset of the available spectral models. A dedicated analysis of the region may indicate a different spectral model is preferred. For more details on the options available for your XML models, see: * Descriptions of available [Spectral and Spatial Models](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/source_models.html) * Examples of [XML Model Definitions for Likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html) Finally, the `make4FGLxml.py` script automatically adds 10 degrees to your ROI to account for sources that lie outside your data region, but which may contribute photons to your data. In addition, it gives you the ability to free only some of the spectral parameters for sources within your ROI, and fixes them for the others. With hundreds of sources, there are too many free parameters to gain a good spectral fit. It is advisable to revise these values so that only sources near your source of interest, or very bright source, have all spectral parameters free. Farther away, you can fix the spectral form and free only the normalization parameter (or "prefactor"). If you are working in a crowded region or have nested sources (e.g. a point source on top of an extended source), you will probably want to fix parameters for some sources even if they lie close to your source of interest. Only the normalization parameter will be left free for the remaining sources within the ROI. We have also used the significance parameter (`-s`) of `make4FLGxml.py` to free only the brightest sources in our ROI. In addition, we used the `-v` flag to override that for sources that are significantly variable. Both these changes are necessary: having too many free parameters will not allow the fit to converge (see the section for the fitting step). ### XML for Extended Sources In some regions, the [make4FGLxml.py](https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/make4FGLxml.py) script may add one or more extended sources to your XML model. The script will provide the number of extended sources included in the model. In order to use these extended sources, you will need to downloaded the extended source templates from the [LAT Catalog](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/8yr_catalog/) page (look for "Extended Source template archive"). Extract the archive in the directory of your choice and note the path to the template files, which have names like `W44.fits` and `VelaX.fits`. You will need to provide the path to the template file to the script before you run it. Here is an example of the proper format for an extended source XML entry for Binned Likelihood analysis: ```xml <source name="SpatialMap_source" type="DiffuseSource"> <spectrum type="PowerLaw2"> <parameter free="1" max="1000.0" min="1e-05" name="Integral" scale="1e-06" value="1.0"/> <parameter free="1" max="-1.0" min="-5.0" name="Index" scale="1.0" value="-2.0"/> <parameter free="0" max="200000.0" min="20.0" name="LowerLimit" scale="1.0" value="20.0"/> <parameter free="0" max="200000.0" min="20.0" name="UpperLimit" scale="1.0" value="2e5"/> </spectrum> <spatialModel W44 file="$(PATH_TO_FILE)/W44.fits" type="SpatialMap" map_based_integral="true"> <parameter free="0" max="1000.0" min="0.001" name="Normalization" scale= "1.0" value="1.0"/> </spatialModel> </source> ``` # 6. Compute livetimes and exposure To speed up the exposure calculations performed by Likelihood, it is helpful to pre-compute the livetime as a function of sky position and off-axis angle. The [gtltcube](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/gtltcube.txt) tool creates a livetime cube, which is a [HealPix](http://healpix.jpl.nasa.gov/) table, covering the entire sky, of the integrated livetime as a function of inclination with respect to the LAT z-axis. Here is an example of how to run [gtltcube](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/gtltcube.txt): ``` %%bash gtltcube zmax=90 ./data/3C279_binned_gti.fits ./data/L181126210218F4F0ED2738_SC00.fits ./data/3C279_binned_ltcube.fits 0.025 1 ``` >**Note**: Values such as "0.1" for "Step size in cos(theta) are known to give unexpected results. Use "0.09" instead. The livetime cube generated from this analysis can be found [here](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_ltcube.fits). For more information about the livetime cubes see the documentation in the [Cicerone](https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Likelihood/) and also the explanation in the [Unbinned Likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood_tutorial.html) tutorial. # 7. Compute exposure map Next, you must apply the livetime calculated in the previous step to your region of interest. To do this, we use the [gtexpcube2](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtexpcube2.txt) tool, which is an updated version of the previous **gtexpcube**. This tool generates a binned exposure map, an accounting of the exposure at each position in the sky, that are a required input to the likelihood process. >**NOTE**: In the past, running **gtsrcmaps** calculated the exposure map for you, so most analyses skipped the binned exposure map generation step. With the introduction of **gtexpcube2**, this is no longer the case. You must explicitly command the creation of the exposure map as a separate analysis step. In order to create an exposure map that accounts for contributions from all the sources in your analysis region, you must consider not just the sources included in the counts cube. The large PSF of the LAT means that at low energies, sources from well outside your counts cube could affect the sources you are analyzing. To compensate for this, you must create an exposure map that includes sources up to 10 degrees outside your ROI. (The ROI is determined by the radius you downloaded from the data server, here a 15 degree radius.) In addition, you should account for all the exposure that contributes to those additional sources. Since the exposure map uses square pixels, to match the binning in the counts cube, and to ensure we don't have errors, we generate a 300x300 pixel map. If you provide [gtexpcube2](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtexpcube2.txt) a filename for your counts cube, it will use the information from that file to define the geometry of the exposure map. This is legacy behavior and will not give you the necessary 20° buffer you need to completely account for the exposure of nearby sources. (It will also cause an error in the next step.) Instead, you should specify the appropriate geometry for the exposure map, remembering that the counts cube used 0.2 degree pixel binning. To do that, enter `none` when asked for a Counts cube. **Note**: If you get a "`File not found`" error in the examples below, just put the IRF name in explicitly. The appropriate IRF for this data set is `P8R3_SOURCE_V2`. ``` %%bash gtexpcube2 ./data/3C279_binned_ltcube.fits none ./data/3C279_binned_expcube.fits P8R3_SOURCE_V2 300 300 .2 193.98 -5.82 0 AIT CEL 100 500000 37 ``` The generated exposure map can be found [here](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_expcube.fits). At this point, you may decide it is easier to simply generate exposure maps for the entire sky. You may be right, as it certainly simplifies the step when scripting. However, making an all-sky map increases the processing time for this step, though the increase is modest. To generate an all-sky exposure map (rather than the exposure map we calculated above) you need to specify the proper binning and explicitly give the number of pixels for the entire sky (360°x180°). Here is an example: ``` %%bash gtexpcube2 ./data/3C279_binned_ltcube.fits none ./data/3C279_binned_allsky_expcube.fits P8R3_SOURCE_V2 1800 900 .2 193.98 -5.82 0 AIT CEL 100 500000 37 ``` The all-sky exposure map can be found [here](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_allsky_expcube.fits). Just as in the [Unbinned Likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood_tutorial.html) analysis, the exposure needs to be recalculated if the ROI, zenith angle, time, event class, or energy selections applied to the data are changed. For the binned analysis, this also includes the spatial and energy binning of the 3D counts map (which affects the exposure map as well). # 8. Compute source map The [gtsrcmaps](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtsrcmaps.txt) tool creates model counts maps for use with the binned likelihood analysis. To do this, it takes each source spectrum in the XML model, multiplies it by the exposure at the source position, and convolves that exposure with the effective PSF. This is an example of how to run the tool: ``` %%bash gtsrcmaps ./data/3C279_binned_ltcube.fits ./data/3C279_binned_ccube.fits ./data/3C279_input_model.xml ./data/3C279_binned_allsky_expcube.fits ./data/3C279_binned_srcmaps.fits CALDB ``` The output file from [gtsrcmaps](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtsrcmaps.txt) can be found [here](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_srcmaps.fits). Because your model map can include sources outside your ROI, you may see a list of warnings at the beginning of the output. These are expected (because you have properly included sources outside your ROI in your XML file) and should cause no problem in your analysis. In addition, if your exposure map is too small for the region, you will see the following warning: ``` Caught St13runtime_error at the top level: Request for exposure at a sky position that is outside of the map boundaries. The contribution of the diffuse source outside of the exposure and counts map boundaries is being computed to account for PSF leakage into the analysis region. To handle this, use an all-sky binned exposure map. Alternatively, to neglect contributions outside of the counts map region, use the emapbnds=no option when running gtsrcmaps. ``` In this situation, you should increase the dimensions of your exposure map, or just move to the all-sky version. Source map generation for the point sources is fairly quick, and maps for many point sources may take up a lot of disk space. If you are analyzing a single long data set, it may be preferable to pre-compute only the source maps for the diffuse components at this stage. [gtlike](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtlike.txt) will compute maps for the point sources on the fly if they appear in the XML definition and a corresponding map is not in the source maps FITS file. To skip generating source maps for point sources, specify "`ptsrc=no`" on the command line when running **gtsrcmaps**. However, if you expect to perform multiple fits on the same set of data, precomputing the source maps will probably save you time. # 9. Run gtlike >NOTE: Prior to running **gtlike** for Unbinned Likelihood, it is necessary to calculate the diffuse response for each event (when that response is not precomputed). However, for Binned Likelihood analysis the diffuse response is calculated over the entire bin, so this step is not necessary. If you want to use the **energy dispersion correction** during your analysis, you must enable this feature using the environment variable `USE_BL_EDISP`. This may be set on the command line using: ```bash export USE_BL_EDISP=true ``` or, depending on your shell, ``` setenv USE_BL_EDISP=true ``` To disable the use of energy dispersion, you must unset the variable: ```bash unset USE_BL_EDISP ``` or ``` unsetenv USE_BL_EDISP ``` ```bash export USE_BL_EDISP=true ``` or, depending on your shell, ``` setenv USE_BL_EDISP=true ``` To disable the use of energy dispersion, you must unset the variable: ```bash unset USE_BL_EDISP ``` or ``` unsetenv USE_BL_EDISP ``` Now we are ready to run the [gtlike](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtlike.txt) application. Here, we request that the fitted parameters be saved to an output XML model file for use in later steps. ``` !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_output_model.xml %%bash gtlike refit=yes plot=yes sfile=./data/3C279_binned_output.xml BINNED ./data/3C279_binned_srcmaps.fits ./data/3C279_binned_allsky_expcube.fits ./data/3C279_binned_ltcube.fits ./data/3C279_input_model.xml CALDB NEWMINUIT ``` Most of the entries prompted for are fairly obvious. In addition to the various XML and FITS files, the user is prompted for a choice of IRFs, the type of statistic to use, and the optimizer. The statistics available are: * **UNBINNED**: This should be used for short timescale or low source count data. If this option is chosen then parameters for the spacecraft file, event file, and exposure file must be given. See explanation in: [Likelihood Tutorial]() * **BINNED**: This is a standard binned analysis as described in this tutorial. This analysis is used for long timescale or high-density data (such as in the Galactic plane) which can cause memory errors in the unbinned analysis. If this option is chosen then parameters for the source map file, livetime file, and exposure file must be given. There are five optimizers from which to choose: `DRMNGB`, `DRMNFB`, `NEWMINUIT`, `MINUIT` and `LBFGS`. Generally speaking, the faster way to find the parameter estimates is to use `DRMNGB` (or `DRMNFB`) to find initial values and then use `MINUIT` (or `NEWMINUIT`) to find more accurate results. If you have trouble achieving convergence at first, you can loosen your tolerance by setting the hidden parameter `ftol` on the command line. (The default value for `ftol` is `0.001`.) Analyzing a 2-year dataset will take many hours (in our case more than 2 days with a 32-bit machine with 1 GB of RAM). The required running time is high if your source is in the Galactic plane. Here is some output from our fit, where 4FGL J1229.0+0202 and 4FGL J1256.1-0547 corresponds to 3C 273 and 3C 279, respectively: ``` This is gtlike version ... Photon fluxes are computed for the energy range 100 to 500000 MeV 4FGL J1229.0+0202: norm: 8.16706 +/- 0.0894921 alpha: 2.49616 +/- 0.015028 beta: 0.104635 +/- 0.0105201 Eb: 279.04 TS value: 32017.6 Flux: 6.69253e-07 +/- 7.20102e-09 photons/cm^2/s 4FGL J1256.1-0547: norm: 2.38177 +/- 0.0296458 alpha: 2.25706 +/- 0.0116212 beta: 0.0665607 +/- 0.00757385 Eb: 442.052 TS value: 29261.7 Flux: 5.05711e-07 +/- 6.14833e-09 photons/cm^2/s ... gll_iem_v07: Prefactor: 0.900951 +/- 0.0235397 Index: 0 Scale: 100 Flux: 0.000469334 +/- 1.22608e-05 photons/cm^2/s iso_P8R3_SOURCE_V2_v1: Normalization: 1.13545 +/- 0.0422581 Flux: 0.000139506 +/- 5.19439e-06 photons/cm^2/s WARNING: Fit may be bad in range [100, 199.488] (MeV) WARNING: Fit may be bad in range [251.124, 316.126] (MeV) WARNING: Fit may be bad in range [6302.3, 7933.61] (MeV) WARNING: Fit may be bad in range [39744.4, 50032.1] (MeV) WARNING: Fit may be bad in range [315519, 397190] (MeV) Total number of observed counts: 207751 Total number of model events: 207407 -log(Likelihood): 73014.38504 Writing fitted model to 3C279_binned_output.xml ``` Since we selected `plot=yes` in the command line, a plot of the fitted data appears. <img src="https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/images/BinnedLikelihood/3C279_binned_spectral_fit.png"> In the first plot, the counts/MeV vs MeV are plotted. The points are the data, and the lines are the models. Error bars on the points represent sqrt(Nobs) in that band, where Nobs is the observed number of counts. The black line is the sum of the models for all sources. The colored lines follow the sources as follows: * Black - summed model * Red - first source (see below) * Green - second source * Blue - third source * Magenta - fourth source * Cyan - the fifth source If you have more sources, the colors are reused in the same order. In our case we have, in order of decreasing value on the y-axis: summed model (black), the extragalactic background (black), the galactic background (cyan), 3C 273 (red), and 3C 279 (black). The second plot gives the residuals between your model and the data. Error bars here represent (sqrt(Nopbs))/Npred, where Npred is the predicted number of counts in each band based on the fitted model. To assess the quality of the fit, look first for the words at the top of the output `<Optimizer> did successfully converge.` Successful convergence is a minimum requirement for a good fit. Next, look at the energy ranges that are generating warnings of bad fits. If any of these ranges affect your source of interest, you may need to revise the source model and refit. You can also look at the residuals on the plot (bottom panel). If the residuals indicate a poor fit overall (e.g., the points trending all low or all high) you should consider changing your model file, perhaps by using a different source model definition, and refit the data. If the fits and spectral shapes are good, but could be improved, you may wish to simply update your model file to hold some of the spectral parameters fixed. For example, by fixing the spectral model for 3C 273, you may get a better quality fit for 3C 279. Close the plot and you will be asked if you wish to refit the data. ``` Refit? [y] n Elapsed CPU time: 1571.805872 ``` Here, hitting `return` will instruct the application to fit again. We are happy with the result, so we type `n` and end the fit. ### Results When it completes, **gtlike** generates a standard output XML file. If you re-run the tool in the same directory, these files will be overwritten by default. Use the `clobber=no` option on the command line to keep from overwriting the output files. Unfortunately, the fit details and the value for the `-log(likelihood)` are not recorded in the automatic output files. You should consider logging the output to a text file for your records by using `> fit_data.txt` (or something similar) with your **gtlike** command. Be aware, however, that this will make it impossible to request a refit when the likelihood process completes. ``` !gtlike plot=yes sfile=./data/3C279_output_model.xml > fit_data.txt ``` In this example, we used the `sfile` parameter to request that the model results be written to an output XML file. This file contains the source model results that were written to `results.dat` at the completion of the fit. > **Note**: If you have specified an output XML model file and you wish to modify your model while waiting at the `Refit? [y]` prompt, you will need to copy the results of the output model file to your input model before making those modifications. The results of the likelihood analysis have to be scaled by the quantity called "scale" in the XML model in order to obtain the total photon flux (photons cm-2 s-1) of the source. You must refer to the model formula of your source for the interpretation of each parameter. In our example the 'prefactor' of our power law model of the first fitted source (4FGLJ1159.5-0723) has to be scaled by the factor 'scale'=10-14. For example the total flux of 4FGLJ1159.5-0723 is the integral between 100 MeV and 500000 MeV of: $Prefactor \cdot scale \cdot (E /100)^{index}=(6.7017x10-14) \cdot (E/100)^{-2.0196}$ Errors reported with each value in the `results.dat` file are 1σ estimates (based on inverse-Hessian at the optimum of the log-likelihood surface). ### Other Useful Hidden Parameters If you are scripting and wish to generate multiple output files without overwriting, the `results` and `specfile` parameters allow you to specify output filenames for the `results.dat` and `counts_spectra.fits` files respectively. If you do not specify a source model output file with the `sfile` parameter, then the input model file will be overwritten with the latest fit. This is convenient as it allows the user to edit that file while the application is waiting at the `Refit? [y]` prompt so that parameters can be adjusted and set free or fixed. This would be similar to the use of the "newpar", "freeze", and "thaw" commands in [XSPEC](http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/index.html). # 10. Create a model map For comparison to the counts map data, we create a model map of the region based on the fit parameters. This map is essentially an infinite-statistics counts map of the region-of-interest based on our model fit. The [gtmodel](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtmodel.txt) application reads in the fitted model, applies the proper scaling to the source maps, and adds them together to get the final map. ``` %%bash gtmodel ./data/3C279_binned_srcmaps.fits ./data/3C279_binned_output.xml ./data/3C279_model_map.fits CALDB ./data/3C279_binned_ltcube.fits ./data/3C279_binned_allsky_expcube.fits ``` To understand how well the fit matches the data, we want to compare the [model map](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_model_map.fits) just created with the counts map over the same field of view. First we have to create the [new counts map](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_cmap_small.fits) that matches in size the model map (the one generated in encircles the ROI, while the model map is completely inscribed within the ROI): We will use again the [gtbin](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtbin.txt) tool with the option `CMAP` as shown below: ``` %%bash gtbin CMAP ./data/3C279_binned_gti.fits ./data/3C279_binned_cmap_small.fits NONE 100 100 0.2 CEL 193.98 -5.82 0.0 STG ``` Here we've plotted the model map next to the the energy-summed counts map for the data. <img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/images/BinnedLikelihood/3C279_binned_map_comparison.png'> Finally we want to create the [residual map](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_residual.fits) by using the FTOOL **farith** to check if we can improve the model: ``` %%bash farith ./data/3C279_binned_cmap_small.fits ./data/3C279_model_map.fits ./data/3C279_residual.fits SUB ``` The residual map is shown below. As you can see, the binning we chose probably used pixels that were too large. The primary sources, 3C 273 and 3C 279, have some positive pixels next to some negative ones. This effect could be lessened by either using a smaller pixel size or by offsetting the central position slightly from the position of the blazar (or both). If your residual map contains bright sources, the next step would be to iterate the analysis with the additional sources included in the XML model file. <img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/images/BinnedLikelihood/3C279_binned_residuals.png'>
github_jupyter
``` from collections import Counter import numpy as np from csv import DictReader from keras.preprocessing.sequence import pad_sequences from keras.utils import np_utils from keras.models import Sequential, Model, load_model from keras.layers import concatenate, Embedding, Dense, Dropout, Activation, LSTM, CuDNNLSTM, CuDNNGRU,Flatten, Input, RepeatVector, TimeDistributed, Bidirectional from keras.optimizers import Adam, RMSprop from keras.callbacks import Callback, ModelCheckpoint, EarlyStopping, TensorBoard import codecs import pickle MAX_LEN_HEAD = 100 MAX_LEN_BODY = 500 VOCAB_SIZE = 15000 EMBEDDING_DIM = 300 def get_vocab(lst, vocab_size): """ lst: list of sentences """ vocabcount = Counter(w for txt in lst for w in txt.lower().split()) vocabcount = vocabcount.most_common(vocab_size) word2idx = {} idx2word = {} for i, word in enumerate(vocabcount): word2idx[word[0]] = i idx2word[i] = word[0] return word2idx, idx2word def cov2idx_unk(lst, word2idx): output = [] for sentence in lst: temp = [] for word in sentence.split(): if word in word2idx: temp.append(word2idx[word]) else: temp.append(word2idx['<unk>']) temp.append(word2idx['<unk>']) output.append(temp) return output def pad_seq(cov_lst, max_len=MAX_LEN_BODY): """ list of list of index converted from words """ pad_lst = pad_sequences(cov_lst, maxlen = max_len, padding='post') return pad_lst label_ref = {'agree': 0, 'disagree': 1, 'discuss': 2, 'unrelated': 3} def load_train_unk(file_instances, file_bodies): """ article: the name of the article file """ instance_lst = [] # Process file with open(file_instances, "r", encoding='utf-8') as table: r = DictReader(table) for line in r: instance_lst.append(line) body_lst = [] # Process file with open(file_bodies, "r", encoding='utf-8') as table: r = DictReader(table) for line in r: body_lst.append(line) heads = {} bodies = {} for instance in instance_lst: if instance['Headline'] not in heads: head_id = len(heads) heads[instance['Headline']] = head_id instance['Body ID'] = int(instance['Body ID']) for body in body_lst: bodies[int(body['Body ID'])] = body['articleBody'] headData = [] bodyData = [] labelData = [] for instance in instance_lst: headData.append(instance['Headline']) bodyData.append(bodies[instance['Body ID']]) labelData.append(label_ref[instance['Stance']]) word2idx, idx2word = get_vocab(headData+bodyData, VOCAB_SIZE) word2idx['<unk>'] = len(word2idx) cov_head = cov2idx_unk(headData, word2idx) cov_body = cov2idx_unk(bodyData, word2idx) remove_list = [] for i in range(len(cov_head)): if len(cov_head[i])>MAX_LEN_HEAD or len(cov_body[i])>MAX_LEN_BODY: remove_list.append(i) for idx in sorted(remove_list, reverse = True): cov_head.pop(idx) cov_body.pop(idx) labelData.pop(idx) pad_head = pad_seq(cov_head, MAX_LEN_HEAD) pad_body = pad_seq(cov_body, MAX_LEN_BODY) return pad_head, pad_body, labelData, word2idx, idx2word pad_head, pad_body, labelData, word2idx, idx2word = load_train_unk("train_stances.csv", "train_bodies.csv") #for training train_head = pad_head[:-1000] train_body = pad_body[:-1000] train_label = labelData[:-1000] val_head = pad_head[-1000:] val_body = pad_body[-1000:] val_label = labelData[-1000:] BATCH_SIZE = 128 NUM_LAYERS = 0 HIDDEN_DIM = 512 EPOCHS = 60 input_head = Input(shape=(MAX_LEN_HEAD,), dtype='int32', name='input_head') embed_head = Embedding(output_dim=EMBEDDING_DIM, input_dim=VOCAB_SIZE+1, input_length=MAX_LEN_HEAD)(input_head) gru_head = CuDNNGRU(128)(embed_head) # embed_head = Embedding(VOCAB_SIZE, EMBEDDING_DIM , input_length = MAX_LEN_HEAD, weights = [g_word_embedding_matrix], trainable=False) input_body = Input(shape=(MAX_LEN_BODY,), dtype='int32', name='input_body') embed_body = Embedding(output_dim=EMBEDDING_DIM, input_dim=VOCAB_SIZE+1, input_length=MAX_LEN_BODY)(input_body) gru_body = CuDNNGRU(128)(embed_body) # embed_body = Embedding(VOCAB_SIZE, EMBEDDING_DIM , input_length = MAX_LEN_BODY, weights = [g_word_embedding_matrix], trainable=False) concat = concatenate([gru_head, gru_body], axis = 1) x = Dense(400, activation='relu')(concat) x = Dropout(0.5)(x) x = Dense(400, activation='relu')(x) x = Dropout(0.5)(x) # And finally we add the main logistic regression layer main_output = Dense(4, activation='softmax', name='main_output')(x) model = Model(inputs=[input_head, input_body], outputs=main_output) model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics = ['accuracy']) model.summary() wt_dir = "./models/seqLSTM/" model_path = wt_dir+'biLSTM'+'{epoch:03d}'+'.h5' tensorboard = TensorBoard(log_dir='./Graph') model_checkpoint = ModelCheckpoint(model_path, save_best_only =False, period =2, save_weights_only = False) # model.fit([try_head, try_body], # try_label, # epochs=30, # validation_data=([try_head, try_body], try_label), # batch_size=BATCH_SIZE, # shuffle=True, # callbacks = [model_checkpoint, tensorboard]) model.fit([train_head, train_body], train_label, epochs=2*EPOCHS, validation_data=([val_head, val_body], val_label), batch_size=BATCH_SIZE, shuffle = True, callbacks=[model_checkpoint, tensorboard]) pickle.dump(word2idx, open("word2idx_GRU.pkl", "wb")) ```
github_jupyter
``` ##### Copyright 2021 The Cirq Developers #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Floquet calibration <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/cirq/tutorials/google/floquet"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/google/floquet.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/floquet.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/google/floquet.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a> </td> </table> This notebook demonstrates the Floquet calibration API, a tool for characterizing $\sqrt{\text{iSWAP}}$ gates and inserting single-qubit $Z$ phases to compensate for errors. This characterization is done by the Quantum Engine and the insertion of $Z$ phases for compensation/calibration is completely client-side with the help of Cirq utilities. At the highest level, the tool inputs a quantum circuit of interest (as well as a backend to run on) and outputs a calibrated circuit for this backend which can then be executed to produce better results. ## Details on the calibration tool In more detail, assuming we have a number-convserving two-qubit unitary gate, Floquet calibration (FC) returns fast, accurate estimates for the relevant angles to be calibrated. The `cirq.PhasedFSimGate` has five angles $\theta$, $\zeta$, $\chi$, $\gamma$, $\phi$ with unitary matrix $$ \left[ \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & \exp(-i \gamma - i \zeta) cos( \theta ) & -i \exp(-i \gamma + i \chi) sin( \theta ) & 0 \\ 0 & -i \exp(-i \gamma - i \chi) sin( \theta ) & \exp(-i \gamma + i \zeta) cos( \theta) & 0 \\ 0 & 0 & 0 & \exp(-2 i \gamma -i \phi ) \end{matrix} \right] $$ With Floquet calibration, every angle but $\chi$ can be calibrated. In experiments, we have found these angles change when gates are run in parallel. Because of this, we perform FC on entire moments of two-qubits gates and return different characterized angles for each. After characterizing a set of angles, one needs to adjust the circuit to compensate for the offset. The simplest adjustment is for $\zeta$ and $\gamma$ and works by adding $R_z$ gates before and after the two-qubit gates in question. For many circuits, even this simplest compensation can lead to a significant improvement in results. We provide methods for doing this in this notebook and analyze results for an example circuit. We do not attempt to correct the misaligned iSWAP rotation or the additional two-qubit phase in this notebook. This is a non-trivial task and we do currently have simple tools to achieve this. It is up to the user to correct for these as best as possible. Note: The Floquet calibration API and this documentation is ongoing work. The amount by which errors are reduced may vary from run to run and from circuit to circuit. ## Setup ``` try: import cirq except ImportError: print("installing cirq...") !pip install cirq --quiet print("installed cirq.") from typing import Iterable, List, Optional, Sequence import matplotlib.pyplot as plt import numpy as np import cirq import cirq_google as cg # Contains the Floquet calibration tools. ``` Note: In order to run on Google's Quantum Computing Service, an environment variable `GOOGLE_CLOUD_PROJECT` must be present and set to a valid Google Cloud Platform project identifier. If this is not satisfied, we default to an engine simulator. Running the next cell will prompt you to authenticate Google Cloud SDK to use your project. See the [Getting Started Guide](../tutorials/google/start.ipynb) for more information. Note: Leave `project_id` blank to use a noisy simulator. ``` # The Google Cloud Project id to use. project_id = '' #@param {type:"string"} if project_id == '': import os if 'GOOGLE_CLOUD_PROJECT' not in os.environ: print("No processor_id provided and environment variable " "GOOGLE_CLOUD_PROJECT not set, defaulting to noisy simulator.") processor_id = None engine = cg.PhasedFSimEngineSimulator.create_with_random_gaussian_sqrt_iswap( mean=cg.SQRT_ISWAP_PARAMETERS, sigma=cg.PhasedFSimCharacterization( theta=0.01, zeta=0.10, chi=0.01, gamma=0.10, phi=0.02 ), ) sampler = engine device = cg.Bristlecone line_length = 20 else: import os os.environ['GOOGLE_CLOUD_PROJECT'] = project_id def authenticate_user(): """Runs the user through the Colab OAuth process. Checks for Google Application Default Credentials and runs interactive login if the notebook is executed in Colab. In case the notebook is executed in Jupyter notebook or other IPython runtimes, no interactive login is provided, it is assumed that the `GOOGLE_APPLICATION_CREDENTIALS` env var is set or `gcloud auth application-default login` was executed already. For more information on using Application Default Credentials see https://cloud.google.com/docs/authentication/production """ in_colab = False try: from IPython import get_ipython in_colab = 'google.colab' in str(get_ipython()) except: # Notebook is not executed within IPython. Assuming external authentication. return if in_colab: from google.colab import auth print("Getting OAuth2 credentials.") print("Press enter after entering the verification code.") auth.authenticate_user(clear_output=False) print("Authentication complete.") else: print("Notebook is not executed with Colab, assuming Application Default Credentials are setup.") authenticate_user() print("Successful authentication to Google Cloud.") processor_id = "" #@param {type:"string"} engine = cg.get_engine() device = cg.get_engine_device(processor_id) sampler = cg.get_engine_sampler(processor_id, gate_set_name="sqrt_iswap") line_length = 35 ``` ## Minimal example for a single $\sqrt{\text{iSWAP}}$ gate To see how the API is used, we first show the simplest usage of Floquet calibration for a minimal example of one $\sqrt{\text{iSWAP}}$ gate. After this section, we show detailed usage with a larger circuit and analyze the results. The gates that are calibrated by Floquet calibration are $\sqrt{\text{iSWAP}}$ gates: ``` sqrt_iswap = cirq.FSimGate(np.pi / 4, 0.0) print(cirq.unitary(sqrt_iswap).round(3)) ``` First we get two connected qubits on the selected device and define a circuit. ``` """Define a simple circuit to use Floquet calibration on.""" qubits = cg.line_on_device(device, length=2) circuit = cirq.Circuit(sqrt_iswap.on(*qubits)) # Display it. print("Circuit to calibrate:\n") print(circuit) ``` The simplest way to use Floquet calibration is as follows. ``` """Simplest usage of Floquet calibration.""" calibrated_circuit, *_ = cg.run_zeta_chi_gamma_compensation_for_moments( circuit, engine, processor_id=processor_id, gate_set=cg.SQRT_ISWAP_GATESET ) ``` Note: Additional returned arguments, omitted here for simplicity, are described below. When we print out the returned `calibrated_circuit.circuit` below, we see the added $Z$ rotations to compensate for errors. ``` print("Calibrated circuit:\n") calibrated_circuit.circuit ``` This `calibrated_circuit` can now be executed on the processor to produce better results. ## More detailed example with a larger circuit We now use Floquet calibration on a larger circuit which models the evolution of a fermionic particle on a linear spin chain. The physics of this problem for a closed chain (here we use an open chain) has been studied in [Accurately computing electronic properties of materials using eigenenergies](https://arxiv.org/abs/2012.00921), but for the purposes of this notebook we can treat this just as an example to demonstrate Floquet calibration on. First we use the function `cirq_google.line_on_device` to return a line of qubits of a specified length. ``` line = cg.line_on_device(device, line_length) print(line) ``` This line is now broken up into a number of segments of a specified length (number of qubits). ``` segment_length = 5 segments = [line[i: i + segment_length] for i in range(0, line_length - segment_length + 1, segment_length)] ``` For example, the first segment consists of the following qubits. ``` print(*segments[0]) ``` We now implement a number of Trotter steps on each segment in parallel. The middle qubit on each segment is put into the $|1\rangle$ state, then each Trotter step consists of staggered $\sqrt{\text{iSWAP}}$ gates. All qubits are measured in the $Z$ basis at the end of the circuit. For convenience, this code is wrapped in a function. ``` def create_example_circuit( segments: Sequence[Sequence[cirq.Qid]], num_trotter_steps: int, ) -> cirq.Circuit: """Returns a linear chain circuit to demonstrate Floquet calibration on.""" circuit = cirq.Circuit() # Initial state preparation. for segment in segments: circuit += [cirq.X.on(segment[len(segment) // 2])] # Trotter steps. for step in range(num_trotter_steps): offset = step % 2 moment = cirq.Moment() for segment in segments: moment += cirq.Moment( [sqrt_iswap.on(a, b) for a, b in zip(segment[offset::2], segment[offset + 1::2])]) circuit += moment # Measurement. circuit += cirq.measure(*sum(segments, ()), key='z') return circuit ``` As an example, we show this circuit on the first segment of the line from above. ``` """Example of the linear chain circuit on one segment of the line.""" num_trotter_steps = 20 circuit_on_segment = create_example_circuit( segments=[segments[0]], num_trotter_steps=num_trotter_steps, ) print(circuit_on_segment.to_text_diagram(qubit_order=segments[0])) ``` The circuit we will use for Floquet calibration is this same pattern repeated on all segments of the line. ``` """Circuit used to demonstrate Floquet calibration.""" circuit = create_example_circuit( segments=segments, num_trotter_steps=num_trotter_steps ) ``` ### Execution on a simulator To establish a "ground truth," we first simulate a segment on a noiseless simulator. ``` """Simulate one segment on a simulator.""" nreps = 20_000 sim_result = cirq.Simulator().run(circuit_on_segment, repetitions=nreps) ``` ### Execution on the processor without Floquet calibration We now execute the full circuit on a processor without using Floquet calibration. ``` """Execute the full circuit on a processor without Floquet calibration.""" raw_results = sampler.run(circuit, repetitions=nreps) ``` ### Comparing raw results to simulator results For comparison we will plot densities (average measurement results) on each segment. Such densities are in the interval $[0, 1]$ and more accurate results are closer to the simulator results. To visualize results, we define a few helper functions. #### Helper functions Note: The functions in this section are just utilities for visualizing results and not essential for Floquet calibration. As such this section can be safely skipped or skimmed. The next cell defines two functions for returning the density (average measurement results) on a segment or on all segments. We can optionally post-select for measurements with a specific filling (particle number) - i.e., discard measurement results which don't obey this expected particle number. ``` def z_density_from_measurements( measurements: np.ndarray, post_select_filling: Optional[int] = 1 ) -> np.ndarray: """Returns density for one segment on the line.""" counts = np.sum(measurements, axis=1, dtype=int) if post_select_filling is not None: errors = np.abs(counts - post_select_filling) counts = measurements[(errors == 0).nonzero()] return np.average(counts, axis=0) def z_densities_from_result( result: cirq.Result, segments: Iterable[Sequence[cirq.Qid]], post_select_filling: Optional[int] = 1 ) -> List[np.ndarray]: """Returns densities for each segment on the line.""" measurements = result.measurements['z'] z_densities = [] offset = 0 for segment in segments: z_densities.append(z_density_from_measurements( measurements[:, offset: offset + len(segment)], post_select_filling) ) offset += len(segment) return z_densities ``` Now we define functions to plot the densities for the simulator, processor without Floquet calibration, and processor with Floquet calibration (which we will use at the end of this notebook). The first function is for a single segment, and the second function is for all segments. ``` #@title def plot_density( ax: plt.Axes, sim_density: np.ndarray, raw_density: np.ndarray, cal_density: Optional[np.ndarray] = None, raw_errors: Optional[np.ndarray] = None, cal_errors: Optional[np.ndarray] = None, title: Optional[str] = None, show_legend: bool = True, show_ylabel: bool = True, ) -> None: """Plots the density of a single segment for simulated, raw, and calibrated results. """ colors = ["grey", "orange", "green"] alphas = [0.5, 0.8, 0.8] labels = ["sim", "raw", "cal"] # Plot densities. for i, density in enumerate([sim_density, raw_density, cal_density]): if density is not None: ax.plot( range(len(density)), density, "-o" if i == 0 else "o", markersize=11, color=colors[i], alpha=alphas[i], label=labels[i] ) # Plot errors if provided. errors = [raw_errors, cal_errors] densities = [raw_density, cal_density] for i, (errs, dens) in enumerate(zip(errors, densities)): if errs is not None: ax.errorbar( range(len(errs)), dens, errs, linestyle='', color=colors[i + 1], capsize=8, elinewidth=2, markeredgewidth=2 ) # Titles, axes, and legend. ax.set_xticks(list(range(len(sim_density)))) ax.set_xlabel("Qubit index in segment") if show_ylabel: ax.set_ylabel("Density") if title: ax.set_title(title) if show_legend: ax.legend() def plot_densities( sim_density: np.ndarray, raw_densities: Sequence[np.ndarray], cal_densities: Optional[Sequence[np.ndarray]] = None, rows: int = 3 ) -> None: """Plots densities for simulated, raw, and calibrated results on all segments. """ if not cal_densities: cal_densities = [None] * len(raw_densities) cols = (len(raw_densities) + rows - 1) // rows fig, axes = plt.subplots( rows, cols, figsize=(cols * 4, rows * 3.5), sharey=True ) if rows == 1 and cols == 1: axes = [axes] elif rows > 1 and cols > 1: axes = [axes[row, col] for row in range(rows) for col in range(cols)] for i, (ax, raw, cal) in enumerate(zip(axes, raw_densities, cal_densities)): plot_density( ax, sim_density, raw, cal, title=f"Segment {i + 1}", show_legend=False, show_ylabel=i % cols == 0 ) # Common legend for all subplots. handles, labels = ax.get_legend_handles_labels() fig.legend(handles, labels) plt.tight_layout(pad=0.1, w_pad=1.0, h_pad=3.0) ``` #### Visualizing results Note: This section uses helper functions from the previous section to plot results. The code can be safely skimmed: emphasis should be on the plots. To visualize results, we first extract densities from the measurements. ``` """Extract densities from measurement results.""" # Simulator density. sim_density, = z_densities_from_result(sim_result,[circuit_on_segment]) # Processor densities without Floquet calibration. raw_densities = z_densities_from_result(raw_results, segments) ``` We first plot the densities on each segment. Note that the simulator densities ("sim") are repeated on each segment and the lines connecting them are just visual guides. ``` plot_densities(sim_density, raw_densities, rows=int(np.sqrt(line_length / segment_length))) ``` We can also look at the average and variance over the segments. ``` """Plot mean density and variance over segments.""" raw_avg = np.average(raw_densities, axis=0) raw_std = np.std(raw_densities, axis=0, ddof=1) plot_density( plt.gca(), sim_density, raw_density=raw_avg, raw_errors=raw_std, title="Average over segments" ) ``` In the next section, we will use Floquet calibration to produce better average results. After running the circuit with Floquet calibration, we will use these same visualizations to compare results. ### Execution on the processor with Floquet calibration There are two equivalent ways to use Floquet calibration which we outline below. A rough estimate for the time required for Floquet calibration is about 16 seconds per 10 qubits, plus 30 seconds of overhead, per calibrated moment. #### Simple usage The first way to use Floquet calibration is via the single function call used at the start of this notebook. Here, we describe the remaining returned values in addition to `calibrated_circuit`. Note: We comment out this section so Floquet calibration on the larger circuit is only executed once in the notebook. ``` # (calibrated_circuit, calibrations # ) = cg.run_zeta_chi_gamma_compensation_for_moments( # circuit, # engine, # processor_id=processor_id, # gate_set=cg.SQRT_ISWAP_GATESET # ) ``` The returned `calibrated_circuit.circuit` can then be run on the engine. The full list of returned arguments is as follows: * `calibrated_circuit.circuit`: The input `circuit` with added $Z$ rotations around each $\sqrt{\text{iSWAP}}$ gate to compensate for errors. * `calibrated_circuit.moment_to_calibration`: Provides an index of the matching characterization (index in calibrations list) for each moment of the `calibrated_circuit.circuit`, or `None` if the moment was not characterized (e.g., for a measurement outcome). * `calibrations`: List of characterization results for each characterized moment. Each characterization contains angles for each qubit pair. #### Step-by-step usage Note: This section is provided to see the Floquet calibration API at a lower level, but the results are identical to the "simple usage" in the previous section. The above function `cirq_google.run_floquet_phased_calibration_for_circuit` performs the following three steps: 1. Find moments within the circuit that need to be characterized. 2. Characterize them on the engine. 3. Apply corrections to the original circuit. To find moments that need to be characterized, we can do the following. ``` """Step 1: Find moments in the circuit that need to be characterized.""" (characterized_circuit, characterization_requests ) = cg.prepare_floquet_characterization_for_moments( circuit, options=cg.FloquetPhasedFSimCalibrationOptions( characterize_theta=False, characterize_zeta=True, characterize_chi=False, characterize_gamma=True, characterize_phi=False ) ) ``` The `characterization_requests` contain information on the operations (gate + qubit pairs) to characterize. ``` """Show an example characterization request.""" print(f"Total {len(characterization_requests)} moment(s) to characterize.") print("\nExample request") request = characterization_requests[0] print("Gate:", request.gate) print("Qubit pairs:", request.pairs) print("Options: ", request.options) ``` We now characterize them on the engine using `cirq_google.run_calibrations`. ``` """Step 2: Characterize moments on the engine.""" characterizations = cg.run_calibrations( characterization_requests, engine, processor_id=processor_id, gate_set=cg.SQRT_ISWAP_GATESET, max_layers_per_request=1, ) ``` The `characterizations` store characterization results for each pair in each moment, for example. ``` print(f"Total: {len(characterizations)} characterizations.") print() (pair, parameters), *_ = characterizations[0].parameters.items() print(f"Example pair: {pair}") print(f"Example parameters: {parameters}") ``` Finally, we apply corrections to the original circuit. ``` """Step 3: Apply corrections to the circuit to get a calibrated circuit.""" calibrated_circuit = cg.make_zeta_chi_gamma_compensation_for_moments( characterized_circuit, characterizations ) ``` The calibrated circuit can now be run on the processor. We first inspect the calibrated circuit to compare to the original. ``` print("Portion of calibrated circuit:") print("\n".join( calibrated_circuit.circuit.to_text_diagram(qubit_order=line).splitlines()[:9] + ["..."])) ``` Note again that $\sqrt{\text{iSWAP}}$ gates are padded by $Z$ phases to compensate for errors. We now run this calibrated circuit. ``` """Run the calibrated circuit on the engine.""" cal_results = sampler.run(calibrated_circuit.circuit, repetitions=nreps) ``` ### Comparing raw results to calibrated results We now compare results with and without Floquet calibration, again using the simulator results as a baseline for comparison. First we extract the calibrated densities. ``` """Extract densities from measurement results.""" cal_densities = z_densities_from_result(cal_results, segments) ``` Now we reproduce the same density plots from above on each segment, this time including the calibrated ("cal") results. ``` plot_densities( sim_density, raw_densities, cal_densities, rows=int(np.sqrt(line_length / segment_length)) ) ``` We also visualize the mean and variance of results over segments as before. ``` """Plot mean density and variance over segments.""" raw_avg = np.average(raw_densities, axis=0) raw_std = np.std(raw_densities, axis=0, ddof=1) cal_avg = np.average(cal_densities, axis=0) cal_std = np.std(cal_densities, axis=0, ddof=1) plot_density( plt.gca(), sim_density, raw_avg, cal_avg, raw_std, cal_std, title="Average over segments" ) ``` Last, we can look at density errors between raw/calibrated results and simulated results. ``` """Plot errors of raw vs calibrated results.""" fig, axes = plt.subplots(ncols=2, figsize=(15, 4)) axes[0].set_title("Error of the mean") axes[0].set_ylabel("Density") axes[1].set_title("Data standard deviation") colors = ["orange", "green"] labels = ["raw", "cal"] for index, density in enumerate([raw_densities, cal_densities]): color = colors[index] label = labels[index] average_density = np.average(density, axis=0) sites = list(range(len(average_density))) error = np.abs(average_density - sim_density) std_dev = np.std(density, axis=0, ddof=1) axes[0].plot(sites, error, color=color, alpha=0.6) axes[0].scatter(sites, error, color=color) axes[1].plot(sites, std_dev, label=label, color=color, alpha=0.6) axes[1].scatter(sites, std_dev, color=color) for ax in axes: ax.set_xticks(sites) ax.set_xlabel("Qubit index in segment") plt.legend(); ```
github_jupyter
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i> <i>Licensed under the MIT License.</i> # Hard Negative Sampling for Object Detection You built an object detection model, evaluated it on a test set, and are happy with its accuracy. Now you deploy the model in a real-world application and you may find that the model over-fires heavily, i.e. it detects objects where none are. This is a common problem in machine learning because our training set only contains a limited number of images, which is not sufficient to model the appearance of every object and every background in the world. Hard negative sampling (or hard negative mining) is a useful technique to address this problem. It is a way to make the model more robust to over-fitting by identifying images which are hard for the model and hence should be added to the training set. The technique is widely used when one has a large number of negative images however adding all to the training set would cause (i) training to become too slow; and (ii) overwhelm training with too high a ratio of negatives to positives. For many negative images the model likely already performs well and hence adding them to the training set would not improve accuracy. Therefore, we try to identify those negative images where the model is incorrect. Note that hard-negative mining is a special case of active learning where the task is to identify images which are hard for the model, annotate these images with the ground truth label, and to add them to the training set. *Hard* could be defined as the model being wrong, or as the model being uncertain about a prediction. # Overview In this notebook, we train our model on a training set <i>T</i> as usual, test the model on un-seen negative candidate images <i>U</i>, and see on which images in <i>U</i> the model over-fires. These images are then introduces into the training set <i>T</i> and the model is re-trained. As dataset, we use the *fridge objects* images (`watter_bottle`, `carton`, `can`, and `milk_bottle`), similar to the [01_training_introduction](./01_training_introduction.ipynb) notebook. <img src="./media/hard_neg.jpg" width="600"/> The overall hard negative mining process is as follows: * First, prepare training set <i>T</i> and negative-candidate set <i>U</i>. A small proportion of both sets are set aside for evaluation. * Second, load a pre-trained detection model. * Next, mine hard negatives by following steps as shown in the figure: 1. Train the model on <i>T</i>. 2. Score the model on <i>U</i>. 3. Identify `NEGATIVE_NUM` images in <i>U</i> where the model is most incorrect and add to <i>T</i>. * Finally, repeat these steps until the model stops improving. ``` import sys sys.path.append("../../") import os import matplotlib.pyplot as plt import numpy as np from PIL import Image import scrapbook as sb import torch import torchvision from torchvision import transforms from utils_cv.classification.data import Urls as UrlsIC from utils_cv.common.data import unzip_url from utils_cv.common.gpu import which_processor, is_windows from utils_cv.detection.data import Urls as UrlsOD from utils_cv.detection.dataset import DetectionDataset, get_transform from utils_cv.detection.model import DetectionLearner, get_pretrained_fasterrcnn from utils_cv.detection.plot import plot_detections, plot_grid # Change matplotlib backend so that plots are shown on windows machines if is_windows(): plt.switch_backend('TkAgg') print(f"TorchVision: {torchvision.__version__}") which_processor() # Ensure edits to libraries are loaded and plotting is shown in the notebook. %reload_ext autoreload %autoreload 2 %matplotlib inline ``` Default parameters. Choose `NEGATIVE_NUM` so that the number of negative images to be added at each iteration corresponds to roughly 10-20% of the total number of images in the training set. If `NEGATIVE_NUM` is too low, then too few hard negatives get added to make a noticeable difference. ``` # Path to training images, and to the negative images DATA_PATH = unzip_url(UrlsOD.fridge_objects_path, exist_ok=True) NEG_DATA_PATH = unzip_url(UrlsIC.fridge_objects_negatives_path, exist_ok=True) # Number of negative images to add to the training set after each negative mining iteration. # Here set to 10, but this value should be around 10-20% of the total number of images in the training set. NEGATIVE_NUM = 10 # Model parameters corresponding to the "fast_inference" parameters in the 03_training_accuracy_vs_speed notebook. EPOCHS = 10 LEARNING_RATE = 0.005 IM_SIZE = 500 BATCH_SIZE = 2 # Use GPU if available device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") print(f"Using torch device: {device}") assert str(device)=="cuda", "Model evaluation requires CUDA capable GPU" ``` ## 1. Prepare datasets We prepare our datasets in the following way: * Training images in `data.train_ds` which includes initially only *fridge objects* images, and after running hard-negative mining also negative images. * Negative candidate images in `neg_data.train_ds`. * Test images in `data.test_ds` to evaluate accuracy on *fridge objects* images, and in `neg_data.test_ds` to evaluate how often the model misfires on images which do not contain an object-of-interest. ``` # Model training dataset T, split into 75% training and 25% test data = DetectionDataset(DATA_PATH, train_pct=0.75) print(f"Positive dataset: {len(data.train_ds)} training images and {len(data.test_ds)} test images.") # Negative images split into hard-negative mining candidates U, and a negative test set. # Setting "allow_negatives=True" since the negative images don't have an .xml file with ground truth annotations neg_data = DetectionDataset(NEG_DATA_PATH, train_pct=0.80, batch_size=BATCH_SIZE, im_dir = "", allow_negatives = True, train_transforms = get_transform(train=False)) print(f"Negative dataset: {len(neg_data.train_ds)} candidates for hard negative mining and {len(neg_data.test_ds)} test images.") ``` ## 2. Prepare a model Initialize a pre-trained Faster R-CNN model similar to the [01_training_introduction](./01_training_introduction.ipynb) notebook. ``` # Pre-trained Faster R-CNN model detector = DetectionLearner(data, im_size=IM_SIZE) # Record after each mining iteration the validation accuracy and how many objects were found in the negative test set valid_accs = [] num_neg_detections = [] ``` ## 3. Train the model on *T* <a id='train'></a> Model training. As described at the start of this notebook, you likely need to repeat the steps from here until the end of the notebook several times to achieve optimal results. ``` # Fine-tune model. After each epoch prints the accuracy on the validation set. detector.fit(EPOCHS, lr=LEARNING_RATE, print_freq=30) ``` Show the accuracy on the validation set for this and all previous mining iterations. ``` # Get validation accuracy on test set at IOU=0.5:0.95 acc = float(detector.ap[-1]["bbox"]) valid_accs.append(acc) # Plot validation accuracy versus number of hard-negative mining iterations from utils_cv.common.plot import line_graph line_graph( values=(valid_accs), labels=("Validation"), x_guides=range(len(valid_accs)), x_name="Hard negative mining iteration", y_name="[email protected]:0.95", ) ``` ## 4. Score the model on *U* Run inference on all negative candidate images. The images where the model is most incorrect will later be added as hard negatives to the training set. ``` detections = detector.predict_dl(neg_data.train_dl, threshold=0) detections[0] ``` Count how many objects were detected in the negative test set. This number typically goes down dramatically after a few mining iterations, and is an indicator how much the model over-fires on unseen images. ``` # Count number of mis-detections on negative test set test_detections = detector.predict_dl(neg_data.test_dl, threshold=0) bbox_scores = [bbox.score for det in test_detections for bbox in det['det_bboxes']] num_neg_detections.append(len(bbox_scores)) # Plot from utils_cv.common.plot import line_graph line_graph( values=(num_neg_detections), labels=("Negative test set"), x_guides=range(len(num_neg_detections)), x_name="Hard negative mining iteration", y_name="Number of detections", ) ``` ## 5. Hard negative mining Use the negative candidate images where the model is most incorrect as hard negatives. ``` # For each image, get maximum score (i.e. confidence in the detection) over all detected bounding boxes in the image max_scores = [] for idx, detection in enumerate(detections): if len(detection['det_bboxes']) > 0: max_score = max([d.score for d in detection['det_bboxes']]) else: max_score = float('-inf') max_scores.append(max_score) # Use the n images with highest maximum score as hard negatives hard_im_ids = np.argsort(max_scores)[::-1] hard_im_ids = hard_im_ids[:NEGATIVE_NUM] hard_im_scores =[max_scores[i] for i in hard_im_ids] print(f"Indentified {len(hard_im_scores)} hard negative images with detection scores in range {min(hard_im_scores)} to {max(hard_im_scores):4.2f}") ``` Plot some of the identified hard negatives images. This will likely mistake objects which were not part of the training set as the objects-of-interest. ``` # Get image paths and ground truth boxes for the hard negative images dataset_ids = [detections[i]['idx'] for i in hard_im_ids] im_paths = [neg_data.train_ds.dataset.im_paths[i] for i in dataset_ids] gt_bboxes = [neg_data.train_ds.dataset.anno_bboxes[i] for i in dataset_ids] # Plot def _grid_helper(): for i in hard_im_ids: yield detections[i], neg_data, None, None plot_grid(plot_detections, _grid_helper(), rows=1) ``` ## 6. Add hard negatives to *T* We now add the identified hard negative images to the training set. ``` # Add identified hard negatives to training set data.add_images(im_paths, gt_bboxes, target = "train") print(f"Added {len(im_paths)} hard negative images. Now: {len(data.train_ds)} training images and {len(data.test_ds)} test images") print(f"Completed {len(valid_accs)} hard negative iterations.") # Preserve some of the notebook outputs sb.glue("valid_accs", valid_accs) sb.glue("hard_im_scores", list(hard_im_scores)) ``` ## Repeat Now, **repeat** all steps starting from "[3. Train the model on T](#train)" to re-train the model and the training set T with added and add more hard negative images to the training set. **Stop** once the accuracy `valid_accs` stopped improving and if the number of (mis)detections in the negative test set `num_neg_detections` stops decreasing.
github_jupyter
``` import os os.environ['CUDA_VISIBLE_DEVICES'] = '' os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'prepare/mesolitica-tpu.json' b2_application_key_id = os.environ['b2_application_key_id'] b2_application_key = os.environ['b2_application_key'] from google.cloud import storage client = storage.Client() bucket = client.bucket('mesolitica-tpu-general') best = '1050000' directory = 't5-3x-super-tiny-true-case-4k' !rm -rf output out {directory} !mkdir {directory} model = best blob = bucket.blob(f'{directory}/model.ckpt-{model}.data-00000-of-00002') blob.download_to_filename(f'{directory}/model.ckpt-{model}.data-00000-of-00002') blob = bucket.blob(f'{directory}/model.ckpt-{model}.data-00001-of-00002') blob.download_to_filename(f'{directory}/model.ckpt-{model}.data-00001-of-00002') blob = bucket.blob(f'{directory}/model.ckpt-{model}.index') blob.download_to_filename(f'{directory}/model.ckpt-{model}.index') blob = bucket.blob(f'{directory}/model.ckpt-{model}.meta') blob.download_to_filename(f'{directory}/model.ckpt-{model}.meta') blob = bucket.blob(f'{directory}/checkpoint') blob.download_to_filename(f'{directory}/checkpoint') blob = bucket.blob(f'{directory}/operative_config.gin') blob.download_to_filename(f'{directory}/operative_config.gin') with open(f'{directory}/checkpoint', 'w') as fopen: fopen.write(f'model_checkpoint_path: "model.ckpt-{model}"') from b2sdk.v1 import * info = InMemoryAccountInfo() b2_api = B2Api(info) application_key_id = b2_application_key_id application_key = b2_application_key b2_api.authorize_account("production", application_key_id, application_key) file_info = {'how': 'good-file'} b2_bucket = b2_api.get_bucket_by_name('malaya-model') tar = 't5-3x-super-tiny-true-case-4k-2021-09-10.tar.gz' os.system(f'tar -czvf {tar} {directory}') outPutname = f'finetuned/{tar}' b2_bucket.upload_local_file( local_file=tar, file_name=outPutname, file_infos=file_info, ) os.system(f'rm {tar}') import tensorflow as tf import tensorflow_datasets as tfds import t5 model = t5.models.MtfModel( model_dir=directory, tpu=None, tpu_topology=None, model_parallelism=1, batch_size=1, sequence_length={"inputs": 256, "targets": 256}, learning_rate_schedule=0.003, save_checkpoints_steps=5000, keep_checkpoint_max=3, iterations_per_loop=100, mesh_shape="model:1,batch:1", mesh_devices=["cpu:0"] ) !rm -rf output/* import gin from t5.data import sentencepiece_vocabulary DEFAULT_SPM_PATH = 'prepare/sp10m.cased.ms-en-4k.model' DEFAULT_EXTRA_IDS = 100 model_dir = directory def get_default_vocabulary(): return sentencepiece_vocabulary.SentencePieceVocabulary( DEFAULT_SPM_PATH, DEFAULT_EXTRA_IDS) with gin.unlock_config(): gin.parse_config_file(t5.models.mtf_model._operative_config_path(model_dir)) gin.bind_parameter("Bitransformer.decode.beam_size", 1) gin.bind_parameter("Bitransformer.decode.temperature", 0) gin.bind_parameter("utils.get_variable_dtype.slice_dtype", "float32") gin.bind_parameter( "utils.get_variable_dtype.activation_dtype", "float32") vocabulary = t5.data.SentencePieceVocabulary(DEFAULT_SPM_PATH) estimator = model.estimator(vocabulary, disable_tpu=True) import os checkpoint_step = t5.models.mtf_model._get_latest_checkpoint_from_dir(model_dir) model_ckpt = "model.ckpt-" + str(checkpoint_step) checkpoint_path = os.path.join(model_dir, model_ckpt) checkpoint_step, model_ckpt, checkpoint_path from mesh_tensorflow.transformer import dataset as transformer_dataset def serving_input_fn(): inputs = tf.placeholder( dtype=tf.string, shape=[None], name="inputs") batch_size = tf.shape(inputs)[0] padded_inputs = tf.pad(inputs, [(0, tf.mod(-tf.size(inputs), batch_size))]) dataset = tf.data.Dataset.from_tensor_slices(padded_inputs) dataset = dataset.map(lambda x: {"inputs": x}) dataset = transformer_dataset.encode_all_features(dataset, vocabulary) dataset = transformer_dataset.pack_or_pad( dataset=dataset, length=model._sequence_length, pack=False, feature_keys=["inputs"] ) dataset = dataset.batch(tf.cast(batch_size, tf.int64)) features = tf.data.experimental.get_single_element(dataset) return tf.estimator.export.ServingInputReceiver( features=features, receiver_tensors=inputs) out = estimator.export_saved_model('output', serving_input_fn, checkpoint_path=checkpoint_path) config = tf.ConfigProto() config.allow_soft_placement = True sess = tf.Session(config = config) meta_graph_def = tf.saved_model.loader.load( sess, [tf.saved_model.tag_constants.SERVING], out) saver = tf.train.Saver(tf.trainable_variables()) saver.save(sess, '3x-super-tiny-true-case-4k/model.ckpt') strings = [ n.name for n in tf.get_default_graph().as_graph_def().node if ('encoder' in n.op or 'decoder' in n.name or 'shared' in n.name or 'inputs' in n.name or 'output' in n.name or 'SentenceTokenizer' in n.name or 'self/Softmax' in n.name) and 'adam' not in n.name and 'Assign' not in n.name ] def freeze_graph(model_dir, output_node_names): if not tf.gfile.Exists(model_dir): raise AssertionError( "Export directory doesn't exists. Please specify an export " 'directory: %s' % model_dir ) checkpoint = tf.train.get_checkpoint_state(model_dir) input_checkpoint = checkpoint.model_checkpoint_path absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1]) output_graph = absolute_model_dir + '/frozen_model.pb' clear_devices = True with tf.Session(graph = tf.Graph()) as sess: saver = tf.train.import_meta_graph( input_checkpoint + '.meta', clear_devices = clear_devices ) saver.restore(sess, input_checkpoint) output_graph_def = tf.graph_util.convert_variables_to_constants( sess, tf.get_default_graph().as_graph_def(), output_node_names, ) with tf.gfile.GFile(output_graph, 'wb') as f: f.write(output_graph_def.SerializeToString()) print('%d ops in the final graph.' % len(output_graph_def.node)) freeze_graph('3x-super-tiny-true-case-4k', strings) import struct unknown = b'\xff\xff\xff\xff' def load_graph(frozen_graph_filename): with tf.gfile.GFile(frozen_graph_filename, 'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) for node in graph_def.node: if node.op == 'RefSwitch': node.op = 'Switch' for index in xrange(len(node.input)): if 'moving_' in node.input[index]: node.input[index] = node.input[index] + '/read' elif node.op == 'AssignSub': node.op = 'Sub' if 'use_locking' in node.attr: del node.attr['use_locking'] elif node.op == 'AssignAdd': node.op = 'Add' if 'use_locking' in node.attr: del node.attr['use_locking'] elif node.op == 'Assign': node.op = 'Identity' if 'use_locking' in node.attr: del node.attr['use_locking'] if 'validate_shape' in node.attr: del node.attr['validate_shape'] if len(node.input) == 2: node.input[0] = node.input[1] del node.input[1] if 'Reshape/shape' in node.name or 'Reshape_1/shape' in node.name: b = node.attr['value'].tensor.tensor_content arr_int = [int.from_bytes(b[i:i + 4], 'little') for i in range(0, len(b), 4)] if len(arr_int): arr_byte = [unknown] + [struct.pack('<i', i) for i in arr_int[1:]] arr_byte = b''.join(arr_byte) node.attr['value'].tensor.tensor_content = arr_byte if len(node.attr['value'].tensor.int_val): node.attr['value'].tensor.int_val[0] = -1 with tf.Graph().as_default() as graph: tf.import_graph_def(graph_def) return graph g = load_graph('3x-super-tiny-true-case-4k/frozen_model.pb') i = g.get_tensor_by_name('import/inputs:0') o = g.get_tensor_by_name('import/SelectV2_3:0') i, o test_sess = tf.Session(graph = g) import sentencepiece as spm sp_model = spm.SentencePieceProcessor() sp_model.Load(DEFAULT_SPM_PATH) string1 = 'FORMAT TERBUKA. FORMAT TERBUKA IALAH SUATU FORMAT FAIL UNTUK TUJUAN MENYIMPAN DATA DIGITAL, DI MANA FORMAT INI DITAKRIFKAN BERDASARKAN SPESIFIKASI YANG DITERBITKAN DAN DIKENDALIKAN PERTUBUHAN PIAWAIAN , SERTA BOLEH DIGUNA PAKAI KHALAYAK RAMAI .' string2 = 'Husein ska mkn ayam dkat kampng Jawa' strings = [string1, string2] [f'kes benar: {s}' for s in strings] %%time o_ = test_sess.run(o, feed_dict = {i: [f'kes benar: {s}' for s in strings]}) o_.shape for k in range(len(o_)): print(k, sp_model.DecodeIds(o_[k].tolist())) from tensorflow.tools.graph_transforms import TransformGraph transforms = ['add_default_attributes', 'remove_nodes(op=Identity, op=CheckNumerics)', 'fold_batch_norms', 'fold_old_batch_norms', 'quantize_weights(minimum_size=1536000)', #'quantize_weights(fallback_min=-10240, fallback_max=10240)', 'strip_unused_nodes', 'sort_by_execution_order'] pb = '3x-super-tiny-true-case-4k/frozen_model.pb' input_graph_def = tf.GraphDef() with tf.gfile.FastGFile(pb, 'rb') as f: input_graph_def.ParseFromString(f.read()) transformed_graph_def = TransformGraph(input_graph_def, ['inputs'], ['SelectV2_3'], transforms) with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f: f.write(transformed_graph_def.SerializeToString()) g = load_graph('3x-super-tiny-true-case-4k/frozen_model.pb.quantized') i = g.get_tensor_by_name('import/inputs:0') o = g.get_tensor_by_name('import/SelectV2_3:0') i, o test_sess = tf.InteractiveSession(graph = g) file = '3x-super-tiny-true-case-4k/frozen_model.pb.quantized' outPutname = 'true-case/3x-super-tiny-t5-4k-quantized/model.pb' b2_bucket.upload_local_file( local_file=file, file_name=outPutname, file_infos=file_info, ) file = '3x-super-tiny-true-case-4k/frozen_model.pb' outPutname = 'true-case/3x-super-tiny-t5-4k/model.pb' b2_bucket.upload_local_file( local_file=file, file_name=outPutname, file_infos=file_info, ) ```
github_jupyter
<a href="https://colab.research.google.com/github/cxbxmxcx/EatNoEat/blob/master/Chapter_9_Build_Nutritionist.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Imports ``` import tensorflow as tf import matplotlib.pyplot as plt import numpy as np import os import time from PIL import Image import pickle ``` Download Recipe Data ``` data_folder = 'data' recipes_zip = tf.keras.utils.get_file('recipes.zip', origin = 'https://www.dropbox.com/s/i1hvs96mnahozq0/Recipes5k.zip?dl=1', extract = True) print(recipes_zip) data_folder = os.path.dirname(recipes_zip) os.remove(recipes_zip) print(data_folder) ``` Setup Folder Paths ``` !dir /root/.keras/datasets data_folder = data_folder + '/Recipes5k/' annotations_folder = data_folder + 'annotations/' images_folder = data_folder + 'images/' print(annotations_folder) print(images_folder) %ls /root/.keras/datasets/Recipes5k/images/ ``` Extra Imports ``` from fastprogress.fastprogress import master_bar, progress_bar from IPython.display import Image from os import listdir from pickle import dump ``` Setup Convnet Application ``` use_NAS = False if use_NAS: IMG_SIZE = 224 # 299 for Inception, 224 for NASNet IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3) else: IMG_SIZE = 299 # 299 for Inception, 224 for NASNet IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3) def load_image(image_path): img = tf.io.read_file(image_path) img = tf.image.decode_jpeg(img, channels=3) img = tf.image.resize(img, (IMG_SIZE, IMG_SIZE)) if use_NAS: img = tf.keras.applications.nasnet.preprocess_input(img) else: img = tf.keras.applications.inception_v3.preprocess_input(img) return img, image_path foods_txt = tf.keras.utils.get_file('foods.txt', origin = 'https://www.dropbox.com/s/xyukyq62g98dx24/foods_cat.txt?dl=1') print(foods_txt) def get_nutrient_array(fat, protein, carbs): nutrients = np.array([float(fat)*4, float(protein)*4, float(carbs)*4]) nutrients /= np.linalg.norm(nutrients) return nutrients def get_category_array(keto, carbs, health): return np.array([float(keto)-5, float(carbs)-5, float(health)-5]) import csv def get_food_nutrients(nutrient_file): foods = {} with open(foods_txt) as csv_file: csv_reader = csv.reader(csv_file, delimiter=',') line_count = 0 for row in csv_reader: if line_count == 0: print(f'Column names are {", ".join(row)}') line_count += 1 else: categories = get_category_array(row[1],row[2],row[3]) foods[row[0]] = categories line_count += 1 print(f'Processed {line_count} lines.') return foods food_nutrients = get_food_nutrients(foods_txt) print(food_nutrients) def load_images(food_w_nutrients, directory): X = [] Y = [] i=0 mb = master_bar(listdir(directory)) for food_group in mb: try: for pic in progress_bar(listdir(directory + food_group), parent=mb, comment='food = ' + food_group): filename = directory + food_group + '/' + pic image, img_path = load_image(filename) if i < 5: print(img_path) i+=1 Y.append(food_w_nutrients[food_group]) X.append(image) except: continue return X,Y X, Y = load_images(food_nutrients, images_folder) print(len(X), len(Y)) tf.keras.backend.clear_session() if use_NAS: # Create the base model from the pre-trained model base_model = tf.keras.applications.NASNetMobile(input_shape=IMG_SHAPE, include_top=False, weights='imagenet') else: # Create the base model from the pre-trained model base_model = tf.keras.applications.InceptionResNetV2(input_shape=IMG_SHAPE, include_top=False, weights='imagenet') dataset = tf.data.Dataset.from_tensor_slices((X, Y)) dataset batches = dataset.batch(64) for image_batch, label_batch in batches.take(1): pass image_batch.shape train_size = int(len(X)*.8) test_size = int(len(X)*.2) batches = batches.shuffle(test_size) train_dataset = batches.take(train_size) test_dataset = batches.skip(train_size) test_dataset = test_dataset.take(test_size) feature_batch = base_model(image_batch) print(feature_batch.shape) base_model.trainable = True # Let's take a look to see how many layers are in the base model print("Number of layers in the base model: ", len(base_model.layers)) # Fine-tune from this layer onwards if use_NAS: fine_tune_at = 100 else: fine_tune_at = 550 # Freeze all the layers before the `fine_tune_at` layer for layer in base_model.layers[:fine_tune_at]: layer.trainable = False base_model.summary() ``` Add Regression Head ``` global_average_layer = tf.keras.layers.GlobalAveragePooling2D() feature_batch_average = global_average_layer(feature_batch) print(feature_batch_average.shape) prediction_layer = tf.keras.layers.Dense(3) prediction_batch = prediction_layer(feature_batch_average) print(prediction_batch.shape) model = tf.keras.Sequential([ base_model, global_average_layer, prediction_layer ]) base_learning_rate = 0.0001 model.compile(optimizer=tf.keras.optimizers.Nadam(lr=base_learning_rate), loss=tf.keras.losses.MeanAbsoluteError(), metrics=['mae', 'mse', 'accuracy']) model.summary() from google.colab import drive drive.mount('/content/gdrive') folder = '/content/gdrive/My Drive/Models' if os.path.isdir(folder) == False: os.makedirs(folder) # Include the epoch in the file name (uses `str.format`) checkpoint_path = folder + "/cp-{epoch:04d}.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) # Create a callback that saves the model's weights every 5 epochs cp_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_path, verbose=1, save_weights_only=True, period=5) history = model.fit(batches,epochs=25, callbacks=[cp_callback]) acc = history.history['accuracy'] loss = history.history['loss'] mae = history.history['mae'] mse = history.history['mse'] plt.figure(figsize=(8, 8)) plt.subplot(2, 1, 1) plt.plot(acc, label='Accuracy') plt.legend(loc='lower right') plt.ylabel('Accuracy') plt.ylim([min(plt.ylim()),1]) plt.title('Training Accuracy') plt.subplot(2, 1, 2) plt.plot(loss, label='Loss') plt.legend(loc='upper right') plt.ylabel('MAE') plt.ylim([0,5.0]) plt.title('Training Loss') plt.xlabel('epoch') plt.show() def get_test_images(): directory = '/content/' images = [] for file in listdir(directory): if file.endswith(".jpg"): images.append(file) return images images = get_test_images() print(images) ``` ``` #@title Image Prediction { run: "auto", vertical-output: true, display-mode: "form" } image_idx = 42 #@param {type:"slider", min:0, max:100, step:1} cnt = len(images) if cnt > 0: image_idx = image_idx if image_idx < cnt else cnt - 1 image = images[image_idx] x, _ = load_image(image) img = x[np.newaxis, ...] predict = model.predict(img) print(predict+5) print(image_idx,image) plt.imshow(x) ```
github_jupyter
#Build a regression model: Get started with R and Tidymodels for regression models ## Introduction to Regression - Lesson 1 #### Putting it into perspective ✅ There are many types of regression methods, and which one you pick depends on the answer you're looking for. If you want to predict the probable height for a person of a given age, you'd use `linear regression`, as you're seeking a **numeric value**. If you're interested in discovering whether a type of cuisine should be considered vegan or not, you're looking for a **category assignment** so you would use `logistic regression`. You'll learn more about logistic regression later. Think a bit about some questions you can ask of data, and which of these methods would be more appropriate. In this section, you will work with a [small dataset about diabetes](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). Imagine that you wanted to test a treatment for diabetic patients. Machine Learning models might help you determine which patients would respond better to the treatment, based on combinations of variables. Even a very basic regression model, when visualized, might show information about variables that would help you organize your theoretical clinical trials. That said, let's get started on this task! ![Artwork by \@allison_horst](../images/encouRage.jpg)<br>Artwork by @allison_horst ## 1. Loading up our tool set For this task, we'll require the following packages: - `tidyverse`: The [tidyverse](https://www.tidyverse.org/) is a [collection of R packages](https://www.tidyverse.org/packages) designed to makes data science faster, easier and more fun! - `tidymodels`: The [tidymodels](https://www.tidymodels.org/) framework is a [collection of packages](https://www.tidymodels.org/packages/) for modeling and machine learning. You can have them installed as: `install.packages(c("tidyverse", "tidymodels"))` The script below checks whether you have the packages required to complete this module and installs them for you in case some are missing. ``` if (!require("pacman")) install.packages("pacman") pacman::p_load(tidyverse, tidymodels) ``` Now, let's load these awesome packages and make them available in our current R session.(This is for mere illustration, `pacman::p_load()` already did that for you) ``` # load the core Tidyverse packages library(tidyverse) # load the core Tidymodels packages library(tidymodels) ``` ## 2. The diabetes dataset In this exercise, we'll put our regression skills into display by making predictions on a diabetes dataset. The [diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.rwrite1.txt) includes `442 samples` of data around diabetes, with 10 predictor feature variables, `age`, `sex`, `body mass index`, `average blood pressure`, and `six blood serum measurements` as well as an outcome variable `y`: a quantitative measure of disease progression one year after baseline. |Number of observations|442| |----------------------|:---| |Number of predictors|First 10 columns are numeric predictive| |Outcome/Target|Column 11 is a quantitative measure of disease progression one year after baseline| |Predictor Information|- age in years ||- sex ||- bmi body mass index ||- bp average blood pressure ||- s1 tc, total serum cholesterol ||- s2 ldl, low-density lipoproteins ||- s3 hdl, high-density lipoproteins ||- s4 tch, total cholesterol / HDL ||- s5 ltg, possibly log of serum triglycerides level ||- s6 glu, blood sugar level| > 🎓 Remember, this is supervised learning, and we need a named 'y' target. Before you can manipulate data with R, you need to import the data into R's memory, or build a connection to the data that R can use to access the data remotely. > The [readr](https://readr.tidyverse.org/) package, which is part of the Tidyverse, provides a fast and friendly way to read rectangular data into R. Now, let's load the diabetes dataset provided in this source URL: <https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html> Also, we'll perform a sanity check on our data using `glimpse()` and dsiplay the first 5 rows using `slice()`. Before going any further, let's also introduce something you will encounter often in R code 🥁🥁: the pipe operator `%>%` The pipe operator (`%>%`) performs operations in logical sequence by passing an object forward into a function or call expression. You can think of the pipe operator as saying "and then" in your code. ``` # Import the data set diabetes <- read_table2(file = "https://www4.stat.ncsu.edu/~boos/var.select/diabetes.rwrite1.txt") # Get a glimpse and dimensions of the data glimpse(diabetes) # Select the first 5 rows of the data diabetes %>% slice(1:5) ``` `glimpse()` shows us that this data has 442 rows and 11 columns with all the columns being of data type `double` <br> > glimpse() and slice() are functions in [`dplyr`](https://dplyr.tidyverse.org/). Dplyr, part of the Tidyverse, is a grammar of data manipulation that provides a consistent set of verbs that help you solve the most common data manipulation challenges <br> Now that we have the data, let's narrow down to one feature (`bmi`) to target for this exercise. This will require us to select the desired columns. So, how do we do this? [`dplyr::select()`](https://dplyr.tidyverse.org/reference/select.html) allows us to *select* (and optionally rename) columns in a data frame. ``` # Select predictor feature `bmi` and outcome `y` diabetes_select <- diabetes %>% select(c(bmi, y)) # Print the first 5 rows diabetes_select %>% slice(1:10) ``` ## 3. Training and Testing data It's common practice in supervised learning to *split* the data into two subsets; a (typically larger) set with which to train the model, and a smaller "hold-back" set with which to see how the model performed. Now that we have data ready, we can see if a machine can help determine a logical split between the numbers in this dataset. We can use the [rsample](https://tidymodels.github.io/rsample/) package, which is part of the Tidymodels framework, to create an object that contains the information on *how* to split the data, and then two more rsample functions to extract the created training and testing sets: ``` set.seed(2056) # Split 67% of the data for training and the rest for tesing diabetes_split <- diabetes_select %>% initial_split(prop = 0.67) # Extract the resulting train and test sets diabetes_train <- training(diabetes_split) diabetes_test <- testing(diabetes_split) # Print the first 3 rows of the training set diabetes_train %>% slice(1:10) ``` ## 4. Train a linear regression model with Tidymodels Now we are ready to train our model! In Tidymodels, you specify models using `parsnip()` by specifying three concepts: - Model **type** differentiates models such as linear regression, logistic regression, decision tree models, and so forth. - Model **mode** includes common options like regression and classification; some model types support either of these while some only have one mode. - Model **engine** is the computational tool which will be used to fit the model. Often these are R packages, such as **`"lm"`** or **`"ranger"`** This modeling information is captured in a model specification, so let's build one! ``` # Build a linear model specification lm_spec <- # Type linear_reg() %>% # Engine set_engine("lm") %>% # Mode set_mode("regression") # Print the model specification lm_spec ``` After a model has been *specified*, the model can be `estimated` or `trained` using the [`fit()`](https://parsnip.tidymodels.org/reference/fit.html) function, typically using a formula and some data. `y ~ .` means we'll fit `y` as the predicted quantity/target, explained by all the predictors/features ie, `.` (in this case, we only have one predictor: `bmi` ) ``` # Build a linear model specification lm_spec <- linear_reg() %>% set_engine("lm") %>% set_mode("regression") # Train a linear regression model lm_mod <- lm_spec %>% fit(y ~ ., data = diabetes_train) # Print the model lm_mod ``` From the model output, we can see the coefficients learned during training. They represent the coefficients of the line of best fit that gives us the lowest overall error between the actual and predicted variable. <br> ## 5. Make predictions on the test set Now that we've trained a model, we can use it to predict the disease progression y for the test dataset using [parsnip::predict()](https://parsnip.tidymodels.org/reference/predict.model_fit.html). This will be used to draw the line between data groups. ``` # Make predictions for the test set predictions <- lm_mod %>% predict(new_data = diabetes_test) # Print out some of the predictions predictions %>% slice(1:5) ``` Woohoo! 💃🕺 We just trained a model and used it to make predictions! When making predictions, the tidymodels convention is to always produce a tibble/data frame of results with standardized column names. This makes it easy to combine the original data and the predictions in a usable format for subsequent operations such as plotting. `dplyr::bind_cols()` efficiently binds multiple data frames column. ``` # Combine the predictions and the original test set results <- diabetes_test %>% bind_cols(predictions) results %>% slice(1:5) ``` ## 6. Plot modelling results Now, its time to see this visually 📈. We'll create a scatter plot of all the `y` and `bmi` values of the test set, then use the predictions to draw a line in the most appropriate place, between the model's data groupings. R has several systems for making graphs, but `ggplot2` is one of the most elegant and most versatile. This allows you to compose graphs by **combining independent components**. ``` # Set a theme for the plot theme_set(theme_light()) # Create a scatter plot results %>% ggplot(aes(x = bmi)) + # Add a scatter plot geom_point(aes(y = y), size = 1.6) + # Add a line plot geom_line(aes(y = .pred), color = "blue", size = 1.5) ``` > ✅ Think a bit about what's going on here. A straight line is running through many small dots of data, but what is it doing exactly? Can you see how you should be able to use this line to predict where a new, unseen data point should fit in relationship to the plot's y axis? Try to put into words the practical use of this model. Congratulations, you built your first linear regression model, created a prediction with it, and displayed it in a plot!
github_jupyter
``` # Code based on souce from https://machinelearningmastery.com/how-to-develop-a-pix2pix-gan-for-image-to-image-translation/ # Required imports for dataset import, preprocessing and compression """ GAN analysis file. Takes in trained .h5 files created while training the network. Generates test files from testing synthetic input photos (files the GAN has never seen before). Generates psnr and ssim ratings for each model/.h5 files and loads the results into excel files. """ from os import listdir import numpy from numpy import asarray from numpy import vstack from numpy import savez_compressed from numpy import load from numpy import expand_dims from numpy.random import randint from keras.preprocessing.image import img_to_array from keras.preprocessing.image import load_img from keras.models import load_model import matplotlib from matplotlib import pyplot import glob # Load images from a directory to memory def load_images(path, size=(256,256)): pic_list = list() # enumerate filenames in directory, assume all are images for filename in listdir(path): # load and resize the image (the resizing is not being used in our implementation) pixels = load_img(path + filename, target_size=size) # convert to numpy array pic_list.append(img_to_array(pixels)) return asarray(pic_list) # Load and prepare test or validation images from compressed image files to memory def load_numpy_images(filename): # Load the compressed numpy array(s) data = load(filename) img_sets =[] for item in data: img_sets.append((data[item]- 127.5) / 127.5) return img_sets # Plot source, generated and target images all in one output def plot_images(src_img, gen_img, tar_img): images = vstack((src_img, gen_img, tar_img)) # scale from [-1,1] to [0,1] images = (images + 1) / 2.0 titles = ['Source', 'Generated', 'Expected'] # plot images row by row for i in range(len(images)): # define subplot pyplot.subplot(1, 3, 1 + i) # turn off axis pyplot.axis('off') # plot raw pixel data pyplot.imshow(images[i]) # show title pyplot.title(titles[i]) pyplot.show() # Load a single image def load_image(filename, size=(256,256)): # load image with the preferred size pixels = load_img(filename, target_size=size) # convert to numpy array pixels = img_to_array(pixels) # scale from [0,255] to [-1,1] pixels = (pixels - 127.5) / 127.5 # reshape to 1 sample pixels = expand_dims(pixels, 0) return pixels #################### # Convert the training dataset to a compressed numpy array (NOT USED FOR METRICS) #################### # Source images path (synthetic images) path = 'data/training/synthetic/' src_images = load_images(path) # Ground truth images path path = 'data/training/gt/' tar_images = load_images(path) # Perform a quick check on shape and sizes print('Loaded: ', src_images.shape, tar_images.shape) # Save as a compressed numpy array filename = 'data/training/train_256.npz' savez_compressed(filename, src_images, tar_images) print('Saved dataset: ', filename) ################### # Convert the validation dataset to a compressed numpy array (.npz) ################### # Source images path path = 'data/validation/synthetic/' src_images = load_images(path) # Ground truth images path path = 'data/validation/gt/' tar_images = load_images(path) # Perform a quick check on shape and sizes print('Loaded: ', src_images.shape, tar_images.shape) # Save as a compressed numpy array filename = 'data/validation/validation_256.npz' savez_compressed(filename, src_images, tar_images) print('Saved dataset: ', filename) # Load the validation dataset from the compressed numpy array to memory img_sets = load_numpy_images('data/validation/validation_256.npz') src_images = img_sets[0] print('Loaded: ', src_images.shape) #tar_images = img_sets[1] #print('Loaded: ', tar_images.shape) # Gain some memory del img_sets # Get the list of gt image names so outputs can be named correctly path = 'data/validation/gt/' img_list = os.listdir(path) exp_path = 'models/exp6/' model_list = os.listdir(exp_path) # loop through model/.h5 files for model in model_list: model_dir = 'outputs/'+model[:-3] os.mkdir(model_dir) # load model weights to be used in the generator predictor = load_model(exp_path+model) names = 0 for i in range(0, len(src_images),10 ): # push image through generator gen_images = predictor.predict(src_images[i:i+10]) # name and export file for img in range(len(gen_images)): filename = model_dir+'/'+img_list[names] names += 1 matplotlib.image.imsave(filename, (gen_images[img]+1)/2.0) # Code to evaluate generated images from each model run for PNSR and SSIM import numpy as np import matplotlib.pyplot as plt import csv import os import re import cv2 import pandas as pd from skimage import data, img_as_float from skimage.metrics import structural_similarity as ssim from skimage.metrics import peak_signal_noise_ratio as psnr exp_dir = 'outputs/' # result director gt_dir = 'data/validation/gt/' # ground truth directory img_list = os.listdir(gt_dir) column_names =[] exp_list = [ f.name for f in os.scandir(exp_dir) if f.is_dir() ] for exp in exp_list: model_list = [ f.name for f in os.scandir('outputs/'+exp+'/') if f.is_dir() ] for model in model_list: column_names.append(exp+'_'+model) # create data frames for excel output psnr_df = pd.DataFrame(columns = column_names) ssim_df = pd.DataFrame(columns = column_names) i=0 psnr_master=[] ssim_master=[] for img in img_list: # loop through every image created by the generator i+=1 # load image and create a grayscale for ssim measurement gt = cv2.imread(gt_dir+img) gt_gray = cv2.cvtColor(gt, cv2.COLOR_BGR2GRAY) psnr_list=[] ssim_list =[] exp_list = [f.name for f in os.scandir(exp_dir) if f.is_dir()] # for each experiment for exp in exp_list: model_list = [ f.name for f in os.scandir('outputs/'+exp+'/') if f.is_dir() ] # for each generator weights/model (outputted h5 file from experiemnt) for model in model_list: pred = cv2.imread(exp_dir+exp+'/'+model+'/'+img) pred_gray = cv2.cvtColor(pred, cv2.COLOR_BGR2GRAY) # calculate psnr and ssim psnr_list.append(psnr(gt, pred, data_range=pred.max() - pred.min())) ssim_list.append(ssim(gt_gray, pred_gray, data_range=pred.max() - pred.min())) psnr_master.append(psnr_list) ssim_master.append(ssim_list) # export for excel use psnr_df = pd.DataFrame(psnr_master, columns = column_names) psnr_df.index = img_list psnr_df.to_csv("PSNR.csv") ssim_df = pd.DataFrame(ssim_master, columns = column_names) ssim_df.index = img_list ssim_df.to_csv("SSIM.csv") import PIL ['sidewalk winter -grayscale -gray_05189.jpg', 'sidewalk winter -grayscale -gray_07146.jpg', 'snow_animal_00447.jpg', 'snow_animal_03742.jpg', 'snow_intersection_00058.jpg', 'snow_nature_1_105698.jpg','snow_nature_1_108122.jpg','snow_nature_1_108523.jpg','snow_walk_00080.jpg','winter intersection -snow_00399.jpg','winter__street_03783.jpg','winter__street_05208.jpg'] pic_list_dims = [(426, 640), (538, 640), (640, 427), (432, 640), (480, 640), (640, 527), (480, 640), (427, 640), (640, 427), (502, 640), (269, 640), (427, 640)] i=0 # load an image def load_image(filename, size=(256,256)): # load image with the preferred size pixels = load_img(filename, target_size=size) # convert to numpy array pixels = img_to_array(pixels) # scale from [0,255] to [-1,1] pixels = (pixels - 127.5) / 127.5 # reshape to 1 sample pixels = expand_dims(pixels, 0) return pixels src_path = 'data/realistic_full/' src_filename = pic_list[i] src_image = load_image(src_path+src_filename) print('Loaded', src_image.shape) model_path = 'models/Experiment4/' model_filename = 'model_125000.h5' predictor = load_model(model_path+model_filename) gen_img = predictor.predict(src_image) # scale from [-1,1] to [0,1] gen_img = (gen_img[0] + 1) / 2.0 # plot the image pyplot.imshow(gen_img) pyplot.axis('off') pyplot.show() gen_path = 'final/' gen_filename = src_filename matplotlib.image.imsave(gen_path+gen_filename, gen_img) gen_img = load_img(gen_path+gen_filename, target_size=pic_list_dims[i]) pyplot.imshow(gen_img) pyplot.axis('off') pyplot.show() print(gen_img) gen_img.save(gen_path+gen_filename) pic_list = ['sidewalk winter -grayscale -gray_05189.jpg', 'sidewalk winter -grayscale -gray_07146.jpg', 'snow_animal_00447.jpg', 'snow_animal_03742.jpg', 'snow_intersection_00058.jpg', 'snow_nature_1_105698.jpg','snow_nature_1_108122.jpg','snow_nature_1_108523.jpg','snow_walk_00080.jpg','winter intersection -snow_00399.jpg','winter__street_03783.jpg','winter__street_05208.jpg'] src_path = 'data/realistic_full/' dims=[] for img in pic_list: pixels = load_img(src_path+img) dims.append(tuple(reversed(pixels.size))) print(dims) ```
github_jupyter
# Load and preprocess 2012 data We will, over time, look over other years. Our current goal is to explore the features of a single year. --- ``` %pylab --no-import-all inline import pandas as pd ``` ## Load the data. --- If this fails, be sure that you've saved your own data in the prescribed location, then retry. ``` file = "../data/interim/2012data.dta" df_rawest = pd.read_stata(file) good_columns = [#'campfin_limcorp', # "Should gov be able to limit corporate contributions" 'pid_x', # Your own party identification 'abortpre_4point', # Abortion 'trad_adjust', # Moral Relativism 'trad_lifestyle', # "Newer" lifetyles 'trad_tolerant', # Moral tolerance 'trad_famval', # Traditional Families 'gayrt_discstd_x', # Gay Job Discrimination 'gayrt_milstd_x', # Gay Military Service 'inspre_self', # National health insurance 'guarpr_self', # Guaranteed Job 'spsrvpr_ssself', # Services/Spending 'aa_work_x', # Affirmative Action ( Should this be aapost_hire_x? ) 'resent_workway', 'resent_slavery', 'resent_deserve', 'resent_try', ] df_raw = df_rawest[good_columns] ``` ## Clean the data --- ``` def convert_to_int(s): """Turn ANES data entry into an integer. >>> convert_to_int("1. Govt should provide many fewer services") 1 >>> convert_to_int("2") 2 """ try: return int(s.partition('.')[0]) except ValueError: warnings.warn("Couldn't convert: "+s) return np.nan except AttributeError: return s def negative_to_nan(value): """Convert negative values to missing. ANES codes various non-answers as negative numbers. For instance, if a question does not pertain to the respondent. """ return value if value >= 0 else np.nan def lib1_cons2_neutral3(x): """Rearrange questions where 3 is neutral.""" return -3 + x if x != 1 else x def liblow_conshigh(x): """Reorder questions where the liberal response is low.""" return -x def dem_edu_special_treatment(x): """Eliminate negative numbers and {95. Other}""" return np.nan if x == 95 or x <0 else x df = df_raw.applymap(convert_to_int) df = df.applymap(negative_to_nan) df.abortpre_4point = df.abortpre_4point.apply(lambda x: np.nan if x not in {1, 2, 3, 4} else -x) df.loc[:, 'trad_lifestyle'] = df.trad_lifestyle.apply(lambda x: -x) # 1: moral relativism, 5: no relativism df.loc[:, 'trad_famval'] = df.trad_famval.apply(lambda x: -x) # Tolerance. 1: tolerance, 7: not df.loc[:, 'spsrvpr_ssself'] = df.spsrvpr_ssself.apply(lambda x: -x) df.loc[:, 'resent_workway'] = df.resent_workway.apply(lambda x: -x) df.loc[:, 'resent_try'] = df.resent_try.apply(lambda x: -x) df.rename(inplace=True, columns=dict(zip( good_columns, ["PartyID", "Abortion", "MoralRelativism", "NewerLifestyles", "MoralTolerance", "TraditionalFamilies", "GayJobDiscrimination", "GayMilitaryService", "NationalHealthInsurance", "StandardOfLiving", "ServicesVsSpending", "AffirmativeAction", "RacialWorkWayUp", "RacialGenerational", "RacialDeserve", "RacialTryHarder", ] ))) print("Variables now available: df") df_rawest.pid_x.value_counts() df.PartyID.value_counts() df.describe() df.head() df.to_csv("../data/processed/2012.csv") ```
github_jupyter
# Molecular Hydrogen H<sub>2</sub> Ground State Figure 7.1 from Chapter 7 of *Interstellar and Intergalactic Medium* by Ryden & Pogge, 2021, Cambridge University Press. Plot the ground state potential of the H<sub>2</sub> molecule (E vs R) and the bound vibration levels. Uses files with the H<sub>2</sub> potential curves tabulated by [Sharp, 1971, Atomic Data, 2, 119](https://ui.adsabs.harvard.edu/abs/1971AD......2..119S/abstract). All of the data files used are in the H2 subfolder that should accompany this notebook. ``` %matplotlib inline import math import numpy as np import pandas as pd import matplotlib import matplotlib.pyplot as plt from matplotlib.ticker import MultipleLocator, LogLocator, NullFormatter import warnings warnings.filterwarnings('ignore',category=UserWarning, append=True) ``` ## Standard Plot Format Setup the standard plotting format and make the plot. Fonts and resolution adopted follow CUP style. ``` figName = 'Fig7_1' # graphic aspect ratio = width/height aspect = 4.0/3.0 # 4:3 # Text width in inches - don't change, this is defined by the print layout textWidth = 6.0 # inches # output format and resolution figFmt = 'png' dpi = 600 # Graphic dimensions plotWidth = dpi*textWidth plotHeight = plotWidth/aspect axisFontSize = 10 labelFontSize = 8 lwidth = 0.5 axisPad = 5 wInches = textWidth hInches = wInches/aspect # Plot filename plotFile = f'{figName}.{figFmt}' # LaTeX is used throughout for markup of symbols, Times-Roman serif font plt.rc('text', usetex=True) plt.rc('font', **{'family':'serif','serif':['Times-Roman'],'weight':'bold','size':'16'}) # Font and line weight defaults for axes matplotlib.rc('axes',linewidth=lwidth) matplotlib.rcParams.update({'font.size':axisFontSize}) # axis and label padding plt.rcParams['xtick.major.pad'] = f'{axisPad}' plt.rcParams['ytick.major.pad'] = f'{axisPad}' plt.rcParams['axes.labelpad'] = f'{axisPad}' ``` ## H<sub>2</sub> energy level potential data H$_2$ $^{1}\Sigma_{g}^{+}$ ground state data from Sharp 1971: Potential curve: H2_1Sigma_g+_potl.dat: * interproton distance, r, in Angstroms * potential energy, V(r), in eV Vibrational levels: H2_1Sigma_g+_v.dat: * v = vibrational quantum number * eV = energy in eV * Rmin = minimum inter-proton distance in Angstroms * Rmax = maximum inter-proton distance in Angstroms ``` potlFile = './H2/H2_1Sigma_g+_potl.dat' vibFile = './H2/H2_1Sigma_g+_v.dat' data = pd.read_csv(potlFile,sep=r'\s+') gsR = np.array(data['R']) # radius in Angstroms gsE = np.array(data['eV']) # energy in eV data = pd.read_csv(vibFile,sep=r'\s+') v = np.array(data['v']) # vibrational quantum number vE = np.array(data['eV']) rMin = np.array(data['Rmin']) rMax = np.array(data['Rmax']) # plotting limits minR = 0.0 maxR = 5.0 minE = -0.5 maxE = 6.0 # Put labels on the vibrational levels? label_v = True ``` ### Make the Plot Plot the ground-state potential curve as a thick black line, then draw the vibrational energy levels. ``` fig,ax = plt.subplots() fig.set_dpi(dpi) fig.set_size_inches(wInches,hInches,forward=True) ax.tick_params('both',length=6,width=lwidth,which='major',direction='in',top='on',right='on') ax.tick_params('both',length=3,width=lwidth,which='minor',direction='in',top='on',right='on') plt.xlim(minR,maxR) ax.xaxis.set_major_locator(MultipleLocator(1)) plt.xlabel(r'Distance between protons r [\AA]',fontsize=axisFontSize) plt.ylim(minE,maxE) ax.yaxis.set_major_locator(MultipleLocator(1.0)) plt.ylabel(r'Potential energy V(r) [eV]',fontsize=axisFontSize) # plot the curves plt.plot(gsR,gsE,'-',color='black',lw=1.5,zorder=10) for i in range(len(v)): plt.plot([rMin[i],rMax[i]],[vE[i],vE[i]],'-',color='black',lw=0.5,zorder=9) if v[i]==0: plt.text(rMin[i]-0.05,vE[i],rf'$v={v[i]}$',ha='right',va='center',fontsize=labelFontSize) elif v[i]==13: plt.text(rMin[i]-0.05,vE[i],rf'${v[i]}$',ha='right',va='center',fontsize=labelFontSize) # plot and file plt.plot() plt.savefig(plotFile,bbox_inches='tight',facecolor='white') ```
github_jupyter
``` %matplotlib inline import numpy as np import sygma import matplotlib.pyplot as plt from galaxy_analysis.plot.plot_styles import * import galaxy_analysis.utilities.convert_abundances as ca def plot_settings(): fsize = 21 rc('text',usetex=False) rc('font',size=fsize) return sygma.sygma? s = {} metallicities = np.flip(np.array([0.02, 0.01, 0.006, 0.001, 0.0001])) for z in metallicities: print(z) s[z] = sygma.sygma(iniZ = z, sn1a_on=False, #sn1a_rate='maoz', #iniabu_table = 'yield_tables/iniabu/iniab1.0E-02GN93.ppn', imf_yields_range=[1,25], table = 'yield_tables/agb_and_massive_stars_C15_LC18_R_mix_resampled.txt', mgal = 1.0) yields = {} yields_agb = {} yields_no_agb = {} for z in metallicities: yields[z] = {} yields_agb[z] = {} yields_no_agb[z] = {} for i,e in enumerate(s[z].history.elements): index = s[z].history.elements.index(e) yields[z][e] = np.array(s[z].history.ism_elem_yield)[:,index] yields_agb[z][e] = np.array(s[z].history.ism_elem_yield_agb)[:,index] yields_no_agb[z][e] = yields[z][e] - yields_agb[z][e] for z in metallicities: print(np.array(s[0.0001].history. colors = {0.0001: 'C0', 0.001 : 'C1', 0.01 : 'C2', 0.02 : 'C3'} colors = {} for i,z in enumerate(metallicities): colors[z] = magma((i+1)/(1.0*np.size(metallicities)+1)) colors plot_settings() plot_elements = ['C','N','O','Mg','Ca','Mn','Fe','Sr','Ba'] fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True) fig.subplots_adjust(wspace=0,hspace=0) fig.set_size_inches(5*3,5*3) count = 0 for ax2 in all_ax: for ax in ax2: e = plot_elements[count] for z in [0.001,0.01]: # metallicities: label = z ax.plot(s[z].history.age[1:]/1.0E9, np.log10(yields_no_agb[z][e][1:] / yields_no_agb[0.0001][e][1:]), lw = 3, color = colors[z], label = label) # ax.semilogy() #ax.set_ylim(0,2) xy=(0.1,0.1) ax.annotate(e,xy,xy,xycoords='axes fraction') if e == 'O': ax.legend(loc='lower right') count += 1 ax.set_xlim(0,2.0) #ax.semilogy() ax.set_ylim(-0.5,0.5) ax.plot(ax.get_xlim(), [0,0], lw=2,ls='--',color='black') for i in np.arange(3): all_ax[(2,i)].set_xlabel('Time (Gyr)') all_ax[(i,0)].set_ylabel(r'[X/H] - [X/H]$_{0.0001}$') fig.savefig("X_H_lowz_comparison.png") plot_settings() plot_elements = ['C','N','O','Mg','Ca','Mn','Fe','Sr','Ba'] denom = 'Mg' fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True) fig.subplots_adjust(wspace=0,hspace=0) fig.set_size_inches(5*3,5*3) count = 0 for ax2 in all_ax: for ax in ax2: e = plot_elements[count] for z in [0.0001,0.001,0.01]: # metallicities: label = z yvals = ca.abundance_ratio_array(e, yields_no_agb[z][e][1:], denom, yields_no_agb[z][denom][1:],input_type='mass') yvals2 = ca.abundance_ratio_array(e, yields_no_agb[0.0001][e][1:], denom, yields_no_agb[0.0001][denom][1:],input_type='mass') if z == 0.0001 and e == 'Ca': print(yvals) ax.plot(s[z].history.age[1:]/1.0E9, yvals,# - yvals2, lw = 3, color = colors[z], label = label) # ax.semilogy() ax.set_ylim(-1,1) xy=(0.1,0.1) ax.annotate(e,xy,xy,xycoords='axes fraction') if e == 'O': ax.legend(loc='lower right') count += 1 ax.set_xlim(0,0.250) ax.plot(ax.get_xlim(),[0.0,0.0],lw=2,ls='--',color='black') for i in np.arange(3): all_ax[(2,i)].set_xlabel('Time (Gyr)') all_ax[(i,0)].set_ylabel(r'[X/Fe] - [X/Fe]$_{0.0001}$') fig.savefig("X_Mg.png") #fig.savefig("X_Fe_lowz_comparison.png") s1 = s[0.001] np.array(s1.history.sn1a_numbers)[ (s1.history.age/ 1.0E9 < 1.1)] * 5.0E4 def wd_mass(mproj, model = 'salaris'): if np.size(mproj) == 1: mproj = np.array([mproj]) wd = np.zeros(np.size(mproj)) if model == 'salaris': wd[mproj < 4.0] = 0.134 * mproj[mproj < 4.0] + 0.331 wd[mproj >= 4.0] = 0.047 * mproj[mproj >= 4.0] + 0.679 elif model == 'mist': wd[mproj < 2.85] = 0.08*mproj[mproj<2.85]+0.489 select=(mproj>2.85)*(mproj<3.6) wd[select]=0.187*mproj[select]+0.184 select=(mproj>3.6) wd[select]=0.107*mproj[select]+0.471 return wd plot_settings() plot_elements = ['C','N','O','Mg','Si','Ca','Fe','Sr','Ba'] fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True) fig.subplots_adjust(wspace=0,hspace=0) fig.set_size_inches(6*3,6*3) count = 0 for ax2 in all_ax: for ax in ax2: e = plot_elements[count] for z in metallicities: #[0.0001,0.001,0.01,0.02]: # metallicities: label = "Z=%.4f"%(z) y = 1.0E4 * 1.0E6 * (yields[z][e][1:] - yields[z][e][:-1]) / (s[z].history.age[1:] - s[z].history.age[:-1]) ax.plot(np.log10(s[z].history.age[1:]/1.0E6), y, #np.log10(yields_no_agb[z][e][1:] / yields_no_agb[0.0001][e][1:]), lw = 3, color = colors[z], label = label) # ax.semilogy() #ax.set_ylim(0,2) xy=(0.1,0.1) ax.annotate(e,xy,xy,xycoords='axes fraction') if e == 'Ba': ax.legend(loc='upper right') count += 1 ax.set_xlim(0.8,4.2) #ax.semilogx() ax.semilogy() ax.set_ylim(2.0E-9,12.0) ax.plot(ax.get_xlim(), [0,0], lw=2,ls='--',color='black') for i in np.arange(3): all_ax[(2,i)].set_xlabel('log(Time [Myr])') all_ax[(i,0)].set_ylabel(r'Rate [M$_{\odot}$ / (10$^4$ M$_{\odot}$) / Myr]') fig.savefig("C15_LC18_yields_rate.png") plot_settings() plot_elements = ['C','N','O','Mg','Si','Ca','Fe','Sr','Ba'] fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True) fig.subplots_adjust(wspace=0,hspace=0) fig.set_size_inches(6*3,6*3) count = 0 for ax2 in all_ax: for ax in ax2: e = plot_elements[count] for z in [0.0001,0.001,0.01,0.02]: # metallicities: label = "Z=%.4f"%(z) #y = 1.0E4 * 1.0E6 * (yields[z][e][1:] - yields[z][e][:-1]) / (s[z].history.age[1:] - s[z].history.age[:-1]) y = yields[z][e][1:] ax.plot(np.log10(s[z].history.age[1:]/1.0E6), y, #np.log10(yields_no_agb[z][e][1:] / yields_no_agb[0.0001][e][1:]), lw = 3, color = colors[z], label = label) # ax.semilogy() #ax.set_ylim(0,2) xy=(0.1,0.1) ax.annotate(e,xy,xy,xycoords='axes fraction') if e == 'Ba': ax.legend(loc='upper right') count += 1 ax.set_xlim(0.8,4.2) #ax.semilogx() ax.semilogy() ax.set_ylim(1.0E-5,2.0E-2) ax.plot(ax.get_xlim(), [0,0], lw=2,ls='--',color='black') for i in np.arange(3): all_ax[(2,i)].set_xlabel('log(Time [Myr])') all_ax[(i,0)].set_ylabel(r'Yield [M$_{\odot}$]') #/ (10$^4$ M$_{\odot}$)]') fig.savefig("C15_LC18_yields_total.png") plot_settings() plot_elements = ['C','N','O','Mg','Si','Ca','Fe','Sr','Ba'] fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True) fig.subplots_adjust(wspace=0,hspace=0) fig.set_size_inches(6*3,6*3) count = 0 for ax2 in all_ax: for ax in ax2: e = plot_elements[count] for z in [0.0001,0.001,0.01,0.02]: # metallicities: label = "Z=%.4f"%(z) #y = 1.0E4 * 1.0E6 * (yields[z][e][1:] - yields[z][e][:-1]) / (s[z].history.age[1:] - s[z].history.age[:-1]) y = 1.0E4 * yields[z][e][1:] ax.plot(np.log10(s[z].history.age[1:]/1.0E6), y, #np.log10(yields_no_agb[z][e][1:] / yields_no_agb[0.0001][e][1:]), lw = 3, color = colors[z], label = label) # ax.semilogy() #ax.set_ylim(0,2) xy=(0.1,0.1) ax.annotate(e,xy,xy,xycoords='axes fraction') if e == 'Ba': ax.legend(loc='upper right') count += 1 ax.set_xlim(0.8,4.2) #ax.semilogx() ax.semilogy() ax.set_ylim(1.0E-6,1.0E3) ax.plot(ax.get_xlim(), [0,0], lw=2,ls='--',color='black') for i in np.arange(3): all_ax[(2,i)].set_xlabel('log(Time [Myr])') all_ax[(i,0)].set_ylabel(r'Yield [M$_{\odot}$ / (10$^4$ M$_{\odot}$)]') fig.savefig("C15_LC18_yields_total.png") plot_settings() plot_elements = ['C','N','O','Mg','Si','Ca','Fe','Sr','Ba'] fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True) fig.subplots_adjust(wspace=0,hspace=0) fig.set_size_inches(6*3,6*3) count = 0 for ax2 in all_ax: for ax in ax2: e = plot_elements[count] for z in [0.0001,0.001,0.01,0.02]: # metallicities: label = "Z=%.4f"%(z) #y = 1.0E4 * 1.0E6 * (yields[z][e][1:] - yields[z][e][:-1]) / (s[z].history.age[1:] - s[z].history.age[:-1]) y = yields[z][e][1:] / yields[z][e][-1] ax.plot(np.log10(s[z].history.age[1:]/1.0E6), y, #np.log10(yields_no_agb[z][e][1:] / yields_no_agb[0.0001][e][1:]), lw = 3, color = colors[z], label = label) # ax.semilogy() #ax.set_ylim(0,2) xy=(0.1,0.1) ax.annotate(e,xy,xy,xycoords='axes fraction') if e == 'O': ax.legend(loc='lower right') count += 1 ax.set_xlim(0.8,4.2) #ax.semilogx() #ax.semilogy() ax.set_ylim(0,1.0) ax.plot(ax.get_xlim(), [0,0], lw=2,ls='--',color='black') for i in np.arange(3): all_ax[(2,i)].set_xlabel('log(Time [Myr])') all_ax[(i,0)].set_ylabel(r'Cumulative Fraction') fig.savefig("C15_LC18_yields_fraction.png") plot_settings() plot_elements = ['C','N','O','Mg','Si','Ca','Fe','Sr','Ba'] fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True) fig.subplots_adjust(wspace=0,hspace=0) fig.set_size_inches(6*3,6*3) count = 0 for ax2 in all_ax: for ax in ax2: e = plot_elements[count] for z in [0.0001,0.001,0.01,0.02]: # metallicities: label = "Z=%.4f"%(z) y = 1.0E6 * (yields[z][e][1:] - yields[z][e][:-1]) / (s[z].history.age[1:] - s[z].history.age[:-1]) / yields[z][e][-1] ax.plot(np.log10(s[z].history.age[1:]/1.0E6), y, #np.log10(yields_no_agb[z][e][1:] / yields_no_agb[0.0001][e][1:]), lw = 3, color = colors[z], label = label) # ax.semilogy() #ax.set_ylim(0,2) xy=(0.1,0.1) ax.annotate(e,xy,xy,xycoords='axes fraction') if e == 'Ba': ax.legend(loc='upper right') count += 1 ax.set_xlim(0.8,4.2) #ax.semilogx() ax.semilogy() ax.set_ylim(1.0E-5,1.0E-1) ax.plot(ax.get_xlim(), [0,0], lw=2,ls='--',color='black') for i in np.arange(3): all_ax[(2,i)].set_xlabel('log(Time [Myr])') all_ax[(i,0)].set_ylabel(r'Fractional Rate [Myr$^{-1}$]') fig.savefig("C15_LC18_yields_fractional_rate.png") ```
github_jupyter
``` import tensorflow as tf import numpy as np import keras import pandas as pd import matplotlib.pyplot as plt from sklearn.utils import shuffle import os import cv2 import random import keras.backend as K import sklearn from tensorflow.keras.models import Sequential, Model from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.layers import Dense, Dropout, Activation, Input, BatchNormalization, GlobalAveragePooling2D from tensorflow.keras import layers from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, EarlyStopping from tensorflow.keras.experimental import CosineDecay from tensorflow.keras.utils import to_categorical from tensorflow.keras.applications import EfficientNetB3 from tensorflow.keras.layers.experimental.preprocessing import RandomCrop,CenterCrop, RandomRotation %matplotlib inline from google.colab import drive drive.mount('/content/drive') ROOT_DIR = '/content/drive/MyDrive/Broner' train_data = pd.read_csv('/content/drive/MyDrive/Broner/MURA-v1.1/train_path_label.csv' , dtype=str) test_data = pd.read_csv('/content/drive/MyDrive/Broner/MURA-v1.1/valid_path_label.csv' , dtype=str) train_data train_shoulder = train_data[:1300] train_humerus = train_data[8379:9651] train_forearm = train_data[29940:31265] test_shoulder = test_data[1708:2100] test_forearm = test_data[659:960] test_humerus = test_data[1420:1708] def change_class(df,val): for i in range(len(df)): df['label'] = val return df temp = change_class(train_shoulder,'0') type(temp['label'][0]) train_shoulder = change_class(train_shoulder,'0') train_humerus = change_class(train_humerus,'1') train_forearm = change_class(train_forearm,'2') test_shoulder = change_class(test_shoulder,'0') test_humerus = change_class(test_humerus,'1') test_forearm = change_class(test_forearm,'2') train_data = pd.concat([train_shoulder , train_forearm , train_humerus] , ignore_index=True) train_data test_data = pd.concat([test_shoulder , test_forearm , test_humerus] , ignore_index=True) test_data train_data = train_data.sample(frac = 1) test_data = test_data.sample(frac = 1) from sklearn.model_selection import train_test_split x_train , x_val , y_train , y_val = train_test_split(train_data['0'] , train_data['label'] , test_size = 0.2 , random_state=42 , stratify=train_data['label']) val_data = pd.DataFrame() val_data['0']=x_val val_data['label']=y_val val_data.reset_index(inplace=True,drop=True) val_data print(len(train_data) , len(test_data) , len(val_data)) def preproc(image): image = image/255. image[:,:,0] = (image[:,:,0]-0.485)/0.229 image[:,:,1] = (image[:,:,1]-0.456)/0.224 image[:,:,2] = (image[:,:,2]-0.406)/0.225 return image train_datagen = keras.preprocessing.image.ImageDataGenerator( preprocessing_function = preproc, rotation_range=20, horizontal_flip=True, zoom_range = 0.15, validation_split = 0.1) test_datagen = keras.preprocessing.image.ImageDataGenerator( preprocessing_function = preproc) train_generator=train_datagen.flow_from_dataframe( dataframe=train_data, directory=ROOT_DIR, x_col="0", y_col="label", subset="training", batch_size=128, seed=42, shuffle=True, class_mode="sparse", target_size=(320,320)) valid_generator=train_datagen.flow_from_dataframe( dataframe=train_data, directory=ROOT_DIR, x_col="0", y_col="label", subset="validation", batch_size=128, seed=42, shuffle=True, class_mode="sparse", target_size=(320,320)) from tensorflow.python.keras.models import Sequential from tensorflow.python.keras.layers import Dropout, Flatten, Dense, Activation, Convolution2D, MaxPooling2D # TARGET_SIZE = 320 # cnn = Sequential() # cnn.add(Convolution2D(filters=32, kernel_size=5, padding ="same", input_shape=(TARGET_SIZE, TARGET_SIZE, 3), activation='relu')) # cnn.add(MaxPooling2D(pool_size=(3,3))) # cnn.add(Convolution2D(filters=64, kernel_size=3, padding ="same",activation='relu')) # cnn.add(MaxPooling2D(pool_size=(3,3))) # cnn.add(Convolution2D(filters=128, kernel_size=3, padding ="same",activation='relu')) # cnn.add(MaxPooling2D(pool_size=(3,3))) # cnn.add(Flatten()) # cnn.add(Dense(100, activation='relu')) # cnn.add(Dropout(0.5)) # cnn.add(Dense(3, activation='softmax')) # cnn.summary() from keras.layers.normalization import BatchNormalization from keras.layers import Dropout def make_model(metrics = None): base_model = keras.applications.InceptionResNetV2(input_shape=(*[320,320], 3), include_top=False, weights='imagenet') base_model.trainable = False model = tf.keras.Sequential([ base_model, keras.layers.GlobalAveragePooling2D(), keras.layers.Dense(512), BatchNormalization(), keras.layers.Activation('relu'), Dropout(0.5), keras.layers.Dense(256), BatchNormalization(), keras.layers.Activation('relu'), Dropout(0.4), keras.layers.Dense(128), BatchNormalization(), keras.layers.Activation('relu'), Dropout(0.3), keras.layers.Dense(64), BatchNormalization(), keras.layers.Activation('relu'), keras.layers.Dense(3, activation='softmax') ]) model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005), loss='sparse_categorical_crossentropy', metrics=metrics) return model # def exponential_decay(lr0): # def exponential_decay_fn(epoch): # if epoch>5 and epoch%3==0: # return lr0 * tf.math.exp(-0.1) # else: # return lr0 # return exponential_decay_fn # exponential_decay_fn = exponential_decay(0.01) # lr_scheduler = tf.keras.callbacks.LearningRateScheduler(exponential_decay_fn) # checkpoint_cb = tf.keras.callbacks.ModelCheckpoint("/content/drive/MyDrive/Broner/bone.h5", # save_best_only=True) checkpoint_path = "/content/drive/MyDrive/Broner/best.hdf5" cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, monitor='val_sparse_categorical_accuracy', save_best_only=True, save_weights_only=True, mode='max', verbose=1) model = make_model(metrics=['sparse_categorical_accuracy']) model.summary() # cnn = model LR = 0.0005 EPOCHS=20 STEPS=train_generator.n//train_generator.batch_size VALID_STEPS=valid_generator.n//valid_generator.batch_size # cnn.compile( # optimizer=tf.keras.optimizers.Adam(learning_rate=LR), # loss='sparse_categorical_crossentropy', # metrics=['sparse_categorical_accuracy']) # checkpoint_path = "/content/drive/MyDrive/Broner/best.hdf5" # cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, # monitor='val_sparse_categorical_accuracy', # save_best_only=True, # save_weights_only=True, # mode='max', # verbose=1) history = model.fit_generator( train_generator, steps_per_epoch=STEPS, epochs=EPOCHS, validation_data=valid_generator, callbacks=[cp_callback], validation_steps=VALID_STEPS) model.save('/content/drive/MyDrive/Broner/model.h5') plt.plot(history.history['sparse_categorical_accuracy']) plt.plot(history.history['val_sparse_categorical_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() from keras.models import load_model import h5py from keras.preprocessing import image m = load_model('/content/drive/MyDrive/Broner/model.h5') def new_answer(img): arr = np.empty(5, dtype=int) # img = image.load_img(path,target_size=(320,320)) img_tensor = image.img_to_array(img) img_tensor = np.expand_dims(img_tensor,axis = 0) img_tensor /= 255 img_tensor[:,:,0] = (img_tensor[:,:,0]-0.485)/0.229 img_tensor[:,:,1] = (img_tensor[:,:,1]-0.456)/0.224 img_tensor[:,:,2] = (img_tensor[:,:,2]-0.406)/0.225 ans = m.predict(img_tensor) return np.argmax(ans),ans img = cv2.imread('/content/drive/MyDrive/Broner/MURA-v1.1/valid/XR_SHOULDER/patient11187/study1_negative/image1.png') resized = cv2.resize(img, (320,320)) new_answer(resized) #Shoulder = '0' Humerus = '1' Forearm = '2' import seaborn as sns sns.set_theme(style="darkgrid") ax = sns.countplot(x="label", data=train_data) ```
github_jupyter
# Reinterpreting Tensors Sometimes the data in tensors needs to be interpreted as if it had different type or shape. For example, reading a binary file into memory produces a flat tensor of byte-valued data, which the application code may want to interpret as an array of data of specific shape and possibly different type. DALI provides the following operations which affect tensor metadata (shape, type, layout): * reshape * reinterpret * squeeze * expand_dims Thsese operations neither modify nor copy the data - the output tensor is just another view of the same region of memory, making these operations very cheap. ## Fixed Output Shape This example demonstrates the simplest use of the `reshape` operation, assigning a new fixed shape to an existing tensor. First, we'll import DALI and other necessary modules, and define a utility for displaying the data, which will be used throughout this tutorial. ``` import nvidia.dali as dali import nvidia.dali.fn as fn from nvidia.dali import pipeline_def import nvidia.dali.types as types import numpy as np def show_result(outputs, names=["Input", "Output"], formatter=None): if not isinstance(outputs, tuple): return show_result((outputs,)) outputs = [out.as_cpu() if hasattr(out, "as_cpu") else out for out in outputs] for i in range(len(outputs[0])): print(f"---------------- Sample #{i} ----------------") for o, out in enumerate(outputs): a = np.array(out[i]) s = "x".join(str(x) for x in a.shape) title = names[o] if names is not None and o < len(names) else f"Output #{o}" l = out.layout() if l: l += ' ' print(f"{title} ({l}{s})") np.set_printoptions(formatter=formatter) print(a) def rand_shape(dims, lo, hi): return list(np.random.randint(lo, hi, [dims])) ``` Now let's define out pipeline - it takes data from an external source and returns it both in original form and reshaped to a fixed square shape `[5, 5]`. Additionally, output tensors' layout is set to HW ``` @pipeline_def(device_id=0, num_threads=4, batch_size=3) def example1(input_data): np.random.seed(1234) inp = fn.external_source(input_data, batch=False, dtype=types.INT32) return inp, fn.reshape(inp, shape=[5, 5], layout="HW") pipe1 = example1(lambda: np.random.randint(0, 10, size=[25], dtype=np.int32)) pipe1.build() show_result(pipe1.run()) ``` As we can see, the numbers from flat input tensors have been rearranged into 5x5 matrices. ## Reshape with Wildcards Let's now consider a more advanced use case. Imagine you have some flattened array that represents a fixed number of columns, but the number of rows is free to vary from sample to sample. In that case, you can put a wildcard dimension by specifying its shape as `-1`. Whe using wildcards, the output is resized so that the total number of elements is the same as in the input. ``` @pipeline_def(device_id=0, num_threads=4, batch_size=3) def example2(input_data): np.random.seed(12345) inp = fn.external_source(input_data, batch=False, dtype=types.INT32) return inp, fn.reshape(inp, shape=[-1, 5]) pipe2 = example2(lambda: np.random.randint(0, 10, size=[5*np.random.randint(3, 10)], dtype=np.int32)) pipe2.build() show_result(pipe2.run()) ``` ## Removing and Adding Unit Dimensions There are two dedicated operators `squeeze` and `expand_dims` which can be used for removing and adding dimensions with unit extent. The following example demonstrates the removal of a redundant dimension as well as adding two new dimensions. ``` @pipeline_def(device_id=0, num_threads=4, batch_size=3) def example_squeeze_expand(input_data): np.random.seed(4321) inp = fn.external_source(input_data, batch=False, layout="CHW", dtype=types.INT32) squeezed = fn.squeeze(inp, axes=[0]) expanded = fn.expand_dims(squeezed, axes=[0, 3], new_axis_names="FC") return inp, fn.squeeze(inp, axes=[0]), expanded def single_channel_generator(): return np.random.randint(0, 10, size=[1]+rand_shape(2, 1, 7), dtype=np.int32) pipe_squeeze_expand = example_squeeze_expand(single_channel_generator) pipe_squeeze_expand.build() show_result(pipe_squeeze_expand.run()) ``` ## Rearranging Dimensions Reshape allows you to swap, insert or remove dimenions. The argument `src_dims` allows you to specify which source dimension is used for a given output dimension. You can also insert a new dimension by specifying -1 as a source dimension index. ``` @pipeline_def(device_id=0, num_threads=4, batch_size=3) def example_reorder(input_data): np.random.seed(4321) inp = fn.external_source(input_data, batch=False, dtype=types.INT32) return inp, fn.reshape(inp, src_dims=[1,0]) pipe_reorder = example_reorder(lambda: np.random.randint(0, 10, size=rand_shape(2, 1, 7), dtype=np.int32)) pipe_reorder.build() show_result(pipe_reorder.run()) ``` ## Adding and Removing Dimensions Dimensions can be added or removed by specifying `src_dims` argument or by using dedicated `squeeze` and `expand_dims` operators. The following example reinterprets single-channel data from CHW to HWC layout by discarding the leading dimension and adding a new trailing dimension. It also specifies the output layout. ``` @pipeline_def(device_id=0, num_threads=4, batch_size=3) def example_remove_add(input_data): np.random.seed(4321) inp = fn.external_source(input_data, batch=False, layout="CHW", dtype=types.INT32) return inp, fn.reshape(inp, src_dims=[1,2,-1], # select HW and add a new one at the end layout="HWC") # specify the layout string pipe_remove_add = example_remove_add(lambda: np.random.randint(0, 10, [1,4,3], dtype=np.int32)) pipe_remove_add.build() show_result(pipe_remove_add.run()) ``` ## Relative Shape The output shape may be calculated in relative terms, with a new extent being a multiple of a source extent. For example, you may want to combine two subsequent rows into one - doubling the number of columns and halving the number of rows. The use of relative shape can be combined with dimension rearranging, in which case the new output extent is a multiple of a _different_ source extent. The example below reinterprets the input as having twice as many _columns_ as the input had _rows_. ``` @pipeline_def(device_id=0, num_threads=4, batch_size=3) def example_rel_shape(input_data): np.random.seed(1234) inp = fn.external_source(input_data, batch=False, dtype=types.INT32) return inp, fn.reshape(inp, rel_shape=[0.5, 2], src_dims=[1,0]) pipe_rel_shape = example_rel_shape( lambda: np.random.randint(0, 10, [np.random.randint(1,7), 2*np.random.randint(1,5)], dtype=np.int32)) pipe_rel_shape.build() show_result(pipe_rel_shape.run()) ``` ## Reinterpreting Data Type The `reinterpret` operation can view the data as if it was of different type. When a new shape is not specified, the innermost dimension is resized accordingly. ``` @pipeline_def(device_id=0, num_threads=4, batch_size=3) def example_reinterpret(input_data): np.random.seed(1234) inp = fn.external_source(input_data, batch=False, dtype=types.UINT8) return inp, fn.reinterpret(inp, dtype=dali.types.UINT32) pipe_reinterpret = example_reinterpret( lambda: np.random.randint(0, 255, [np.random.randint(1,7), 4*np.random.randint(1,5)], dtype=np.uint8)) pipe_reinterpret.build() def hex_bytes(x): f = f"0x{{:0{2*x.nbytes}x}}" return f.format(x) show_result(pipe_reinterpret.run(), formatter={'int':hex_bytes}) ```
github_jupyter
<img src='./img/LogoWekeo_Copernicus_RGB_0.png' align='right' width='20%'></img> # Tutorial on basic land applications (data processing) Version 2 In this tutorial we will use the WEkEO Jupyterhub to access and analyse data from the Copernicus Sentinel-2 and products from the [Copernicus Land Monitoring Service (CLMS)](https://land.copernicus.eu/). A region in northern Corsica has been selected as it contains representative landscape features and process elements which can be used to demonstrate the capabilities and strengths of Copernicus space component and services. The tutorial comprises the following steps: 1. Search and download data: We will select and download a Sentinel-2 scene and the CLMS CORINE Land Cover (CLC) data from their original archive locations via WEkEO using the Harmonised Data Access (HAD) API. 2. [Read and view Sentinel-2 data](#load_sentinel2): Once downloaded, we will read and view the Sentinel-2 data in geographic coordinates as true colour image. 3. [Process and view Sentinel-2 data as a vegetation and other spectral indices](#sentinel2_ndvi): We will see how the vegetation density and health can be assessed from optical EO data to support crop and landscape management practices. 4. [Read and view the CLC data](#display_clc): Display the thematic CLC data with the correct legend. 5. [CLC2018 burnt area in the Sentinel-2 NDVI data](#CLC_burn_NDVI): The two products give different results, but they can be combined to provide more information. NOTE - This Jupyter Notebook contains additonal processing to demonstrate further functionality during the training debrief. <img src='./img/Intro_banner.jpg' align='center' width='100%'></img> ## <a id='load_sentinel2'></a>2. Load required Sentinel-2 bands and True Color image at 10 m spatial resolution Before we begin we must prepare our environment. This includes importing the various python libraries that we will need. ### Load required libraries ``` import os import rasterio as rio from rasterio import plot from rasterio.mask import mask from rasterio.plot import show_hist import matplotlib.pyplot as plt import geopandas as gpd from rasterio.plot import show from rasterio.plot import plotting_extent import zipfile from matplotlib import rcParams from pathlib import Path import numpy as np from matplotlib.colors import ListedColormap from matplotlib import cm from matplotlib import colors import warnings warnings.filterwarnings('ignore') from IPython.core.display import HTML from rasterio.warp import calculate_default_transform, reproject, Resampling import scipy.ndimage ``` The Sentinel-2 Multiple Spectral Imager (MSI) records 13 spectral bands across the visible and infrared portions of the electromagnetic spectrum at different spatial resolutions from 10 m to 60 m depending on their operation and use. There are currently two Sentinel-2 satellites in suitably phased orbits to give a revisit period of 5 days at the Equator and 2-3 days at European latitudes. Being an optical sensor they are of course also affected by cloud cover and illumination conditions. The two satellites have been fully operational since 2017 and record continuously over land and the adjacent coastal sea areas. Their specification represents a continuation and upgrade of the US Landsat system which has archive data stretching back to the mid 1980s. <img src='./img/S2_band_comp.png' align='center' width='50%'></img> For this training session we will only need a composite true colour image (made up of the blue green and red bands) and the individual bands for red (665 nm) and near infrared (833 nm). The cell below loads the required data. ``` #Download folder download_dir_path = os.path.join(os.getcwd(), 'data/from_wekeo') data_path = os.path.join(os.getcwd(), 'data') R10 = os.path.join(download_dir_path, 'S2A_MSIL2A_20170802T101031_N0205_R022_T32TNN_20170802T101051.SAFE/GRANULE/L2A_T32TNN_A011030_20170802T101051/IMG_DATA/R10m') #10 meters resolution folder b3 = rio.open(R10+'/L2A_T32TNN_20170802T101031_B03_10m.jp2') #green b4 = rio.open(R10+'/L2A_T32TNN_20170802T101031_B04_10m.jp2') #red b8 = rio.open(R10+'/L2A_T32TNN_20170802T101031_B08_10m.jp2') #near infrared TCI = rio.open(R10+'/L2A_T32TNN_20170802T101031_TCI_10m.jp2') #true color ``` ### Display True Color and False Colour Infrared images The true colour image for the Sentinel-2 data downloaded in the previous JN can be displayed as a plot to show we have the required area and assess other aspects such as the presence of cloud, cloud shadow, etc. In this case we selected region of northern Corsica showing the area around Bastia and the Tyrrhenian Sea out to the Italian island of Elba in the east. The area has typical Mediterranean vegetation with mountainous semi natural habitats and urban and agricultural areas along the coasts. The cell below displays the true colour image in its native WGS-84 coordinate reference system. The right hand plot shows the same image in false colour infrared format (FCIR). In this format the green band is displayed as blue, red as green and near infrared as red. Vegetated areas appear red and water is black. ``` fig, (ax, ay) = plt.subplots(1,2, figsize=(21,7)) show(TCI.read(), ax=ax, transform=TCI.transform, title = "TRUE COLOR") ax.set_ylabel("Northing (m)") # (WGS 84 / UTM zone 32N) ax.set_xlabel("Easting (m)") ax.ticklabel_format(axis = 'both', style = 'plain') # Function to normalize false colour infrared image def normalize(array): """Normalizes numpy arrays into scale 0.0 - 1.0""" array_min, array_max = array.min(), array.max() return ((array - array_min)/(array_max - array_min)) nir = b8.read(1) red = b4.read(1) green = b3.read(1) nirn = normalize(scipy.ndimage.zoom(nir,0.5)) redn = normalize(scipy.ndimage.zoom(red,0.5)) greenn = normalize(scipy.ndimage.zoom(green,0.5)) FCIR = np.dstack((nirn, redn, greenn)) FCIR = np.moveaxis(FCIR.squeeze(),-1,0) show(FCIR, ax=ay, transform=TCI.transform, title = "FALSE COLOR INFRARED") ay.set_ylabel("Northing (m)") # (WGS 84 / UTM zone 32N) ay.set_xlabel("Easting (m)") ay.ticklabel_format(axis = 'both', style = 'plain') ``` ## <a id='sentinel2_ndvi'></a>3. Process and view Sentinel-2 data as vegetation and other spectral indices Vegetation status is a combination of a number of properties of the vegetation related to growth, density, health and environmental factors. By making measurements of surface reflectance in the red and near infrared (NIR) parts of the spectrum optical instruments can summarise crop status through a vegetation index. The red region is related to chlorophyll absorption and the NIR is related to multiple scattering within leaf structures, therefore low red and high NIR represent healthy / dense vegetation. These values are summarised in the commonly used Normalised Difference Vegetation Index (NDVI). <img src='./img/ndvi.jpg' align='center' width='20%'></img> We will examine a small subset of the full image were we know differences in vegetation will be present due to natural and anthropogenic processes and calculate the NDVI to show how its value changes. We will also calculate a second spectral index, the Normalised Difference Water Index (NDWI), which emphasises water surfaces to compare to NDVI. To do this we'll first load some vector datasets for an area of interest (AOI) and some field boundaries. ### Open Vector Data ``` path_shp = os.path.join(os.getcwd(), 'shp') aoi = gpd.read_file(os.path.join(path_shp, 'WEkEO-Land-AOI-201223.shp')) LPSI = gpd.read_file(os.path.join(path_shp, 'LPIS-AOI-201223.shp')) ``` ### Check CRS of Vector Data Before we can use the vector data we must check the coordinate reference system (CRS) and then transpose them to the same CRS as the Sentinel-2 data. In this case we require all the data to be in the WGS 84 / UTM zone 32N CRS with the EPSG code of 32632. ``` print(aoi.crs) print(LPSI.crs) aoi_proj = aoi.to_crs(epsg=32632) #convert to WGS 84 / UTM zone 32N (Sentinel-2 crs) LPIS_proj = LPSI.to_crs(epsg=32632) print("conversion to S2 NDVI crs:") print(aoi_proj.crs) print(LPIS_proj.crs) ``` ### Calculate NDVI from red and near infrared bands First step is to calculate the NDVI for the whole image using some straightforward band maths and write out the result to a geoTIFF file. ``` nir = b8.read() red = b4.read() ndvi = (nir.astype(float)-red.astype(float))/(nir+red) meta = b4.meta meta.update(driver='GTiff') meta.update(dtype=rio.float32) with rio.open(os.path.join(data_path, 'S2_NDVI.tif'), 'w', **meta) as dst: dst.write(ndvi.astype(rio.float32)) ``` ### Calculate NDWI from green and near infrared bands The new step is to calculate the NDWI for the whole image using some straightforward band maths and write out the result to a geoTIFF file. ``` nir = b8.read() green = b3.read() ndwi = (green.astype(float) - nir.astype(float))/(nir+green) meta = b3.meta meta.update(driver='GTiff') meta.update(dtype=rio.float32) with rio.open(os.path.join(data_path, 'S2_NDWI.tif'), 'w', **meta) as dst: dst.write(ndwi.astype(rio.float32)) ``` ### Crop the extent of the NDVI and NDWI images to the AOI The file produced in the previous step is then cropped using the AOI geometry. ``` with rio.open(os.path.join(data_path, "S2_NDVI.tif")) as src: out_image, out_transform = mask(src, aoi_proj.geometry,crop=True) out_meta = src.meta.copy() out_meta.update({"driver": "GTiff", "height": out_image.shape[1], "width": out_image.shape[2], "transform": out_transform}) with rio.open(os.path.join(data_path, "S2_NDVI_masked.tif"), "w", **out_meta) as dest: dest.write(out_image) with rio.open(os.path.join(data_path, "S2_NDWI.tif")) as src: out_image, out_transform = mask(src, aoi_proj.geometry,crop=True) out_meta = src.meta.copy() out_meta.update({"driver": "GTiff", "height": out_image.shape[1], "width": out_image.shape[2], "transform": out_transform}) with rio.open(os.path.join(data_path, "S2_NDWI_masked.tif"), "w", **out_meta) as dest: dest.write(out_image) ``` ### Display NDVI and NDWI for the AOI The AOI represents an area of northern Corsica centred on the town of Bagnasca. To the west are mountains dominated by forests and woodlands of evergreen sclerophyll oaks which tend to give high values of NDVI intersperse by areas of grassland or bare ground occuring naturally or as a consequnce of forest fires. The patterns are more irregular and follow the terrain and hydrological features. The lowlands to east have been clear of forest for agriculture shown by a fine scale mosaic of regular geometric features representing crop fields with diffrerent NDVIs or the presence of vegetated boundary features. The lower values of NDVI (below zero) in the east are associated with the sea and the large lagoon of the Réserve naturelle de l'étang de Biguglia. As expected the NDWI gives high values for the open sea and lagoon areas of the image. Interestingly there are relatively high values for some of the fields in the coastal plane suggesting they may be flooded or irrigated. The bare surfaces have NDWI values below zero and the vegetated areas are lower still. The colour map used to display the NDVI uses a ramp from blue to green to emphasise the increasing density and vigour of vegetation at high NDVI values. If distinction are not so clear the cmap value can be change from "BuGn" or "RdBu" to something more appropriate with reference to the the available colour maps at [Choosing Colormaps in Matplotlib](https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html). ``` ndvi_aoi = rio.open(os.path.join(data_path, 'S2_NDVI_masked.tif')) fig, (az, ay) = plt.subplots(1,2, figsize=(21, 7)) # use imshow so that we have something to map the colorbar to image_hidden_1 = az.imshow(ndvi_aoi.read(1), cmap='BuGn') # LPIS_proj.plot(ax=ax, facecolor='none', edgecolor='k') image = show(ndvi_aoi, ax=az, cmap='BuGn', transform=ndvi_aoi.transform, title ="NDVI") fig.colorbar(image_hidden_1, ax=az) az.set_ylabel("Northing (m)") #(WGS 84 / UTM zone 32N) az.set_xlabel("Easting (m)") az.ticklabel_format(axis = 'both', style = 'plain') ndwi_aoi = rio.open(os.path.join(data_path, 'S2_NDWI_masked.tif')) # use imshow so that we have something to map the colorbar to image_hidden_1 = ay.imshow(ndwi_aoi.read(1), cmap='RdBu') # LPIS_proj.plot(ax=ax, facecolor='none', edgecolor='k') image = show(ndwi_aoi, ax=ay, cmap='RdBu', transform=ndwi_aoi.transform, title ="NDWI") fig.colorbar(image_hidden_1, ax=ay) ay.set_ylabel("Northing (m)") #(WGS 84 / UTM zone 32N) ay.set_xlabel("Easting (m)") ay.ticklabel_format(axis = 'both', style = 'plain') ``` ### Histogram of NDVI values If the NDVI values for the area are summarised as a histogram the two main levels of vegetation density / vigour become apprent. On the left of the plot there is a peak between NDVI values of -0.1 and 0.3 for the water and unvegetated areas together (with the water generally lower) and on the right the peak around an NDVI value of 0.8 is the dense forest and vigorous crops. The region in between shows spare vegetation, grassland and crops that are yet to mature. In the NDWI histogram there are multiple peaks representing the sea and lagoons, bare surfaes and vegetation respectively. The NDVI and NDWI can be used in combination to characterise regons within satellite images. ``` fig, axhist = plt.subplots(1,1) show_hist(ndvi_aoi, bins=100, masked=False, title='Histogram of NDVI values', facecolor = 'g', ax =axhist) axhist.set_xlabel('NDVI') axhist.set_ylabel('number of pixels') plt.gca().get_legend().remove() fig, axhist = plt.subplots(1,1) show_hist(ndwi_aoi, bins=100, masked=False, title='Histogram of NDWI values', facecolor = 'b', ax =axhist) axhist.set_xlabel('NDWI') axhist.set_ylabel('number of pixels') plt.gca().get_legend().remove() ``` ### NDVI index on a cultivation pattern area We can look in more detail at the agricultural area to see the patterns in the NDVI values caused by differential crop density and growth. As before we load a vector file containing an AOI, subset the original Sentinel-2 NDVI image. This time we over lay a set of field boundaries from the Land Parcel Information System (LPIS) which highlight some of the management units. This analysis gives us a representation of the biophysical properties of the surface at the time of image acquisition. ``` #Load shapefile of the AOIs cult_zoom = gpd.read_file(os.path.join(path_shp, 'complex_cultivation_patterns_zoom.shp')) #Subset the Sentinel-2 NDVI image with rio.open(os.path.join(data_path, "S2_NDVI.tif")) as src: out_image, out_transform = mask(src, cult_zoom.geometry,crop=True) out_meta = src.meta.copy() out_meta.update({"driver": "GTiff", "height": out_image.shape[1], "width": out_image.shape[2], "transform": out_transform}) with rio.open(os.path.join(data_path, "NDVI_cultivation_area.tif"), "w", **out_meta) as dest: dest.write(out_image.astype(rio.float32)) #Display the results with the LPIS rcParams['axes.titlepad'] = 20 src_cult = rio.open(os.path.join(data_path, "NDVI_cultivation_area.tif")) fig, axg = plt.subplots(figsize=(21, 7)) image_hidden_1 = axg.imshow(src_cult.read(1), cmap='BuGn') LPIS_proj.plot(ax=axg, facecolor='none', edgecolor='k') show(src_cult, ax=axg, cmap='BuGn', transform=src_cult.transform, title='NDVI - Complex cultivation patterns') fig.colorbar(image_hidden_1, ax=axg) axg.set_ylabel("Northing (m)") #(WGS 84 / UTM zone 32N) axg.set_xlabel("Easting (m)") plt.subplots_adjust(bottom=0.1, right=0.6, top=0.9) axg.ticklabel_format(axis = 'both', style = 'plain') ``` ## <a id='display_clc'></a>4. Read and view the CLC data The CORINE Land Cover (CLC) inventory has been produced at a European level in 1990, 2000, 2006, 2012, and 2018. It records land cover and land use in 44 classes with a Minimum Mapping Unit (MMU) of 25 hectares (ha) and a minimum feature width of 100 m. The time series of status maps are complemented by change layers, which highlight changes between the land cover land use classes with an MMU of 5 ha. The Eionet network of National Reference Centres Land Cover (NRC/LC) produce the CLC databases at Member State level, which are coordinated and integrated by EEA. CLC is produced by the majority of countries by visual interpretation of high spatial resolution satellite imagery (10 - 30 m spatial resolution). In a few countries semi-automatic solutions are applied, using national in-situ data, satellite image processing, GIS integration and generalisation. CLC has a wide variety of applications, underpinning various policies in the domains of environment, but also agriculture, transport, spatial planning etc. ### Crop the extent of the Corine Land Cover 2018 (CLC 2018) to the AOI and display As with the Sentinel-2 data it is necesasary to crop the pan-European CLC2018 dataset to be able to review it at the local level. ### Set up paths to data ``` #path to Corine land cover 2018 land_cover_dir = Path(os.path.join(download_dir_path,'u2018_clc2018_v2020_20u1_raster100m/DATA/')) legend_dir = Path(os.path.join(download_dir_path,'u2018_clc2018_v2020_20u1_raster100m/Legend/')) #path to the colormap txt_filename = legend_dir/'CLC2018_CLC2018_V2018_20_QGIS.txt' ``` ### Re-project vector files to the same coordinate system of the CLC 2018 ``` aoi_3035 = aoi.to_crs(epsg=3035) # EPSG:3035 (ETRS89-extended / LAEA Europe) ``` ### Write CLC 2018 subset ``` with rio.open(str(land_cover_dir)+'/U2018_CLC2018_V2020_20u1.tif') as src: out_image, out_transform = mask(src, aoi_3035.geometry,crop=True) out_meta = src.meta.copy() out_meta.update({"driver": "GTiff", "height": out_image.shape[1], "width": out_image.shape[2], "transform": out_transform, "dtype": "int8", "nodata":0 }) with rio.open("CLC_masked/Corine_masked.tif", "w", **out_meta) as dest: dest.write(out_image) ``` ### Set up the legend for the CLC data As the CLC data is thematic in nature we must set up a legend to be displayed with the results showing the colour, code and definition of each land cover / land use class. ### Read CLC 2018 legend A text file is availabe which contains the details of the CLC nomenclature for building the legend when displaying CLC. ``` ### Create colorbar def parse_line(line): _, r, g, b, a, descr = line.split(',') return (int(r), int(g), int(b), int(a)), descr.split('\n')[0] with open(txt_filename, 'r') as txtf: lines = txtf.readlines() legend = {nline+1: parse_line(line) for nline, line in enumerate(lines[:-1])} legend[0] = parse_line(lines[-1]) #print code and definition of each land cover / land use class def parse_line_class_list(line): class_id, r, g, b, a, descr = line.split(',') return (int(class_id), int(r), int(g), int(b), int(a)), descr.split('\n')[0] with open(txt_filename, 'r') as txtf: lines = txtf.readlines() legend_class = {nline+1: parse_line_class_list(line) for nline, line in enumerate(lines[:-1])} legend_class[0] = parse_line_class_list(lines[-1]) print('Level 3 classes') for k, v in sorted(legend_class.items()): print(f'{v[0][0]}\t{v[1]}') ``` ### Build the legend for the CLC 2018 in the area of interest As less than half of the CLC classes are present in the AOI an area specific legend will be built to simplify interpretation. ``` #open CLC 2018 subset cover_land = rio.open("CLC_masked/Corine_masked.tif") array_rast = cover_land.read(1) #Set no data value to 0 array_rast[array_rast == -128] = 0 class_aoi = list(np.unique(array_rast)) legend_aoi = dict((k, legend[k]) for k in class_aoi if k in legend) classes_list =[] number_list = [] for k, v in sorted(legend_aoi.items()): #print(f'{k}:\t{v[1]}') classes_list.append(v[1]) number_list.append(k) class_dict = dict(zip(classes_list,number_list)) #create the colobar corine_cmap_aoi= ListedColormap([np.array(v[0]).astype(float)/255.0 for k, v in sorted(legend_aoi.items())]) # Map the values in [0, 22] new_dict = dict() for i, v in enumerate(class_dict.items()): new_dict[v[1]] = (v[0], i) fun = lambda x : new_dict[x][1] matrix = map(np.vectorize(fun), array_rast) matrix = np.matrix(list(matrix)) ``` ### Display the CLC2018 data for the AOI The thematic nature and the 100 m spatial resolution of the CLC2018 give a very different view of the landscape compared to the Sentinel-2 data. CLC2018 offers a greater information content as it is a combination of multiple images, ancillary data and human interpretation while Sentinel-2 offers great spatial information for one instance in time. The separation of the mountains with woodland habitats and the coastal planes with agriculture can be clearly seen marked by a line of urban areas. The mountains are dominated by deciduous woodland, sclerophyllous vegetation and transitional scrub. The coastal planes consist of various types of agricultural land associated with small field farming practices. The most striking feature of the CLC2018 data is a large burnt area which resulted from a major forest fire in July 2017. ``` #plot fig2, axs2 = plt.subplots(figsize=(10,10),sharey=True) show(matrix, ax=axs2, cmap=corine_cmap_aoi, transform = cover_land.transform, title = "Corine Land Cover 2018") norm = colors.BoundaryNorm(np.arange(corine_cmap_aoi.N + 1), corine_cmap_aoi.N + 1) cb = plt.colorbar(cm.ScalarMappable(norm=norm, cmap=corine_cmap_aoi), ax=axs2, fraction=0.03) cb.set_ticks([x+.5 for x in range(-1,22)]) # move the marks to the middle cb.set_ticklabels(list(class_dict.keys())) # label the colors axs2.ticklabel_format(axis = 'both', style = 'plain') axs2.set_ylabel("Northing (m)") #EPSG:3035 (ETRS89-extended / LAEA Europe) axs2.set_xlabel("Easting (m)") ``` ## <a id='CLC_burn_NDVI'></a>5. CLC2018 burnt area in the Sentinel-2 NDVI data The area of the burn will have a very low NDVI compared to the surounding unburnt vegetation. The boundary of the burn can be easily seen as well as remnants of the original vegetation which have survived the burn. ``` #Load shapefile of the AOIs and check the crs burnt_aoi = gpd.read_file(os.path.join(path_shp, 'burnt_area.shp')) print("vector file crs:") print(burnt_aoi.crs) burnt_aoi_32632 = burnt_aoi.to_crs(epsg=32632) #Sentinel-2 NDVI crs print("conversion to S2 NDVI crs:") print(burnt_aoi_32632.crs) ``` ### Crop the extent of the NDVI image for burnt area ``` with rio.open(os.path.join(data_path, 'S2_NDVI_masked.tif')) as src: out_image, out_transform = mask(src, burnt_aoi_32632.geometry,crop=True) out_meta = src.meta.copy() out_meta.update({"driver": "GTiff", "height": out_image.shape[1], "width": out_image.shape[2], "transform": out_transform}) with rio.open(os.path.join(data_path, "NDVI_burnt_area.tif"), "w", **out_meta) as dest: dest.write(out_image) ``` ### Crop the extent of the CLC 2018 for burnt area ``` #open CLC 2018 subset cover_land = rio.open("CLC_masked/Corine_masked.tif") print(cover_land.crs) #CLC 2018 crs burn_aoi_3035 = burnt_aoi.to_crs(epsg=3035) #conversion to CLC 2018 crs with rio.open(str(land_cover_dir)+'/U2018_CLC2018_V2020_20u1.tif') as src: out_image, out_transform = mask(src, burn_aoi_3035.geometry,crop=True) out_meta = src.meta.copy() out_meta.update({"driver": "GTiff", "height": out_image.shape[1],"width": out_image.shape[2], "transform": out_transform, "dtype": "int8", "nodata":0 }) with rio.open("CLC_masked/Corine_burnt_area.tif", "w", **out_meta) as dest: dest.write(out_image) # Re-project S2 NDVI image to CLC 2018 crs clc_2018_burnt_aoi = rio.open("CLC_masked/Corine_burnt_area.tif") dst_crs = clc_2018_burnt_aoi.crs with rio.open(os.path.join(data_path, "NDVI_burnt_area.tif")) as src: transform, width, height = calculate_default_transform( src.crs, dst_crs, src.width, src.height, *src.bounds) kwargs = src.meta.copy() kwargs.update({ 'crs': dst_crs, 'transform': transform, 'width': width, 'height': height }) with rio.open(os.path.join(data_path, "NDVI_burnt_area_EPSG_3035.tif"), 'w', **kwargs) as dst: reproject(source=rio.band(src,1), destination=rio.band(dst,1), src_transform=src.transform, src_crs=src.crs, dst_transform=transform, dst_crs=dst_crs, resampling=Resampling.nearest) ``` ### Display NDVI index on the AOIs ``` # Build the legend for the CLC 2018 in the area of interest array_rast_b = clc_2018_burnt_aoi.read(1) #Set no data value to 0 array_rast_b[array_rast_b == -128] = 0 class_aoi_b = list(np.unique(array_rast_b)) legend_aoi_b = dict((k, legend[k]) for k in class_aoi_b if k in legend) classes_list_b =[] number_list_b = [] for k, v in sorted(legend_aoi_b.items()): #print(f'{k}:\t{v[1]}') classes_list_b.append(v[1]) number_list_b.append(k) class_dict_b = dict(zip(classes_list_b,number_list_b)) #create the colobar corine_cmap_aoi_b= ListedColormap([np.array(v[0]).astype(float)/255.0 for k, v in sorted(legend_aoi_b.items())]) # Map the values in [0, 22] new_dict_b = dict() for i, v in enumerate(class_dict_b.items()): new_dict_b[v[1]] = (v[0], i) fun_b = lambda x : new_dict_b[x][1] matrix_b = map(np.vectorize(fun_b), array_rast_b) matrix_b = np.matrix(list(matrix_b)) #Plot rcParams['axes.titlepad'] = 20 src_burnt = rio.open(os.path.join(data_path, "NDVI_burnt_area_EPSG_3035.tif")) fig_b, (axr_b, axg_b) = plt.subplots(1,2, figsize=(25, 8)) image_hidden_1_b = axr_b.imshow(src_burnt.read(1), cmap='BuGn') show(src_burnt, ax=axr_b, cmap='BuGn', transform=src_burnt.transform, title='NDVI - Burnt area') show(matrix_b, ax=axg_b, cmap=corine_cmap_aoi_b, transform=clc_2018_burnt_aoi.transform, title='CLC 2018 - Burnt area') fig_b.colorbar(image_hidden_1_b, ax=axr_b) plt.tight_layout(h_pad=1.0) norm = colors.BoundaryNorm(np.arange(corine_cmap_aoi_b.N + 1), corine_cmap_aoi_b.N + 1) cb = plt.colorbar(cm.ScalarMappable(norm=norm, cmap=corine_cmap_aoi_b), ax=axg_b, fraction=0.03) cb.set_ticks([x+.5 for x in range(-1,6)]) # move the marks to the middle cb.set_ticklabels(list(class_dict_b.keys())) # label the colors axg_b.ticklabel_format(axis = 'both', style = 'plain') axr_b.set_ylabel("Northing (m)") #(WGS 84 / UTM zone 32N) axr_b.set_xlabel("Easting (m)") axg_b.set_ylabel("Northing (m)") #(WGS 84 / UTM zone 32N) axg_b.set_xlabel("Easting (m)") axr_b.ticklabel_format(axis = 'both', style = 'plain') axg_b.ticklabel_format(axis = 'both', style = 'plain') plt.tight_layout(h_pad=1.0) ``` <hr> <p><img src='./img/all_partners_wekeo_2.png' align='left' alt='Logo EU Copernicus' width='100%'></img></p>
github_jupyter
``` import numpy as np import pandas as pd from pathlib import Path # visualization import matplotlib.pyplot as plt from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator ``` ## Read and clean datasets ``` def clean_Cohen_datasets(path): """Read local raw datasets and clean them""" # read datasets df = pd.read_csv(path) # rename columns df.rename(columns={"abstracts":"abstract", "label1":"label_abstract_screening", "label2":"label_included"}, inplace=True) # recode inclusion indicators df.label_abstract_screening = np.where(df.label_abstract_screening == "I", 1, 0) df.label_included = np.where(df.label_included == "I", 1, 0) # add record id df.insert(0, "record_id", df.index + 1) return df df_ACEInhibitors = clean_Cohen_datasets("raw/ACEInhibitors.csv") df_ADHD = clean_Cohen_datasets("raw/ADHD.csv") df_Antihistamines = clean_Cohen_datasets("raw/Antihistamines.csv") df_AtypicalAntipsychotics = clean_Cohen_datasets("raw/AtypicalAntipsychotics.csv") df_BetaBlockers = clean_Cohen_datasets("raw/BetaBlockers.csv") df_CalciumChannelBlockers = clean_Cohen_datasets("raw/CalciumChannelBlockers.csv") df_Estrogens = clean_Cohen_datasets("raw/Estrogens.csv") df_NSAIDS = clean_Cohen_datasets("raw/NSAIDS.csv") df_Opiods = clean_Cohen_datasets("raw/Opiods.csv") df_OralHypoglycemics = clean_Cohen_datasets("raw/OralHypoglycemics.csv") df_ProtonPumpInhibitors = clean_Cohen_datasets("raw/ProtonPumpInhibitors.csv") df_SkeletalMuscleRelaxants = clean_Cohen_datasets("raw/SkeletalMuscleRelaxants.csv") df_Statins = clean_Cohen_datasets("raw/Statins.csv") df_Triptans = clean_Cohen_datasets("raw/Triptans.csv") df_UrinaryIncontinence = clean_Cohen_datasets("raw/UrinaryIncontinence.csv") ``` ## Export datasets ``` Path("output/local").mkdir(parents=True, exist_ok=True) df_ACEInhibitors.to_csv("output/local/ACEInhibitors.csv", index=False) df_ADHD.to_csv("output/local/ADHD.csv", index=False) df_Antihistamines.to_csv("output/local/Antihistamines.csv", index=False) df_AtypicalAntipsychotics.to_csv("output/local/AtypicalAntipsychotics.csv", index=False) df_BetaBlockers.to_csv("output/local/BetaBlockers.csv", index=False) df_CalciumChannelBlockers.to_csv("output/local/CalciumChannelBlockers.csv", index=False) df_Estrogens.to_csv("output/local/Estrogens.csv", index=False) df_NSAIDS.to_csv("output/local/NSAIDS.csv", index=False) df_Opiods.to_csv("output/local/Opiods.csv", index=False) df_OralHypoglycemics.to_csv("output/local/OralHypoglycemics.csv", index=False) df_ProtonPumpInhibitors.to_csv("output/local/ProtonPumpInhibitors.csv", index=False) df_SkeletalMuscleRelaxants.to_csv("output/local/SkeletalMuscleRelaxants.csv", index=False) df_Statins.to_csv("output/local/Statins.csv", index=False) df_Triptans.to_csv("output/local/Triptans.csv", index=False) df_UrinaryIncontinence.to_csv("output/local/UrinaryIncontinence.csv", index=False) ``` ## Dataset statistics See `process_Cohen_datasets_online.ipynb`.
github_jupyter
``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt from astropy.time import Time import astropy.units as u from rms import Planet times, spotted_lc, spotless_lc = np.loadtxt('ring.txt', unpack=True) d = Planet(per=4.049959, inc=90, a=39.68, t0=0, rp=(0.3566/100)**0.5, lam=0, ecc=0, w=90) t14 = d.per/np.pi * np.arcsin( np.sqrt((1 + d.rp)**2 - d.b**2) / np.sin(np.radians(d.inc)) / d.a) t23 = d.per/np.pi * np.arcsin( np.sqrt((1 - d.rp)**2 - d.b**2) / np.sin(np.radians(d.inc)) / d.a) # plt.plot(times, spotted_lc - spotless_lc) plt.plot(times, spotless_lc) plt.plot(times, spotted_lc) for i in [1, -1]: plt.axvline(i*t14/2, color='k') plt.axvline(i*t23/2, color='k') from scipy.optimize import fmin_l_bfgs_b from batman import TransitModel from copy import deepcopy d.limb_dark = 'quadratic' d.u = [0.2, 0.1] d.fp = 0 def transit_model(times, rprs, params): trial_params = deepcopy(params) params.rp = rprs m = TransitModel(params, times, supersample_factor=7, exp_time=times[1]-times[0]) lc = m.light_curve(params) return lc def chi2(p, times, y, params): rprs = p[0] return np.sum((transit_model(times, rprs, params) - y)**2) initp =[d.rp] d0 = fmin_l_bfgs_b(chi2, initp, approx_grad=True, args=(times, spotless_lc, d), bounds=[[0, 0.5]])[0][0] mask_in_transit = (times > 0.5*(t14 + t23)/2) | (times < -0.5*(t14 + t23)/2) # mask_in_transit = (times > t23/2) | (times < -t23/2) bounds = [[0.5 * d.rp, 1.5 * d.rp]] d1 = fmin_l_bfgs_b(chi2, initp, approx_grad=True, args=(times[mask_in_transit], spotless_lc[mask_in_transit], d), bounds=bounds)[0][0] d2 = fmin_l_bfgs_b(chi2, initp, approx_grad=True, args=(times, spotted_lc, d), bounds=bounds)[0][0] d3 = fmin_l_bfgs_b(chi2, initp, approx_grad=True, args=(times[mask_in_transit], spotted_lc[mask_in_transit], d), bounds=bounds)[0][0] print("unspotted full LC \t = {0}\nunspotted only OOT \t = {1}\nspotted full LC " "\t = {2}\nspotted only OOT \t = {3}".format(d0, d1, d2, d3)) fractional_err = [(d0-d.rp)/d.rp, (d1-d.rp)/d.rp, (d2-d.rp)/d.rp, (d3-d.rp)/d.rp] print("unspotted full LC \t = {0}\nunspotted only OOT \t = {1}\nspotted full LC " "\t = {2}\nspotted only OOT \t = {3}".format(*fractional_err)) fig, ax = plt.subplots(1, 2, figsize=(10, 4), sharey='row', sharex=True) ax[0].plot(times, spotless_lc, label='unspotted') ax[0].plot(times, spotted_lc, label='spotted') ax[1].scatter(times[mask_in_transit], spotted_lc[mask_in_transit], label='obs', zorder=-10, s=5, color='k') ax[1].scatter(times[~mask_in_transit], spotted_lc[~mask_in_transit], label='obs masked', zorder=-10, s=5, color='gray') ax[1].plot(times, transit_model(times, d2, d), label='fit: full') ax[1].plot(times, transit_model(times, d3, d), label='fit: $T_{1,1.5}$+$T_{3.5,4}$') # ax[1, 1].scatter(range(2), fractional_err[2:]) for axis in fig.axes: axis.grid(ls=':') for s in ['right', 'top']: axis.spines[s].set_visible(False) axis.legend() fig.savefig('ringofspots.pdf', bbox_inches='tight') fig, ax = plt.subplots(2, 1, figsize=(5, 8)) ax[0].plot(times, spotless_lc, label='Spotless') ax[0].plot(times, spotted_lc, label='Spotted') from scipy.signal import savgol_filter filtered = savgol_filter(spotted_lc, 101, 2, deriv=2) n = len(times)//2 mins = [np.argmin(filtered[:n]), n + np.argmin(filtered[n:])] maxes = [np.argmax(filtered[:n]), n + np.argmax(filtered[n:])] ax[1].plot(times, filtered) # t14 = -1*np.diff(times[mins])[0] # t23 = -1*np.diff(times[maxes])[0] ax[1].scatter(times[mins], filtered[mins], color='k', zorder=10) ax[1].scatter(times[maxes], filtered[maxes], color='k', zorder=10) for ts, c in zip([times[mins], times[maxes]], ['k', 'gray']): for t in ts: ax[0].axvline(t, ls='--', color=c, zorder=-10) ax[1].axvline(t, ls='--', color=c, zorder=-10) for axis in fig.axes: axis.grid(ls=':') for s in ['right', 'top']: axis.spines[s].set_visible(False) axis.legend() ax[0].set_ylabel('$\mathcal{F}$', fontsize=20) ax[1].set_ylabel('$\ddot{\mathcal{F}}$', fontsize=20) ax[1].set_xlabel('Time [d]') fig.savefig('savgol.pdf', bbox_inches='tight') plt.show() one_plus_k = np.sqrt((np.sin(t14*np.pi/d.per) * np.sin(np.radians(d.inc)) * d.a)**2 + d.b**2) one_minus_k = np.sqrt((np.sin(t23*np.pi/d.per) * np.sin(np.radians(d.inc)) * d.a)**2 + d.b**2) k = (one_plus_k - one_minus_k)/2 print((k - d.rp)/d.rp) ```
github_jupyter
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # 初学者的 TensorFlow 2.0 教程 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://tensorflow.google.cn/tutorials/quickstart/beginner"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />在 TensorFlow.org 观看</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/quickstart/beginner.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />在 Google Colab 运行</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/quickstart/beginner.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />在 GitHub 查看源代码</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/quickstart/beginner.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />下载笔记本</a> </td> </table> Note: 我们的 TensorFlow 社区翻译了这些文档。因为社区翻译是尽力而为, 所以无法保证它们是最准确的,并且反映了最新的 [官方英文文档](https://tensorflow.google.cn/?hl=en)。如果您有改进此翻译的建议, 请提交 pull request 到 [tensorflow/docs](https://github.com/tensorflow/docs) GitHub 仓库。要志愿地撰写或者审核译文,请加入 [[email protected] Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn)。 这是一个 [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) 笔记本文件。 Python程序可以直接在浏览器中运行,这是学习 Tensorflow 的绝佳方式。想要学习该教程,请点击此页面顶部的按钮,在Google Colab中运行笔记本。 1. 在 Colab中, 连接到Python运行环境: 在菜单条的右上方, 选择 *CONNECT*。 2. 运行所有的代码块: 选择 *Runtime* > *Run all*。 下载并安装 TensorFlow 2.0 测试版包。将 TensorFlow 载入你的程序: ``` # 安装 TensorFlow import tensorflow as tf ``` 载入并准备好 [MNIST 数据集](http://yann.lecun.com/exdb/mnist/)。将样本从整数转换为浮点数: ``` mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 ``` 将模型的各层堆叠起来,以搭建 `tf.keras.Sequential` 模型。为训练选择优化器和损失函数: ``` model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) ``` 训练并验证模型: ``` model.fit(x_train, y_train, epochs=5) model.evaluate(x_test, y_test, verbose=2) ``` 现在,这个照片分类器的准确度已经达到 98%。想要了解更多,请阅读 [TensorFlow 教程](https://tensorflow.google.cn/tutorials/)。
github_jupyter
# Nonlinear Equations We want to find a root of the nonlinear function $f$ using different methods. 1. Bisection method 2. Newton method 3. Chord method 4. Secant method 5. Fixed point iterations ``` %matplotlib inline from numpy import * from matplotlib.pyplot import * import sympy as sym t = sym.symbols('t') f_sym = t/8. * (63.*t**4 - 70.*t**2. +15.) # Legendre polynomial of order 5 f_prime_sym = sym.diff(f_sym,t) f = sym.lambdify(t, f_sym, 'numpy') f_prime = sym.lambdify(t,f_prime_sym, 'numpy') phi = lambda x : 63./70.*x**3 + 15./(70.*x) #phi = lambda x : 70.0/15.0*x**3 - 63.0/15.0*x**5 #phi = lambda x : sqrt((63.*x**4 + 15.0)/70.) # Let's plot n = 1025 x = linspace(-1,1,n) c = zeros_like(x) _ = plot(x,f(x)) _ = plot(x,c) _ = grid() # Initial data for the variuos algorithms # interval in which we seek the solution a = 0.7 b = 1. # initial points x0 = (a+b)/2.0 x00 = b # stopping criteria eps = 1e-10 n_max = 1000 ``` ## Bisection method $$ x^k = \frac{a^k+b^k}{2} $$ ``` if (f(a_k) * f(x_k)) < 0: b_k1 = x_k a_k1 = a_k else: a_k1 = x_k b_k1 = b_k ``` ``` def bisect(f,a,b,eps,n_max): assert(f(a) * f(b) < 0) a_new = a b_new = b x = mean([a,b]) err = eps + 1. errors = [err] it = 0 while (err > eps and it < n_max): if ( f(a_new) * f(x) < 0 ): # root in (a_new,x) b_new = x else: # root in (x,b_new) a_new = x x_new = mean([a_new,b_new]) #err = 0.5 *(b_new -a_new) err = abs(f(x_new)) #err = abs(x-x_new) errors.append(err) x = x_new it += 1 semilogy(errors) print(it) print(x) print(err) return errors errors_bisect = bisect(f,a,b,eps,n_max) # is the number of iterations coherent with the theoretical estimation? ``` In order to find out other methods for solving non-linear equations, let's compute the Taylor's series of $f(x^k)$ up to the first order $$ f(x^k) \simeq f(x^k) + (x-x^k)f^{\prime}(x^k) $$ which suggests the following iterative scheme $$ x^{k+1} = x^k - \frac{f(x^k)}{f^{\prime}(x^k)} $$ The following methods are obtained applying the above scheme where $$ f^{\prime}(x^k) \approx q^k $$ ## Newton's method $$ q^k = f^{\prime}(x^k) $$ $$ x^{k+1} = x^k - \frac{f(x^k)}{q^k} $$ ``` def newton(f,f_prime,x0,eps,n_max): x_new = x0 err = eps + 1. errors = [err] it = 0 while (err > eps and it < n_max): x_new = x_new - (f(x_new)/f_prime(x_new)) err = abs(f(x_new)) errors.append(err) it += 1 semilogy(errors) print(it) print(x_new) print(err) return errors %time errors_newton = newton(f,f_prime,1.0,eps,n_max) ``` ## Chord method $$ q^k \equiv q = \frac{f(b)-f(a)}{b-a} $$ $$ x^{k+1} = x^k - \frac{f(x^k)}{q} $$ ``` def chord(f,a,b,x0,eps,n_max): x_new = x0 err = eps + 1. errors = [err] it = 0 while (err > eps and it < n_max): x_new = x_new - (f(x_new)/((f(b) - f(a)) / (b - a))) err = abs(f(x_new)) errors.append(err) it += 1 semilogy(errors) print(it) print(x_new) print(err) return errors errors_chord = chord (f,a,b,x0,eps,n_max) ``` ## Secant method $$ q^k = \frac{f(x^k)-f(x^{k-1})}{x^k - x^{k-1}} $$ $$ x^{k+1} = x^k - \frac{f(x^k)}{q^k} $$ Note that this algorithm requirs **two** initial points ``` def secant(f,x0,x00,eps,n_max): xk = x00 x_new = x0 err = eps + 1. errors = [err] it = 0 while (err > eps and it < n_max): temp = x_new x_new = x_new - (f(x_new)/((f(x_new)-f(xk))/(x_new - xk))) xk = temp err = abs(f(x_new)) errors.append(err) it += 1 semilogy(errors) print(it) print(x_new) print(err) return errors errors_secant = secant(f,x0,x00,eps,n_max) ``` ## Fixed point iterations $$ f(x)=0 \to x-\phi(x)=0 $$ $$ x^{k+1} = \phi(x^k) $$ ``` def fixed_point(phi,x0,eps,n_max): x_new = x0 err = eps + 1. errors = [err] it = 0 while (err > eps and it < n_max): x_new = phi(x_new) err = abs(f(x_new)) errors.append(err) it += 1 semilogy(errors) print(it) print(x_new) print(err) return errors errors_fixed = fixed_point(phi,0.3,eps,n_max) ``` ## Comparison ``` # plot the error convergence for the methods loglog(errors_bisect, label='bisect') loglog(errors_chord, label='chord') loglog(errors_secant, label='secant') loglog(errors_newton, label ='newton') loglog(errors_fixed, label ='fixed') _ = legend() # Let's compare the scipy implmentation of Newton's method with our.. import scipy.optimize as opt %time opt.newton(f, 1.0, f_prime, tol = eps) ``` We see that the scipy method is 1000 times slower than the `scipy` one
github_jupyter
<center> <img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" /> </center> # **Exception Handling** Estimated time needed: **15** minutes ## Objectives After completing this lab you will be able to: * Understand exceptions * Handle the exceptions ## Table of Contents * What is an Exception? * Exception Handling *** ## What is an Exception? In this section you will learn about what an exception is and see examples of them. ### Definition An exception is an error that occurs during the execution of code. This error causes the code to raise an exception and if not prepared to handle it will halt the execution of the code. ### Examples Run each piece of code and observe the exception raised ``` 1/0 ``` <code>ZeroDivisionError</code> occurs when you try to divide by zero. ``` y = a + 5 ``` <code>NameError</code> -- in this case, it means that you tried to use the variable a when it was not defined. ``` a = [1, 2, 3] a[10] ``` <code>IndexError</code> -- in this case, it occured because you tried to access data from a list using an index that does not exist for this list. There are many more exceptions that are built into Python, here is a list of them [https://docs.python.org/3/library/exceptions.html](https://docs.python.org/3/library/exceptions.html?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkPY0101ENSkillsNetwork19487395-2021-01-01) ## Exception Handling In this section you will learn how to handle exceptions. You will understand how to make your program perform specified tasks instead of halting code execution when an exception is encountered. ### Try Except A <code>try except</code> will allow you to execute code that might raise an exception and in the case of any exception or a specific one we can handle or catch the exception and execute specific code. This will allow us to continue the execution of our program even if there is an exception. Python tries to execute the code in the <code>try</code> block. In this case if there is any exception raised by the code in the <code>try</code> block, it will be caught and the code block in the <code>except</code> block will be executed. After that, the code that comes <em>after</em> the try except will be executed. ``` # potential code before try catch try: # code to try to execute except: # code to execute if there is an exception # code that will still execute if there is an exception ``` ### Try Except Example In this example we are trying to divide a number given by the user, save the outcome in the variable <code>a</code>, and then we would like to print the result of the operation. When taking user input and dividing a number by it there are a couple of exceptions that can be raised. For example if we divide by zero. Try running the following block of code with <code>b</code> as a number. An exception will only be raised if <code>b</code> is zero. ``` a = 1 try: b = int(input("Please enter a number to divide a")) a = a/b print("Success a=",a) except: print("There was an error") ``` ### Try Except Specific A specific <code>try except</code> allows you to catch certain exceptions and also execute certain code depending on the exception. This is useful if you do not want to deal with some exceptions and the execution should halt. It can also help you find errors in your code that you might not be aware of. Furthermore, it can help you differentiate responses to different exceptions. In this case, the code after the try except might not run depending on the error. <b>Do not run, just to illustrate:</b> ``` # potential code before try catch try: # code to try to execute except (ZeroDivisionError, NameError): # code to execute if there is an exception of the given types # code that will execute if there is no exception or a one that we are handling # potential code before try catch try: # code to try to execute except ZeroDivisionError: # code to execute if there is a ZeroDivisionError except NameError: # code to execute if there is a NameError # code that will execute if there is no exception or a one that we are handling ``` You can also have an empty <code>except</code> at the end to catch an unexpected exception: <b>Do not run, just to illustrate:</b> ``` # potential code before try catch try: # code to try to execute except ZeroDivisionError: # code to execute if there is a ZeroDivisionError except NameError: # code to execute if there is a NameError except: # code to execute if ther is any exception # code that will execute if there is no exception or a one that we are handling ``` ### Try Except Specific Example This is the same example as above, but now we will add differentiated messages depending on the exception, letting the user know what is wrong with the input. ``` a = 1 try: b = int(input("Please enter a number to divide a")) a = a/b print("Success a=",a) except ZeroDivisionError: print("The number you provided cant divide 1 because it is 0") except ValueError: print("You did not provide a number") except: print("Something went wrong") ``` ### Try Except Else and Finally <code>else</code> allows one to check if there was no exception when executing the try block. This is useful when we want to execute something only if there were no errors. <b>do not run, just to illustrate</b> ``` # potential code before try catch try: # code to try to execute except ZeroDivisionError: # code to execute if there is a ZeroDivisionError except NameError: # code to execute if there is a NameError except: # code to execute if ther is any exception else: # code to execute if there is no exception # code that will execute if there is no exception or a one that we are handling ``` <code>finally</code> allows us to always execute something even if there is an exception or not. This is usually used to signify the end of the try except. ``` # potential code before try catch try: # code to try to execute except ZeroDivisionError: # code to execute if there is a ZeroDivisionError except NameError: # code to execute if there is a NameError except: # code to execute if ther is any exception else: # code to execute if there is no exception finally: # code to execute at the end of the try except no matter what # code that will execute if there is no exception or a one that we are handling ``` ### Try Except Else and Finally Example You might have noticed that even if there is an error the value of <code>a</code> is always printed. Let's use the <code>else</code> and print the value of <code>a</code> only if there is no error. ``` a = 1 try: b = int(input("Please enter a number to divide a")) a = a/b except ZeroDivisionError: print("The number you provided cant divide 1 because it is 0") except ValueError: print("You did not provide a number") except: print("Something went wrong") else: print("success a=",a) ``` Now lets let the user know that we are done processing their answer. Using the <code>finally</code>, let's add a print. ``` a = 1 try: b = int(input("Please enter a number to divide a")) a = a/b except ZeroDivisionError: print("The number you provided cant divide 1 because it is 0") except ValueError: print("You did not provide a number") except: print("Something went wrong") else: print("success a=",a) finally: print("Processing Complete") ``` ## Authors <a href="https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkPY0101ENSkillsNetwork19487395-2021-01-01" target="_blank">Joseph Santarcangelo</a> ## Change Log | Date (YYYY-MM-DD) | Version | Changed By | Change Description | | ----------------- | ------- | ---------- | ---------------------------- | | 2020-09-02 | 2.0 | Simran | Template updates to the file | | | | | | | | | | | ## <h3 align="center"> © IBM Corporation 2020. All rights reserved. <h3/>
github_jupyter
``` import matplotlib as mpl import matplotlib.pyplot as plt from agglio_lib import * #-------------------------------Data Generation section---------------------------# n = 1000 d = 50 sigma=0.5 w_radius = 10 wAst = np.random.randn(d,1) X = getData(0, 1, n, d)/np.sqrt(d) w0 =w_radius*np.random.randn(d,1)/np.sqrt(d) ipAst = np.matmul(X, wAst) # y = sigmoid(ipAst) y = sigmoid_noisy_pre(ipAst,sigma_noise=sigma) #-----------AGGLIO-GD-------------# params={} params['algo']='AG_GD' params['w0']=w0 params['wAst']=wAst objVals_agd,distVals_agd,time_agd = cross_validate(X,y,params,cross_validation=True) #-----------AGGLIO-SGD-------------# params={} params['algo']='AG_SGD' params['w0']=w0 params['wAst']=wAst objVals_agsgd,distVals_agsgd,time_agsgd = cross_validate(X,y,params,cross_validation=True) #-----------AGGLIO-SVRG-------------# params={} params['algo']='AG_SVRG' params['w0']=w0 params['wAst']=wAst objVals_agsvrg,distVals_agsvrg,time_agsvrg = cross_validate(X,y,params,cross_validation=True) #-----------AGGLIO-ADAM-------------# hparams['AG_ADAM']={} hparams['AG_ADAM']['alpha']=np.power(10.0, [0, -1, -2, -3]).tolist() hparams['AG_ADAM']['B_init']=np.power(10.0, [0, -1, -2, -3]).tolist() hparams['AG_ADAM']['B_step']=np.linspace(start=1.01, stop=3, num=5).tolist() hparams['AG_ADAM']['beta_1'] = [0.3, 0.5, 0.7, 0.9] hparams['AG_ADAM']['beta_2'] = [0.3, 0.5, 0.7, 0.9] hparams['AG_ADAM']['epsilon'] = np.power(10.0, [-3, -5, -8]).tolist() hparam = hparams['AG_ADAM'] cv = ShuffleSplit( n_splits = 1, test_size = 0.3, random_state = 42 ) grid = GridSearchCV( AG_ADAM(), param_grid=hparam, refit = False, cv=cv) #, verbose=3 grid.fit( X, y.ravel(), w_init=w0.ravel(), w_star=wAst.ravel(), minibatch_size=50) best = grid.best_params_ print("The best parameters are %s with a score of %0.2f" % (grid.best_params_, grid.best_score_)) print("The best parameters are %s with a score of %0.2f" % (grid.best_params_, grid.best_score_)) #ag_adam = AG_ADAM(alpha= best["alpha"], B_init=best['B_init'], B_step=best['B_step'], beta_1=best['beta_1'], beta_2=best['beta_2'] ) ag_adam = AG_ADAM(alpha= best["alpha"], B_init=best['B_init'], B_step=best['B_step'], beta_1=best['beta_1'], beta_2=best['beta_2'], epsilon=best['epsilon'] ) ag_adam.fit( X, y.ravel(), w_init=w0.ravel(), w_star=wAst.ravel(), from google.colab import drive drive.mount('/content/drive') %cd /cmax_iter=600 ) distVals_ag_adam = ag_adam.distVals time_ag_adam=ag_adam.clock plt.rcParams['pdf.fonttype'] = 42 plt.rcParams['ps.fonttype'] = 42 fig = plt.figure() plt.plot(time_agd, distVals_agd, label='AGGLIO-GD', color='#1b9e77', linewidth=3) plt.plot(time_agsgd, distVals_agsgd, label='AGGLIO-SGD', color='#5e3c99', linewidth=3) plt.plot(time_agsvrg, distVals_agsvrg, label='AGGLIO-SVRG', color='#d95f02', linewidth=3) plt.plot(time_ag_adam, distVals_ag_adam, label='AGGLIO-ADAM', color='#01665e', linewidth=3) plt.legend() plt.ylabel("$||w^t-w^*||_2$",fontsize=12) plt.xlabel("Time",fontsize=12) plt.grid() plt.yscale('log') plt.xlim(time_agd[0], time_agd[-1]) plt.title(f'n={n}, d={d}, $\sigma$ = {sigma}, pre-activation') plt.savefig('Agglio_pre-noise_sigmoid.pdf', dpi=300) plt.show() ```
github_jupyter
Lambda School Data Science *Unit 2, Sprint 3, Module 2* --- # Permutation & Boosting You will use your portfolio project dataset for all assignments this sprint. ## Assignment Complete these tasks for your project, and document your work. - [ ] If you haven't completed assignment #1, please do so first. - [ ] Continue to clean and explore your data. Make exploratory visualizations. - [ ] Fit a model. Does it beat your baseline? - [ ] Try xgboost. - [ ] Get your model's permutation importances. You should try to complete an initial model today, because the rest of the week, we're making model interpretation visualizations. But, if you aren't ready to try xgboost and permutation importances with your dataset today, that's okay. You can practice with another dataset instead. You may choose any dataset you've worked with previously. The data subdirectory includes the Titanic dataset for classification and the NYC apartments dataset for regression. You may want to choose one of these datasets, because example solutions will be available for each. ## Reading Top recommendations in _**bold italic:**_ #### Permutation Importances - _**[Kaggle / Dan Becker: Machine Learning Explainability](https://www.kaggle.com/dansbecker/permutation-importance)**_ - [Christoph Molnar: Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/feature-importance.html) #### (Default) Feature Importances - [Ando Saabas: Selecting good features, Part 3, Random Forests](https://blog.datadive.net/selecting-good-features-part-iii-random-forests/) - [Terence Parr, et al: Beware Default Random Forest Importances](https://explained.ai/rf-importance/index.html) #### Gradient Boosting - [A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning](https://machinelearningmastery.com/gentle-introduction-gradient-boosting-algorithm-machine-learning/) - _**[A Kaggle Master Explains Gradient Boosting](http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/)**_ - [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf) Chapter 8 - [Gradient Boosting Explained](http://arogozhnikov.github.io/2016/06/24/gradient_boosting_explained.html) - _**[Boosting](https://www.youtube.com/watch?v=GM3CDQfQ4sw) (2.5 minute video)**_ ``` # all imports needed for this sheet import numpy as np import pandas as pd from sklearn.model_selection import train_test_split import category_encoders as ce from sklearn.ensemble import RandomForestClassifier from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline from sklearn.feature_selection import f_regression, SelectKBest from sklearn.linear_model import Ridge from sklearn.model_selection import cross_val_score from sklearn.preprocessing import StandardScaler import matplotlib.pyplot as plt from sklearn.model_selection import validation_curve from sklearn.tree import DecisionTreeRegressor import xgboost as xgb %matplotlib inline import seaborn as sns from sklearn.metrics import accuracy_score from sklearn.model_selection import GridSearchCV, RandomizedSearchCV pip install category_encoders %%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' !pip install category_encoders==2.* !pip install eli5 # If you're working locally: else: DATA_PATH = '../data/' df = pd.read_excel(DATA_PATH+'/Unit_2_project_data.xlsx') exit_reasons = ['Rental by client with RRH or equivalent subsidy', 'Rental by client, no ongoing housing subsidy', 'Staying or living with family, permanent tenure', 'Rental by client, other ongoing housing subsidy', 'Permanent housing (other than RRH) for formerly homeless persons', 'Staying or living with friends, permanent tenure', 'Owned by client, with ongoing housing subsidy', 'Rental by client, VASH housing Subsidy' ] # pull all exit destinations from main data file and sum up the totals of each destination, # placing them into new df for calculations exits = df['3.12 Exit Destination'].value_counts() # create target column (multiple types of exits to perm) df['perm_leaver'] = df['3.12 Exit Destination'].isin(exit_reasons) # base case df['perm_leaver'].value_counts(normalize=True) # replace spaces with underscore df.columns = df.columns.str.replace(' ', '_') # see size of df prior to dropping empties df.shape # drop rows with no exit destination (current guests at time of report) df = df.dropna(subset=['3.12_Exit_Destination']) # shape of df after dropping current guests df.shape # verify no NaN in exit destination feature df['3.12_Exit_Destination'].isna().value_counts() import numpy as np import pandas as pd from sklearn.model_selection import train_test_split train = df # Split train into train & val train, val = train_test_split(train, train_size=0.80, test_size=0.20, stratify=train['perm_leaver'], random_state=42) def wrangle(X): """Wrangle train, validate, and test sets in the same way""" # Prevent SettingWithCopyWarning X = X.copy() # drop any private information X = X.drop(columns=['3.1_FirstName', '3.1_LastName', '3.2_SocSecNo', '3.3_Birthdate', 'V5_Prior_Address']) # drop unusable columns X = X.drop(columns=['2.1_Organization_Name', '2.4_ProjectType', 'WorkSource_Referral_Most_Recent', 'YAHP_Referral_Most_Recent', 'SOAR_Enrollment_Determination_(Most_Recent)', 'R7_General_Health_Status', 'R8_Dental_Health_Status', 'R9_Mental_Health_Status', 'RRH_Date_Of_Move-In', 'RRH_In_Permanent_Housing', 'R10_Pregnancy_Due_Date', 'R10_Pregnancy_Status', 'R1_Referral_Source', 'R2_Date_Status_Determined', 'R2_Enroll_Status', 'R2_Reason_Why_No_Services_Funded', 'R2_Runaway_Youth', 'R3_Sexual_Orientation', '2.5_Utilization_Tracking_Method_(Invalid)', '2.2_Project_Name', '2.6_Federal_Grant_Programs', '3.16_Client_Location', '3.917_Stayed_Less_Than_90_Days', '3.917b_Stayed_in_Streets,_ES_or_SH_Night_Before', '3.917b_Stayed_Less_Than_7_Nights', '4.24_In_School_(Retired_Data_Element)', 'CaseChildren', 'ClientID', 'HEN-HP_Referral_Most_Recent', 'HEN-RRH_Referral_Most_Recent', 'Emergency_Shelter_|_Most_Recent_Enrollment', 'ProgramType', 'Days_Enrolled_Until_RRH_Date_of_Move-in', 'CurrentDate', 'Current_Age', 'Count_of_Bed_Nights_-_Entire_Episode', 'Bed_Nights_During_Report_Period']) # drop rows with no exit destination (current guests at time of report) X = X.dropna(subset=['3.12_Exit_Destination']) # remove columns to avoid data leakage X = X.drop(columns=['3.12_Exit_Destination', '5.9_Household_ID', '5.8_Personal_ID', '4.2_Income_Total_at_Exit', '4.3_Non-Cash_Benefit_Count_at_Exit']) # Drop needless feature unusable_variance = ['Enrollment_Created_By', '4.24_Current_Status_(Retired_Data_Element)'] X = X.drop(columns=unusable_variance) # Drop columns with timestamp timestamp_columns = ['3.10_Enroll_Date', '3.11_Exit_Date', 'Date_of_Last_ES_Stay_(Beta)', 'Date_of_First_ES_Stay_(Beta)', 'Prevention_|_Most_Recent_Enrollment', 'PSH_|_Most_Recent_Enrollment', 'Transitional_Housing_|_Most_Recent_Enrollment', 'Coordinated_Entry_|_Most_Recent_Enrollment', 'Street_Outreach_|_Most_Recent_Enrollment', 'RRH_|_Most_Recent_Enrollment', 'SOAR_Eligibility_Determination_(Most_Recent)', 'Date_of_First_Contact_(Beta)', 'Date_of_Last_Contact_(Beta)', '4.13_Engagement_Date', '4.11_Domestic_Violence_-_When_it_Occurred', '3.917_Homeless_Start_Date'] X = X.drop(columns=timestamp_columns) # return the wrangled dataframe return X train = wrangle(train) val = wrangle(val) train.columns # Assign to X, y to avoid data leakage features = ['3.15_Relationship_to_HoH', 'CaseMembers', '3.2_Social_Security_Quality', '3.3_Birthdate_Quality', 'Age_at_Enrollment', '3.4_Race', '3.5_Ethnicity', '3.6_Gender', '3.7_Veteran_Status', '3.8_Disabling_Condition_at_Entry', '3.917_Living_Situation', 'Length_of_Time_Homeless_(3.917_Approximate_Start)', '3.917_Times_Homeless_Last_3_Years', '3.917_Total_Months_Homeless_Last_3_Years', 'V5_Last_Permanent_Address', 'V5_State', 'V5_Zip', 'Municipality_(City_or_County)', '4.1_Housing_Status', '4.4_Covered_by_Health_Insurance', '4.11_Domestic_Violence', '4.11_Domestic_Violence_-_Currently_Fleeing_DV?', 'Household_Type', 'R4_Last_Grade_Completed', 'R5_School_Status', 'R6_Employed_Status', 'R6_Why_Not_Employed', 'R6_Type_of_Employment', 'R6_Looking_for_Work', '4.2_Income_Total_at_Entry', '4.3_Non-Cash_Benefit_Count', 'Barrier_Count_at_Entry', 'Chronic_Homeless_Status', 'Under_25_Years_Old', '4.10_Alcohol_Abuse_(Substance_Abuse)', '4.07_Chronic_Health_Condition', '4.06_Developmental_Disability', '4.10_Drug_Abuse_(Substance_Abuse)', '4.08_HIV/AIDS', '4.09_Mental_Health_Problem', '4.05_Physical_Disability' ] target = 'perm_leaver' X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] # Arrange data into X features matrix and y target vector target = 'perm_leaver' X_train = train.drop(columns=target) y_train = train[target] X_val = val.drop(columns=target) y_val = val[target] from scipy.stats import randint, uniform from sklearn.ensemble import RandomForestRegressor from sklearn.model_selection import RandomizedSearchCV param_distributions = { 'n_estimators': randint(5, 500, 5), 'max_depth': [10, 15, 20, 50, 'None'], 'max_features': [.5, 1, 1.5, 2, 2.5, 3, 'sqrt', None], } search = RandomizedSearchCV( RandomForestRegressor(random_state=42), param_distributions=param_distributions, n_iter=20, cv=5, scoring='neg_mean_absolute_error', verbose=10, return_train_score=True, n_jobs=-1, random_state=42 ) search.fit(X_train, y_train); import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.pipeline import make_pipeline from sklearn.ensemble import GradientBoostingClassifier # Make pipeline! pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='most_frequent'), RandomForestClassifier(n_estimators=100, n_jobs=-1, max_features=None, random_state=42 ) ) # Fit on train, score on val pipeline.fit(X_train, y_train) y_pred = pipeline.predict(X_val) print('Validation Accuracy', accuracy_score(y_val, y_pred)) # Get feature importances rf = pipeline.named_steps['randomforestclassifier'] importances = pd.Series(rf.feature_importances_, X_train.columns) # Plot feature importances %matplotlib inline import matplotlib.pyplot as plt n = 20 plt.figure(figsize=(10,n/2)) plt.title(f'Top {n} features') importances.sort_values()[-n:].plot.barh(color='grey'); # Make pipeline! pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='most_frequent'), xgb.XGBClassifier(n_estimators=110, n_jobs=-1, num_parallel_tree=200, random_state=42 ) ) # Fit on Train pipeline.fit(X_train, y_train) # Score on val y_pred = pipeline.predict(X_val) print('Validation Accuracy', accuracy_score(y_val, y_pred)) # cross validation k = 3 scores = cross_val_score(pipeline, X_train, y_train, cv=k, scoring='accuracy') print(f'MAE for {k} folds:', -scores) -scores.mean() # get and plot feature importances # Linear models have coefficients whereas decision trees have "Feature Importances" import matplotlib.pyplot as plt model = pipeline.named_steps['xgbclassifier'] encoder = pipeline.named_steps['ordinalencoder'] encoded_columns = encoder.transform(X_val).columns importances = pd.Series(model.feature_importances_, encoded_columns) plt.figure(figsize=(10,30)) importances.sort_values().plot.barh(color='grey') df['4.1_Housing_Status'].value_counts() X_train.shape X_train.columns X_train.Days_Enrolled_in_Project.value_counts() column = 'Days_Enrolled_in_Project' # Fit without column pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='most_frequent'), RandomForestClassifier(n_estimators=250, random_state=42, n_jobs=-1) ) pipeline.fit(X_train.drop(columns=column), y_train) score_without = pipeline.score(X_val.drop(columns=column), y_val) print(f'Validation Accuracy without {column}: {score_without}') # Fit with column pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='most_frequent'), RandomForestClassifier(n_estimators=250, random_state=42, n_jobs=-1) ) pipeline.fit(X_train, y_train) score_with = pipeline.score(X_val, y_val) print(f'Validation Accuracy with {column}: {score_with}') # Compare the error with & without column print(f'Drop-Column Importance for {column}: {score_with - score_without}') column = 'Days_Enrolled_in_Project' # Fit without column pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='most_frequent'), RandomForestClassifier(n_estimators=250, max_depth=7, random_state=42, n_jobs=-1) ) pipeline.fit(X_train.drop(columns=column), y_train) score_without = pipeline.score(X_val.drop(columns=column), y_val) print(f'Validation Accuracy without {column}: {score_without}') # Fit with column pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='most_frequent'), RandomForestClassifier(n_estimators=250, max_depth=7, random_state=42, n_jobs=-1) ) pipeline.fit(X_train, y_train) score_with = pipeline.score(X_val, y_val) print(f'Validation Accuracy with {column}: {score_with}') # Compare the error with & without column print(f'Drop-Column Importance for {column}: {score_with - score_without}') # Fit with all the data pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='most_frequent'), RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) ) pipeline.fit(X_train, y_train) score_with = pipeline.score(X_val, y_val) print(f'Validation Accuracy with {column}: {score_with}') # Before: Sequence of features to be permuted feature = 'Days_Enrolled_in_Project' X_val[feature].head() # Before: Distribution of quantity X_val[feature].value_counts() # Permute the dataset X_val_permuted = X_val.copy() X_val_permuted[feature] = np.random.permutation(X_val[feature]) # After: Sequence of features to be permuted X_val_permuted[feature].head() # Distribution hasn't changed! X_val_permuted[feature].value_counts() # Get the permutation importance score_permuted = pipeline.score(X_val_permuted, y_val) print(f'Validation Accuracy with {column} not permuted: {score_with}') print(f'Validation Accuracy with {column} permuted: {score_permuted}') print(f'Permutation Importance for {column}: {score_with - score_permuted}') pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='most_frequent') ) X_train_transformed = pipeline.fit_transform(X_train) X_val_transformed = pipeline.transform(X_val) model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) model.fit(X_train_transformed, y_train) pip install eli5 import eli5 from eli5.sklearn import PermutationImportance permuter = PermutationImportance( model, scoring='accuracy', n_iter=5, random_state=42 ) permuter.fit(X_val_transformed, y_val) permuter.feature_importances_ eli5.show_weights( permuter, top=None, feature_names=X_val.columns.tolist() ) print('Shape before removing features:', X_train.shape) minimum_importance = 0 mask = permuter.feature_importances_ > minimum_importance features = X_train.columns[mask] X_train = X_train[features] print('Shape after removing features:', X_train.shape) X_val = X_val[features] pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='most_frequent'), RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) ) # Fit on train, score on val pipeline.fit(X_train, y_train) print('Validation Accuracy:', pipeline.score(X_val, y_val)) from xgboost import XGBClassifier pipeline = make_pipeline( ce.OrdinalEncoder(), XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1) ) # Fit on train, score on val pipeline.fit(X_train, y_train) print('Validation Accuracy:', pipeline.score(X_val, y_val)) encoder = ce.OrdinalEncoder() X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) model = XGBClassifier(n_estimators=1000, # <= 1000 trees, early stopping depency max_depth=7, # try deeper trees with high cardinality data learning_rate=0.1, # try higher learning rate random_state=42, num_class=1, n_jobs=-1) eval_set = [(X_train_encoded, y_train), (X_val_encoded, y_val)] # Fit on train, score on val model.fit(X_train_encoded, y_train, eval_metric='auc', eval_set=eval_set, early_stopping_rounds=25) from sklearn.metrics import mean_absolute_error as mae results = model.evals_result() train_error = results['validation_0']['auc'] val_error = results['validation_1']['auc'] iterations = range(1, len(train_error) + 1) plt.figure(figsize=(10,7)) plt.plot(iterations, train_error, label='Train') plt.plot(iterations, val_error, label='Validation') plt.title('XGBoost Validation Curve') plt.ylabel('Classification Error') plt.xlabel('Model Complexity (n_estimators)') plt.legend(); ```
github_jupyter
``` from tsfresh.feature_extraction import extract_features from tsfresh.feature_extraction.settings import ComprehensiveFCParameters, MinimalFCParameters, EfficientFCParameters from tsfresh.feature_extraction.settings import from_columns import numpy as np import pandas as pd ``` This notebooks illustrates the `"fc_parameters"` or `"kind_to_fc_parameters"` dictionaries. For a detailed explanation, see also http://tsfresh.readthedocs.io/en/latest/text/feature_extraction_settings.html ## Construct a time series container We construct the time series container that includes two sensor time series, _"temperature"_ and _"pressure"_, for two devices _"a"_ and _"b"_ ``` df = pd.DataFrame({"id": ["a", "a", "b", "b"], "temperature": [1,2,3,1], "pressure": [-1, 2, -1, 7]}) df ``` ## The default_fc_parameters Which features are calculated by tsfresh is controlled by a dictionary that contains a mapping from feature calculator names to their parameters. This dictionary is called `fc_parameters`. It maps feature calculator names (=keys) to parameters (=values). As keys, always the same names as in the tsfresh.feature_extraction.feature_calculators module are used. In the following we load an exemplary dictionary ``` settings_minimal = MinimalFCParameters() # only a few basic features settings_minimal ``` This dictionary can passed to the extract method, resulting in a few basic time series beeing calculated: ``` X_tsfresh = extract_features(df, column_id="id", default_fc_parameters = settings_minimal) X_tsfresh.head() ``` By using the settings_minimal as value of the default_fc_parameters parameter, those settings are used for all type of time series. In this case, the `settings_minimal` dictionary is used for both _"temperature"_ and _"pressure"_ time series. Now, lets say we want to remove the length feature and prevent it from beeing calculated. We just delete it from the dictionary. ``` del settings_minimal["length"] settings_minimal ``` Now, if we extract features for this reduced dictionary, the length feature will not be calculated ``` X_tsfresh = extract_features(df, column_id="id", default_fc_parameters = settings_minimal) X_tsfresh.head() ``` ## The kind_to_fc_parameters now, lets say we do not want to calculate the same features for both type of time series. Instead there should be different sets of features for each kind. To do that, we can use the `kind_to_fc_parameters` parameter, which lets us finely specifiy which `fc_parameters` we want to use for which kind of time series: ``` fc_parameters_pressure = {"length": None, "sum_values": None} fc_parameters_temperature = {"maximum": None, "minimum": None} kind_to_fc_parameters = { "temperature": fc_parameters_temperature, "pressure": fc_parameters_pressure } print(kind_to_fc_parameters) ``` So, in this case, for sensor _"pressure"_ both _"max"_ and _"min"_ are calculated. For the _"temperature"_ signal, the length and sum_values features are extracted instead. ``` X_tsfresh = extract_features(df, column_id="id", kind_to_fc_parameters = kind_to_fc_parameters) X_tsfresh.head() ``` So, lets say we lost the kind_to_fc_parameters dictionary. Or we apply a feature selection algorithm to drop irrelevant feature columns, so our extraction settings contain irrelevant features. In both cases, we can use the provided "from_columns" method to infer the creating dictionary from the dataframe containing the features ``` recovered_settings = from_columns(X_tsfresh) recovered_settings ``` Lets drop a column to show that the inferred settings dictionary really changes ``` X_tsfresh.iloc[:, 1:] recovered_settings = from_columns(X_tsfresh.iloc[:, 1:]) recovered_settings ``` ## More complex dictionaries We provide custom fc_parameters dictionaries with greater sets of features. The `EfficientFCParameters` contain features and parameters that should be calculated quite fastly: ``` settings_efficient = EfficientFCParameters() settings_efficient ``` The `ComprehensiveFCParameters` are the biggest set of features. It will take the longest to calculate ``` settings_comprehensive = ComprehensiveFCParameters() settings_comprehensive ``` You see those parameters as values in the fc_paramter dictionary? Those are the parameters of the feature extraction methods. In detail, the value in a fc_parameters dicitonary can contain a list of dictionaries. Every dictionary in that list is one feature. So, for example ``` settings_comprehensive['large_standard_deviation'] ``` would trigger the calculation of 20 different 'large_standard_deviation' features, one for r=0.05, for n=0.10 up to r=0.95. Lets just take them and extract some features ``` settings_value_count = {'large_standard_deviation': settings_comprehensive['large_standard_deviation']} settings_value_count X_tsfresh = extract_features(df, column_id="id", default_fc_parameters=settings_value_count) X_tsfresh.head() ``` The nice thing is, we actually contain the parameters in the feature name, so it is possible to reconstruct how the feature was calculated. ``` from_columns(X_tsfresh) ``` This means that you should never change a column name. Otherwise the information how it was calculated can get lost.
github_jupyter
## External Compton ![EC scheme](jetset_EC_scheme.png) ### Broad Line Region ``` import jetset print('tested on jetset',jetset.__version__) from jetset.jet_model import Jet my_jet=Jet(name='EC_example',electron_distribution='bkn',beaming_expr='bulk_theta') my_jet.add_EC_component(['EC_BLR','EC_Disk'],disk_type='BB') ``` The `show_model` method provides, among other information, information concerning the accretion disk, in this case we use a mono temperature black body `BB` ``` my_jet.show_model() ``` ### change Disk type the disk type can be set as a more realistic multi temperature black body (MultiBB). In this case the `show_model` method provides physical parameters regarding the multi temperature black body accretion disk: - the Schwarzschild (Sw radius) - the Eddington luminosity (L Edd.) - the accretion rate (accr_rate) - the Eddington accretion rate (accr_rate Edd.) ``` my_jet.add_EC_component(['EC_BLR','EC_Disk'],disk_type='MultiBB') my_jet.set_par('L_Disk',val=1E46) my_jet.set_par('gmax',val=5E4) my_jet.set_par('gmin',val=2.) my_jet.set_par('R_H',val=3E17) my_jet.set_par('p',val=1.5) my_jet.set_par('p_1',val=3.2) my_jet.set_par('R',val=3E15) my_jet.set_par('B',val=1.5) my_jet.set_par('z_cosm',val=0.6) my_jet.set_par('BulkFactor',val=20) my_jet.set_par('theta',val=1) my_jet.set_par('gamma_break',val=5E2) my_jet.set_N_from_nuLnu(nu_src=3E13,nuLnu_src=5E45) my_jet.set_IC_nu_size(100) my_jet.show_model() ``` now we set some parameter for the model ``` my_jet.eval() p=my_jet.plot_model(frame='obs') p.rescale(y_min=-13.5,y_max=-9.5,x_min=9,x_max=27) ``` ### Dusty Torus ``` my_jet.add_EC_component('DT') my_jet.show_model() my_jet.eval() p=my_jet.plot_model() p.rescale(y_min=-13.5,y_max=-9.5,x_min=9,x_max=27) my_jet.add_EC_component('EC_DT') my_jet.eval() p=my_jet.plot_model() p.rescale(y_min=-13.5,y_max=-9.5,x_min=9,x_max=27) my_jet.save_model('test_EC_model.pkl') my_jet=Jet.load_model('test_EC_model.pkl') ``` ### Changing the external field transformation Default method, is the transformation of the external photon field from the disk/BH frame to the relativistic blob ``` my_jet.set_external_field_transf('blob') ``` Alternatively, in the case of istropric fields as the CMB or the BLR and DT within the BLR radius, and DT radius, respectively, the it is possible to transform the the electron distribution, moving the blob to the disk/BH frame. ``` my_jet.set_external_field_transf('disk') ``` ### External photon field energy density along the jet ``` def iso_field_transf(L,R,BulckFactor): beta=1.0 - 1/(BulckFactor*BulckFactor) return L/(4*np.pi*R*R*3E10)*BulckFactor*BulckFactor*(1+((beta**2)/3)) def external_iso_behind_transf(L,R,BulckFactor): beta=1.0 - 1/(BulckFactor*BulckFactor) return L/((4*np.pi*R*R*3E10)*(BulckFactor*BulckFactor*(1+beta)**2)) ``` EC seed photon fields, in the Disk rest frame ``` %matplotlib inline fig = plt.figure(figsize=(8,6)) ax=fig.subplots(1) N=50 G=1 R_range=np.logspace(13,25,N) y=np.zeros((8,N)) my_jet.set_verbosity(0) my_jet.set_par('R_BLR_in',1E17) my_jet.set_par('R_BLR_out',1.1E17) for ID,R in enumerate(R_range): my_jet.set_par('R_H',val=R) my_jet.set_external_fields() my_jet.energetic_report(verbose=False) y[1,ID]=my_jet.energetic_dict['U_BLR_DRF'] y[0,ID]=my_jet.energetic_dict['U_Disk_DRF'] y[2,ID]=my_jet.energetic_dict['U_DT_DRF'] y[4,:]=iso_field_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_DT.val,my_jet.parameters.R_DT.val,G) y[3,:]=iso_field_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_BLR.val,my_jet.parameters.R_BLR_in.val,G) y[5,:]=external_iso_behind_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_BLR.val,R_range,G) y[6,:]=external_iso_behind_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_DT.val,R_range,G) y[7,:]=external_iso_behind_transf(my_jet._blob.L_Disk_radiative,R_range,G) ax.plot(np.log10(R_range),np.log10(y[0,:]),label='Disk') ax.plot(np.log10(R_range),np.log10(y[1,:]),'-',label='BLR') ax.plot(np.log10(R_range),np.log10(y[2,:]),label='DT') ax.plot(np.log10(R_range),np.log10(y[3,:]),'--',label='BLR uniform') ax.plot(np.log10(R_range),np.log10(y[4,:]),'--',label='DT uniform') ax.plot(np.log10(R_range),np.log10(y[5,:]),'--',label='BLR 1/R2') ax.plot(np.log10(R_range),np.log10(y[6,:]),'--',label='DT 1/R2') ax.plot(np.log10(R_range),np.log10(y[7,:]),'--',label='Disk 1/R2') ax.set_xlabel('log(R_H) cm') ax.set_ylabel('log(Uph) erg cm-3 s-1') ax.legend() %matplotlib inline fig = plt.figure(figsize=(8,6)) ax=fig.subplots(1) L_Disk=1E45 N=50 G=my_jet.parameters.BulkFactor.val R_range=np.logspace(15,22,N) y=np.zeros((8,N)) my_jet.set_par('L_Disk',val=L_Disk) my_jet._blob.theta_n_int=100 my_jet._blob.l_n_int=100 my_jet._blob.theta_n_int=100 my_jet._blob.l_n_int=100 for ID,R in enumerate(R_range): my_jet.set_par('R_H',val=R) my_jet.set_par('R_BLR_in',1E17*(L_Disk/1E45)**.5) my_jet.set_par('R_BLR_out',1.1E17*(L_Disk/1E45)**.5) my_jet.set_par('R_DT',2.5E18*(L_Disk/1E45)**.5) my_jet.set_external_fields() my_jet.energetic_report(verbose=False) y[1,ID]=my_jet.energetic_dict['U_BLR'] y[0,ID]=my_jet.energetic_dict['U_Disk'] y[2,ID]=my_jet.energetic_dict['U_DT'] y[4,:]=iso_field_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_DT.val,my_jet.parameters.R_DT.val,G) y[3,:]=iso_field_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_BLR.val,my_jet.parameters.R_BLR_in.val,G) y[5,:]=external_iso_behind_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_BLR.val,R_range,G) y[6,:]=external_iso_behind_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_DT.val,R_range,G) y[7,:]=external_iso_behind_transf(my_jet._blob.L_Disk_radiative,R_range,G) ax.plot(np.log10(R_range),np.log10(y[0,:]),label='Disk') ax.plot(np.log10(R_range),np.log10(y[1,:]),'-',label='BLR') ax.plot(np.log10(R_range),np.log10(y[2,:]),'-',label='DT') ax.plot(np.log10(R_range),np.log10(y[3,:]),'--',label='BLR uniform') ax.plot(np.log10(R_range),np.log10(y[4,:]),'--',label='DT uniform') ax.plot(np.log10(R_range),np.log10(y[5,:]),'--',label='BLR 1/R2') ax.plot(np.log10(R_range),np.log10(y[6,:]),'--',label='DT 1/R2') ax.plot(np.log10(R_range),np.log10(y[7,:]),'--',label='Disk 1/R2') ax.axvline(np.log10( my_jet.parameters.R_DT.val )) ax.axvline(np.log10( my_jet.parameters.R_BLR_out.val)) ax.set_xlabel('log(R_H) cm') ax.set_ylabel('log(Uph`) erg cm-3 s-1') ax.legend() ``` ### IC against the CMB ``` my_jet=Jet(name='test_equipartition',electron_distribution='lppl',beaming_expr='bulk_theta') my_jet.set_par('R',val=1E21) my_jet.set_par('z_cosm',val= 0.651) my_jet.set_par('B',val=2E-5) my_jet.set_par('gmin',val=50) my_jet.set_par('gamma0_log_parab',val=35.0E3) my_jet.set_par('gmax',val=30E5) my_jet.set_par('theta',val=12.0) my_jet.set_par('BulkFactor',val=3.5) my_jet.set_par('s',val=2.58) my_jet.set_par('r',val=0.42) my_jet.set_N_from_nuFnu(5E-15,1E12) my_jet.add_EC_component('EC_CMB') ``` We can now compare the different beaming pattern for the EC emission if the CMB, and realize that the beaming pattern is different. This is very important in the case of radio galaxies. The `src` transformation is the one to use in the case of radio galaies or misaligned AGNs, and gives a more accurate results. Anyhow, be careful that this works only for isotropic external fields, suchs as the CMB, or the BLR seed photons whitin the Dusty torus radius, and BLR radius, respectively ``` from jetset.plot_sedfit import PlotSED p=PlotSED() my_jet.set_external_field_transf('blob') c= ['k', 'g', 'r', 'c'] for ID,theta in enumerate(np.linspace(2,20,4)): my_jet.parameters.theta.val=theta my_jet.eval() my_jet.plot_model(plot_obj=p,comp='Sum',label='blob, theta=%2.2f'%theta,line_style='--',color=c[ID]) my_jet.set_external_field_transf('disk') for ID,theta in enumerate(np.linspace(2,20,4)): my_jet.parameters.theta.val=theta my_jet.eval() my_jet.plot_model(plot_obj=p,comp='Sum',label='disk, theta=%2.2f'%theta,line_style='',color=c[ID]) p.rescale(y_min=-17.5,y_max=-12.5,x_max=28) ``` ## Equipartition It is also possible to set our jet at the equipartition, that is achieved not using analytical approximation, but by numerically finding the equipartition value over a grid. We have to provide the value of the observed flux (`nuFnu_obs`) at a given observed frequency (`nu_obs`), the minimum value of B (`B_min`), and the number of grid points (`N_pts`) ``` my_jet.parameters.theta.val=12 B_min,b_grid,U_B,U_e=my_jet.set_B_eq(nuFnu_obs=5E-15,nu_obs=1E12,B_min=1E-9,N_pts=50,plot=True) my_jet.show_pars() my_jet.eval() p=my_jet.plot_model() p.rescale(y_min=-16.5,y_max=-13.5,x_max=28) ```
github_jupyter
<a href="https://colab.research.google.com/github/dafrie/fin-disclosures-nlp/blob/master/Multi_class_classification_with_Transformers.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Multi-Class classification with Transformers # Setup ``` # Load Google drive where the data and models are stored from google.colab import drive drive.mount('/content/drive') ############################## CONFIG ############################## TASK = "multi-class" #@param ["multi-class"] # Set to true if fine-tuning should be enabled. Else it loads fine-tuned model ENABLE_FINE_TUNING = True #@param {type:"boolean"} # See list here: https://huggingface.co/models TRANSFORMER_MODEL_NAME = 'distilbert-base-cased' #@param ["bert-base-uncased", "bert-large-uncased", "albert-base-v2", "albert-large-v2", "albert-xlarge-v2", "albert-xxlarge-v2", "roberta-base", "roberta-large", "distilbert-base-uncased", "distilbert-base-cased"] # The DataLoader needs to know our batch size for training. BERT Authors recommend 16 or 32, however this leads to an error due to not enough GPU memory BATCH_SIZE = 16 #@param ["8", "16", "32"] {type:"raw"} MAX_TOKEN_SIZE = 256 #@param [512,256,128] {type:"raw"} EPOCHS = 4 # @param [1,2,3,4] {type:"raw"} LEARNING_RATE = 2e-5 WEIGHT_DECAY = 0.0 # TODO: Necessary? # Evaluation metric config. See for context: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html AVERAGING_STRATEGY = 'macro' #@param ["micro", "macro", "weighted"] # To make the notebook reproducible (not guaranteed for pytorch on different releases/platforms!) SEED_VALUE = 0 # Enable comet-ml logging DISABLE_COMET_ML = True #@param {type:"boolean"} #################################################################### full_task_name = TASK parameters = { "task": TASK, "enable_fine_tuning": ENABLE_FINE_TUNING, "model_type": "transformer", "model_name": TRANSFORMER_MODEL_NAME, "batch_size": BATCH_SIZE, "max_token_size": MAX_TOKEN_SIZE, "epochs": EPOCHS, "learning_rate": LEARNING_RATE, "weight_decay": WEIGHT_DECAY, "seed_value": SEED_VALUE, } # TODO: This could then be used to send to cometml to keep track of experiments... ``` ``` # Install transformers library + datasets helper !pip install transformers --quiet !pip install datasets --quiet !pip install optuna --quiet import os import pandas as pd import numpy as np import torch import textwrap import random from sklearn.metrics import accuracy_score, f1_score, roc_auc_score from transformers import logging, AutoTokenizer model_id = TRANSFORMER_MODEL_NAME print(f"Selected {TRANSFORMER_MODEL_NAME} as transformer model for the task...") # Setup the models path saved_models_path = "/content/drive/My Drive/{YOUR_PROJECT_HERE}/models/finetuned_models/" expected_model_path = os.path.join(saved_models_path, TASK, model_id) has_model_path = os.path.isdir(expected_model_path) model_checkpoint = TRANSFORMER_MODEL_NAME if ENABLE_FINE_TUNING else expected_model_path # Check if model exists if not ENABLE_FINE_TUNING: assert has_model_path, f"No fine-tuned model found at '{expected_model_path}', you need first to fine-tune a model from a pretrained checkpoint by enabling the 'ENABLE_FINE_TUNING' flag!" ``` # Data loading ``` # Note: Uses https://huggingface.co/docs/datasets/package_reference/main_classes.html from datasets import DatasetDict, Dataset, load_dataset, Sequence, ClassLabel, Features, Value, concatenate_datasets # TODO: Adapt doc_column = 'text' # Contains the text label_column = 'cro' # Needs to be an integer that represents the respective class # TODO: Load train/test data df_train = pd.read_pickle("/content/drive/My Drive/fin-disclosures-nlp/data/labels/Firm_AnnualReport_Labels_Training.pkl") df_test = pd.read_pickle("/content/drive/My Drive/fin-disclosures-nlp/data/labels/Firm_AnnualReport_Labels_Test.pkl") df_train = df_train.query(f"{label_column} == {label_column}") df_test = df_test.query(f"{label_column} == {label_column}") category_labels = df_train[label_column].unique().tolist() no_of_categories = len(category_labels) # TODO: Not sure if this step is necessary, but if you have the category in text and not integers # This assumes that there is t df_train[label_column] = df_train[label_column].astype('category').cat.codes.to_numpy(copy=True) df_test[label_column] = df_test[label_column].astype('category').cat.codes.to_numpy(copy=True) train_dataset = pd.DataFrame(df_train[[doc_column, label_column]].to_numpy(), columns=['text', 'labels']) test_dataset = pd.DataFrame(df_test[[doc_column, label_column]].to_numpy(), columns=['text', 'labels']) features = Features({'text': Value('string'), 'labels': ClassLabel(names=category_labels, num_classes=no_of_categories)}) # Setup Hugginface Dataset train_dataset = Dataset.from_pandas(train_dataset, features=features) test_dataset = Dataset.from_pandas(test_dataset, features=features) dataset = DatasetDict({ 'train': train_dataset, 'test': test_dataset }) ``` ## Tokenization ``` # Load the tokenizer. tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True) # Encode the whole dataset def encode(data, max_len=MAX_TOKEN_SIZE): return tokenizer(data["text"], truncation=True, padding='max_length', max_length=max_len) dataset = dataset.map(encode, batched=True) ``` ## Validation set preparation ``` from torch.utils.data import TensorDataset, random_split, DataLoader, RandomSampler, SequentialSampler # See here for this workaround: https://github.com/huggingface/datasets/issues/767 dataset['train'], dataset['valid'] = dataset['train'].train_test_split(test_size=0.1, seed=SEED_VALUE).values() dataset['train'].features ``` # Model Setup and Training ``` from sklearn.metrics import accuracy_score, precision_recall_fscore_support, roc_auc_score, matthews_corrcoef from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer from transformers.trainer_pt_utils import nested_detach from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss from scipy.special import softmax # Check if GPU is available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Sets the evaluation metric depending on the task # TODO: Set your evaluation metric! Needs to be also in the provided "compute_metrics" function below metric_name = "matthews_correlation" # The training arguments args = TrainingArguments( output_dir=f"/content/models/{TASK}/{model_id}", evaluation_strategy = "epoch", learning_rate = LEARNING_RATE, per_device_train_batch_size = BATCH_SIZE, per_device_eval_batch_size = BATCH_SIZE, num_train_epochs = EPOCHS, weight_decay = WEIGHT_DECAY, load_best_model_at_end = True, metric_for_best_model = metric_name, greater_is_better = True, seed = SEED_VALUE, ) def model_init(): """Model initialization. Disabels logging temporarily to avoid spamming messages and loads the pretrained or fine-tuned model""" logging.set_verbosity_error() # Workaround to hide warnings that the model weights are randomly set and fine-tuning is necessary (which we do later...) model = AutoModelForSequenceClassification.from_pretrained( model_checkpoint, # Load from model checkpoint, i.e. the pretrained model or a previously saved fine-tuned model num_labels = no_of_categories, # The number of different categories/labels output_attentions = False, # Whether the model returns attentions weights. output_hidden_states = False, # Whether the model returns all hidden-states.) ) logging.set_verbosity_warning() return model def compute_metrics(pred): """Computes classification task metric""" labels = pred.label_ids preds = pred.predictions # Convert to probabilities preds_prob = softmax(preds, axis=1) # Convert to 0/1, i.e. set to 1 the class with the highest logit preds = preds.argmax(-1) precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average=AVERAGING_STRATEGY) acc = accuracy_score(labels, preds) matthews_corr = matthews_corrcoef(labels, preds) return { 'f1': f1, 'precision': precision, 'recall': recall, 'matthews_correlation': matthews_corr } class CroTrainer(Trainer): # Note: If you need to do extra customization (like to alter the loss computation by adding weights), this can be done here pass trainer = CroTrainer( model_init=model_init, args=args, train_dataset=dataset["train"], eval_dataset=dataset["valid"], tokenizer=tokenizer, compute_metrics=compute_metrics ) # Only train if enabled, else we just want to load the model if ENABLE_FINE_TUNING: trainer.train() trainer.save_model() eval_metrics = trainer.evaluate() # experiment.log_metrics(eval_metrics) predict_result = trainer.predict(dataset['test']) from sklearn.metrics import multilabel_confusion_matrix, classification_report from scipy.special import softmax preds = predict_result.predictions labels = predict_result.label_ids test_roc_auc = roc_auc_score(labels, preds, average=AVERAGING_STRATEGY) print("Test ROC AuC: ", test_roc_auc) preds_prob = softmax(preds, axis=1) threshold = 0.5 preds_bool = (preds_prob > threshold) label_list = test_dataset.features['labels'].feature.names multilabel_confusion_matrix(labels, preds_bool) print(classification_report(labels, preds_bool, target_names=label_list)) ```
github_jupyter
``` # default_exp models.XResNet1dPlus ``` # XResNet1dPlus > This is a modified version of fastai's XResNet model in github. Changes include: * API is modified to match the default timeseriesAI's API. * (Optional) Uber's CoordConv 1d ``` #export from tsai.imports import * from tsai.models.layers import * from tsai.models.utils import * #export class XResNet1dPlus(nn.Sequential): @delegates(ResBlock1dPlus) def __init__(self, block, expansion, layers, fc_dropout=0.0, c_in=3, n_out=1000, stem_szs=(32,32,64), widen=1.0, sa=False, act_cls=defaults.activation, ks=3, stride=2, coord=False, **kwargs): store_attr('block,expansion,act_cls,ks') if ks % 2 == 0: raise Exception('kernel size has to be odd!') stem_szs = [c_in, *stem_szs] stem = [ConvBlock(stem_szs[i], stem_szs[i+1], ks=ks, coord=coord, stride=stride if i==0 else 1, act=act_cls) for i in range(3)] block_szs = [int(o*widen) for o in [64,128,256,512] +[256]*(len(layers)-4)] block_szs = [64//expansion] + block_szs blocks = self._make_blocks(layers, block_szs, sa, coord, stride, **kwargs) backbone = nn.Sequential(*stem, MaxPool(ks=ks, stride=stride, padding=ks//2, ndim=1), *blocks) head = nn.Sequential(AdaptiveAvgPool(sz=1, ndim=1), Flatten(), nn.Dropout(fc_dropout), nn.Linear(block_szs[-1]*expansion, n_out)) super().__init__(OrderedDict([('backbone', backbone), ('head', head)])) self._init_cnn(self) def _make_blocks(self, layers, block_szs, sa, coord, stride, **kwargs): return [self._make_layer(ni=block_szs[i], nf=block_szs[i+1], blocks=l, coord=coord, stride=1 if i==0 else stride, sa=sa and i==len(layers)-4, **kwargs) for i,l in enumerate(layers)] def _make_layer(self, ni, nf, blocks, coord, stride, sa, **kwargs): return nn.Sequential( *[self.block(self.expansion, ni if i==0 else nf, nf, coord=coord, stride=stride if i==0 else 1, sa=sa and i==(blocks-1), act_cls=self.act_cls, ks=self.ks, **kwargs) for i in range(blocks)]) def _init_cnn(self, m): if getattr(self, 'bias', None) is not None: nn.init.constant_(self.bias, 0) if isinstance(self, (nn.Conv1d,nn.Conv2d,nn.Conv3d,nn.Linear)): nn.init.kaiming_normal_(self.weight) for l in m.children(): self._init_cnn(l) #export def _xresnetplus(expansion, layers, **kwargs): return XResNet1dPlus(ResBlock1dPlus, expansion, layers, **kwargs) #export @delegates(ResBlock) def xresnet1d18plus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(1, [2, 2, 2, 2], c_in=c_in, n_out=c_out, act_cls=act, **kwargs) @delegates(ResBlock) def xresnet1d34plus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(1, [3, 4, 6, 3], c_in=c_in, n_out=c_out, act_cls=act, **kwargs) @delegates(ResBlock) def xresnet1d50plus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(4, [3, 4, 6, 3], c_in=c_in, n_out=c_out, act_cls=act, **kwargs) @delegates(ResBlock) def xresnet1d101plus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(4, [3, 4, 23, 3], c_in=c_in, n_out=c_out, act_cls=act, **kwargs) @delegates(ResBlock) def xresnet1d152plus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(4, [3, 8, 36, 3], c_in=c_in, n_out=c_out, act_cls=act, **kwargs) @delegates(ResBlock) def xresnet1d18_deepplus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(1, [2,2,2,2,1,1], c_in=c_in, n_out=c_out, act_cls=act, **kwargs) @delegates(ResBlock) def xresnet1d34_deepplus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(1, [3,4,6,3,1,1], c_in=c_in, n_out=c_out, act_cls=act, **kwargs) @delegates(ResBlock) def xresnet1d50_deepplus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(4, [3,4,6,3,1,1], c_in=c_in, n_out=c_out, act_cls=act, **kwargs) @delegates(ResBlock) def xresnet1d18_deeperplus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(1, [2,2,1,1,1,1,1,1], c_in=c_in, n_out=c_out, act_cls=act, **kwargs) @delegates(ResBlock) def xresnet1d34_deeperplus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(1, [3,4,6,3,1,1,1,1], c_in=c_in, n_out=c_out, act_cls=act, **kwargs) @delegates(ResBlock) def xresnet1d50_deeperplus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(4, [3,4,6,3,1,1,1,1], c_in=c_in, n_out=c_out, act_cls=act, **kwargs) net = xresnet1d18plus(3, 2, coord=True) x = torch.rand(32, 3, 50) net(x) bs, c_in, seq_len = 2, 4, 32 c_out = 2 x = torch.rand(bs, c_in, seq_len) archs = [ xresnet1d18plus, xresnet1d34plus, xresnet1d50plus, xresnet1d18_deepplus, xresnet1d34_deepplus, xresnet1d50_deepplus, xresnet1d18_deeperplus, xresnet1d34_deeperplus, xresnet1d50_deeperplus # # Long test # xresnet1d101, xresnet1d152, ] for i, arch in enumerate(archs): print(i, arch.__name__) test_eq(arch(c_in, c_out, sa=True, act=Mish, coord=True)(x).shape, (bs, c_out)) m = xresnet1d34plus(4, 2, act=Mish) test_eq(len(get_layers(m, is_bn)), 38) test_eq(check_weight(m, is_bn)[0].sum(), 22) # hide out = create_scripts() beep(out) ```
github_jupyter
Simulation Demonstration ===================== ``` import matplotlib.pyplot as plt import pandas as pd import numpy as np import soepy ``` In this notebook we present descriptive statistics of a series of simulated samples with the soepy toy model. soepy is closely aligned to the model in Blundell et. al. (2016). Yet, we wish to use the soepy package for estimation based on the German SOEP. In this simulation demonstration, some parameter values are partially set close to the parameters estimated in the seminal paper of Blundell et. al. (2016). The remainder of the parameter values are altered such that simulated wage levels and employment choice probabilities (roughly) match the statistics observed in the SOEP Data. - the constants in the wage process gamma_0 equal are set to ensure alignment with SOEP data. - the returns to experience in the wage process gamma_1 are set close to the coefficient values on gamma0, Blundell Table VIII, p. 1733 - the part-time experience accumulation parameter is set close to the coefficient on g(P), Blundell Table VIII, p. 1733, - the experience depreciation parameter delta is set close to the coefffient values on delta, Blundell Table VIII, p. 1733, - the disutility of part-time work parameter theta_p is set to ensure alignment with SOEP data, - the disutility of full-time work parameter theta_f is set to ensure alignment with SOEP data. To ensure that some individuals also choose to be non-emplyed, we set the period wage for nonemployed to be equal to some fixed value, constant over all periods. We call this income in unemployment "benefits". ``` data_frame_baseline = soepy.simulate('toy_model_init_file_01_1000.yml') data_frame_baseline.head(20) #Determine the observed wage given period choice def get_observed_wage (row): if row['Choice'] == 2: return row['Period Wage F'] elif row['Choice'] ==1: return row['Period Wage P'] elif row['Choice'] ==0: return row['Period Wage N'] else: return np.nan # Add to data frame data_frame_baseline['Wage Observed'] = data_frame_baseline.apply( lambda row: get_observed_wage (row),axis=1 ) # Determine the education level def get_educ_level(row): if row["Years of Education"] >= 10 and row["Years of Education"] < 12: return 0 elif row["Years of Education"] >= 12 and row["Years of Education"] < 16: return 1 elif row["Years of Education"] >= 16: return 2 else: return np.nan data_frame_baseline["Educ Level"] = data_frame_baseline.apply( lambda row: get_educ_level(row), axis=1 ) ``` Descriptive statistics to look at: - average part-time, full-time and nonemployment rate - ideally close to population rates - frequency of each choice per period - ideally more often part-time in early periods, more full-time in later periods - frequency of each choice over all periods for individuals with different levels of education - ideally, lower educated more often unemployed and in part-time jobs - average period wages over all individuals - series for all periods - average period individuals over all individuals - series for all periods ``` # Average non-employment, part-time, and full-time rates over all periods and individuals data_frame_baseline['Choice'].value_counts(normalize=True).plot(kind = 'bar') data_frame_baseline['Choice'].value_counts(normalize=True) # Average non-employment, part-time, and full-time rates per period data_frame_baseline.groupby(['Period'])['Choice'].value_counts(normalize=True).unstack().plot(kind = 'bar', stacked = True) ``` As far as the evolution of choices over all agents and periods is concerned, we first observe a declining tendency of individuals to be unemployed as desired in a perfectly calibrated simulation. Second, individuals in our simulation tend to choose full-time and non-employment less often in the later periods of the model. Rates of part-time employment increase for the same period. ``` # Average non-employment, part-time, and full-time rates for individuals with different level of education data_frame_baseline.groupby(['Years of Education'])['Choice'].value_counts(normalize=True).unstack().plot(kind = 'bar', stacked = True) ``` As should be expected, the higher the education level of the individuals the lower the observed. ``` # Average wage for each period and choice fig,ax = plt.subplots() # Generate x axes values period = np.arange(1,31) # Generate plot lines ax.plot(period, data_frame_baseline[data_frame_baseline['Choice'] == 2].groupby(['Period'])['Period Wage F'].mean(), color='green', label = 'F') ax.plot(period, data_frame_baseline[data_frame_baseline['Choice'] == 1].groupby(['Period'])['Period Wage P'].mean(), color='orange', label = 'P') ax.plot(period, data_frame_baseline[data_frame_baseline['Choice'] == 0].groupby(['Period'])['Period Wage N'].mean(), color='blue', label = 'N') # Plot settings ax.set_xlabel("period") ax.set_ylabel("wage") ax.legend(loc='best') ``` The period wage of non-employment actually refers to the unemployment benefits individuals receive. The amount of the benefits is constant over time. Part-time and full-time wages rise as individuals gather more experience. ``` # Average wages by period data_frame_baseline.groupby(['Period'])['Wage Observed'].mean().plot() ``` Comparative Statics ------------------------ In the following, we discuss some comparative statics of the model. While changing other parameter values we wish to assume that the parameters central to the part-time penalty phenomenon studied in Blundell (2016) stay the same as in the benchmark specification: - part-time experience accumulation g_s1,2,3 - experience depreciation delta Comparative statics: Parameters in the systematic wage govern the choice between employment (either part-time, or full-time) and nonemployment. They do not determine the choice between part-time and full-time employment since the systematic wage is equal for both options. - constnat in wage process gamma_0: lower/higher value of the coefficient implies that other components such as accumulated work experience and the productivity shock are relatively more/less important in determining the choice between employment and nonemployment. Decreasing the constant for individuals of a certain education level, e.g., low, results in these individuals choosing nonemployment more often. - return to experience gamma_1: lower value of the coefficient implies that accumulated work experience is less relevant in determining the wage in comparison to other factors such as the constant or the productivity shock. Higher coefficients should lead to agents persistently choosing employment versus non-employment. The productivity shock: - productivity shock variances - the higher the variances, the more switching between occupational alternatives. Risk aversion: - risk aversion parameter mu: the more negative the risk aversion parameter, the more eager are agents to ensure themselves against productivity shoks through accumulation of experience. Therefore, lower values of the parameter are associated with higher rates of full-time employment. The labor disutility parameters directly influence: - benefits - for higher benefits individuals of all education levels would choose non-employment more often - labor disutility for part-time theta_p - for a higher coefficient, individuals of all education levels would choose to work part-time more often - labor disutility for full-time theta_f - for a higher coefficient, individuals of all education levels would choose to work part-time more often Finally, we illustrate one of the changes discussed above. In the alternative specifications the return to experience coefficient gamma_1 for the individuals with medium level of educations is increased from 0.157 to 0.195. As a result, experience accumulation matters more in the utility maximization. Therefore, individuals with medium level of education choose to be employed more often. Consequently, also aggregate levels of nonemployment are lower in the model. ``` data_frame_alternative = soepy.simulate('toy_model_init_file_01_1000.yml') # Average non-employment, part-time, and full-time rates for individuals with different level of education [data_frame_alternative.groupby(['Years of Education'])['Choice'].value_counts(normalize=True), data_frame_baseline.groupby(['Years of Education'])['Choice'].value_counts(normalize=True)] # Average non-employment, part-time, and full-time rates for individuals with different level of education data_frame_alternative.groupby(['Years of Education'])['Choice'].value_counts(normalize=True).unstack().plot(kind = 'bar', stacked = True) # Average non-employment, part-time, and full-time rates over all periods and individuals data_frame_alternative['Choice'].value_counts(normalize=True).plot(kind = 'bar') data_frame_alternative['Choice'].value_counts(normalize=True) ```
github_jupyter
# Quickstart In this tutorial, we will show how to solve a famous optimization problem, minimizing the Rosenbrock function, in simplenlopt. First, let's define the Rosenbrock function and plot it: $$ f(x, y) = (1-x)^2+100(y-x^2)^2 $$ ``` import numpy as np def rosenbrock(pos): x, y = pos return (1-x)**2 + 100 * (y - x**2)**2 xgrid = np.linspace(-2, 2, 500) ygrid = np.linspace(-1, 3, 500) X, Y = np.meshgrid(xgrid, ygrid) Z = (1 - X)**2 + 100 * (Y -X**2)**2 x0=np.array([-1.5, 2.25]) f0 = rosenbrock(x0) #Plotly not rendering correctly on Readthedocs, but this shows how it is generated! Plot below is a PNG export import plotly.graph_objects as go fig = go.Figure(data=[go.Surface(z=Z, x=X, y=Y, cmax = 10, cmin = 0, showscale = False)]) fig.update_layout( scene = dict(zaxis = dict(nticks=4, range=[0,10]))) fig.add_scatter3d(x=[1], y=[1], z=[0], mode = 'markers', marker=dict(size=10, color='green'), name='Optimum') fig.add_scatter3d(x=[-1.5], y=[2.25], z=[f0], mode = 'markers', marker=dict(size=10, color='black'), name='Initial guess') fig.show() ``` ![The Rosenbrock optimization problem](./Rosenbrock.PNG) The crux of the Rosenbrock function is that the minimum indicated by the green dot is located in a very narrow, banana shaped valley with a small slope around the minimum. Local optimizers try to find the optimum by searching the parameter space starting from an initial guess. We place the initial guess shown in black on the other side of the banana. In simplenlopt, local optimizers are called by the minimize function. It requires the objective function and a starting point. The algorithm is chosen by the method argument. Here, we will use the derivative-free Nelder-Mead algorithm. Objective functions must be of the form ``f(x, ...)`` where ``x`` represents a numpy array holding the parameters which are optimized. ``` import simplenlopt def rosenbrock(pos): x, y = pos return (1-x)**2 + 100 * (y - x**2)**2 res = simplenlopt.minimize(rosenbrock, x0, method = 'neldermead') print("Position of optimum: ", res.x) print("Function value at Optimum: ", res.fun) print("Number of function evaluations: ", res.nfev) ``` The optimization result is stored in a class whose main attributes are the position of the optimum and the function value at the optimum. The number of function evaluations is a measure of performance: the less function evaluations are required to find the minimum, the faster the optimization will be. Next, let's switch to a derivative based solver. For better performance, we also supply the analytical gradient which is passed to the jac argument. ``` def rosenbrock_grad(pos): x, y = pos dx = 2 * x -2 - 400 * x * (y-x**2) dy = 200 * (y-x**2) return dx, dy res_slsqp = simplenlopt.minimize(rosenbrock, x0, method = 'slsqp', jac = rosenbrock_grad) print("Position of optimum: ", res_slsqp.x) print("Function value at Optimum: ", res_slsqp.fun) print("Number of function evaluations: ", res_slsqp.nfev) ``` As the SLSQP algorithm can use gradient information, it requires less function evaluations to find the minimum than the derivative-free Nelder-Mead algorithm. Unlike vanilla NLopt, simplenlopt automatically defaults to finite difference approximations of the gradient if it is not provided: ``` res = simplenlopt.minimize(rosenbrock, x0, method = 'slsqp') print("Position of optimum: ", res.x) print("Function value at Optimum: ", res.fun) print("Number of function evaluations: ", res.nfev) ``` As the finite differences are not as precise as the analytical gradient, the found optimal function value is higher than with analytical gradient information. In general, it is aways recommended to compute the gradient analytically or by automatic differentiation as the inaccuracies of finite differences can result in wrong results and bad performance. For demonstration purposes, let's finally solve the problem with a global optimizer. Like in SciPy, each global optimizer is called by a dedicated function such as crs() for the Controlled Random Search algorithm. Instead of a starting point, the global optimizers require a region in which they seek to find the minimum. This region is provided as a list of (lower_bound, upper_bound) for each coordinate. ``` bounds = [(-2., 2.), (-2., 2.)] res_crs = simplenlopt.crs(rosenbrock, bounds) print("Position of optimum: ", res_crs.x) print("Function value at Optimum: ", res_crs.fun) print("Number of function evaluations: ", res_crs.nfev) ``` Note that using a global optimizer is overkill for a small problem like the Rosenbrock function: it requires many more function evaluations than a local optimizer. Global optimization algorithms shine in case of complex, multimodal functions where local optimizers converge to local minima instead of the global minimum. Check the Global Optimization page for such an example.
github_jupyter
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109B Data Science 2: Advanced Topics in Data Science ## Lecture 5.5 - Smoothers and Generalized Additive Models - Model Fitting <div class="discussion"><b>JUST A NOTEBOOK</b></div> **Harvard University**<br> **Spring 2021**<br> **Instructors:** Mark Glickman, Pavlos Protopapas, and Chris Tanner<br> **Lab Instructor:** Eleni Kaxiras<br><BR> *Content:* Eleni Kaxiras and Will Claybaugh --- ``` ## RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES import requests from IPython.core.display import HTML styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text HTML(styles) import numpy as np from scipy.interpolate import interp1d import matplotlib.pyplot as plt import pandas as pd %matplotlib inline ``` ## Table of Contents * 1 - Overview - A Top View of LMs, GLMs, and GAMs to set the stage * 2 - A review of Linear Regression with `statsmodels`. Formulas. * 3 - Splines * 4 - Generative Additive Models with `pyGAM` * 5 - Smooting Splines using `csaps` ## Overview ![](../images/GAM_venn.png) *image source: Dani Servén Marín (one of the developers of pyGAM)* ### A - Linear Models First we have the **Linear Models** which you know from 109a. These models are linear in the coefficients. Very *interpretable* but suffer from high bias because let's face it, few relationships in life are linear. Simple Linear Regression (defined as a model with one predictor) as well as Multiple Linear Regression (more than one predictors) are examples of LMs. Polynomial Regression extends the linear model by adding terms that are still linear for the coefficients but non-linear when it somes to the predictiors which are now raised in a power or multiplied between them. ![](../images/linear.png) $$ \begin{aligned} y = \beta{_0} + \beta{_1}{x_1} & \quad \mbox{(simple linear regression)}\\ y = \beta{_0} + \beta{_1}{x_1} + \beta{_2}{x_2} + \beta{_3}{x_3} & \quad \mbox{(multiple linear regression)}\\ y = \beta{_0} + \beta{_1}{x_1} + \beta{_2}{x_1^2} + \beta{_3}{x_3^3} & \quad \mbox{(polynomial multiple regression)}\\ \end{aligned} $$ <div class="discussion"><b>Questions to think about</b></div> - What does it mean for a model to be **interpretable**? - Are linear regression models interpretable? Are random forests? What about Neural Networks such as Feed Forward? - Do we always want interpretability? Describe cases where we do and cases where we do not care. ### B - Generalized Linear Models (GLMs) ![](../images/GLM.png) **Generalized Linear Models** is a term coined in the early 1970s by Nelder and Wedderburn for a class of models that includes both Linear Regression and Logistic Regression. A GLM fits one coefficient per feature (predictor). ### C - Generalized Additive Models (GAMs) Hastie and Tidshirani coined the term **Generalized Additive Models** in 1986 for a class of non-linear extensions to Generalized Linear Models. ![](../images/GAM.png) $$ \begin{aligned} y = \beta{_0} + f_1\left(x_1\right) + f_2\left(x_2\right) + f_3\left(x_3\right) \\ y = \beta{_0} + f_1\left(x_1\right) + f_2\left(x_2, x_3\right) + f_3\left(x_3\right) & \mbox{(with interaction terms)} \end{aligned} $$ In practice we add splines and regularization via smoothing penalties to our GLMs. *image source: Dani Servén Marín* ### D - Basis Functions In our models we can use various types of functions as "basis". - Monomials such as $x^2$, $x^4$ (**Polynomial Regression**) - Sigmoid functions (neural networks) - Fourier functions - Wavelets - **Regression splines** ### 1 - Piecewise Polynomials a.k.a. Splines Splines are a type of piecewise polynomial interpolant. A spline of degree k is a piecewise polynomial that is continuously differentiable k − 1 times. Splines are the basis of CAD software and vector graphics including a lot of the fonts used in your computer. The name “spline” comes from a tool used by ship designers to draw smooth curves. Here is the letter $epsilon$ written with splines: ![](../images/epsilon.png) *font idea inspired by Chris Rycroft (AM205)* If the degree is 1 then we have a Linear Spline. If it is 3 then we have a Cubic spline. It turns out that cubic splines because they have a continous 2nd derivative (curvature) at the knots are very smooth to the eye. We do not need higher order than that. The Cubic Splines are usually Natural Cubic Splines which means they have the added constrain of the end points' second derivative = 0. We will use the CubicSpline and the B-Spline as well as the Linear Spline. #### scipy.interpolate See all the different splines that scipy.interpolate has to offer: https://docs.scipy.org/doc/scipy/reference/interpolate.html Let's use the simplest form which is interpolate on a set of points and then find the points between them. ``` from scipy.interpolate import splrep, splev from scipy.interpolate import BSpline, CubicSpline from scipy.interpolate import interp1d # define the range of the function a = -1 b = 1 # define the number of knots num_knots = 11 knots = np.linspace(a,b,num_knots) # define the function we want to approximate y = 1/(1+25*(knots**2)) # make a linear spline linspline = interp1d(knots, y) # sample at these points to plot xx = np.linspace(a,b,1000) yy = 1/(1+25*(xx**2)) plt.plot(knots,y,'*') plt.plot(xx, yy, label='true function') plt.plot(xx, linspline(xx), label='linear spline'); plt.legend(); ``` <div class="exercise"><b>Exercise</b></div> The Linear interpolation does not look very good. Fit a Cubic Spline and plot along the Linear to compare. Feel free to solve and then look at the solution. ``` # your answer here # solution # define the range of the function a = -1 b = 1 # define the knots num_knots = 10 x = np.linspace(a,b,num_knots) # define the function we want to approximate y = 1/(1+25*(x**2)) # make the Cubic spline cubspline = CubicSpline(x, y) print(f'Num knots in cubic spline: {num_knots}') # OR make a linear spline linspline = interp1d(x, y) # plot xx = np.linspace(a,b,1000) yy = 1/(1+25*(xx**2)) plt.plot(xx, yy, label='true function') plt.plot(x,y,'*', label='knots') plt.plot(xx, linspline(xx), label='linear'); plt.plot(xx, cubspline(xx), label='cubic'); plt.legend(); ``` <div class="discussion"><b>Questions to think about</b></div> - Change the number of knots to 100 and see what happens. What would happen if we run a polynomial model of degree equal to the number of knots (a global one as in polynomial regression, not a spline)? - What makes a spline 'Natural'? ``` # Optional and Outside of the scope of this class: create the `epsilon` in the figure above x = np.array([1.,0.,-1.5,0.,-1.5,0.]) y = np.array([1.5,1.,2.5,3,4,5]) t = np.linspace(0,5,6) f = interp1d(t,x,kind='cubic') g = interp1d(t,y,kind='cubic') tplot = np.linspace(0,5,200) plt.plot(x,y, '*', f(tplot), g(tplot)); ``` #### B-Splines (de Boor, 1978) One way to construct a curve given a set of points is to *interpolate the points*, that is, to force the curve to pass through the points. A B-splines (Basis Splines) is defined by a set of **control points** and a set of **basis functions** that fit the function between these points. By choosing to have no smoothing factor we force the final B-spline to pass though all the points. If, on the other hand, we set a smothing factor, our function is more of an approximation with the control points as "guidance". The latter produced a smoother curve which is prefferable for drawing software. For more on Splines see: https://en.wikipedia.org/wiki/B-spline) ![](../images/B-spline.png) We will use [`scipy.splrep`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.splrep.html#scipy.interpolate.splrep) to calulate the coefficients for the B-Spline and draw it. #### B-Spline with no smooting ``` from scipy.interpolate import splev, splrep x = np.linspace(0, 10, 10) y = np.sin(x) # (t,c,k) is a tuple containing the vector of knots, coefficients, degree of the spline t,c,k = splrep(x, y) x2 = np.linspace(0, 10, 200) y2 = BSpline(t,c,k) plt.plot(x, y, 'o', x2, y2(x2)) plt.show() from scipy.interpolate import splrep x = np.linspace(0, 10, 10) y = np.sin(x) t,c,k = splrep(x, y, k=3) # (tck) is a tuple containing the vector of knots, coefficients, degree of the spline # define the points to plot on (x2) print(f'Knots ({len(t)} of them): {t}\n') print(f'B-Spline coefficients ({len(c)} of them): {c}\n') print(f'B-Spline degree {k}') x2 = np.linspace(0, 10, 100) y2 = BSpline(t, c, k) plt.figure(figsize=(10,5)) plt.plot(x, y, 'o', label='true points') plt.plot(x2, y2(x2), label='B-Spline') tt = np.zeros(len(t)) plt.plot(t, tt,'g*', label='knots eval by the function') plt.legend() plt.show() ``` <a id=splineparams></a> #### What do the tuple values returned by `scipy.splrep` mean? - The `t` variable is the array that contains the knots' position in the x axis. The length of this array is, of course, the number of knots. - The `c` variable is the array that holds the coefficients for the B-Spline. Its length should be the same as `t`. We have `number_of_knots - 1` B-spline basis elements to the spline constructed via this method, and they are defined as follows:<BR><BR> $$ \begin{aligned} B_{i, 0}(x) = 1, \textrm{if $t_i \le x < t_{i+1}$, otherwise $0$,} \\ \\ B_{i, k}(x) = \frac{x - t_i}{t_{i+k} - t_i} B_{i, k-1}(x) + \frac{t_{i+k+1} - x}{t_{i+k+1} - t_{i+1}} B_{i+1, k-1}(x) \end{aligned} $$ - t $\in [t_1, t_2, ..., t_]$ is the knot vector - c : are the spline coefficients - k : is the spline degree #### B-Spline with smooting factor s ``` from scipy.interpolate import splev, splrep x = np.linspace(0, 10, 5) y = np.sin(x) s = 0.5 # add smoothing factor task = 0 # task needs to be set to 0, which represents: # we are specifying a smoothing factor and thus only want # splrep() to find the optimal t and c t,c,k = splrep(x, y, task=task, s=s) # draw the line segments linspline = interp1d(x, y) # define the points to plot on (x2) x2 = np.linspace(0, 10, 200) y2 = BSpline(t, c, k) plt.plot(x, y, 'o', x2, y2(x2)) plt.plot(x2, linspline(x2)) plt.show() ``` #### B-Spline with given knots ``` x = np.linspace(0, 10, 100) y = np.sin(x) knots = np.quantile(x, [0.25, 0.5, 0.75]) print(knots) # calculate the B-Spline t,c,k = splrep(x, y, t=knots) curve = BSpline(t,c,k) curve plt.scatter(x=x,y=y,c='grey', alpha=0.4) yknots = np.sin(knots) plt.scatter(knots, yknots, c='r') plt.plot(x,curve(x)) plt.show() ``` ### 2 - GAMs https://readthedocs.org/projects/pygam/downloads/pdf/latest/ #### Classification in `pyGAM` Let's get our (multivariate!) data, the `kyphosis` dataset, and the `LogisticGAM` model from `pyGAM` to do binary classification. - kyphosis - wherther a particular deformation was present post-operation - age - patient's age in months - number - the number of vertebrae involved in the operation - start - the number of the topmost vertebrae operated on ``` kyphosis = pd.read_csv("../data/kyphosis.csv") display(kyphosis.head()) display(kyphosis.describe(include='all')) display(kyphosis.dtypes) # convert the outcome in a binary form, 1 or 0 kyphosis = pd.read_csv("../data/kyphosis.csv") kyphosis["outcome"] = 1*(kyphosis["Kyphosis"] == "present") kyphosis.describe() from pygam import LogisticGAM, s, f, l X = kyphosis[["Age","Number","Start"]] y = kyphosis["outcome"] kyph_gam = LogisticGAM().fit(X,y) ``` #### Outcome dependence on features To help us see how the outcome depends on each feature, `pyGAM` has the `partial_dependence()` function. ``` pdep, confi = kyph_gam.partial_dependence(term=i, X=XX, width=0.95) ``` For more on this see the : https://pygam.readthedocs.io/en/latest/api/logisticgam.html ``` res = kyph_gam.deviance_residuals(X,y) for i, term in enumerate(kyph_gam.terms): if term.isintercept: continue XX = kyph_gam.generate_X_grid(term=i) pdep, confi = kyph_gam.partial_dependence(term=i, X=XX, width=0.95) pdep2, _ = kyph_gam.partial_dependence(term=i, X=X, width=0.95) plt.figure() plt.scatter(X.iloc[:,term.feature], pdep2 + res) plt.plot(XX[:, term.feature], pdep) plt.plot(XX[:, term.feature], confi, c='r', ls='--') plt.title(X.columns.values[term.feature]) plt.show() ``` Notice that we did not specify the basis functions in the .fit(). `pyGAM` figures them out for us by using $s()$ (splines) for numerical variables and $f()$ for categorical features. If this is not what we want we can manually specify the basis functions, as follows: ``` kyph_gam = LogisticGAM(s(0)+s(1)+s(2)).fit(X,y) res = kyph_gam.deviance_residuals(X,y) for i, term in enumerate(kyph_gam.terms): if term.isintercept: continue XX = kyph_gam.generate_X_grid(term=i) pdep, confi = kyph_gam.partial_dependence(term=i, X=XX, width=0.95) pdep2, _ = kyph_gam.partial_dependence(term=i, X=X, width=0.95) plt.figure() plt.scatter(X.iloc[:,term.feature], pdep2 + res) plt.plot(XX[:, term.feature], pdep) plt.plot(XX[:, term.feature], confi, c='r', ls='--') plt.title(X.columns.values[term.feature]) plt.show() ``` #### Regression in `pyGAM` For regression problems, we can use a `linearGAM` model. For this part we will use the `wages` dataset. https://pygam.readthedocs.io/en/latest/api/lineargam.html #### The `wages` dataset Let's inspect another dataset that is included in `pyGAM` that notes the wages of people based on their age, year of employment and education. ``` # from the pyGAM documentation from pygam import LinearGAM, s, f from pygam.datasets import wage X, y = wage(return_X_y=True) ## model gam = LinearGAM(s(0) + s(1) + f(2)) gam.gridsearch(X, y) ## plotting plt.figure(); fig, axs = plt.subplots(1,3); titles = ['year', 'age', 'education'] for i, ax in enumerate(axs): XX = gam.generate_X_grid(term=i) ax.plot(XX[:, i], gam.partial_dependence(term=i, X=XX)) ax.plot(XX[:, i], gam.partial_dependence(term=i, X=XX, width=.95)[1], c='r', ls='--') if i == 0: ax.set_ylim(-30,30) ax.set_title(titles[i]); ``` ### 3 - Smoothing Splines using csaps **Note**: this is the spline model that minimizes <BR> $MSE - \lambda\cdot\text{wiggle penalty}$ $=$ $\sum_{i=1}^N \left(y_i - f(x_i)\right)^2 - \lambda \int \left(f''(t)\right)^2 dt$, <BR> across all possible functions $f$. ``` from csaps import csaps np.random.seed(1234) x = np.linspace(0,10,300000) y = np.sin(x*2*np.pi)*x + np.random.randn(len(x)) xs = np.linspace(x[0], x[-1], 1000) ys = csaps(x, y, xs, smooth=0.99) print(ys.shape) #plt.plot(x, y, 'o', xs, ys, '-') plt.plot(x, y, 'o', xs, ys, '-') plt.show() ``` ### 4 - Data fitting using pyGAM and Penalized B-Splines When we use a spline in pyGAM we are effectively using a penalized B-Spline with a regularization parameter $\lambda$. E.g. ``` LogisticGAM(s(0)+s(1, lam=0.5)+s(2)).fit(X,y) ``` Let's see how this smoothing works in `pyGAM`. We start by creating some arbitrary data and fitting them with a GAM. ``` X = np.linspace(0,10,500) y = np.sin(X*2*np.pi)*X + np.random.randn(len(X)) plt.scatter(X,y); # let's try a large lambda first and lots of splines gam = LinearGAM(lam=1e6, n_splines=50). fit(X,y) XX = gam.generate_X_grid(term=0) plt.scatter(X,y,alpha=0.3); plt.plot(XX, gam.predict(XX)); ``` We see that the large $\lambda$ forces a straight line, no flexibility. Let's see now what happens if we make it smaller. ``` # let's try a smaller lambda gam = LinearGAM(lam=1e2, n_splines=50). fit(X,y) XX = gam.generate_X_grid(term=0) plt.scatter(X,y,alpha=0.3); plt.plot(XX, gam.predict(XX)); ``` There is some curvature there but still not a good fit. Let's try no penalty. That should have the line fit exactly. ``` # no penalty, let's try a 0 lambda gam = LinearGAM(lam=0, n_splines=50). fit(X,y) XX = gam.generate_X_grid(term=0) plt.scatter(X,y,alpha=0.3) plt.plot(XX, gam.predict(XX)) ``` Yes, that is good. Now let's see what happens if we lessen the number of splines. The fit should not be as good. ``` # no penalty, let's try a 0 lambda gam = LinearGAM(lam=0, n_splines=10). fit(X,y) XX = gam.generate_X_grid(term=0) plt.scatter(X,y,alpha=0.3); plt.plot(XX, gam.predict(XX)); ```
github_jupyter
``` from gridworld import * % matplotlib inline # create the gridworld as a specific MDP gridworld=GridMDP([[-0.04,-0.04,-0.04,1],[-0.04,None, -0.04, -1], [-0.04, -0.04, -0.04, -0.04]], terminals=[(3,2), (3,1)], gamma=1.) example_pi = {(0,0): (0,1), (0,1): (0,1), (0,2): (1,0), (1,0): (1,0), (1,2): (1,0), (2,0): (0,1), (2,1): (0,1), (2,2): (1,0), (3,0):(-1,0), (3,1): None, (3,2):None} example_V = {(0,0): 0.1, (0,1): 0.2, (0,2): 0.3, (1,0): 0.05, (1,2): 0.5, (2,0): 0., (2,1): -0.2, (2,2): 0.5, (3,0):-0.4, (3,1): -1, (3,2):+1} """ 1) Complete the function policy evaluation below and use it on example_pi! The function takes as input a policy pi, and an MDP (including its transition model, reward and discounting factor gamma), and gives as output the value function for this specific policy in the MDP. Use equation (1) in the lecture slides! """ def policy_evaluation(pi, V, mdp, k=20): """Return an updated value function V for each state in the MDP """ R, T, gamma = mdp.R, mdp.T, mdp.gamma # retrieve reward, transition model and gamma from the MDP for i in range(k): # iterative update of V for s in mdp.states: V[s] = R(s) + gamma action = pi[s] probabilities = T(s, action) aux = 0 for p, state in probabilities: aux += p * V[state] V[s] += aux # raise NotImplementedError # implement iterative policy evaluation here return V def policy_evaluation(pi, V, mdp, k=20): """Return an updated value function V for each state in the MDP """ R, T, gamma = mdp.R, mdp.T, mdp.gamma # retrieve reward, transition model and gamma from the MDP for i in range(k): # iterative update of V for s in mdp.states: V[s] = R(s) + gamma * sum([p * V[s1] for (p, s1) in T(s, pi[s])]) return V R = gridworld.R T = gridworld.T V=policy_evaluation(example_pi, example_V, gridworld) gridworld.policy_plot(example_pi) print(V) gridworld.v_plot(V) """ 2) Complete the function value iteration below and use it to compute the optimal value function for the gridworld. The function takes as input the MDP (including reward function and transition model) and is supposed to compute the optimal value function using the value iteration algorithm presented in the lecture. Use the function best_policy to compute to compute the optimal policy under this value function! """ def value_iteration(mdp, epsilon=0.0001): "Solving an MDP by value iteration. epsilon determines the convergence criterion for stopping" V1 = dict([(s, 0) for s in mdp.states]) # initialize value function R, T, gamma = mdp.R, mdp.T, mdp.gamma while True: V = V1.copy() delta = 0 for s in mdp.states: raise NotImplementedError # implement the value iteration step here delta = max(delta, abs(V1[s] - V[s])) if delta < epsilon: return V def argmax(seq, fn): best = seq[0]; best_score = fn(best) for x in seq: x_score = fn(x) if x_score > best_score: best, best_score = x, x_score return best def expected_utility(a, s, V, mdp): "The expected utility of doing a in state s, according to the MDP and U." return sum([p * V[s1] for (p, s1) in mdp.T(s, a)]) def best_policy(mdp, V): """Given an MDP and a utility function V, best_policy determines the best policy, as a mapping from state to action. """ pi = {} for s in mdp.states: pi[s] = argmax(mdp.actions(s), lambda a:expected_utility(a, s, V, mdp)) return pi Vopt=value_iteration(gridworld) piopt = best_policy(gridworld, Vopt) gridworld.policy_plot(piopt) gridworld.v_plot(Vopt) """ 3) Complete the function policy iteration below and use it to compute the optimal policy for the gridworld. The function takes as input the MDP (including reward function and transition model) and is supposed to compute the optimal policy using the policy iteration algorithm presented in the lecture. Compare the result with what you got from running value_iteration and best_policy! """ def policy_iteration(mdp): "Solve an MDP by policy iteration" V = dict([(s, 0) for s in mdp.states]) pi = dict([(s, random.choice(mdp.actions(s))) for s in mdp.states]) while True: raise NotImplementedError # find value function for this policy unchanged = True for s in mdp.states: raise NotImplementedError # update policy if a != pi[s]: unchanged = False if unchanged: return pi ```
github_jupyter
<h2>Quadratic Regression Dataset - Linear Regression vs XGBoost</h2> Model is trained with XGBoost installed in notebook instance In the later examples, we will train using SageMaker's XGBoost algorithm. Training on SageMaker takes several minutes (even for simple dataset). If algorithm is supported on Python, we will try them locally on notebook instance This allows us to quickly learn an algorithm, understand tuning options and then finally train on SageMaker Cloud In this exercise, let's compare XGBoost and Linear Regression for Quadratic regression dataset ``` # Install xgboost in notebook instance. #### Command to install xgboost !conda install -y -c conda-forge xgboost import sys import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.metrics import mean_squared_error, mean_absolute_error # XGBoost import xgboost as xgb # Linear Regression from sklearn.linear_model import LinearRegression df = pd.read_csv('quadratic_all.csv') df.head() plt.plot(df.x,df.y,label='Target') plt.grid(True) plt.xlabel('Input Feature') plt.ylabel('Target') plt.legend() plt.title('Quadratic Regression Dataset') plt.show() train_file = 'quadratic_train.csv' validation_file = 'quadratic_validation.csv' # Specify the column names as the file does not have column header df_train = pd.read_csv(train_file,names=['y','x']) df_validation = pd.read_csv(validation_file,names=['y','x']) df_train.head() df_validation.head() plt.scatter(df_train.x,df_train.y,label='Training',marker='.') plt.scatter(df_validation.x,df_validation.y,label='Validation',marker='.') plt.grid(True) plt.xlabel('Input Feature') plt.ylabel('Target') plt.title('Quadratic Regression Dataset') plt.legend() plt.show() X_train = df_train.iloc[:,1:] # Features: 1st column onwards y_train = df_train.iloc[:,0].ravel() # Target: 0th column X_validation = df_validation.iloc[:,1:] y_validation = df_validation.iloc[:,0].ravel() # Create an instance of XGBoost Regressor # XGBoost Training Parameter Reference: # https://github.com/dmlc/xgboost/blob/master/doc/parameter.md regressor = xgb.XGBRegressor() regressor regressor.fit(X_train,y_train, eval_set = [(X_train, y_train), (X_validation, y_validation)]) eval_result = regressor.evals_result() training_rounds = range(len(eval_result['validation_0']['rmse'])) plt.scatter(x=training_rounds,y=eval_result['validation_0']['rmse'],label='Training Error') plt.scatter(x=training_rounds,y=eval_result['validation_1']['rmse'],label='Validation Error') plt.grid(True) plt.xlabel('Iteration') plt.ylabel('RMSE') plt.title('Training Vs Validation Error') plt.legend() plt.show() xgb.plot_importance(regressor) plt.show() ``` ## Validation Dataset Compare Actual and Predicted ``` result = regressor.predict(X_validation) result[:5] plt.title('XGBoost - Validation Dataset') plt.scatter(df_validation.x,df_validation.y,label='actual',marker='.') plt.scatter(df_validation.x,result,label='predicted',marker='.') plt.grid(True) plt.legend() plt.show() # RMSE Metrics print('XGBoost Algorithm Metrics') mse = mean_squared_error(df_validation.y,result) print(" Mean Squared Error: {0:.2f}".format(mse)) print(" Root Mean Square Error: {0:.2f}".format(mse**.5)) # Residual # Over prediction and Under Prediction needs to be balanced # Training Data Residuals residuals = df_validation.y - result plt.hist(residuals) plt.grid(True) plt.xlabel('Actual - Predicted') plt.ylabel('Count') plt.title('XGBoost Residual') plt.axvline(color='r') plt.show() # Count number of values greater than zero and less than zero value_counts = (residuals > 0).value_counts(sort=False) print(' Under Estimation: {0}'.format(value_counts[True])) print(' Over Estimation: {0}'.format(value_counts[False])) # Plot for entire dataset plt.plot(df.x,df.y,label='Target') plt.plot(df.x,regressor.predict(df[['x']]) ,label='Predicted') plt.grid(True) plt.xlabel('Input Feature') plt.ylabel('Target') plt.legend() plt.title('XGBoost') plt.show() ``` ## Linear Regression Algorithm ``` lin_regressor = LinearRegression() lin_regressor.fit(X_train,y_train) ``` Compare Weights assigned by Linear Regression. Original Function: 5*x**2 -23*x + 47 + some noise Linear Regression Function: -15.08 * x + 709.86 Linear Regression Coefficients and Intercepts are not close to actual ``` lin_regressor.coef_ lin_regressor.intercept_ result = lin_regressor.predict(df_validation[['x']]) plt.title('LinearRegression - Validation Dataset') plt.scatter(df_validation.x,df_validation.y,label='actual',marker='.') plt.scatter(df_validation.x,result,label='predicted',marker='.') plt.grid(True) plt.legend() plt.show() # RMSE Metrics print('Linear Regression Metrics') mse = mean_squared_error(df_validation.y,result) print(" Mean Squared Error: {0:.2f}".format(mse)) print(" Root Mean Square Error: {0:.2f}".format(mse**.5)) # Residual # Over prediction and Under Prediction needs to be balanced # Training Data Residuals residuals = df_validation.y - result plt.hist(residuals) plt.grid(True) plt.xlabel('Actual - Predicted') plt.ylabel('Count') plt.title('Linear Regression Residual') plt.axvline(color='r') plt.show() # Count number of values greater than zero and less than zero value_counts = (residuals > 0).value_counts(sort=False) print(' Under Estimation: {0}'.format(value_counts[True])) print(' Over Estimation: {0}'.format(value_counts[False])) # Plot for entire dataset plt.plot(df.x,df.y,label='Target') plt.plot(df.x,lin_regressor.predict(df[['x']]) ,label='Predicted') plt.grid(True) plt.xlabel('Input Feature') plt.ylabel('Target') plt.legend() plt.title('LinearRegression') plt.show() ``` Linear Regression is showing clear symptoms of under-fitting Input Features are not sufficient to capture complex relationship <h2>Your Turn</h2> You can correct this under-fitting issue by adding relavant features. 1. What feature will you add and why? 2. Complete the code and Test 3. What performance do you see now? ``` # Specify the column names as the file does not have column header df_train = pd.read_csv(train_file,names=['y','x']) df_validation = pd.read_csv(validation_file,names=['y','x']) df = pd.read_csv('quadratic_all.csv') ``` # Add new features ``` # Place holder to add new features to df_train, df_validation and df # if you need help, scroll down to see the answer # Add your code X_train = df_train.iloc[:,1:] # Features: 1st column onwards y_train = df_train.iloc[:,0].ravel() # Target: 0th column X_validation = df_validation.iloc[:,1:] y_validation = df_validation.iloc[:,0].ravel() lin_regressor.fit(X_train,y_train) ``` Original Function: -23*x + 5*x**2 + 47 + some noise (rewritten with x term first) ``` lin_regressor.coef_ lin_regressor.intercept_ result = lin_regressor.predict(X_validation) plt.title('LinearRegression - Validation Dataset') plt.scatter(df_validation.x,df_validation.y,label='actual',marker='.') plt.scatter(df_validation.x,result,label='predicted',marker='.') plt.grid(True) plt.legend() plt.show() # RMSE Metrics print('Linear Regression Metrics') mse = mean_squared_error(df_validation.y,result) print(" Mean Squared Error: {0:.2f}".format(mse)) print(" Root Mean Square Error: {0:.2f}".format(mse**.5)) print("***You should see an RMSE score of 30.45 or less") df.head() # Plot for entire dataset plt.plot(df.x,df.y,label='Target') plt.plot(df.x,lin_regressor.predict(df[['x','x2']]) ,label='Predicted') plt.grid(True) plt.xlabel('Input Feature') plt.ylabel('Target') plt.legend() plt.title('LinearRegression') plt.show() ``` ## Solution for under-fitting add a new X**2 term to the dataframe syntax: df_train['x2'] = df_train['x']**2 df_validation['x2'] = df_validation['x']**2 df['x2'] = df['x']**2 ### Tree Based Algorithms have a lower bound and upper bound for predicted values ``` # True Function def quad_func (x): return 5*x**2 -23*x + 47 # X is outside range of training samples # New Feature: Adding X^2 term X = np.array([-100,-25,25,1000,5000]) y = quad_func(X) df_tmp = pd.DataFrame({'x':X,'y':y,'x2':X**2}) df_tmp['xgboost']=regressor.predict(df_tmp[['x']]) df_tmp['linear']=lin_regressor.predict(df_tmp[['x','x2']]) df_tmp plt.scatter(df_tmp.x,df_tmp.y,label='Actual',color='r') plt.plot(df_tmp.x,df_tmp.linear,label='LinearRegression') plt.plot(df_tmp.x,df_tmp.xgboost,label='XGBoost') plt.legend() plt.xlabel('X') plt.ylabel('y') plt.title('Input Outside Range') plt.show() # X is inside range of training samples X = np.array([-15,-12,-5,0,1,3,5,7,9,11,15,18]) y = quad_func(X) df_tmp = pd.DataFrame({'x':X,'y':y,'x2':X**2}) df_tmp['xgboost']=regressor.predict(df_tmp[['x']]) df_tmp['linear']=lin_regressor.predict(df_tmp[['x','x2']]) df_tmp # XGBoost Predictions have an upper bound and lower bound # Linear Regression Extrapolates plt.scatter(df_tmp.x,df_tmp.y,label='Actual',color='r') plt.plot(df_tmp.x,df_tmp.linear,label='LinearRegression') plt.plot(df_tmp.x,df_tmp.xgboost,label='XGBoost') plt.legend() plt.xlabel('X') plt.ylabel('y') plt.title('Input within range') plt.show() ``` <h2>Summary</h2> 1. In this exercise, we compared performance of XGBoost model and Linear Regression on a quadratic dataset 2. The relationship between input feature and target was non-linear. 3. XGBoost handled it pretty well; whereas, linear regression was under-fitting 4. To correct the issue, we had to add additional features for linear regression 5. With this change, linear regression performed much better XGBoost can detect patterns involving non-linear relationship; whereas, algorithms like linear regression may need complex feature engineering
github_jupyter
# Table of Contents <div class="toc" style="margin-top: 1em;"><ul class="toc-item" id="toc-level0"><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Purpose" data-toc-modified-id="Purpose-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Purpose</a></span></li><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Requirements" data-toc-modified-id="Requirements-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Requirements</a></span><ul class="toc-item"><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Abstract-Stakeholder" data-toc-modified-id="Abstract-Stakeholder-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Abstract Stakeholder</a></span></li><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Actual-Stakeholder" data-toc-modified-id="Actual-Stakeholder-2.2"><span class="toc-item-num">2.2&nbsp;&nbsp;</span>Actual Stakeholder</a></span></li></ul></li><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Dependencies" data-toc-modified-id="Dependencies-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Dependencies</a></span><ul class="toc-item"><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#R-installation" data-toc-modified-id="R-installation-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>R installation</a></span></li><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#An-R-kernel-for-Jupyter-notebooks" data-toc-modified-id="An-R-kernel-for-Jupyter-notebooks-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>An R kernel for Jupyter notebooks</a></span></li><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Load-R-libraries-for-the-analyses" data-toc-modified-id="Load-R-libraries-for-the-analyses-3.3"><span class="toc-item-num">3.3&nbsp;&nbsp;</span>Load R libraries for the analyses</a></span></li></ul></li><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Analyses" data-toc-modified-id="Analyses-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Analyses</a></span><ul class="toc-item"><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#First-Leaf" data-toc-modified-id="First-Leaf-4.1"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>First Leaf</a></span><ul class="toc-item"><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Inputs" data-toc-modified-id="Inputs-4.1.1"><span class="toc-item-num">4.1.1&nbsp;&nbsp;</span>Inputs</a></span></li><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Outputs" data-toc-modified-id="Outputs-4.1.2"><span class="toc-item-num">4.1.2&nbsp;&nbsp;</span>Outputs</a></span><ul class="toc-item"><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Histogram" data-toc-modified-id="Histogram-4.1.2.1"><span class="toc-item-num">4.1.2.1&nbsp;&nbsp;</span>Histogram</a></span></li><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Boxplots" data-toc-modified-id="Boxplots-4.1.2.2"><span class="toc-item-num">4.1.2.2&nbsp;&nbsp;</span>Boxplots</a></span></li><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Ridgeline-Plots" data-toc-modified-id="Ridgeline-Plots-4.1.2.3"><span class="toc-item-num">4.1.2.3&nbsp;&nbsp;</span>Ridgeline Plots</a></span></li></ul></li></ul></li><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#First-Bloom" data-toc-modified-id="First-Bloom-4.2"><span class="toc-item-num">4.2&nbsp;&nbsp;</span>First Bloom</a></span></li></ul></li><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Code" data-toc-modified-id="Code-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Code</a></span></li><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Provenance" data-toc-modified-id="Provenance-6"><span class="toc-item-num">6&nbsp;&nbsp;</span>Provenance</a></span></li><li><span><a href="http://localhost:8888/notebooks/work/bisdev/phenology-baps/spring-indices/annual-indices-of-spring.ipynb#Citations" data-toc-modified-id="Citations-7"><span class="toc-item-num">7&nbsp;&nbsp;</span>Citations</a></span></li></ul></div> # Purpose This [biogeographical analysis package](https://github.com/usgs-bis/nbmdocs/blob/master/docs/baps.rst) (BAP) uses the [USA National Phenology Network](https://www.usanpn.org/usa-national-phenology-network) (USA-NPN)'s modeled information on phenological changes to inform and support management decisions on the timing and coordination of season-specific activities within the boundaries of a user-specified management unit. While various categories of phenological information are applicable to the seasonal allocation of resources, this package focuses on one of those, USA-NPN's modeled spring indices of first leaf and first bloom. The use case for design and development of the BAP was that of a resource manager using this analysis package and USA-NPN's Extended Spring Indices to guide the timing and location of treaments within their protected area. # Requirements ## Abstract Stakeholder Stakeholders for the information produced by this analysis package are people making decisions based on the timing of seasonal events at a specific location. Examples include resource managers, health professionals, and recreationalists. Note: For more on the concept of "Abstract Stakeholder" please see this [reference](https://github.com/usgs-bis/nbmdocs/blob/master/docs/baps.rst#abstract-stakeholder). ## Actual Stakeholder To be determined Note: For more on the concept of "Actual Stakeholder" see this [reference](https://github.com/usgs-bis/nbmdocs/blob/master/docs/baps.rst#actual-stakeholder). # Dependencies This notebook was developed using the R software environment. Several R software packages are required to run this scientific code in a Jupyter notebook. An R kernel for Jupyter notebooks is also required. ## R installation Guidance on installing the R software environment is available at the [R Project](https://www.r-project.org). Several R libraries, listed below, are used for the analyses and visualizations in this notebook. General instructions for finding and installing libraries are also provided at the [R Project](https://www.r-project.org) website. ## An R kernel for Jupyter notebooks This notebook uses [IRkernel](https://irkernel.github.io). At the time of this writing (2018-05-06), Karlijn Willems provides excellent guidance on installing the IRkernel and running R in a Jupyter notebook in her article entitled ["Jupyter And R Markdown: Notebooks With R"](https://www.datacamp.com/community/blog/jupyter-notebook-r#markdown) ## Load R libraries for the analyses ``` library(tidyverse) library(ggplot2) library(ggridges) library(jsonlite) library(viridis) ``` # Analyses An understanding of the USA National Phenology Network's suite of [models and maps](https://www.usanpn.org/data/maps) is required to properly use this analysis package and to assess the results. The Extended Spring Indices, the model used to estimate the timing of "first leaf" and "first bloom" events for early spring indicator species at a specific location, are detailed on this [page](https://www.usanpn.org/data/spring_indices) of the USA-NPN website. Note both indices are based on the 2013 version of the underlying predictive model (Schwartz et al. 2013). The current model and its antecedents are described on the USA-NPN site and in peer-reviewed literatire (Ault et al. 2015, Schwartz 1997, Schwartz et al. 2006, Schwartz et al. 2013). Crimmins et al. (2017) documents the USA National Phenology Network gridded data products used in this analysis package. USA-NPN also provides an assessment of Spring Index uncertainty and error with their [Spring Index and Plausibility Dashboard](https://www.usanpn.org/data/si-x_plausibility). ## First Leaf This analysis looks at the timing of First Leaf or leaf out for a specific location as predicted by the USA-NPN Extended Spring Indices models (https://www.usanpn.org/data/spring_indices, accessed 2018-01-27). The variable *average_leaf_prism* which is based on [PRISM](http://www.prism.oregonstate.edu) temperature data was used for this analysis. ### Inputs The operational BAP prototype retrieves data in real-time from the [USA National Phenology Network](https://www.usanpn.org)'s Web Processing Service (WPS) using a developer key issued by USA-NPN. Their WPS allows a key holder to request and retrieve model output values for a specified model, area of interest and time period. Model output for the variable *average_leaf_prism* was retrieved 2018-01-27. The area of interest, Yellowstone National Park, was analyzed using information from the [Spatial Feature Registry](https://github.com/usgs-bis/nbmdocs/blob/master/docs/bis.rst). The specified time period was 1981 to 2016. This notebook provides a lightly processsed version of that retrieval, [YellowstoneNP-1981-2016-processed-numbers.json](./YellowstoneNP-1981-2016-processed-numbers.json), for those who do not have a personal developer key. ``` # transform the BIS emitted JSON into something ggplot2 can work with yell <- read_json("YellowstoneNP-1981-2016-processed-numbers.json", simplifyDataFrame = TRUE, simplifyVector = TRUE, flatten = TRUE) yelldf <- as_tibble(yell) yellt <- gather(yelldf, Year, DOY) ``` ### Outputs #### Histogram Produce a histogram of modeled results for Yellowstone National Park for all years within the specified period of interest (1981 to 2016). The visualization allows the user to assess the range and distribution of all the modeled values for the user-selected area for the entire, user-specified time period. Here, the modeled Leaf Spring Index values for each of the grid that fall within the boundary of Yellowstone National Park are binned by Day of Year for the entire period of interest. The period of interest is 1981 to 2016 inclusive. Dotted vertical lines indicating the minimum (green), mean (red), and maximum (green) values of the dataset are also shown. ``` # produce a histogram for all years ggplot(yellt, aes(DOY)) + geom_histogram(binwidth = 1, color = "grey", fill = "lightblue") + ggtitle("Histogram of First Leaf Spring Index, Yellowstone National Park (1981 - 2016)") + geom_vline(aes(xintercept=mean(DOY, na.rm=T)), color = "red", linetype = "dotted", size = 0.5) + geom_vline(aes(xintercept = min(DOY, na.rm=T)), color = "green", linetype = "dotted", size = 0.5) + geom_vline(aes(xintercept = max(DOY, na.rm=T)), color = "green", linetype = "dotted", size = 0.5) ``` This notebook uses the [ggplot2](https://ggplot2.tidyverse.org/index.html) R library to produce the above histogram. Operationalized, online versions of this visualization should be based on the guidance provided by the ggplot2 developers. See their section entitled [*Histograms and frequency polygons*](https://ggplot2.tidyverse.org/reference/geom_histogram.html) for details and approaches. The webpage provides links to their source code. Also, note the modeled grid cell values are discrete and should be portrayed as such in an operationalized graphic. #### Boxplots Produce a multiple boxplot display of the modeled results for Yellowstone National Park for each year within the specified time period. Each individual boxplot portrays that year's median, hinges, whiskers and "outliers". The multiple boxplot display allows the user to explore the distribution of modeled spring index values through time. ``` # Produce a mulitple boxplot display with a boxplot for each year ggplot(yellt, aes(y = DOY, x = Year, group = Year)) + geom_boxplot() + geom_hline(aes(yintercept = median(DOY, na.rm=T)), color = "blue", linetype = "dotted", size = 0.5) + ggtitle("DRAFT: Boxplot of Spring Index, Yellowstone National Park (1981 to 2016)") ``` This notebook uses the [ggplot2](https://ggplot2.tidyverse.org/index.html) R library to produce the multiple boxplot above. Base any operationalized, online versions of this visualization on the guidance provided by the ggplot2 developers. See their section entitled [*A box and whiskers plot (in the style of Tukey)*](https://ggplot2.tidyverse.org/reference/geom_boxplot.html) for details and approaches. Links to their source code are available at that web location. #### Ridgeline Plots Produce ridgeline plots for each year to better visualize changes in the distributions over time. ``` # ridgeline plot with gradient coloring based on day of year for each available year ggplot(yellt, aes(x = DOY, y = Year, group = Year, fill = ..x..)) + geom_density_ridges_gradient(scale = 3, rel_min_height = 0.01, gradient_lwd = 1.0, from = 80, to = 180) + scale_x_continuous(expand = c(0.01, 0)) + scale_y_continuous(expand = c(0.01, 0)) + scale_fill_viridis(name = "Day of\nYear", option = "D", direction = -1) + labs(title = 'DRAFT: Spring Index, Yellowstone National Park', subtitle = 'Annual Spring Index by Year for the Period 1981 to 2016\nModel Results from the USA National Phenology Network', y = 'Year', x = 'Spring Index (Day of Year)', caption = "(model results retrieved 2018-01-26)") + theme_ridges(font_size = 12, grid = TRUE) + geom_vline(aes(xintercept = mean(DOY, na.rm=T)), color = "red", linetype = "dotted", size = 0.5) + geom_vline(aes(xintercept = min(DOY, na.rm=T)), color = "green", linetype = "dotted", size = 0.5) + geom_vline(aes(xintercept = max(DOY, na.rm=T)), color = "green", linetype = "dotted", size = 0.5) ``` This notebook used the [ggridges](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html) R package to produce the ridgeline above. Base any operationalized, online versions of this visualization on the guidance provided by the ggridges developer. See their R package vignette [Introduction to ggridges](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html) for details and approaches. Source code is available at their [GitHub repo](https://github.com/clauswilke/ggridges). ## First Bloom This analysis looks at the timing of First Bloom for a specific location as predicted by the USA-NPN Extended Spring Indices models (https://www.usanpn.org/data/spring_indices, accessed 2018-01-27). The variable *average_bloom_prism* which is based on [PRISM](http://www.prism.oregonstate.edu) temperature data was used for this analysis. Output visualizations and implementation notes follow the approach and patterns used for First Leaf: histograms, multiple boxplots and ridgeline plots. # Code Code used for this notebook is available at the [usgs-bcb/phenology-baps](https://github.com/usgs-bcb/phenology-baps) GitHub repository. # Provenance This prototype analysis package was a collaborative development effort between USGS [Core Science Analytics, Synthesis, and Libraries](https://www.usgs.gov/science/mission-areas/core-science-systems/csasl?qt-programs_l2_landing_page=0#qt-programs_l2_landing_page) and the [USA National Phenology Network](https://www.usanpn.org). Members of the scientific development team met and discussed use cases, analyses, and visualizations during the third quarter of 2016. Model output choices as well as accessing the information by means of the USA-NPN Web Processing Service were also discussed at that time. This notebook was based upon those group discussions and Tristan Wellman's initial ideas for processing and visualizing the USA-NPN spring index data. That initial body of work and other suppporting code is available at his GitHub repository, [TWellman/USGS_BCB-NPN-Dev-Space](https://github.com/TWellman/USGS_BCB-NPN-Dev-Space). This notebook used the [ggplot2](https://ggplot2.tidyverse.org/index.html) R library to produce the histograms and boxplots and ridgeplots. The ggplot2 developers provide online guidance and links to their source code for these at [*Histograms and frequency polygons*](https://ggplot2.tidyverse.org/reference/geom_histogram.html) and [*A box and whiskers plot (in the style of Tukey)*](https://ggplot2.tidyverse.org/reference/geom_boxplot.html). The [ggridges](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html) R package is used to produce the ridgeline plot. Usage is described in the R package vignette [Introduction to ggridges](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html). The underlying source code is available at the author Claus O. Wilke's [GitHub repo](https://github.com/clauswilke/ggridges). Software developers at the Fort Collins Science Center worked with members of the team to operationalize the scientific code and make it publically available on the web. An initial prototype application is available at (https://my-beta.usgs.gov/biogeography/). # Citations Ault, T. R., M. D. Schwartz, R. Zurita-Milla, J. F. Weltzin, and J. L. Betancourt (2015): Trends and natural variability of North American spring onset as evaluated by a new gridded dataset of spring indices. Journal of Climate 28: 8363-8378. Crimmins, T.M., R.L. Marsh, J. Switzer, M.A. Crimmins, K.L. Gerst, A.H. Rosemartin, and J.F. Weltzin. 2017. USA National Phenology Network gridded products documentation. U.S. Geological Survey Open-File Report 2017–1003. DOI: 10.3133/ofr20171003. Monahan, W. B., A. Rosemartin, K. L. Gerst, N. A. Fisichelli, T. Ault, M. D. Schwartz, J. E. Gross, and J. F. Weltzin. 2016. Climate change is advancing spring onset across the U.S. national park system. Ecosphere 7(10):e01465. 10.1002/ecs2.1465 Schwartz, M. D. 1997. Spring index models: an approach to connecting satellite and surface phenology. Phenology in seasonal climates I, 23-38. Schwartz, M.D., R. Ahas, and A. Aasa, 2006. Onset of spring starting earlier across the Northern Hemisphere. Global Change Biology, 12, 343-351. Schwartz, M. D., T. R. Ault, and J. L. Betancourt, 2013: Spring onset variations and trends in the continental United States: past and regional assessment using temperature-based indices. International Journal of Climatology, 33, 2917–2922, 10.1002/joc.3625.
github_jupyter
# Analyse data with Python Pandas Welcome to this Jupyter Notebook! Today you'll learn how to import a CSV file into a Jupyter Notebook, and how to analyse already cleaned data. This notebook is part of the course Python for Journalists at [datajournalism.com](https://datajournalism.com/watch/python-for-journalists). The data used originally comes from [the Electoral Commission website](http://search.electoralcommission.org.uk/Search?currentPage=1&rows=10&sort=AcceptedDate&order=desc&tab=1&open=filter&et=pp&isIrishSourceYes=false&isIrishSourceNo=false&date=Reported&from=&to=&quarters=2018Q12&rptPd=3617&prePoll=false&postPoll=false&donorStatus=individual&donorStatus=tradeunion&donorStatus=company&donorStatus=unincorporatedassociation&donorStatus=publicfund&donorStatus=other&donorStatus=registeredpoliticalparty&donorStatus=friendlysociety&donorStatus=trust&donorStatus=limitedliabilitypartnership&donorStatus=impermissibledonor&donorStatus=na&donorStatus=unidentifiabledonor&donorStatus=buildingsociety&register=ni&register=gb&optCols=Register&optCols=IsIrishSource&optCols=ReportingPeriodName), but is edited for training purposes. The edited dataset is available on the course website. ## About Jupyter Notebooks and Pandas Right now you're looking at a Jupyter Notebook: an interactive, browser based programming environment. You can use these notebooks to program in R, Julia or Python - as you'll be doing later on. Read more about Jupyter Notebook in the [Jupyter Notebook Quick Start Guide](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html). To analyse up our data, we'll be using Python and Pandas. Pandas is an open-source Python library - basically an extra toolkit to go with Python - that is designed for data analysis. Pandas is flexible, easy to use and has lots of useful functions built right in. Read more about Pandas and its features in [the Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/). That Pandas functions in ways similar to both spreadsheets and SQL databases (though the latter won't be discussed in this course), makes it beginner friendly. :) **Notebook shortcuts** Within Jupyter Notebooks, there are some shortcuts you can use. If you'll be using more notebooks for your data analysis in the future, you'll remember these shortcuts soon enough. :) * `esc` will take you into command mode * `a` will insert cell above * `b` will insert cell below * `shift then tab` will show you the documentation for your code * `shift and enter` will run your cell * ` d d` will delete a cell **Pandas dictionary** * **dataframe**: dataframe is Pandas speak for a table with a labeled y-axis, also known as an index. (The index usually starts at 0.) * **series**: a series is a list, a series can be made of a single column within a dataframe. Before we dive in, a little more about Jupyter Notebooks. Every notebooks is made out of cells. A cell can either contain Markdown text - like this one - or code. In the latter you can execute your code. To see what that means, type the following command in the next cell `print("hello world")`. ``` print("hello world") ``` ## Getting started In the module 'Clean data' from this course, we cleaned up a dataset with donations to political parties in the UK. Now, we're going to analyse the data in that dataset. Let's start by importing the Pandas library, using `import pandas as pd`. ``` import pandas as pd ``` Now, import the cleaned dataset, use `df = pd.read_csv('/path/to/file_with_clean_data.csv')`. ## Importing data ``` df = pd.read_csv('results_clean.csv') ``` Let's see if the data is anything like you'd expect, use `df.head()`, `df.tail()` or `df.sample()`. ``` df.head(10) ``` Whoops! When we saved the data after cleaning it, the index was saved in an unnamed column. With importing, Pandas added a new index... Let's get rid of the 'Unnamed: 0' column. Drop it like it's hot... `df = df.drop('Unnamed: 0', 1)`. ``` df = df.drop('Unnamed: 0', 1) ``` Let's see if this worked, use `df.head()`, `df.tail()` or `df.sample()`. ``` df.tail(10) ``` Now, if this looks better, let's get started and analyse some data. # Analyse data ## Statistical summary In the module Clean data, you already saw the power of `df.describe()`. This function gives a basic statistical summary of every column in the dataset. It will give you even more information when you tell the function that you want everything included, like this: `df.describe(include='all')` ``` df.describe(include='all') ``` For columns with numeric values, `df.describe()` will give back the most information, here's a full list of the parameters and their meaning: **df.describe() parameters** * **count**: number of values in that column * **unique**: number of unique values in that column * **top**: first value in that column * **freq**: the most common value’s frequency * **mean**: average * **std**: standard deviation * **min**: minimum value, lowest value in the column * **25%**: first percentile * **50%**: second percentile, this is the same as the median * **75%**: thirth percentile * **max**: maximum value, highest value in the column If a column does not contain numeric value, only those parameters that are applicable are returned. Python gives you NaN-values when that's the case - NaN is short for Not a Number. Notice that 'count' is 300 for every column. This means that every column has a value for every row in the dataset. How do I know? I looked at the total number of rows, using `df.shape`. ``` df.shape ``` ## Filter Let's try to filter the dataframe based on the value in the Value column. You can do this using `df[df['Value'] > 10000 ]`. This will give you a dataframe with only donations from 10.000 pound or more. ``` df[df['Value'] > 10000 ] ``` ## Sort Let's try to sort the data. Using the command `df.sort_values(by='column_name')` will sort the dataframe based on the column of your choosing. Sorting by default happens ascending, from small to big. In case you want to see the sorting from big to small, descending, you'll have to type: `df.sort_values(by='column_name', ascending=False)`. Now, let's try to sort the dataframe based on the number in the Value column it's easy to find out who made the biggest donation. The above commands will sort the dataframe by a column, but - since we never asked our notebook to - won't show the data. To sort the data and show us the new order of the top 10, we'll have to combine the command with `.head(10)` like this: `df.sort_values(by='column_name').head(10)`. Now, what would you type if you want to see the 10 smallest donations? ``` df.sort_values(by='Value').head(10) ``` If you want to see the biggest donations made, there are two ways to do that. You could use `df.tail(10)` to see the last 10 rows of the dataframe as it is now. Since the dataframe is ordered from small to big donation, the biggest donations will be in the last 10 rows. Another way of doing this, is using `df.sort_values(by='Value', ascending=False).head(10)`. This would sort the dataframe based on the Value column from big to small. Personally I prefer the latter... ``` df.sort_values(by='Value', ascending=False).head(10) ``` ## Sum Wow! There are some big donations in our dataset. If you want to know how much money was donated in total, you need to get the sum of the column Value. Use `df['Value'].sum()`. ``` df['Value'].sum() ``` ## Count Let's look at the receivers of all this donation money. Use `df['RegulatedEntityName'].count()` to count the number of times a regulated entity received a donation. ``` df['RegulatedEntityName'].count() ``` Not really what we were looking for, right? Using `.count()` gives you the number of values in a column. Not the number of appearances per unique value in the column. You'll need to use `df['RegulatedEntityName'].value_counts()` if you want to know that... ``` df['RegulatedEntityName'].value_counts() ``` Ok. Let's see if you really understand the difference between `.value_counts()` and `.count()` If you want to know how many donors have donated, you should count the values in the DonorName column. Do you use `df['DonorName'].value_counts()` or `df['DonorName'].count()`? When in doubt, try both. Remember: we're using a Jupyter Notebook here. It's a **Notebook**, so you can't go wrong here. :) ``` df['DonorName'].count() df['DonorName'].value_counts() ``` Interesting: apparently Ms Jane Mactaggart, Mr Duncan Greenland, and Lord Charles Falconer of Thoroton have donated most often. Let's look into that... ## Groupby If you're familiar with Excel, you probably heard of 'pivot tables'. Python Pandas has a function very similar to those pivot tables. Let's start with a small refresher: pivot tables are summaries of a dataset inside a new table. Huh? That's might be a lot to take in. Look at our example: data on donations to political parties in the UK. If we want to know how much each unique donor donated, we are looking for a specific summary of our dataset. To get the anwer to this question: 'How much have Ms Jane Mactaggart, Mr Duncan Greenland, and Lord Charles Falconer of Thoroton donated in total?' We need Pandas to sum up all donation for every donor in the dataframe. In a way, this is a summary of the original dataframe by grouping values by in this case the column DonorName. Using Python this can be done using the group by function. Let's create a new dataframe called donors, that has all donors and the total sum of their donations in there. Use `donors = df.groupby('DonorName')['Value'].sum()`. This is a combination of several functions: group data by 'DonorName', and sum the data in the 'Value' column... ``` donors = df.groupby('DonorName')['Value'].sum() donors.head(10) ``` To see if it worked, you'll have to add `donors.head(10)`, otherwise your computer won't know that you actually want to see the result of your effort. :) ## Pivot tables But Python has it's own pivot table as well. You can get a similar result in a better looking table using de `df.pivot_table` function. Here's a perfectly fine `.pivot_table` example: `df.pivot_table(values="Value", index="DonorName", columns="Year", aggfunc='sum').sort_values(2018).head(10)` Let's go over this code before running it. What will `df.pivot_table(values="Value", index="DonorName", columns="Year", aggfunc='sum').sort_values(2018).head(10)` actually do? For the dataframe called df, create a pivot table where: - the values in the pivot table should be based on the Value column - the index of the pivot table should be base don the DonorName column, in other words: create a row for every unique value in the DonorName column - create a new column for every unique value in the Year column - aggregate the data that fills up these columns (from the Value column, see?) by summing it for every row. Are you ready to try it yourself? ``` df.pivot_table(values="Value", index="DonorName", columns="Year", aggfunc='sum').sort_values(2018).head(10) ``` ## Save your data Now that we've put all this work into cleaning our dataset, let's save a copy. Off course Pandas has a nifty command for that too. Use `dataframe.to_csv('filename.csv', encoding='utf8')`. Be ware: use a different name than the filename of the original data file, or it will be overwritten. ``` df.to_csv('results clean - pivot table.csv') ``` In case you want to check if a new file was created in your directory, you can use the `pwd` and `ls` commands. At the beginning of this module, we used these commands to print the working directory (`pwd`) and list the content of the working directory (`ls`). First, use `pwd` to see in which folder - also known as directory - you are: ``` pwd ``` Now use `ls` to get a list of all files in this directory. If everything worked your newly saved datafile should be among the files in the list. ``` ls ```
github_jupyter
# Example File: In this package, we show three examples: <ol> <li>4 site XY model</li> <li>4 site Transverse Field XY model with random coefficients</li> <li><b> Custom Hamiltonian from OpenFermion </b> </li> </ol> ## Clone and Install The Repo via command line: ``` git clone https://github.com/kemperlab/cartan-quantum-synthesizer.git cd ./cartan-quantum-synthesizer/ pip install . ``` # Building Custom Hamiltonians In this example, we will use OpenFermion to generate a Hubbard Model Hamiltonian, then use the Jordan-Wigner methods of OpenFermion and some custom functions to feed the output into the Cartan-Quantum-Synthesizer package ## Step 1: Build the Hamiltonian in OpenFermion ``` from CQS.methods import * from CQS.util.IO import tuplesToMatrix import openfermion from openfermion import FermionOperator t = 1 U = 8 mu = 1 systemSize = 4 #number of qubits neeed #2 site, 1D lattice, indexed as |↑_0↑_1↓_2↓_3> #Hopping terms H = -t*(FermionOperator('0^ 1') + FermionOperator('1^ 0') + FermionOperator('2^ 3') + FermionOperator('3^ 2')) #Coulomb Terms H += U*(FermionOperator('0^ 0 2^ 2') + FermionOperator('1^ 1 3^ 3')) #Chemical Potential H += -mu*(FermionOperator('0^ 0') + FermionOperator('1^ 1') + FermionOperator('2^ 2') + FermionOperator('3^ 3')) print(H) #Jordan Wigner Transform HPauli = openfermion.jordan_wigner(H) print(HPauli) #Custom Function to convert OpenFermion operators to a format readable by CQS: #Feel free to use or modify this code, but it is not built into the CQS package def OpenFermionToCQS(H, systemSize): """ Converts the Operators to a list of (PauliStrings) Args: H(obj): The OpenFermion Operator systemSize (int): The number of qubits in the system """ stringToTuple = { 'X': 1, 'Y': 2, 'Z': 3 } opList = [] coList = [] for op in H.terms.keys(): #Pulls the operator out of the QubitOperator format coList.append(H.terms[op]) opIndexList = [] opTypeDict = {} tempTuple = () for (opIndex, opType) in op: opIndexList.append(opIndex) opTypeDict[opIndex] = opType for index in range(systemSize): if index in opIndexList: tempTuple += (stringToTuple[opTypeDict[index]],) else: tempTuple += (0,) opList.append(tempTuple) return (coList, opList) #The new format looks like: print(OpenFermionToCQS(HPauli, systemSize)) #Now, we can put all this together: #Step 1: Create an Empty Hamiltonian Object HubbardH = Hamiltonian(systemSize) #Use Hamiltonian.addTerms to build the Hubbard model Hamiltonian: HubbardH.addTerms(OpenFermionToCQS(HPauli, systemSize)) #This gives: HubbardH.getHamiltonian(type='printText') #There's an IIII term we would rather not deal with, so we can remove it like this: HubbardH.removeTerm((0,0,0,0)) #This gives: print('Idenity/Global Phase removed:') HubbardH.getHamiltonian(type='printText') #Be careful choosing an involution, because it might now decompose such that the Hamiltonian is in M: try: HubbardC = Cartan(HubbardH) except Exception as e: print('Default Even/Odd Involution does not work:') print(e) print('countY does work though. g = ') HubbardC = Cartan(HubbardH, involution='countY') print(HubbardC.g) ```
github_jupyter
``` import requests import simplejson as json import pandas as pd import numpy as np import os import json import math from openpyxl import load_workbook df={"mapping":{ "Afferent / Efferent Arteriole Endothelial": "Afferent Arteriole Endothelial Cell", "Ascending Thin Limb": "Ascending Thin Limb Cell", "Ascending Vasa Recta Endothelial": "Ascending Vasa Recta Endothelial Cell", "B": "B cell", "Classical Dendritic": "Dendritic Cell (classical)", "Connecting Tubule": "Connecting Tubule Cell", "Connecting Tubule Intercalated Type A": "Connecting Tubule Intercalated Cell Type A", "Connecting Tubule Principal": "Connecting Tubule Principal Cell", "Cortical Collecting Duct Intercalated Type A": "Collecting Duct Intercalated Cell Type A", "Cortical Collecting Duct Principal": "Cortical Collecting Duct Principal Cell", "Cortical Thick Ascending Limb": "Cortical Thick Ascending Limb Cell", "Cortical Vascular Smooth Muscle / Pericyte": "Vascular Smooth Muscle Cell/Pericyte (general)", "Cycling Mononuclear Phagocyte": "Monocyte", "Descending Thin Limb Type 1": "Descending Thin Limb Cell Type 1", "Descending Thin Limb Type 2": "Descending Thin Limb Cell Type 2", "Descending Thin Limb Type 3": "Descending Thin Limb Cell Type 3", "Descending Vasa Recta Endothelial": "Descending Vasa Recta Endothelial Cell", "Distal Convoluted Tubule Type 1": "Distal Convoluted Tubule Cell Type 1", "Distal Convoluted Tubule Type 2": "Distal Convoluted Tubule Cell Type 1", "Fibroblast": "Fibroblast", "Glomerular Capillary Endothelial": "Glomerular Capillary Endothelial Cell", "Inner Medullary Collecting Duct": "Inner Medullary Collecting Duct Cell", "Intercalated Type B": "Intercalated Cell Type B", "Lymphatic Endothelial": "Lymphatic Endothelial Cell", "M2 Macrophage": "M2-Macrophage", "Macula Densa": "Macula Densa cell", "Mast": "Mast cell", "Medullary Fibroblast": "Fibroblast", "Medullary Thick Ascending Limb": "Medullary Thick Ascending Limb Cell", "Mesangial": "Mesangial Cell", "Monocyte-derived": "Monocyte", "Natural Killer T": "Natural Killer T Cell", "Neutrophil": "Neutrophil", "Non-classical monocyte": "non Classical Monocyte", "Outer Medullary Collecting Duct Intercalated Type A": "Outer Medullary Collecting Duct Intercalated Cell Type A", "Outer Medullary Collecting Duct Principal": "Outer Medullary Collecting Duct Principal Cell", "Papillary Tip Epithelial": "Endothelial", "Parietal Epithelial": "Parietal Epithelial Cell", "Peritubular Capilary Endothelial": "Peritubular Capillary Endothelial Cell", "Plasma": "Plasma cell", "Plasmacytoid Dendritic": "Dendritic Cell (plasmatoid)", "Podocyte": "Podocyte", "Proximal Tubule Epithelial Segment 1": "Proximal Tubule Epithelial Cell Segment 1", "Proximal Tubule Epithelial Segment 2": "Proximal Tubule Epithelial Cell Segment 2", "Proximal Tubule Epithelial Segment 3": "Proximal Tubule Epithelial Cell Segment 3", "Renin-positive Juxtaglomerular Granular": "Juxtaglomerular granular cell (Renin positive)", "Schwann / Neural": "other", "T": "T Cell", "Vascular Smooth Muscle / Pericyte": "Vascular Smooth Muscle Cell/Pericyte (general)"} } df = pd.DataFrame(df) df.reset_index(inplace=True) df.rename(columns = {"index":"AZ.CT/LABEL","mapping":"ASCTB.CT/LABEL"},inplace = True) df.reset_index(inplace=True,drop=True) df len(df) df_1=pd.read_excel('./Data/Final/kidney.xlsx',sheet_name='Final_Matches') len(df_1) df_1 ```
github_jupyter
Single-channel CSC (Constrained Data Fidelity) ============================================== This example demonstrates solving a constrained convolutional sparse coding problem with a greyscale signal $$\mathrm{argmin}_\mathbf{x} \sum_m \| \mathbf{x}_m \|_1 \; \text{such that} \; \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2 \leq \epsilon \;,$$ where $\mathbf{d}_{m}$ is the $m^{\text{th}}$ dictionary filter, $\mathbf{x}_{m}$ is the coefficient map corresponding to the $m^{\text{th}}$ dictionary filter, and $\mathbf{s}$ is the input image. ``` from __future__ import print_function from builtins import input import pyfftw # See https://github.com/pyFFTW/pyFFTW/issues/40 import numpy as np from sporco import util from sporco import signal from sporco import plot plot.config_notebook_plotting() import sporco.metric as sm from sporco.admm import cbpdn ``` Load example image. ``` img = util.ExampleImages().image('kodim23.png', scaled=True, gray=True, idxexp=np.s_[160:416,60:316]) ``` Highpass filter example image. ``` npd = 16 fltlmbd = 10 sl, sh = signal.tikhonov_filter(img, fltlmbd, npd) ``` Load dictionary and display it. ``` D = util.convdicts()['G:12x12x36'] plot.imview(util.tiledict(D), fgsz=(7, 7)) ``` Set [admm.cbpdn.ConvMinL1InL2Ball](http://sporco.rtfd.org/en/latest/modules/sporco.admm.cbpdn.html#sporco.admm.cbpdn.ConvMinL1InL2Ball) solver options. ``` epsilon = 3.4e0 opt = cbpdn.ConvMinL1InL2Ball.Options({'Verbose': True, 'MaxMainIter': 200, 'HighMemSolve': True, 'LinSolveCheck': True, 'RelStopTol': 5e-3, 'AuxVarObj': False, 'rho': 50.0, 'AutoRho': {'Enabled': False}}) ``` Initialise and run CSC solver. ``` b = cbpdn.ConvMinL1InL2Ball(D, sh, epsilon, opt) X = b.solve() print("ConvMinL1InL2Ball solve time: %.2fs" % b.timer.elapsed('solve')) ``` Reconstruct image from sparse representation. ``` shr = b.reconstruct().squeeze() imgr = sl + shr print("Reconstruction PSNR: %.2fdB\n" % sm.psnr(img, imgr)) ``` Display low pass component and sum of absolute values of coefficient maps of highpass component. ``` fig = plot.figure(figsize=(14, 7)) plot.subplot(1, 2, 1) plot.imview(sl, title='Lowpass component', fig=fig) plot.subplot(1, 2, 2) plot.imview(np.sum(abs(X), axis=b.cri.axisM).squeeze(), cmap=plot.cm.Blues, title='Sparse representation', fig=fig) fig.show() ``` Display original and reconstructed images. ``` fig = plot.figure(figsize=(14, 7)) plot.subplot(1, 2, 1) plot.imview(img, title='Original', fig=fig) plot.subplot(1, 2, 2) plot.imview(imgr, title='Reconstructed', fig=fig) fig.show() ``` Get iterations statistics from solver object and plot functional value, ADMM primary and dual residuals, and automatically adjusted ADMM penalty parameter against the iteration number. ``` its = b.getitstat() fig = plot.figure(figsize=(20, 5)) plot.subplot(1, 3, 1) plot.plot(its.ObjFun, xlbl='Iterations', ylbl='Functional', fig=fig) plot.subplot(1, 3, 2) plot.plot(np.vstack((its.PrimalRsdl, its.DualRsdl)).T, ptyp='semilogy', xlbl='Iterations', ylbl='Residual', lgnd=['Primal', 'Dual'], fig=fig) plot.subplot(1, 3, 3) plot.plot(its.Rho, xlbl='Iterations', ylbl='Penalty Parameter', fig=fig) fig.show() ```
github_jupyter
# 六軸史都華平台模擬 ``` import numpy as np import pandas as pd from sympy import * init_printing(use_unicode=True) import matplotlib as mpl import matplotlib.pyplot as plt from mpl_toolkits import mplot3d import seaborn as sns sns.set() %matplotlib inline ``` ### Stewart Func ``` α, β, γ = symbols('α β γ') x, y, z = symbols('x y z') r, R, w, W, t = symbols('r R w W t') # x, y, z三軸固定軸旋轉矩陣 rotx = lambda θ : Matrix([[1, 0, 0], [0, cos(θ), -sin(θ)], [0, sin(θ), cos(θ)]]) roty = lambda θ : Matrix([[cos(θ), 0, sin(θ)], [0, 1, 0], [-sin(θ), 0, cos(θ)]]) rotz = lambda θ : Matrix([[cos(θ), -sin(θ), 0], [sin(θ), cos(θ), 0], [0, 0, 1]]) # 姿勢產生器 固定座標旋轉 def poses(α, β, γ): return rotz(γ)*roty(β)*rotx(α) # 質心位置產生器 def posit(x, y, z): return Matrix([x, y, z]) # Basic 6 軸點設定 def basic(r, w): b1 = Matrix([w/2, r, 0]) b2 = Matrix([-w/2, r, 0]) b3 = rotz(pi*2/3)*b1 b4 = rotz(pi*2/3)*b2 b5 = rotz(pi*2/3)*b3 b6 = rotz(pi*2/3)*b4 return [b1, b2, b3, b4, b5, b6] # 平台 6 軸點設定 def plat(r, w, pos=poses(0, 0, 0), pit=posit(0, 0, 5)): p1 = Matrix([-w/2, r, 0]) p1 = rotz(-pi/3)*p1 p2 = Matrix([[-1, 0, 0], [0, 1, 0], [0, 0, 1]])*p1 p3 = rotz(pi*2/3)*p1 p4 = rotz(pi*2/3)*p2 p5 = rotz(pi*2/3)*p3 p6 = rotz(pi*2/3)*p4 lst = [p1, p2, p3, p4, p5, p6] for n in range(6): lst[n] = (pos*lst[n]) + pit return lst # 六軸長度 def leng(a, b): if a.ndim == 1: return (((a - b)**2).sum())**0.5 else: return (((a - b)**2).sum(1))**0.5 ``` ### Basic & plane ``` basic(R, W) plat(r, w, poses(α, β, γ), posit(x, y, z)) ``` ### 設定α, β, γ, x, y, z 可設定為時間t的函數 ``` pos = poses(sin(2*t), cos(t), 0) pit = posit(x, y, z) baspt = basic(R, W) baspt = np.array(baspt) pltpt = plat(r, w, pos, pit) pltpt = np.array(pltpt) ``` ### 6軸的長度 (示範軸1) ``` length = leng(baspt ,pltpt) ``` ### 設定參數 r = 10, R = 5, w = 2, W = 2, x = 0, y = 0, z = 10 ``` x1 = length[0].subs([(r, 10), (R, 5), (w, 2), (W, 2), (x, 0), (y, 0), (z, 10)]) x1 ``` ### 微分一次獲得速度變化 ``` dx1 = diff(x1, t) dx1 ``` ### 微分兩次獲得加速度變化 ``` ddx1 = diff(dx1, t) ddx1 ``` ### 畫圖 ``` tline = np.linspace(0, 2*np.pi, 361) xlst = lambdify(t, x1, 'numpy') dxlst = lambdify(t, dx1, 'numpy') ddxlst = lambdify(t, ddx1, 'numpy') plt.rcParams['figure.figsize'] = [16, 6] plt.plot(tline, xlst(tline), label = 'x') plt.plot(tline, dxlst(tline), label = 'v') plt.plot(tline, ddxlst(tline), label = 'a') plt.ylabel('Length') plt.xlabel('Time') plt.legend() ```
github_jupyter
<a href="https://colab.research.google.com/github/davidnoone/PHYS332_FluidExamples/blob/main/04_ColloidViscosity_SOLUTION.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Colloids and no-constant viscosity (1d case) Colloids are a group of materials that include small particles emersen in a fluid (could be liquid or gas). Some examples include emulsions and gels, which emcompass substances like milk, whipped cream, styrofoam, jelly, and some glasses. We imagine a "pile" of substance that undergoes a viscous dissipation following a simple law. $$ \frac{\partial h}{\partial t} = \frac{\partial}{\partial x} \left( \nu \frac{\partial h}{\partial x} \right) $$ where h is the depth of the colloidal material, and $\nu$ is the kinematic viscosity, at constant density. The viscosity follows a simple law: $$ \nu = \nu_0 (1 + 2.5 \phi) $$ where $\phi$ is the volume fraction. In the case that $\phi = \phi(h)$ some there are some non-linear consequences on the viscous flow. ## Work tasks 1. Create a numerical model of viscous flow using finite differences. (Hint: You have basically done this in previous excersizes). 2. Compute the height at a futute time under the null case where $\phi = 0$ 3. Repeate the experiment for the case that $\phi$ has positive and negative values of a range of sizes. You may choose to assume $\phi = \pm h/h_{max}$, where $h_{max} $ is the maximum value of your initial "pile". 4. Compare the results of your experiments. ``` import math import numpy as np import matplotlib.pyplot as plt ``` The main component of this problem is developing an equation to calculate vicsous derivative using finite differences. Notice that unlike the previous case in which the viscosity is constant, here we must keep the viscosity within derivative estimates. We wish to evaluate the second grid on a discrete grid between 0 and 2$\pi$, with steps $\Delta x$ indicated by index $i = 0, N-1$. (Note python has arrays startning at index 0 Using a finite difference method, we can obtain scheme with second order accuracy as: $$ \frac{\partial}{\partial x} \left( \nu \frac{\partial h}{\partial x} \right)= \frac{F_{i+1/2} - F_{i-1/2}}{(\Delta x)} $$ where we have used fluxes $F$ at the "half" locations defined by $$ F_{i-1/2} = \nu_{i-1/2} (\frac{h_{i} - h_{i-1})}{\Delta x} $$ and $$ F_{i+1/2} = \nu_{i+1/2} (\frac{h_{i+1} - h_{i})}{\Delta x} $$ Notice that $\nu$ needs to be determined at the "half" locations, which means that $h$ needs to be estimated at those points. It is easiest to assume it is the average of the values on either side. i.e., $h_{i-1/2} = 0.5(h_i + h_{i-1})$, and similarly for $h_{i+1/2}$. We are working with periodic boundary conditions so we may "wrap arround" such that $f_{-1} = f_{N-1}$ and $f_{N} = f_{1}$. You may choose to do this with python array indices, or take a look at the numpy finction [numpy.roll()](https://numpy.org/doc/stable/reference/generated/numpy.roll.html). ``` # Create a coordinate, which is periodix npts = 50 xvals = np.linspace(-math.pi,math.pi,npts) dx = 2*math.pi/npts hmax = 1.0 # maximum height of pile ["metres"] vnu0 = 0.5 # reference viscosity [m2/sec] # Define the an initial "pile" of substance: a gaussian width = 3*dx h = hmax*np.exp(-(xvals/width)**2) ``` Make a plot showing your initial vorticity: vorticity as a function of X ``` # PLot! fig = plt.figure() plt.plot(xvals,h) ``` Let's define a function to perform some number of time steps ``` def viscosity(h): global hmax phi = 0. phi = h/hmax vnu = vnu0*(1 + 2.5*phi) return vnu def forward_step(h_old, nsteps, dtime): for n in range(nsteps): dhdt = np.zeros_like(h_old) hmid = 0.5*(h_old + np.roll(h_old,+1)) # at indices i:nx1 = i-1/2 upward vmid = viscosity(hmid) hflx = vmid*(h_old - np.roll(h_old,+1))/dx dhdt = (np.roll(hflx,-1) - hflx)/dx # hflx(i+1/2) - hflx(i-1/2) h_new = h_old + dtime*dhdt return h_new ``` Use your integration function to march forward in time to check the analytic result. Note, the time step must be small enough for a robust solution. It must be: $$ \Delta t \lt \frac{(\Delta x)^2} {4 \eta} $$ ``` dt_max = 0.25*dx*dx/vnu0 print("maximum allowed dtime is ",dt_max," seconds") dtime = 0.005 nsteps = 200 # step forward more steps, and plot again nlines = 10 for iline in range(nlines): h = forward_step(h.copy(),nsteps,dtime) plt.plot(xvals,h) #Rerun with a different phi (redefine the viscosity function - but clumsy) def viscosity(h): global hmax phi = -h/hmax # Try this? vnu = vnu0*(1 + phi) return vnu # step forward more steps, and plot again h = hmax*np.exp(-(xvals/width)**2) for iline in range(nlines): h = forward_step(h.copy(),nsteps,dtime) plt.plot(xvals,h) ``` #Results! How did the shapes differ with thinning vs thickening?
github_jupyter
``` import warnings warnings.filterwarnings('ignore') import pandas as pd from plotnine import * %ls test = pd.read_csv('shoppingmall_info_template.csv', encoding='cp949') test.shape test.head() test.columns test.head() test['Category'] = test['Name'].str.extract(r'^(스타필드|롯데몰)\s.*') test import folium geo_df = test map = folium.Map(location=[geo_df['Latitude'].mean(), geo_df['Longitude'].mean()], zoom_start=17, tiles='stamenwatercolor') for i, row in geo_df.iterrows(): mall_name = row['Name'] + '-' + row['Address'] if row['Category'] == '롯데몰': icon = folium.Icon(color='red',icon='info-sign') elif row['Category'] == '스타필드': icon = folium.Icon(color='blue',icon='info-sign') else: icon = folium.Icon(color='green',icon='info-sign') folium.Marker( [row['Latitude'], row['Longitude']], popup=mall_name, icon=icon ).add_to(map) # for n in geo_df.index: # mall_name = geo_df['Name'][n] \ # + '-' + geo_df['Address'][n] # folium.Marker([geo_df['latitude'][n], # geo_df['longitude'][n]], # popup=mall_name, # icon=folium.Icon(color='pink',icon='info-sign'), # ).add_to(map) # folium.Marker( # location=[37.545614, 127.224064], # popup = mall_name, # icon=folium.Icon(icon='cloud') # ).add_to(map) # folium.Marker( # location=[37.511683, 127.059108], # popup = mall_name, # icon=folium.Icon(icon='cloud') # ).add_to(map) map import folium geo_df = test map = folium.Map(location=[geo_df['Latitude'].mean(), geo_df['Longitude'].mean()], zoom_start=6, tiles='stamenwatercolor') for i, row in geo_df.iterrows(): mall_name = row['Name'] + '-' + row['Address'] if row['Category'] == '롯데몰': icon = folium.Icon(color='red',icon='info-sign') elif row['Category'] == '스타필드': icon = folium.Icon(color='blue',icon='info-sign') else: icon = folium.Icon(color='green',icon='info-sign') folium.Marker( [row['Latitude'], row['Longitude']], popup=mall_name, icon=icon ).add_to(map) map.choropleth( geo_data = test, name='choropleth', data = test, columns=['Traffic_no'], key_on='feature.id', fill_color='Set1', fill_opacity=0.7, line_opacity=0.2, ) folium.LayerControl().add_to(map) test['Traffic_no'] # 교통 가능 수 import plotly.plotly as py map = folium.Map( location=[geo_df['Latitude'].mean(), geo_df['Longitude'].mean()], zoom_start=6, tiles='stamenwatercolor') map.choropleth( geo_data = test, name='choropleth', data = test, columns=['Traffic_no'], key_on='feature.id', fill_color='Set1', fill_opacity=0.7, line_opacity=0.2, ) folium.LayerControl().add_to(map) ```
github_jupyter
<h3>Implementation Of Doubly Linked List in Python</h3> <p> It is similar to Single Linked List but the only Difference lies that it where in Single Linked List we had a link to the next data element ,In Doubly Linked List we also have the link to previous data element with addition to next link</p> <ul> <b>It has three parts</b> <li> Data Part :This stores the data element of the Node and also consists the reference of the address of the next and previous element </li> <li>Next :This stores the address of the next pointer via links</li> <li>Previous :This stores the address of the previous element/ pointer via links </li> </ul> ``` from IPython.display import Image Image(filename='C:/Users/prakhar/Desktop/Python_Data_Structures_and_Algorithms_Implementations/Images/DoublyLinkedList.png',width=800, height=400) #save the images from github to your local machine and then give the absolute path of the image class Node: # All the operation are similar to Single Linked List with just an addition of Previous which points towards the Prev address via links def __init__(self, data=None, next=None, prev=None): self.data = data self.next = next self.prev = prev class Double_LL(): def __init__(self): self.head = None def print_forward(self): if self.head is None: print("Linked List is empty") return itr = self.head llstr = '' while itr: llstr += str(itr.data) + ' --> ' itr = itr.next print(llstr) def print_backward(self): if self.head is None: print("Linked list is empty") return last_node = self.get_last_node() itr = last_node llstr = '' while itr: llstr += itr.data + '-->' itr = itr.prev print("Link list in reverse: ", llstr) def get_last_node(self): itr = self.head while itr.next: itr = itr.next return itr def get_length(self): count = 0 itr = self.head while itr: count += 1 itr = itr.next return count def insert_at_begining(self, data): if self.head == None: node = Node(data, self.head, None) self.head = node else: node = Node(data, self.head, None) self.head.prev = node self.head = node def insert_at_end(self, data): if self.head is None: self.head = Node(data, None, None) return itr = self.head while itr.next: itr = itr.next itr.next = Node(data, None, itr) def insert_at(self, index, data): if index < 0 or index > self.get_length(): raise Exception("Invalid Index") if index == 0: self.insert_at_begining(data) return count = 0 itr = self.head while itr: if count == index - 1: node = Node(data, itr.next, itr) if node.next: node.next.prev = node itr.next = node break itr = itr.next count += 1 def remove_at(self, index): if index < 0 or index >= self.get_length(): raise Exception("Invalid Index") if index == 0: self.head = self.head.next self.head.prev = None return count = 0 itr = self.head while itr: if count == index: itr.prev.next = itr.next if itr.next: itr.next.prev = itr.prev break itr = itr.next count += 1 def insert_values(self, data_list): self.head = None for data in data_list: self.insert_at_end(data) from IPython.display import Image Image(filename='C:/Users/prakhar/Desktop/Python_Data_Structures_and_Algorithms_Implementations/Images/DLL_insertion_at_beginning.png',width=800, height=400) #save the images from github to your local machine and then give the absolute path of the image ``` <p> Insertion at Beginning</p> ``` from IPython.display import Image Image(filename='C:/Users/prakhar/Desktop/Python_Data_Structures_and_Algorithms_Implementations/Images/DLL_insertion.png',width=800, height=400) #save the images from github to your local machine and then give the absolute path of the image ``` <p>Inserting Node at Index</p> ``` if __name__ == '__main__': ll = Double_LL() ll.insert_values(["banana", "mango", "grapes", "orange"]) ll.print_forward() ll.print_backward() ll.insert_at_end("figs") ll.print_forward() ll.insert_at(0, "jackfruit") ll.print_forward() ll.insert_at(6, "dates") ll.print_forward() ll.insert_at(2, "kiwi") ll.print_forward() ```
github_jupyter
``` #import libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set(font_scale = 1.2, style = 'darkgrid') %matplotlib inline import warnings warnings.filterwarnings("ignore") #change display into using full screen from IPython.core.display import display, HTML display(HTML("<style>.container { width:100% !important; }</style>")) #import .csv file data = pd.read_csv('FB_data_platform.csv') ``` # 1. Data Exploration & Cleaning ### 1. Have a first look at data ``` data.head() #Consider only those records where amount spent > 0 data = data[(data['Amount spent (INR)'] > 0)] data.shape ``` ### 2. Drop Columns that are extra ``` #We see that Reporting Starts and Reporting Ends are additional columns which we don't require. So we drop them data.drop(['Reporting ends','Reporting starts'],axis = 1, inplace = True) #look at the data again data.head() #check rows and columns in data data.shape ``` #### So, there are 62 rows and 14 columns in the data ### 3. Deal with Null Values ``` #let's look if any column has null values data.isnull().sum() ``` #### From this we can infer that some columns have Null values (basically blank). Let's look at them: **1. Results & Result Type:** This happened when there was no conversion (Result). **2. Result rate, Cost per result:** As both these metrics depend on Result, so these are also blank. This was bound to happen because not every single day and every ad got a result (conversion). **So it is safe to replace all nulls in Results and Result rate column with 0.** ``` #Fill all blanks in Results with 0 data['Results'] = data['Results'].fillna(0) data['Result rate'] = data['Result rate'].fillna(0) #check how many nulls are still there data.isnull().sum() ``` #### Voila! Results & Result rate column has no nulls now. Let's see what column Results Type is all about. ``` data['Result Type'].value_counts() ``` So we infer that 'Result Type' is basically the type of conversion event taking place. It can be either Page Like, Post Like, On-Facebook Lead, Custom Conversion etc. **Since, we are analysing just one campaign here, we can drop this column as it has same meaning throughout data set.** If we were analysing multiple campaigns, with different objectives, then keeping this column would have made sense. ``` #Drop Result Type column from data data.drop(['Result Type'],axis = 1, inplace = True) #check how many nulls are still there data.isnull().sum() ``` Now we need to deal with **Cost per result**. The cases where CPA is Null means that there was no conversion. So ideally, in these cases the CPA should be very high (in case a conversion actually happened). #### So, let's leave this column as it is because we can't assign any value for records where no conversion happened. ``` data.info() ``` # 2. Feature Engineering ### 1. We can divide Frequency in buckets ``` data['Frequency'] = data['Frequency'].apply(lambda x:'1 to 2' if x<2 else '2 to 3' if x>=2 and x<3 else '3 to 4' if x>=3 and x<4 else '4 to 5' if x>=4 and x<5 else 'More than 5') data.head() ``` ### 2. Split Ad name into Ad Format and Ad Headline ``` data['Ad_name'] = data['Ad name'] data.head() data[['Ad Format','Ad Headline']] = data.Ad_name.str.split("-", expand = True) data.head() data.drop(['Ad name','Ad_name'],axis = 1, inplace = True) data.head() data.info(verbose = 1) data.to_csv('Clean_Data_Platform.csv') ``` ## Now our data is clean. Here are our features that we will use for analysis - **1. Campaign Name** - Name of campaign - **2. Ad Set Name** - Targeting - **3. Platform** - Facebook / Instagram - **4. Results** - How many conversions were achieved - **5. Amount spent** - How much money was spent on ad campaign - **6. Frequency** - On an average how many times did one user see the ad - **7. Result Rate** - Conversion Rate - **8. CTR** - Click Through Rate - **9. CPM** - Cost per 1000 impressions - **10. Cost per result** - Average Cost required for 1 conversion - **11. Ad Format** - Whether the ad crative is **Image/Video/Carousel** - **12. Ad Headline** - The headline used in ad So, our target variable here is **Results** and we will analyse the effect of other variable on our target variable. # 3. Relationship Visualization ### 1. Effect of Platform + Ad Format ``` # increase figure size plt.figure(figsize = (20, 5)) # subplot 1 plt.subplot(1, 6, 1) sns.barplot(x = 'Platform', y = 'Amount spent (INR)', data = data, hue = 'Ad Format', estimator = np.sum, ci = None) plt.title("Total Amount Spent") plt.xticks(rotation = 90) # subplot 2 plt.subplot(1, 6, 2) sns.barplot(x = 'Platform', y = 'Clicks (all)', data = data, hue = 'Ad Format', estimator = np.sum, ci = None) plt.title("Total Clicks") plt.xticks(rotation = 90) # subplot 3 plt.subplot(1, 6, 3) sns.barplot(x = 'Platform', y = 'CTR (all)', data = data, hue = 'Ad Format', estimator = np.sum, ci = None) plt.title("CTR") plt.xticks(rotation = 90) # subplot 4 plt.subplot(1, 6, 4) sns.barplot(x = 'Platform', y = 'Results', data = data, hue = 'Ad Format', estimator = np.sum, ci = None) plt.title("Total Conversions") plt.xticks(rotation = 90) # subplot 5 plt.subplot(1, 6, 5) sns.barplot(x = 'Platform', y = 'Cost per result', data = data, hue = 'Ad Format', estimator = np.sum, ci = None) plt.title("Avg. Cost per Conversion") plt.xticks(rotation = 90) # subplot 6 plt.subplot(1,6, 6) sns.barplot(x = 'Platform', y = 'Result rate', data = data, hue = 'Ad Format', estimator = np.sum, ci = None) plt.title("CVR") plt.xticks(rotation = 90) plt.tight_layout(pad = 0.7) plt.show() ``` ### 2. Effect of Platform + Frequency ``` data = data.sort_values(by = ['Frequency']) # increase figure size plt.figure(figsize = (25, 6)) # subplot 1 plt.subplot(1, 6, 1) sns.barplot(hue = 'Platform', y = 'Amount spent (INR)', data = data, x = 'Frequency', estimator = np.sum, ci = None) plt.title("Total Amount Spent") plt.xticks(rotation = 90) # subplot 2 plt.subplot(1, 6, 2) sns.barplot(hue = 'Platform', y = 'Clicks (all)', data = data, x = 'Frequency', estimator = np.sum, ci = None) plt.title("Total Clicks") plt.xticks(rotation = 90) # subplot 3 plt.subplot(1, 6, 3) sns.barplot(hue = 'Platform', y = 'CTR (all)', data = data, x = 'Frequency', estimator = np.sum, ci = None) plt.title("CTR") plt.xticks(rotation = 90) # subplot 4 plt.subplot(1, 6, 4) sns.barplot(hue = 'Platform', y = 'Results', data = data, x = 'Frequency', estimator = np.sum, ci = None) plt.title("Total Conversions") plt.xticks(rotation = 90) # subplot 5 plt.subplot(1, 6, 5) sns.barplot(hue = 'Platform', y = 'Cost per result', data = data, x = 'Frequency', estimator = np.sum, ci = None) plt.title("Avg. Cost per Conversion") plt.xticks(rotation = 90) # subplot 6 plt.subplot(1, 6, 6) sns.barplot(hue = 'Platform', y = 'Result rate', data = data, x = 'Frequency', estimator = np.sum, ci = None) plt.title("CVR") plt.xticks(rotation = 90) plt.tight_layout(pad = 0.7) plt.show() ```
github_jupyter
# Along isopycnal spice gradients Here we consider the properties of spice gradients along isopycnals. We do this using the 2 point differences and their distributions. This is similar (generalization) to the spice gradients that Klymak et al 2015 considered. ``` import numpy as np import xarray as xr import glidertools as gt from cmocean import cm as cmo import gsw import matplotlib.pyplot as plt plt.style.use('seaborn-colorblind') plt.rcParams['font.size'] = 16 ds_659_rho = xr.open_dataset('data/sg_O2_659_isopycnal_grid_4m_27_sept_2021.nc') ds_660_rho = xr.open_dataset('data/sg_O2_660_isopycnal_grid_4m_27_sept_2021.nc') # compute spice # Pick constant alpha and beta for convenience (can always update later) alpha_659 = gsw.alpha(ds_659_rho.SA, ds_659_rho.CT, ds_659_rho.ctd_pressure) alpha_660 = gsw.alpha(ds_660_rho.SA, ds_660_rho.CT, ds_660_rho.ctd_pressure) #alpha = 8.3012133e-05 #beta = 0.00077351 # dCT_659 = ds_659_rho.CT - ds_659_rho.CT.mean('dives') dSA_659 = ds_659_rho.SA - ds_659_rho.SA.mean('dives') ds_659_rho['Spice'] = (2*alpha_659*dCT_659).rename('Spice') # remove a mean per isopycnal dCT_660 = ds_660_rho.CT - ds_660_rho.CT.mean('dives') dSA_660 = ds_660_rho.SA - ds_660_rho.SA.mean('dives') ds_660_rho['Spice'] = (2*alpha_660*dCT_660).rename('Spice') plt.figure(figsize=(12,6)) plt.subplot(211) ds_660_rho.Spice.sel(rho_grid=27.2, method='nearest').plot(label='27.2') ds_660_rho.Spice.sel(rho_grid=27.4, method='nearest').plot(label='27.4') ds_660_rho.Spice.sel(rho_grid=27.6, method='nearest').plot(label='27.6') plt.legend() plt.title('660') plt.subplot(212) ds_659_rho.Spice.sel(rho_grid=27.2, method='nearest').plot(label='27.2') ds_659_rho.Spice.sel(rho_grid=27.4, method='nearest').plot(label='27.4') ds_659_rho.Spice.sel(rho_grid=27.6, method='nearest').plot(label='27.6') plt.legend() plt.title('659') plt.tight_layout() ``` ### Analysis at a couple of single depths ``` #def great_circle_distance(lon1, lat1, lon2, lat2): def great_circle_distance(X1, X2): """Calculate the great circle distance between one or multiple pairs of points given in spherical coordinates. Spherical coordinates are expected in degrees. Angle definition follows standard longitude/latitude definition. This uses the arctan version of the great-circle distance function (en.wikipedia.org/wiki/Great-circle_distance) for increased numerical stability. Parameters ---------- lon1: float scalar or numpy array Longitude coordinate(s) of the first element(s) of the point pair(s), given in degrees. lat1: float scalar or numpy array Latitude coordinate(s) of the first element(s) of the point pair(s), given in degrees. lon2: float scalar or numpy array Longitude coordinate(s) of the second element(s) of the point pair(s), given in degrees. lat2: float scalar or numpy array Latitude coordinate(s) of the second element(s) of the point pair(s), given in degrees. Calculation of distances follows numpy elementwise semantics, so if an array of length N is passed, all input parameters need to be arrays of length N or scalars. Returns ------- distance: float scalar or numpy array The great circle distance(s) (in degrees) between the given pair(s) of points. """ # Change form of input to make compliant with pdist lon1 = X1[0] lat1 = X1[1] lon2 = X2[0] lat2 = X2[1] # Convert to radians: lat1 = np.array(lat1) * np.pi / 180.0 lat2 = np.array(lat2) * np.pi / 180.0 dlon = (lon1 - lon2) * np.pi / 180.0 # Evaluate trigonometric functions that need to be evaluated more # than once: c1 = np.cos(lat1) s1 = np.sin(lat1) c2 = np.cos(lat2) s2 = np.sin(lat2) cd = np.cos(dlon) # This uses the arctan version of the great-circle distance function # from en.wikipedia.org/wiki/Great-circle_distance for increased # numerical stability. # Formula can be obtained from [2] combining eqns. (14)-(16) # for spherical geometry (f=0). return ( 180.0 / np.pi * np.arctan2( np.sqrt((c2 * np.sin(dlon)) ** 2 + (c1 * s2 - s1 * c2 * cd) ** 2), s1 * s2 + c1 * c2 * cd, ) ) from scipy.spatial.distance import pdist ``` #### 27.4 ``` spatial_bins=np.logspace(2,6,17) def select_data(rho_sel): # function to find the dX and dSpice from the different data sets and merging them ds_sel = ds_660_rho.sel(rho_grid=rho_sel, method='nearest') lon_sel = ds_sel.longitude.values.reshape((-1,1)) lat_sel = ds_sel.latitude.values.reshape((-1,1)) time_sel = ds_sel.days.values.reshape((-1,1)) Spice_sel = ds_sel.Spice.values.reshape((-1,1)) Xvec = np.concatenate([lon_sel, lat_sel], axis=1) # mXn, where m is number of obs and n is dimension dX_660 = pdist(Xvec, great_circle_distance)*110e3 # convert to m dTime_660 = pdist(time_sel, 'cityblock') dSpice_660 = pdist(Spice_sel, 'cityblock') # we just want to know the abs diff ds_sel = ds_659_rho.sel(rho_grid=rho_sel, method='nearest') lon_sel = ds_sel.longitude.values.reshape((-1,1)) lat_sel = ds_sel.latitude.values.reshape((-1,1)) time_sel = ds_sel.days.values.reshape((-1,1)) Spice_sel = ds_sel.Spice.values.reshape((-1,1)) Xvec = np.concatenate([lon_sel, lat_sel], axis=1) # mXn, where m is number of obs and n is dimension dX_659 = pdist(Xvec, great_circle_distance)*110e3 # convert to m dTime_659 = pdist(time_sel, 'cityblock') dSpice_659 = pdist(Spice_sel, 'cityblock') # we just want to know the abs diff # combine data dX = np.concatenate([dX_659, dX_660]) dTime = np.concatenate([dTime_659, dTime_660]) dSpice = np.concatenate([dSpice_659, dSpice_660]) # condition cond = (dTime <= 2e-3*dX**(2/3)) return dX[cond], dSpice[cond] rho_sel = 27.25 dX_sel, dSpice_sel = select_data(rho_sel) # estimate pdfs Hspice, xedges, yedges = np.histogram2d(dX_sel, dSpice_sel/dX_sel, bins=(spatial_bins, np.logspace(-14, -6, 37))) xmid = 0.5*(xedges[0:-1] + xedges[1:]) ymid = 0.5*(yedges[0:-1] + yedges[1:]) #dX_edges = xedges[1:] - xedges[0:-1] #dY_edges = yedges[1:] - yedges[0:-1] Hspice_Xdnorm = Hspice/ Hspice.sum(axis=1).reshape((-1,1)) mean_dist = np.zeros((len(xmid,))) for i in range(len(xmid)): mean_dist[i] = np.sum(Hspice_Xdnorm[i, :]*ymid) plt.figure(figsize=(7,5)) plt.pcolor(xedges, yedges, Hspice_Xdnorm.T, norm=colors.LogNorm(vmin=1e-3, vmax=0.3), cmap=cmo.amp) plt.plot(xmid, mean_dist , linewidth=2., color='cyan', label='Mean') plt.plot(xmid, 1e-6*xmid**-.6, '--',linewidth=2, color='gray', label='L$^{-0.6}$') plt.colorbar(label='PDF') plt.xscale('log') plt.yscale('log') plt.xlabel('L [m]') plt.ylabel(r'$|dSpice/dx|$ [kg m$^{-4}$]') plt.legend(loc='lower left') plt.ylim([1e-13, 1e-6]) plt.xlim([1e2, 1e5]) plt.grid() plt.title('$\sigma$='+str(rho_sel)+'kg m$^{-3}$') plt.tight_layout() plt.savefig('./figures/figures_spice_gradients_panel1.pdf') ``` ### Compute the structure functions at many depths ### Structure functions Here we consider the structure functions; quantities like $<d\tau ^n>$. Power law scalings go as, at $k^{-\alpha}$ in power spectrum will appear at $r^{\alpha-1}$ in spectra. So a power law scaling of 2/3 corresponds to $-5/3$, while shallower than 2/3 would correspond to shallower. ``` rho_sels = ds_660_rho.rho_grid[0:-1:10] rho_sels # estimate the structure functions # We will do the distribution calculations at a few depths spatial_bins=np.logspace(2,6,17) S2 = np.zeros((len(rho_sels),len(spatial_bins)-1)) S4 = np.zeros((len(rho_sels),len(spatial_bins)-1)) for count, rho_sel in enumerate(rho_sels): print(count) dX_cond, dSpice_cond = select_data(rho_sel) # compute the structure functions for i in range(len(spatial_bins)-1): S2[count, i] = np.nanmean(dSpice_cond[ (dX_cond> spatial_bins[i]) & (dX_cond <= spatial_bins[i+1])]**2) S4[count, i] = np.nanmean(dSpice_cond[ (dX_cond> spatial_bins[i]) & (dX_cond <= spatial_bins[i+1])]**4) ``` Unlike the surface buoyancy gradients, there is less of a suggestion of saturation at the small scales. Suggesting that even if the is a natural limit to the smallest gradients (wave mixing or such), it is not reached at a few 100m. This result seems to be similar, regardless of the isopycnal we are considering (tried this by changing the density level manually). Things to try: - Second order structure functions (do they look more like k^-1 or k^-2?) - 4th order structure functions could also help as a summary metric ``` import matplotlib.colors as colors from matplotlib import ticker np.linspace(-4.5,0, 19) np.linspace(-11,-8, 22) plt.figure(figsize=(10, 7)) lev_exp = np.linspace(-11,-8, 22) levs = np.power(10, lev_exp) cnt = plt.contourf(xmid, rho_sels, S2, levels=levs, norm = colors.LogNorm(3e-11), extend='both', cmap=cmo.tempo_r) for c in cnt.collections: c.set_edgecolor("face") plt.xscale('log') plt.colorbar(ticks=[1e-11, 1e-10, 1e-9, 1e-8], label=r'$\left< \delta \tau ^2\right> $ [kg$^2$ m$^{-6}$]') plt.ylim([27.65, 27.15]) plt.xlim([3e2, 1e5]) plt.ylabel(r'$\rho$ [kg m$^{-3}$]') plt.xlabel('L [m]') plt.tight_layout() plt.savefig('figures/figure_iso_spec_freq_panel4.pdf') spatial_bins_mid = 0.5*(spatial_bins[0:-1] + spatial_bins[1:]) np.any(np.isnan(y)) # Fit slope npres = len(rho_sels) m_mean = np.zeros((npres,)) x = spatial_bins_mid[(spatial_bins_mid>=1e3) & (spatial_bins_mid<=40e3)] for i in range(npres): y = S2[i, (spatial_bins_mid>=1e3) & (spatial_bins_mid<=40e3)] if ~np.any(np.isnan(y)): m_mean[i],b = np.polyfit(np.log(x), np.log(y),1) else: m_mean[i] = np.nan np.mean(m_mean[20:60]) plt.figure(figsize=(3,7)) plt.plot(m_mean, rho_sels, color='k', linewidth=2) plt.vlines([0,2/3,1], 27.15, 27.65, linestyles='--') plt.gca().invert_yaxis() plt.ylim([27.65, 27.15]) plt.xlim([-.1,1.1]) plt.xlabel('Slope') plt.ylabel(r'$\rho$ [kg m$^{-3}$]') plt.tight_layout() plt.savefig('figures/figure_iso_spec_freq_panel5.pdf') np.linspace(2, 20, 19) plt.figure(figsize=(10, 7)) cnt = plt.contourf(xmid, rho_sels, S4/ S2**2, levels=np.linspace(2, 20, 19), extend='both', cmap=cmo.turbid_r) for c in cnt.collections: c.set_edgecolor("face") plt.xscale('log') plt.colorbar(ticks=[3, 6, 9, 12,15, 18], label=r'$\left< \delta \tau ^4 \right> / \left< \delta \tau ^2\right>^2 $ ') plt.ylim([27.65, 27.15]) plt.xlim([3e2, 1e5]) plt.ylabel(r'$\rho$ [kg m$^{-3}$]') plt.xlabel('L [m]') plt.tight_layout() plt.savefig('figures/figure_iso_spec_freq_panel6.pdf') #plt.gca().invert_yaxis() ``` The second order structure of spice follow as power law of about 2/3, which corresponds to about -5/3 slope of tracers. This is slightly at odds with the $k^{-2}$. scaling seen in wavenumber. However, note that this is still very far from $r^0$ (constant) that one might expect is the $k^{-1}$ case (which is what theory would predict).
github_jupyter
``` import pandas as pd from influxdb import DataFrameClient user = 'root' password = 'root' dbname = 'base47' host='localhost' port=32768 # Temporarily avoid line protocol time conversion issues #412, #426, #431. protocol = 'json' client = DataFrameClient(host, port, user, password, dbname) print("Create pandas DataFrame") df = pd.DataFrame(data=list(range(30)), index=pd.date_range(start='2017-11-16', periods=30, freq='H')) gas=pd.read_csv('data/gas_ft.csv',parse_dates=True,index_col='ts').drop('measurement_unit',axis=1) #client.create_database(dbname) client.query("show databases") #for i in range(len(gas)/500): n=5000 for i in range(5): print(i), print("Writing batch "+str(i)+" of "+str(n)+" elements to Influx | progress: "+str(round(i*100.0/(len(gas)/n),2)))+"%" client.write_points(gas[n*i:n*(i+1)], 'gas', protocol=protocol) #https://influxdb-python.readthedocs.io/en/latest/api-documentation.html#dataframeclient client.write_points(gas, 'gas', protocol=protocol, batch_size=5000) client.write_points(gas, 'gas2', protocol=protocol, tag_columns=['monitor_id'], field_columns=['measurement'], batch_size=5000) print("Write DataFrame with Tags") client.write_points(df, 'demo', {'k1': 'v1', 'k2': 'v2'}, protocol=protocol) print("Read DataFrame") client.query("select * from demo") print("Delete database: " + dbname) client.drop_database(dbname) import numpy as np import pandas as pd import time import requests url = 'http://localhost:32768/write' params = {"db": "base", "u": "root", "p": "root"} def read_data(): with open('data/gas_ft.csv') as f: return [x.split(',') for x in f.readlines()[1:]] a = read_data() a[0] #payload = "elec,id=500 value=24 2018-03-05T19:31:00.000Z\n" payload = "elec,id=500 value=24 "#+str(pd.to_datetime('2018-03-05T19:29:00.000Z\n').value // 10 ** 9) r = requests.post(url, params=params, data=payload) # -*- coding: utf-8 -*- """Tutorial how to use the class helper `SeriesHelper`.""" from influxdb import InfluxDBClient from influxdb import SeriesHelper # InfluxDB connections settings host = 'localhost' port = 8086 user = 'root' password = 'root' dbname = 'mydb' myclient = InfluxDBClient(host, port, user, password, dbname) # Uncomment the following code if the database is not yet created # myclient.create_database(dbname) # myclient.create_retention_policy('awesome_policy', '3d', 3, default=True) class MySeriesHelper(SeriesHelper): """Instantiate SeriesHelper to write points to the backend.""" class Meta: """Meta class stores time series helper configuration.""" # The client should be an instance of InfluxDBClient. client = myclient # The series name must be a string. Add dependent fields/tags # in curly brackets. series_name = 'events.stats.{server_name}' # Defines all the fields in this time series. fields = ['some_stat', 'other_stat'] # Defines all the tags for the series. tags = ['server_name'] # Defines the number of data points to store prior to writing # on the wire. bulk_size = 5 # autocommit must be set to True when using bulk_size autocommit = True # The following will create *five* (immutable) data points. # Since bulk_size is set to 5, upon the fifth construction call, *all* data # points will be written on the wire via MySeriesHelper.Meta.client. MySeriesHelper(server_name='us.east-1', some_stat=159, other_stat=10) MySeriesHelper(server_name='us.east-1', some_stat=158, other_stat=20) MySeriesHelper(server_name='us.east-1', some_stat=157, other_stat=30) MySeriesHelper(server_name='us.east-1', some_stat=156, other_stat=40) MySeriesHelper(server_name='us.east-1', some_stat=155, other_stat=50) # To manually submit data points which are not yet written, call commit: MySeriesHelper.commit() # To inspect the JSON which will be written, call _json_body_(): MySeriesHelper._json_body_() for metric in a[:]: payload = "elec,id="+str(metric[0])+" value="+str(metric[2])+" "+str(pd.to_datetime(metric[3]).value // 10 ** 9)+"\n" #payload = "water,id="+str(metric[0])+" value="+str(metric[2])+"\n" r = requests.post(url, params=params, data=payload) def read_data(): with open('data/water_ft.csv') as f: return [x.split(',') for x in f.readlines()[1:]] a = read_data() a[0] for metric in a[1000:3000]: #payload = "gas,id="+str(metric[0])+" value="+str(metric[2])+" "+str(pd.to_datetime(metric[3]).value // 10 ** 9)+"\n" payload = "water,id="+str(metric[0])+" value="+str(metric[2])+"\n" r = requests.post(url, params=params, data=payload) time.sleep(1) ```
github_jupyter
``` import numpy as np import tensorflow as tf import collections def build_dataset(words, n_words): count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]] count.extend(collections.Counter(words).most_common(n_words - 1)) dictionary = dict() for word, _ in count: dictionary[word] = len(dictionary) data = list() unk_count = 0 for word in words: index = dictionary.get(word, 0) if index == 0: unk_count += 1 data.append(index) count[0][1] = unk_count reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys())) return data, count, dictionary, reversed_dictionary with open('english-train', 'r') as fopen: text_from = fopen.read().lower().split('\n') with open('vietnam-train', 'r') as fopen: text_to = fopen.read().lower().split('\n') print('len from: %d, len to: %d'%(len(text_from), len(text_to))) concat_from = ' '.join(text_from).split() vocabulary_size_from = len(list(set(concat_from))) data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from) print('vocab from size: %d'%(vocabulary_size_from)) print('Most common words', count_from[4:10]) print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]]) concat_to = ' '.join(text_to).split() vocabulary_size_to = len(list(set(concat_to))) data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to) print('vocab to size: %d'%(vocabulary_size_to)) print('Most common words', count_to[4:10]) print('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]]) GO = dictionary_from['GO'] PAD = dictionary_from['PAD'] EOS = dictionary_from['EOS'] UNK = dictionary_from['UNK'] class Chatbot: def __init__(self, size_layer, num_layers, embedded_size, from_dict_size, to_dict_size, learning_rate, batch_size): def cells(reuse=False): return tf.nn.rnn_cell.GRUCell(size_layer,reuse=reuse) self.X = tf.placeholder(tf.int32, [None, None]) self.Y = tf.placeholder(tf.int32, [None, None]) self.X_seq_len = tf.placeholder(tf.int32, [None]) self.Y_seq_len = tf.placeholder(tf.int32, [None]) encoder_embeddings = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1)) decoder_embeddings = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1)) encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X) main = tf.strided_slice(self.X, [0, 0], [batch_size, -1], [1, 1]) decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1) decoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, decoder_input) rnn_cells = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]) _, last_state = tf.nn.dynamic_rnn(rnn_cells, encoder_embedded, dtype = tf.float32) with tf.variable_scope("decoder"): rnn_cells_dec = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]) outputs, _ = tf.nn.dynamic_rnn(rnn_cells_dec, decoder_embedded, initial_state = last_state, dtype = tf.float32) self.logits = tf.layers.dense(outputs,to_dict_size) masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32) self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.logits, targets = self.Y, weights = masks) self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost) size_layer = 128 num_layers = 2 embedded_size = 128 learning_rate = 0.001 batch_size = 32 epoch = 50 tf.reset_default_graph() sess = tf.InteractiveSession() model = Chatbot(size_layer, num_layers, embedded_size, vocabulary_size_from + 4, vocabulary_size_to + 4, learning_rate, batch_size) sess.run(tf.global_variables_initializer()) def str_idx(corpus, dic): X = [] for i in corpus: ints = [] for k in i.split(): try: ints.append(dic[k]) except Exception as e: print(e) ints.append(2) X.append(ints) return X X = str_idx(text_from, dictionary_from) Y = str_idx(text_to, dictionary_to) def pad_sentence_batch(sentence_batch, pad_int): padded_seqs = [] seq_lens = [] max_sentence_len = 120 for sentence in sentence_batch: padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence))) seq_lens.append(120) return padded_seqs, seq_lens def check_accuracy(logits, Y): acc = 0 for i in range(logits.shape[0]): internal_acc = 0 for k in range(len(Y[i])): if Y[i][k] == logits[i][k]: internal_acc += 1 acc += (internal_acc / len(Y[i])) return acc / logits.shape[0] for i in range(epoch): total_loss, total_accuracy = 0, 0 for k in range(0, (len(text_from) // batch_size) * batch_size, batch_size): batch_x, seq_x = pad_sentence_batch(X[k: k+batch_size], PAD) batch_y, seq_y = pad_sentence_batch(Y[k: k+batch_size], PAD) predicted, loss, _ = sess.run([tf.argmax(model.logits,2), model.cost, model.optimizer], feed_dict={model.X:batch_x, model.Y:batch_y, model.X_seq_len:seq_x, model.Y_seq_len:seq_y}) total_loss += loss total_accuracy += check_accuracy(predicted,batch_y) total_loss /= (len(text_from) // batch_size) total_accuracy /= (len(text_from) // batch_size) print('epoch: %d, avg loss: %f, avg accuracy: %f'%(i+1, total_loss, total_accuracy)) ```
github_jupyter
# Regularization with SciKit-Learn ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns df = pd.read_csv('Data/Advertising.csv') df.head() X = df.drop('sales', axis=1) y = df['sales'] ``` ### Polynomial Conversion ``` from sklearn.preprocessing import PolynomialFeatures poly_converter = PolynomialFeatures(degree=3, include_bias=False) poly_features = poly_converter.fit_transform(X) poly_features.shape ``` ### Train | Test Split ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(poly_features, y, test_size=0.3, random_state=101) ``` ----- # Scaling the Data ``` from sklearn.preprocessing import StandardScaler scaler = StandardScaler() # to avoid data leakage : meaning model got an idea of test data # only fit to train data set scaler.fit(X_train) # overwrite scaled data X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) ``` We can see that after scaling, values has scaled down. ``` X_train[0] poly_features[0] ``` -------- ------- # Ridge Regression ``` from sklearn.linear_model import Ridge ridge_model = Ridge(alpha=10) ridge_model.fit(X_train, y_train) test_predictions = ridge_model.predict(X_test) from sklearn.metrics import mean_absolute_error, mean_squared_error MAE = mean_absolute_error(y_test, test_predictions) MAE RMSE = np.sqrt(mean_squared_error(y_test, test_predictions)) RMSE # Training Set Performance train_predictions = ridge_model.predict(X_train) MAE = mean_absolute_error(y_train, train_predictions) RMSE = np.sqrt(mean_squared_error(y_train, train_predictions)) MAE, RMSE ``` --------- ## Choosing an alpha value with Cross-Validation ``` from sklearn.linear_model import RidgeCV ridge_cv_model = RidgeCV(alphas=(0.1, 1.0, 10.0), scoring='neg_mean_absolute_error') ridge_cv_model.fit(X_train, y_train) # get the best alpha value ridge_cv_model.alpha_ ridge_cv_model.coef_ ``` As we can see from the coefficient, ridge regression is considering every features. ------ If you don't remember which key to use for `scoring metrics`, we can search like that. ``` from sklearn.metrics import SCORERS SCORERS.keys() # these are the scoring parameters that we can use. but depends on the model, use the appropriate key # for neg_mean_squared_error, etc the HIGHER the value is, the BETTER (because it is the negative value/opposite of mean squared error (the lower the bettter)) ``` ------- ``` # check performance test_predictions = ridge_cv_model.predict(X_test) MAE = mean_absolute_error(y_test, test_predictions) RMSE = np.sqrt(mean_squared_error(y_test, test_predictions)) MAE, RMSE ``` Comparing the MAE and RMSE with alpha value of 10 result (0.5774404204714166, 0.8946386461319675), the results are much better now. -------- -------- # Lasso Regression - LASSO (Least Absolute Shinkage and Selection Operator) - **the smaller `eps` value is, the wider range we are checking** ``` from sklearn.linear_model import LassoCV # lasso_cv_model = LassoCV(eps=0.001,n_alphas=100,cv=5, max_iter=1000000) #wider range because eps value is smaller lasso_cv_model = LassoCV(eps=0.1,n_alphas=100,cv=5) #narrower range lasso_cv_model.fit(X_train,y_train) # best alpha value lasso_cv_model.alpha_ test_predictions = lasso_cv_model.predict(X_test) MAE = mean_absolute_error(y_test, test_predictions) RMSE = np.sqrt(mean_squared_error(y_test, test_predictions)) MAE, RMSE ``` By comparing the previous results of Ridge Regression, this model seems like not performing well. ``` lasso_cv_model.coef_ ``` We can check the coefficient of lasso regression model. As we can see from the above, there are only 2 features that model is considering. Other features are 0 and not considered by model. But based on the context and if we want to consider only two features, Lasso may be a better choice. However we need to take note that MAE, RMSE is not performing as well as Ridge. But alphas value can be higher (expand the wider range of search) and make a more complext model. -------- ------- # Elastic Net (L1 + L2) ``` from sklearn.linear_model import ElasticNetCV elastic_cv_model = ElasticNetCV(l1_ratio=[.1, .5, .7,.9, .95, .99, 1], eps=0.001, n_alphas=100, max_iter=1000000) elastic_cv_model.fit(X_train, y_train) #best l1 ratio elastic_cv_model.l1_ratio_ elastic_cv_model.alpha_ test_predictions = elastic_cv_model.predict(X_test) MAE = mean_absolute_error(y_test, test_predictions) RMSE = np.sqrt(mean_squared_error(y_test, test_predictions)) MAE, RMSE elastic_cv_model.coef_ ```
github_jupyter
``` # -- coding: utf-8 -- # This code is part of Qiskit. # # (C) Copyright IBM 2019. # # This code is licensed under the Apache License, Version 2.0. You may # obtain a copy of this license in the LICENSE.txt file in the root directory # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. # # Any modifications or derivative works of this code must retain this # copyright notice, and modified files need to carry a notice indicating # that they have been altered from the originals. import torch from torch.autograd import Function import torch.optim as optim from qiskit import QuantumRegister,QuantumCircuit,ClassicalRegister,execute from qiskit.circuit import Parameter from qiskit import Aer import numpy as np from tqdm import tqdm from matplotlib import pyplot as plt %matplotlib inline def to_numbers(tensor_list): num_list = [] for tensor in tensor_list: num_list += [tensor.item()] return num_list class QiskitCircuit(): def __init__(self,shots): self.theta = Parameter('Theta') self.phi = Parameter('Phi') self.shots = shots def create_circuit(): qr = QuantumRegister(1,'q') cr = ClassicalRegister(1,'c') ckt = QuantumCircuit(qr,cr) ckt.h(qr[0]) # ckt.barrier() ckt.u2(self.theta,self.phi,qr[0]) ckt.barrier() ckt.measure(qr,cr) return ckt self.circuit = create_circuit() def N_qubit_expectation_Z(self,counts, shots, nr_qubits): expects = np.zeros(nr_qubits) for key in counts.keys(): perc = counts[key]/shots check = np.array([(float(key[i])-1/2)*2*perc for i in range(nr_qubits)]) expects += check return expects def bind(self, parameters): [self.theta,self.phi] = to_numbers(parameters) self.circuit.data[1][0]._params = to_numbers(parameters) def run(self, i): self.bind(i) backend = Aer.get_backend('qasm_simulator') job_sim = execute(self.circuit,backend,shots=self.shots) result_sim = job_sim.result() counts = result_sim.get_counts(self.circuit) return self.N_qubit_expectation_Z(counts,self.shots,1) class TorchCircuit(Function): @staticmethod def forward(ctx, i): if not hasattr(ctx, 'QiskitCirc'): ctx.QiskitCirc = QiskitCircuit(shots=10000) exp_value = ctx.QiskitCirc.run(i[0]) result = torch.tensor([exp_value]) ctx.save_for_backward(result, i) return result @staticmethod def backward(ctx, grad_output): eps = 0.01 forward_tensor, i = ctx.saved_tensors input_numbers = to_numbers(i[0]) gradient = [0,0] for k in range(len(input_numbers)): input_eps = input_numbers input_eps[k] = input_numbers[k] + eps exp_value = ctx.QiskitCirc.run(torch.tensor(input_eps))[0] result_eps = torch.tensor([exp_value]) gradient_result = (exp_value - forward_tensor[0][0].item())/eps gradient[k] = gradient_result # print(gradient) result = torch.tensor([gradient]) # print(result) return result.float() * grad_output.float() # x = torch.tensor([np.pi/4, np.pi/4, np.pi/4], requires_grad=True) x = torch.tensor([[0.0, 0.0]], requires_grad=True) qc = TorchCircuit.apply y1 = qc(x) y1.backward() print(x.grad) qc = TorchCircuit.apply def cost(x): target = -1 expval = qc(x) return torch.abs(qc(x) - target) ** 2, expval x = torch.tensor([[0.0, np.pi/4]], requires_grad=True) opt = torch.optim.Adam([x], lr=0.1) num_epoch = 100 loss_list = [] expval_list = [] for i in tqdm(range(num_epoch)): # for i in range(num_epoch): opt.zero_grad() loss, expval = cost(x) loss.backward() opt.step() loss_list.append(loss.item()) expval_list.append(expval.item()) # print(loss.item()) plt.plot(loss_list) # print(circuit(phi, theta)) # print(cost(x)) ``` ### MNIST in pytorch ``` import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import numpy as np import torchvision from torchvision import datasets, transforms batch_size_train = 1 batch_size_test = 1 learning_rate = 0.01 momentum = 0.5 log_interval = 10 torch.backends.cudnn.enabled = False transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor()]) mnist_trainset = datasets.MNIST(root='./data', train=True, download=True, transform=transform) labels = mnist_trainset.targets #get labels labels = labels.numpy() idx1 = np.where(labels == 0) #search all zeros idx2 = np.where(labels == 1) # search all ones idx = np.concatenate((idx1[0][0:100],idx2[0][0:100])) # concatenate their indices mnist_trainset.targets = labels[idx] mnist_trainset.data = mnist_trainset.data[idx] print(mnist_trainset) train_loader = torch.utils.data.DataLoader(mnist_trainset, batch_size=batch_size_train, shuffle=True) class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 2) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 320) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) # return F.softmax(x) # x = np.pi*F.tanh(x) # print(x) x = qc(x) x = (x+1)/2 x = torch.cat((x, 1-x), -1) return x network = Net() # optimizer = optim.SGD(network.parameters(), lr=learning_rate, # momentum=momentum) optimizer = optim.Adam(network.parameters(), lr=learning_rate/10) epochs = 10 loss_list = [] for epoch in range(epochs): total_loss = [] target_list = [] for batch_idx, (data, target) in enumerate(train_loader): target_list.append(target.item()) # print(batch_idx) optimizer.zero_grad() output = network(data) # loss = F.nll_loss(output, target) loss = F.cross_entropy(output, target) # print(output) # print(output[0][1].item(), target.item()) loss.backward() optimizer.step() total_loss.append(loss.item()) loss_list.append(sum(total_loss)/len(total_loss)) print(loss_list[-1]) plt.plot(loss_list) ```
github_jupyter
``` from datascience import * path_data = '../../data/' import matplotlib matplotlib.use('Agg', warn=False) %matplotlib inline import matplotlib.pyplot as plots plots.style.use('fivethirtyeight') import numpy as np ``` ### The Monty Hall Problem ### This [problem](https://en.wikipedia.org/wiki/Monty_Hall_problem) has flummoxed many people over the years, [mathematicians included](https://web.archive.org/web/20140413131827/http://www.decisionsciences.org/DecisionLine/Vol30/30_1/vazs30_1.pdf). Let's see if we can work it out by simulation. The setting is derived from a television game show called "Let's Make a Deal". Monty Hall hosted this show in the 1960's, and it has since led to a number of spin-offs. An exciting part of the show was that while the contestants had the chance to win great prizes, they might instead end up with "zonks" that were less desirable. This is the basis for what is now known as *the Monty Hall problem*. The setting is a game show in which the contestant is faced with three closed doors. Behind one of the doors is a fancy car, and behind each of the other two there is a goat. The contestant doesn't know where the car is, and has to attempt to find it under the following rules. - The contestant makes an initial choice, but that door isn't opened. - At least one of the other two doors must have a goat behind it. Monty opens one of these doors to reveal a goat, displayed in all its glory in [Wikipedia](https://en.wikipedia.org/wiki/Monty_Hall_problem): ![Monty Hall goat](../../../images/monty_hall_goat.png) - There are two doors left, one of which was the contestant's original choice. One of the doors has the car behind it, and the other one has a goat. The contestant now gets to choose which of the two doors to open. The contestant has a decision to make. Which door should she choose to open, if she wants the car? Should she stick with her initial choice, or switch to the other door? That is the Monty Hall problem. ### The Solution ### In any problem involving chances, the assumptions about randomness are important. It's reasonable to assume that there is a 1/3 chance that the contestant's initial choice is the door that has the car behind it. The solution to the problem is quite straightforward under this assumption, though the straightforward solution doesn't convince everyone. Here it is anyway. - The chance that the car is behind the originally chosen door is 1/3. - The car is behind either the originally chosen door or the door that remains. It can't be anywhere else. - Therefore, the chance that the car is behind the door that remains is 2/3. - Therefore, the contestant should switch. That's it. End of story. Not convinced? Then let's simulate the game and see how the results turn out. ### Simulation ### The simulation will be more complex that those we have done so far. Let's break it down. ### Step 1: What to Simulate ### For each play we will simulate what's behind all three doors: - the one the contestant first picks - the one that Monty opens - the remaining door So we will be keeping track of three quantitites, not just one. ### Step 2: Simulating One Play ### The bulk of our work consists of simulating one play of the game. This involves several pieces. #### The Goats #### We start by setting up an array `goats` that contains unimaginative names for the two goats. ``` goats = make_array('first goat', 'second goat') ``` To help Monty conduct the game, we are going to have to identify which goat is selected and which one is revealed behind the open door. The function `other_goat` takes one goat and returns the other. ``` def other_goat(x): if x == 'first goat': return 'second goat' elif x == 'second goat': return 'first goat' ``` Let's confirm that the function works. ``` other_goat('first goat'), other_goat('second goat'), other_goat('watermelon') ``` The string `'watermelon'` is not the name of one of the goats, so when `'watermelon'` is the input then `other_goat` does nothing. #### The Options #### The array `hidden_behind_doors` contains the set of things that could be behind the doors. ``` hidden_behind_doors = make_array('car', 'first goat', 'second goat') ``` We are now ready to simulate one play. To do this, we will define a function `monty_hall_game` that takes no arguments. When the function is called, it plays Monty's game once and returns a list consisting of: - the contestant's guess - what Monty reveals when he opens a door - what remains behind the other door The game starts with the contestant choosing one door at random. In doing so, the contestant makes a random choice from among the car, the first goat, and the second goat. If the contestant happens to pick one of the goats, then the other goat is revealed and the car is behind the remaining door. If the contestant happens to pick the car, then Monty reveals one of the goats and the other goat is behind the remaining door. ``` def monty_hall_game(): """Return [contestant's guess, what Monty reveals, what remains behind the other door]""" contestant_guess = np.random.choice(hidden_behind_doors) if contestant_guess == 'first goat': return [contestant_guess, 'second goat', 'car'] if contestant_guess == 'second goat': return [contestant_guess, 'first goat', 'car'] if contestant_guess == 'car': revealed = np.random.choice(goats) return [contestant_guess, revealed, other_goat(revealed)] ``` Let's play! Run the cell several times and see how the results change. ``` monty_hall_game() ``` ### Step 3: Number of Repetitions ### To gauge the frequency with which the different results occur, we have to play the game many times and collect the results. Let's run 10,000 repetitions. ### Step 4: Coding the Simulation ### It's time to run the whole simulation. We will play the game 10,000 times and collect the results in a table. Each row of the table will contain the result of one play. One way to grow a table by adding a new row is to use the `append` method. If `my_table` is a table and `new_row` is a list containing the entries in a new row, then `my_table.append(new_row)` adds the new row to the bottom of `my_table`. Note that `append` does not create a new table. It changes `my_table` to have one more row than it did before. First let's create a table `games` that has three empty columns. We can do this by just specifying a list of the column labels, as follows. ``` games = Table(['Guess', 'Revealed', 'Remaining']) ``` Notice that we have chosen the order of the columns to be the same as the order in which `monty_hall_game` returns the result of one game. Now we can add 10,000 rows to `trials`. Each row will represent the result of one play of Monty's game. ``` # Play the game 10000 times and # record the results in the table games for i in np.arange(10000): games.append(monty_hall_game()) ``` The simulation is done. Notice how short the code is. The majority of the work was done in simulating the outcome of one game. ### Visualization ### To see whether the contestant should stick with her original choice or switch, let's see how frequently the car is behind each of her two options. ``` original_choice = games.group('Guess') original_choice remaining_door = games.group('Remaining') remaining_door ``` As our earlier solution said, the car is behind the remaining door two-thirds of the time, to a pretty good approximation. The contestant is twice as likely to get the car if she switches than if she sticks with her original choice. To see this graphically, we can join the two tables above and draw overlaid bar charts. ``` joined = original_choice.join('Guess', remaining_door, 'Remaining') combined = joined.relabeled(0, 'Item').relabeled(1, 'Original Door').relabeled(2, 'Remaining Door') combined combined.barh(0) ``` Notice how the three blue bars are almost equal – the original choice is equally likely to be any of the three available items. But the gold bar corresponding to `Car` is twice as long as the blue. The simulation confirms that the contestant is twice as likely to win if she switches.
github_jupyter
# Ungraded Lab Part 2 - Consuming a Machine Learning Model Welcome to the second part of this ungraded lab! **Before going forward check that the server from part 1 is still running.** In this notebook you will code a minimal client that uses Python's `requests` library to interact with your running server. ``` import os import io import cv2 import requests import numpy as np from IPython.display import Image, display ``` ## Understanding the URL ### Breaking down the URL After experimenting with the fastAPI's client you may have noticed that we made all requests by pointing to a specific URL and appending some parameters to it. More concretely: 1. The server is hosted in the URL [http://localhost:8000/](http://localhost:8000/). 2. The endpoint that serves your model is the `/predict` endpoint. Also you can specify the model to use: `yolov3` or`yolov3-tiny`. Let's stick to the tiny version for computational efficiency. Let's get started by putting in place all this information. ``` base_url = 'http://localhost:8000' endpoint = '/predict' model = 'yolov3-tiny' confidence = 0.1 ``` To consume your model, you append the endpoint to the base URL to get the full URL. Notice that the parameters are absent for now. ``` url_with_endpoint_no_params = base_url + endpoint url_with_endpoint_no_params ``` To set any of the expected parameters, the syntax is to add a "?" character followed by the name of the parameter and its value. Let's do it and check how the final URL looks like: ``` full_url = url_with_endpoint_no_params + "?model=" + model + '&confidence=' + str(confidence) full_url ``` This endpoint expects both a model's name and an image. But since the image is more complex it is not passed within the URL. Instead we leverage the `requests` library to handle this process. # Sending a request to your server ### Coding the response_from_server function As a reminder, this endpoint expects a POST HTTP request. The `post` function is part of the requests library. To pass the file along with the request, you need to create a dictionary indicating the name of the file ('file' in this case) and the actual file. `status code` is a handy command to check the status of the response the request triggered. **A status code of 200 means that everything went well.** ``` def response_from_server(url, image_file, verbose=True): """Makes a POST request to the server and returns the response. Args: url (str): URL that the request is sent to. image_file (_io.BufferedReader): File to upload, should be an image. verbose (bool): True if the status of the response should be printed. False otherwise. Returns: requests.models.Response: Response from the server. """ files = {'file': image_file} response = requests.post(url, files=files) status_code = response.status_code if verbose: msg = "Everything went well!" if status_code == 200 else "There was an error when handling the request." print(msg) return response ``` To test this function, open a file in your filesystem and pass it as a parameter alongside the URL: ``` with open("images/clock2.jpg", "rb") as image_file: prediction = response_from_server(full_url, image_file) ``` Great news! The request was successful. However, you are not getting any information about the objects in the image. To get the image with the bounding boxes and labels, you need to parse the content of the response into an appropriate format. This process looks very similar to how you read raw images into a cv2 image on the server. To handle this step, let's create a directory called `images_predicted` to save the image to: ``` dir_name = "images_predicted" if not os.path.exists(dir_name): os.mkdir(dir_name) ``` ### Creating the display_image_from_response function ``` def display_image_from_response(response): """Display image within server's response. Args: response (requests.models.Response): The response from the server after object detection. """ image_stream = io.BytesIO(response.content) image_stream.seek(0) file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8) image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR) filename = "image_with_objects.jpeg" cv2.imwrite(f'images_predicted/{filename}', image) display(Image(f'images_predicted/{filename}')) display_image_from_response(prediction) ``` Now you are ready to consume your object detection model through your own client! Let's test it out on some other images: ``` image_files = [ 'car2.jpg', 'clock3.jpg', 'apples.jpg' ] for image_file in image_files: with open(f"images/{image_file}", "rb") as image_file: prediction = response_from_server(full_url, image_file, verbose=False) display_image_from_response(prediction) ``` **Congratulations on finishing this ungraded lab!** Real life clients and servers have a lot more going on in terms of security and performance. However, the code you just experienced is close to what you see in real production environments. Hopefully, this lab served the purpose of increasing your familiarity with the process of deploying a Deep Learning model, and consuming from it. **Keep it up!** # ## Optional Challenge - Adding the confidence level to the request Let's expand on what you have learned so far. The next logical step is to extend the server and the client so that they can accommodate an additional parameter: the level of confidence of the prediction. **To test your extended implementation you must perform the following steps:** - Stop the server by interrupting the Kernel. - Extend the `prediction` function in the server. - Re run the cell containing your server code. - Re launch the server. - Extend your client. - Test it with some images (either with your client or fastAPI's one). Here are some hints that can help you out throughout the process: #### Server side: - The `prediction` function that handles the `/predict` endpoint needs an additional parameter to accept the confidence level. Add this new parameter before the `File` parameter. This is necessary because `File` has a default value and must be specified last. - `cv.detect_common_objects` accepts the `confidence` parameter, which is a floating point number (type `float`in Python). #### Client side: - You can add a new parameter to the URL by extending it with an `&` followed by the name of the parameter and its value. The name of this new parameter must be equal to the name used within the `prediction` function in the server. An example would look like this: `myawesomemodel.com/predict?model=yolov3-tiny&newParam=value` **You can do it!**
github_jupyter
TSG028 - Restart node manager on all storage pool nodes ======================================================= Description ----------- ### Parameters ``` container='hadoop' command=f'supervisorctl restart nodemanager' ``` ### Instantiate Kubernetes client ``` # Instantiate the Python Kubernetes client into 'api' variable import os try: from kubernetes import client, config from kubernetes.stream import stream if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ: config.load_incluster_config() else: config.load_kube_config() api = client.CoreV1Api() print('Kubernetes client instantiated') except ImportError: from IPython.display import Markdown display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.')) raise ``` ### Get the namespace for the big data cluster Get the namespace of the big data cluster from the Kuberenetes API. NOTE: If there is more than one big data cluster in the target Kubernetes cluster, then set \[0\] to the correct value for the big data cluster. ``` # Place Kubernetes namespace name for BDC into 'namespace' variable try: namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name except IndexError: from IPython.display import Markdown display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.')) display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.')) display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.')) raise print('The kubernetes namespace for your big data cluster is: ' + namespace) ``` ### Run command in containers ``` pod_list = api.list_namespaced_pod(namespace) for pod in pod_list.items: container_names = [container.name for container in pod.spec.containers] for container_name in container_names: if container_name == container: print (f"Pod: {pod.metadata.name} / Container: {container}:") try: output=stream(api.connect_get_namespaced_pod_exec, pod.metadata.name, namespace, command=['/bin/sh', '-c', command], container=container, stderr=True, stdout=True) print (output) except Exception: print (f"Failed to run {command} in container: {container} for pod: {pod.metadata.name}") print('Notebook execution complete.') ```
github_jupyter
# ASTE Release 1: Accessing the output with xmitgcm's llcreader module The Arctic Subpolar gyre sTate Estimate (ASTE) is a medium resolution, dynamically consistent, data constrained simulation of the ocean and sea ice state in the Arctic and subpolar gyre, spanning 2002-2017. See details on Release 1 in [Nguyen et al, 2020]. This notebook serves as an example for accessing the output from this state estimate using xmitgcm's [llcreader module](https://xmitgcm.readthedocs.io/en/latest/llcreader.html) to get the output in an [xarray](http://xarray.pydata.org/en/stable/) dataset. These capabilities heavily rely on [dask](https://dask.org/) to lazily grab the data as we need it. Users are strongly encouraged to check out [dask's best practices](https://docs.dask.org/en/latest/best-practices.html) regarding memory management before performing more advanced calculations. Any problems due to connections with the server can be reported as a [GitHub Issue](https://docs.github.com/en/free-pro-team@latest/github/managing-your-work-on-github/creating-an-issue) on the [ASTE repo](https://github.com/crios-ut/aste). Finally, we are grateful to the Texas Advanced Computing Center (TACC) for providing storage on Amazon Web Services (AWS) through cloud services integration on Frontera [Stanzione et al, 2020]. --- --- Nguyen, An T., Ocaña, V., Pillar, H., Bigdeli, A., Smith, T. A., & Heimbach, P. (2021). The Arctic Subpolar gyre sTate Estimate: a data-constrained and dynamically consistent ocean-sea ice estimate for 2002–2017. Submitted to Journal of Advances in Modeling Earth Systems. Dan Stanzione, John West, R. Todd Evans, Tommy Minyard, Omar Ghattas, and Dhabaleswar K. Panda. 2020. Frontera: The Evolution of Leadership Computing at the National Science Foundation. In Practice and Experience in Advanced Research Computing (PEARC ’20), July 26–30, 2020, Portland, OR, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3311790.3396656 ``` import numpy as np import warnings import matplotlib.pyplot as plt import xarray as xr import cmocean from dask.distributed import Client from xmitgcm import llcreader import ecco_v4_py ``` ## Get an xarray dataset with *all* ASTE_R1 variables, depth levels, and time steps The function `get_dataset` by default grabs all available output, at all depth levels and all time steps, where each time step represents a monthly mean for that field. This may be suboptimal when operating on a machine with limited memory, for instance on a laptop. See the [llcreader documentation](https://xmitgcm.readthedocs.io/en/latest/llcreader.html) for more examples on how to subset the data with the [get_dataset method](https://xmitgcm.readthedocs.io/en/latest/llcreader.html#api-documentation), including how to grab specific variables, vertical slices, or time steps. ``` aste = llcreader.CRIOSPortalASTE270Model() ds = aste.get_dataset() ``` ### Grab a single monthly average Here we subset the dataset to show the ASTE ocean state during a single month, September 2012. Alternatively, one can provide the corresponding iteration to the `iters` option in `get_dataset` to achieve the same behavior. ``` ds = ds.sel(time='2012-09') ``` We are grabbing a single time slice to make this demo quick. Of course, [xarray](http://xarray.pydata.org/en/stable/) makes it easy to compute full time mean quantities, for example SST averaged from 2006 through 2017: ``` sst = ds['THETA'].sel(k=0,time=slice('2006','2017')).mean(dim='time') ``` but note that this will take longer than the plots below because `llcreader` has to grab all of the 2006-2017 data from the cloud. ### Some house keeping - rename the `faces` dimension to `tiles` (shown and discussed below) - split the coordinate and data variables to speed things up a bit ``` ds = ds.rename({'face':'tile'}) cds = ds.coords.to_dataset().reset_coords() ds = ds.reset_coords(drop=True) ``` #### A list of all the data variables ``` ncols=10 for i,f in enumerate(list(ds.data_vars),start=1): end = '\n' if i%ncols==0 else ', ' print(f,end=end) ``` #### and all the variables describing the underlying grid ``` ncols=10 for i,f in enumerate(list(cds.data_vars),start=1): end = '\n' if i%ncols==0 else ', ' print(f,end=end) ``` and we can get some nice meta data to explain what this means thanks to `xmitgcm`+`xarray` ``` ds.ADVx_TH ``` ### A quick plot This is just a sanity check - we have the output! ``` %%time ds.THETA.sel(k=0).plot(col='tile',col_wrap=3) ``` ## Use ECCOv4-py to make a nicer plot: average SST and SSS during September, 2012 The plot above shows the "tiled" LLC grid topology of ASTE, which can be cumbersome to work with. This grid is familiar to anyone used to the global ECCO state estimate, which [ecco_v4_py](https://github.com/ECCO-GROUP/ECCOv4-py) is designed to deal with. As of `ecco_v4_py` version 1.3.0, we can now use all the same functions with ASTE as well. See below for an example of a nicer plot. See [here](https://ecco-v4-python-tutorial.readthedocs.io/fields.html#geographical-layout) to read more about the LLC grid. ``` sst = ds['THETA'].sel(k=0) sss = ds['SALT'].sel(k=0) %%time fig = plt.figure(figsize=(18,6)) for i,(fld,cmap,cmin,cmax) in enumerate(zip([sst,sss], ['cmo.thermal','cmo.haline'], [-1,30],[30,38]),start=1): with warnings.catch_warnings(): warnings.simplefilter('ignore') fig,ax,p,cbar,*_=ecco_v4_py.plot_proj_to_latlon_grid(cds.XC,cds.YC,fld, show_colorbar=True, projection_type='ortho', user_lon_0=-45,user_lat_0=50, subplot_grid=[1,2,i], cmap=cmap,cmin=cmin,cmax=cmax); ``` ## Use ECCOv4-py to get velocities in the "expected" direction The first plot showed how the rotated fields in the ASTE domain can be difficult to visualize. This is especially true for any vector field (e.g. zonal, meridional velocity), where the vector components are also rotated with each "tile". In order to visualize vector components, we can use the [vector_calc](https://github.com/ECCO-GROUP/ECCOv4-py/blob/master/ecco_v4_py/vector_calc.py) module to perform the necessary interpolation and rotation operations. Note that these routines are essentially simple wrappers around [xgcm](https://xgcm.readthedocs.io/en/latest/) Grid operations, which make all of this possible while working with [xarray](http://xarray.pydata.org/en/stable/) and [dask](https://dask.org/). ``` # get an xgcm Grid object grid = ecco_v4_py.get_llc_grid(cds,domain='aste') %%time uvel,vvel = ecco_v4_py.vector_calc.UEVNfromUXVY(ds['UVELMASS'].sel(k=0), ds['VVELMASS'].sel(k=0), coords=cds, grid=grid) uvel.attrs = ds.UVELMASS.attrs vvel.attrs = ds.VVELMASS.attrs %%time fig = plt.figure(figsize=(18,6)) vmax = .6 for i,(fld,cmap,cmin,cmax) in enumerate(zip([uvel,vvel], ['cmo.balance','cmo.balance'], [-vmax]*2,[vmax]*2),start=1): with warnings.catch_warnings(): warnings.simplefilter('ignore') fig,ax,p,cbar,*_=ecco_v4_py.plot_proj_to_latlon_grid(cds.XC,cds.YC,fld, show_colorbar=True, projection_type='ortho', user_lon_0=-45,user_lat_0=50, subplot_grid=[1,2,i], cmap=cmap,cmin=cmin,cmax=cmax); ``` ## Use ECCOv4-py to compute volumetric transports: Fram Strait example Compare to Fig. 14 of [Nguyen et al., 2020], showing the time mean: - inflow of Atlantic waters to the Arctic = 6.2$\pm$2.3 Sv - outflow of modified waters = -8.3$\pm$2.5 Sv where positive indicates "toward the Arctic". Again, we compute this quantity for a single time slice as a quick example, but this can be easily extended to compute for example the time series of volumetric transport. ``` fsW = ecco_v4_py.calc_section_vol_trsp(ds,grid=grid,pt1=[-18.5,80.37],pt2=[1,80.14],coords=cds) fsE = ecco_v4_py.calc_section_vol_trsp(ds,grid=grid,pt1=[1,80.14],pt2=[11.39,79.49],coords=cds) fsW = fsW.swap_dims({'k':'Z'}) fsE = fsE.swap_dims({'k':'Z'}) plt.rcParams.update({'font.size':14}) fig,ax = plt.subplots(1,1,figsize=(6,8),constrained_layout=True) for vds,lbl in zip([fsW,fsE],['Outflow','Inflow']): mylbl = f'Total {lbl} %2.2f {vds.vol_trsp.units}' % vds.vol_trsp.values vds.vol_trsp_z.plot(y='Z',ax=ax,label=mylbl) ax.grid(True) ax.set(ylim=[-3000,0], xlabel=f'Volumetric Transport [{fsW.vol_trsp.units}]', title=f'Fram Strait Volumetric Transport, Sep. 2012\nPositive into Arctic [{vds.vol_trsp.units}]') ax.legend() ```
github_jupyter
# Location and the deviation survey Most wells are vertical, but many are not. All modern wells have a deviation survey, which is converted into a position log, giving the 3D position of the well in space. `welly` has a simple way to add a position log in a specific format, and computes a position log from it. You can use the position log to convert between MD and TVD. First, version check. ``` import welly welly.__version__ ``` ## Adding deviation to an existing well First we'll read a LAS and instantiate a well `w` ``` from welly import Well w = Well.from_las("data/P-130_out.LAS") w w.plot() ``` There aren't a lot of tricks for handling the input data, which is assumed to be a CSV-like file containing columns like: MD, inclination, azimuth For example: ``` with open('data/P-130_deviation_survey.csv') as f: lines = f.readlines() for line in lines[:6]: print(line, end='') ``` Then we can turn that into an `ndarray`: ``` import numpy as np dev = np.loadtxt('data/P-130_deviation_survey.csv', delimiter=',', skiprows=1, usecols=[0,1,2]) dev[:5] ``` You can use any other method to get to an array or `pandas.DataFrame` like this one. Then we can add the deviation survey to the well's `location` attribute. This will automatically convert it into a position log, which is an array containing the x-offset, y-offset, and TVD of the well, in that order. ``` w.location.add_deviation(dev, td=w.location.tdd) ``` Now you have the position log: ``` w.location.position[:5] ``` Note that it is irregularly sampled &mdash; this is nothing more than the deviation survey (which is MD, INCL, AZI) converted into relative positions (i.e. deltaX, deltaY, deltaZ). These positions are relative to the tophole location. ## MD to TVD and vice versa We now have the methods `md2tvd` and `tvd2md` available to us: ``` w.location.md2tvd(1000) w.location.tvd2md(998.78525) ``` These can also accept an array: ``` md = np.linspace(0, 300, 31) w.location.md2tvd(md) ``` Note that these are linear in MD, but not in TVD. ``` w.location.md2tvd([0, 10, 20, 30]) ``` ## If you have the position log, but no deviation survey In general, deviation surveys are considered 'canonical'. That is, they are data recorded in the well. The position log &mdash; a set of (x, y, z) points in a linear Euclidean space like (X_UTM, Y_UTM, TVDSS) &mdash; is then computed from the deviation survey. If you have deviation *and* position log, I recommend loading the deviation survey as above. If you *only* have position, in a 3-column array-like called `position` (say), then you can add it to the well like so: w.location.position = np.array(position) You can still use the MD-to-TVD and TVD-to-MD converters above, and `w.position.trajectory()` will work as usual, but you won't have `w.position.dogleg` or `w.position.deviation`. ## Dogleg severity The dogleg severity array is captured in the `dogleg` attribute: ``` w.location.dogleg[:10] ``` ## Starting from new well Data from Rob: ``` import pandas as pd dev = pd.read_csv('data/deviation.csv') dev.head(10) dev.tail() ``` First we'll create an 'empty' well. ``` x = Well(params={'header': {'name': 'foo'}}) ``` Now add the `Location` object to the well's `location` attribute, finally calling its `add_deviation()` method on the deviation data: ``` from welly import Location x.location = Location(params={'kb': 100}) x.location.add_deviation(dev[['MD[m]', 'Inc[deg]', 'Azi[deg]']].values) ``` Let's see how our new position data compares to what was in the `deviation.csv` data file: ### Compare x, y, and dogleg ``` np.set_printoptions(suppress=True) import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=(15,5)) md_welly = x.location.deviation[:, 0] # Plot x vs depth ax.plot(x.location.position[:, 0], md_welly, lw=6, label="welly") ax.plot(dev['East[m]'], dev['MD[m]'], c='limegreen', label="file") ax.invert_yaxis() ax.legend() ``` They seem to match well. There's a difference at the top because `welly` always adds a (0, 0, 0) point to both the deviation and position logs: ``` x.location.position[:7] ``` In plan view, the wells match: ``` fig, ax = plt.subplots(figsize=(6,6)) ax.plot(*x.location.position[:, :2].T, c='c', lw=5, label="welly") ax.plot(dev['East[m]'], dev['North[m]'], c='yellow', ls='--', label="file") #ax.set_xlim(-20, 800); ax.set_ylim(-820, 20) ax.grid(color='black', alpha=0.2) ``` ## Fit a spline to the position log To make things a bit more realistic, we can shift to the correct spatial datum, i.e. the (x, y, z) of the top hole, where _z_ is the KB elevation. We can also adjust the _z_ value to elevation (i.e. negative downwards). ``` np.set_printoptions(suppress=True, precision=2) x.location.trajectory(datum=[111000, 2222000, 100], elev=True) ``` We can make a 3D plot with this trajectory: ``` from mpl_toolkits.mplot3d import Axes3D fig, ax = plt.subplots(figsize=(12, 7), subplot_kw={'projection': '3d'}) ax.plot(*x.location.trajectory().T, lw=3, alpha=0.75) plt.show() ``` ## Compare doglegs The `deviation.csv` file also contains a measure of dogleg severity, which `welly` also generates (since v0.4.2). **Note that in the current version dogleg severity is in radians, whereas the usual units are degrees per 100 ft or degrees per 30 m. The next release of welly, v0.5, will start using degrees per 30 m by default.** ``` fig, ax = plt.subplots(figsize=(15,4)) ax.plot(x.location.dogleg, lw=5, label="welly") ax = plt.twinx(ax=ax) ax.plot(dev['Dogleg [deg/30m]'], c='limegreen', ls='--', label="file") ax.text(80, 4, 'file', color='limegreen', ha='right', va='top', size=16) ax.text(80, 3.5, 'welly', color='C0', ha='right', va='top', size=16) ``` Apart from the scaling, they agree. ## Implementation details The position log is computed from the deviation survey with the minimum curvature algorithm, which is fairly standard in the industry. To use a different method, pass `method='aa'` (average angle) or `method='bt'` (balanced tangent) directly to `Location.compute_position_log()` yourself. Once we have the position log, we still need a way to look up arbitrary depths. To do this, we use a cubic spline fitted to the position log. This should be OK for most 'natural' well paths, but it might break horribly. If you get weird results, you can pass `method='linear'` to the conversion functions — less accurate but more stable. ---- ## Azimuth datum You can adjust the angle of the azimuth datum with the `azimuth_datum` keyword argument. The default is zero, which means the azimuths in your survey are in degrees relative to grid north (of your UTM grid, say). Let's make some fake data like MD, INCL, AZI ``` dev = [[100, 0, 0], [200, 10, 45], [300, 20, 45], [400, 20, 45], [500, 20, 60], [600, 20, 75], [700, 90, 90], [800, 90, 90], [900, 90, 90], ] z = welly.Well() z.location = welly.Location(params={'kb': 10}) z.location.add_deviation(dev, td=1000, azimuth_datum=20) z.location.plot_plan() z.location.plot_3d() ``` ## Trajectory Get regularly sampled well trajectory with a specified number of points. Assumes there is a position log already, e.g. resulting from calling `add_deviation()` on a deviation survey. Computed from the position log by `scipy.interpolate.splprep()`. ``` z.location.trajectory(points=20) ``` ## TODO - Add `plot_projection()` for a vertical projection. - Export a `shapely` linestring. - Export SHP. - Export 3D `postgis` object. ---- &copy; Agile Scientific 2019–2022, licensed CC-BY / Apache 2.0
github_jupyter
# <span style="color:Maroon">Trade Strategy __Summary:__ <span style="color:Blue">In this code we shall test the results of given model ``` # Import required libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import os np.random.seed(0) import warnings warnings.filterwarnings('ignore') # User defined names index = "Gold" filename_whole = "whole_dataset"+index+"_rf_model.csv" filename_trending = "Trending_dataset"+index+"_rf_model.csv" filename_meanreverting = "MeanReverting_dataset"+index+"_rf_model.csv" date_col = "Date" Rf = 0.01 #Risk free rate of return # Get current working directory mycwd = os.getcwd() print(mycwd) # Change to data directory os.chdir("..") os.chdir(str(os.getcwd()) + "\\Data") # Read the datasets df_whole = pd.read_csv(filename_whole, index_col=date_col) df_trending = pd.read_csv(filename_trending, index_col=date_col) df_meanreverting = pd.read_csv(filename_meanreverting, index_col=date_col) # Convert index to datetime df_whole.index = pd.to_datetime(df_whole.index) df_trending.index = pd.to_datetime(df_trending.index) df_meanreverting.index = pd.to_datetime(df_meanreverting.index) # Head for whole dataset df_whole.head() df_whole.shape # Head for Trending dataset df_trending.head() df_trending.shape # Head for Mean Reverting dataset df_meanreverting.head() df_meanreverting.shape # Merge results from both models to one df_model = df_trending.append(df_meanreverting) df_model.sort_index(inplace=True) df_model.head() df_model.shape ``` ## <span style="color:Maroon">Functions ``` def initialize(df): days, Action1, Action2, current_status, Money, Shares = ([] for i in range(6)) Open_price = list(df['Open']) Close_price = list(df['Adj Close']) Predicted = list(df['Predicted']) Action1.append(Predicted[0]) Action2.append(0) current_status.append(Predicted[0]) if(Predicted[0] != 0): days.append(1) if(Predicted[0] == 1): Money.append(0) else: Money.append(200) Shares.append(Predicted[0] * (100/Open_price[0])) else: days.append(0) Money.append(100) Shares.append(0) return days, Action1, Action2, current_status, Predicted, Money, Shares, Open_price, Close_price def Action_SA_SA(days, Action1, Action2, current_status, i): if(current_status[i-1] != 0): days.append(1) else: days.append(0) current_status.append(current_status[i-1]) Action1.append(0) Action2.append(0) return days, Action1, Action2, current_status def Action_ZE_NZE(days, Action1, Action2, current_status, i): if(days[i-1] < 5): days.append(days[i-1] + 1) Action1.append(0) Action2.append(0) current_status.append(current_status[i-1]) else: days.append(0) Action1.append(current_status[i-1] * (-1)) Action2.append(0) current_status.append(0) return days, Action1, Action2, current_status def Action_NZE_ZE(days, Action1, Action2, current_status, Predicted, i): current_status.append(Predicted[i]) Action1.append(Predicted[i]) Action2.append(0) days.append(days[i-1] + 1) return days, Action1, Action2, current_status def Action_NZE_NZE(days, Action1, Action2, current_status, Predicted, i): current_status.append(Predicted[i]) Action1.append(Predicted[i]) Action2.append(Predicted[i]) days.append(1) return days, Action1, Action2, current_status def get_df(df, Action1, Action2, days, current_status, Money, Shares): df['Action1'] = Action1 df['Action2'] = Action2 df['days'] = days df['current_status'] = current_status df['Money'] = Money df['Shares'] = Shares return df def Get_TradeSignal(Predicted, days, Action1, Action2, current_status): # Loop over 1 to N for i in range(1, len(Predicted)): # When model predicts no action.. if(Predicted[i] == 0): if(current_status[i-1] != 0): days, Action1, Action2, current_status = Action_ZE_NZE(days, Action1, Action2, current_status, i) else: days, Action1, Action2, current_status = Action_SA_SA(days, Action1, Action2, current_status, i) # When Model predicts sell elif(Predicted[i] == -1): if(current_status[i-1] == -1): days, Action1, Action2, current_status = Action_SA_SA(days, Action1, Action2, current_status, i) elif(current_status[i-1] == 0): days, Action1, Action2, current_status = Action_NZE_ZE(days, Action1, Action2, current_status, Predicted, i) else: days, Action1, Action2, current_status = Action_NZE_NZE(days, Action1, Action2, current_status, Predicted, i) # When model predicts Buy elif(Predicted[i] == 1): if(current_status[i-1] == 1): days, Action1, Action2, current_status = Action_SA_SA(days, Action1, Action2, current_status, i) elif(current_status[i-1] == 0): days, Action1, Action2, current_status = Action_NZE_ZE(days, Action1, Action2, current_status, Predicted, i) else: days, Action1, Action2, current_status = Action_NZE_NZE(days, Action1, Action2, current_status, Predicted, i) return days, Action1, Action2, current_status def Get_FinancialSignal(Open_price, Action1, Action2, Money, Shares, Close_price): for i in range(1, len(Open_price)): if(Action1[i] == 0): Money.append(Money[i-1]) Shares.append(Shares[i-1]) else: if(Action2[i] == 0): # Enter new position if(Shares[i-1] == 0): Shares.append(Action1[i] * (Money[i-1]/Open_price[i])) Money.append(Money[i-1] - Action1[i] * Money[i-1]) # Exit the current position else: Shares.append(0) Money.append(Money[i-1] - Action1[i] * np.abs(Shares[i-1]) * Open_price[i]) else: Money.append(Money[i-1] -1 *Action1[i] *np.abs(Shares[i-1]) * Open_price[i]) Shares.append(Action2[i] * (Money[i]/Open_price[i])) Money[i] = Money[i] - 1 * Action2[i] * np.abs(Shares[i]) * Open_price[i] return Money, Shares def Get_TradeData(df): # Initialize the variables days,Action1,Action2,current_status,Predicted,Money,Shares,Open_price,Close_price = initialize(df) # Get Buy/Sell trade signal days, Action1, Action2, current_status = Get_TradeSignal(Predicted, days, Action1, Action2, current_status) Money, Shares = Get_FinancialSignal(Open_price, Action1, Action2, Money, Shares, Close_price) df = get_df(df, Action1, Action2, days, current_status, Money, Shares) df['CurrentVal'] = df['Money'] + df['current_status'] * np.abs(df['Shares']) * df['Adj Close'] return df def Print_Fromated_PL(active_days, number_of_trades, drawdown, annual_returns, std_dev, sharpe_ratio, year): """ Prints the metrics """ print("++++++++++++++++++++++++++++++++++++++++++++++++++++") print(" Year: {0}".format(year)) print(" Number of Trades Executed: {0}".format(number_of_trades)) print("Number of days with Active Position: {}".format(active_days)) print(" Annual Return: {:.6f} %".format(annual_returns*100)) print(" Sharpe Ratio: {:.2f}".format(sharpe_ratio)) print(" Maximum Drawdown (Daily basis): {:.2f} %".format(drawdown*100)) print("----------------------------------------------------") return def Get_results_PL_metrics(df, Rf, year): df['tmp'] = np.where(df['current_status'] == 0, 0, 1) active_days = df['tmp'].sum() number_of_trades = np.abs(df['Action1']).sum()+np.abs(df['Action2']).sum() df['tmp_max'] = df['CurrentVal'].rolling(window=20).max() df['tmp_min'] = df['CurrentVal'].rolling(window=20).min() df['tmp'] = np.where(df['tmp_max'] > 0, (df['tmp_max'] - df['tmp_min'])/df['tmp_max'], 0) drawdown = df['tmp'].max() annual_returns = (df['CurrentVal'].iloc[-1]/100 - 1) std_dev = df['CurrentVal'].pct_change(1).std() sharpe_ratio = (annual_returns - Rf)/std_dev Print_Fromated_PL(active_days, number_of_trades, drawdown, annual_returns, std_dev, sharpe_ratio, year) return ``` ``` # Change to Images directory os.chdir("..") os.chdir(str(os.getcwd()) + "\\Images") ``` ## <span style="color:Maroon">Whole Dataset ``` df_whole_train = df_whole[df_whole["Sample"] == "Train"] df_whole_test = df_whole[df_whole["Sample"] == "Test"] df_whole_test_2019 = df_whole_test[df_whole_test.index.year == 2019] df_whole_test_2020 = df_whole_test[df_whole_test.index.year == 2020] output_train_whole = Get_TradeData(df_whole_train) output_test_whole = Get_TradeData(df_whole_test) output_test_whole_2019 = Get_TradeData(df_whole_test_2019) output_test_whole_2020 = Get_TradeData(df_whole_test_2020) output_train_whole["BuyandHold"] = (100 * output_train_whole["Adj Close"])/(output_train_whole.iloc[0]["Adj Close"]) output_test_whole["BuyandHold"] = (100*output_test_whole["Adj Close"])/(output_test_whole.iloc[0]["Adj Close"]) output_test_whole_2019["BuyandHold"] = (100 * output_test_whole_2019["Adj Close"])/(output_test_whole_2019.iloc[0] ["Adj Close"]) output_test_whole_2020["BuyandHold"] = (100 * output_test_whole_2020["Adj Close"])/(output_test_whole_2020.iloc[0] ["Adj Close"]) Get_results_PL_metrics(output_test_whole_2019, Rf, 2019) Get_results_PL_metrics(output_test_whole_2020, Rf, 2020) # Scatter plot to save fig plt.figure(figsize=(10,5)) plt.plot(output_train_whole["CurrentVal"], 'b-', label="Value (Model)") plt.plot(output_train_whole["BuyandHold"], 'r--', alpha=0.5, label="Buy and Hold") plt.xlabel("Date", fontsize=12) plt.ylabel("Value", fontsize=12) plt.legend() plt.title("Train Sample "+ str(index) + " RF Whole Dataset", fontsize=16) plt.savefig("Train Sample Whole Dataset RF Model" + str(index) +'.png') plt.show() plt.close() # Scatter plot to save fig plt.figure(figsize=(10,5)) plt.plot(output_test_whole["CurrentVal"], 'b-', label="Value (Model)") plt.plot(output_test_whole["BuyandHold"], 'r--', alpha=0.5, label="Buy and Hold") plt.xlabel("Date", fontsize=12) plt.ylabel("Value", fontsize=12) plt.legend() plt.title("Test Sample "+ str(index) + " RF Whole Dataset", fontsize=16) plt.savefig("Test Sample Whole Dataset RF Model" + str(index) +'.png') plt.show() plt.close() ``` __Comments:__ <span style="color:Blue"> Based on the performance of model on Train Sample, the model has definitely learnt the patter, instead of over-fitting. But the performance of model in Test Sample is very poor ## <span style="color:Maroon">Segment Model ``` df_model_train = df_model[df_model["Sample"] == "Train"] df_model_test = df_model[df_model["Sample"] == "Test"] df_model_test_2019 = df_model_test[df_model_test.index.year == 2019] df_model_test_2020 = df_model_test[df_model_test.index.year == 2020] output_train_model = Get_TradeData(df_model_train) output_test_model = Get_TradeData(df_model_test) output_test_model_2019 = Get_TradeData(df_model_test_2019) output_test_model_2020 = Get_TradeData(df_model_test_2020) output_train_model["BuyandHold"] = (100 * output_train_model["Adj Close"])/(output_train_model.iloc[0]["Adj Close"]) output_test_model["BuyandHold"] = (100 * output_test_model["Adj Close"])/(output_test_model.iloc[0]["Adj Close"]) output_test_model_2019["BuyandHold"] = (100 * output_test_model_2019["Adj Close"])/(output_test_model_2019.iloc[0] ["Adj Close"]) output_test_model_2020["BuyandHold"] = (100 * output_test_model_2020["Adj Close"])/(output_test_model_2020.iloc[0] ["Adj Close"]) Get_results_PL_metrics(output_test_model_2019, Rf, 2019) Get_results_PL_metrics(output_test_model_2020, Rf, 2020) # Scatter plot to save fig plt.figure(figsize=(10,5)) plt.plot(output_train_model["CurrentVal"], 'b-', label="Value (Model)") plt.plot(output_train_model["BuyandHold"], 'r--', alpha=0.5, label="Buy and Hold") plt.xlabel("Date", fontsize=12) plt.ylabel("Value", fontsize=12) plt.legend() plt.title("Train Sample Hurst Segment RF Models "+ str(index), fontsize=16) plt.savefig("Train Sample Hurst Segment RF Models" + str(index) +'.png') plt.show() plt.close() # Scatter plot to save fig plt.figure(figsize=(10,5)) plt.plot(output_test_model["CurrentVal"], 'b-', label="Value (Model)") plt.plot(output_test_model["BuyandHold"], 'r--', alpha=0.5, label="Buy and Hold") plt.xlabel("Date", fontsize=12) plt.ylabel("Value", fontsize=12) plt.legend() plt.title("Test Sample Hurst Segment RF Models" + str(index), fontsize=16) plt.savefig("Test Sample Hurst Segment RF Models" + str(index) +'.png') plt.show() plt.close() ``` __Comments:__ <span style="color:Blue"> Based on the performance of model on Train Sample, the model has definitely learnt the patter, instead of over-fitting. The model does perform well in Test sample (Not compared to Buy and Hold strategy) compared to single model. Hurst Exponent based segmentation has definately added value to the model
github_jupyter
``` !pip install -Uq catalyst gym ``` # Seminar. RL, DDPG. Hi! It's a second part of the seminar. Here we are going to introduce another way to train bot how to play games. A new algorithm will help bot to work in enviroments with continuos actinon spaces. However, the algorithm have no small changes in bot-enviroment communication process. That's why a lot of code for DQN part are reused. Let's code! ``` from collections import deque, namedtuple import random import numpy as np import gym import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import DataLoader from catalyst import dl, utils device = utils.get_device() import numpy as np from collections import deque, namedtuple Transition = namedtuple( 'Transition', field_names=[ 'state', 'action', 'reward', 'done', 'next_state' ] ) class ReplayBuffer: def __init__(self, capacity: int): self.buffer = deque(maxlen=capacity) def append(self, transition: Transition): self.buffer.append(transition) def sample(self, size: int): indices = np.random.choice( len(self.buffer), size, replace=size > len(self.buffer) ) states, actions, rewards, dones, next_states = \ zip(*[self.buffer[idx] for idx in indices]) states, actions, rewards, dones, next_states = ( np.array(states, dtype=np.float32), np.array(actions, dtype=np.int64), np.array(rewards, dtype=np.float32), np.array(dones, dtype=np.bool), np.array(next_states, dtype=np.float32) ) return states, actions, rewards, dones, next_states def __len__(self): return len(self.buffer) from torch.utils.data.dataset import IterableDataset # as far as RL does not have some predefined dataset, # we need to specify epoch lenght by ourselfs class ReplayDataset(IterableDataset): def __init__(self, buffer: ReplayBuffer, epoch_size: int = int(1e3)): self.buffer = buffer self.epoch_size = epoch_size def __iter__(self): states, actions, rewards, dones, next_states = \ self.buffer.sample(self.epoch_size) for i in range(len(dones)): yield states[i], actions[i], rewards[i], dones[i], next_states[i] def __len__(self): return self.epoch_size ``` The first difference is action normalization. Some enviroments have action space bounds, and model's actino have to lie in the bounds. ``` class NormalizedActions(gym.ActionWrapper): def action(self, action): low_bound = self.action_space.low upper_bound = self.action_space.high action = low_bound + (action + 1.0) * 0.5 * (upper_bound - low_bound) action = np.clip(action, low_bound, upper_bound) return action def _reverse_action(self, action): low_bound = self.action_space.low upper_bound = self.action_space.high action = 2 * (action - low_bound) / (upper_bound - low_bound) - 1 action = np.clip(action, low_bound, upper_bound) return actions ``` Next difference is randomness. We can't just sample an action from action space. But we can add noise to generated action. ``` def get_action(env, network, state, sigma=None): state = torch.tensor(state, dtype=torch.float32).to(device).unsqueeze(0) action = network(state).detach().cpu().numpy()[0] if sigma is not None: action = np.random.normal(action, sigma) return action def generate_session( env, network, sigma=None, replay_buffer=None, ): total_reward = 0 state = env.reset() for t in range(env.spec.max_episode_steps): action = get_action(env, network, state=state, sigma=sigma) next_state, reward, done, _ = env.step(action) if replay_buffer is not None: transition = Transition( state, action, reward, done, next_state) replay_buffer.append(transition) total_reward += reward state = next_state if done: break return total_reward, t def generate_sessions( env, network, sigma=None, replay_buffer=None, num_sessions=100, ): sessions_reward, sessions_steps = 0, 0 for i_episone in range(num_sessions): r, t = generate_session( env=env, network=network, sigma=sigma, replay_buffer=replay_buffer, ) sessions_reward += r sessions_steps += t return sessions_reward, sessions_steps def soft_update(target, source, tau): """Updates the target data with smoothing by ``tau``""" for target_param, param in zip(target.parameters(), source.parameters()): target_param.data.copy_( target_param.data * (1.0 - tau) + param.data * tau ) class GameCallback(dl.Callback): def __init__( self, *, env, replay_buffer, session_period, sigma, # sigma_k, actor_key, ): super().__init__(order=0) self.env = env self.replay_buffer = replay_buffer self.session_period = session_period self.sigma = sigma # self.sigma_k = sigma_k self.actor_key = actor_key def on_stage_start(self, runner: dl.IRunner): self.actor = runner.model[self.actor_key] self.actor.eval() generate_sessions( env=self.env, network=self.actor, sigma=self.sigma, replay_buffer=self.replay_buffer, num_sessions=1000, ) self.actor.train() def on_epoch_start(self, runner: dl.IRunner): self.session_counter = 0 self.session_steps = 0 def on_batch_end(self, runner: dl.IRunner): if runner.global_batch_step % self.session_period == 0: self.actor.eval() session_reward, session_steps = generate_session( env=self.env, network=self.actor, sigma=self.sigma, replay_buffer=self.replay_buffer, ) self.session_counter += 1 self.session_steps += session_steps runner.batch_metrics.update({"s_reward": session_reward}) runner.batch_metrics.update({"s_steps": session_steps}) self.actor.train() def on_epoch_end(self, runner: dl.IRunner): num_sessions = 100 self.actor.eval() valid_rewards, valid_steps = generate_sessions( env=self.env, network=self.actor, num_sessions=num_sessions ) self.actor.train() valid_rewards /= float(num_sessions) valid_steps /= float(num_sessions) runner.epoch_metrics["_epoch_"]["num_samples"] = self.session_steps runner.epoch_metrics["_epoch_"]["updates_per_sample"] = ( runner.loader_sample_step / self.session_steps ) runner.epoch_metrics["_epoch_"]["v_reward"] = valid_rewards ``` And the main difference is that we have two networks! Look at the algorithm: ![DDPG algorithm](https://miro.medium.com/max/1084/1*BVST6rlxL2csw3vxpeBS8Q.png) One network is used to generate action (Policy Network). Another judge network's action and predict current reward. Because we have two networks, we can train our model to act in a continues space. Let's code this algorithm in Runner train step. ``` class CustomRunner(dl.Runner): def __init__( self, *, gamma, tau, tau_period=1, **kwargs, ): super().__init__(**kwargs) self.gamma = gamma self.tau = tau self.tau_period = tau_period def on_stage_start(self, runner: dl.IRunner): super().on_stage_start(runner) soft_update(self.model["target_actor"], self.model["actor"], 1.0) soft_update(self.model["target_critic"], self.model["critic"], 1.0) def handle_batch(self, batch): # model train/valid step states, actions, rewards, dones, next_states = batch actor, target_actor = self.model["actor"], self.model["target_actor"] critic, target_critic = self.model["critic"], self.model["target_critic"] actor_optimizer, critic_optimizer = self.optimizer["actor"], self.optimizer["critic"] # get actions for the current state pred_actions = actor(states) # get q-values for the actions in current states pred_critic_states = torch.cat([states, pred_actions], 1) # use q-values to train the actor model policy_loss = (-critic(pred_critic_states)).mean() with torch.no_grad(): # get possible actions for the next states next_state_actions = target_actor(next_states) # get possible q-values for the next actions next_critic_states = torch.cat([next_states, next_state_actions], 1) next_state_values = target_critic(next_critic_states).detach().squeeze() next_state_values[dones] = 0.0 # compute Bellman's equation value target_state_values = next_state_values * self.gamma + rewards # compute predicted values critic_states = torch.cat([states, actions], 1) state_values = critic(critic_states).squeeze() # train the critic model value_loss = self.criterion( state_values, target_state_values.detach() ) self.batch_metrics.update({ "critic_loss": value_loss, "actor_loss": policy_loss }) if self.is_train_loader: actor.zero_grad() actor_optimizer.zero_grad() policy_loss.backward() actor_optimizer.step() critic.zero_grad() critic_optimizer.zero_grad() value_loss.backward() critic_optimizer.step() if self.global_batch_step % self.tau_period == 0: soft_update(target_actor, actor, self.tau) soft_update(target_critic, critic, self.tau) ``` Prepare networks generator and train models! ``` def get_network_actor(env): inner_fn = utils.get_optimal_inner_init(nn.ReLU) outer_fn = utils.outer_init network = torch.nn.Sequential( nn.Linear(env.observation_space.shape[0], 400), nn.ReLU(), nn.Linear(400, 300), nn.ReLU(), ) head = torch.nn.Sequential( nn.Linear(300, 1), nn.Tanh() ) network.apply(inner_fn) head.apply(outer_fn) return torch.nn.Sequential(network, head) def get_network_critic(env): inner_fn = utils.get_optimal_inner_init(nn.LeakyReLU) outer_fn = utils.outer_init network = torch.nn.Sequential( nn.Linear(env.observation_space.shape[0] + 1, 400), nn.LeakyReLU(0.01), nn.Linear(400, 300), nn.LeakyReLU(0.01), ) head = nn.Linear(300, 1) network.apply(inner_fn) head.apply(outer_fn) return torch.nn.Sequential(network, head) # data batch_size = 64 epoch_size = int(1e3) * batch_size buffer_size = int(1e5) # runner settings, ~training gamma = 0.99 tau = 0.01 tau_period = 1 # callback, ~exploration session_period = 1 sigma = 0.3 # optimization lr_actor = 1e-4 lr_critic = 1e-3 # env_name = "LunarLanderContinuous-v2" env_name = "Pendulum-v0" env = NormalizedActions(gym.make(env_name)) replay_buffer = ReplayBuffer(buffer_size) actor, target_actor = get_network_actor(env), get_network_actor(env) critic, target_critic = get_network_critic(env), get_network_critic(env) utils.set_requires_grad(target_actor, requires_grad=False) utils.set_requires_grad(target_critic, requires_grad=False) models = { "actor": actor, "critic": critic, "target_actor": target_actor, "target_critic": target_critic, } criterion = torch.nn.MSELoss() optimizer = { "actor": torch.optim.Adam(actor.parameters(), lr_actor), "critic": torch.optim.Adam(critic.parameters(), lr=lr_critic), } loaders = { "train": DataLoader( ReplayDataset(replay_buffer, epoch_size=epoch_size), batch_size=batch_size, ), } runner = CustomRunner( gamma=gamma, tau=tau, tau_period=tau_period, ) runner.train( model=models, criterion=criterion, optimizer=optimizer, loaders=loaders, logdir="./logs_ddpg", num_epochs=10, verbose=True, valid_loader="_epoch_", valid_metric="v_reward", minimize_valid_metric=False, load_best_on_end=True, callbacks=[ GameCallback( env=env, replay_buffer=replay_buffer, session_period=session_period, sigma=sigma, actor_key="actor", ) ] ) ``` And we can watch how our model plays in the games! \* to run cells below, you should update your python environment. Instruction depends on your system specification. ``` import gym.wrappers env = gym.wrappers.Monitor( gym.make(env_name), directory="videos_ddpg", force=True) generate_sessions( env=env, network=runner.model["actor"], num_sessions=100 ) env.close() # show video from IPython.display import HTML import os video_names = list( filter(lambda s: s.endswith(".mp4"), os.listdir("./videos_ddpg/"))) HTML(""" <video width="640" height="480" controls> <source src="{}" type="video/mp4"> </video> """.format("./videos/"+video_names[-1])) # this may or may not be _last_ video. Try other indices ```
github_jupyter
``` # Importing the libraries import numpy as np import matplotlib.pyplot as plt import pandas as pd import cv2 # Using glob to read all pokemon images at once # Don't forget to change the path when copying this project import glob images = [cv2.imread(file) for file in glob.glob("C:/Users/Rahul/Desktop/data/Pikachu/*.jpg")] images_1 = [cv2.imread(file) for file in glob.glob("C:/Users/Rahul/Desktop/data/Butterfree/*.jpg")] images_2 = [cv2.imread(file) for file in glob.glob("C:/Users/Rahul/Desktop/data/Ditto/*.jpg")] images, images_1, images_2 # Scaling and resizing all the Pikachu images and saving the result in a new list called mera_dat mera_dat = [] for i in range(199): desired_size = 368 im = images[i] old_size = im.shape[:2] # old_size is in (height, width) format ratio = float(desired_size)/max(old_size) new_size = tuple([int(x*ratio) for x in old_size]) # new_size should be in (width, height) format im = cv2.resize(im, (new_size[1], new_size[0])) delta_w = desired_size - new_size[1] delta_h = desired_size - new_size[0] top, bottom = delta_h//2, delta_h-(delta_h//2) left, right = delta_w//2, delta_w-(delta_w//2) color = [0, 0, 0] new_im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # cv2.imshow("image", new_im) # cv2.waitKey(0) # cv2.destroyAllWindows() # # cv2.imwrite('C:/Users/Rahul/Desktop/a.jpg'.format(i), new_im) mera_dat.append(new_im) # Scaling and resizing all the Butterfree images and saving the result in a new list called mera_dat mera_dat_1 = [] for i in range(66): desired_size = 368 im = images_1[i] old_size = im.shape[:2] # old_size is in (height, width) format ratio = float(desired_size)/max(old_size) new_size = tuple([int(x*ratio) for x in old_size]) # new_size should be in (width, height) format im = cv2.resize(im, (new_size[1], new_size[0])) delta_w = desired_size - new_size[1] delta_h = desired_size - new_size[0] top, bottom = delta_h//2, delta_h-(delta_h//2) left, right = delta_w//2, delta_w-(delta_w//2) color = [0, 0, 0] new_im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # cv2.imshow("image", new_im) # cv2.waitKey(0) # cv2.destroyAllWindows() # # cv2.imwrite('C:/Users/Rahul/Desktop/a.jpg'.format(i), new_im) mera_dat_1.append(new_im) # Scaling and resizing all the Ditto images and saving the result in a new list called mera_dat_2 mera_dat_2 = [] for i in range(56): desired_size = 368 im = images_2[i] old_size = im.shape[:2] # old_size is in (height, width) format ratio = float(desired_size)/max(old_size) new_size = tuple([int(x*ratio) for x in old_size]) # new_size should be in (width, height) format im = cv2.resize(im, (new_size[1], new_size[0])) delta_w = desired_size - new_size[1] delta_h = desired_size - new_size[0] top, bottom = delta_h//2, delta_h-(delta_h//2) left, right = delta_w//2, delta_w-(delta_w//2) color = [0, 0, 0] new_im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # cv2.imshow("image", new_im) # cv2.waitKey(0) # cv2.destroyAllWindows() # # cv2.imwrite('C:/Users/Rahul/Desktop/a.jpg'.format(i), new_im) mera_dat_2.append(new_im) # Converting the preprocessed and resized list into numpy arrays and performing normalization arr = np.array(mera_dat) arr = arr.reshape((199, 406272)) ar1 = np.array(mera_dat_1) ar1 = ar1.reshape((66, 406272)) ar2 = np.array(mera_dat_2) ar2 = ar2.reshape((56, 406272)) arr = arr / 255 ar1 = ar1 / 255 ar2 = ar2 / 255 # Scaled Images in numpy ndarray data structure arr, ar1, ar2 # Converting the numpy arrays to pandas dataframe structure # Generating labels for different pokemons # label 1 is for Pikachu # label 0 is for Butterfree # label 2 is for Ditto dataset = pd.DataFrame(arr) dataset['label'] = np.ones(199) dataset.iloc[:, -1] dataset_1 = pd.DataFrame(ar1) dataset_1['label'] = np.zeros(66) dataset_1.iloc[:, -1] dataset_2 = pd.DataFrame(ar2) dataset_2['label'] = np.array(np.ones(56) + np.ones(56)) dataset_2.iloc[:, -1] # Concatenating everything into a master dataframe dataset_master = pd.concat([dataset, dataset_1, dataset_2]) dataset_master # Splitting the dataset into feature matrix 'X' and vector of predictions 'y' X = dataset_master.iloc[:, 0:406272].values y = dataset_master.iloc[:, -1].values X, y # Implementing a simple ANN architecture for the classification problem import tensorflow as tf from tensorflow import keras model = keras.models.Sequential() model.add(keras.layers.Dense(256, activation = 'relu')) model.add(keras.layers.Dense(128, activation = 'relu')) model.add(keras.layers.Dense(3, activation = 'softmax')) model.compile(loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy']) history = model.fit(X, y, epochs = 5) # Visualizing the results pd.DataFrame(history.history).plot(figsize = (8, 5)) plt.grid(True) plt.gca().set_ylim(0, 1) plt.show() ```
github_jupyter
# Notebook used to visualize the daily distribution of electrical events, as depicted in the data descriptor ## Load packages and basic dataset information ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as mdates from sklearn.preprocessing import MinMaxScaler from matplotlib import patches import h5py import pandas as pd import os import sys from pathlib import Path from datetime import datetime from datetime import timedelta import math import seaborn as sns import pdb import scipy # Add project path to path for import project_path = os.path.abspath("..") if project_path not in sys.path: sys.path.append(project_path) # Add module path to path for import module_path = os.path.abspath("../data_utility/data_utility.py") if module_path not in sys.path: sys.path.append(module_path) from data_utility import CREAM_Day # class to work with a day of the CREAM Dataset # Intentional replication is necessary %load_ext autoreload # Reload all modules every time before executing the Python code typed. %autoreload 2 # Import some graphical modules from IPython.display import display, clear_output from ipywidgets import Button, Layout, ButtonStyle, HBox, VBox, widgets, Output from IPython.display import SVG, display, clear_output import subprocess import glob PATH_TO_DATA = os.path.abspath(os.path.join("..", "..","rbgstorage", "nilm", "i13-dataset", "CREAM")) def create_time_bin(hours : float, minutes : float) -> str: """ Creates a hour:minutes timestamp, ceiled to full 30 minutes. All minutes below 15, become 0. All between 15 and 45 minutes, become 30 minutes. All minutes between 45 and 60 become 0 and belong to the next hour. """ if minutes < 15: minutes = "00" elif minutes >= 15 and minutes < 45: minutes = "30" elif minutes >= 45: minutes = "00" hours += 1 if hours < 10: hours = "0" + str(hours) else: hours = str(hours) return hours + ":" + minutes def get_distribution(path_to_data, machine_name): path_to_data = os.path.join(path_to_data, machine_name) all_days = glob.glob(os.path.join(path_to_data, "*")) all_days = [os.path.basename(d) for d in all_days if "2018" in os.path.basename(d) or "2019" in os.path.basename(d) ] all_days.sort() # Load the events day_path = os.path.join(path_to_data, all_days[0]) #arbitrary day to initialize the object current_CREAM_day = CREAM_Day(cream_day_location=day_path,use_buffer=True, buffer_size_files=2) # Load the electrical component events (the raw ones) if machine == "X9": all_component_events = current_CREAM_day.load_component_events(os.path.join(path_to_data, "component_events_coarse.csv"), filter_day=False) else: all_component_events = current_CREAM_day.load_component_events(os.path.join(path_to_data, "component_events.csv"), filter_day=False) # Load the product and the maintenance events (the raw ones, per minute events) and filter for the day all_maintenance_events = current_CREAM_day.load_machine_events(os.path.join(path_to_data, "maintenance_events.csv"), raw_file=False, filter_day=False) all_product_events = current_CREAM_day.load_machine_events(os.path.join(path_to_data, "product_events.csv"), raw_file=False, filter_day=False) # create a new column with: hour:30, hour:0 in it for the x-axis as the labels all_component_events["Time_Bin"] = all_component_events.Timestamp.apply(lambda x: create_time_bin(x.hour, x.minute)) times, counts = np.unique(all_component_events["Time_Bin"] , return_counts=True) return times, counts ``` # Plot of the daily distribution of electrical events ``` # Create the figure fig, axes = plt.subplots(2, 1, sharex=True, sharey=True, figsize=(28,8)) # Iterate over the machines for i, machine in enumerate(["X8", "X9"]): # get the axis object ax = axes[i] # get the count information for the barplot times, counts = get_distribution(path_to_data=PATH_TO_DATA, machine_name=machine) # scale the counts between 0 and 1 counts = counts / np.max(counts) title_text = "Daily, scaled distribution of electrical events (Jura GIGA " + machine + ")" ax.set_title(label = title_text, fontdict={'fontsize': 24}) ax.set_ylabel("Events", fontsize=24) ax.tick_params(axis="both", labelsize=24, rotation=90) # ax.set_ylim(0, 20000) sns.barplot(x=times, y=counts,color="b", ax=ax) plt.xlabel("Time", fontsize=24) plt.tight_layout() plt.savefig("./Figure_3_BOTH.pdf") plt.show() ```
github_jupyter
``` #@title Clone MelGAN-VC Repository ! git clone https://github.com/moiseshorta/MelGAN-VC.git #@title Mount your Google Drive #Mount your Google Drive account from google.colab import drive drive.mount('/content/drive') #Get Example Datasets %cd /content/ #Target Audio = Antonio Zepeda - Templo Mayor !wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1-1pKhCyccFc6cDIHe_vOZFSRak6eh5Dy' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1-1pKhCyccFc6cDIHe_vOZFSRak6eh5Dy" -O az.zip && rm -rf /tmp/cookies.txt #Source Audio = Native American flutes !wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1-BR2SlBmxskjBKHKqazUiyIL0GGdyeg7' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1-BR2SlBmxskjBKHKqazUiyIL0GGdyeg7" -O flautaindigena.zip && rm -rf /tmp/cookies.txt !unzip az.zip !unzip flautaindigena.zip #@title Import Tensorflow and torchaudio #We'll be using TF 2.1 and torchaudio try: %tensorflow_version 2.x except Exception: pass import tensorflow as tf !pip install soundfile #to save wav files !pip install torch==1.8.0 !pip install --no-deps torchaudio==0.8 !pip uninstall tensorflow !pip install tensorflow-gpu==2.1 #@title Imports from __future__ import print_function, division from glob import glob import scipy import soundfile as sf import matplotlib.pyplot as plt from IPython.display import clear_output from tensorflow.keras.layers import Input, Dense, Reshape, Flatten, Concatenate, Conv2D, Conv2DTranspose, GlobalAveragePooling2D, UpSampling2D, LeakyReLU, ReLU, Add, Multiply, Lambda, Dot, BatchNormalization, Activation, ZeroPadding2D, Cropping2D, Cropping1D from tensorflow.keras.models import Sequential, Model, load_model from tensorflow.keras.optimizers import Adam from tensorflow.keras.initializers import TruncatedNormal, he_normal import tensorflow.keras.backend as K import datetime import numpy as np import random import matplotlib.pyplot as plt import collections from PIL import Image from skimage.transform import resize import imageio import librosa import librosa.display from librosa.feature import melspectrogram import os import time import IPython #@title Hyperparameters #Hyperparameters hop=512 #hop size (window size = 6*hop) sr=44100 #sampling rate min_level_db=-100 #reference values to normalize data ref_level_db=20 shape=64 #length of time axis of split specrograms to feed to generator vec_len=128 #length of vector generated by siamese vector bs = 16 #batch size delta = 2. #constant for siamese loss #@title Waveform to Spectrogram converter #There seems to be a problem with Tensorflow STFT, so we'll be using pytorch to handle offline mel-spectrogram generation and waveform reconstruction #For waveform reconstruction, a gradient-based method is used: ''' Decorsière, Rémi, Peter L. Søndergaard, Ewen N. MacDonald, and Torsten Dau. "Inversion of auditory spectrograms, traditional spectrograms, and other envelope representations." IEEE/ACM Transactions on Audio, Speech, and Language Processing 23, no. 1 (2014): 46-56.''' #ORIGINAL CODE FROM https://github.com/yoyololicon/spectrogram-inversion import torch import torch.nn as nn import torch.nn.functional as F from tqdm import tqdm from functools import partial import math import heapq from torchaudio.transforms import MelScale, Spectrogram torch.set_default_tensor_type('torch.cuda.FloatTensor') specobj = Spectrogram(n_fft=6*hop, win_length=6*hop, hop_length=hop, pad=0, power=2, normalized=True) specfunc = specobj.forward melobj = MelScale(n_mels=hop, sample_rate=sr, f_min=0.) melfunc = melobj.forward def melspecfunc(waveform): specgram = specfunc(waveform) mel_specgram = melfunc(specgram) return mel_specgram def spectral_convergence(input, target): return 20 * ((input - target).norm().log10() - target.norm().log10()) def GRAD(spec, transform_fn, samples=None, init_x0=None, maxiter=1000, tol=1e-6, verbose=1, evaiter=10, lr=0.003): spec = torch.Tensor(spec) samples = (spec.shape[-1]*hop)-hop if init_x0 is None: init_x0 = spec.new_empty((1,samples)).normal_(std=1e-6) x = nn.Parameter(init_x0) T = spec criterion = nn.L1Loss() optimizer = torch.optim.Adam([x], lr=lr) bar_dict = {} metric_func = spectral_convergence bar_dict['spectral_convergence'] = 0 metric = 'spectral_convergence' init_loss = None with tqdm(total=maxiter, disable=not verbose) as pbar: for i in range(maxiter): optimizer.zero_grad() V = transform_fn(x) loss = criterion(V, T) loss.backward() optimizer.step() lr = lr*0.9999 for param_group in optimizer.param_groups: param_group['lr'] = lr if i % evaiter == evaiter - 1: with torch.no_grad(): V = transform_fn(x) bar_dict[metric] = metric_func(V, spec).item() l2_loss = criterion(V, spec).item() pbar.set_postfix(**bar_dict, loss=l2_loss) pbar.update(evaiter) return x.detach().view(-1).cpu() def normalize(S): return np.clip((((S - min_level_db) / -min_level_db)*2.)-1., -1, 1) def denormalize(S): return (((np.clip(S, -1, 1)+1.)/2.) * -min_level_db) + min_level_db def prep(wv,hop=192): S = np.array(torch.squeeze(melspecfunc(torch.Tensor(wv).view(1,-1))).detach().cpu()) S = librosa.power_to_db(S)-ref_level_db return normalize(S) def deprep(S): S = denormalize(S)+ref_level_db S = librosa.db_to_power(S) wv = GRAD(np.expand_dims(S,0), melspecfunc, maxiter=2000, evaiter=10, tol=1e-8) return np.array(np.squeeze(wv)) #@title Helper Functions #Helper functions #Generate spectrograms from waveform array def tospec(data): specs=np.empty(data.shape[0], dtype=object) for i in range(data.shape[0]): x = data[i] S=prep(x) S = np.array(S, dtype=np.float32) specs[i]=np.expand_dims(S, -1) print(specs.shape) return specs #Generate multiple spectrograms with a determined length from single wav file def tospeclong(path, length=4*44100): x, sr = librosa.load(path,sr=44100) x,_ = librosa.effects.trim(x) loudls = librosa.effects.split(x, top_db=50) xls = np.array([]) for interv in loudls: xls = np.concatenate((xls,x[interv[0]:interv[1]])) x = xls num = x.shape[0]//length specs=np.empty(num, dtype=object) for i in range(num-1): a = x[i*length:(i+1)*length] S = prep(a) S = np.array(S, dtype=np.float32) try: sh = S.shape specs[i]=S except AttributeError: print('spectrogram failed') print(specs.shape) return specs #Waveform array from path of folder containing wav files def audio_array(path): ls = glob(f'{path}/*.wav') adata = [] for i in range(len(ls)): x, sr = tf.audio.decode_wav(tf.io.read_file(ls[i]), 1) x = np.array(x, dtype=np.float32) adata.append(x) return np.array(adata) #Concatenate spectrograms in array along the time axis def testass(a): but=False con = np.array([]) nim = a.shape[0] for i in range(nim): im = a[i] im = np.squeeze(im) if not but: con=im but=True else: con = np.concatenate((con,im), axis=1) return np.squeeze(con) #Split spectrograms in chunks with equal size def splitcut(data): ls = [] mini = 0 minifinal = 10*shape #max spectrogram length for i in range(data.shape[0]-1): if data[i].shape[1]<=data[i+1].shape[1]: mini = data[i].shape[1] else: mini = data[i+1].shape[1] if mini>=3*shape and mini<minifinal: minifinal = mini for i in range(data.shape[0]): x = data[i] if x.shape[1]>=3*shape: for n in range(x.shape[1]//minifinal): ls.append(x[:,n*minifinal:n*minifinal+minifinal,:]) ls.append(x[:,-minifinal:,:]) return np.array(ls) #@title Generating Mel-Spectrogram dataset. Audio files must be 44.1Khz/16bit .wav source_audio_directory = "/content/flautaindigena" #@param {type:"string"} target_audio_directory = "/content/AntonioZepeda" #@param {type:"string"} #Generating Mel-Spectrogram dataset (Uncomment where needed) #adata: source spectrograms #bdata: target spectrograms #SOURCE awv = audio_array(source_audio_directory) #get waveform array from folder containing wav files aspec = tospec(awv) #get spectrogram array adata = splitcut(aspec) #split spectrogams to fixed length #TARGET bwv = audio_array(target_audio_directory) bspec = tospec(bwv) bdata = splitcut(bspec) #@title Creates Tensorflow Datasets #Creating Tensorflow Datasets @tf.function def proc(x): return tf.image.random_crop(x, size=[hop, 3*shape, 1]) dsa = tf.data.Dataset.from_tensor_slices(adata).repeat(50).map(proc, num_parallel_calls=tf.data.experimental.AUTOTUNE).shuffle(10000).batch(bs, drop_remainder=True) dsb = tf.data.Dataset.from_tensor_slices(bdata).repeat(50).map(proc, num_parallel_calls=tf.data.experimental.AUTOTUNE).shuffle(10000).batch(bs, drop_remainder=True) #@title Adding Spectral Normalization to convolutional layers #Adding Spectral Normalization to convolutional layers from tensorflow.python.keras.utils import conv_utils from tensorflow.python.ops import array_ops from tensorflow.python.ops import math_ops from tensorflow.python.ops import sparse_ops from tensorflow.python.ops import gen_math_ops from tensorflow.python.ops import standard_ops from tensorflow.python.eager import context from tensorflow.python.framework import tensor_shape def l2normalize(v, eps=1e-12): return v / (tf.norm(v) + eps) class ConvSN2D(tf.keras.layers.Conv2D): def __init__(self, filters, kernel_size, power_iterations=1, **kwargs): super(ConvSN2D, self).__init__(filters, kernel_size, **kwargs) self.power_iterations = power_iterations def build(self, input_shape): super(ConvSN2D, self).build(input_shape) if self.data_format == 'channels_first': channel_axis = 1 else: channel_axis = -1 self.u = self.add_weight(self.name + '_u', shape=tuple([1, self.kernel.shape.as_list()[-1]]), initializer=tf.initializers.RandomNormal(0, 1), trainable=False ) def compute_spectral_norm(self, W, new_u, W_shape): for _ in range(self.power_iterations): new_v = l2normalize(tf.matmul(new_u, tf.transpose(W))) new_u = l2normalize(tf.matmul(new_v, W)) sigma = tf.matmul(tf.matmul(new_v, W), tf.transpose(new_u)) W_bar = W/sigma with tf.control_dependencies([self.u.assign(new_u)]): W_bar = tf.reshape(W_bar, W_shape) return W_bar def call(self, inputs): W_shape = self.kernel.shape.as_list() W_reshaped = tf.reshape(self.kernel, (-1, W_shape[-1])) new_kernel = self.compute_spectral_norm(W_reshaped, self.u, W_shape) outputs = self._convolution_op(inputs, new_kernel) if self.use_bias: if self.data_format == 'channels_first': outputs = tf.nn.bias_add(outputs, self.bias, data_format='NCHW') else: outputs = tf.nn.bias_add(outputs, self.bias, data_format='NHWC') if self.activation is not None: return self.activation(outputs) return outputs class ConvSN2DTranspose(tf.keras.layers.Conv2DTranspose): def __init__(self, filters, kernel_size, power_iterations=1, **kwargs): super(ConvSN2DTranspose, self).__init__(filters, kernel_size, **kwargs) self.power_iterations = power_iterations def build(self, input_shape): super(ConvSN2DTranspose, self).build(input_shape) if self.data_format == 'channels_first': channel_axis = 1 else: channel_axis = -1 self.u = self.add_weight(self.name + '_u', shape=tuple([1, self.kernel.shape.as_list()[-1]]), initializer=tf.initializers.RandomNormal(0, 1), trainable=False ) def compute_spectral_norm(self, W, new_u, W_shape): for _ in range(self.power_iterations): new_v = l2normalize(tf.matmul(new_u, tf.transpose(W))) new_u = l2normalize(tf.matmul(new_v, W)) sigma = tf.matmul(tf.matmul(new_v, W), tf.transpose(new_u)) W_bar = W/sigma with tf.control_dependencies([self.u.assign(new_u)]): W_bar = tf.reshape(W_bar, W_shape) return W_bar def call(self, inputs): W_shape = self.kernel.shape.as_list() W_reshaped = tf.reshape(self.kernel, (-1, W_shape[-1])) new_kernel = self.compute_spectral_norm(W_reshaped, self.u, W_shape) inputs_shape = array_ops.shape(inputs) batch_size = inputs_shape[0] if self.data_format == 'channels_first': h_axis, w_axis = 2, 3 else: h_axis, w_axis = 1, 2 height, width = inputs_shape[h_axis], inputs_shape[w_axis] kernel_h, kernel_w = self.kernel_size stride_h, stride_w = self.strides if self.output_padding is None: out_pad_h = out_pad_w = None else: out_pad_h, out_pad_w = self.output_padding out_height = conv_utils.deconv_output_length(height, kernel_h, padding=self.padding, output_padding=out_pad_h, stride=stride_h, dilation=self.dilation_rate[0]) out_width = conv_utils.deconv_output_length(width, kernel_w, padding=self.padding, output_padding=out_pad_w, stride=stride_w, dilation=self.dilation_rate[1]) if self.data_format == 'channels_first': output_shape = (batch_size, self.filters, out_height, out_width) else: output_shape = (batch_size, out_height, out_width, self.filters) output_shape_tensor = array_ops.stack(output_shape) outputs = K.conv2d_transpose( inputs, new_kernel, output_shape_tensor, strides=self.strides, padding=self.padding, data_format=self.data_format, dilation_rate=self.dilation_rate) if not context.executing_eagerly(): out_shape = self.compute_output_shape(inputs.shape) outputs.set_shape(out_shape) if self.use_bias: outputs = tf.nn.bias_add( outputs, self.bias, data_format=conv_utils.convert_data_format(self.data_format, ndim=4)) if self.activation is not None: return self.activation(outputs) return outputs class DenseSN(Dense): def build(self, input_shape): super(DenseSN, self).build(input_shape) self.u = self.add_weight(self.name + '_u', shape=tuple([1, self.kernel.shape.as_list()[-1]]), initializer=tf.initializers.RandomNormal(0, 1), trainable=False) def compute_spectral_norm(self, W, new_u, W_shape): new_v = l2normalize(tf.matmul(new_u, tf.transpose(W))) new_u = l2normalize(tf.matmul(new_v, W)) sigma = tf.matmul(tf.matmul(new_v, W), tf.transpose(new_u)) W_bar = W/sigma with tf.control_dependencies([self.u.assign(new_u)]): W_bar = tf.reshape(W_bar, W_shape) return W_bar def call(self, inputs): W_shape = self.kernel.shape.as_list() W_reshaped = tf.reshape(self.kernel, (-1, W_shape[-1])) new_kernel = self.compute_spectral_norm(W_reshaped, self.u, W_shape) rank = len(inputs.shape) if rank > 2: outputs = standard_ops.tensordot(inputs, new_kernel, [[rank - 1], [0]]) if not context.executing_eagerly(): shape = inputs.shape.as_list() output_shape = shape[:-1] + [self.units] outputs.set_shape(output_shape) else: inputs = math_ops.cast(inputs, self._compute_dtype) if K.is_sparse(inputs): outputs = sparse_ops.sparse_tensor_dense_matmul(inputs, new_kernel) else: outputs = gen_math_ops.mat_mul(inputs, new_kernel) if self.use_bias: outputs = tf.nn.bias_add(outputs, self.bias) if self.activation is not None: return self.activation(outputs) return outputs #@title Networks Architecture #Networks Architecture init = tf.keras.initializers.he_uniform() def conv2d(layer_input, filters, kernel_size=4, strides=2, padding='same', leaky=True, bnorm=True, sn=True): if leaky: Activ = LeakyReLU(alpha=0.2) else: Activ = ReLU() if sn: d = ConvSN2D(filters, kernel_size=kernel_size, strides=strides, padding=padding, kernel_initializer=init, use_bias=False)(layer_input) else: d = Conv2D(filters, kernel_size=kernel_size, strides=strides, padding=padding, kernel_initializer=init, use_bias=False)(layer_input) if bnorm: d = BatchNormalization()(d) d = Activ(d) return d def deconv2d(layer_input, layer_res, filters, kernel_size=4, conc=True, scalev=False, bnorm=True, up=True, padding='same', strides=2): if up: u = UpSampling2D((1,2))(layer_input) u = ConvSN2D(filters, kernel_size, strides=(1,1), kernel_initializer=init, use_bias=False, padding=padding)(u) else: u = ConvSN2DTranspose(filters, kernel_size, strides=strides, kernel_initializer=init, use_bias=False, padding=padding)(layer_input) if bnorm: u = BatchNormalization()(u) u = LeakyReLU(alpha=0.2)(u) if conc: u = Concatenate()([u,layer_res]) return u #Extract function: splitting spectrograms def extract_image(im): im1 = Cropping2D(((0,0), (0, 2*(im.shape[2]//3))))(im) im2 = Cropping2D(((0,0), (im.shape[2]//3,im.shape[2]//3)))(im) im3 = Cropping2D(((0,0), (2*(im.shape[2]//3), 0)))(im) return im1,im2,im3 #Assemble function: concatenating spectrograms def assemble_image(lsim): im1,im2,im3 = lsim imh = Concatenate(2)([im1,im2,im3]) return imh #U-NET style architecture def build_generator(input_shape): h,w,c = input_shape inp = Input(shape=input_shape) #downscaling g0 = tf.keras.layers.ZeroPadding2D((0,1))(inp) g1 = conv2d(g0, 256, kernel_size=(h,3), strides=1, padding='valid') g2 = conv2d(g1, 256, kernel_size=(1,9), strides=(1,2)) g3 = conv2d(g2, 256, kernel_size=(1,7), strides=(1,2)) #upscaling g4 = deconv2d(g3,g2, 256, kernel_size=(1,7), strides=(1,2)) g5 = deconv2d(g4,g1, 256, kernel_size=(1,9), strides=(1,2), bnorm=False) g6 = ConvSN2DTranspose(1, kernel_size=(h,1), strides=(1,1), kernel_initializer=init, padding='valid', activation='tanh')(g5) return Model(inp,g6, name='G') #Siamese Network def build_siamese(input_shape): h,w,c = input_shape inp = Input(shape=input_shape) g1 = conv2d(inp, 256, kernel_size=(h,3), strides=1, padding='valid', sn=False) g2 = conv2d(g1, 256, kernel_size=(1,9), strides=(1,2), sn=False) g3 = conv2d(g2, 256, kernel_size=(1,7), strides=(1,2), sn=False) g4 = Flatten()(g3) g5 = Dense(vec_len)(g4) return Model(inp, g5, name='S') #Discriminator (Critic) Network def build_critic(input_shape): h,w,c = input_shape inp = Input(shape=input_shape) g1 = conv2d(inp, 512, kernel_size=(h,3), strides=1, padding='valid', bnorm=False) g2 = conv2d(g1, 512, kernel_size=(1,9), strides=(1,2), bnorm=False) g3 = conv2d(g2, 512, kernel_size=(1,7), strides=(1,2), bnorm=False) g4 = Flatten()(g3) g4 = DenseSN(1, kernel_initializer=init)(g4) return Model(inp, g4, name='C') #@title Save in training loop checkpoint_save_directory = "/content/MelGAN-VC/checkpoint" #@param {type:"string"} #Load past models from path to resume training or test def load(path): gen = build_generator((hop,shape,1)) siam = build_siamese((hop,shape,1)) critic = build_critic((hop,3*shape,1)) gen.load_weights(path+'/gen.h5') critic.load_weights(path+'/critic.h5') siam.load_weights(path+'/siam.h5') return gen,critic,siam #Build models def build(): gen = build_generator((hop,shape,1)) siam = build_siamese((hop,shape,1)) critic = build_critic((hop,3*shape,1)) #the discriminator accepts as input spectrograms of triple the width of those generated by the generator return gen,critic,siam #Generate a random batch to display current training results def testgena(): sw = True while sw: a = np.random.choice(aspec) if a.shape[1]//shape!=1: sw=False dsa = [] if a.shape[1]//shape>6: num=6 else: num=a.shape[1]//shape rn = np.random.randint(a.shape[1]-(num*shape)) for i in range(num): im = a[:,rn+(i*shape):rn+(i*shape)+shape] im = np.reshape(im, (im.shape[0],im.shape[1],1)) dsa.append(im) return np.array(dsa, dtype=np.float32) #Show results mid-training def save_test_image_full(path): a = testgena() print(a.shape) ab = gen(a, training=False) ab = testass(ab) a = testass(a) abwv = deprep(ab) awv = deprep(a) sf.write(path+'/new_file.wav', abwv, sr) IPython.display.display(IPython.display.Audio(np.squeeze(abwv), rate=sr)) IPython.display.display(IPython.display.Audio(np.squeeze(awv), rate=sr)) fig, axs = plt.subplots(ncols=2) axs[0].imshow(np.flip(a, -2), cmap=None) axs[0].axis('off') axs[0].set_title('Source') axs[1].imshow(np.flip(ab, -2), cmap=None) axs[1].axis('off') axs[1].set_title('Generated') plt.show() #Save in training loop def save_end(epoch,gloss,closs,mloss,n_save=3,save_path=checkpoint_save_directory): #use custom save_path (i.e. Drive '../content/drive/My Drive/') if epoch % n_save == 0: print('Saving...') path = f'{save_path}/MELGANVC-{str(gloss)[:9]}-{str(closs)[:9]}-{str(mloss)[:9]}' os.mkdir(path) gen.save_weights(path+'/gen.h5') critic.save_weights(path+'/critic.h5') siam.save_weights(path+'/siam.h5') save_test_image_full(path) #@title Losses #Losses def mae(x,y): return tf.reduce_mean(tf.abs(x-y)) def mse(x,y): return tf.reduce_mean((x-y)**2) def loss_travel(sa,sab,sa1,sab1): l1 = tf.reduce_mean(((sa-sa1) - (sab-sab1))**2) l2 = tf.reduce_mean(tf.reduce_sum(-(tf.nn.l2_normalize(sa-sa1, axis=[-1]) * tf.nn.l2_normalize(sab-sab1, axis=[-1])), axis=-1)) return l1+l2 def loss_siamese(sa,sa1): logits = tf.sqrt(tf.reduce_sum((sa-sa1)**2, axis=-1, keepdims=True)) return tf.reduce_mean(tf.square(tf.maximum((delta - logits), 0))) def d_loss_f(fake): return tf.reduce_mean(tf.maximum(1 + fake, 0)) def d_loss_r(real): return tf.reduce_mean(tf.maximum(1 - real, 0)) def g_loss_f(fake): return tf.reduce_mean(- fake) #@title Get models and optimizers & Set up learning rate #Get models and optimizers def get_networks(shape, load_model=False, path=None): if not load_model: gen,critic,siam = build() else: gen,critic,siam = load(path) print('Built networks') opt_gen = Adam(0.0001, 0.5) opt_disc = Adam(0.0001, 0.5) return gen,critic,siam, [opt_gen,opt_disc] #Set learning rate def update_lr(lr): opt_gen.learning_rate = lr opt_disc.learning_rate = lr #@title Training Functions set to Voice Modality #Training Functions #Train Generator, Siamese and Critic @tf.function def train_all(a,b): #splitting spectrogram in 3 parts aa,aa2,aa3 = extract_image(a) bb,bb2,bb3 = extract_image(b) with tf.GradientTape() as tape_gen, tf.GradientTape() as tape_disc: #translating A to B fab = gen(aa, training=True) fab2 = gen(aa2, training=True) fab3 = gen(aa3, training=True) #identity mapping B to B COMMENT THESE 3 LINES IF THE IDENTITY LOSS TERM IS NOT NEEDED fid = gen(bb, training=True) fid2 = gen(bb2, training=True) fid3 = gen(bb3, training=True) #concatenate/assemble converted spectrograms fabtot = assemble_image([fab,fab2,fab3]) #feed concatenated spectrograms to critic cab = critic(fabtot, training=True) cb = critic(b, training=True) #feed 2 pairs (A,G(A)) extracted spectrograms to Siamese sab = siam(fab, training=True) sab2 = siam(fab3, training=True) sa = siam(aa, training=True) sa2 = siam(aa3, training=True) #identity mapping loss loss_id = (mae(bb,fid)+mae(bb2,fid2)+mae(bb3,fid3))/3. #loss_id = 0. IF THE IDENTITY LOSS TERM IS NOT NEEDED #loss_id = 0. #travel loss loss_m = loss_travel(sa,sab,sa2,sab2)+loss_siamese(sa,sa2) #generator and critic losses loss_g = g_loss_f(cab) loss_dr = d_loss_r(cb) loss_df = d_loss_f(cab) loss_d = (loss_dr+loss_df)/2. #generator+siamese total loss lossgtot = loss_g+10.*loss_m+0.5*loss_id #CHANGE LOSS WEIGHTS HERE (COMMENT OUT +w*loss_id IF THE IDENTITY LOSS TERM IS NOT NEEDED) #computing and applying gradients grad_gen = tape_gen.gradient(lossgtot, gen.trainable_variables+siam.trainable_variables) opt_gen.apply_gradients(zip(grad_gen, gen.trainable_variables+siam.trainable_variables)) grad_disc = tape_disc.gradient(loss_d, critic.trainable_variables) opt_disc.apply_gradients(zip(grad_disc, critic.trainable_variables)) return loss_dr,loss_df,loss_g,loss_id #Train Critic only @tf.function def train_d(a,b): aa,aa2,aa3 = extract_image(a) with tf.GradientTape() as tape_disc: fab = gen(aa, training=True) fab2 = gen(aa2, training=True) fab3 = gen(aa3, training=True) fabtot = assemble_image([fab,fab2,fab3]) cab = critic(fabtot, training=True) cb = critic(b, training=True) loss_dr = d_loss_r(cb) loss_df = d_loss_f(cab) loss_d = (loss_dr+loss_df)/2. grad_disc = tape_disc.gradient(loss_d, critic.trainable_variables) opt_disc.apply_gradients(zip(grad_disc, critic.trainable_variables)) return loss_dr,loss_df #@title Training Functions set to Music Modality #Training Functions #Train Generator, Siamese and Critic @tf.function def train_all(a,b): #splitting spectrogram in 3 parts aa,aa2,aa3 = extract_image(a) bb,bb2,bb3 = extract_image(b) with tf.GradientTape() as tape_gen, tf.GradientTape() as tape_disc: #translating A to B fab = gen(aa, training=True) fab2 = gen(aa2, training=True) fab3 = gen(aa3, training=True) #identity mapping B to B COMMENT THESE 3 LINES IF THE IDENTITY LOSS TERM IS NOT NEEDED #fid = gen(bb, training=True) #fid2 = gen(bb2, training=True) #fid3 = gen(bb3, training=True) #concatenate/assemble converted spectrograms fabtot = assemble_image([fab,fab2,fab3]) #feed concatenated spectrograms to critic cab = critic(fabtot, training=True) cb = critic(b, training=True) #feed 2 pairs (A,G(A)) extracted spectrograms to Siamese sab = siam(fab, training=True) sab2 = siam(fab3, training=True) sa = siam(aa, training=True) sa2 = siam(aa3, training=True) #identity mapping loss #loss_id = (mae(bb,fid)+mae(bb2,fid2)+mae(bb3,fid3))/3. #loss_id = 0. IF THE IDENTITY LOSS TERM IS NOT NEEDED loss_id = 0. #travel loss loss_m = loss_travel(sa,sab,sa2,sab2)+loss_siamese(sa,sa2) #generator and critic losses loss_g = g_loss_f(cab) loss_dr = d_loss_r(cb) loss_df = d_loss_f(cab) loss_d = (loss_dr+loss_df)/2. #generator+siamese total loss lossgtot = loss_g+10.*loss_m #+0.5*loss_id #CHANGE LOSS WEIGHTS HERE (COMMENT OUT +w*loss_id IF THE IDENTITY LOSS TERM IS NOT NEEDED) #computing and applying gradients grad_gen = tape_gen.gradient(lossgtot, gen.trainable_variables+siam.trainable_variables) opt_gen.apply_gradients(zip(grad_gen, gen.trainable_variables+siam.trainable_variables)) grad_disc = tape_disc.gradient(loss_d, critic.trainable_variables) opt_disc.apply_gradients(zip(grad_disc, critic.trainable_variables)) return loss_dr,loss_df,loss_g,loss_id #Train Critic only @tf.function def train_d(a,b): aa,aa2,aa3 = extract_image(a) with tf.GradientTape() as tape_disc: fab = gen(aa, training=True) fab2 = gen(aa2, training=True) fab3 = gen(aa3, training=True) fabtot = assemble_image([fab,fab2,fab3]) cab = critic(fabtot, training=True) cb = critic(b, training=True) loss_dr = d_loss_r(cb) loss_df = d_loss_f(cab) loss_d = (loss_dr+loss_df)/2. grad_disc = tape_disc.gradient(loss_d, critic.trainable_variables) opt_disc.apply_gradients(zip(grad_disc, critic.trainable_variables)) return loss_dr,loss_df #@title Training Loop #Training Loop def train(epochs, batch_size=16, lr=0.0001, n_save=6, gupt=5): update_lr(lr) df_list = [] dr_list = [] g_list = [] id_list = [] c = 0 g = 0 for epoch in range(epochs): bef = time.time() for batchi,(a,b) in enumerate(zip(dsa,dsb)): if batchi%gupt==0: dloss_t,dloss_f,gloss,idloss = train_all(a,b) else: dloss_t,dloss_f = train_d(a,b) df_list.append(dloss_f) dr_list.append(dloss_t) g_list.append(gloss) id_list.append(idloss) c += 1 g += 1 if batchi%600==0: print(f'[Epoch {epoch}/{epochs}] [Batch {batchi}] [D loss f: {np.mean(df_list[-g:], axis=0)} ', end='') print(f'r: {np.mean(dr_list[-g:], axis=0)}] ', end='') print(f'[G loss: {np.mean(g_list[-g:], axis=0)}] ', end='') print(f'[ID loss: {np.mean(id_list[-g:])}] ', end='') print(f'[LR: {lr}]') g = 0 nbatch=batchi print(f'Time/Batch {(time.time()-bef)/nbatch}') save_end(epoch,np.mean(g_list[-n_save*c:], axis=0),np.mean(df_list[-n_save*c:], axis=0),np.mean(id_list[-n_save*c:], axis=0),n_save=n_save) print(f'Mean D loss: {np.mean(df_list[-c:], axis=0)} Mean G loss: {np.mean(g_list[-c:], axis=0)} Mean ID loss: {np.mean(id_list[-c:], axis=0)}') c = 0 #@title Build models and initialize optimizers. If resume_model=True, specify the path where the models are saved resume_model = True #@param {type:"boolean"} model_checkpoints_directory = "/content/MelGAN-VC/checkpoint" #@param {type:"string"} #Build models and initialize optimizers #If load_model=True, specify the path where the models are saved gen,critic,siam, [opt_gen,opt_disc] = get_networks(shape, load_model=resume_model, path=model_checkpoints_directory) #@title Training. epoch_save = how many epochs between each saving and displaying of results. n_epochs = how many epochs the model will train epoch_save = 10#@param {type:"integer"} n_epochs = 3000 #@param {type:"integer"} #Training #n_save = how many epochs between each saving and displaying of results #gupt = how many discriminator updates for generator+siamese update train(n_epochs, batch_size=bs, lr=0.0002, n_save=epoch_save, gupt=3) #@title After Training, use these functions to convert data with the generator and save the results #After Training, use these functions to convert data with the generator and save the results #Assembling generated Spectrogram chunks into final Spectrogram def specass(a,spec): but=False con = np.array([]) nim = a.shape[0] for i in range(nim-1): im = a[i] im = np.squeeze(im) if not but: con=im but=True else: con = np.concatenate((con,im), axis=1) diff = spec.shape[1]-(nim*shape) a = np.squeeze(a) con = np.concatenate((con,a[-1,:,-diff:]), axis=1) return np.squeeze(con) #Splitting input spectrogram into different chunks to feed to the generator def chopspec(spec): dsa=[] for i in range(spec.shape[1]//shape): im = spec[:,i*shape:i*shape+shape] im = np.reshape(im, (im.shape[0],im.shape[1],1)) dsa.append(im) imlast = spec[:,-shape:] imlast = np.reshape(imlast, (imlast.shape[0],imlast.shape[1],1)) dsa.append(imlast) return np.array(dsa, dtype=np.float32) #Converting from source Spectrogram to target Spectrogram def towave(spec, name, path='../content/', show=False): specarr = chopspec(spec) print(specarr.shape) a = specarr print('Generating...') ab = gen(a, training=False) print('Assembling and Converting...') a = specass(a,spec) ab = specass(ab,spec) awv = deprep(a) abwv = deprep(ab) print('Saving...') pathfin = f'{path}/{name}' os.mkdir(pathfin) sf.write(pathfin+'/AB.wav', abwv, sr) sf.write(pathfin+'/A.wav', awv, sr) print('Saved WAV!') IPython.display.display(IPython.display.Audio(np.squeeze(abwv), rate=sr)) IPython.display.display(IPython.display.Audio(np.squeeze(awv), rate=sr)) if show: fig, axs = plt.subplots(ncols=2) axs[0].imshow(np.flip(a, -2), cmap=None) axs[0].axis('off') axs[0].set_title('Source') axs[1].imshow(np.flip(ab, -2), cmap=None) axs[1].axis('off') axs[1].set_title('Generated') plt.show() return abwv #@title Generator and wav to wav conversion input_wav = "/content/flautaindigena/23.wav" #@param {type:"string"} output_name = "flauta_to_AntonioZepeda" #@param {type:"string"} output_directory = "/content/MelGAN-VC" #@param {type:"string"} #Wav to wav conversion wv, sr = librosa.load(input_wav, sr=44100) #Load waveform print(wv.shape) speca = prep(wv) #Waveform to Spectrogram plt.figure(figsize=(50,1)) #Show Spectrogram plt.imshow(np.flip(speca, axis=0), cmap=None) plt.axis('off') plt.show() abwv = towave(speca, name=output_name, path=output_directory) #Convert and save wav ```
github_jupyter
# VQE for Unitary Coupled Cluster using tket In this tutorial, we will focus on:<br> - building parameterised ansätze for variational algorithms;<br> - compilation tools for UCC-style ansätze. This example assumes the reader is familiar with the Variational Quantum Eigensolver and its application to electronic structure problems through the Unitary Coupled Cluster approach.<br> <br> To run this example, you will need `pytket` and `pytket-qiskit`, as well as `openfermion`, `scipy`, and `sympy`.<br> <br> We will start with a basic implementation and then gradually modify it to make it faster, more general, and less noisy. The final solution is given in full at the bottom of the notebook.<br> <br> Suppose we have some electronic configuration problem, expressed via a physical Hamiltonian. (The Hamiltonian and excitations in this example were obtained using `qiskit-aqua` version 0.5.2 and `pyscf` for H2, bond length 0.75A, sto3g basis, Jordan-Wigner encoding, with no qubit reduction or orbital freezing.) ``` from openfermion import QubitOperator hamiltonian = ( -0.8153001706270075 * QubitOperator("") + 0.16988452027940318 * QubitOperator("Z0") + -0.21886306781219608 * QubitOperator("Z1") + 0.16988452027940323 * QubitOperator("Z2") + -0.2188630678121961 * QubitOperator("Z3") + 0.12005143072546047 * QubitOperator("Z0 Z1") + 0.16821198673715723 * QubitOperator("Z0 Z2") + 0.16549431486978672 * QubitOperator("Z0 Z3") + 0.16549431486978672 * QubitOperator("Z1 Z2") + 0.1739537877649417 * QubitOperator("Z1 Z3") + 0.12005143072546047 * QubitOperator("Z2 Z3") + 0.04544288414432624 * QubitOperator("X0 X1 X2 X3") + 0.04544288414432624 * QubitOperator("X0 X1 Y2 Y3") + 0.04544288414432624 * QubitOperator("Y0 Y1 X2 X3") + 0.04544288414432624 * QubitOperator("Y0 Y1 Y2 Y3") ) nuclear_repulsion_energy = 0.70556961456 ``` We would like to define our ansatz for arbitrary parameter values. For simplicity, let's start with a Hardware Efficient Ansatz. ``` from pytket import Circuit ``` Hardware efficient ansatz: ``` def hea(params): ansatz = Circuit(4) for i in range(4): ansatz.Ry(params[i], i) for i in range(3): ansatz.CX(i, i + 1) for i in range(4): ansatz.Ry(params[4 + i], i) return ansatz ``` We can use this to build the objective function for our optimisation. ``` from pytket.extensions.qiskit import AerBackend from pytket.utils import expectation_from_counts backend = AerBackend() ``` Naive objective function: ``` def objective(params): energy = 0 for term, coeff in hamiltonian.terms.items(): if not term: energy += coeff continue circ = hea(params) circ.add_c_register("c", len(term)) for i, (q, pauli) in enumerate(term): if pauli == "X": circ.H(q) elif pauli == "Y": circ.V(q) circ.Measure(q, i) backend.compile_circuit(circ) counts = backend.run_circuit(circ, n_shots=4000).get_counts() energy += coeff * expectation_from_counts(counts) return energy + nuclear_repulsion_energy ``` This objective function is then run through a classical optimiser to find the set of parameter values that minimise the energy of the system. For the sake of example, we will just run this with a single parameter value. ``` arg_values = [ -7.31158201e-02, -1.64514836e-04, 1.12585591e-03, -2.58367544e-03, 1.00006068e00, -1.19551357e-03, 9.99963988e-01, 2.53283285e-03, ] energy = objective(arg_values) print(energy) ``` The HEA is designed to cram as many orthogonal degrees of freedom into a small circuit as possible to be able to explore a large region of the Hilbert space whilst the circuits themselves can be run with minimal noise. These ansätze give virtually-optimal circuits by design, but suffer from an excessive number of variational parameters making convergence slow, barren plateaus where the classical optimiser fails to make progress, and spanning a space where most states lack a physical interpretation. These drawbacks can necessitate adding penalties and may mean that the ansatz cannot actually express the true ground state.<br> <br> The UCC ansatz, on the other hand, is derived from the electronic configuration. It sacrifices efficiency of the circuit for the guarantee of physical states and the variational parameters all having some meaningful effect, which helps the classical optimisation to converge.<br> <br> This starts by defining the terms of our single and double excitations. These would usually be generated using the orbital configurations, so we will just use a hard-coded example here for the purposes of demonstration. ``` from pytket.pauli import Pauli, QubitPauliString from pytket.circuit import Qubit q = [Qubit(i) for i in range(4)] xyii = QubitPauliString([q[0], q[1]], [Pauli.X, Pauli.Y]) yxii = QubitPauliString([q[0], q[1]], [Pauli.Y, Pauli.X]) iixy = QubitPauliString([q[2], q[3]], [Pauli.X, Pauli.Y]) iiyx = QubitPauliString([q[2], q[3]], [Pauli.Y, Pauli.X]) xxxy = QubitPauliString(q, [Pauli.X, Pauli.X, Pauli.X, Pauli.Y]) xxyx = QubitPauliString(q, [Pauli.X, Pauli.X, Pauli.Y, Pauli.X]) xyxx = QubitPauliString(q, [Pauli.X, Pauli.Y, Pauli.X, Pauli.X]) yxxx = QubitPauliString(q, [Pauli.Y, Pauli.X, Pauli.X, Pauli.X]) yyyx = QubitPauliString(q, [Pauli.Y, Pauli.Y, Pauli.Y, Pauli.X]) yyxy = QubitPauliString(q, [Pauli.Y, Pauli.Y, Pauli.X, Pauli.Y]) yxyy = QubitPauliString(q, [Pauli.Y, Pauli.X, Pauli.Y, Pauli.Y]) xyyy = QubitPauliString(q, [Pauli.X, Pauli.Y, Pauli.Y, Pauli.Y]) singles_a = {xyii: 1.0, yxii: -1.0} singles_b = {iixy: 1.0, iiyx: -1.0} doubles = { xxxy: 0.25, xxyx: -0.25, xyxx: 0.25, yxxx: -0.25, yyyx: -0.25, yyxy: 0.25, yxyy: -0.25, xyyy: 0.25, } ``` Building the ansatz circuit itself is often done naively by defining the map from each term down to basic gates and then applying it to each term. ``` def add_operator_term(circuit: Circuit, term: QubitPauliString, angle: float): qubits = [] for q, p in term.map.items(): if p != Pauli.I: qubits.append(q) if p == Pauli.X: circuit.H(q) elif p == Pauli.Y: circuit.V(q) for i in range(len(qubits) - 1): circuit.CX(i, i + 1) circuit.Rz(angle, len(qubits) - 1) for i in reversed(range(len(qubits) - 1)): circuit.CX(i, i + 1) for q, p in term.map.items(): if p == Pauli.X: circuit.H(q) elif p == Pauli.Y: circuit.Vdg(q) ``` Unitary Coupled Cluster Singles & Doubles ansatz: ``` def ucc(params): ansatz = Circuit(4) # Set initial reference state ansatz.X(1).X(3) # Evolve by excitations for term, coeff in singles_a.items(): add_operator_term(ansatz, term, coeff * params[0]) for term, coeff in singles_b.items(): add_operator_term(ansatz, term, coeff * params[1]) for term, coeff in doubles.items(): add_operator_term(ansatz, term, coeff * params[2]) return ansatz ``` This is already quite verbose, but `pytket` has a neat shorthand construction for these operator terms using the `PauliExpBox` construction. We can then decompose these into basic gates using the `DecomposeBoxes` compiler pass. ``` from pytket.circuit import PauliExpBox from pytket.passes import DecomposeBoxes def add_excitation(circ, term_dict, param): for term, coeff in term_dict.items(): qubits, paulis = zip(*term.map.items()) pbox = PauliExpBox(paulis, coeff * param) circ.add_pauliexpbox(pbox, qubits) ``` UCC ansatz with syntactic shortcuts: ``` def ucc(params): ansatz = Circuit(4) ansatz.X(1).X(3) add_excitation(ansatz, singles_a, params[0]) add_excitation(ansatz, singles_b, params[1]) add_excitation(ansatz, doubles, params[2]) DecomposeBoxes().apply(ansatz) return ansatz ``` The objective function can also be simplified using a utility method for constructing the measurement circuits and processing for expectation value calculations. ``` from pytket.utils.operators import QubitPauliOperator from pytket.utils import get_operator_expectation_value hamiltonian_op = QubitPauliOperator.from_OpenFermion(hamiltonian) ``` Simplified objective function using utilities: ``` def objective(params): circ = ucc(params) return ( get_operator_expectation_value(circ, hamiltonian_op, backend, n_shots=4000) + nuclear_repulsion_energy ) arg_values = [-3.79002933e-05, 2.42964799e-05, 4.63447157e-01] energy = objective(arg_values) print(energy) ``` This is now the simplest form that this operation can take, but it isn't necessarily the most effective. When we decompose the ansatz circuit into basic gates, it is still very expensive. We can employ some of the circuit simplification passes available in `pytket` to reduce its size and improve fidelity in practice.<br> <br> A good example is to decompose each `PauliExpBox` into basic gates and then apply `FullPeepholeOptimise`, which defines a compilation strategy utilising all of the simplifications in `pytket` that act locally on small regions of a circuit. We can examine the effectiveness by looking at the number of two-qubit gates before and after simplification, which tends to be a good indicator of fidelity for near-term systems where these gates are often slow and inaccurate. ``` from pytket import OpType from pytket.passes import FullPeepholeOptimise test_circuit = ucc(arg_values) print("CX count before", test_circuit.n_gates_of_type(OpType.CX)) print("CX depth before", test_circuit.depth_by_type(OpType.CX)) FullPeepholeOptimise().apply(test_circuit) print("CX count after FPO", test_circuit.n_gates_of_type(OpType.CX)) print("CX depth after FPO", test_circuit.depth_by_type(OpType.CX)) ``` These simplification techniques are very general and are almost always beneficial to apply to a circuit if you want to eliminate local redundancies. But UCC ansätze have extra structure that we can exploit further. They are defined entirely out of exponentiated tensors of Pauli matrices, giving the regular structure described by the `PauliExpBox`es. Under many circumstances, it is more efficient to not synthesise these constructions individually, but simultaneously in groups. The `PauliSimp` pass finds the description of a given circuit as a sequence of `PauliExpBox`es and resynthesises them (by default, in groups of commuting terms). This can cause great change in the overall structure and shape of the circuit, enabling the identification and elimination of non-local redundancy. ``` from pytket.passes import PauliSimp test_circuit = ucc(arg_values) print("CX count before", test_circuit.n_gates_of_type(OpType.CX)) print("CX depth before", test_circuit.depth_by_type(OpType.CX)) PauliSimp().apply(test_circuit) print("CX count after PS", test_circuit.n_gates_of_type(OpType.CX)) print("CX depth after PS", test_circuit.depth_by_type(OpType.CX)) FullPeepholeOptimise().apply(test_circuit) print("CX count after PS+FPO", test_circuit.n_gates_of_type(OpType.CX)) print("CX depth after PS+FPO", test_circuit.depth_by_type(OpType.CX)) ``` To include this into our routines, we can just add the simplification passes to the objective function. The `get_operator_expectation_value` utility handles compiling to meet the requirements of the backend, so we don't have to worry about that here. Objective function with circuit simplification: ``` def objective(params): circ = ucc(params) PauliSimp().apply(circ) FullPeepholeOptimise().apply(circ) return ( get_operator_expectation_value(circ, hamiltonian_op, backend, n_shots=4000) + nuclear_repulsion_energy ) ``` These circuit simplification techniques have tried to preserve the exact unitary of the circuit, but there are ways to change the unitary whilst preserving the correctness of the algorithm as a whole.<br> <br> For example, the excitation terms are generated by trotterisation of the excitation operator, and the order of the terms does not change the unitary in the limit of many trotter steps, so in this sense we are free to sequence the terms how we like and it is sensible to do this in a way that enables efficient synthesis of the circuit. Prioritising collecting terms into commuting sets is a very beneficial heuristic for this and can be performed using the `gen_term_sequence_circuit` method to group the terms together into collections of `PauliExpBox`es and the `GuidedPauliSimp` pass to utilise these sets for synthesis. ``` from pytket.passes import GuidedPauliSimp from pytket.utils import gen_term_sequence_circuit def ucc(params): singles_params = {qps: params[0] * coeff for qps, coeff in singles.items()} doubles_params = {qps: params[1] * coeff for qps, coeff in doubles.items()} excitation_op = QubitPauliOperator({**singles_params, **doubles_params}) reference_circ = Circuit(4).X(1).X(3) ansatz = gen_term_sequence_circuit(excitation_op, reference_circ) GuidedPauliSimp().apply(ansatz) FullPeepholeOptimise().apply(ansatz) return ansatz ``` Adding these simplification routines doesn't come for free. Compiling and simplifying the circuit to achieve the best results possible can be a difficult task, which can take some time for the classical computer to perform.<br> <br> During a VQE run, we will call this objective function many times and run many measurement circuits within each, but the circuits that are run on the quantum computer are almost identical, having the same gate structure but with different gate parameters and measurements. We have already exploited this within the body of the objective function by simplifying the ansatz circuit before we call `get_operator_expectation_value`, so it is only done once per objective calculation rather than once per measurement circuit.<br> <br> We can go even further by simplifying it once outside of the objective function, and then instantiating the simplified ansatz with the parameter values needed. For this, we will construct the UCC ansatz circuit using symbolic (parametric) gates. ``` from sympy import symbols ``` Symbolic UCC ansatz generation: ``` syms = symbols("p0 p1 p2") singles_a_syms = {qps: syms[0] * coeff for qps, coeff in singles_a.items()} singles_b_syms = {qps: syms[1] * coeff for qps, coeff in singles_b.items()} doubles_syms = {qps: syms[2] * coeff for qps, coeff in doubles.items()} excitation_op = QubitPauliOperator({**singles_a_syms, **singles_b_syms, **doubles_syms}) ucc_ref = Circuit(4).X(1).X(3) ucc = gen_term_sequence_circuit(excitation_op, ucc_ref) GuidedPauliSimp().apply(ucc) FullPeepholeOptimise().apply(ucc) ``` Objective function using the symbolic ansatz: ``` def objective(params): circ = ucc.copy() sym_map = dict(zip(syms, params)) circ.symbol_substitution(sym_map) return ( get_operator_expectation_value(circ, hamiltonian_op, backend, n_shots=4000) + nuclear_repulsion_energy ) ``` We have now got some very good use of `pytket` for simplifying each individual circuit used in our experiment and for minimising the amount of time spent compiling, but there is still more we can do in terms of reducing the amount of work the quantum computer has to do. Currently, each (non-trivial) term in our measurement hamiltonian is measured by a different circuit within each expectation value calculation. Measurement reduction techniques exist for identifying when these observables commute and hence can be simultaneously measured, reducing the number of circuits required for the full expectation value calculation.<br> <br> This is built in to the `get_operator_expectation_value` method and can be applied by specifying a way to partition the measuremrnt terms. `PauliPartitionStrat.CommutingSets` can greatly reduce the number of measurement circuits by combining any number of terms that mutually commute. However, this involves potentially adding an arbitrary Clifford circuit to change the basis of the measurements which can be costly on NISQ devices, so `PauliPartitionStrat.NonConflictingSets` trades off some of the reduction in circuit number to guarantee that only single-qubit gates are introduced. ``` from pytket.partition import PauliPartitionStrat ``` Objective function using measurement reduction: ``` def objective(params): circ = ucc.copy() sym_map = dict(zip(syms, params)) circ.symbol_substitution(sym_map) return ( get_operator_expectation_value( circ, operator, backend, n_shots=4000, partition_strat=PauliPartitionStrat.CommutingSets, ) + nuclear_repulsion_energy ) ``` At this point, we have completely transformed how our VQE objective function works, improving its resilience to noise, cutting the number of circuits run, and maintaining fast runtimes. In doing this, we have explored a number of the features `pytket` offers that are beneficial to VQE and the UCC method:<br> - high-level syntactic constructs for evolution operators;<br> - utility methods for easy expectation value calculations;<br> - both generic and domain-specific circuit simplification methods;<br> - symbolic circuit compilation;<br> - measurement reduction for expectation value calculations. For the sake of completeness, the following gives the full code for the final solution, including passing the objective function to a classical optimiser to find the ground state: ``` from openfermion import QubitOperator from scipy.optimize import minimize from sympy import symbols from pytket.extensions.qiskit import AerBackend from pytket.circuit import Circuit, Qubit from pytket.partition import PauliPartitionStrat from pytket.passes import GuidedPauliSimp, FullPeepholeOptimise from pytket.pauli import Pauli, QubitPauliString from pytket.utils import get_operator_expectation_value, gen_term_sequence_circuit from pytket.utils.operators import QubitPauliOperator ``` Obtain electronic Hamiltonian: ``` hamiltonian = ( -0.8153001706270075 * QubitOperator("") + 0.16988452027940318 * QubitOperator("Z0") + -0.21886306781219608 * QubitOperator("Z1") + 0.16988452027940323 * QubitOperator("Z2") + -0.2188630678121961 * QubitOperator("Z3") + 0.12005143072546047 * QubitOperator("Z0 Z1") + 0.16821198673715723 * QubitOperator("Z0 Z2") + 0.16549431486978672 * QubitOperator("Z0 Z3") + 0.16549431486978672 * QubitOperator("Z1 Z2") + 0.1739537877649417 * QubitOperator("Z1 Z3") + 0.12005143072546047 * QubitOperator("Z2 Z3") + 0.04544288414432624 * QubitOperator("X0 X1 X2 X3") + 0.04544288414432624 * QubitOperator("X0 X1 Y2 Y3") + 0.04544288414432624 * QubitOperator("Y0 Y1 X2 X3") + 0.04544288414432624 * QubitOperator("Y0 Y1 Y2 Y3") ) nuclear_repulsion_energy = 0.70556961456 hamiltonian_op = QubitPauliOperator.from_OpenFermion(hamiltonian) ``` Obtain terms for single and double excitations: ``` q = [Qubit(i) for i in range(4)] xyii = QubitPauliString([q[0], q[1]], [Pauli.X, Pauli.Y]) yxii = QubitPauliString([q[0], q[1]], [Pauli.Y, Pauli.X]) iixy = QubitPauliString([q[2], q[3]], [Pauli.X, Pauli.Y]) iiyx = QubitPauliString([q[2], q[3]], [Pauli.Y, Pauli.X]) xxxy = QubitPauliString(q, [Pauli.X, Pauli.X, Pauli.X, Pauli.Y]) xxyx = QubitPauliString(q, [Pauli.X, Pauli.X, Pauli.Y, Pauli.X]) xyxx = QubitPauliString(q, [Pauli.X, Pauli.Y, Pauli.X, Pauli.X]) yxxx = QubitPauliString(q, [Pauli.Y, Pauli.X, Pauli.X, Pauli.X]) yyyx = QubitPauliString(q, [Pauli.Y, Pauli.Y, Pauli.Y, Pauli.X]) yyxy = QubitPauliString(q, [Pauli.Y, Pauli.Y, Pauli.X, Pauli.Y]) yxyy = QubitPauliString(q, [Pauli.Y, Pauli.X, Pauli.Y, Pauli.Y]) xyyy = QubitPauliString(q, [Pauli.X, Pauli.Y, Pauli.Y, Pauli.Y]) ``` Symbolic UCC ansatz generation: ``` syms = symbols("p0 p1 p2") singles_syms = {xyii: syms[0], yxii: -syms[0], iixy: syms[1], iiyx: -syms[1]} doubles_syms = { xxxy: 0.25 * syms[2], xxyx: -0.25 * syms[2], xyxx: 0.25 * syms[2], yxxx: -0.25 * syms[2], yyyx: -0.25 * syms[2], yyxy: 0.25 * syms[2], yxyy: -0.25 * syms[2], xyyy: 0.25 * syms[2], } excitation_op = QubitPauliOperator({**singles_syms, **doubles_syms}) ucc_ref = Circuit(4).X(0).X(2) ucc = gen_term_sequence_circuit(excitation_op, ucc_ref) ``` Circuit simplification: ``` GuidedPauliSimp().apply(ucc) FullPeepholeOptimise().apply(ucc) ``` Connect to a simulator/device: ``` backend = AerBackend() ``` Objective function: ``` def objective(params): circ = ucc.copy() sym_map = dict(zip(syms, params)) circ.symbol_substitution(sym_map) return ( get_operator_expectation_value( circ, hamiltonian_op, backend, n_shots=4000, partition_strat=PauliPartitionStrat.CommutingSets, ) + nuclear_repulsion_energy ).real ``` Optimise against the objective function: ``` initial_params = [1e-4, 1e-4, 4e-1] result = minimize(objective, initial_params, method="Nelder-Mead") print("Final parameter values", result.x) print("Final energy value", result.fun) ``` Exercises:<br> - Replace the `get_operator_expectation_value` call with its implementation and use this to pull the analysis for measurement reduction outside of the objective function, so our circuits can be fully determined and compiled once. This means that the `symbol_substitution` method will need to be applied to each measurement circuit instead of just the state preparation circuit.<br> - Use the `SpamCorrecter` class to add some mitigation of the measurement errors. Start by running the characterisation circuits first, before your main VQE loop, then apply the mitigation to each of the circuits run within the objective function.<br> - Change the `backend` by passing in a `Qiskit` `NoiseModel` to simulate a noisy device. Compare the accuracy of the objective function both with and without the circuit simplification. Try running a classical optimiser over the objective function and compare the convergence rates with different noise models. If you have access to a QPU, try changing the `backend` to connect to that and compare the results to the simulator.
github_jupyter
![seQuencing logo](../images/sequencing-logo.svg) # Sequences In some cases, one may want to intersperse ideal unitary gates within a sequence of time-dependent operations. This is possible using an object called a [Sequence](../api/classes.rst#Sequence). A `Sequence` is essentially a list containing [PulseSequences](../api/classes.rst#PulseSequence), [Operations](../api/classes.rst#Operation), and unitary operators. When `Sequence.run(init_state)` is called, the `Sequence` iterates over its constituent `PulseSequences`, `Operations`, and unitaries, applying each to the resulting state of the last. `Sequence` is designed to behave like a Python [list](https://docs.python.org/3/tutorial/datastructures.html), so it has the following methods defined: - `append()` - `extend()` - `insert()` - `pop()` - `clear()` - `__len__()` - `__getitem__()` - `__iter__()` **Notes:** - Just like a `PulseSequence` or `CompiledPulseSequence`, a `Sequence` must be associated with a `System`. - Whereas `PulseSequence.run()` and `CompiledPulseSequence.run()` return an instance of `qutip.solver.Result`, `Sequence.run()` returns a [SequenceResult](../api/classes.rst#SequenceResult) object, which behaves just like `qutip.solver.Result`. `SequenceResult.states` stores the quantum `states` after each stage of the simulation (`states[0]` is `init_state` and `states[-1]` is the final state of the system). ``` %config InlineBackend.figure_formats = ['svg'] %matplotlib inline import matplotlib.pyplot as plt import numpy as np import qutip from sequencing import Transmon, Cavity, System, Sequence qubit = Transmon('qubit', levels=3, kerr=-200e-3) cavity = Cavity('cavity', levels=10, kerr=-10e-6) system = System('system', modes=[qubit, cavity]) system.set_cross_kerr(cavity, qubit, chi=-2e-3) qubit.gaussian_pulse.drag = 5 ``` ## Interleave pulses and unitaries Here we perform a "$\pi$-pulse" composed of $20$ interleaved $\frac{\pi}{40}$-pulses and unitary rotations. ``` init_state = system.ground_state() # calculate expectation value of |qubit=1, cavity=0> e_ops = [system.fock_dm(qubit=1)] n_rotations = 20 theta = np.pi / n_rotations seq = Sequence(system) for _ in range(n_rotations): # Capture a PulseSequence qubit.rotate_x(theta/2) # # Alternatively, we can append an Operation # operation = qubit.rotate_x(theta/2, capture=False) # seq.append(operation) # Append a unitary seq.append(qubit.Rx(theta/2)) result = seq.run(init_state, e_ops=e_ops, full_evolution=True, progress_bar=True) states = result.states ``` ### Inspect the sequence `Sequence.plot_coefficients()` plots Hamiltonian coefficients vs. time. Instantaneous unitary operations are represented by dashed vertical lines. If multiple unitaries occur at the same time, only a single dashed line is drawn. ``` fig, ax = seq.plot_coefficients(subplots=False) ax.set_xlabel('Time [ns]') ax.set_ylabel('Hamiltonian coefficient [GHz]') fig.set_size_inches(8,4) fig.tight_layout() fig.subplots_adjust(top=0.9) print('len(states):', len(states)) print(f'state fidelity: {qutip.fidelity(states[-1], qubit.Rx(np.pi) * init_state)**2:.4f}') ``` ### Plot the results ``` e_pops = result.expect[0] # probability of measuring the state |qubit=1, cavity=0> fig, ax = plt.subplots(figsize=(8,4)) ax.plot(result.times, e_pops, '.') ax.scatter(result.times[:1], e_pops[:1], marker='s', color='k', label='init_state') # draw vertical lines at the location of each unitary rotation for i in range(1, result.times.size // (2*n_rotations) + 1): t = 2 * n_rotations * i - 1 label = 'unitaries' if i == 1 else None ax.axvline(t, color='k', alpha=0.25, ls='--', lw=1.5, label=label) ax.axhline(0, color='k', lw=1) ax.axhline(1, color='k', lw=1) ax.set_ylabel('$P(|e\\rangle)$') ax.set_xlabel('Times [ns]') ax.set_title('Interleaved pulses and unitaries') ax.legend(loc=0); print(result) from qutip.ipynbtools import version_table version_table() ```
github_jupyter
``` import pandas as pd import numpy as np import os from collections import Counter from tqdm import tqdm ``` # Data Analysis ``` data = pd.read_csv("test.csv",engine = 'python') data.tail() colum = data.columns for i in colum: print(f'{len(set(data[i]))} different values in the {i} column') print(f"\ntotal number of examples {len(data)}") #Host, link, Time(ET), Time(GMT),is of no use for trainig the function data = data.drop(["Host", "Link", "Date(ET)", "Time(ET)", "time(GMT)"], axis=1) colum = data.columns for i in colum: print(f'{len(set(data[i]))} different values in the {i} column') print(f"\ntotal number of examples {len(data)}") list(set(data["Source"])) # differnet values in "Source" column # repalcing FACEBOOK to Facebook data.replace(to_replace='FACEBOOK', value='Facebook',inplace=True) # Now there are only 4 different values in "Source" column Counter(data.loc[:,"Source"]) # # distribution of differnet values in "Source" column Counter(data.iloc[:,[-1]]['Patient_Tag']) # distribution of labels in the "Patien_Tag" column # It's an unbalanced data dummy = {} for i in list(set(data["Source"])): print(i,"---", Counter(data.iloc[:,[0,-1]][data['Source'] == i]['Patient_Tag'])) # distribution of labels with reference to each values in "Source" column replace_ = {} for index, i in enumerate(list(set(data["Source"])),start=1): replace_[index] = i data.replace(to_replace=i, value=index,inplace=True) data.fillna('UNK',inplace=True) list(set(data["Source"])) data.fillna('UNK',inplace=True) ``` # Vocab creation ``` import re rep_with = ['.', '?', '/', '\n', '(', ')','[', ']', '{', '}', '-','"','!', '|' ] def rep_(sent): for i in rep_with: sent = sent.replace(i,' ').replace('$', ' ').replace(',','').replace("'",'') return sent import re import num2words def n2w(text): return re.sub(r"(\d+)", lambda x: num2words.num2words(int(x.group(0))), text) def preprocess(data,pos): sent = [] for i in range(len(data)): try:sent.append(n2w(rep_(data.iloc[i,pos]))) except:print(data.iloc[i,pos]) return sent sent = preprocess(data, 2) import nltk from nltk.corpus import stopwords from nltk.tokenize import word_tokenize stop_words = set(stopwords.words('english')) sent_preprocess = [] for i in sent: sent_preprocess.append([w for w in word_tokenize(i) if not w in stop_words]) sent = sent_preprocess # sent = [i.split(' ') for i in sent] train = [[sent[i], data.loc[:,'Patient_Tag'][i]] for i in range(len(sent))] train = sorted(train, key = lambda c: len(c[0])) # sorted train using the len of x {helps in minibatching} train.pop(0) train_len = [] # len of each example for i in train: train_len.append(len(i[0])) import pickle try: with open("elmo_st.pkl", "rb") as f: print("loading embeddings") embeddings = pickle.load(f) except: print("creating embeddings ...it will take time") from allennlp.commands.elmo import ElmoEmbedder elmo = ElmoEmbedder( options_file='https://s3-us-west-2.amazonaws.com/allennlp/models/elmo/2x4096_512_2048cnn_2xhighway_5.5B/elmo_2x4096_512_2048cnn_2xhighway_5.5B_options.json', weight_file='https://s3-us-west-2.amazonaws.com/allennlp/models/elmo/2x4096_512_2048cnn_2xhighway_5.5B/elmo_2x4096_512_2048cnn_2xhighway_5.5B_weights.hdf5', cuda_device=1) #define max token length, 2187 is the max max_tokens=1024 #input sentences sentences = [] # x = [i[0] for i in train] for i in range(0,len(sent),16): sentences.append(sent[i:i+16]) #create a pretrained elmo model (requires internet connection) embeddings=[] #loop through the input sentences for k,j in enumerate(sentences): print(k) for i, elmo_embedding in enumerate(elmo.embed_sentences(j)): # Average the 3 layers returned from Elmo avg_elmo_embedding = np.average(elmo_embedding, axis=0) padding_length = max_tokens - avg_elmo_embedding.shape[0] if(padding_length>0): avg_elmo_embedding =np.append(avg_elmo_embedding, np.zeros((padding_length, avg_elmo_embedding.shape[1])), axis=0) else: avg_elmo_embedding=avg_elmo_embedding[:max_tokens] embeddings.append(avg_elmo_embedding) with open("test_elmo.pkl", "wb") as f: pickle.dump(embeddings,f) len(embeddings) ``` # Training ``` import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim device = torch.device("cuda:0") class EncoderRNN(nn.Module): def __init__(self, hidden_size, num_layers,directions,bidirectonal,out): super(EncoderRNN, self).__init__() self.hidden_size = hidden_size self.num_layers = num_layers self.directions = directions self.gru = nn.GRU(1024, hidden_size, num_layers,bidirectional=bidirectonal) self.linear1 = nn.Linear(hidden_size*directions, 32) self.linear2 = nn.Linear(32, out, bias = True) self.drop = nn.Dropout(p=0.7, inplace=False) self.sigmoid = nn.Sigmoid() def forward(self, inp, hidden): out,h = self.gru(inp,hidden) out = self.linear1(out)#.view(-1,2,out.shape[2])) out = self.linear2(self.drop(out)) return self.sigmoid(out) def init_hidden(self, batch_size): print() return torch.zeros(self.num_layers*self.directions, batch_size, self.hidden_size, dtype=torch.double) model = EncoderRNN(hidden_size=256,num_layers=1,directions=1,bidirectonal=False,out=1) model.to(device).double() y = [i[1] for i in train] import pickle with open("out.pkl", "wb") as f: pickle.dump(y,f) loss_function = nn.BCELoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.01) batch = 64 for epoch in range(5): running_loss = 0 for i in range(0,len(embeddings),batch): h = model.init_hidden(len(embeddings[i:i+batch])).to(device) target = torch.tensor(y[i:i+batch],dtype=torch.double).to(device) inp = torch.tensor(embeddings[i:i+batch],dtype=torch.double).view(1830,len(embeddings[i:i+batch]),1024).to(device) out = model(inp,h) loss = loss_function(out[-1,:,:],target) nn.utils.clip_grad_norm_(model.parameters(), 50) running_loss+=loss.item() print(loss.item()) loss.backward() optimizer.step() print("loss", running_loss/len(embeddings)) ```
github_jupyter
# recreating the paper with tiny imagenet First we're going to take a stab at the most basic version of DeViSE: learning a mapping between image feature vectors and their corresponding labels' word vectors for imagenet classes. Doing this with the entirety of imagenet feels like overkill, so we'll start with tiny imagenet. ## tiny imagenet Tiny imagenet is a subset of imagenet which has been preprocessed for the stanford computer vision course CS231N. It's freely available to download and ideal for putting together quick and easy tests and proof-of-concept work in computer vision. From [their website](https://tiny-imagenet.herokuapp.com/): > Tiny Imagenet has 200 classes. Each class has 500 training images, 50 validation images, and 50 test images. Images are also resized to 64x64px, making the whole dataset small and fast to load. We'll use it to demo the DeViSE idea here. Lets load in a few of the packages we'll use in the project - plotting libraries, numpy, pandas etc, and pytorch, which we'll use to construct our deep learning models. ``` %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns sns.set_style("white") plt.rcParams["figure.figsize"] = (20, 20) import os import io import numpy as np import pandas as pd from PIL import Image from scipy.spatial.distance import cdist import torch from torch import nn, optim from torch.utils.data import Dataset, DataLoader from torchvision import models, transforms from tqdm._tqdm_notebook import tqdm_notebook as tqdm device = torch.device("cuda" if torch.cuda.is_available() else "cpu") base_path = "/mnt/efs/images/tiny-imagenet-200/" ``` # wordvectors We're going to use the [fasttext](https://fasttext.cc/docs/en/english-vectors.html) word vectors trained on [common crawl](http://commoncrawl.org) as the target word vectors throughout this work. Let's load them into memory ``` wv_path = "/mnt/efs/nlp/word_vectors/fasttext/crawl-300d-2M.vec" wv_file = io.open(wv_path, "r", encoding="utf-8", newline="\n", errors="ignore") fasttext = { line.split()[0]: np.array(line.split()[1:]).astype(np.float) for line in tqdm(list(wv_file)) } vocabulary = set(fasttext.keys()) ``` # wordnet We're also going to need to load the wordnet classes and ids from tiny-imagenet ``` clean = lambda x: x.lower().strip().replace(" ", "-").split(",-") with open(base_path + "wnids.txt") as f: wnids = np.array([id.strip() for id in f.readlines()]) wordnet = {} with open(base_path + "words.txt") as f: for line in f.readlines(): wnid, raw_words = line.split("\t") words = [word for word in clean(raw_words) if word in vocabulary] if wnid in wnids and len(words) > 0: wordnet[wnid] = words wnid_to_wordvector = { wnid: (np.array([fasttext[word] for word in words]).mean(axis=0)) for wnid, words in wordnet.items() } wnids = list(wnid_to_wordvector.keys()) ``` # example data here's an example of what we've got inside tiny-imagenet: one tiny image and its corresponding class ``` wnid = np.random.choice(wnids) image_path = base_path + "train/" + wnid + "/images/" + wnid + "_{}.JPEG" print(" ".join(wordnet[wnid])) Image.open(image_path.format(np.random.choice(500))) ``` # datasets and dataloaders Pytorch allows you to explicitly write out how batches of data are assembled and fed to a network. Especially when dealing with images, I've found it's best to use a pandas dataframe of simple paths and pointers as the base structure for assembling data. Instead of loading all of the images and corresponding word vectors into memory at once, we can just store the paths to the images with their wordnet ids. Using pandas also gives us the opportunity to do all sorts of work to the structure of the data without having to use much memory. Here's how that dataframe is put together: ``` df = {} for wnid in wnids: wnid_path = base_path + "train/" + wnid + "/images/" image_paths = [wnid_path + file_name for file_name in os.listdir(wnid_path)] for path in image_paths: df[path] = wnid df = pd.Series(df).to_frame().reset_index() df.columns = ["path", "wnid"] ``` Pandas is great for working with this kind of structured data - we can quickly shuffle the dataframe: ``` df = df.sample(frac=1).reset_index(drop=True) ``` and split it into 80:20 train:test portions. ``` split_ratio = 0.8 train_size = int(split_ratio * len(df)) train_df = df.loc[:train_size] test_df = df.loc[train_size:] ``` n.b. tiny-imagenet already has `train/`, `test/`, and `val/` directories set up which we could have used here instead. However, we're just illustrating the principle in this notebook so the data itself isn't important, and we'll use this kind of split later on when incorporating non-toy data. Now we can define how our `Dataset` object will transform the initial, simple data when it's called on to produce a batch. Images are generated by giving a path to `PIL`, and word vectors are looked up in our `wnid_to_wordvector` dictionary. Both objects are then transformed into pytorch tensors and handed over to the network. ``` class ImageDataset(Dataset): def __init__(self, dataframe, wnid_to_wordvector, transform=transforms.ToTensor()): self.image_paths = dataframe["path"].values self.wnids = dataframe["wnid"].values self.wnid_to_wordvector = wnid_to_wordvector self.transform = transform def __getitem__(self, index): image = Image.open(self.image_paths[index]).convert("RGB") if self.transform is not None: image = self.transform(image) target = torch.Tensor(wnid_to_wordvector[self.wnids[index]]) return image, target def __len__(self): return len(self.wnids) ``` We can also apply transformations to the images as they move through the pipeline (see the `if` statement above in `__getitem__()`). The torchvision package provides lots of fast, intuitive utilities for this kind of thing which can be strung together as follows. Note that we're not applying any flips or grayscale to the test dataset - the test data should generally be left as raw as possible, with distortions applied at train time to increase the generality of the network's knowledge. ``` train_transform = transforms.Compose( [ transforms.Resize(224), transforms.RandomHorizontalFlip(), transforms.RandomRotation(15), transforms.RandomGrayscale(0.25), transforms.ToTensor(), ] ) test_transform = transforms.Compose([transforms.Resize(224), transforms.ToTensor()]) ``` Now all we need to do is pass our dataframe, dictionary of word vectors, and the desired image transforms to the `ImageDataset` object to define our data pipeline for training and testing. ``` train_dataset = ImageDataset(train_df, wnid_to_wordvector, train_transform) test_dataset = ImageDataset(test_df, wnid_to_wordvector, test_transform) ``` Pytorch then requires that you pass the `Dataset` through a `DataLoader` to handle the batching etc. The `DataLoader` manages the pace and order of the work, while the `Dataset` does the work itself. The structure of these things is very predictable, and we don't have to write anything custom at this point. ``` batch_size = 128 train_loader = DataLoader( dataset=train_dataset, batch_size=batch_size, num_workers=5, shuffle=True ) test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, num_workers=5) ``` # building the model Our model uses a pre-trained backbone to extract feature vectors from the images. This biases our network to perform well on imagenet-style images and worse on others, but hey, we're searching on imagenet in this example! Later on, when working in some less imagenet-y images, we'll make some attempts to compensate for the backbone's biases. ``` backbone = models.vgg16_bn(pretrained=True).features ``` We don't want this backbone to be trainable, so we switch off the gradients for its weight and bias tensors. ``` for param in backbone.parameters(): param.requires_grad = False ``` Now we can put together the DeViSE network itself, which embeds image features into word vector space. The output of our backbone network is a $[512 \times 7 \times 7]$ tensor, which we then flatten into a 25088 dimensional vector. That vector is then fed through a few fully connected layers and ReLUs, while compressing the dimensionality down to our target size (300, to match the fasttext word vectors). ``` class DeViSE(nn.Module): def __init__(self, backbone, target_size=300): super(DeViSE, self).__init__() self.backbone = backbone self.head = nn.Sequential( nn.Linear(in_features=(25088), out_features=target_size * 2), nn.ReLU(inplace=True), nn.Dropout(), nn.Linear(in_features=target_size * 2, out_features=target_size), nn.ReLU(inplace=True), nn.Dropout(), nn.Linear(in_features=target_size, out_features=target_size), ) def forward(self, x): x = self.backbone(x) x = x.view(x.size(0), -1) x = self.head(x) x = x / x.max() return x devise_model = DeViSE(backbone, target_size=300).to(device) ``` # train loop Pytorch requires that we write our own training loops - this is rough skeleton structure that I've got used to. For each batch, the inputs and target tensors are first passed to the GPU. The inputs are then passed through the network to generate a set of predictions, which are compared to the target using some appropriate loss function. Those losses are used to inform the backpropagation of tweaks to the network's weights and biases, before repeating the whole process with a new batch. We also display the network's current loss through in the progress bar which tracks the speed and progress of the training. We can also specify the number of epochs in the parameters for the train function. ``` losses = [] flags = torch.ones(batch_size).cuda() def train(model, train_loader, loss_function, optimiser, n_epochs): for epoch in range(n_epochs): model.train() loop = tqdm(train_loader) for images, targets in loop: images = images.cuda(non_blocking=True) targets = targets.cuda(non_blocking=True) optimiser.zero_grad() predictions = model(images) loss = loss_function(predictions, targets, flags) loss.backward() optimiser.step() loop.set_description("Epoch {}/{}".format(epoch + 1, n_epochs)) loop.set_postfix(loss=loss.item()) losses.append(loss.item()) ``` Here we define the optimiser, loss function and learning rate which we'll use. ``` trainable_parameters = filter(lambda p: p.requires_grad, devise_model.parameters()) loss_function = nn.CosineEmbeddingLoss() optimiser = optim.Adam(trainable_parameters, lr=0.001) ``` Let's do some training! ``` train( model=devise_model, n_epochs=3, train_loader=train_loader, loss_function=loss_function, optimiser=optimiser, ) ``` When that's done, we can take a look at how the losses are doing. ``` loss_data = pd.Series(losses).rolling(window=15).mean() ax = loss_data.plot() ax.set_xlim( 0, ) ax.set_ylim(0, 1); ``` # evaluate on test set The loop below is very similar to the training one above, but evaluates the network's loss against the test set and stores the predictions. Obviously we're only going to loop over the dataset once here as we're not training anything. The network only has to see an image once to process it. ``` preds = [] test_loss = [] flags = torch.ones(batch_size).cuda() devise_model.eval() with torch.no_grad(): test_loop = tqdm(test_loader) for images, targets in test_loop: images = images.cuda(non_blocking=True) targets = targets.cuda(non_blocking=True) predictions = devise_model(images) loss = loss_function(predictions, targets, flags) preds.append(predictions.cpu().data.numpy()) test_loss.append(loss.item()) test_loop.set_description("Test set") test_loop.set_postfix(loss=np.mean(test_loss[-5:])) preds = np.concatenate(preds).reshape(-1, 300) np.mean(test_loss) ``` # run a search on the predictions Now we're ready to use our network to perform image searches! Each of the test set's images has been assigned a position in word vector space which the network believes is a reasonable numeric description of its features. We can use the complete fasttext dictionary to find the position of new, unseen words, and then return the nearest images to our query. ``` def search(query, n=5): image_paths = test_df["path"].values distances = cdist(fasttext[query].reshape(1, -1), preds) closest_n_paths = image_paths[np.argsort(distances)].squeeze()[:n] close_images = [ np.array(Image.open(image_path).convert("RGB")) for image_path in closest_n_paths ] return Image.fromarray(np.concatenate(close_images, axis=1)) search("bridge") ``` It works! The network has never seen the word 'bridge', has never been told what a bridge might look like, and has never seen any of the test set's images, but thanks to the combined subtlety of the word vector space which we're embedding our images in and the dexterity with which a neural network can manipulate manifolds like these, the machine has enough knowledge to make a very good guess at what a bridge might be. This has been trained on a tiny, terribly grainy set of data but it's enough to get startlingly good results.
github_jupyter
# Тест. Практика проверки гипотез По данным опроса, 75% работников ресторанов утверждают, что испытывают на работе существенный стресс, оказывающий негативное влияние на их личную жизнь. Крупная ресторанная сеть опрашивает 100 своих работников, чтобы выяснить, отличается ли уровень стресса работников в их ресторанах от среднего. 67 из 100 работников отметили высокий уровень стресса. Посчитайте достигаемый уровень значимости, округлите ответ до четырёх знаков после десятичной точки. ``` from __future__ import division import numpy as np import pandas as pd from scipy import stats import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" n = 100 prob = 0.75 F_H0 = stats.binom(n, prob) x = np.linspace(0,100,101) plt.bar(x, F_H0.pmf(x), align = 'center') plt.xlim(60, 90) plt.show() print('p-value: %.4f' % stats.binom_test(67, 100, prob)) ``` <b> Представим теперь, что в другой ресторанной сети только 22 из 50 работников испытывают существенный стресс. Гипотеза о том, что 22/50 соответствует 75% по всей популяции, методом, который вы использовали в предыдущей задаче, отвергается. Чем это может объясняться? Выберите все возможные варианты. ``` print('p-value: %.10f' % stats.binom_test(22, 50, prob)) ``` <b> The Wage Tract — заповедник в округе Тома, Джорджия, США, деревья в котором не затронуты деятельностью человека со времён первых поселенцев. Для участка заповедника размером 200х200 м имеется информация о координатах сосен (sn — координата в направлении север-юг, we — в направлении запад-восток, обе от 0 до 200). pines.txt Проверим, можно ли пространственное распределение сосен считать равномерным, или они растут кластерами. Загрузите данные, поделите участок на 5х5 одинаковых квадратов размера 40x40 м, посчитайте количество сосен в каждом квадрате (чтобы получить такой же результат, как у нас, используйте функцию scipy.stats.binned_statistic_2d). Если сосны действительно растут равномерно, какое среднее ожидаемое количество сосен в каждом квадрате? В правильном ответе два знака после десятичной точки. ``` pines_data = pd.read_table('pines.txt') pines_data.describe() pines_data.head() sns.pairplot(pines_data, size=4); sn_num, we_num = 5, 5 trees_bins = stats.binned_statistic_2d(pines_data.sn, pines_data.we, None, statistic='count', bins=[sn_num, we_num]) trees_squares_num = trees_bins.statistic trees_squares_num trees_bins.x_edge trees_bins.y_edge mean_trees_num = np.sum(trees_squares_num) / 25 print(mean_trees_num) ``` <b> Чтобы сравнить распределение сосен с равномерным, посчитайте значение статистики хи-квадрат для полученных 5х5 квадратов. Округлите ответ до двух знаков после десятичной точки. ``` stats.chisquare(trees_squares_num.flatten(), ddof = 0) ```
github_jupyter
``` import os md_dir = os.path.join(os.getcwd(), 'mds') md_filenames = [os.path.join(os.getcwd(), 'mds', filename) for filename in os.listdir(md_dir)] print(md_filenames) from collections import namedtuple LineStat = namedtuple('LineStat', 'source cleaned line_num total_lines_in_text is_header') lines = [] for f in md_filenames: with open(f, 'rt', encoding='utf-8') as f: file_lines = f.read().splitlines() for i, line in enumerate(file_lines): source = line.strip() if source=='': continue cleaned = source is_header = False if line.startswith('# ') or line.startswith('## ') or line.startswith('### '): splitted_header = source.split(' ', 1) if len(splitted_header)<2: continue cleaned = splitted_header[1] cleaned = cleaned.strip() is_header = True line_stat = LineStat(source=source, cleaned=cleaned, line_num=i, total_lines_in_text=len(file_lines), is_header=is_header) lines.append(line_stat) print(len(lines)) print('\n'.join([str(l) for l in lines[:10]])) print(sum(l.source.startswith('### ') for l in lines)) print(sum(l.is_header for l in lines)) # divide to learn and test set from random import shuffle shuffle(lines) threshold = int(len(lines) * 0.7) lines_learn = lines[:threshold] lines_test = lines[threshold:] print(f"Total lines: {len(lines)}, learn set: {len(lines_learn)}, lines_test: {len(lines_test)}") import csv csv.field_size_limit(1000000) def save_as_csv(filename, data): with open(filename, 'wt', encoding='utf-8', newline='') as f: fieldnames = LineStat._fields writer = csv.DictWriter(f, fieldnames=fieldnames) writer.writeheader() writer.writerows([r._asdict() for r in data]) print(LineStat._fields) save_as_csv('learn.csv', lines_learn) save_as_csv('test.csv', lines_test) def naive_is_header(line: LineStat): return False def short_is_header(line: LineStat): return True if len(line.cleaned)<32 else False def check_measures(classifier): tp, tn, fp, fn = 0, 0, 0, 0 for l in lines: l_wo_res = LineStat(source='', cleaned=l.cleaned, line_num=l.line_num, is_header=False) res = classifier(l_wo_res) if res: if l.is_header: tp += 1 else: fp += 1 print(l.source) else: if l.is_header: fn += 1 else: tn += 1 print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}") prec = tp / (tp+fp+1e-6) recall = tp / (tp+fn+1e-6) f1 = 2 * prec * recall /(prec + recall+1e-6) print(f"Precision={prec}, recall={recall}, F1={f1}") check_measures(naive_is_header) check_measures(short_is_header) bad_chars = set('#=./<>|(){}:[];') def rules_classifier(line: LineStat): text: str = line.cleaned numbers = sum(c.isdigit() for c in text) if numbers*2>len(text): return False if not text[0].isalnum(): return False if text[0].isalnum() and text[0].islower(): return False if any((c in bad_chars) for c in text): return False if len(text)>120: return False if line.line_num<2: return True if len(text)<32: return True if text.istitle(): return True return False check_measures(rules_classifier) ```
github_jupyter
Internet Resources: [Python Programming.net - machine learning episodes 39-42](https://pythonprogramming.net/hierarchical-clustering-mean-shift-machine-learning-tutorial/) ``` import matplotlib.pyplot as plt from matplotlib import style style.use('ggplot') import numpy as np from sklearn.datasets import make_blobs X, y = make_blobs(n_samples=25, centers=3, n_features=2, random_state=42) ##X = np.array([[1, 2], ## [1.5, 1.8], ## [5, 8], ## [8, 8], ## [1, 0.6], ## [9, 11], ## [8, 2], ## [10, 2], ## [9, 3]]) ##plt.scatter(X[:, 0],X[:, 1], marker = "x", s=150, linewidths = 5, zorder = 10) ##plt.show() #X = np.array([[-5, -4], [4,5], [3,-2], [-2,1]]) colors = 10*["g","r","c","b","k"] plt.scatter(X[:,0], X[:,1], s=50) plt.show() ``` The goal is to find clusters in the dataset. The first implementation of this algorithm still requires the user to input a radius parameter. **Training algorithm for mean shift with fixed bandwitdth** - 1. at start every data point is a centroid Repeat until optimized: for every centroid: - 2. for every data point: calculate distance to centroid - 3. new centroid = mean of all data points where distance of centroid and data point < radius **Prediction** - query is classified as class of nearest centroid ![](notes_screenshots/MeanShift1.png) ``` # mean shift without dynamic bandwidth class Mean_Shift_With_Fixed_Bandwidth: def __init__(self, radius): self.radius = radius def fit(self, data): centroids = {} # 1. every data point is initialized as centroid for i in range(len(data)): centroids[i] = data[i] # Repeat until optimized while True: new_centroids = [] # for every centroid for i in centroids: # i is centroid index in_bandwidth = [] # list of all data points that are within a proximity of self.radius of centroid centroid = centroids[i] # 2. for every data point: calculate distance to centroid for featureset in data: if np.linalg.norm(featureset-centroid) < self.radius: in_bandwidth.append(featureset) # 3. new centroid = mean of all data points where distance of centroid and data point < self.radius new_centroid = np.average(in_bandwidth,axis=0) new_centroids.append(tuple(new_centroid)) # casts nparray to tuple # get rid of any duplicate centroids uniques = sorted(list(set(new_centroids))) # need previous centroids to check if optimized prev_centroids = dict(centroids) # set new centroids (=uniques) as current centroids centroids = {i:np.array(uniques[i]) for i in range(len(uniques))} # is optimized if centroids are not moving anymore optimized = True for i in centroids: if not np.array_equal(centroids[i], prev_centroids[i]): optimized = False if not optimized: break if optimized: break self.centroids = centroids mean_shift = Mean_Shift_With_Fixed_Bandwidth(radius=3) mean_shift.fit(X) mean_shift_centroids = mean_shift.centroids plt.scatter(X[:,0], X[:,1], s=150) for c in mean_shift_centroids: plt.scatter(mean_shift_centroids[c][0], mean_shift_centroids[c][1], color='k', marker='*', s=150) plt.show() ``` Mean shift with dynamic bandwidth differs from the implementation with fixed bandwidth in that it no longer requires a set bandwidth from the user. Instead it estimates a fitting radius. Additionally when calculating new means, each data point is weighted depending how near or far its away from the current mean. **Training algorithm for mean shift with dynamic bandwitdth** - 1. at start every data point is a centroid - 2. estimate radius: radius = mean(distance of every datapoint to mean of entire dataset) Repeat until optimized: for every centroid: - 3. for every data point: calculate distance to centroid and assign weight to it. - 4. calculate new centroid: new centroid = mean of all weighted data points within radius **Prediction:** - query is classified as class of nearest centroid ![](notes_screenshots/MeanShift2.png) ``` class Mean_Shift_With_Dynamic_Bandwith: def __init__(self, radius_norm_step = 100): self.radius_norm_step = radius_norm_step # controls how many discrete wights are used def fit(self,data,max_iter=1000, weight_fkt=lambda x : x**2): # weight_fkt: how to weight distances # 1. every data point is a centroid self.centroids = {i:data[i] for i in range(len(data))} # 2. radius calculation mean_of_entire_data = np.average(data,axis=0) # all_distances= list of distances for ever data point to mean of entire date all_distances = [np.linalg.norm(datapoint-mean_of_entire_data) for datapoint in data] self.radius = np.mean(all_distances) print("radius:", self.radius) # list of discrete weights: let n = self.radius_norm_step -> weights = [n-1,n-2,n-3,...,2,1] weights = [i for i in range(self.radius_norm_step)][::-1] # [::-1] inverts list # do until convergence for count in range(max_iter): new_centroids = [] # for each centroid for centroid_class in self.centroids: centroid = self.centroids[centroid_class] # 3. weigh data points new_centroid_weights = [] for data_point in data: weight_index = int(np.linalg.norm(data_point - centroid)/self.radius * self.radius_norm_step) new_centroid_weights.append(weight_fkt(weights[min(weight_index, self.radius_norm_step)-1])) # calculate new centroid # w: weight, x: sample new_centroid = np.sum([ w*x for w, x in zip(new_centroid_weights, data)], axis=0) / np.sum(new_centroid_weights) new_centroids.append(tuple(new_centroid)) uniques = sorted(list(set(new_centroids))) #remove non uniques for i in uniques: for ii in [i for i in uniques]: # centroid is near enough to another centroid to merge if not i == ii and np.linalg.norm(np.array(i)-np.array(ii)) <= self.radius/self.radius_norm_step: uniques.remove(ii) prev_centroids = dict(self.centroids) self.centroids = {} for i in range(len(uniques)): self.centroids[i] = np.array(uniques[i]) # check if optimized optimized = True for i in self.centroids: if not np.array_equal(self.centroids[i], prev_centroids[i]): optimized = False if optimized: print("Converged @ iteration ", count) break # classify training data self.classifications = {i:[] for i in range(len(self.centroids))} for featureset in data: #compare distance to either centroid distances = [np.linalg.norm(featureset-self.centroids[centroid]) for centroid in self.centroids] classification = (distances.index(min(distances))) # featureset that belongs to that cluster self.classifications[classification].append(featureset) def predict(self,data): #compare distance to either centroid distances = [np.linalg.norm(data-self.centroids[centroid]) for centroid in self.centroids] classification = (distances.index(min(distances))) return classification clf = Mean_Shift_With_Dynamic_Bandwith() clf.fit(X) centroids = clf.centroids print(centroids) colors = 10*['r','g','b','c','k','y'] for classification in clf.classifications: color = colors[classification] for featureset in clf.classifications[classification]: plt.scatter(featureset[0],featureset[1], marker = "x", color=color, s=150, linewidths = 5, zorder = 10) for c in centroids: plt.scatter(centroids[c][0],centroids[c][1], color='k', marker = "*", s=150, linewidths = 5) plt.show() ```
github_jupyter
#Plotting Velocities and Tracers on Vertical Planes This notebook contains discussion, examples, and best practices for plotting velocity field and tracer results from NEMO on vertical planes. Topics include: * Plotting colour meshes of velocity on vertical sections through the domain * Using `nc_tools.timestamp()` to get time stamps from results datasets * Plotting salinity as a colour mesh on thalweg section We'll start with the usual imports, and activation of the Matplotlib inline backend: ``` from __future__ import division, print_function import matplotlib.pyplot as plt import netCDF4 as nc import numpy as np from salishsea_tools import ( nc_tools, viz_tools, ) %matplotlib inline ``` Let's look at the results from the 17-Dec-2003 to 26-Dec-2003 spin-up run. We'll also load the bathymetry so that we can plot land masks. ``` u_vel = nc.Dataset('/ocean/dlatorne/MEOPAR/SalishSea/results/spin-up/17dec26dec/SalishSea_1d_20031217_20031226_grid_U.nc') v_vel = nc.Dataset('/ocean/dlatorne/MEOPAR/SalishSea/results/spin-up/17dec26dec/SalishSea_1d_20031217_20031226_grid_V.nc') ugrid = u_vel.variables['vozocrtx'] vgrid = v_vel.variables['vomecrty'] zlevels = v_vel.variables['depthv'] timesteps = v_vel.variables['time_counter'] grid = nc.Dataset('/data/dlatorne/MEOPAR/NEMO-forcing/grid/bathy_meter_SalishSea2.nc') ``` ##Velocity Component Colour Mesh on a Vertical Plane There's really not much new involved in plotting on vertical planes compared to the horizontal plane plots that we've done in the previous notebooks. Here's are plots of the v velocity component crossing a vertical plane defined by a section line running from just north of Howe Sound to a little north of Nanaimo, and the surface current streamlines in the area with the section line shown for orientation. Things to note: * The use of the `invert_yaxis()` method on the vertical plane y-axis to make the depth scale go from 0 at the surface to positive depths below, and the resulting reversal of the limit values passed to the `set_ylim()` method * The use of the `set_axis_bgcolor()` method to make the extension of the axis area below the maximum depth appear consistent with the rest of the non-water regions ``` fig, (axl, axr) = plt.subplots(1, 2, figsize=(16, 8)) land_colour = 'burlywood' # Define the v velocity component slice to plot t, zmax, ylocn = -1, 41, 500 section_slice = np.arange(208, 293) timestamp = nc_tools.timestamp(v_vel, t) # Slice and mask the v array vgrid_tzyx = np.ma.masked_values(vgrid[t, :zmax, ylocn, section_slice], 0) # Plot the v velocity colour mesh cmap = plt.get_cmap('bwr') cmap.set_bad(land_colour) mesh = axl.pcolormesh( section_slice[:], zlevels[:zmax], vgrid_tzyx, cmap=cmap, vmin=-0.1, vmax=0.1, ) axl.invert_yaxis() cbar = fig.colorbar(mesh, ax=axl) cbar.set_label('v Velocity [{.units}]'.format(vgrid)) # Axes labels and title axl.set_xlabel('x Index') axl.set_ylabel('{0.long_name} [{0.units}]'.format(zlevels)) axl.set_title( '24h Average v Velocity at y={y} on {date}' .format(y=ylocn, date=timestamp.format('DD-MMM-YYYY'))) # Axes limits and grid axl.set_xlim(section_slice[1], section_slice[-1]) axl.set_ylim(zlevels[zmax - 2] + 10, 0) axl.set_axis_bgcolor(land_colour) axl.grid() # Define surface current magnitude slice x_slice = np.arange(150, 350) y_slice = np.arange(425, 575) # Slice and mask the u and v arrays ugrid_tzyx = np.ma.masked_values(ugrid[t, 0, y_slice, x_slice], 0) vgrid_tzyx = np.ma.masked_values(vgrid[t, 0, y_slice, x_slice], 0) # "Unstagger" the velocity values by interpolating them to the T-grid points # and calculate the surface current speeds u_tzyx, v_tzyx = viz_tools.unstagger(ugrid_tzyx, vgrid_tzyx) speeds = np.sqrt(np.square(u_tzyx) + np.square(v_tzyx)) max_speed = viz_tools.calc_abs_max(speeds) # Plot section line on surface streamlines map viz_tools.set_aspect(axr) axr.streamplot( x_slice[1:], y_slice[1:], u_tzyx, v_tzyx, linewidth=7*speeds/max_speed, ) viz_tools.plot_land_mask( axr, grid, xslice=x_slice, yslice=y_slice, color=land_colour) axr.plot( section_slice, ylocn*np.ones_like(section_slice), linestyle='solid', linewidth=3, color='black', label='Section Line', ) # Axes labels and title axr.set_xlabel('x Index') axr.set_ylabel('y Index') axr.set_title( '24h Average Surface Streamlines on {date}' .format(date=timestamp.format('DD-MMM-YYYY'))) legend = axr.legend(loc='best', fancybox=True, framealpha=0.25) # Axes limits and grid axr.set_xlim(x_slice[0], x_slice[-1]) axr.set_ylim(y_slice[0], y_slice[-1]) axr.grid() ``` The code above uses the `nc_tools.timestamp()` function to obtain the time stamp of the plotted results from the dataset and formats that value as a date in the axes titles. Documentation for `nc_tools.timestamp()` (and the other functions in the `nc_tools` module) is available at http://salishsea-meopar-tools.readthedocs.org/en/latest/SalishSeaTools/salishsea-tools.html#nc_tools.timestamp and via shift-TAB or the `help()` command in notebooks: ``` help(nc_tools.timestamp) ``` Passing a tuple or list of time indices; e.g. `[0, 3, 6, 9]`, to `nc_tools.timestamp()` causes a list of time stamp values to be returned The time stamp value(s) returned are [Arrow](http://crsmithdev.com/arrow/) instances. The `format()` method can be used to produce a string representation of a time stamp, for example: ``` timestamp.format('YYYY-MM-DD HH:mm:ss') ``` NEMO results are calculated using the UTC time zone but `Arrow` time stamps can easily be converted to other time zones: ``` timestamp.to('Canada/Pacific') ``` Please see the [Arrow](http://crsmithdev.com/arrow/) docs for other useful methods and ways of manipulating dates and times in Python. ##Salinity Colour Mesh on Thalweg Section For this plot we'll look at results from the spin-up run that includes 27-Sep-2003 because it shows deep water renewal in the Strait of Georgia. ``` tracers = nc.Dataset('/ocean/dlatorne/MEOPAR/SalishSea/results/spin-up/18sep27sep/SalishSea_1d_20030918_20030927_grid_T.nc') ``` The salinity netCDF4 variables needs to be changed to a NumPy array. ``` sal = tracers.variables['vosaline'] npsal = sal[:] zlevels = tracers.variables['deptht'] ``` The thalweg is a line that connects the deepest points of successive cross-sections through the model domain. The grid indices of the thalweg are calculated in the [compute_thalweg.ipynb](https://nbviewer.jupyter.org/github/SalishSeaCast/tools/blob/master/analysis_tools/compute_thalweg.ipynb) notebook and stored as `(j, i)` ordered pairs in the `tools/analysis_tools/thalweg.txt/thalweg.txt` file: ``` !head thalweg.txt ``` We use the NumPy `loadtxt()` function to read the thalweg points into a pair of arrays. The `unpack` argument causes the result to be transposed from an array of ordered pairs to arrays of `j` and `i` values. ``` thalweg = np.loadtxt('/data/dlatorne/MEOPAR/tools/bathymetry/thalweg_working.txt', dtype=int, unpack=True) ``` Plotting salinity along the thalweg is an example of plotting a model result quantity on an arbitrary section through the domain. ``` # Set up the figure and axes fig, (axl, axcb, axr) = plt.subplots(1, 3, figsize=(16, 8)) land_colour = 'burlywood' for ax in (axl, axr): ax.set_axis_bgcolor(land_colour) axl.set_position((0.125, 0.125, 0.6, 0.775)) axcb.set_position((0.73, 0.125, 0.02, 0.775)) axr.set_position((0.83, 0.125, 0.2, 0.775)) # Plot thalweg points on bathymetry map viz_tools.set_aspect(axr) cmap = plt.get_cmap('winter_r') cmap.set_bad(land_colour) bathy = grid.variables['Bathymetry'] x_slice = np.arange(bathy.shape[1]) y_slice = np.arange(200, 800) axr.pcolormesh(x_slice, y_slice, bathy[y_slice, x_slice], cmap=cmap) axr.plot( thalweg[1], thalweg[0], linestyle='-', marker='+', color='red', label='Thalweg Points', ) legend = axr.legend(loc='best', fancybox=True, framealpha=0.25) axr.set_xlabel('x Index') axr.set_ylabel('y Index') axr.grid() # Plot 24h average salinity at all depths along thalweg line t = -1 # 27-Dec-2003 smin, smax, dels = 26, 34, 0.5 cmap = plt.get_cmap('rainbow') cmap.set_bad(land_colour) sal_0 = npsal[t, :, thalweg[0], thalweg[1]] sal_tzyx = np.ma.masked_values(sal_0, 0) x, z = np.meshgrid(np.arange(thalweg.shape[1]), zlevels) mesh = axl.pcolormesh(x, z, sal_tzyx.T, cmap=cmap, vmin=smin, vmax=smax) cbar = plt.colorbar(mesh, cax=axcb) cbar.set_label('Practical Salinity') clines = axl.contour(x, z, sal_tzyx.T, np.arange(smin, smax, dels), colors='black') axl.clabel(clines, fmt='%1.1f', inline=True) axl.invert_yaxis() axl.set_xlim(0, thalweg[0][-1]) axl.set_xlabel('x Index') axl.set_ylabel('{0.long_name} [{0.units}]'.format(zlevels)) axl.grid() ```
github_jupyter
**This notebook is an exercise in the [Natural Language Processing](https://www.kaggle.com/learn/natural-language-processing) course. You can reference the tutorial at [this link](https://www.kaggle.com/matleonard/word-vectors).** --- # Vectorizing Language Embeddings are both conceptually clever and practically effective. So let's try them for the sentiment analysis model you built for the restaurant. Then you can find the most similar review in the data set given some example text. It's a task where you can easily judge for yourself how well the embeddings work. ``` %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd import spacy # Set up code checking from learntools.core import binder binder.bind(globals()) from learntools.nlp.ex3 import * print("\nSetup complete") # Load the large model to get the vectors nlp = spacy.load('en_core_web_lg') review_data = pd.read_csv('../input/nlp-course/yelp_ratings.csv') review_data.head() ``` Here's an example of loading some document vectors. Calculating 44,500 document vectors takes about 20 minutes, so we'll get only the first 100. To save time, we'll load pre-saved document vectors for the hands-on coding exercises. ``` reviews = review_data[:100] # We just want the vectors so we can turn off other models in the pipeline with nlp.disable_pipes(): vectors = np.array([nlp(review.text).vector for idx, review in reviews.iterrows()]) vectors.shape ``` The result is a matrix of 100 rows and 300 columns. Why 100 rows? Because we have 1 row for each column. Why 300 columns? This is the same length as word vectors. See if you can figure out why document vectors have the same length as word vectors (some knowledge of linear algebra or vector math would be needed to figure this out). Go ahead and run the following cell to load in the rest of the document vectors. ``` # Loading all document vectors from file vectors = np.load('../input/nlp-course/review_vectors.npy') ``` # 1) Training a Model on Document Vectors Next you'll train a `LinearSVC` model using the document vectors. It runs pretty quick and works well in high dimensional settings like you have here. After running the LinearSVC model, you might try experimenting with other types of models to see whether it improves your results. ``` from sklearn.svm import LinearSVC from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(vectors, review_data.sentiment, test_size=0.1, random_state=1) # Create the LinearSVC model model = LinearSVC(random_state=1, dual=False) # Fit the model model.fit(X_train, y_train) # Uncomment and run to see model accuracy print(f'Model test accuracy: {model.score(X_test, y_test)*100:.3f}%') # Uncomment to check your work q_1.check() # Lines below will give you a hint or solution code #q_1.hint() #q_1.solution() # Scratch space in case you want to experiment with other models from sklearn.neural_network import MLPClassifier second_model = MLPClassifier(hidden_layer_sizes=(128,32,), early_stopping=True, random_state=1) second_model.fit(X_train, y_train) print(f'Model test accuracy: {second_model.score(X_test, y_test)*100:.3f}%') ``` # Document Similarity For the same tea house review, find the most similar review in the dataset using cosine similarity. # 2) Centering the Vectors Sometimes people center document vectors when calculating similarities. That is, they calculate the mean vector from all documents, and they subtract this from each individual document's vector. Why do you think this could help with similarity metrics? Run the following line after you've decided your answer. ``` # Check your answer (Run this code cell to receive credit!) #q_2.solution() q_2.check() ``` # 3) Find the most similar review Given an example review below, find the most similar document within the Yelp dataset using the cosine similarity. ``` review = """I absolutely love this place. The 360 degree glass windows with the Yerba buena garden view, tea pots all around and the smell of fresh tea everywhere transports you to what feels like a different zen zone within the city. I know the price is slightly more compared to the normal American size, however the food is very wholesome, the tea selection is incredible and I know service can be hit or miss often but it was on point during our most recent visit. Definitely recommend! I would especially recommend the butternut squash gyoza.""" def cosine_similarity(a, b): return np.dot(a, b)/np.sqrt(a.dot(a)*b.dot(b)) review_vec = nlp(review).vector ## Center the document vectors # Calculate the mean for the document vectors, should have shape (300,) vec_mean = vectors.mean(axis=0) # Subtract the mean from the vectors centered = vectors - vec_mean # Calculate similarities for each document in the dataset # Make sure to subtract the mean from the review vector review_centered = review_vec - vec_mean sims = np.array([cosine_similarity(v, review_centered) for v in centered]) # Get the index for the most similar document most_similar = sims.argmax() # Uncomment to check your work q_3.check() # Lines below will give you a hint or solution code #q_3.hint() #q_3.solution() print(review_data.iloc[most_similar].text) ``` Even though there are many different sorts of businesses in our Yelp dataset, you should have found another tea shop. # 4) Looking at similar reviews If you look at other similar reviews, you'll see many coffee shops. Why do you think reviews for coffee are similar to the example review which mentions only tea? ``` # Check your answer (Run this code cell to receive credit!) #q_4.solution() q_4.check() ``` # Congratulations! You've finished the NLP course. It's an exciting field that will help you make use of vast amounts of data you didn't know how to work with before. This course should be just your introduction. Try a project **[with text](https://www.kaggle.com/datasets?tags=14104-text+data)**. You'll have fun with it, and your skills will continue growing. --- *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161466) to chat with other Learners.*
github_jupyter
# EDA Case Study: House Price ### Task Description House Prices is a classical Kaggle competition. The task is to predicts final price of each house. For more detail, refer to https://www.kaggle.com/c/house-prices-advanced-regression-techniques/. ### Goal of this notebook As it is a famous competition, there exists lots of excelent analysis on how to do eda and how to build model for this task. See https://www.kaggle.com/khandelwallaksya/house-prices-eda for a reference. In this notebook, we will show how dataprep.eda can simply the eda process using a few lines of code. In conclusion: * **Understand the problem**. We'll look at each variable and do a philosophical analysis about their meaning and importance for this problem. * **Univariable study**. We'll just focus on the dependent variable ('SalePrice') and try to know a little bit more about it. * **Multivariate study**. We'll try to understand how the dependent variable and independent variables relate. * **Basic cleaning**. We'll clean the dataset and handle the missing data, outliers and categorical variables. ### Import libraries ``` from dataprep.eda import plot from dataprep.eda import plot_correlation from dataprep.eda import plot_missing from dataprep.datasets import load_dataset import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set(style="whitegrid", color_codes=True) sns.set(font_scale=1) ``` ### Load data ``` houses = load_dataset("house_prices_train") houses.head() houses_test = load_dataset("house_prices_test") houses_test.head() houses.shape ``` There are total 1460 tuples, each tuple contains 80 features and 1 target value. ``` houses_test.shape ``` ### Variable identification ``` plot(houses) ``` ### Overview of the data We could get the following information: * **Variable**-Variable name * **Type**-There are 43 categorical columns and 38 numerical columns. * **Missing value**-How many missing values each column contains. For instance, Fence contains 80.8% * 1460 = 1180 missing tuples. Usually, some model does not allow the input data contains missing value such as SVM, we have to clean the data before we utilize it. * **Target Value**-The distribution of target value (SalePrice). According to the distribution of the target value, we could get the information that the target value is numerical and the distribution of the target value conforms to the norm distribution. Thus, we are not confronted with imbalanced classes problem. It is really great. * **Guess**-According to the columns' name, we reckon GrLivArea, YearBuilt and OverallQual are likely to be correlated to the target value (SalePrice). ### Correlation in data ``` plot_correlation(houses, "SalePrice") plot_correlation(houses, "SalePrice", value_range=[0.5, 1]) ``` OverallQual, GrLivArea, GarageCars, GarageArea, TotalBsmtSF, 1stFlrSF, FullBath, TotRmsAbvGrd, YearBuilt, YearRemodAdd have more than 0.5 Pearson correlation with SalePrice. OverallQual, GrLivArea, GarageCars, YearBuilt, GarageArea, FullBath, TotalBsmtSF, GarageYrBlt, 1stFlrSF, YearRemodAdd, TotRmsAbvGrd and Fireplaces have more than 0.5 Spearman correlation with SalePrice. OverallQual, GarageCars, GrLivArea and FullBath have more than 0.5 KendallTau correlation with SalePrice. EnclosedPorch and KitchenAbvGr have little negative correlation with target variable. These can prove to be important features to predict SalePrice. ### Heatmap ``` plot_correlation(houses) ``` ### In summary In my opinion, this heatmap is the best way to get a quick overview of features' relationships. At first sight, there are two red colored squares that get my attention. The first one refers to the 'TotalBsmtSF' and '1stFlrSF' variables, and the second one refers to the 'GarageX' variables. Both cases show how significant the correlation is between these variables. Actually, this correlation is so strong that it can indicate a situation of multicollinearity. If we think about these variables, we can conclude that they give almost the same information so multicollinearity really occurs. Heatmaps are great to detect this kind of situations and in problems dominated by feature selection, like ours, they are an essential tool. Another thing that got my attention was the 'SalePrice' correlations. We can see our well-known 'GrLivArea', 'TotalBsmtSF', and 'OverallQual', but we can also see many other variables that should be taken into account. That's what we will do next. ``` plot_correlation(houses[["SalePrice","OverallQual","GrLivArea","GarageCars", "GarageArea","GarageYrBlt","TotalBsmtSF","1stFlrSF","FullBath", "TotRmsAbvGrd","YearBuilt","YearRemodAdd"]]) ``` As we saw above there are few feature which shows high multicollinearity from heatmap. Lets focus on red squares on diagonal line and few on the sides. SalePrice and OverallQual GarageArea and GarageCars TotalBsmtSF and 1stFlrSF GrLiveArea and TotRmsAbvGrd YearBulit and GarageYrBlt We have to create a single feature from them before we use them as predictors. ``` plot_correlation(houses, value_range=[0.5, 1]) plot_correlation(houses, k=30) ``` **Attribute Pair Correlation** 7 (GarageArea, GarageCars) 0.882475 11 (GarageYrBlt, YearBuilt) 0.825667 15 (GrLivArea, TotRmsAbvGrd) 0.825489 18 (1stFlrSF, TotalBsmtSF) 0.819530 19 (2ndFlrSF, GrLivArea) 0.687501 9 (BedroomAbvGr, TotRmsAbvGrd) 0.676620 0 (BsmtFinSF1, BsmtFullBath) 0.649212 2 (GarageYrBlt, YearRemodAdd) 0.642277 24 (FullBath, GrLivArea) 0.630012 8 (2ndFlrSF, TotRmsAbvGrd) 0.616423 1 (2ndFlrSF, HalfBath) 0.609707 4 (GarageCars, OverallQual) 0.600671 16 (GrLivArea, OverallQual) 0.593007 23 (YearBuilt, YearRemodAdd) 0.592855 22 (GarageCars, GarageYrBlt) 0.588920 12 (OverallQual, YearBuilt) 0.572323 5 (1stFlrSF, GrLivArea) 0.566024 25 (GarageArea, GarageYrBlt) 0.564567 6 (GarageArea, OverallQual) 0.562022 17 (FullBath, TotRmsAbvGrd) 0.554784 13 (OverallQual, YearRemodAdd) 0.550684 14 (FullBath, OverallQual) 0.550600 3 (GarageYrBlt, OverallQual) 0.547766 10 (GarageCars, YearBuilt) 0.537850 27 (OverallQual, TotalBsmtSF) 0.537808 20 (BsmtFinSF1, TotalBsmtSF) 0.522396 21 (BedroomAbvGr, GrLivArea) 0.521270 26 (2ndFlrSF, BedroomAbvGr) 0.502901 This shows multicollinearity. In regression, "multicollinearity" refers to features that are correlated with other features. Multicollinearity occurs when your model includes multiple factors that are correlated not just to your target variable, but also to each other. Problem: Multicollinearity increases the standard errors of the coefficients. That means, multicollinearity makes some variables statistically insignificant when they should be significant. To avoid this we can do 3 things: Completely remove those variables Make new feature by adding them or by some other operation. Use PCA, which will reduce feature set to small number of non-collinear features. Reference:http://blog.minitab.com/blog/understanding-statistics/handling-multicollinearity-in-regression-analysis ### Univariate Analysis How 1 single variable is distributed in numeric range. What is statistical summary of it. Is it positively skewed or negatively. ``` plot(houses, "SalePrice") ``` ### Pivotal Features ``` plot_correlation(houses, "OverallQual", "SalePrice") plot(houses, "OverallQual", "SalePrice") # why not combine them together? plot(houses, "GarageCars", "SalePrice") plot(houses, "Fireplaces", "SalePrice") plot(houses, "GrLivArea", "SalePrice") plot(houses, "TotalBsmtSF", "SalePrice") plot(houses, "YearBuilt", "SalePrice") ``` ### In summary Based on the above analysis, we can conclude that: 'GrLivArea' and 'TotalBsmtSF' seem to be linearly related with 'SalePrice'. Both relationships are positive, which means that as one variable increases, the other also increases. In the case of 'TotalBsmtSF', we can see that the slope of the linear relationship is particularly high. 'OverallQual' and 'YearBuilt' also seem to be related with 'SalePrice'. The relationship seems to be stronger in the case of 'OverallQual', where the box plot shows how sales prices increase with the overall quality. We just analysed four variables, but there are many other that we should analyse. The trick here seems to be the choice of the right features (feature selection) and not the definition of complex relationships between them (feature engineering). That said, let's separate the wheat from the chaff. ### Missing Value Imputation Missing values in the training data set can affect prediction or classification of a model negatively. Also some machine learning algorithms can't accept missing data eg. SVM, Neural Network. But filling missing values with mean/median/mode or using another predictive model to predict missing values is also a prediction which may not be 100% accurate, instead you can use models like Decision Trees and Random Forest which handle missing values very well. Some of this part is based on this kernel: https://www.kaggle.com/bisaria/house-prices-advanced-regression-techniques/handling-missing-data ``` plot_missing(houses) # plot_missing(houses, "BsmtQual") basement_cols=['BsmtQual','BsmtCond','BsmtExposure','BsmtFinType1','BsmtFinType2','BsmtFinSF1','BsmtFinSF2'] houses[basement_cols][houses['BsmtQual'].isnull()==True] ``` All categorical variables contains NAN whereas continuous ones have 0. So that means there is no basement for those houses. we can replace it with 'None'. ``` for col in basement_cols: if 'FinSF'not in col: houses[col] = houses[col].fillna('None') # plot_missing(houses, "FireplaceQu") houses["FireplaceQu"] = houses["FireplaceQu"].fillna('None') pd.crosstab(houses.Fireplaces, houses.FireplaceQu) garage_cols=['GarageType','GarageQual','GarageCond','GarageYrBlt','GarageFinish','GarageCars','GarageArea'] houses[garage_cols][houses['GarageType'].isnull()==True] ``` All garage related features are missing values in same rows. that means we can replace categorical variables with None and continuous ones with 0. ``` for col in garage_cols: if houses[col].dtype==np.object: houses[col] = houses[col].fillna('None') else: houses[col] = houses[col].fillna(0) ```
github_jupyter
# Keyword Spotting Dataset Curation [![Open In Colab <](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ShawnHymel/ei-keyword-spotting/blob/master/ei-audio-dataset-curation.ipynb) Use this tool to download the Google Speech Commands Dataset, combine it with your own keywords, mix in some background noise, and upload the curated dataset to Edge Impulse. From there, you can train a neural network to classify spoken words and upload it to a microcontroller to perform real-time keyword spotting. 1. Upload samples of your own keyword (optional) 2. Adjust parameters in the Settings cell (you will need an [Edge Impulse](https://www.edgeimpulse.com/) account) 3. Run the rest of the cells! ('shift' + 'enter' on each cell) ### Upload your own keyword samples You are welcome to use my [custom keyword dataset](https://github.com/ShawnHymel/custom-speech-commands-dataset), but note that it's limited and that I can't promise it will work well. If you want to use it, uncomment the `###Download custom dataset` cell below. You may also add your own recorded keywords to the extracted folder (`/content/custom_keywords`) to augment what's already there. If you'd rather upload your own custom keyword dataset, follow these instructions: On the left pane, in the file browser, create a directory structure containing space for your keyword audio samples. All samples for each keyword should be in a directory with that keyword's name. The audio samples should be `.wav` format, mono, and 1 second long. Bitrate and bitdepth should not matter. Samples shorter than 1 second will be padded with 0s, and samples longer than 1 second will be truncated to 1 second. The exact name of each `.wav` file does not matter, as they will be read, mixed with background noise, and saved to a separate file with an auto-generated name. Directory name does matter (it is used to determine the name of the class during neural network training). Right-click on each keyword directory and upload all of your samples. Your directory structor should look like the following: ``` / |- content |--- custom_keywords |----- keyword_1 |------- 000.wav |------- 001.wav |------- ... |----- keyword_2 |------- 000.wav |------- 001.wav |------- ... |----- ... ``` ``` ### Update Node.js to the latest stable version !npm cache clean -f !npm install -g n !n stable ### Install required packages and tools !python -m pip install soundfile !npm install -g --unsafe-perm edge-impulse-cli ### Settings (You probably do not need to change these) BASE_DIR = "/content" OUT_DIR = "keywords_curated" GOOGLE_DATASET_FILENAME = "speech_commands_v0.02.tar.gz" GOOGLE_DATASET_URL = "http://download.tensorflow.org/data/" + GOOGLE_DATASET_FILENAME GOOGLE_DATASET_DIR = "google_speech_commands" CUSTOM_KEYWORDS_FILENAME = "main.zip" CUSTOM_KEYWORDS_URL = "https://github.com/ShawnHymel/custom-speech-commands-dataset/archive/" + CUSTOM_KEYWORDS_FILENAME CUSTOM_KEYWORDS_DIR = "custom_keywords" CUSTOM_KEYWORDS_REPO_NAME = "custom-speech-commands-dataset-main" CURATION_SCRIPT = "dataset-curation.py" CURATION_SCRIPT_URL = "https://raw.githubusercontent.com/ShawnHymel/ei-keyword-spotting/master/" + CURATION_SCRIPT UTILS_SCRIPT_URL = "https://raw.githubusercontent.com/ShawnHymel/ei-keyword-spotting/master/utils.py" NUM_SAMPLES = 1500 # Target number of samples to mix and send to Edge Impulse WORD_VOL = 1.0 # Relative volume of word in output sample BG_VOL = 0.1 # Relative volume of noise in output sample SAMPLE_TIME = 1.0 # Time (seconds) of output sample SAMPLE_RATE = 16000 # Sample rate (Hz) of output sample BIT_DEPTH = "PCM_16" # Options: [PCM_16, PCM_24, PCM_32, PCM_U8, FLOAT, DOUBLE] BG_DIR = "_background_noise_" TEST_RATIO = 0.2 # 20% reserved for test set, rest is for training EI_INGEST_TEST_URL = "https://ingestion.edgeimpulse.com/api/test/data" EI_INGEST_TRAIN_URL = "https://ingestion.edgeimpulse.com/api/training/data" ### Download Google Speech Commands Dataset !cd {BASE_DIR} !wget {GOOGLE_DATASET_URL} !mkdir {GOOGLE_DATASET_DIR} !echo "Extracting..." !tar xfz {GOOGLE_DATASET_FILENAME} -C {GOOGLE_DATASET_DIR} ### Pull out background noise directory !cd {BASE_DIR} !mv "{GOOGLE_DATASET_DIR}/{BG_DIR}" "{BG_DIR}" ### (Optional) Download custom dataset--uncomment the code in this cell if you want to use my custom datase ## Download, extract, and move dataset to separate directory # !cd {BASE_DIR} # !wget {CUSTOM_KEYWORDS_URL} # !echo "Extracting..." # !unzip -q {CUSTOM_KEYWORDS_FILENAME} # !mv "{CUSTOM_KEYWORDS_REPO_NAME}/{CUSTOM_KEYWORDS_DIR}" "{CUSTOM_KEYWORDS_DIR}" ### User Settings (do change these) # Location of your custom keyword samples (e.g. "/content/custom_keywords") # Leave blank ("") for no custom keywords. set to the CUSTOM_KEYWORDS_DIR # variable to use samples from my custom-speech-commands-dataset repo. CUSTOM_DATASET_PATH = "" # Edge Impulse > your_project > Dashboard > Keys EI_API_KEY = "ei_e544..." # Comma separated words. Must match directory names (that contain samples). # Recommended: use 2 keywords for microcontroller demo TARGETS = "go, stop" ### Download curation and utils scripts !wget {CURATION_SCRIPT_URL} !wget {UTILS_SCRIPT_URL} ### Perform curation and mixing of samples with background noise !cd {BASE_DIR} !python {CURATION_SCRIPT} \ -t "{TARGETS}" \ -n {NUM_SAMPLES} \ -w {WORD_VOL} \ -g {BG_VOL} \ -s {SAMPLE_TIME} \ -r {SAMPLE_RATE} \ -e {BIT_DEPTH} \ -b "{BG_DIR}" \ -o "{OUT_DIR}" \ "{GOOGLE_DATASET_DIR}" \ "{CUSTOM_DATASET_PATH}" ### Use CLI tool to send curated dataset to Edge Impulse !cd {BASE_DIR} # Imports import os import random # Seed with system time random.seed() # Go through each category in our curated dataset for dir in os.listdir(OUT_DIR): # Create list of files for one category paths = [] for filename in os.listdir(os.path.join(OUT_DIR, dir)): paths.append(os.path.join(OUT_DIR, dir, filename)) # Shuffle and divide into test and training sets random.shuffle(paths) num_test_samples = int(TEST_RATIO * len(paths)) test_paths = paths[:num_test_samples] train_paths = paths[num_test_samples:] # Create arugments list (as a string) for CLI call test_paths = ['"' + s + '"' for s in test_paths] test_paths = ' '.join(test_paths) train_paths = ['"' + s + '"' for s in train_paths] train_paths = ' '.join(train_paths) # Send test files to Edge Impulse !edge-impulse-uploader \ --category testing \ --label {dir} \ --api-key {EI_API_KEY} \ --silent \ {test_paths} # # Send training files to Edge Impulse !edge-impulse-uploader \ --category training \ --label {dir} \ --api-key {EI_API_KEY} \ --silent \ {train_paths} ```
github_jupyter
<img src="../img/logo_white_bkg_small.png" align="right" /> # Worksheet 3: Detecting Domain Generation Algorithm (DGA) Domains against DNS This worksheet covers concepts covered in the second half of Module 6 - Hunting with Data Science. It should take no more than 20-30 minutes to complete. Please raise your hand if you get stuck. Your objective is to reduce a dataset that has thousands of domain names and identify those created by DGA. ## Import the Libraries For this exercise, we will be using: * Pandas (http://pandas.pydata.org/pandas-docs/stable/) * Flare (https://github.com/austin-taylor/flare) * Json (https://docs.python.org/3/library/json.html) * WHOIS (https://pypi.python.org/pypi/whois) Beacon writeup: <a href="http://www.austintaylor.io/detect/beaconing/intrusion/detection/system/command/control/flare/elastic/stack/2017/06/10/detect-beaconing-with-flare-elasticsearch-and-intrusion-detection-systems/"> Detect Beaconing <a href="../answers/Worksheet 10 - Hunting with Data Science - Answers.ipynb"> Answers for this section </a> ``` from flare.data_science.features import entropy from flare.data_science.features import dga_classifier from flare.data_science.features import domain_tld_extract from flare.tools.alexa import Alexa from pandas.io.json import json_normalize from whois import whois import pandas as pd import json import warnings warnings.filterwarnings('ignore') ``` ## This is an example of how to generate domain generated algorithms. ``` def generate_domain(year, month, day): """Generates a domain name for the given date.""" domain = "" for i in range(16): year = ((year ^ 8 * year) >> 11) ^ ((year & 0xFFFFFFF0) << 17) month = ((month ^ 4 * month) >> 25) ^ 16 * (month & 0xFFFFFFF8) day = ((day ^ (day << 13)) >> 19) ^ ((day & 0xFFFFFFFE) << 12) domain += chr(((year ^ month ^ day) % 25) + 97) return domain + '.com' generate_domain(2017, 6, 23) ``` ### A large portion of data science is data preparation. In this exercise, we'll take output from Suricata's eve.json file and extract the DNS records so we can find anything using DGA. First you'll need to **unzip the large_eve_json.zip file** in the data directory and specify the path. ``` eve_json = '../data/large_eve.json' ``` ### Next read the data in and build a list ``` all_suricata_data = [json.loads(record) for record in open(eve_json).readlines()] len(all_suricata_data) ``` ### Our output from Suricata has 746909 records, and for we are only interested in DNS records. Let's narrow our data down to records that only contain dns ### Read in suricata data and load each record as json if DNS is in the key. This will help pandas json_normalize feature ``` # YOUR CODE (hint check if dns is in key) len(dns_records) ``` ### Down to 21484 -- much better. ### Somewhere in our _21484_ records is communication from infected computers. It's up to you to narrow the results down and find the malicious DNS request. ``` dns_records[2] ``` ### The data is nested json and has varying lengths, so you will need to use the json_normalize feature ``` suricata_df = json_normalize(dns_records) suricata_df.shape suricata_df.head(2) ``` ### Next we need to filter out all A records ``` # YOUR CODE to filter out all A records a_records.shape ``` ### By filtering out the A records, our dataset is down to 2849. ``` a_records['dns.rrname'].value_counts().head() ``` ### Next we can figure out how many unique DNS names there are. ``` a_records_unique = pd.DataFrame(a_records['dns.rrname'].unique(), columns=['dns_rrname']) ``` ### Should have a much smaller set of domains to process now ``` a_records_unique.head() ``` ### Next we need to train extract the top level domains (remove subdomains) using flare so we can feed it to our classifier ``` #Apply extract to the dns_rrname and create a column named domain_tld a_records_unique.head() ``` ### Train DGA Classifier with dictionary words, n-grams and DGA Domains ``` dga_predictor = dga_classifier() ``` You can apply dga prediction to a column by using dga_predictor.predict('baddomain.com') ``` # YOUR CODE ``` ### A quick sampling of the data shows our prediction has labelled our data. ``` a_records_unique.sample(10) ``` Create a new dataframe called dga_df and filter it out to only show names predicted as DGA ``` # YOUR CODE dga_df ``` ### Our dataset is down to 5 results! Let's run the domains through alexa to see if ny are in the top 1 million ``` alexa = Alexa() # Example: dga_df['in_alexa'] = dga_df.dns_rrname.apply(alexa.domain_in_alexa) def get_creation_date(domain): try: lookup = whois(domain) output = lookup.get('creation_date','No results') except: output = 'No Creation Date!' if output is None: output = 'No Creation Date!' return output get_creation_date('google.com') ``` ### It appears none of our domains are in Alexa, but let's check creation dates. ``` # YOUR CODE ``` ### Congrats! If you did this exercise right, you should have 2 domains with no creation date which were generated by DGA! Bonus points if you can figure out the dates for each domain.
github_jupyter
``` import pandas as pds import numpy as np import matplotlib.pyplot as plt method_dict = {"vi": "D-CODE", "diff": "SR-T", "spline": "SR-S", "gp": "SR-G"} val_dict = { "noise": "sigma", "freq": "del_t", "n": "n", } ode_list = ["GompertzODE", "LogisticODE"] def plot_df(df, x_val="sigma"): for method in method_dict.keys(): df_sub = df[df.method == method] df_sub = df_sub.dropna() plt.fill_between( df_sub[x_val], df_sub.rate - df_sub.rate_sd, df_sub.rate + df_sub.rate_sd, alpha=0.3, ) plt.plot(df_sub[x_val], df_sub.rate, "o-", label=method_dict[method]) plt.ylim(-0.05, 1.05) plt.figure(figsize=(12, 4)) plt.style.use("tableau-colorblind10") colors = plt.rcParams["axes.prop_cycle"].by_key()["color"] plt.rcParams["font.size"] = "13" counter = 1 for i in range(len(ode_list)): ode = ode_list[i] for val_key, x_val in val_dict.items(): print(ode, val_key, x_val) df = pds.read_csv("results/{}-{}.txt".format(ode, val_key), header=None) df.columns = [ "ode", "freq", "n", "sigma", "method", "rate", "rate_sd", "ks", "ks_sd", ] df["del_t"] = 1.0 / df["freq"] df = df.sort_values(["method", x_val]) plot_conf = 230 + counter plt.subplot(plot_conf) plot_df(df, x_val=x_val) if counter == 1 or counter == 4: plt.ylabel("Success Prob.", size=16) if counter == 1: plt.title("Varying noise level $\sigma_R$") plt.xscale("log") elif counter == 2: plt.title("Gompertz Model \n Varying step size $\Delta t$") plt.xscale("log") elif counter == 3: plt.title("Varying sample size $N$") elif counter == 5: plt.title("Generalized Logistic Model") if counter == 4: plt.xlabel(r"$\sigma_R$", size=16) plt.xscale("log") elif counter == 5: plt.xlabel(r"$\Delta t$", size=16) plt.xscale("log") elif counter == 6: plt.xlabel(r"$N$", size=16) counter += 1 plt.legend(title="Methods", bbox_to_anchor=(1.05, 1), loc="upper left") plt.tight_layout(pad=0.2) plt.savefig("growth_results.png", dpi=200) ``` ## Selkov Table ``` ode = "SelkovODE" val_key = list(val_dict.keys())[0] x_val = val_dict[val_key] df = pds.read_csv("results/{}-{}.txt".format(ode, val_key), header=None) df.columns = ["ode", "freq", "n", "sigma", "method", "rate", "rate_sd", "ks", "ks_sd"] df["del_t"] = 1.0 / df["freq"] df = df.sort_values(["method", x_val]) df["x_id"] = 0 df0 = df df = pds.read_csv("results/{}-{}-1.txt".format(ode, val_key), header=None) df.columns = ["ode", "freq", "n", "sigma", "method", "rate", "rate_sd", "ks", "ks_sd"] df["del_t"] = 1.0 / df["freq"] df = df.sort_values(["method", x_val]) df["x_id"] = 1 df1 = df df = pds.read_csv("results/{}-{}-param.txt".format(ode, val_key), header=None) df.columns = [ "ode", "freq", "n", "sigma", "method", "sigma_rmse", "sigma_sd", "rho_rmse", "rho_sd", ] df["del_t"] = 1.0 / df["freq"] df = df.sort_values(["method", x_val]) df["x_id"] = 0 df0_param = df df = pds.read_csv("results/{}-{}-param-1.txt".format(ode, val_key), header=None) df.columns = [ "ode", "freq", "n", "sigma", "method", "sigma_rmse", "sigma_sd", "rho_rmse", "rho_sd", ] df["del_t"] = 1.0 / df["freq"] df = df.sort_values(["method", x_val]) df["x_id"] = 1 df1_param = df df = pds.concat([df0, df1]) df_param = pds.concat([df0_param, df1_param]) df = pds.merge(df, df_param) tbl_rate = pds.pivot_table(df, values="rate", index=["x_id", "method"], columns="sigma") tbl_rate["val"] = "rate" tbl_rate_sd = pds.pivot_table( df, values="rate_sd", index=["x_id", "method"], columns="sigma" ) tbl_rate_sd["val"] = "rate_sd" tbl_sigma_rmse = pds.pivot_table( df, values="sigma_rmse", index=["x_id", "method"], columns="sigma" ) tbl_sigma_rmse["val"] = "sigma_rmse" tbl_sigma_sd = pds.pivot_table( df, values="sigma_sd", index=["x_id", "method"], columns="sigma" ) tbl_sigma_sd["val"] = "sigma_sd" tbl_sigma_ks = pds.pivot_table( df, values="ks", index=["x_id", "method"], columns="sigma" ) tbl_sigma_ks["val"] = "ks" tbl_sigma_ks tbl_sigma_ks_sd = pds.pivot_table( df, values="ks_sd", index=["x_id", "method"], columns="sigma" ) tbl_sigma_ks_sd["val"] = "ks_sd" tbl_sigma_ks_sd selkov_table = pds.concat([tbl_rate, tbl_rate_sd, tbl_sigma_rmse, tbl_sigma_sd]) selkov_table tt = selkov_table.reset_index() tt[tt["method"] == "gp"] tt[tt["method"] == "diff"] selkov_table.to_csv("Selkov_results.csv") ``` ## Lorenz results ``` def plot_df_ax(df, ax, x_val="sigma"): for method in method_dict.keys(): df_sub = df[df.method == method] df_sub = df_sub.dropna() ax.fill_between( df_sub[x_val], df_sub.rate - df_sub.rate_sd, df_sub.rate + df_sub.rate_sd, alpha=0.3, ) ax.plot(df_sub[x_val], df_sub.rate, "o-", label=method_dict[method]) ax.set_ylim(-0.05, 1.05) ## this part is the placeholder def lorenz(x, y, z, s=10, r=28, b=2.667): """ Given: x, y, z: a point of interest in three dimensional space s, r, b: parameters defining the lorenz attractor Returns: x_dot, y_dot, z_dot: values of the lorenz attractor's partial derivatives at the point x, y, z """ x_dot = s * (y - x) y_dot = r * x - y - x * z z_dot = x * y - b * z return x_dot, y_dot, z_dot dt = 0.01 num_steps = 10000 # Need one more for the initial values xs = np.empty(num_steps + 1) ys = np.empty(num_steps + 1) zs = np.empty(num_steps + 1) # Set initial values xs[0], ys[0], zs[0] = (0.0, 1.0, 1.05) # Step through "time", calculating the partial derivatives at the current point # and using them to estimate the next point for i in range(num_steps): x_dot, y_dot, z_dot = lorenz(xs[i], ys[i], zs[i]) xs[i + 1] = xs[i] + (x_dot * dt) ys[i + 1] = ys[i] + (y_dot * dt) zs[i + 1] = zs[i] + (z_dot * dt) import pickle with open("results/Lorenz_traj.pkl", "rb") as f: diff_dict = pickle.load(f) with open("results/Lorenz_vi_traj.pkl", "rb") as f: vi_dict = pickle.load(f) with open("results/Lorenz_true_traj.pkl", "rb") as f: true_dict = pickle.load(f) with open("results/Lorenz_node_traj2.pkl", "rb") as f: node_dict = pickle.load(f) def plot_trac(ax, xs, ys, zs, title, lw=0.5): elev = 5.0 azim = 120.0 ax.view_init(elev, azim) ax.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 1.0)) ax.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 1.0)) ax.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 1.0)) ax.plot(xs, ys, zs, lw=lw) ax.set_title(title) # ax.set_xticks([]) # ax.set_yticks([]) # ax.set_zticks([]) import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec plt.figure(figsize=(12, 6)) plt.style.use("tableau-colorblind10") colors = plt.rcParams["axes.prop_cycle"].by_key()["color"] plt.rcParams["font.size"] = "13" gs = gridspec.GridSpec(3, 12) ax1a = plt.subplot(gs[0, :4]) ax1b = plt.subplot(gs[0, 4:8]) ax1c = plt.subplot(gs[0, 8:]) # ax1a = plt.subplot(gs[0, :3]) # ax1b = plt.subplot(gs[0, 3:6]) # ax1c = plt.subplot(gs[0, 6:9]) ax2a = plt.subplot(gs[1:, :3], projection="3d") ax2b = plt.subplot(gs[1:, 3:6], projection="3d") ax2c = plt.subplot(gs[1:, 6:9], projection="3d") ax2d = plt.subplot(gs[1:, 9:], projection="3d") for i, ax in enumerate(plt.gcf().axes): if i < 3: x_id = i if x_id == 0: df = pds.read_csv("results/Lorenz-noise.txt", header=None) else: df = pds.read_csv("results/Lorenz-noise-{}.txt".format(x_id), header=None) df.columns = [ "ode", "freq", "n", "sigma", "method", "rate", "rate_sd", "ks", "ks_sd", ] df["del_t"] = 1.0 / df["freq"] df = df.sort_values(["method", "sigma"]) plot_df_ax(df, ax) ax.set_xlabel("Noise $\sigma_R$") if i == 0: ax.set_title("Success Prob. $\dot{x}_1(t)$") elif i == 1: ax.set_title("Success Prob. $\dot{x}_2(t)$") else: ax.set_title("Success Prob. $\dot{x}_3(t)$") ax.legend(bbox_to_anchor=(1.005, 1), loc="upper left", fontsize=10) # ax.legend(loc='center left', fontsize=10) else: if i == 3: plot_trac(ax, true_dict["x"], true_dict["y"], true_dict["z"], str(i)) ax.set_title("Ground truth") elif i == 4: plot_trac(ax, vi_dict["x"], vi_dict["y"], vi_dict["z"], str(i), lw=0.3) ax.set_title("D-CODE") elif i == 5: plot_trac( ax, diff_dict["x"], diff_dict["y"], diff_dict["z"], str(i), lw=0.8 ) ax.set_title("SR-T") else: plot_trac( ax, node_dict["x"], node_dict["y"], node_dict["z"], str(i), lw=2.0 ) ax.set_title("Neural ODE") ax.set_zlim(0, 50) ax.set_xlim(-25, 25) ax.set_ylim(-25, 25) plt.tight_layout() plt.savefig("lorenz.png", dpi=200) plt.show() ``` ## sensitivity plot ``` def plot_df2(df_sub, x_val="n_basis"): plt.fill_between( df_sub[x_val], df_sub.rate - df_sub.rate_sd, df_sub.rate + df_sub.rate_sd, alpha=0.3, ) plt.plot(df_sub[x_val], df_sub.rate, "o-") plt.ylim(-0.05, 1.05) ode_list = ["GompertzODE", "Lorenz"] bas_list = ["sine", "cubic"] plt.figure(figsize=(12, 4)) plt.style.use("tableau-colorblind10") colors = plt.rcParams["axes.prop_cycle"].by_key()["color"] plt.rcParams["font.size"] = "13" counter = 1 for i in range(len(ode_list)): ode = ode_list[i] for bas in bas_list: df = pds.read_csv("results/sensitivity_{}.txt".format(ode), header=None) df.columns = ["ode", "basis", "n_basis", "N", "rate", "rate_sd", "ks", "ks_sd"] df = df.sort_values(["basis", "n_basis"]) df_sub = df[df["basis"] == bas] df_sub = df_sub.dropna() plot_conf = 220 + counter plt.subplot(plot_conf) plot_df2(df_sub) if counter > 2: plt.xlabel("Number of basis", size=16) # if counter == 1 or counter == 4: # plt.ylabel('Recovery Rate', size=16) plt.title("{} - {}".format(ode, bas)) # if counter == 1: # plt.title('Varying noise level $\sigma_R$') # plt.xscale('log') # elif counter == 2: # plt.title('Gompertz Model \n Varying step size $\Delta t$') # elif counter == 3: # plt.title('Varying sample size $N$') # elif counter == 4: # plt.title('Generalized Logistic Model') # if counter == 4: # plt.xlabel(r'$\sigma_R$', size=16) # plt.xscale('log') # elif counter == 5: # plt.xlabel(r'$\Delta t$', size=16) # elif counter == 6: # plt.xlabel(r'$N$', size=16) counter += 1 plt.tight_layout(pad=0.2) plt.savefig("sensitivity_results.png", dpi=200) ``` ## objective ``` method_dict = {"vi": "D-CODE", "diff": "SR-T", "spline": "SR-S", "gp": "SR-G"} val_dict = { "noise": "sigma", "freq": "del_t", "n": "n", } ode_list = ["GompertzODE", "LogisticODE"] def plot_df(df, x_val="sigma"): for method in method_dict.keys(): df_sub = df[df.method == method] df_sub = df_sub.dropna() # if x_val == 'sigma': # df_sub = df_sub[df_sub[x_val] < 0.6] plt.fill_between( df_sub[x_val], df_sub.ks - df_sub.ks_sd, df_sub.ks + df_sub.ks_sd, alpha=0.3 ) plt.plot(df_sub[x_val], df_sub.ks, "o-", label=method_dict[method]) # plt.ylim([-0.05, None]) plt.figure(figsize=(12, 4)) plt.style.use("tableau-colorblind10") colors = plt.rcParams["axes.prop_cycle"].by_key()["color"] plt.rcParams["font.size"] = "13" counter = 1 for i in range(len(ode_list)): ode = ode_list[i] for val_key, x_val in val_dict.items(): df = pds.read_csv("results/{}-{}.txt".format(ode, val_key), header=None) df.columns = [ "ode", "freq", "n", "sigma", "method", "rate", "rate_sd", "ks", "ks_sd", ] df["del_t"] = 1.0 / df["freq"] df = df.sort_values(["method", x_val]) plot_conf = 230 + counter plt.subplot(plot_conf) plot_df(df, x_val=x_val) if counter == 1 or counter == 4: plt.ylabel("Objective $d_x$", size=16) if counter == 1: plt.title("Varying noise level $\sigma_R$") plt.xscale("log") elif counter == 2: plt.title("Gompertz Model \n Varying step size $\Delta t$") plt.xscale("log") elif counter == 3: plt.title("Varying sample size $N$") elif counter == 5: plt.title("Generalized Logistic Model") if counter == 4: plt.xlabel(r"$\sigma_R$", size=16) plt.xscale("log") elif counter == 5: plt.xlabel(r"$\Delta t$", size=16) elif counter == 6: plt.xlabel(r"$N$", size=16) counter += 1 plt.legend(title="Methods", bbox_to_anchor=(1.05, 1), loc="upper left") plt.tight_layout(pad=0.2) plt.savefig("growth_results_obj.png", dpi=200) ``` ## Lorenz objective ``` def plot_df_ax2(df, ax, x_val="sigma"): for method in method_dict.keys(): df_sub = df[df.method == method] df_sub = df_sub.dropna() ax.fill_between( df_sub[x_val], df_sub.ks - df_sub.ks_sd, df_sub.ks + df_sub.ks_sd, alpha=0.3 ) ax.plot(df_sub[x_val], df_sub.ks, "o-", label=method_dict[method]) # ax.set_ylim(-0.05, 1.05) import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec plt.figure(figsize=(12, 2.5)) plt.style.use("tableau-colorblind10") colors = plt.rcParams["axes.prop_cycle"].by_key()["color"] plt.rcParams["font.size"] = "13" # gs = gridspec.GridSpec(3, 9) ax1a = plt.subplot(1, 3, 1) ax1b = plt.subplot(1, 3, 2) ax1c = plt.subplot(1, 3, 3) # ax1a = plt.subplot(gs[0, :3]) # ax1b = plt.subplot(gs[0, 3:6]) # ax1c = plt.subplot(gs[0, 6:9]) # ax2a = plt.subplot(gs[1:, :3], projection='3d') # ax2b = plt.subplot(gs[1:, 3:6], projection='3d') # ax2c = plt.subplot(gs[1:, 6:9], projection='3d') # ax2d = plt.subplot(gs[1:, 9:], projection='3d') for i, ax in enumerate(plt.gcf().axes): if i < 3: x_id = i if x_id == 0: df = pds.read_csv("results/Lorenz-noise.txt", header=None) else: df = pds.read_csv("results/Lorenz-noise-{}.txt".format(x_id), header=None) df.columns = [ "ode", "freq", "n", "sigma", "method", "rate", "rate_sd", "ks", "ks_sd", ] df["del_t"] = 1.0 / df["freq"] df = df.sort_values(["method", "sigma"]) plot_df_ax2(df, ax) ax.set_xlabel("Noise $\sigma_R$") if i == 0: ax.set_title("Objective $d_x$ for $\dot{x}_1(t)$") elif i == 1: ax.set_title("Objective $d_x$ for $\dot{x}_2(t)$") else: ax.set_title("Objective $d_x$ for $\dot{x}_3(t)$") ax.legend(bbox_to_anchor=(1.005, 1), loc="upper left", fontsize=10) # ax.legend(loc='center left', fontsize=10) plt.tight_layout() plt.savefig("lorenz_objective.png", dpi=200) plt.show() ``` ## Fraction ODE ``` import matplotlib.pyplot as plt plt.figure(figsize=(12, 2.5)) plt.subplot(1, 2, 1) plt.style.use("tableau-colorblind10") colors = plt.rcParams["axes.prop_cycle"].by_key()["color"] plt.rcParams["font.size"] = "13" df = pds.read_csv("results/FracODE-noise.txt", header=None) df.columns = ["ode", "freq", "n", "sigma", "method", "rate", "rate_sd", "ks", "ks_sd"] df["del_t"] = 1.0 / df["freq"] df = df.sort_values(["method", "sigma"]) x_val = "sigma" for method in method_dict.keys(): df_sub = df[df.method == method] df_sub = df_sub.dropna() plt.fill_between( df_sub[x_val], df_sub.rate - df_sub.rate_sd, df_sub.rate + df_sub.rate_sd, alpha=0.3, ) plt.plot(df_sub[x_val], df_sub.rate, "o-", label=method_dict[method]) plt.ylim(-0.05, 1.05) plt.title("Discover Prob.") plt.xlabel("Noise level $\sigma$") ax = plt.subplot(1, 2, 2) plot_df_ax2(df, ax) ax.set_title("Objective $d_x$") ax.legend(bbox_to_anchor=(1.005, 1), loc="upper left", fontsize=10) plt.xlabel("Noise level $\sigma$") plt.savefig("frac.png", dpi=200) plt.show() ```
github_jupyter
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. # Tutorial: Train a classification model with automated machine learning In this tutorial, you'll learn how to generate a machine learning model using automated machine learning (automated ML). Azure Machine Learning can perform algorithm selection and hyperparameter selection in an automated way for you. The final model can then be deployed following the workflow in the [Deploy a model](02.deploy-models.ipynb) tutorial. [flow diagram](./imgs/flow2.png) Similar to the [train models tutorial](01.train-models.ipynb), this tutorial classifies handwritten images of digits (0-9) from the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. But this time you don't to specify an algorithm or tune hyperparameters. The automated ML technique iterates over many combinations of algorithms and hyperparameters until it finds the best model based on your criterion. You'll learn how to: > * Set up your development environment > * Access and examine the data > * Train using an automated classifier locally with custom parameters > * Explore the results > * Review training results > * Register the best model ## Prerequisites Use [these instructions](https://aka.ms/aml-how-to-configure-environment) to: * Create a workspace and its configuration file (**config.json**) * Upload your **config.json** to the same folder as this notebook ### Start a notebook To follow along, start a new notebook from the same directory as **config.json** and copy the code from the sections below. ## Set up your development environment All the setup for your development work can be accomplished in the Python notebook. Setup includes: * Import Python packages * Configure a workspace to enable communication between your local computer and remote resources * Create a directory to store training scripts ### Import packages Import Python packages you need in this tutorial. ``` import azureml.core import pandas as pd from azureml.core.workspace import Workspace from azureml.train.automl.run import AutoMLRun import time import logging from sklearn import datasets from matplotlib import pyplot as plt from matplotlib.pyplot import imshow import random import numpy as np ``` ### Configure workspace Create a workspace object from the existing workspace. `Workspace.from_config()` reads the file **aml_config/config.json** and loads the details into an object named `ws`. `ws` is used throughout the rest of the code in this tutorial. Once you have a workspace object, specify a name for the experiment and create and register a local directory with the workspace. The history of all runs is recorded under the specified experiment. ``` ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-classifier' # project folder project_folder = './automl-classifier' import os output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder pd.set_option('display.max_colwidth', -1) pd.DataFrame(data=output, index=['']).T ``` ## Explore data The initial training tutorial used a high-resolution version of the MNIST dataset (28x28 pixels). Since auto training requires many iterations, this tutorial uses a smaller resolution version of the images (8x8 pixels) to demonstrate the concepts while speeding up the time needed for each iteration. ``` from sklearn import datasets digits = datasets.load_digits() # Exclude the first 100 rows from training so that they can be used for test. X_train = digits.data[100:,:] y_train = digits.target[100:] ``` ### Display some sample images Load the data into `numpy` arrays. Then use `matplotlib` to plot 30 random images from the dataset with their labels above them. ``` count = 0 sample_size = 30 plt.figure(figsize = (16, 6)) for i in np.random.permutation(X_train.shape[0])[:sample_size]: count = count + 1 plt.subplot(1, sample_size, count) plt.axhline('') plt.axvline('') plt.text(x = 2, y = -2, s = y_train[i], fontsize = 18) plt.imshow(X_train[i].reshape(8, 8), cmap = plt.cm.Greys) plt.show() ``` You now have the necessary packages and data ready for auto training for your model. ## Auto train a model To auto train a model, first define settings for autogeneration and tuning and then run the automatic classifier. ### Define settings for autogeneration and tuning Define the experiment parameters and models settings for autogeneration and tuning. |Property| Value in this tutorial |Description| |----|----|---| |**primary_metric**|AUC Weighted | Metric that you want to optimize.| |**max_time_sec**|12,000|Time limit in seconds for each iteration| |**iterations**|20|Number of iterations. In each iteration, the model trains with the data with a specific pipeline| |**n_cross_validations**|3|Number of cross validation splits| |**exit_score**|0.9985|*double* value indicating the target for *primary_metric*. Once the target is surpassed the run terminates| |**blacklist_algos**|['kNN','LinearSVM']|*Array* of *strings* indicating algorithms to ignore. ``` from azureml.train.automl import AutoMLConfig ##Local compute Automl_config = AutoMLConfig(task = 'classification', primary_metric = 'AUC_weighted', max_time_sec = 12000, iterations = 20, n_cross_validations = 3, exit_score = 0.9985, blacklist_algos = ['kNN','LinearSVM'], X = X_train, y = y_train, path=project_folder) ``` ### Run the automatic classifier Start the experiment to run locally. Define the compute target as local and set the output to true to view progress on the experiment. ``` from azureml.core.experiment import Experiment experiment=Experiment(ws, experiment_name) local_run = experiment.submit(Automl_config, show_output=True) ``` ## Explore the results Explore the results of automatic training with a Jupyter widget or by examining the experiment history. ### Jupyter widget Use the Jupyter notebook widget to see a graph and a table of all results. ``` from azureml.train.widgets import RunDetails RunDetails(local_run).show() ``` ### Retrieve all iterations View the experiment history and see individual metrics for each iteration run. ``` children = list(local_run.get_children()) metricslist = {} for run in children: properties = run.get_properties() metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} metricslist[int(properties['iteration'])] = metrics import pandas as pd rundata = pd.DataFrame(metricslist).sort_index(1) rundata ``` ## Register the best model Use the `local_run` object to get the best model and register it into the workspace. ``` # find the run with the highest accuracy value. best_run, fitted_model = local_run.get_output() # register model in workspace description = 'Automated Machine Learning Model' tags = None local_run.register_model(description=description, tags=tags) local_run.model_id # Use this id to deploy the model as a web service in Azure ``` ## Test the best model Use the model to predict a few random digits. Display the predicted and the image. Red font and inverse image (white on black) is used to highlight the misclassified samples. Since the model accuracy is high, you might have to run the following code a few times before you can see a misclassified sample. ``` # find 30 random samples from test set n = 30 X_test = digits.data[:100, :] y_test = digits.target[:100] sample_indices = np.random.permutation(X_test.shape[0])[0:n] test_samples = X_test[sample_indices] # predict using the model result = fitted_model.predict(test_samples) # compare actual value vs. the predicted values: i = 0 plt.figure(figsize = (20, 1)) for s in sample_indices: plt.subplot(1, n, i + 1) plt.axhline('') plt.axvline('') # use different color for misclassified sample font_color = 'red' if y_test[s] != result[i] else 'black' clr_map = plt.cm.gray if y_test[s] != result[i] else plt.cm.Greys plt.text(x = 2, y = -2, s = result[i], fontsize = 18, color = font_color) plt.imshow(X_test[s].reshape(8, 8), cmap = clr_map) i = i + 1 plt.show() ``` ## Next steps In this Azure Machine Learning tutorial, you used Python to: > * Set up your development environment > * Access and examine the data > * Train using an automated classifier locally with custom parameters > * Explore the results > * Review training results > * Register the best model Learn more about [how to configure settings for automatic training](https://aka.ms/aml-how-to-configure-auto) or [how to use automatic training on a remote resource](https://aka.ms/aml-how-to-auto-remote).
github_jupyter
# Natural Language Processing ## Importing the libraries ``` import numpy as np import matplotlib.pyplot as plt import pandas as pd ``` ## Importing the dataset ``` dataset = pd.read_csv('Restaurant_Reviews.tsv', delimiter = '\t', quoting = 3) ``` ## Cleaning the texts ``` import re import nltk nltk.download('stopwords') from nltk.corpus import stopwords from nltk.stem.porter import PorterStemmer corpus = [] for i in range(0, 1000): review = re.sub('[^a-zA-Z]', ' ', dataset['Review'][i]) review = review.lower() review = review.split() ps = PorterStemmer() all_stopwords = stopwords.words('english') all_stopwords.remove('not') review = [ps.stem(word) for word in review if not word in set(all_stopwords)] review = ' '.join(review) corpus.append(review) print(corpus) ``` ## Creating the Bag of Words model ``` from sklearn.feature_extraction.text import CountVectorizer cv = CountVectorizer(max_features = 1500) X = cv.fit_transform(corpus).toarray() y = dataset.iloc[:, -1].values ``` ## Splitting the dataset into the Training set and Test set ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0) ``` ## Training the Naive Bayes model on the Training set ``` from sklearn.naive_bayes import GaussianNB classifier = GaussianNB() classifier.fit(X_train, y_train) ``` ## Predicting the Test set results ``` y_pred = classifier.predict(X_test) print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1)) ``` ## Making the Confusion Matrix ``` from sklearn.metrics import confusion_matrix, accuracy_score cm = confusion_matrix(y_test, y_pred) print(cm) accuracy_score(y_test, y_pred) print(X_test) ``` ## Predicting if a single review is positive or negative ### Positive review Use our model to predict if the following review: "I love this restaurant so much" is positive or negative. **Solution:** We just repeat the same text preprocessing process we did before, but this time with a single review. ``` new_review = 'I love this restaurant so much' new_review = re.sub('[^a-zA-Z]', ' ', new_review) new_review = new_review.lower() new_review = new_review.split() ps = PorterStemmer() all_stopwords = stopwords.words('english') all_stopwords.remove('not') new_review = [ps.stem(word) for word in new_review if not word in set(all_stopwords)] new_review = ' '.join(new_review) new_corpus = [new_review] new_X_test = cv.transform(new_corpus).toarray() new_y_pred = classifier.predict(new_X_test) print(new_y_pred) ``` The review was correctly predicted as positive by our model. ### Negative review Use our model to predict if the following review: "I hate this restaurant so much" is positive or negative. **Solution:** We just repeat the same text preprocessing process we did before, but this time with a single review. ``` new_review = 'I hate this restaurant so much' new_review = re.sub('[^a-zA-Z]', ' ', new_review) new_review = new_review.lower() new_review = new_review.split() ps = PorterStemmer() all_stopwords = stopwords.words('english') all_stopwords.remove('not') new_review = [ps.stem(word) for word in new_review if not word in set(all_stopwords)] new_review = ' '.join(new_review) new_corpus = [new_review] new_X_test = cv.transform(new_corpus).toarray() new_y_pred = classifier.predict(new_X_test) print(new_y_pred) ``` The review was correctly predicted as negative by our model. ``` ```
github_jupyter
# Ungraded Lab: Mask R-CNN Image Segmentation Demo In this lab, you will see how to use a [Mask R-CNN](https://arxiv.org/abs/1703.06870) model from Tensorflow Hub for object detection and instance segmentation. This means that aside from the bounding boxes, the model is also able to predict segmentation masks for each instance of a class in the image. You have already encountered most of the commands here when you worked with the Object Dection API and you will see how you can use it with instance segmentation models. Let's begin! *Note: You should use a TPU runtime for this colab because of the processing requirements for this model. We have already enabled it for you but if you'll be using it in another colab, you can change the runtime from `Runtime --> Change runtime type` then select `TPU`.* ## Installation As mentioned, you will be using the Tensorflow 2 [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). You can do that by cloning the [Tensorflow Model Garden](https://github.com/tensorflow/models) and installing the object detection packages just like you did in Week 2. ``` # Clone the tensorflow models repository !git clone --depth 1 https://github.com/tensorflow/models %%bash sudo apt install -y protobuf-compiler cd models/research/ protoc object_detection/protos/*.proto --python_out=. cp object_detection/packages/tf2/setup.py . python -m pip install . ``` ## Import libraries ``` import tensorflow as tf import tensorflow_hub as hub import matplotlib import matplotlib.pyplot as plt import numpy as np from six import BytesIO from PIL import Image from six.moves.urllib.request import urlopen from object_detection.utils import label_map_util from object_detection.utils import visualization_utils as viz_utils from object_detection.utils import ops as utils_ops tf.get_logger().setLevel('ERROR') %matplotlib inline ``` ## Utilities For convenience, you will use a function to convert an image to a numpy array. You can pass in a relative path to an image (e.g. to a local directory) or a URL. You can see this in the `TEST_IMAGES` dictionary below. Some paths point to test images that come with the API package (e.g. `Beach`) while others are URLs that point to images online (e.g. `Street`). ``` def load_image_into_numpy_array(path): """Load an image from file into a numpy array. Puts image into numpy array to feed into tensorflow graph. Note that by convention we put it into a numpy array with shape (height, width, channels), where channels=3 for RGB. Args: path: the file path to the image Returns: uint8 numpy array with shape (img_height, img_width, 3) """ image = None if(path.startswith('http')): response = urlopen(path) image_data = response.read() image_data = BytesIO(image_data) image = Image.open(image_data) else: image_data = tf.io.gfile.GFile(path, 'rb').read() image = Image.open(BytesIO(image_data)) (im_width, im_height) = (image.size) return np.array(image.getdata()).reshape( (1, im_height, im_width, 3)).astype(np.uint8) # dictionary with image tags as keys, and image paths as values TEST_IMAGES = { 'Beach' : 'models/research/object_detection/test_images/image2.jpg', 'Dogs' : 'models/research/object_detection/test_images/image1.jpg', # By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg 'Phones' : 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg', # By 663highland, Source: https://commons.wikimedia.org/wiki/File:Kitano_Street_Kobe01s5s4110.jpg 'Street' : 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/08/Kitano_Street_Kobe01s5s4110.jpg/2560px-Kitano_Street_Kobe01s5s4110.jpg' } ``` ## Load the Model Tensorflow Hub provides a Mask-RCNN model that is built with the Object Detection API. You can read about the details [here](https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1). Let's first load the model and see how to use it for inference in the next section. ``` model_display_name = 'Mask R-CNN Inception ResNet V2 1024x1024' model_handle = 'https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1' print('Selected model:'+ model_display_name) print('Model Handle at TensorFlow Hub: {}'.format(model_handle)) # This will take 10 to 15 minutes to finish print('loading model...') hub_model = hub.load(model_handle) print('model loaded!') ``` ## Inference You will use the model you just loaded to do instance segmentation on an image. First, choose one of the test images you specified earlier and load it into a numpy array. ``` # Choose one and use as key for TEST_IMAGES below: # ['Beach', 'Street', 'Dogs','Phones'] image_path = TEST_IMAGES['Street'] image_np = load_image_into_numpy_array(image_path) plt.figure(figsize=(24,32)) plt.imshow(image_np[0]) plt.show() ``` You can run inference by simply passing the numpy array of a *single* image to the model. Take note that this model does not support batching. As you've seen in the notebooks in Week 2, this will output a dictionary containing the results. These are described in the `Outputs` section of the [documentation](https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1) ``` # run inference results = hub_model(image_np) # output values are tensors and we only need the numpy() # parameter when we visualize the results result = {key:value.numpy() for key,value in results.items()} # print the keys for key in result.keys(): print(key) ``` ## Visualizing the results You can now plot the results on the original image. First, you need to create the `category_index` dictionary that will contain the class IDs and names. The model was trained on the [COCO2017 dataset](https://cocodataset.org/) and the API package has the labels saved in a different format (i.e. `mscoco_label_map.pbtxt`). You can use the [create_category_index_from_labelmap](https://github.com/tensorflow/models/blob/5ee7a4627edcbbaaeb8a564d690b5f1bc498a0d7/research/object_detection/utils/label_map_util.py#L313) internal utility function to convert this to the required dictionary format. ``` PATH_TO_LABELS = './models/research/object_detection/data/mscoco_label_map.pbtxt' category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True) # sample output print(category_index[1]) print(category_index[2]) print(category_index[4]) ``` Next, you will preprocess the masks then finally plot the results. * The result dictionary contains a `detection_masks` key containing segmentation masks for each box. That will be converted first to masks that will overlay to the full image size. * You will also select mask pixel values that are above a certain threshold. We picked a value of `0.6` but feel free to modify this and see what results you will get. If you pick something lower, then you'll most likely notice mask pixels that are outside the object. * As you've seen before, you can use `visualize_boxes_and_labels_on_image_array()` to plot the results on the image. The difference this time is the parameter `instance_masks` and you will pass in the reframed detection boxes to see the segmentation masks on the image. You can see how all these are handled in the code below. ``` # Handle models with masks: label_id_offset = 0 image_np_with_mask = image_np.copy() if 'detection_masks' in result: # convert np.arrays to tensors detection_masks = tf.convert_to_tensor(result['detection_masks'][0]) detection_boxes = tf.convert_to_tensor(result['detection_boxes'][0]) # reframe the the bounding box mask to the image size. detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image_np.shape[1], image_np.shape[2]) # filter mask pixel values that are above a specified threshold detection_masks_reframed = tf.cast(detection_masks_reframed > 0.6, tf.uint8) # get the numpy array result['detection_masks_reframed'] = detection_masks_reframed.numpy() # overlay labeled boxes and segmentation masks on the image viz_utils.visualize_boxes_and_labels_on_image_array( image_np_with_mask[0], result['detection_boxes'][0], (result['detection_classes'][0] + label_id_offset).astype(int), result['detection_scores'][0], category_index, use_normalized_coordinates=True, max_boxes_to_draw=100, min_score_thresh=.70, agnostic_mode=False, instance_masks=result.get('detection_masks_reframed', None), line_thickness=8) plt.figure(figsize=(24,32)) plt.imshow(image_np_with_mask[0]) plt.show() ```
github_jupyter
<a href="https://colab.research.google.com/github/AI4Finance-Foundation/FinRL/blob/master/FinRL_Ensemble_StockTrading_ICAIF_2020.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Deep Reinforcement Learning for Stock Trading from Scratch: Multiple Stock Trading Using Ensemble Strategy Tutorials to use OpenAI DRL to trade multiple stocks using ensemble strategy in one Jupyter Notebook | Presented at ICAIF 2020 * This notebook is the reimplementation of our paper: Deep Reinforcement Learning for Automated Stock Trading: An Ensemble Strategy, using FinRL. * Check out medium blog for detailed explanations: https://medium.com/@ai4finance/deep-reinforcement-learning-for-automated-stock-trading-f1dad0126a02 * Please report any issues to our Github: https://github.com/AI4Finance-LLC/FinRL-Library/issues * **Pytorch Version** # Content * [1. Problem Definition](#0) * [2. Getting Started - Load Python packages](#1) * [2.1. Install Packages](#1.1) * [2.2. Check Additional Packages](#1.2) * [2.3. Import Packages](#1.3) * [2.4. Create Folders](#1.4) * [3. Download Data](#2) * [4. Preprocess Data](#3) * [4.1. Technical Indicators](#3.1) * [4.2. Perform Feature Engineering](#3.2) * [5.Build Environment](#4) * [5.1. Training & Trade Data Split](#4.1) * [5.2. User-defined Environment](#4.2) * [5.3. Initialize Environment](#4.3) * [6.Implement DRL Algorithms](#5) * [7.Backtesting Performance](#6) * [7.1. BackTestStats](#6.1) * [7.2. BackTestPlot](#6.2) * [7.3. Baseline Stats](#6.3) * [7.3. Compare to Stock Market Index](#6.4) <a id='0'></a> # Part 1. Problem Definition This problem is to design an automated trading solution for single stock trading. We model the stock trading process as a Markov Decision Process (MDP). We then formulate our trading goal as a maximization problem. The algorithm is trained using Deep Reinforcement Learning (DRL) algorithms and the components of the reinforcement learning environment are: * Action: The action space describes the allowed actions that the agent interacts with the environment. Normally, a ∈ A includes three actions: a ∈ {−1, 0, 1}, where −1, 0, 1 represent selling, holding, and buying one stock. Also, an action can be carried upon multiple shares. We use an action space {−k, ..., −1, 0, 1, ..., k}, where k denotes the number of shares. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or −10, respectively * Reward function: r(s, a, s′) is the incentive mechanism for an agent to learn a better action. The change of the portfolio value when action a is taken at state s and arriving at new state s', i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio values at state s′ and s, respectively * State: The state space describes the observations that the agent receives from the environment. Just as a human trader needs to analyze various information before executing a trade, so our trading agent observes many different features to better learn in an interactive environment. * Environment: Dow 30 consituents The data of the single stock that we will be using for this case study is obtained from Yahoo Finance API. The data contains Open-High-Low-Close price and volume. <a id='1'></a> # Part 2. Getting Started- Load Python Packages <a id='1.1'></a> ## 2.1. Install all the packages through FinRL library ``` # ## install finrl library !pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git ``` <a id='1.2'></a> ## 2.2. Check if the additional packages needed are present, if not install them. * Yahoo Finance API * pandas * numpy * matplotlib * stockstats * OpenAI gym * stable-baselines * tensorflow * pyfolio <a id='1.3'></a> ## 2.3. Import Packages ``` import warnings warnings.filterwarnings("ignore") import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt # matplotlib.use('Agg') import datetime %matplotlib inline from finrl import config from finrl import config_tickers from finrl.finrl_meta.preprocessor.yahoodownloader import YahooDownloader from finrl.finrl_meta.preprocessor.preprocessors import FeatureEngineer, data_split from finrl.finrl_meta.env_stock_trading.env_stocktrading import StockTradingEnv from finrl.agents.stablebaselines3.models import DRLAgent,DRLEnsembleAgent from finrl.plot import backtest_stats, backtest_plot, get_daily_return, get_baseline from pprint import pprint import sys sys.path.append("../FinRL-Library") import itertools ``` <a id='1.4'></a> ## 2.4. Create Folders ``` import os if not os.path.exists("./" + config.DATA_SAVE_DIR): os.makedirs("./" + config.DATA_SAVE_DIR) if not os.path.exists("./" + config.TRAINED_MODEL_DIR): os.makedirs("./" + config.TRAINED_MODEL_DIR) if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR): os.makedirs("./" + config.TENSORBOARD_LOG_DIR) if not os.path.exists("./" + config.RESULTS_DIR): os.makedirs("./" + config.RESULTS_DIR) ``` <a id='2'></a> # Part 3. Download Data Yahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free. * FinRL uses a class **YahooDownloader** to fetch data from Yahoo Finance API * Call Limit: Using the Public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day). ----- class YahooDownloader: Provides methods for retrieving daily stock data from Yahoo Finance API Attributes ---------- start_date : str start date of the data (modified from config.py) end_date : str end date of the data (modified from config.py) ticker_list : list a list of stock tickers (modified from config.py) Methods ------- fetch_data() Fetches data from yahoo API ``` # from config.py start_date is a string config.START_DATE print(config_tickers.DOW_30_TICKER) df = YahooDownloader(start_date = '2009-01-01', end_date = '2021-07-06', ticker_list = config_tickers.DOW_30_TICKER).fetch_data() df.head() df.tail() df.shape df.sort_values(['date','tic']).head() len(df.tic.unique()) df.tic.value_counts() ``` # Part 4: Preprocess Data Data preprocessing is a crucial step for training a high quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state. * Add technical indicators. In practical trading, various information needs to be taken into account, for example the historical stock prices, current holding shares, technical indicators, etc. In this article, we demonstrate two trend-following technical indicators: MACD and RSI. * Add turbulence index. Risk-aversion reflects whether an investor will choose to preserve the capital. It also influences one's trading strategy when facing different market volatility level. To control the risk in a worst-case scenario, such as financial crisis of 2007–2008, FinRL employs the financial turbulence index that measures extreme asset price fluctuation. ``` tech_indicators = ['macd', 'rsi_30', 'cci_30', 'dx_30'] fe = FeatureEngineer( use_technical_indicator=True, tech_indicator_list = tech_indicators, use_turbulence=True, user_defined_feature = False) processed = fe.preprocess_data(df) processed = processed.copy() processed = processed.fillna(0) processed = processed.replace(np.inf,0) processed.sample(5) ``` <a id='4'></a> # Part 5. Design Environment Considering the stochastic and interactive nature of the automated stock trading tasks, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing stock price change, taking an action and reward's calculation to have the agent adjusting its strategy accordingly. By interacting with the environment, the trading agent will derive a trading strategy with the maximized rewards as time proceeds. Our trading environments, based on OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation. The action space describes the allowed actions that the agent interacts with the environment. Normally, action a includes three actions: {-1, 0, 1}, where -1, 0, 1 represent selling, holding, and buying one share. Also, an action can be carried upon multiple shares. We use an action space {-k,…,-1, 0, 1, …, k}, where k denotes the number of shares to buy and -k denotes the number of shares to sell. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or -10, respectively. The continuous action space needs to be normalized to [-1, 1], since the policy is defined on a Gaussian distribution, which needs to be normalized and symmetric. ``` stock_dimension = len(processed.tic.unique()) state_space = 1 + 2*stock_dimension + len(tech_indicators)*stock_dimension print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}") env_kwargs = { "hmax": 100, "initial_amount": 1000000, "buy_cost_pct": 0.001, "sell_cost_pct": 0.001, "state_space": state_space, "stock_dim": stock_dimension, "tech_indicator_list": tech_indicators, "action_space": stock_dimension, "reward_scaling": 1e-4, "print_verbosity":5 } ``` <a id='5'></a> # Part 6: Implement DRL Algorithms * The implementation of the DRL algorithms are based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with a major structural refactoring, and code cleanups. * FinRL library includes fine-tuned standard DRL algorithms, such as DQN, DDPG, Multi-Agent DDPG, PPO, SAC, A2C and TD3. We also allow users to design their own DRL algorithms by adapting these DRL algorithms. * In this notebook, we are training and validating 3 agents (A2C, PPO, DDPG) using Rolling-window Ensemble Method ([reference code](https://github.com/AI4Finance-LLC/Deep-Reinforcement-Learning-for-Automated-Stock-Trading-Ensemble-Strategy-ICAIF-2020/blob/80415db8fa7b2179df6bd7e81ce4fe8dbf913806/model/models.py#L92)) ``` rebalance_window = 63 # rebalance_window is the number of days to retrain the model validation_window = 63 # validation_window is the number of days to do validation and trading (e.g. if validation_window=63, then both validation and trading period will be 63 days) train_start = '2009-01-01' train_end = '2020-04-01' val_test_start = '2020-04-01' val_test_end = '2021-07-20' ensemble_agent = DRLEnsembleAgent(df=processed, train_period=(train_start,train_end), val_test_period=(val_test_start,val_test_end), rebalance_window=rebalance_window, validation_window=validation_window, **env_kwargs) A2C_model_kwargs = { 'n_steps': 5, 'ent_coef': 0.01, 'learning_rate': 0.0005 } PPO_model_kwargs = { "ent_coef":0.01, "n_steps": 2048, "learning_rate": 0.00025, "batch_size": 64 } DDPG_model_kwargs = { #"action_noise":"ornstein_uhlenbeck", "buffer_size": 100_000, "learning_rate": 0.000005, "batch_size": 64 } timesteps_dict = {'a2c' : 30_000, 'ppo' : 100_000, 'ddpg' : 10_000 } timesteps_dict = {'a2c' : 10_000, 'ppo' : 10_000, 'ddpg' : 10_000 } df_summary = ensemble_agent.run_ensemble_strategy(A2C_model_kwargs, PPO_model_kwargs, DDPG_model_kwargs, timesteps_dict) df_summary ``` <a id='6'></a> # Part 7: Backtest Our Strategy Backtesting plays a key role in evaluating the performance of a trading strategy. Automated backtesting tool is preferred because it reduces the human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various individual plots that provide a comprehensive image of the performance of a trading strategy. ``` unique_trade_date = processed[(processed.date > val_test_start)&(processed.date <= val_test_end)].date.unique() df_trade_date = pd.DataFrame({'datadate':unique_trade_date}) df_account_value=pd.DataFrame() for i in range(rebalance_window+validation_window, len(unique_trade_date)+1,rebalance_window): temp = pd.read_csv('results/account_value_trade_{}_{}.csv'.format('ensemble',i)) df_account_value = df_account_value.append(temp,ignore_index=True) sharpe=(252**0.5)*df_account_value.account_value.pct_change(1).mean()/df_account_value.account_value.pct_change(1).std() print('Sharpe Ratio: ',sharpe) df_account_value=df_account_value.join(df_trade_date[validation_window:].reset_index(drop=True)) df_account_value.head() %matplotlib inline df_account_value.account_value.plot() ``` <a id='6.1'></a> ## 7.1 BackTestStats pass in df_account_value, this information is stored in env class ``` print("==============Get Backtest Results===========") now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M') perf_stats_all = backtest_stats(account_value=df_account_value) perf_stats_all = pd.DataFrame(perf_stats_all) #baseline stats print("==============Get Baseline Stats===========") baseline_df = get_baseline( ticker="^DJI", start = df_account_value.loc[0,'date'], end = df_account_value.loc[len(df_account_value)-1,'date']) stats = backtest_stats(baseline_df, value_col_name = 'close') ``` <a id='6.2'></a> ## 7.2 BackTestPlot ``` print("==============Compare to DJIA===========") %matplotlib inline # S&P 500: ^GSPC # Dow Jones Index: ^DJI # NASDAQ 100: ^NDX backtest_plot(df_account_value, baseline_ticker = '^DJI', baseline_start = df_account_value.loc[0,'date'], baseline_end = df_account_value.loc[len(df_account_value)-1,'date']) ```
github_jupyter
# Chaper 8 - Intrinsic Curiosity Module #### Deep Reinforcement Learning *in Action* ##### Listing 8.1 ``` import gym from nes_py.wrappers import BinarySpaceToDiscreteSpaceEnv #A import gym_super_mario_bros from gym_super_mario_bros.actions import SIMPLE_MOVEMENT, COMPLEX_MOVEMENT #B env = gym_super_mario_bros.make('SuperMarioBros-v0') env = BinarySpaceToDiscreteSpaceEnv(env, COMPLEX_MOVEMENT) #C done = True for step in range(2500): #D if done: state = env.reset() state, reward, done, info = env.step(env.action_space.sample()) env.render() env.close() ``` ##### Listing 8.2 ``` import matplotlib.pyplot as plt from skimage.transform import resize #A import numpy as np def downscale_obs(obs, new_size=(42,42), to_gray=True): if to_gray: return resize(obs, new_size, anti_aliasing=True).max(axis=2) #B else: return resize(obs, new_size, anti_aliasing=True) plt.imshow(env.render("rgb_array")) plt.imshow(downscale_obs(env.render("rgb_array"))) ``` ##### Listing 8.4 ``` import torch from torch import nn from torch import optim import torch.nn.functional as F from collections import deque def prepare_state(state): #A return torch.from_numpy(downscale_obs(state, to_gray=True)).float().unsqueeze(dim=0) def prepare_multi_state(state1, state2): #B state1 = state1.clone() tmp = torch.from_numpy(downscale_obs(state2, to_gray=True)).float() state1[0][0] = state1[0][1] state1[0][1] = state1[0][2] state1[0][2] = tmp return state1 def prepare_initial_state(state,N=3): #C state_ = torch.from_numpy(downscale_obs(state, to_gray=True)).float() tmp = state_.repeat((N,1,1)) return tmp.unsqueeze(dim=0) ``` ##### Listing 8.5 ``` def policy(qvalues, eps=None): #A if eps is not None: if torch.rand(1) < eps: return torch.randint(low=0,high=7,size=(1,)) else: return torch.argmax(qvalues) else: return torch.multinomial(F.softmax(F.normalize(qvalues)), num_samples=1) #B ``` ##### Listing 8.6 ``` from random import shuffle import torch from torch import nn from torch import optim import torch.nn.functional as F class ExperienceReplay: def __init__(self, N=500, batch_size=100): self.N = N #A self.batch_size = batch_size #B self.memory = [] self.counter = 0 def add_memory(self, state1, action, reward, state2): self.counter +=1 if self.counter % 500 == 0: #C self.shuffle_memory() if len(self.memory) < self.N: #D self.memory.append( (state1, action, reward, state2) ) else: rand_index = np.random.randint(0,self.N-1) self.memory[rand_index] = (state1, action, reward, state2) def shuffle_memory(self): #E shuffle(self.memory) def get_batch(self): #F if len(self.memory) < self.batch_size: batch_size = len(self.memory) else: batch_size = self.batch_size if len(self.memory) < 1: print("Error: No data in memory.") return None #G ind = np.random.choice(np.arange(len(self.memory)),batch_size,replace=False) batch = [self.memory[i] for i in ind] #batch is a list of tuples state1_batch = torch.stack([x[0].squeeze(dim=0) for x in batch],dim=0) action_batch = torch.Tensor([x[1] for x in batch]).long() reward_batch = torch.Tensor([x[2] for x in batch]) state2_batch = torch.stack([x[3].squeeze(dim=0) for x in batch],dim=0) return state1_batch, action_batch, reward_batch, state2_batch ``` ##### Listing 8.7 ``` class Phi(nn.Module): #A def __init__(self): super(Phi, self).__init__() self.conv1 = nn.Conv2d(3, 32, kernel_size=(3,3), stride=2, padding=1) self.conv2 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1) self.conv3 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1) self.conv4 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1) def forward(self,x): x = F.normalize(x) y = F.elu(self.conv1(x)) y = F.elu(self.conv2(y)) y = F.elu(self.conv3(y)) y = F.elu(self.conv4(y)) #size [1, 32, 3, 3] batch, channels, 3 x 3 y = y.flatten(start_dim=1) #size N, 288 return y class Gnet(nn.Module): #B def __init__(self): super(Gnet, self).__init__() self.linear1 = nn.Linear(576,256) self.linear2 = nn.Linear(256,12) def forward(self, state1,state2): x = torch.cat( (state1, state2) ,dim=1) y = F.relu(self.linear1(x)) y = self.linear2(y) y = F.softmax(y,dim=1) return y class Fnet(nn.Module): #C def __init__(self): super(Fnet, self).__init__() self.linear1 = nn.Linear(300,256) self.linear2 = nn.Linear(256,288) def forward(self,state,action): action_ = torch.zeros(action.shape[0],12) #D indices = torch.stack( (torch.arange(action.shape[0]), action.squeeze()), dim=0) indices = indices.tolist() action_[indices] = 1. x = torch.cat( (state,action_) ,dim=1) y = F.relu(self.linear1(x)) y = self.linear2(y) return y ``` ##### Listing 8.8 ``` class Qnetwork(nn.Module): def __init__(self): super(Qnetwork, self).__init__() #in_channels, out_channels, kernel_size, stride=1, padding=0 self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=(3,3), stride=2, padding=1) self.conv2 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1) self.conv3 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1) self.conv4 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1) self.linear1 = nn.Linear(288,100) self.linear2 = nn.Linear(100,12) def forward(self,x): x = F.normalize(x) y = F.elu(self.conv1(x)) y = F.elu(self.conv2(y)) y = F.elu(self.conv3(y)) y = F.elu(self.conv4(y)) y = y.flatten(start_dim=2) y = y.view(y.shape[0], -1, 32) y = y.flatten(start_dim=1) y = F.elu(self.linear1(y)) y = self.linear2(y) #size N, 12 return y ``` ##### Listing 8.9 ``` params = { 'batch_size':150, 'beta':0.2, 'lambda':0.1, 'eta': 1.0, 'gamma':0.2, 'max_episode_len':100, 'min_progress':15, 'action_repeats':6, 'frames_per_state':3 } replay = ExperienceReplay(N=1000, batch_size=params['batch_size']) Qmodel = Qnetwork() encoder = Phi() forward_model = Fnet() inverse_model = Gnet() forward_loss = nn.MSELoss(reduction='none') inverse_loss = nn.CrossEntropyLoss(reduction='none') qloss = nn.MSELoss() all_model_params = list(Qmodel.parameters()) + list(encoder.parameters()) #A all_model_params += list(forward_model.parameters()) + list(inverse_model.parameters()) opt = optim.Adam(lr=0.001, params=all_model_params) ``` ##### Listing 8.10 ``` def loss_fn(q_loss, inverse_loss, forward_loss): loss_ = (1 - params['beta']) * inverse_loss loss_ += params['beta'] * forward_loss loss_ = loss_.sum() / loss_.flatten().shape[0] loss = loss_ + params['lambda'] * q_loss return loss def reset_env(): """ Reset the environment and return a new initial state """ env.reset() state1 = prepare_initial_state(env.render('rgb_array')) return state1 ``` ##### Listing 8.11 ``` def ICM(state1, action, state2, forward_scale=1., inverse_scale=1e4): state1_hat = encoder(state1) #A state2_hat = encoder(state2) state2_hat_pred = forward_model(state1_hat.detach(), action.detach()) #B forward_pred_err = forward_scale * forward_loss(state2_hat_pred, \ state2_hat.detach()).sum(dim=1).unsqueeze(dim=1) pred_action = inverse_model(state1_hat, state2_hat) #C inverse_pred_err = inverse_scale * inverse_loss(pred_action, \ action.detach().flatten()).unsqueeze(dim=1) return forward_pred_err, inverse_pred_err ``` ##### Listing 8.12 ``` def minibatch_train(use_extrinsic=True): state1_batch, action_batch, reward_batch, state2_batch = replay.get_batch() action_batch = action_batch.view(action_batch.shape[0],1) #A reward_batch = reward_batch.view(reward_batch.shape[0],1) forward_pred_err, inverse_pred_err = ICM(state1_batch, action_batch, state2_batch) #B i_reward = (1. / params['eta']) * forward_pred_err #C reward = i_reward.detach() #D if use_explicit: #E reward += reward_batch qvals = Qmodel(state2_batch) #F reward += params['gamma'] * torch.max(qvals) reward_pred = Qmodel(state1_batch) reward_target = reward_pred.clone() indices = torch.stack( (torch.arange(action_batch.shape[0]), \ action_batch.squeeze()), dim=0) indices = indices.tolist() reward_target[indices] = reward.squeeze() q_loss = 1e5 * qloss(F.normalize(reward_pred), F.normalize(reward_target.detach())) return forward_pred_err, inverse_pred_err, q_loss ``` ##### Listing 8.13 ``` epochs = 5000 env.reset() state1 = prepare_initial_state(env.render('rgb_array')) eps=0.15 losses = [] episode_length = 0 switch_to_eps_greedy = 1000 state_deque = deque(maxlen=params['frames_per_state']) e_reward = 0. last_x_pos = env.env.env._x_position #A ep_lengths = [] use_explicit = False for i in range(epochs): opt.zero_grad() episode_length += 1 q_val_pred = Qmodel(state1) #B if i > switch_to_eps_greedy: #C action = int(policy(q_val_pred,eps)) else: action = int(policy(q_val_pred)) for j in range(params['action_repeats']): #D state2, e_reward_, done, info = env.step(action) last_x_pos = info['x_pos'] if done: state1 = reset_env() break e_reward += e_reward_ state_deque.append(prepare_state(state2)) state2 = torch.stack(list(state_deque),dim=1) #E replay.add_memory(state1, action, e_reward, state2) #F e_reward = 0 if episode_length > params['max_episode_len']: #G if (info['x_pos'] - last_x_pos) < params['min_progress']: done = True else: last_x_pos = info['x_pos'] if done: ep_lengths.append(info['x_pos']) state1 = reset_env() last_x_pos = env.env.env._x_position episode_length = 0 else: state1 = state2 if len(replay.memory) < params['batch_size']: continue forward_pred_err, inverse_pred_err, q_loss = minibatch_train(use_extrinsic=False) #H loss = loss_fn(q_loss, forward_pred_err, inverse_pred_err) #I loss_list = (q_loss.mean(), forward_pred_err.flatten().mean(),\ inverse_pred_err.flatten().mean()) losses.append(loss_list) loss.backward() opt.step() ``` ##### Test Trained Agent ``` done = True state_deque = deque(maxlen=params['frames_per_state']) for step in range(5000): if done: env.reset() state1 = prepare_initial_state(env.render('rgb_array')) q_val_pred = Qmodel(state1) action = int(policy(q_val_pred,eps)) state2, reward, done, info = env.step(action) state2 = prepare_multi_state(state1,state2) state1=state2 env.render() ```
github_jupyter
# Fuzzing with Grammars In the chapter on ["Mutation-Based Fuzzing"](MutationFuzzer.ipynb), we have seen how to use extra hints – such as sample input files – to speed up test generation. In this chapter, we take this idea one step further, by providing a _specification_ of the legal inputs to a program. Specifying inputs via a _grammar_ allows for very systematic and efficient test generation, in particular for complex input formats. Grammars also serve as the base for configuration fuzzing, API fuzzing, GUI fuzzing, and many more. ``` from bookutils import YouTubeVideo YouTubeVideo('mswyS3Wok1c') ``` **Prerequisites** * You should know how basic fuzzing works, e.g. from the [Chapter introducing fuzzing](Fuzzer.ipynb). * Knowledge on [mutation-based fuzzing](MutationFuzzer.ipynb) and [coverage](Coverage.ipynb) is _not_ required yet, but still recommended. ``` import bookutils from typing import List, Dict, Union, Any, Tuple, Optional import Fuzzer ``` ## Synopsis <!-- Automatically generated. Do not edit. --> To [use the code provided in this chapter](Importing.ipynb), write ```python >>> from fuzzingbook.Grammars import <identifier> ``` and then make use of the following features. This chapter introduces _grammars_ as a simple means to specify input languages, and to use them for testing programs with syntactically valid inputs. A grammar is defined as a mapping of nonterminal symbols to lists of alternative expansions, as in the following example: ```python >>> US_PHONE_GRAMMAR: Grammar = { >>> "<start>": ["<phone-number>"], >>> "<phone-number>": ["(<area>)<exchange>-<line>"], >>> "<area>": ["<lead-digit><digit><digit>"], >>> "<exchange>": ["<lead-digit><digit><digit>"], >>> "<line>": ["<digit><digit><digit><digit>"], >>> "<lead-digit>": ["2", "3", "4", "5", "6", "7", "8", "9"], >>> "<digit>": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"] >>> } >>> >>> assert is_valid_grammar(US_PHONE_GRAMMAR) ``` Nonterminal symbols are enclosed in angle brackets (say, `<digit>`). To generate an input string from a grammar, a _producer_ starts with the start symbol (`<start>`) and randomly chooses a random expansion for this symbol. It continues the process until all nonterminal symbols are expanded. The function `simple_grammar_fuzzer()` does just that: ```python >>> [simple_grammar_fuzzer(US_PHONE_GRAMMAR) for i in range(5)] ['(692)449-5179', '(519)230-7422', '(613)761-0853', '(979)881-3858', '(810)914-5475'] ``` In practice, though, instead of `simple_grammar_fuzzer()`, you should use [the `GrammarFuzzer` class](GrammarFuzzer.ipynb) or one of its [coverage-based](GrammarCoverageFuzzer.ipynb), [probabilistic-based](ProbabilisticGrammarFuzzer.ipynb), or [generator-based](GeneratorGrammarFuzzer.ipynb) derivatives; these are more efficient, protect against infinite growth, and provide several additional features. This chapter also introduces a [grammar toolbox](#A-Grammar-Toolbox) with several helper functions that ease the writing of grammars, such as using shortcut notations for character classes and repetitions, or extending grammars ## Input Languages All possible behaviors of a program can be triggered by its input. "Input" here can be a wide range of possible sources: We are talking about data that is read from files, from the environment, or over the network, data input by the user, or data acquired from interaction with other resources. The set of all these inputs determines how the program will behave – including its failures. When testing, it is thus very helpful to think about possible input sources, how to get them under control, and _how to systematically test them_. For the sake of simplicity, we will assume for now that the program has only one source of inputs; this is the same assumption we have been using in the previous chapters, too. The set of valid inputs to a program is called a _language_. Languages range from the simple to the complex: the CSV language denotes the set of valid comma-separated inputs, whereas the Python language denotes the set of valid Python programs. We commonly separate data languages and programming languages, although any program can also be treated as input data (say, to a compiler). The [Wikipedia page on file formats](https://en.wikipedia.org/wiki/List_of_file_formats) lists more than 1,000 different file formats, each of which is its own language. To formally describe languages, the field of *formal languages* has devised a number of *language specifications* that describe a language. *Regular expressions* represent the simplest class of these languages to denote sets of strings: The regular expression `[a-z]*`, for instance, denotes a (possibly empty) sequence of lowercase letters. *Automata theory* connects these languages to automata that accept these inputs; *finite state machines*, for instance, can be used to specify the language of regular expressions. Regular expressions are great for not-too-complex input formats, and the associated finite state machines have many properties that make them great for reasoning. To specify more complex inputs, though, they quickly encounter limitations. At the other end of the language spectrum, we have *universal grammars* that denote the language accepted by *Turing machines*. A Turing machine can compute anything that can be computed; and with Python being Turing-complete, this means that we can also use a Python program $p$ to specify or even enumerate legal inputs. But then, computer science theory also tells us that each such testing program has to be written specifically for the program to be tested, which is not the level of automation we want. ## Grammars The middle ground between regular expressions and Turing machines is covered by *grammars*. Grammars are among the most popular (and best understood) formalisms to formally specify input languages. Using a grammar, one can express a wide range of the properties of an input language. Grammars are particularly great for expressing the *syntactical structure* of an input, and are the formalism of choice to express nested or recursive inputs. The grammars we use are so-called *context-free grammars*, one of the easiest and most popular grammar formalisms. ### Rules and Expansions A grammar consists of a *start symbol* and a set of *expansion rules* (or simply *rules*) which indicate how the start symbol (and other symbols) can be expanded. As an example, consider the following grammar, denoting a sequence of two digits: ``` <start> ::= <digit><digit> <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 ``` To read such a grammar, start with the start symbol (`<start>`). An expansion rule `<A> ::= <B>` means that the symbol on the left side (`<A>`) can be replaced by the string on the right side (`<B>`). In the above grammar, `<start>` would be replaced by `<digit><digit>`. In this string again, `<digit>` would be replaced by the string on the right side of the `<digit>` rule. The special operator `|` denotes *expansion alternatives* (or simply *alternatives*), meaning that any of the digits can be chosen for an expansion. Each `<digit>` thus would be expanded into one of the given digits, eventually yielding a string between `00` and `99`. There are no further expansions for `0` to `9`, so we are all set. The interesting thing about grammars is that they can be *recursive*. That is, expansions can make use of symbols expanded earlier – which would then be expanded again. As an example, consider a grammar that describes integers: ``` <start> ::= <integer> <integer> ::= <digit> | <digit><integer> <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 ``` Here, a `<integer>` is either a single digit, or a digit followed by another integer. The number `1234` thus would be represented as a single digit `1`, followed by the integer `234`, which in turn is a digit `2`, followed by the integer `34`. If we wanted to express that an integer can be preceded by a sign (`+` or `-`), we would write the grammar as ``` <start> ::= <number> <number> ::= <integer> | +<integer> | -<integer> <integer> ::= <digit> | <digit><integer> <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 ``` These rules formally define the language: Anything that can be derived from the start symbol is part of the language; anything that cannot is not. ``` from bookutils import quiz quiz("Which of these strings cannot be produced " "from the above `<start>` symbol?", [ "`007`", "`-42`", "`++1`", "`3.14`" ], "[27 ** (1/3), 256 ** (1/4)]") ``` ### Arithmetic Expressions Let us expand our grammar to cover full *arithmetic expressions* – a poster child example for a grammar. We see that an expression (`<expr>`) is either a sum, or a difference, or a term; a term is either a product or a division, or a factor; and a factor is either a number or a parenthesized expression. Almost all rules can have recursion, and thus allow arbitrary complex expressions such as `(1 + 2) * (3.4 / 5.6 - 789)`. ``` <start> ::= <expr> <expr> ::= <term> + <expr> | <term> - <expr> | <term> <term> ::= <term> * <factor> | <term> / <factor> | <factor> <factor> ::= +<factor> | -<factor> | (<expr>) | <integer> | <integer>.<integer> <integer> ::= <digit><integer> | <digit> <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 ``` In such a grammar, if we start with `<start>` and then expand one symbol after another, randomly choosing alternatives, we can quickly produce one valid arithmetic expression after another. Such *grammar fuzzing* is highly effective as it comes to produce complex inputs, and this is what we will implement in this chapter. ``` quiz("Which of these strings cannot be produced " "from the above `<start>` symbol?", [ "`1 + 1`", "`1+1`", "`+1`", "`+(1)`", ], "4 ** 0.5") ``` ## Representing Grammars in Python Our first step in building a grammar fuzzer is to find an appropriate format for grammars. To make the writing of grammars as simple as possible, we use a format that is based on strings and lists. Our grammars in Python take the format of a _mapping_ between symbol names and expansions, where expansions are _lists_ of alternatives. A one-rule grammar for digits thus takes the form ``` DIGIT_GRAMMAR = { "<start>": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"] } ``` ### Excursion: A `Grammar` Type Let us define a type for grammars, such that we can check grammar types statically. A first attempt at a grammar type would be that each symbol (a string) is mapped to a list of expansions (strings): ``` SimpleGrammar = Dict[str, List[str]] ``` However, our `opts()` feature for adding optional attributes, which we will introduce later in this chapter, also allows expansions to be _pairs_ that consist of strings and options, where options are mappings of strings to values: ``` Option = Dict[str, Any] ``` Hence, an expansion is either a string – or a pair of a string and an option. ``` Expansion = Union[str, Tuple[str, Option]] ``` With this, we can now define a `Grammar` as a mapping of strings to `Expansion` lists. ### End of Excursion We can capture the grammar structure in a _`Grammar`_ type, in which each symbol (a string) is mapped to a list of expansions (strings): ``` Grammar = Dict[str, List[Expansion]] ``` With this `Grammar` type, the full grammar for arithmetic expressions looks like this: ``` EXPR_GRAMMAR: Grammar = { "<start>": ["<expr>"], "<expr>": ["<term> + <expr>", "<term> - <expr>", "<term>"], "<term>": ["<factor> * <term>", "<factor> / <term>", "<factor>"], "<factor>": ["+<factor>", "-<factor>", "(<expr>)", "<integer>.<integer>", "<integer>"], "<integer>": ["<digit><integer>", "<digit>"], "<digit>": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"] } ``` In the grammar, every symbol can be defined exactly once. We can access any rule by its symbol... ``` EXPR_GRAMMAR["<digit>"] ``` ....and we can check whether a symbol is in the grammar: ``` "<identifier>" in EXPR_GRAMMAR ``` Note that we assume that on the left hand side of a rule (i.e., the key in the mapping) is always a single symbol. This is the property that gives our grammars the characterization of _context-free_. ## Some Definitions We assume that the canonical start symbol is `<start>`: ``` START_SYMBOL = "<start>" ``` The handy `nonterminals()` function extracts the list of nonterminal symbols (i.e., anything between `<` and `>`, except spaces) from an expansion. ``` import re RE_NONTERMINAL = re.compile(r'(<[^<> ]*>)') def nonterminals(expansion): # In later chapters, we allow expansions to be tuples, # with the expansion being the first element if isinstance(expansion, tuple): expansion = expansion[0] return RE_NONTERMINAL.findall(expansion) assert nonterminals("<term> * <factor>") == ["<term>", "<factor>"] assert nonterminals("<digit><integer>") == ["<digit>", "<integer>"] assert nonterminals("1 < 3 > 2") == [] assert nonterminals("1 <3> 2") == ["<3>"] assert nonterminals("1 + 2") == [] assert nonterminals(("<1>", {'option': 'value'})) == ["<1>"] ``` Likewise, `is_nonterminal()` checks whether some symbol is a nonterminal: ``` def is_nonterminal(s): return RE_NONTERMINAL.match(s) assert is_nonterminal("<abc>") assert is_nonterminal("<symbol-1>") assert not is_nonterminal("+") ``` ## A Simple Grammar Fuzzer Let us now put the above grammars to use. We will build a very simple grammar fuzzer that starts with a start symbol (`<start>`) and then keeps on expanding it. To avoid expansion to infinite inputs, we place a limit (`max_nonterminals`) on the number of nonterminals. Furthermore, to avoid being stuck in a situation where we cannot reduce the number of symbols any further, we also limit the total number of expansion steps. ``` import random class ExpansionError(Exception): pass def simple_grammar_fuzzer(grammar: Grammar, start_symbol: str = START_SYMBOL, max_nonterminals: int = 10, max_expansion_trials: int = 100, log: bool = False) -> str: """Produce a string from `grammar`. `start_symbol`: use a start symbol other than `<start>` (default). `max_nonterminals`: the maximum number of nonterminals still left for expansion `max_expansion_trials`: maximum # of attempts to produce a string `log`: print expansion progress if True""" term = start_symbol expansion_trials = 0 while len(nonterminals(term)) > 0: symbol_to_expand = random.choice(nonterminals(term)) expansions = grammar[symbol_to_expand] expansion = random.choice(expansions) # In later chapters, we allow expansions to be tuples, # with the expansion being the first element if isinstance(expansion, tuple): expansion = expansion[0] new_term = term.replace(symbol_to_expand, expansion, 1) if len(nonterminals(new_term)) < max_nonterminals: term = new_term if log: print("%-40s" % (symbol_to_expand + " -> " + expansion), term) expansion_trials = 0 else: expansion_trials += 1 if expansion_trials >= max_expansion_trials: raise ExpansionError("Cannot expand " + repr(term)) return term ``` Let us see how this simple grammar fuzzer obtains an arithmetic expression from the start symbol: ``` simple_grammar_fuzzer(grammar=EXPR_GRAMMAR, max_nonterminals=3, log=True) ``` By increasing the limit of nonterminals, we can quickly get much longer productions: ``` for i in range(10): print(simple_grammar_fuzzer(grammar=EXPR_GRAMMAR, max_nonterminals=5)) ``` Note that while fuzzer does the job in most cases, it has a number of drawbacks. ``` quiz("What drawbacks does `simple_grammar_fuzzer()` have?", [ "It has a large number of string search and replace operations", "It may fail to produce a string (`ExpansionError`)", "It often picks some symbol to expand " "that does not even occur in the string", "All of the above" ], "1 << 2") ``` Indeed, `simple_grammar_fuzzer()` is rather inefficient due to the large number of search and replace operations, and it may even fail to produce a string. On the other hand, the implementation is straightforward and does the job in most cases. For this chapter, we'll stick to it; in the [next chapter](GrammarFuzzer.ipynb), we'll show how to build a more efficient one. ## Visualizing Grammars as Railroad Diagrams With grammars, we can easily specify the format for several of the examples we discussed earlier. The above arithmetic expressions, for instance, can be directly sent into `bc` (or any other program that takes arithmetic expressions). Before we introduce a few additional grammars, let us give a means to _visualize_ them, giving an alternate view to aid their understanding. _Railroad diagrams_, also called _syntax diagrams_, are a graphical representation of context-free grammars. They are read left to right, following possible "rail" tracks; the sequence of symbols encountered on the track defines the language. To produce railroad diagrams, we implement a function `syntax_diagram()`. ### Excursion: Implementing `syntax_diagram()` We use [RailroadDiagrams](RailroadDiagrams.ipynb), an external library for visualization. ``` from RailroadDiagrams import NonTerminal, Terminal, Choice, HorizontalChoice, Sequence from RailroadDiagrams import show_diagram from IPython.display import SVG ``` We first define the method `syntax_diagram_symbol()` to visualize a given symbol. Terminal symbols are denoted as ovals, whereas nonterminal symbols (such as `<term>`) are denoted as rectangles. ``` def syntax_diagram_symbol(symbol: str) -> Any: if is_nonterminal(symbol): return NonTerminal(symbol[1:-1]) else: return Terminal(symbol) SVG(show_diagram(syntax_diagram_symbol('<term>'))) ``` We define `syntax_diagram_expr()` to visualize expansion alternatives. ``` def syntax_diagram_expr(expansion: Expansion) -> Any: # In later chapters, we allow expansions to be tuples, # with the expansion being the first element if isinstance(expansion, tuple): expansion = expansion[0] symbols = [sym for sym in re.split(RE_NONTERMINAL, expansion) if sym != ""] if len(symbols) == 0: symbols = [""] # special case: empty expansion return Sequence(*[syntax_diagram_symbol(sym) for sym in symbols]) SVG(show_diagram(syntax_diagram_expr(EXPR_GRAMMAR['<term>'][0]))) ``` This is the first alternative of `<term>` – a `<factor>` followed by `*` and a `<term>`. Next, we define `syntax_diagram_alt()` for displaying alternate expressions. ``` from itertools import zip_longest def syntax_diagram_alt(alt: List[Expansion]) -> Any: max_len = 5 alt_len = len(alt) if alt_len > max_len: iter_len = alt_len // max_len alts = list(zip_longest(*[alt[i::iter_len] for i in range(iter_len)])) exprs = [[syntax_diagram_expr(expr) for expr in alt if expr is not None] for alt in alts] choices = [Choice(len(expr) // 2, *expr) for expr in exprs] return HorizontalChoice(*choices) else: return Choice(alt_len // 2, *[syntax_diagram_expr(expr) for expr in alt]) SVG(show_diagram(syntax_diagram_alt(EXPR_GRAMMAR['<digit>']))) ``` We see that a `<digit>` can be any single digit from `0` to `9`. Finally, we define `syntax_diagram()` which given a grammar, displays the syntax diagram of its rules. ``` def syntax_diagram(grammar: Grammar) -> None: from IPython.display import SVG, display for key in grammar: print("%s" % key[1:-1]) display(SVG(show_diagram(syntax_diagram_alt(grammar[key])))) ``` ### End of Excursion Let us use `syntax_diagram()` to produce a railroad diagram of our expression grammar: ``` syntax_diagram(EXPR_GRAMMAR) ``` This railroad representation will come in handy as it comes to visualizing the structure of grammars – especially for more complex grammars. ## Some Grammars Let us create (and visualize) some more grammars and use them for fuzzing. ### A CGI Grammar Here's a grammar for `cgi_decode()` introduced in the [chapter on coverage](Coverage.ipynb). ``` CGI_GRAMMAR: Grammar = { "<start>": ["<string>"], "<string>": ["<letter>", "<letter><string>"], "<letter>": ["<plus>", "<percent>", "<other>"], "<plus>": ["+"], "<percent>": ["%<hexdigit><hexdigit>"], "<hexdigit>": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "a", "b", "c", "d", "e", "f"], "<other>": # Actually, could be _all_ letters ["0", "1", "2", "3", "4", "5", "a", "b", "c", "d", "e", "-", "_"], } syntax_diagram(CGI_GRAMMAR) ``` In contrast to [basic fuzzing](Fuzzer.ipynb) or [mutation-based fuzzing](MutationFuzzer.ipynb), the grammar quickly produces all sorts of combinations: ``` for i in range(10): print(simple_grammar_fuzzer(grammar=CGI_GRAMMAR, max_nonterminals=10)) ``` ### A URL Grammar The same properties we have seen for CGI input also hold for more complex inputs. Let us use a grammar to produce a large number of valid URLs: ``` URL_GRAMMAR: Grammar = { "<start>": ["<url>"], "<url>": ["<scheme>://<authority><path><query>"], "<scheme>": ["http", "https", "ftp", "ftps"], "<authority>": ["<host>", "<host>:<port>", "<userinfo>@<host>", "<userinfo>@<host>:<port>"], "<host>": # Just a few ["cispa.saarland", "www.google.com", "fuzzingbook.com"], "<port>": ["80", "8080", "<nat>"], "<nat>": ["<digit>", "<digit><digit>"], "<digit>": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"], "<userinfo>": # Just one ["user:password"], "<path>": # Just a few ["", "/", "/<id>"], "<id>": # Just a few ["abc", "def", "x<digit><digit>"], "<query>": ["", "?<params>"], "<params>": ["<param>", "<param>&<params>"], "<param>": # Just a few ["<id>=<id>", "<id>=<nat>"], } syntax_diagram(URL_GRAMMAR) ``` Again, within milliseconds, we can produce plenty of valid inputs. ``` for i in range(10): print(simple_grammar_fuzzer(grammar=URL_GRAMMAR, max_nonterminals=10)) ``` ### A Natural Language Grammar Finally, grammars are not limited to *formal languages* such as computer inputs, but can also be used to produce *natural language*. This is the grammar we used to pick a title for this book: ``` TITLE_GRAMMAR: Grammar = { "<start>": ["<title>"], "<title>": ["<topic>: <subtopic>"], "<topic>": ["Generating Software Tests", "<fuzzing-prefix>Fuzzing", "The Fuzzing Book"], "<fuzzing-prefix>": ["", "The Art of ", "The Joy of "], "<subtopic>": ["<subtopic-main>", "<subtopic-prefix><subtopic-main>", "<subtopic-main><subtopic-suffix>"], "<subtopic-main>": ["Breaking Software", "Generating Software Tests", "Principles, Techniques and Tools"], "<subtopic-prefix>": ["", "Tools and Techniques for "], "<subtopic-suffix>": [" for <reader-property> and <reader-property>", " for <software-property> and <software-property>"], "<reader-property>": ["Fun", "Profit"], "<software-property>": ["Robustness", "Reliability", "Security"], } syntax_diagram(TITLE_GRAMMAR) from typing import Set titles: Set[str] = set() while len(titles) < 10: titles.add(simple_grammar_fuzzer( grammar=TITLE_GRAMMAR, max_nonterminals=10)) titles ``` (If you find that there is redundancy ("Robustness and Robustness") in here: In [our chapter on coverage-based fuzzing](GrammarCoverageFuzzer.ipynb), we will show how to cover each expansion only once. And if you like some alternatives more than others, [probabilistic grammar fuzzing](ProbabilisticGrammarFuzzer.ipynb) will be there for you.) ## Grammars as Mutation Seeds One very useful property of grammars is that they produce mostly valid inputs. From a syntactical standpoint, the inputs are actually _always_ valid, as they satisfy the constraints of the given grammar. (Of course, one needs a valid grammar in the first place.) However, there are also _semantical_ properties that cannot be easily expressed in a grammar. If, say, for a URL, the port range is supposed to be between 1024 and 2048, this is hard to write in a grammar. If one has to satisfy more complex constraints, one quickly reaches the limits of what a grammar can express. One way around this is to attach constraints to grammars, as we will discuss [later in this book](ConstraintFuzzer.ipynb). Another possibility is to put together the strengths of grammar-based fuzzing and [mutation-based fuzzing](MutationFuzzer.ipynb). The idea is to use the grammar-generated inputs as *seeds* for further mutation-based fuzzing. This way, we can explore not only _valid_ inputs, but also check out the _boundaries_ between valid and invalid inputs. This is particularly interesting as slightly invalid inputs allow to find parser errors (which are often abundant). As with fuzzing in general, it is the unexpected which reveals errors in programs. To use our generated inputs as seeds, we can feed them directly into the mutation fuzzers introduced earlier: ``` from MutationFuzzer import MutationFuzzer # minor dependency number_of_seeds = 10 seeds = [ simple_grammar_fuzzer( grammar=URL_GRAMMAR, max_nonterminals=10) for i in range(number_of_seeds)] seeds m = MutationFuzzer(seeds) [m.fuzz() for i in range(20)] ``` While the first 10 `fuzz()` calls return the seeded inputs (as designed), the later ones again create arbitrary mutations. Using `MutationCoverageFuzzer` instead of `MutationFuzzer`, we could again have our search guided by coverage – and thus bring together the best of multiple worlds. ## A Grammar Toolbox Let us now introduce a few techniques that help us writing grammars. ### Escapes With `<` and `>` delimiting nonterminals in our grammars, how can we actually express that some input should contain `<` and `>`? The answer is simple: Just introduce a symbol for them. ``` simple_nonterminal_grammar: Grammar = { "<start>": ["<nonterminal>"], "<nonterminal>": ["<left-angle><identifier><right-angle>"], "<left-angle>": ["<"], "<right-angle>": [">"], "<identifier>": ["id"] # for now } ``` In `simple_nonterminal_grammar`, neither the expansion for `<left-angle>` nor the expansion for `<right-angle>` can be mistaken as a nonterminal. Hence, we can produce as many as we want. ### Extending Grammars In the course of this book, we frequently run into the issue of creating a grammar by _extending_ an existing grammar with new features. Such an extension is very much like subclassing in object-oriented programming. To create a new grammar $g'$ from an existing grammar $g$, we first copy $g$ into $g'$, and then go and extend existing rules with new alternatives and/or add new symbols. Here's an example, extending the above `nonterminal` grammar with a better rule for identifiers: ``` import copy nonterminal_grammar = copy.deepcopy(simple_nonterminal_grammar) nonterminal_grammar["<identifier>"] = ["<idchar>", "<identifier><idchar>"] nonterminal_grammar["<idchar>"] = ['a', 'b', 'c', 'd'] # for now nonterminal_grammar ``` Since such an extension of grammars is a common operation, we introduce a custom function `extend_grammar()` which first copies the given grammar and then updates it from a dictionary, using the Python dictionary `update()` method: ``` def extend_grammar(grammar: Grammar, extension: Grammar = {}) -> Grammar: new_grammar = copy.deepcopy(grammar) new_grammar.update(extension) return new_grammar ``` This call to `extend_grammar()` extends `simple_nonterminal_grammar` to `nonterminal_grammar` just like the "manual" example above: ``` nonterminal_grammar = extend_grammar(simple_nonterminal_grammar, { "<identifier>": ["<idchar>", "<identifier><idchar>"], # for now "<idchar>": ['a', 'b', 'c', 'd'] } ) ``` ### Character Classes In the above `nonterminal_grammar`, we have enumerated only the first few letters; indeed, enumerating all letters or digits in a grammar manually, as in `<idchar> ::= 'a' | 'b' | 'c' ...` is a bit painful. However, remember that grammars are part of a program, and can thus also be constructed programmatically. We introduce a function `srange()` which constructs a list of characters in a string: ``` import string def srange(characters: str) -> List[Expansion]: """Construct a list with all characters in the string""" return [c for c in characters] ``` If we pass it the constant `string.ascii_letters`, which holds all ASCII letters, `srange()` returns a list of all ASCII letters: ``` string.ascii_letters srange(string.ascii_letters)[:10] ``` We can use such constants in our grammar to quickly define identifiers: ``` nonterminal_grammar = extend_grammar(nonterminal_grammar, { "<idchar>": (srange(string.ascii_letters) + srange(string.digits) + srange("-_")) } ) [simple_grammar_fuzzer(nonterminal_grammar, "<identifier>") for i in range(10)] ``` The shortcut `crange(start, end)` returns a list of all characters in the ASCII range of `start` to (including) `end`: ``` def crange(character_start: str, character_end: str) -> List[Expansion]: return [chr(i) for i in range(ord(character_start), ord(character_end) + 1)] ``` We can use this to express ranges of characters: ``` crange('0', '9') assert crange('a', 'z') == srange(string.ascii_lowercase) ``` ### Grammar Shortcuts In the above `nonterminal_grammar`, as in other grammars, we have to express repetitions of characters using _recursion_, that is, by referring to the original definition: ``` nonterminal_grammar["<identifier>"] ``` It could be a bit easier if we simply could state that a nonterminal should be a non-empty sequence of letters – for instance, as in ``` <identifier> = <idchar>+ ``` where `+` denotes a non-empty repetition of the symbol it follows. Operators such as `+` are frequently introduced as handy _shortcuts_ in grammars. Formally, our grammars come in the so-called [Backus-Naur form](https://en.wikipedia.org/wiki/Backus-Naur_form), or *BNF* for short. Operators _extend_ BNF to so-called _extended BNF*, or *EBNF* for short: * The form `<symbol>?` indicates that `<symbol>` is optional – that is, it can occur 0 or 1 times. * The form `<symbol>+` indicates that `<symbol>` can occur 1 or more times repeatedly. * The form `<symbol>*` indicates that `<symbol>` can occur 0 or more times. (In other words, it is an optional repetition.) To make matters even more interesting, we would like to use _parentheses_ with the above shortcuts. Thus, `(<foo><bar>)?` indicates that the sequence of `<foo>` and `<bar>` is optional. Using such operators, we can define the identifier rule in a simpler way. To this end, let us create a copy of the original grammar and modify the `<identifier>` rule: ``` nonterminal_ebnf_grammar = extend_grammar(nonterminal_grammar, { "<identifier>": ["<idchar>+"] } ) ``` Likewise, we can simplify the expression grammar. Consider how signs are optional, and how integers can be expressed as sequences of digits. ``` EXPR_EBNF_GRAMMAR: Grammar = { "<start>": ["<expr>"], "<expr>": ["<term> + <expr>", "<term> - <expr>", "<term>"], "<term>": ["<factor> * <term>", "<factor> / <term>", "<factor>"], "<factor>": ["<sign>?<factor>", "(<expr>)", "<integer>(.<integer>)?"], "<sign>": ["+", "-"], "<integer>": ["<digit>+"], "<digit>": srange(string.digits) } ``` Let us implement a function `convert_ebnf_grammar()` that takes such an EBNF grammar and automatically translates it into a BNF grammar. #### Excursion: Implementing `convert_ebnf_grammar()` Our aim is to convert EBNF grammars such as the ones above into a regular BNF grammar. This is done by four rules: 1. An expression `(content)op`, where `op` is one of `?`, `+`, `*`, becomes `<new-symbol>op`, with a new rule `<new-symbol> ::= content`. 2. An expression `<symbol>?` becomes `<new-symbol>`, where `<new-symbol> ::= <empty> | <symbol>`. 3. An expression `<symbol>+` becomes `<new-symbol>`, where `<new-symbol> ::= <symbol> | <symbol><new-symbol>`. 4. An expression `<symbol>*` becomes `<new-symbol>`, where `<new-symbol> ::= <empty> | <symbol><new-symbol>`. Here, `<empty>` expands to the empty string, as in `<empty> ::= `. (This is also called an *epsilon expansion*.) If these operators remind you of _regular expressions_, this is not by accident: Actually, any basic regular expression can be converted into a grammar using the above rules (and character classes with `crange()`, as defined above). Applying these rules on the examples above yields the following results: * `<idchar>+` becomes `<idchar><new-symbol>` with `<new-symbol> ::= <idchar> | <idchar><new-symbol>`. * `<integer>(.<integer>)?` becomes `<integer><new-symbol>` with `<new-symbol> ::= <empty> | .<integer>`. Let us implement these rules in three steps. ##### Creating New Symbols First, we need a mechanism to create new symbols. This is fairly straightforward. ``` def new_symbol(grammar: Grammar, symbol_name: str = "<symbol>") -> str: """Return a new symbol for `grammar` based on `symbol_name`""" if symbol_name not in grammar: return symbol_name count = 1 while True: tentative_symbol_name = symbol_name[:-1] + "-" + repr(count) + ">" if tentative_symbol_name not in grammar: return tentative_symbol_name count += 1 assert new_symbol(EXPR_EBNF_GRAMMAR, '<expr>') == '<expr-1>' ``` ##### Expanding Parenthesized Expressions Next, we need a means to extract parenthesized expressions from our expansions and expand them according to the rules above. Let's start with extracting expressions: ``` RE_PARENTHESIZED_EXPR = re.compile(r'\([^()]*\)[?+*]') def parenthesized_expressions(expansion: Expansion) -> List[str]: # In later chapters, we allow expansions to be tuples, # with the expansion being the first element if isinstance(expansion, tuple): expansion = expansion[0] return re.findall(RE_PARENTHESIZED_EXPR, expansion) assert parenthesized_expressions("(<foo>)* (<foo><bar>)+ (+<foo>)? <integer>(.<integer>)?") == [ '(<foo>)*', '(<foo><bar>)+', '(+<foo>)?', '(.<integer>)?'] ``` We can now use these to apply rule number 1, above, introducing new symbols for expressions in parentheses. ``` def convert_ebnf_parentheses(ebnf_grammar: Grammar) -> Grammar: """Convert a grammar in extended BNF to BNF""" grammar = extend_grammar(ebnf_grammar) for nonterminal in ebnf_grammar: expansions = ebnf_grammar[nonterminal] for i in range(len(expansions)): expansion = expansions[i] if not isinstance(expansion, str): expansion = expansion[0] while True: parenthesized_exprs = parenthesized_expressions(expansion) if len(parenthesized_exprs) == 0: break for expr in parenthesized_exprs: operator = expr[-1:] contents = expr[1:-2] new_sym = new_symbol(grammar) exp = grammar[nonterminal][i] opts = None if isinstance(exp, tuple): (exp, opts) = exp assert isinstance(exp, str) expansion = exp.replace(expr, new_sym + operator, 1) if opts: grammar[nonterminal][i] = (expansion, opts) else: grammar[nonterminal][i] = expansion grammar[new_sym] = [contents] return grammar ``` This does the conversion as sketched above: ``` convert_ebnf_parentheses({"<number>": ["<integer>(.<integer>)?"]}) ``` It even works for nested parenthesized expressions: ``` convert_ebnf_parentheses({"<foo>": ["((<foo>)?)+"]}) ``` ##### Expanding Operators After expanding parenthesized expressions, we now need to take care of symbols followed by operators (`?`, `*`, `+`). As with `convert_ebnf_parentheses()`, above, we first extract all symbols followed by an operator. ``` RE_EXTENDED_NONTERMINAL = re.compile(r'(<[^<> ]*>[?+*])') def extended_nonterminals(expansion: Expansion) -> List[str]: # In later chapters, we allow expansions to be tuples, # with the expansion being the first element if isinstance(expansion, tuple): expansion = expansion[0] return re.findall(RE_EXTENDED_NONTERMINAL, expansion) assert extended_nonterminals( "<foo>* <bar>+ <elem>? <none>") == ['<foo>*', '<bar>+', '<elem>?'] ``` Our converter extracts the symbol and the operator, and adds new symbols according to the rules laid out above. ``` def convert_ebnf_operators(ebnf_grammar: Grammar) -> Grammar: """Convert a grammar in extended BNF to BNF""" grammar = extend_grammar(ebnf_grammar) for nonterminal in ebnf_grammar: expansions = ebnf_grammar[nonterminal] for i in range(len(expansions)): expansion = expansions[i] extended_symbols = extended_nonterminals(expansion) for extended_symbol in extended_symbols: operator = extended_symbol[-1:] original_symbol = extended_symbol[:-1] assert original_symbol in ebnf_grammar, \ f"{original_symbol} is not defined in grammar" new_sym = new_symbol(grammar, original_symbol) exp = grammar[nonterminal][i] opts = None if isinstance(exp, tuple): (exp, opts) = exp assert isinstance(exp, str) new_exp = exp.replace(extended_symbol, new_sym, 1) if opts: grammar[nonterminal][i] = (new_exp, opts) else: grammar[nonterminal][i] = new_exp if operator == '?': grammar[new_sym] = ["", original_symbol] elif operator == '*': grammar[new_sym] = ["", original_symbol + new_sym] elif operator == '+': grammar[new_sym] = [ original_symbol, original_symbol + new_sym] return grammar convert_ebnf_operators({"<integer>": ["<digit>+"], "<digit>": ["0"]}) ``` ##### All Together We can combine the two, first extending parentheses and then operators: ``` def convert_ebnf_grammar(ebnf_grammar: Grammar) -> Grammar: return convert_ebnf_operators(convert_ebnf_parentheses(ebnf_grammar)) ``` #### End of Excursion Here's an example of using `convert_ebnf_grammar()`: ``` convert_ebnf_grammar({"<authority>": ["(<userinfo>@)?<host>(:<port>)?"]}) expr_grammar = convert_ebnf_grammar(EXPR_EBNF_GRAMMAR) expr_grammar ``` Success! We have nicely converted the EBNF grammar into BNF. With character classes and EBNF grammar conversion, we have two powerful tools that make the writing of grammars easier. We will use these again and again as it comes to working with grammars. ### Grammar Extensions During the course of this book, we frequently want to specify _additional information_ for grammars, such as [_probabilities_](ProbabilisticGrammarFuzzer.ipynb) or [_constraints_](GeneratorGrammarFuzzer.ipynb). To support these extensions, as well as possibly others, we define an _annotation_ mechanism. Our concept for annotating grammars is to add _annotations_ to individual expansions. To this end, we allow that an expansion cannot only be a string, but also a _pair_ of a string and a set of attributes, as in ```python "<expr>": [("<term> + <expr>", opts(min_depth=10)), ("<term> - <expr>", opts(max_depth=2)), "<term>"] ``` Here, the `opts()` function would allow us to express annotations that apply to the individual expansions; in this case, the addition would be annotated with a `min_depth` value of 10, and the subtraction with a `max_depth` value of 2. The meaning of these annotations is left to the individual algorithms dealing with the grammars; the general idea, though, is that they can be ignored. #### Excursion: Implementing `opts()` Our `opts()` helper function returns a mapping of its arguments to values: ``` def opts(**kwargs: Any) -> Dict[str, Any]: return kwargs opts(min_depth=10) ``` To deal with both expansion strings and pairs of expansions and annotations, we access the expansion string and the associated annotations via designated helper functions, `exp_string()` and `exp_opts()`: ``` def exp_string(expansion: Expansion) -> str: """Return the string to be expanded""" if isinstance(expansion, str): return expansion return expansion[0] exp_string(("<term> + <expr>", opts(min_depth=10))) def exp_opts(expansion: Expansion) -> Dict[str, Any]: """Return the options of an expansion. If options are not defined, return {}""" if isinstance(expansion, str): return {} return expansion[1] def exp_opt(expansion: Expansion, attribute: str) -> Any: """Return the given attribution of an expansion. If attribute is not defined, return None""" return exp_opts(expansion).get(attribute, None) exp_opts(("<term> + <expr>", opts(min_depth=10))) exp_opt(("<term> - <expr>", opts(max_depth=2)), 'max_depth') ``` Finally, we define a helper function that sets a particular option: ``` def set_opts(grammar: Grammar, symbol: str, expansion: Expansion, opts: Option = {}) -> None: """Set the options of the given expansion of grammar[symbol] to opts""" expansions = grammar[symbol] for i, exp in enumerate(expansions): if exp_string(exp) != exp_string(expansion): continue new_opts = exp_opts(exp) if opts == {} or new_opts == {}: new_opts = opts else: for key in opts: new_opts[key] = opts[key] if new_opts == {}: grammar[symbol][i] = exp_string(exp) else: grammar[symbol][i] = (exp_string(exp), new_opts) return raise KeyError( "no expansion " + repr(symbol) + " -> " + repr( exp_string(expansion))) ``` #### End of Excursion ## Checking Grammars Since grammars are represented as strings, it is fairly easy to introduce errors. So let us introduce a helper function that checks a grammar for consistency. The helper function `is_valid_grammar()` iterates over a grammar to check whether all used symbols are defined, and vice versa, which is very useful for debugging; it also checks whether all symbols are reachable from the start symbol. You don't have to delve into details here, but as always, it is important to get the input data straight before we make use of it. ### Excursion: Implementing `is_valid_grammar()` ``` import sys def def_used_nonterminals(grammar: Grammar, start_symbol: str = START_SYMBOL) -> Tuple[Optional[Set[str]], Optional[Set[str]]]: """Return a pair (`defined_nonterminals`, `used_nonterminals`) in `grammar`. In case of error, return (`None`, `None`).""" defined_nonterminals = set() used_nonterminals = {start_symbol} for defined_nonterminal in grammar: defined_nonterminals.add(defined_nonterminal) expansions = grammar[defined_nonterminal] if not isinstance(expansions, list): print(repr(defined_nonterminal) + ": expansion is not a list", file=sys.stderr) return None, None if len(expansions) == 0: print(repr(defined_nonterminal) + ": expansion list empty", file=sys.stderr) return None, None for expansion in expansions: if isinstance(expansion, tuple): expansion = expansion[0] if not isinstance(expansion, str): print(repr(defined_nonterminal) + ": " + repr(expansion) + ": not a string", file=sys.stderr) return None, None for used_nonterminal in nonterminals(expansion): used_nonterminals.add(used_nonterminal) return defined_nonterminals, used_nonterminals def reachable_nonterminals(grammar: Grammar, start_symbol: str = START_SYMBOL) -> Set[str]: reachable = set() def _find_reachable_nonterminals(grammar, symbol): nonlocal reachable reachable.add(symbol) for expansion in grammar.get(symbol, []): for nonterminal in nonterminals(expansion): if nonterminal not in reachable: _find_reachable_nonterminals(grammar, nonterminal) _find_reachable_nonterminals(grammar, start_symbol) return reachable def unreachable_nonterminals(grammar: Grammar, start_symbol=START_SYMBOL) -> Set[str]: return grammar.keys() - reachable_nonterminals(grammar, start_symbol) def opts_used(grammar: Grammar) -> Set[str]: used_opts = set() for symbol in grammar: for expansion in grammar[symbol]: used_opts |= set(exp_opts(expansion).keys()) return used_opts def is_valid_grammar(grammar: Grammar, start_symbol: str = START_SYMBOL, supported_opts: Set[str] = set()) -> bool: """Check if the given `grammar` is valid. `start_symbol`: optional start symbol (default: `<start>`) `supported_opts`: options supported (default: none)""" defined_nonterminals, used_nonterminals = \ def_used_nonterminals(grammar, start_symbol) if defined_nonterminals is None or used_nonterminals is None: return False # Do not complain about '<start>' being not used, # even if start_symbol is different if START_SYMBOL in grammar: used_nonterminals.add(START_SYMBOL) for unused_nonterminal in defined_nonterminals - used_nonterminals: print(repr(unused_nonterminal) + ": defined, but not used", file=sys.stderr) for undefined_nonterminal in used_nonterminals - defined_nonterminals: print(repr(undefined_nonterminal) + ": used, but not defined", file=sys.stderr) # Symbols must be reachable either from <start> or given start symbol unreachable = unreachable_nonterminals(grammar, start_symbol) msg_start_symbol = start_symbol if START_SYMBOL in grammar: unreachable = unreachable - \ reachable_nonterminals(grammar, START_SYMBOL) if start_symbol != START_SYMBOL: msg_start_symbol += " or " + START_SYMBOL for unreachable_nonterminal in unreachable: print(repr(unreachable_nonterminal) + ": unreachable from " + msg_start_symbol, file=sys.stderr) used_but_not_supported_opts = set() if len(supported_opts) > 0: used_but_not_supported_opts = opts_used( grammar).difference(supported_opts) for opt in used_but_not_supported_opts: print( "warning: option " + repr(opt) + " is not supported", file=sys.stderr) return used_nonterminals == defined_nonterminals and len(unreachable) == 0 ``` ### End of Excursion Let us make use of `is_valid_grammar()`. Our grammars defined above pass the test: ``` assert is_valid_grammar(EXPR_GRAMMAR) assert is_valid_grammar(CGI_GRAMMAR) assert is_valid_grammar(URL_GRAMMAR) ``` The check can also be applied to EBNF grammars: ``` assert is_valid_grammar(EXPR_EBNF_GRAMMAR) ``` These ones do not pass the test, though: ``` assert not is_valid_grammar({"<start>": ["<x>"], "<y>": ["1"]}) # type: ignore assert not is_valid_grammar({"<start>": "123"}) # type: ignore assert not is_valid_grammar({"<start>": []}) # type: ignore assert not is_valid_grammar({"<start>": [1, 2, 3]}) # type: ignore ``` (The `#type: ignore` annotations avoid static checkers flagging the above as errors). From here on, we will always use `is_valid_grammar()` when defining a grammar. ## Synopsis This chapter introduces _grammars_ as a simple means to specify input languages, and to use them for testing programs with syntactically valid inputs. A grammar is defined as a mapping of nonterminal symbols to lists of alternative expansions, as in the following example: ``` US_PHONE_GRAMMAR: Grammar = { "<start>": ["<phone-number>"], "<phone-number>": ["(<area>)<exchange>-<line>"], "<area>": ["<lead-digit><digit><digit>"], "<exchange>": ["<lead-digit><digit><digit>"], "<line>": ["<digit><digit><digit><digit>"], "<lead-digit>": ["2", "3", "4", "5", "6", "7", "8", "9"], "<digit>": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"] } assert is_valid_grammar(US_PHONE_GRAMMAR) ``` Nonterminal symbols are enclosed in angle brackets (say, `<digit>`). To generate an input string from a grammar, a _producer_ starts with the start symbol (`<start>`) and randomly chooses a random expansion for this symbol. It continues the process until all nonterminal symbols are expanded. The function `simple_grammar_fuzzer()` does just that: ``` [simple_grammar_fuzzer(US_PHONE_GRAMMAR) for i in range(5)] ``` In practice, though, instead of `simple_grammar_fuzzer()`, you should use [the `GrammarFuzzer` class](GrammarFuzzer.ipynb) or one of its [coverage-based](GrammarCoverageFuzzer.ipynb), [probabilistic-based](ProbabilisticGrammarFuzzer.ipynb), or [generator-based](GeneratorGrammarFuzzer.ipynb) derivatives; these are more efficient, protect against infinite growth, and provide several additional features. This chapter also introduces a [grammar toolbox](#A-Grammar-Toolbox) with several helper functions that ease the writing of grammars, such as using shortcut notations for character classes and repetitions, or extending grammars ## Lessons Learned * Grammars are powerful tools to express and produce syntactically valid inputs. * Inputs produced from grammars can be used as is, or used as seeds for mutation-based fuzzing. * Grammars can be extended with character classes and operators to make writing easier. ## Next Steps As they make a great foundation for generating software tests, we use grammars again and again in this book. As a sneak preview, we can use grammars to [fuzz configurations](ConfigurationFuzzer.ipynb): ``` <options> ::= <option>* <option> ::= -h | --version | -v | -d | -i | --global-config <filename> ``` We can use grammars for [fuzzing functions and APIs](APIFuzzer.ipynb) and [fuzzing graphical user interfaces](WebFuzzer.ipynb): ``` <call-sequence> ::= <call>* <call> ::= urlparse(<url>) | urlsplit(<url>) ``` We can assign [probabilities](ProbabilisticGrammarFuzzer.ipynb) and [constraints](GeneratorGrammarFuzzer.ipynb) to individual expansions: ``` <term>: 50% <factor> * <term> | 30% <factor> / <term> | 20% <factor> <integer>: <digit>+ { <integer> >= 100 } ``` All these extras become especially valuable as we can 1. _infer grammars automatically_, dropping the need to specify them manually, and 2. _guide them towards specific goals_ such as coverage or critical functions; which we also discuss for all techniques in this book. To get there, however, we still have bit of homework to do. In particular, we first have to learn how to * [create an efficient grammar fuzzer](GrammarFuzzer.ipynb) ## Background As one of the foundations of human language, grammars have been around as long as human language existed. The first _formalization_ of generative grammars was by Dakṣiputra Pāṇini in 350 BC \cite{Panini350bce}. As a general means to express formal languages for both data and programs, their role in computer science cannot be overstated. The seminal work by Chomsky \cite{Chomsky1956} introduced the central models of regular languages, context-free grammars, context-sensitive grammars, and universal grammars as they are used (and taught) in computer science as a means to specify input and programming languages ever since. The use of grammars for _producing_ test inputs goes back to Burkhardt \cite{Burkhardt1967}, to be later rediscovered and applied by Hanford \cite{Hanford1970} and Purdom \cite{Purdom1972}. The most important use of grammar testing since then has been *compiler testing*. Actually, grammar-based testing is one important reason why compilers and Web browsers work as they should: * The [CSmith](https://embed.cs.utah.edu/csmith/) tool \cite{Yang2011} specifically targets C programs, starting with a C grammar and then applying additional steps, such as referring to variables and functions defined earlier or ensuring integer and type safety. Their authors have used it "to find and report more than 400 previously unknown compiler bugs." * The [LangFuzz](http://issta2016.cispa.saarland/interview-with-christian-holler/) work \cite{Holler2012}, which shares two authors with this book, uses a generic grammar to produce outputs, and is used day and night to generate JavaScript programs and test their interpreters; as of today, it has found more than 2,600 bugs in browsers such as Mozilla Firefox, Google Chrome, and Microsoft Edge. * The [EMI Project](http://web.cs.ucdavis.edu/~su/emi-project/) \cite{Le2014} uses grammars to stress-test C compilers, transforming known tests into alternative programs that should be semantically equivalent over all inputs. Again, this has led to more than 100 bugs in C compilers being fixed. * [Grammarinator](https://github.com/renatahodovan/grammarinator) \cite{Hodovan2018} is an open-source grammar fuzzer (written in Python!), using the popular ANTLR format as grammar specification. Like LangFuzz, it uses the grammar for both parsing and producing, and has found more than 100 issues in the *JerryScript* lightweight JavaScript engine and an associated platform. * [Domato](https://github.com/googleprojectzero/domato) is a generic grammar generation engine that is specifically used for fuzzing DOM input. It has revealed a number of security issues in popular Web browsers. Compilers and Web browsers, of course, are not only domains where grammars are needed for testing, but also domains where grammars are well-known. Our claim in this book is that grammars can be used to generate almost _any_ input, and our aim is to empower you to do precisely that. ## Exercises ### Exercise 1: A JSON Grammar Take a look at the [JSON specification](http://www.json.org) and derive a grammar from it: * Use _character classes_ to express valid characters * Use EBNF to express repetitions and optional parts * Assume that - a string is a sequence of digits, ASCII letters, punctuation and space characters without quotes or escapes - whitespace is just a single space. * Use `is_valid_grammar()` to ensure the grammar is valid. Feed the grammar into `simple_grammar_fuzzer()`. Do you encounter any errors, and why? **Solution.** This is a fairly straightforward translation: ``` CHARACTERS_WITHOUT_QUOTE = (string.digits + string.ascii_letters + string.punctuation.replace('"', '').replace('\\', '') + ' ') JSON_EBNF_GRAMMAR: Grammar = { "<start>": ["<json>"], "<json>": ["<element>"], "<element>": ["<ws><value><ws>"], "<value>": ["<object>", "<array>", "<string>", "<number>", "true", "false", "null", "'; DROP TABLE STUDENTS"], "<object>": ["{<ws>}", "{<members>}"], "<members>": ["<member>(,<members>)*"], "<member>": ["<ws><string><ws>:<element>"], "<array>": ["[<ws>]", "[<elements>]"], "<elements>": ["<element>(,<elements>)*"], "<element>": ["<ws><value><ws>"], "<string>": ['"' + "<characters>" + '"'], "<characters>": ["<character>*"], "<character>": srange(CHARACTERS_WITHOUT_QUOTE), "<number>": ["<int><frac><exp>"], "<int>": ["<digit>", "<onenine><digits>", "-<digits>", "-<onenine><digits>"], "<digits>": ["<digit>+"], "<digit>": ['0', "<onenine>"], "<onenine>": crange('1', '9'), "<frac>": ["", ".<digits>"], "<exp>": ["", "E<sign><digits>", "e<sign><digits>"], "<sign>": ["", '+', '-'], # "<ws>": srange(string.whitespace) "<ws>": [" "] } assert is_valid_grammar(JSON_EBNF_GRAMMAR) JSON_GRAMMAR = convert_ebnf_grammar(JSON_EBNF_GRAMMAR) from ExpectError import ExpectError for i in range(50): with ExpectError(): print(simple_grammar_fuzzer(JSON_GRAMMAR, '<object>')) ``` We get these errors because `simple_grammar_fuzzer()` first expands to a maximum number of elements, and then is limited because every further expansion would _increase_ the number of nonterminals, even though these may eventually reduce the string length. This issue is addressed in the [next chapter](GrammarFuzzer.ipynb), introducing a more solid algorithm for producing strings from grammars. ### Exercise 2: Finding Bugs The name `simple_grammar_fuzzer()` does not come by accident: The way it expands grammars is limited in several ways. What happens if you apply `simple_grammar_fuzzer()` on `nonterminal_grammar` and `expr_grammar`, as defined above, and why? **Solution**. `nonterminal_grammar` does not work because `simple_grammar_fuzzer()` eventually tries to expand the just generated nonterminal: ``` from ExpectError import ExpectError, ExpectTimeout with ExpectError(): simple_grammar_fuzzer(nonterminal_grammar, log=True) ``` For `expr_grammar`, things are even worse, as `simple_grammar_fuzzer()` can start a series of infinite expansions: ``` with ExpectTimeout(1): for i in range(10): print(simple_grammar_fuzzer(expr_grammar)) ``` Both issues are addressed and discussed in the [next chapter](GrammarFuzzer.ipynb), introducing a more solid algorithm for producing strings from grammars. ### Exercise 3: Grammars with Regular Expressions In a _grammar extended with regular expressions_, we can use the special form ``` /regex/ ``` to include regular expressions in expansions. For instance, we can have a rule ``` <integer> ::= /[+-]?[0-9]+/ ``` to quickly express that an integer is an optional sign, followed by a sequence of digits. #### Part 1: Convert regular expressions Write a converter `convert_regex(r)` that takes a regular expression `r` and creates an equivalent grammar. Support the following regular expression constructs: * `*`, `+`, `?`, `()` should work just in EBNFs, above. * `a|b` should translate into a list of alternatives `[a, b]`. * `.` should match any character except newline. * `[abc]` should translate into `srange("abc")` * `[^abc]` should translate into the set of ASCII characters _except_ `srange("abc")`. * `[a-b]` should translate into `crange(a, b)` * `[^a-b]` should translate into the set of ASCII characters _except_ `crange(a, b)`. Example: `convert_regex(r"[0-9]+")` should yield a grammar such as ```python { "<start>": ["<s1>"], "<s1>": [ "<s2>", "<s1><s2>" ], "<s2>": crange('0', '9') } ``` **Solution.** Left as exercise to the reader. #### Part 2: Identify and expand regular expressions Write a converter `convert_regex_grammar(g)` that takes a EBNF grammar `g` containing regular expressions in the form `/.../` and creates an equivalent BNF grammar. Support the regular expression constructs as above. Example: `convert_regex_grammar({ "<integer>" : "/[+-]?[0-9]+/" })` should yield a grammar such as ```python { "<integer>": ["<s1><s3>"], "<s1>": [ "", "<s2>" ], "<s2>": srange("+-"), "<s3>": [ "<s4>", "<s4><s3>" ], "<s4>": crange('0', '9') } ``` Optional: Support _escapes_ in regular expressions: `\c` translates to the literal character `c`; `\/` translates to `/` (and thus does not end the regular expression); `\\` translates to `\`. **Solution.** Left as exercise to the reader. ### Exercise 4: Defining Grammars as Functions (Advanced) To obtain a nicer syntax for specifying grammars, one can make use of Python constructs which then will be _parsed_ by an additional function. For instance, we can imagine a grammar definition which uses `|` as a means to separate alternatives: ``` def expression_grammar_fn(): start = "<expr>" expr = "<term> + <expr>" | "<term> - <expr>" term = "<factor> * <term>" | "<factor> / <term>" | "<factor>" factor = "+<factor>" | "-<factor>" | "(<expr>)" | "<integer>.<integer>" | "<integer>" integer = "<digit><integer>" | "<digit>" digit = '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' ``` If we execute `expression_grammar_fn()`, this will yield an error. Yet, the purpose of `expression_grammar_fn()` is not to be executed, but to be used as _data_ from which the grammar will be constructed. ``` with ExpectError(): expression_grammar_fn() ``` To this end, we make use of the `ast` (abstract syntax tree) and `inspect` (code inspection) modules. ``` import ast import inspect ``` First, we obtain the source code of `expression_grammar_fn()`... ``` source = inspect.getsource(expression_grammar_fn) source ``` ... which we then parse into an abstract syntax tree: ``` tree = ast.parse(source) ``` We can now parse the tree to find operators and alternatives. `get_alternatives()` iterates over all nodes `op` of the tree; If the node looks like a binary _or_ (`|` ) operation, we drill deeper and recurse. If not, we have reached a single production, and we try to get the expression from the production. We define the `to_expr` parameter depending on how we want to represent the production. In this case, we represent a single production by a single string. ``` def get_alternatives(op, to_expr=lambda o: o.s): if isinstance(op, ast.BinOp) and isinstance(op.op, ast.BitOr): return get_alternatives(op.left, to_expr) + [to_expr(op.right)] return [to_expr(op)] ``` `funct_parser()` takes the abstract syntax tree of a function (say, `expression_grammar_fn()`) and iterates over all assignments: ``` def funct_parser(tree, to_expr=lambda o: o.s): return {assign.targets[0].id: get_alternatives(assign.value, to_expr) for assign in tree.body[0].body} ``` The result is a grammar in our regular format: ``` grammar = funct_parser(tree) for symbol in grammar: print(symbol, "::=", grammar[symbol]) ``` #### Part 1 (a): One Single Function Write a single function `define_grammar(fn)` that takes a grammar defined as function (such as `expression_grammar_fn()`) and returns a regular grammar. **Solution**. This is straightforward: ``` def define_grammar(fn, to_expr=lambda o: o.s): source = inspect.getsource(fn) tree = ast.parse(source) grammar = funct_parser(tree, to_expr) return grammar define_grammar(expression_grammar_fn) ``` **Note.** Python allows us to directly bind the generated grammar to the name `expression_grammar_fn` using function decorators. This can be used to ensure that we do not have a faulty function lying around: ```python @define_grammar def expression_grammar(): start = "<expr>" expr = "<term> + <expr>" | "<term> - <expr>" #... ``` #### Part 1 (b): Alternative representations We note that the grammar representation we designed previously does not allow simple generation of alternatives such as `srange()` and `crange()`. Further, one may find the string representation of expressions limiting. It turns out that it is simple to extend our grammar definition to support grammars such as below: ``` def define_name(o): return o.id if isinstance(o, ast.Name) else o.s def define_expr(op): if isinstance(op, ast.BinOp) and isinstance(op.op, ast.Add): return (*define_expr(op.left), define_name(op.right)) return (define_name(op),) def define_ex_grammar(fn): return define_grammar(fn, define_expr) ``` The grammar: ```python @define_ex_grammar def expression_grammar(): start = expr expr = (term + '+' + expr | term + '-' + expr) term = (factor + '*' + term | factor + '/' + term | factor) factor = ('+' + factor | '-' + factor | '(' + expr + ')' | integer + '.' + integer | integer) integer = (digit + integer | digit) digit = '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' for symbol in expression_grammar: print(symbol, "::=", expression_grammar[symbol]) ``` **Note.** The grammar data structure thus obtained is a little more detailed than the standard data structure. It represents each production as a tuple. We note that we have not enabled `srange()` or `crange()` in the above grammar. How would you go about adding these? (*Hint:* wrap `define_expr()` to look for `ast.Call`) #### Part 2: Extended Grammars Introduce an operator `*` that takes a pair `(min, max)` where `min` and `max` are the minimum and maximum number of repetitions, respectively. A missing value `min` stands for zero; a missing value `max` for infinity. ``` def identifier_grammar_fn(): identifier = idchar * (1,) ``` With the `*` operator, we can generalize the EBNF operators – `?` becomes (0,1), `*` becomes (0,), and `+` becomes (1,). Write a converter that takes an extended grammar defined using `*`, parse it, and convert it into BNF. **Solution.** No solution yet :-)
github_jupyter
``` import requests import json import re # Setting the base URL for the ARAX reasoner and its endpoint endpoint_url = 'https://arax.rtx.ai/api/rtx/v1/query' # Given we have some chemical substances which are linked to asthma exacerbations for a certain cohort of patients, # we want to find what diseases are associated with them # This DSL command extracts the pathways to view which diseases are associated with those chemicals. # We do this by creating a dict of the request, specifying a start previous Message and the list of DSL commands query = {"previous_message_processing_plan": {"processing_actions": [ "add_qnode(curie=CHEMBL.COMPOUND:CHEMBL896, type= chemical_substance, id=n0)", "add_qnode(type=protein, id=n1)", "add_qnode(type=disease, id=n2)", "add_qedge(source_id=n0, target_id=n1, id=e0)", "add_qedge(source_id=n1, target_id=n2, id=e1)", "expand()", #"expand(kp=RTX-KG2)". "resultify()", "filter_results(action=limit_number_of_results, max_results=20)", "return(message=true, store=true)", ]}} # Sending the request to RTX and check the status print(f"Executing query at {endpoint_url}\nPlease wait...") response_content = requests.post(endpoint_url, json=query, headers={'accept': 'application/json'}) status_code = response_content.status_code if status_code != 200: print("ERROR returned with status "+str(status_code)) print(response_content.json()) else: print(f"Response returned with status {status_code}") # Unpack respsonse from JSON and display the information log response_dict = response_content.json() for message in response_dict['log']: if message['level'] >= 20: print(message['prefix']+message['message']) # These URLs provide direct access to resulting data and GUI if 'id' in response_dict and response_dict['id'] is not None: print(f"Data: {response_dict['id']}") match = re.search(r'(\d+)$', response_dict['id']) if match: print(f"GUI: https://arax.rtx.ai/?m={match.group(1)}") else: print("No id was returned in response") # Or you can view the entire Translator API response Message print(json.dumps(response_dict, indent=2, sort_keys=True)) # Setting the base URL for the ARAX reasoner and its endpoint endpoint_url = 'https://arax.rtx.ai/api/rtx/v1/query' # Given we have some chemical substances which are linked to asthma exacerbations for a certain cohort of patients, we want to # find what diseases are associated with them # This DSL command extracts the pathways to view which phenotypes are associated with those chemicals. # We do this by creating a dict of the request, specifying a start previous Message and the list of DSL commands query = {"previous_message_processing_plan": {"processing_actions": [ "add_qnode(curie=CHEMBL.COMPOUND:CHEMBL896, type= chemical_substance, id=n0)", "add_qnode(type=protein, id=n1)", "add_qnode(type=phenotypic_feature, id=n2)", "add_qedge(source_id=n0, target_id=n1, id=e0)", "add_qedge(source_id=n1, target_id=n2, id=e1)", "expand()", #"expand(kp=RTX-KG2)". "resultify()", "filter_results(action=limit_number_of_results, max_results=20)", "return(message=true, store=true)", ]}} # Sending the request to RTX and check the status print(f"Executing query at {endpoint_url}\nPlease wait...") response_content = requests.post(endpoint_url, json=query, headers={'accept': 'application/json'}) status_code = response_content.status_code if status_code != 200: print("ERROR returned with status "+str(status_code)) print(response_content.json()) else: print(f"Response returned with status {status_code}") # Unpack respsonse from JSON and display the information log response_dict = response_content.json() for message in response_dict['log']: if message['level'] >= 20: print(message['prefix']+message['message']) # These URLs provide direct access to resulting data and GUI if 'id' in response_dict and response_dict['id'] is not None: print(f"Data: {response_dict['id']}") match = re.search(r'(\d+)$', response_dict['id']) if match: print(f"GUI: https://arax.rtx.ai/?m={match.group(1)}") else: print("No id was returned in response") # Or you can view the entire Translator API response Message print(json.dumps(response_dict, indent=2, sort_keys=True)) ```
github_jupyter
# Tensor Manipulation: Psi4 and NumPy manipulation routines Contracting tensors together forms the core of the Psi4NumPy project. First let us consider the popluar [Einstein Summation Notation](https://en.wikipedia.org/wiki/Einstein_notation) which allows for very succinct descriptions of a given tensor contraction. For example, let us consider a [inner (dot) product](https://en.wikipedia.org/wiki/Dot_product): $$c = \sum_{ij} A_{ij} * B_{ij}$$ With the Einstein convention, all indices that are repeated are considered summed over, and the explicit summation symbol is dropped: $$c = A_{ij} * B_{ij}$$ This can be extended to [matrix multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication): \begin{align} \rm{Conventional}\;\;\; C_{ik} &= \sum_{j} A_{ij} * B_{jk} \\ \rm{Einstein}\;\;\; C &= A_{ij} * B_{jk} \\ \end{align} Where the $C$ matrix has *implied* indices of $C_{ik}$ as the only repeated index is $j$. However, there are many cases where this notation fails. Thus we often use the generalized Einstein convention. To demonstrate let us examine a [Hadamard product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)): $$C_{ij} = \sum_{ij} A_{ij} * B_{ij}$$ This operation is nearly identical to the dot product above, and is not able to be written in pure Einstein convention. The generalized convention allows for the use of indices on the left hand side of the equation: $$C_{ij} = A_{ij} * B_{ij}$$ Usually it should be apparent within the context the exact meaning of a given expression. Finally we also make use of Matrix notation: \begin{align} {\rm Matrix}\;\;\; \bf{D} &= \bf{A B C} \\ {\rm Einstein}\;\;\; D_{il} &= A_{ij} B_{jk} C_{kl} \end{align} Note that this notation is signified by the use of bold characters to denote matrices and consecutive matrices next to each other imply a chain of matrix multiplications! ## Einsum To perform most operations we turn to [NumPy's einsum function](https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html) which allows the Einsten convention as an input. In addition to being much easier to read, manipulate, and change, it is also much more efficient that a pure Python implementation. To begin let us consider the construction of the following tensor (which you may recognize): $$G_{pq} = 2.0 * I_{pqrs} D_{rs} - 1.0 * I_{prqs} D_{rs}$$ First let us import our normal suite of modules: ``` import numpy as np import psi4 import time ``` We can then use conventional Python loops and einsum to perform the same task. Keep size relatively small as these 4-index tensors grow very quickly in size. ``` size = 20 if size > 30: raise Exception("Size must be smaller than 30.") D = np.random.rand(size, size) I = np.random.rand(size, size, size, size) # Build the fock matrix using loops, while keeping track of time tstart_loop = time.time() Gloop = np.zeros((size, size)) for p in range(size): for q in range(size): for r in range(size): for s in range(size): Gloop[p, q] += 2 * I[p, q, r, s] * D[r, s] Gloop[p, q] -= I[p, r, q, s] * D[r, s] g_loop_time = time.time() - tstart_loop # Build the fock matrix using einsum, while keeping track of time tstart_einsum = time.time() J = np.einsum('pqrs,rs', I, D, optimize=True) K = np.einsum('prqs,rs', I, D, optimize=True) G = 2 * J - K einsum_time = time.time() - tstart_einsum # Make sure the correct answer is obtained print('The loop and einsum fock builds match: %s\n' % np.allclose(G, Gloop)) # Print out relative times for explicit loop vs einsum Fock builds print('Time for loop G build: %14.4f seconds' % g_loop_time) print('Time for einsum G build: %14.4f seconds' % einsum_time) print('G builds with einsum are {:3.4f} times faster than Python loops!'.format(g_loop_time / einsum_time)) ``` As you can see, the einsum function is considerably faster than the pure Python loops and, in this author's opinion, much cleaner and easier to use. ## Dot Now let us turn our attention to a more canonical matrix multiplication example such as: $$D_{il} = A_{ij} B_{jk} C_{kl}$$ We could perform this operation using einsum; however, matrix multiplication is an extremely common operation in all branches of linear algebra. Thus, these functions have been optimized to be more efficient than the `einsum` function. The matrix product will explicitly compute the following operation: $$C_{ij} = A_{ij} * B_{ij}$$ This can be called with [NumPy's dot function](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html#numpy.dot). ``` size = 200 A = np.random.rand(size, size) B = np.random.rand(size, size) C = np.random.rand(size, size) # First compute the pair product tmp_dot = np.dot(A, B) tmp_einsum = np.einsum('ij,jk->ik', A, B, optimize=True) print("Pair product allclose: %s" % np.allclose(tmp_dot, tmp_einsum)) ``` Now that we have proved exactly what the dot product does, let us consider the full chain and do a timing comparison: ``` D_dot = np.dot(A, B).dot(C) D_einsum = np.einsum('ij,jk,kl->il', A, B, C, optimize=True) print("Chain multiplication allclose: %s" % np.allclose(D_dot, D_einsum)) print("\nnp.dot time:") %timeit np.dot(A, B).dot(C) print("\nnp.einsum time") # no optimization here for illustrative purposes! %timeit np.einsum('ij,jk,kl->il', A, B, C) ``` On most machines the `np.dot` times are roughly ~2,000 times faster. The reason is twofold: - The `np.dot` routines typically call [Basic Linear Algebra Subprograms (BLAS)](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms). The BLAS routines are highly optimized and threaded versions of the code. - The `np.einsum` code will not factorize the operation by default; Thus, the overall cost is ${\cal O}(N^4)$ (as there are four indices) rather than the factored $(\bf{A B}) \bf{C}$ which runs ${\cal O}(N^3)$. The first issue is difficult to overcome; however, the second issue can be resolved by the following: ``` print("np.einsum factorized time:") # no optimization here for illustrative purposes! %timeit np.einsum('ik,kl->il', np.einsum('ij,jk->ik', A, B), C) ``` On most machines the factorized `einsum` expression is only ~10 times slower than `np.dot`. While a massive improvement, this is a clear demonstration the BLAS usage is usually recommended. It is a tradeoff between speed and readability. The Psi4NumPy project tends to lean toward `einsum` usage except in case where the benefit is too large to pass up. Starting in NumPy 1.12, the [einsum function](https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html) has a `optimize` flag which will automatically factorize the einsum code for you using a greedy algorithm, leading to considerable speedups at almost no cost: ``` print("\nnp.einsum optimized time") %timeit np.einsum('ij,jk,kl->il', A, B, C, optimize=True) ``` In this example, using `optimize=True` for automatic factorization is only 25% slower than `np.dot`. Furthermore, it is ~5 times faster than factorizing the expression by hand, which represents a very good trade-off between speed and readability. When unsure, `optimize=True` is strongly recommended. ## Complex tensor manipulations Let us consider a popular index transformation example: $$M_{pqrs} = C_{pi} C_{qj} I_{ijkl} C_{rk} C_{sl}$$ Here, a naive `einsum` call would scale like $\mathcal{O}(N^8)$ which translates to an extremely costly computation for all but the smallest $N$. ``` # Grab orbitals size = 15 if size > 15: raise Exception("Size must be smaller than 15.") C = np.random.rand(size, size) I = np.random.rand(size, size, size, size) # Numpy einsum N^8 transformation. print("\nStarting Numpy's N^8 transformation...") n8_tstart = time.time() # no optimization here for illustrative purposes! MO_n8 = np.einsum('pI,qJ,pqrs,rK,sL->IJKL', C, C, I, C, C) n8_time = time.time() - n8_tstart print("...transformation complete in %.3f seconds." % (n8_time)) # Numpy einsum N^5 transformation. print("\n\nStarting Numpy's N^5 transformation with einsum...") n5_tstart = time.time() # no optimization here for illustrative purposes! MO_n5 = np.einsum('pA,pqrs->Aqrs', C, I) MO_n5 = np.einsum('qB,Aqrs->ABrs', C, MO_n5) MO_n5 = np.einsum('rC,ABrs->ABCs', C, MO_n5) MO_n5 = np.einsum('sD,ABCs->ABCD', C, MO_n5) n5_time = time.time() - n5_tstart print("...transformation complete in %.3f seconds." % n5_time) print("\nN^5 %4.2f faster than N^8 algorithm!" % (n8_time / n5_time)) print("Allclose: %s" % np.allclose(MO_n8, MO_n5)) # Numpy einsum optimized transformation. print("\nNow Numpy's optimized transformation...") n8_tstart = time.time() MO_n8 = np.einsum('pI,qJ,pqrs,rK,sL->IJKL', C, C, I, C, C, optimize=True) n8_time_opt = time.time() - n8_tstart print("...optimized transformation complete in %.3f seconds." % (n8_time_opt)) # Numpy GEMM N^5 transformation. # Try to figure this one out! print("\n\nStarting Numpy's N^5 transformation with dot...") dgemm_tstart = time.time() MO = np.dot(C.T, I.reshape(size, -1)) MO = np.dot(MO.reshape(-1, size), C) MO = MO.reshape(size, size, size, size).transpose(1, 0, 3, 2) MO = np.dot(C.T, MO.reshape(size, -1)) MO = np.dot(MO.reshape(-1, size), C) MO = MO.reshape(size, size, size, size).transpose(1, 0, 3, 2) dgemm_time = time.time() - dgemm_tstart print("...transformation complete in %.3f seconds." % dgemm_time) print("\nAllclose: %s" % np.allclose(MO_n8, MO)) print("N^5 %4.2f faster than N^8 algorithm!" % (n8_time / dgemm_time)) ```
github_jupyter
# Joint Probability This notebook is part of [Bite Size Bayes](https://allendowney.github.io/BiteSizeBayes/), an introduction to probability and Bayesian statistics using Python. Copyright 2020 Allen B. Downey License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) The following cell downloads `utils.py`, which contains some utility function we'll need. ``` from os.path import basename, exists def download(url): filename = basename(url) if not exists(filename): from urllib.request import urlretrieve local, _ = urlretrieve(url, filename) print('Downloaded ' + local) download('https://github.com/AllenDowney/BiteSizeBayes/raw/master/utils.py') ``` If everything we need is installed, the following cell should run with no error messages. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt ``` ## Review So far we have been working with distributions of only one variable. In this notebook we'll take a step toward multivariate distributions, starting with two variables. We'll use cross-tabulation to compute a **joint distribution**, then use the joint distribution to compute **conditional distributions** and **marginal distributions**. We will re-use `pmf_from_seq`, which I introduced in a previous notebook. ``` def pmf_from_seq(seq): """Make a PMF from a sequence of values. seq: sequence returns: Series representing a PMF """ pmf = pd.Series(seq).value_counts(sort=False).sort_index() pmf /= pmf.sum() return pmf ``` ## Cross tabulation To understand joint distributions, I'll start with cross tabulation. And to demonstrate cross tabulation, I'll generate a dataset of colors and fruits. Here are the possible values. ``` colors = ['red', 'yellow', 'green'] fruits = ['apple', 'banana', 'grape'] ``` And here's a random sample of 100 fruits. ``` np.random.seed(2) fruit_sample = np.random.choice(fruits, 100, replace=True) ``` We can use `pmf_from_seq` to compute the distribution of fruits. ``` pmf_fruit = pmf_from_seq(fruit_sample) pmf_fruit ``` And here's what it looks like. ``` pmf_fruit.plot.bar(color='C0') plt.ylabel('Probability') plt.title('Distribution of fruit'); ``` Similarly, here's a random sample of colors. ``` color_sample = np.random.choice(colors, 100, replace=True) ``` Here's the distribution of colors. ``` pmf_color = pmf_from_seq(color_sample) pmf_color ``` And here's what it looks like. ``` pmf_color.plot.bar(color='C1') plt.ylabel('Probability') plt.title('Distribution of colors'); ``` Looking at these distributions, we know the proportion of each fruit, ignoring color, and we know the proportion of each color, ignoring fruit type. But if we only have the distributions and not the original data, we don't know how many apples are green, for example, or how many yellow fruits are bananas. We can compute that information using `crosstab`, which computes the number of cases for each combination of fruit type and color. ``` xtab = pd.crosstab(color_sample, fruit_sample, rownames=['color'], colnames=['fruit']) xtab ``` The result is a DataFrame with colors along the rows and fruits along the columns. ## Heatmap The following function plots a cross tabulation using a pseudo-color plot, also known as a heatmap. It represents each element of the cross tabulation with a colored square, where the color corresponds to the magnitude of the element. The following function generates a heatmap using the Matplotlib function `pcolormesh`: ``` def plot_heatmap(xtab): """Make a heatmap to represent a cross tabulation. xtab: DataFrame containing a cross tabulation """ plt.pcolormesh(xtab) # label the y axis ys = xtab.index plt.ylabel(ys.name) locs = np.arange(len(ys)) + 0.5 plt.yticks(locs, ys) # label the x axis xs = xtab.columns plt.xlabel(xs.name) locs = np.arange(len(xs)) + 0.5 plt.xticks(locs, xs) plt.colorbar() plt.gca().invert_yaxis() plot_heatmap(xtab) ``` ## Joint Distribution A cross tabulation represents the "joint distribution" of two variables, which is a complete description of two distributions, including all of the conditional distributions. If we normalize `xtab` so the sum of the elements is 1, the result is a joint PMF: ``` joint = xtab / xtab.to_numpy().sum() joint ``` Each column in the joint PMF represents the conditional distribution of color for a given fruit. For example, we can select a column like this: ``` col = joint['apple'] col ``` If we normalize it, we get the conditional distribution of color for a given fruit. ``` col / col.sum() ``` Each row of the cross tabulation represents the conditional distribution of fruit for each color. If we select a row and normalize it, like this: ``` row = xtab.loc['red'] row / row.sum() ``` The result is the conditional distribution of fruit type for a given color. ## Conditional distributions The following function takes a joint PMF and computes conditional distributions: ``` def conditional(joint, name, value): """Compute a conditional distribution. joint: DataFrame representing a joint PMF name: string name of an axis value: value to condition on returns: Series representing a conditional PMF """ if joint.columns.name == name: cond = joint[value] elif joint.index.name == name: cond = joint.loc[value] return cond / cond.sum() ``` The second argument is a string that identifies which axis we want to select; in this example, `'fruit'` means we are selecting a column, like this: ``` conditional(joint, 'fruit', 'apple') ``` And `'color'` means we are selecting a row, like this: ``` conditional(joint, 'color', 'red') ``` **Exercise:** Compute the conditional distribution of color for bananas. What is the probability that a banana is yellow? ``` # Solution cond = conditional(joint, 'fruit', 'banana') cond # Solution cond['yellow'] ``` ## Marginal distributions Given a joint distribution, we can compute the unconditioned distribution of either variable. If we sum along the rows, which is axis 0, we get the distribution of fruit type, regardless of color. ``` joint.sum(axis=0) ``` If we sum along the columns, which is axis 1, we get the distribution of color, regardless of fruit type. ``` joint.sum(axis=1) ``` These distributions are called "[marginal](https://en.wikipedia.org/wiki/Marginal_distribution#Multivariate_distributions)" because of the way they are often displayed. We'll see an example later. As we did with conditional distributions, we can write a function that takes a joint distribution and computes the marginal distribution of a given variable: ``` def marginal(joint, name): """Compute a marginal distribution. joint: DataFrame representing a joint PMF name: string name of an axis returns: Series representing a marginal PMF """ if joint.columns.name == name: return joint.sum(axis=0) elif joint.index.name == name: return joint.sum(axis=1) ``` Here's the marginal distribution of fruit. ``` pmf_fruit = marginal(joint, 'fruit') pmf_fruit ``` And the marginal distribution of color: ``` pmf_color = marginal(joint, 'color') pmf_color ``` The sum of the marginal PMF is the same as the sum of the joint PMF, so if the joint PMF was normalized, the marginal PMF should be, too. ``` joint.to_numpy().sum() pmf_color.sum() ``` However, due to floating point error, the total might not be exactly 1. ``` pmf_fruit.sum() ``` **Exercise:** The following cells load the data from the General Social Survey that we used in Notebooks 1 and 2. ``` # Load the data file import os if not os.path.exists('gss_bayes.csv'): !wget https://github.com/AllenDowney/BiteSizeBayes/raw/master/gss_bayes.csv gss = pd.read_csv('gss_bayes.csv', index_col=0) ``` As an exercise, you can use this data to explore the joint distribution of two variables: * `partyid` encodes each respondent's political affiliation, that is, the party the belong to. [Here's the description](https://gssdataexplorer.norc.org/variables/141/vshow). * `polviews` encodes their political alignment on a spectrum from liberal to conservative. [Here's the description](https://gssdataexplorer.norc.org/variables/178/vshow). The values for `partyid` are ``` 0 Strong democrat 1 Not str democrat 2 Ind,near dem 3 Independent 4 Ind,near rep 5 Not str republican 6 Strong republican 7 Other party ``` The values for `polviews` are: ``` 1 Extremely liberal 2 Liberal 3 Slightly liberal 4 Moderate 5 Slightly conservative 6 Conservative 7 Extremely conservative ``` 1. Make a cross tabulation of `gss['partyid']` and `gss['polviews']` and normalize it to make a joint PMF. 2. Use `plot_heatmap` to display a heatmap of the joint distribution. What patterns do you notice? 3. Use `marginal` to compute the marginal distributions of `partyid` and `polviews`, and plot the results. 4. Use `conditional` to compute the conditional distribution of `partyid` for people who identify themselves as "Extremely conservative" (`polviews==7`). How many of them are "strong Republicans" (`partyid==6`)? 5. Use `conditional` to compute the conditional distribution of `polviews` for people who identify themselves as "Strong Democrat" (`partyid==0`). How many of them are "Extremely liberal" (`polviews==1`)? ``` # Solution xtab2 = pd.crosstab(gss['partyid'], gss['polviews']) joint2 = xtab2 / xtab2.to_numpy().sum() # Solution plot_heatmap(joint2) plt.xlabel('polviews') plt.title('Joint distribution of polviews and partyid'); # Solution marginal(joint2, 'polviews').plot.bar(color='C2') plt.ylabel('Probability') plt.title('Distribution of polviews'); # Solution marginal(joint2, 'polviews').plot.bar(color='C3') plt.ylabel('Probability') plt.title('Distribution of polviews'); # Solution cond1 = conditional(joint2, 'polviews', 7) cond1.plot.bar(label='Extremely conservative', color='C4') plt.ylabel('Probability') plt.title('Distribution of partyid') cond1[6] # Solution cond2 = conditional(joint2, 'partyid', 0) cond2.plot.bar(label='Strong democrat', color='C6') plt.ylabel('Probability') plt.title('Distribution of polviews') cond2[1] ``` ## Review In this notebook we started with cross tabulation, which we normalized to create a joint distribution, which describes the distribution of two (or more) variables and all of their conditional distributions. We used heatmaps to visualize cross tabulations and joint distributions. Then we defined `conditional` and `marginal` functions that take a joint distribution and compute conditional and marginal distributions for each variables. As an exercise, you had a chance to apply the same methods to explore the relationship between political alignment and party affiliation using data from the General Social Survey. You might have noticed that we did not use Bayes's Theorem in this notebook. [In the next notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/11_faceoff.ipynb) we'll take the ideas from this notebook and apply them Bayesian inference.
github_jupyter
# KNN Here we use K Nearest Neighbors algorithm to perform classification and regression ``` import numpy as np import matplotlib.pyplot as plt import sys import pandas as pd from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from rdkit import Chem, DataStructs from sklearn.neighbors import KNeighborsRegressor, KNeighborsClassifier from tqdm.notebook import tqdm ``` ## Load Data ``` # training data: assays = pd.read_pickle('../processed_data/combined_dataset.pkl') assays = assays[assays.activity_target.isin(['Active', 'Inactive'])] # get rid of any 'Inconclusive' assays = assays.dropna(subset=['acvalue_scaled_to_tmprss2']) # only use data that could be scaled dcm = pd.read_pickle('../processed_data/DarkChemicalMatter_processed.pkl.gz') # testing data: screening_data = pd.read_pickle('../processed_data/screening_data_processed.pkl') ``` # Classification ## Load training data ``` # set up features (X) and labels (y) for knn X_assays = np.stack(assays.morgan_fingerprint) y_assays = assays.activity_target.values assays_hist = plt.hist(y_assays) X_dcm = np.stack(dcm.sample(frac=.1).morgan_fingerprint) y_dcm = ['Inactive'] * len(X_dcm) dcm_hist = plt.hist(y_dcm) ``` ### Validation ``` # make a validation set out of some of the assays and some of the dcm percent_test_assays = .3 # Make the val set a bit less skewed than the train set percent_test_dcm = .1 # random_state = 3 # for reproducibility of train/val split train_X_assays, val_X_assays, train_y_assays, test_y_assays = train_test_split(X_assays, y_assays, test_size=percent_test_assays, random_state=random_state) train_X_dcm, val_X_dcm, train_y_dcm, test_y_dcm = train_test_split(X_dcm, y_dcm, test_size=percent_test_dcm, random_state=random_state) plt.figure() plt.bar(['assays', 'dcm'], [len(train_X_assays), len(train_X_dcm)]) plt.title('training data') plt.figure() plt.bar(['assays', 'dcm'], [len(val_X_assays), len(val_X_dcm)]) plt.title('val data') train_X = np.concatenate([train_X_assays, train_X_dcm], axis=0) val_X = np.concatenate([val_X_assays, val_X_dcm], axis=0) train_y = np.concatenate([train_y_assays, train_y_dcm], axis=0) test_y = np.concatenate([test_y_assays, test_y_dcm], axis=0) ``` ## Optimize KNN Classifier using Validation Data ``` # optimize knn, test a couple ks ks = np.arange(1, 14, 2) accuracies = [] active_accuracies = [] inactive_accuracies = [] for k in tqdm(ks): nbrs = KNeighborsClassifier(n_neighbors=k, metric='jaccard', algorithm='ball_tree', n_jobs=32) nbrs.fit(train_X, train_y) pred = nbrs.predict(val_X) accuracies.append(np.count_nonzero(pred == test_y) / len(test_y)) if np.count_nonzero(test_y == 'Inactive') == 0: inactive_accuracies.append(1) # all inactive classified correctly: vacuously true else: inactive_accuracies.append(np.count_nonzero((pred == test_y) & (pred == 'Inactive')) / np.count_nonzero(test_y == 'Inactive')) if np.count_nonzero(test_y == 'Active') == 0: active_accuracies.append(1) else: active_accuracies.append(np.count_nonzero((pred == test_y) & (test_y == 'Active')) / np.count_nonzero(test_y == 'Active')) plt.figure() plt.plot(ks, accuracies, label='overall') plt.plot(ks, active_accuracies, label='active') plt.plot(ks, inactive_accuracies, label='inactive') plt.xlabel("k") plt.ylabel("accuracy") plt.title('Classification Accuracy') plt.legend() ``` From the above experiment, we can see that k=5 does the best on active compounds; we'll choose this. ## Test on the Screening Data ``` # set up train and test X_train = np.concatenate([X_assays, X_dcm]) y_train = np.concatenate([y_assays, y_dcm]) X_test = np.stack(screening_data.morgan_fingerprint) print("Training set size:", len(X_train)) print("Test set size:", len(X_test)) nbrs = KNeighborsClassifier(n_neighbors=3, metric='jaccard', algorithm='ball_tree', weights='distance', n_jobs=32) # turns out it gets much faster with many jobs (even 8x more jobs than my laptop's 4 physical cores). 64 is slower than 32 though, overhead catches up I guess. nbrs.fit(train_X, train_y) # chunk the test set in order to get some sense of progress pred_activity = [] for test_chunk in tqdm(np.array_split(X_test, 100)): pred_activity.append(nbrs.predict(test_chunk)) pred_activity = np.concatenate(pred_activity) fig, ax = plt.subplots(1, 2, figsize=(8, 3)) ax[0].hist(y_train) ax[0].set_title('training labels') ax[1].hist(pred_activity) ax[1].set_title('predicted labels') t = plt.suptitle('Label Distributions') ``` We can see the screening data mostly comes back as inactive. The distribution is similar to the training distribution, which could mean the model is biased by the training distribution, but this isn't necessarily true. Could use a test with different training data distribution to see. # Regression Now that we have identified active compounds out of the screening data, we can regress the activity of these compounds using our assay data. ## Validation ### Load Train Data Features are still morgan fingerprints, labels are log activity values. Where available, the activity values are scaled to tmprss2 based on correlation between target activities. Where correlation was unavailable, activity values are unscaled. ``` X_assays = np.stack(assays.morgan_fingerprint) y_assays = np.log10(assays.acvalue_scaled_to_tmprss2) assert y_assays.isna().sum() == 0 ``` ### Regression Cross-Validation ``` from sklearn.model_selection import cross_val_score ks = np.arange(1, 23, 2) RMSE = [] for k in tqdm(ks): knn_cv = KNeighborsRegressor(n_neighbors=k, metric='jaccard', weights='distance') RMSE.append(-cross_val_score(knn_cv, X_assays, y_assays, cv=10, scoring='neg_root_mean_squared_error')) plt.plot(ks, RMSE, '.') plt.plot(ks, np.median(RMSE, axis=1), label='median') plt.plot(ks, np.mean(RMSE, axis=1), label='mean') plt.xticks(ks) plt.legend() plt.ylabel('RMSE') plt.xlabel('k') plt.title('10-fold Cross Validation') ``` From the cross-validation, it seems k=7 is a reasonable choice. ``` from sklearn.metrics import mean_squared_error best_k_regr = 7 best_RMSE = np.median(RMSE, axis=1)[3] X_train, X_test, y_train, y_test = train_test_split(X_assays, y_assays, test_size=.25, random_state=1) nbrs = KNeighborsRegressor(n_neighbors=best_k_regr, metric='jaccard', weights='distance') nbrs.fit(X_train, y_train) y_pred = nbrs.predict(X_test) plt.plot(y_test, y_pred, '.') plt.xlabel('True Activity Values (log)') plt.ylabel('Predicted Activity Values (log)') bnds = [np.min([y_test, y_pred])*1.1, np.max([y_test, y_pred])*1.2] plt.axis('square') plt.xlim(bnds) plt.ylim(bnds) plt.plot(np.linspace(*bnds), np.linspace(*bnds), 'k--', label='y=x') plt.legend() plt.title('Sample Validation (3/4 train, 1/4 test)') print(f'RMSE={mean_squared_error(y_test, y_pred, squared=False)}') ``` Here you can see that the distribution as a whole looks good, but the accuracy in the low-end is poor. Since we care about the low end, this is concerning. ### Load Test Data The test data consists of all the screening molecules which were marked 'active' in classification above. ``` active_screening_data = screening_data[pred_activity=='Active'].copy() X_test_active = np.stack(active_screening_data.morgan_fingerprint) nbrs = KNeighborsRegressor(n_neighbors=best_k_regr, metric='jaccard', weights='distance') nbrs.fit(X_assays, y_assays) pred_acvalue = nbrs.predict(X_test_active) active_screening_data.insert(loc=2, column='predicted_acvalue(log10)', value=pred_acvalue) # a look at the predicted activity distributions by dataset source: from seaborn import violinplot violinplot(x='source', y='predicted_acvalue(log10)', data=active_screening_data) # and the top hits! active_screening_data.sort_values(by='predicted_acvalue(log10)', inplace=True) active_screening_data['name'] = active_screening_data.name.str.upper() active_screening_data.drop(columns=['morgan_fingerprint'], inplace=True) active_screening_data.drop_duplicates(subset=['name'], inplace=True) active_screening_data.head(20) ``` Nafamostat comes in on top, which is reassuring. The rest of the results... unclear. Not a ton of trust in the KNN, the large errors in the low-end on the regression test is concerning. ``` # store the results! active_screening_data['RMSE'] = best_RMSE active_screening_data.to_csv('../results/knn_results.csv') ```
github_jupyter
# Dispersion relations in a micropolar medium We are interested in computing the dispersion relations in a homogeneous micropolar solid. ## Wave propagation in micropolar solids The equations of motion for a micropolar solid are given by [[1, 2]](#References) \begin{align} &c_1^2 \nabla\nabla\cdot\mathbf{u}- c_2^2\nabla\times\nabla\times\mathbf{u} + K^2\nabla\times\boldsymbol{\theta} = -\omega^2 \mathbf{u} \, ,\\ &c_3^2 \nabla\nabla\cdot\boldsymbol{\theta} - c_4^2\nabla\times\nabla\times\boldsymbol{\theta} + Q^2\nabla\times\mathbf{u} - 2Q^2\boldsymbol{\theta} = -\omega^2 \boldsymbol{\theta} \, \end{align} where $\mathbf{u}$ is the displacement vector and $\boldsymbol{\theta}$ is the microrrotations vector, and where: $c_1$ represents the phase/group speed for the longitudinal wave ($P$) that is non-dispersive as in the classical case, $c_2$ represents the high-frequency limit phase/group speed for a transverse wave ($S$) that is dispersive unlike the classical counterpart, $c_3$ represents the high-frequency limit phase/group speed for a longitudinal-rotational wave ($LR$) with a corkscrew-like motion that is dispersive and does not have a classical counterpart, $c_4$ represents the high-frequency limit phase/group speed for a transverse-rotational wave ($TR$) that is dispersive and does not have a classical counterpart, $Q$ represents the cut-off frequency for rotational waves appearance, and $K$ quantifies the difference between the low-frequency and high-frequency phase/group speed for the S-wave. These parameters are defined by: \begin{align} c_1^2 = \frac{\lambda +2\mu}{\rho},\quad &c_3^2 =\frac{\beta + 2\eta}{J},\\ c_2^2 = \frac{\mu +\alpha}{\rho},\quad &c_4^2 =\frac{\eta + \varepsilon}{J},\\ Q^2= \frac{2\alpha}{J},\quad &K^2 =\frac{2\alpha}{\rho} \, , \end{align} ## Dispersion relations To identify types of propagating waves that can arise in the micropolar medium it is convenient to expand the displacement and rotation vectors in terms of scalar and vector potentials \begin{align} \mathbf{u} &= \nabla \phi + \nabla\times\boldsymbol{\Gamma}\, ,\\ \boldsymbol{\theta} &= \nabla \tau + \nabla\times\mathbf{E}\, , \end{align} subject to the conditions: \begin{align} &\nabla\cdot\boldsymbol{\Gamma} = 0\\ &\nabla\cdot\mathbf{E} = 0\, . \end{align} Using the above in the displacements equations of motion yields the following equations, after some manipulations \begin{align} c_1^2 \nabla^2 \phi &= \frac{\partial^2 \phi}{\partial t^2}\, ,\\ c_3^2 \nabla^2 \tau - 2Q^2\tau &= \frac{\partial^2 \tau}{\partial t^2}\, ,\\ \begin{bmatrix} c_2^2 \nabla^2 &K^2\nabla\times\, ,\\ Q^2\nabla\times &c_4^2\nabla^2 - 2Q^2 \end{bmatrix} \begin{Bmatrix} \boldsymbol{\Gamma}\\ \mathbf{E}\end{Bmatrix} &= \frac{\partial^2}{\partial t^2} \begin{Bmatrix} \boldsymbol{\Gamma}\\ \mathbf{E}\end{Bmatrix} \, , \end{align} where we can see that the equations for the scalar potentials are uncoupled, while the ones for the vector potentials are coupled. Writing the vector potentials as plane waves of amplitude $ \mathbf{A}$ and $ \mathbf{B}$, wave number $\kappa$ and circular frequency $\omega$ that propagate along the \(x\) axis, \begin{align} \boldsymbol{\Gamma} &= \mathbf{A}\exp(i\kappa x - i\omega t)\\ \mathbf{E} &= \mathbf{B}\exp(i\kappa x - i\omega t)\, . \end{align} We can do these calculations using some the functions available functions in the package. ``` from sympy import Matrix, diff, symbols, exp, I, sqrt from sympy import simplify, expand, solve, limit from sympy import init_printing, pprint, factor from continuum_mechanics.vector import lap_vec, curl, div init_printing() A1, A2, A3, B1, B2, B3 = symbols("A1 A2 A3 B1 B2 B3") kappa, omega, t, x = symbols("kappa omega t x") c1, c2, c3, c4, K, Q = symbols("c1 c2 c3 c4 K Q", positive=True) ``` We define the vector potentials $\boldsymbol{\Gamma}$ and $\mathbf{E}$. ``` Gamma = Matrix([A1, A2, A3]) * exp(I*kappa*x - I*omega*t) E = Matrix([B1, B2, B3]) * exp(I*kappa*x - I*omega*t) ``` And compute the equations using the vector operators. Namely, the Laplace ([`vector.lap_vec()`](https://continuum-mechanics.readthedocs.io/en/latest/modules.html#vector.lap_vec) and the curl ([`vector.curl()`](https://continuum-mechanics.readthedocs.io/en/latest/modules.html#vector.curl)) operators. ``` eq1 = c2**2 * lap_vec(Gamma) + K**2*curl(E) - Gamma.diff(t, 2) eq2 = Q**2 * curl(Gamma) + c4**2*lap_vec(E) - 2*Q**2*E - E.diff(t, 2) eq1 = simplify(eq1/exp(I*kappa*x - I*omega*t)) eq2 = simplify(eq2/exp(I*kappa*x - I*omega*t)) eq = eq1.col_join(eq2) ``` We can compute the matrix for the system using [`.jacobian()`](https://docs.sympy.org/1.5.1/modules/matrices/matrices.html#sympy.matrices.matrices.MatrixCalculus.jacobian) ``` M = eq.jacobian([A1, A2, A3, B1, B2, B3]) M ``` And, we are interested in the determinant of the matrix $M$. ``` factor(M.det()) ``` The roots for this polynomial (in $\omega^2$) represent the dispersion relations. ``` disps = solve(M.det(), omega**2) for disp in disps: display(disp) ``` ## References 1. Nowacki, W. (1986). Theory of asymmetric elasticity. Pergamon Press, Headington Hill Hall, Oxford OX 3 0 BW, UK, 1986. 2. Guarín-Zapata, N., Gomez, J., Valencia, C., Dargush, G. F., & Hadjesfandiari, A. R. (2020). Finite element modeling of micropolar-based phononic crystals. Wave Motion, 92, 102406.
github_jupyter
# UNSEEN-open In this project, the aim is to build an open, reproducible, and transferable workflow for UNSEEN. <!-- -- an increasingly popular method that exploits seasonal prediction systems to assess and anticipate climate extremes beyond the observed record. The approach uses pooled forecasts as plausible alternate realities. Instead of the 'single realization' of reality, pooled forecasts can be exploited to better assess the likelihood of infrequent events. --> The workflow consists of four steps, as illustrated below: ![title](../../graphs/Workflow.png) In this project, UNSEEN-open is applied to assess two extreme events in 2020: February 2020 UK precipitation and the 2020 Siberian heatwave. February average precipitation was the highest on record in the UK: with what frequency of occurrence can February extreme precipitation events such as the 2020 event be expected? The Siberian heatwave has broken the records as well. Could such an event be anticipation with UNSEEN? And to what extend can we expect changes in the frequency of occurrence and magnitude of these kind of events? ## Overview Here we provide an overview of the steps taken to apply UNSEEN-open. ### Download We want to download February precipitation over the UK and March-May average temperature over Siberia. We retrieve all SEAS5 seasonal forecasts that are forecasting the target months (i.e. February and MAM) and we retrieve ERA5 reanalysis for the same regions and variables for evaluation. ``` import os import sys sys.path.insert(0, os.path.abspath('../../')) os.chdir(os.path.abspath('../../')) import src.cdsretrieve as retrieve import src.preprocess as preprocess import numpy as np retrieve.retrieve_SEAS5(variables = ['2m_temperature','2m_dewpoint_temperature'], target_months = [3,4,5], area = [70, -11, 30, 120], years=np.arange(1981, 2021), folder = '../Siberia_example/SEAS5/') retrieve.retrieve_SEAS5(variables = 'total_precipitation', target_months = [2], area = [60, -11, 50, 2], folder = '../UK_example/SEAS5/') retrieve.retrieve_ERA5(variables = ['2m_temperature','2m_dewpoint_temperature'], target_months = [3,4,5], area = [70, -11, 30, 120], folder = '../Siberia_example/ERA5/') retrieve.retrieve_ERA5(variables = 'total_precipitation', target_months = [2], area = [60, -11, 50, 2], folder = '../UK_example/ERA5/') ``` ### Preprocess In the preprocessing step, we first merge all downloaded files into one netcdf file. Then the rest of the preprocessing depends on the definition of the extreme event. For example, for the UK case study, we want to extract the UK average precipitation while for the Siberian heatwave we will just used the defined area to spatially average over. For the MAM season, we still need to take the seasonal average, while for the UK we already have the average February precipitation. ``` SEAS5_Siberia = preprocess.merge_SEAS5(folder = '../Siberia_example/SEAS5/', target_months = [3,4,5]) SEAS5_Siberia SEAS5_Siberia.sel(latitude = 60, longitude = -10, time = '2000-03', number = 24, leadtime = 3).load() SEAS5_UK = preprocess.merge_SEAS5(folder = '../UK_example/SEAS5/', target_months = [2]) SEAS5_UK ``` ### Read more Jump into the respective sections for more detail: * **Download** * [1. Retrieve](1.Download/1.Retrieve.ipynb) * **Pre-process** * [2.1 Merge](2.Preprocess/2.1Merge.ipynb) * [2.2 Mask](2.Preprocess/2.2Mask.ipynb) * [2.3 Upscale](2.Preprocess/2.3Upscale.ipynb) * **Evaluate** * [3. Evaluate](3.Evaluate/3.Evaluate.ipynb) * **Illustrate**
github_jupyter
``` # default_exp utils ``` # Utils > Collection of useful functions. ``` #hide from nbdev.showdoc import * #export import os import numpy as np from typing import Iterable, TypeVar, Generator from plum import dispatch from pathlib import Path from functools import reduce function = type(lambda: ()) T = TypeVar('T') ``` ## Basics ``` #export def identity(x: T) -> T: """Indentity function.""" return x #export def simplify(x): """Return an object of an iterable if it is lonely.""" @dispatch def _simplify(x): if callable(x): try: return x() except TypeError: pass return x @dispatch def _simplify(i: Iterable): return next(i.__iter__()) if len(i) == 1 else i return _simplify(x) ``` The simplify function is used to de-nest an iterable with a single element in it, as for instance [1], while leaving everything else constant. It can also exchange a function for its default argument. ``` simplify({1}) simplify(simplify)(lambda x='lul': 2*x) #export def listify(x, *args): """Convert `x` to a `list`.""" if args: x = (x,) + args if x is None: result = [] elif isinstance(x, list): result = x elif isinstance(x, str) or hasattr(x, "__array__") or hasattr(x, "iloc"): result = [x] elif isinstance(x, (Iterable, Generator)): result = list(x) else: result = [x] return result ``` What's very convenient is that it leaves lists invariant (it doen't nest them into a new list). ``` listify([1, 2]) listify(1, 2, 3) #export def setify(x, *args): """Convert `x` to a `set`.""" return set(listify(x, *args)) setify(1, 2, 3) #export def tuplify(x, *args): """Convert `x` to a `tuple`.""" return tuple(listify(x, *args)) tuplify(1) #export def merge_tfms(*tfms): """Merge two dictionnaries by stacking common key into list.""" def _merge_tfms(tf1, tf2): return { k: simplify(listify(setify(listify(tf1.get(k)) + listify(tf2.get(k))))) for k in {**tf1, **tf2} } return reduce(_merge_tfms, tfms, dict()) merge_tfms( {'animals': ['cats', 'dog'], 'colors': 'blue'}, {'animals': 'cats', 'colors': 'red', 'OS': 'i use arch btw'} ) #export def compose(*functions): """Compose an arbitrary number of functions.""" def _compose(fn1, fn2): return lambda x: fn1(fn2(x)) return reduce(_compose, functions, identity) #export def pipe(*functions): """Pipe an arbitrary number of functions.""" return compose(*functions[::-1]) #export def flow(data, *functions): """Flow `data` through a list of functions.""" return pipe(*functions)(data) ``` ## File manipulation helper ``` #export def get_files(path, extensions=None, recurse=False, folders=None, followlinks=True): """Get all those file names.""" path = Path(path) folders = listify(folders) extensions = setify(extensions) extensions = {e.lower() for e in extensions} def simple_getter(p, fs, extensions=None): p = Path(p) res = [ p / f for f in fs if not f.startswith(".") and ((not extensions) or f'.{f.split(".")[-1].lower()}' in extensions) ] return res if recurse: result = [] for i, (p, d, f) in enumerate(os.walk(path, followlinks=followlinks)): if len(folders) != 0 and i == 0: d[:] = [o for o in d if o in folders] else: d[:] = [o for o in d if not o.startswith(".")] if len(folders) != 0 and i == 0 and "." not in folders: continue result += simple_getter(p, f, extensions) else: f = [o.name for o in os.scandir(path) if o.is_file()] result = simple_getter(path, f, extensions) return list(map(str, result)) # export from fastcore.all import * @patch def decompress(self: Path, dest='.'): pass #export @patch def compress(self: Path, dest='.', keep_copy=True): pass #export def save_array(array, fname, suffix): """Save an array with the given name and suffix.""" if not suffix.startswith("."): suffix = "." + suffix fname = Path(fname) return np.save(array, fname.with_suffix(suffix)) def save_dataset(data): return 'NotImplementedError' ```
github_jupyter
# Generate Region of Interests (ROI) labeled arrays for simple shapes This example notebook explain the use of analysis module "skbeam/core/roi" https://github.com/scikit-beam/scikit-beam/blob/master/skbeam/core/roi.py ``` import skbeam.core.roi as roi import skbeam.core.correlation as corr import numpy as np import matplotlib.pyplot as plt %matplotlib inline from matplotlib.ticker import MaxNLocator from matplotlib.colors import LogNorm import xray_vision.mpl_plotting as mpl_plot ``` ### Easily switch between interactive and static matplotlib plots ``` interactive_mode = False import matplotlib as mpl if interactive_mode: %matplotlib notebook else: %matplotlib inline backend = mpl.get_backend() cmap='viridis' ``` ## Draw annular (ring-shaped) regions of interest ``` center = (100., 100.) # center of the rings # Image shape which is used to determine the maximum extent of output pixel coordinates img_shape = (200, 205) first_q = 10.0 # inner radius of the inner-most ring delta_q = 5.0 #ring thickness num_rings = 7 # number of Q rings # step or spacing, spacing between rings one_step_q = 5.0 # one spacing between rings step_q = [2.5, 3.0, 5.8] # differnt spacing between rings ``` ### Test when there is same spacing between rings ``` # inner and outer radius for each ring edges = roi.ring_edges(first_q, width=delta_q, spacing=one_step_q, num_rings=num_rings) edges #Elements not inside any ROI are zero; elements inside each #ROI are 1, 2, 3, corresponding to the order they are specified in edges. label_array = roi.rings(edges, center, img_shape) # plot the figure fig, axes = plt.subplots(figsize=(6, 5)) axes.set_title("Same spacing between rings") im = mpl_plot.show_label_array(axes, label_array, cmap) plt.show() ``` ### Test when there is different spacing between rings ``` # inner and outer radius for each ring edges = roi.ring_edges(first_q, width=delta_q, spacing=step_q, num_rings=4) print("edges when there is different spacing between rings", edges) #Elements not inside any ROI are zero; elements inside each #ROI are 1, 2, 3, corresponding to the order they are specified in edges. label_array = roi.rings(edges, center, img_shape) # plot the figure fig, axes = plt.subplots(figsize=(6, 5)) axes.set_title("Different spacing between rings") axes.set_xlim(50, 150) axes.set_ylim(50, 150) im = mpl_plot.show_label_array(axes, label_array, cmap) plt.show() ``` ### Test when there is no spacing between rings ``` # inner and outer radius for each ring edges = roi.ring_edges(first_q, width=delta_q, num_rings=num_rings) edges #Elements not inside any ROI are zero; elements inside each #ROI are 1, 2, 3, corresponding to the order they are specified in edges. label_array = roi.rings(edges, center, img_shape) # plot the figure fig, axes = plt.subplots(figsize=(6, 5)) axes.set_title("There is no spacing between rings") axes.set_xlim(50, 150) axes.set_ylim(50, 150) im = mpl_plot.show_label_array(axes, label_array, cmap) plt.show() ``` ### Generate a ROI of Segmented Rings¶ ``` center = (75, 75) # center of the rings #Image shape which is used to determine the maximum extent of output pixel coordinates img_shape = (150, 140) first_q = 5.0 # inner radius of the inner-most ring delta_q = 5.0 #ring thickness num_rings = 4 # number of rings slicing = 4 # number of pie slices or list of angles in radians spacing = 4 # margin between rings, 0 by default ``` #### find the inner and outer radius of each ring ``` # inner and outer radius for each ring edges = roi.ring_edges(first_q, width=delta_q, spacing=spacing, num_rings=num_rings) edges #Elements not inside any ROI are zero; elements inside each #ROI are 1, 2, 3, corresponding to the order they are specified in edges. label_array = roi.segmented_rings(edges, slicing, center, img_shape, offset_angle=0) # plot the figure fig, axes = plt.subplots(figsize=(6, 5)) axes.set_title("Segmented Rings") axes.set_xlim(38, 120) axes.set_ylim(38, 120) im = mpl_plot.show_label_array(axes, label_array, cmap) plt.show() ``` ## Segmented rings using list of angles in radians ``` slicing = np.radians([0, 60, 120, 240, 300]) slicing #Elements not inside any ROI are zero; elements inside each #ROI are 1, 2, 3, corresponding to the order they are specified in edges. label_array = roi.segmented_rings(edges, slicing, center, img_shape, offset_angle=0) # plot the figure fig, axes = plt.subplots(figsize=(6, 5)) axes.set_title("Segmented Rings") axes.set_xlim(38, 120) axes.set_ylim(38, 120) im = mpl_plot.show_label_array(axes, label_array, cmap="gray") plt.show() ``` ### Generate a ROI of Pies ``` first_q = 0 # inner and outer radius for each ring edges = roi.ring_edges(first_q, width=50, num_rings=1) edges slicing = 10 # number of pie slices or list of angles in radians #Elements not inside any ROI are zero; elements inside each #ROI are 1, 2, 3, corresponding to the order they are specified in edges. label_array = roi.segmented_rings(edges, slicing, center, img_shape, offset_angle=0) # plot the figure fig, axes = plt.subplots(figsize=(6, 5)) axes.set_title("Pies") axes.set_xlim(20, 140) axes.set_ylim(20, 140) im = mpl_plot.show_label_array(axes, label_array, cmap) plt.show() ``` ## Rectangle region of interests. ``` # Image shape which is used to determine the maximum extent of output pixel coordinates shape = (15, 26) # coordinates of the upper-left corner and width and height of each rectangle roi_data = np.array(([2, 2, 6, 3], [6, 7, 8, 5], [8, 18, 5, 10]), dtype=np.int64) #Elements not inside any ROI are zero; elements inside each ROI are 1, 2, 3, corresponding # to the order they are specified in coords. label_array = roi.rectangles(roi_data, shape) roi_inds, pixel_list = roi.extract_label_indices(label_array) ``` ## Generate Bar ROI's ``` edges = [[3, 4], [5, 7], [12, 15]] edges ``` ## Create Horizontal bars and Vertical bars ``` h_label_array = roi.bar(edges, (20, 25)) # Horizontal Bars v_label_array = roi.bar(edges, (20, 25), horizontal=False) # Vertical Bars ``` ## Create Box ROI's ``` b_label_array = roi.box((20, 25), edges) ``` ## Plot bar rois, box rois and rectangle rois ``` fig, axes = plt.subplots(2, 2, figsize=(12, 10)) axes[1, 0].set_title("Horizontal Bars") im = mpl_plot.show_label_array(axes[1, 0], h_label_array, cmap) axes[0, 1].set_title("Vertical Bars") im = mpl_plot.show_label_array(axes[0, 1], v_label_array, cmap) axes[1, 1].set_title("Box Rois") im = mpl_plot.show_label_array(axes[1, 1], b_label_array, cmap) axes[0, 0].set_title("Rectangle Rois") im = mpl_plot.show_label_array(axes[0, 0], label_array, cmap) plt.show() ``` # Create line ROI's ``` label_lines= roi.lines(([0, 45, 50, 256], [56, 60, 80, 150]), (150, 250)) # plot the figure fig, axes = plt.subplots(figsize=(6, 5)) axes.set_title("Lines") im = mpl_plot.show_label_array(axes, label_lines, cmap) plt.show() import skbeam print(skbeam.__version__) ```
github_jupyter
<p align="center"> <img src="http://www.di.uoa.gr/themes/corporate_lite/logo_el.png" title="Department of Informatics and Telecommunications - University of Athens"/> </p> --- <h1 align="center"> Artificial Intelligence </h1> <h1 align="center" > Deep Learning for Natural Language Processing </h1> --- <h2 align="center"> <b>Konstantinos Nikoletos</b> </h2> <h3 align="center"> <b>Winter 2020-2021</b> </h3> --- --- ### __Task__ This exercise is about developing a document retrieval system to return titles of scientific papers containing the answer to a given user question. You will use the first version of the COVID-19 Open Research Dataset (CORD-19) in your work (articles in the folder comm use subset). For example, for the question “What are the coronaviruses?”, your system can return the paper title “Distinct Roles for Sialoside and Protein Receptors in Coronavirus Infection” since this paper contains the answer to the asked question. To achieve the goal of this exercise, you will need first to read the paper Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks, in order to understand how you can create sentence embeddings. In the related work of this paper, you will also find other approaches for developing your model. For example, you can using Glove embeddings, etc. In this link, you can find the extended versions of this dataset to test your model, if you want. You are required to: <ol type="a"> <li>Preprocess the provided dataset. You will decide which data of each paper is useful to your model in order to create the appropriate embeddings. You need to explain your decisions.</li> <li>Implement at least 2 different sentence embedding approaches (see the related work of the Sentence-BERT paper), in order for your model to retrieve the titles of the papers related to a given question.</li> <li>Compare your 2 models based on at least 2 different criteria of your choice. Explain why you selected these criteria, your implementation choices, and the results. Some questions you can pose are included here. You will need to provide the extra questions you posed to your model and the results of all the questions as well.</li> </ol> ### __Notebook__ Same implementation as Sentence Bert notebook but with adding CrossEncoders that I read that they perform even better --- --- __Import__ of essential libraries ``` import numpy as np import matplotlib.pyplot as plt import pandas as pd import sys # only needed to determine Python version number import matplotlib # only needed to determine Matplotlib version import nltk from nltk.stem import WordNetLemmatizer import pprint import torch import torch.nn as nn import torch.optim as optim from torchtext import data import logging nltk.download('punkt') nltk.download('wordnet') nltk.download('stopwords') nltk.download('averaged_perceptron_tagger') ``` Selecting device (GPU - CUDA if available) ``` # First checking if GPU is available train_on_gpu=torch.cuda.is_available() if(train_on_gpu): print('Training on GPU.') else: print('No GPU available, training on CPU.') ``` # Loading data --- ``` # Opening data file import io from google.colab import drive from os import listdir from os.path import isfile, join import json drive.mount('/content/drive',force_remount=True) ``` Loading the dictionary if it has been created ``` #@title Select number of papers that will be feeded in the model { vertical-output: true, display-mode: "both" } number_of_papers = "9000" #@param ["1000","3000", "6000","9000"] import pickle CORD19_Dataframe = r"/content/drive/My Drive/AI_4/CORD19_SentenceMap_"+number_of_papers+".pkl" with open(CORD19_Dataframe, 'rb') as drivef: CORD19Dictionary = pickle.load(drivef) ``` OR the summary of the papers ``` #@title Select number of summarized papers that will be feeded in the model { vertical-output: true, display-mode: "both" } number_of_papers = "9000" #@param ["1000", "3000", "6000", "9000"] import pickle CORD19_Dataframe = r"/content/drive/My Drive/AI_4/CORD19_SentenceMap_Summarized_"+number_of_papers+".pkl" with open(CORD19_Dataframe, 'rb') as drivef: CORD19Dictionary = pickle.load(drivef) ``` ## Queries --- ``` query_list = [ 'What are the coronoviruses?', 'What was discovered in Wuhuan in December 2019?', 'What is Coronovirus Disease 2019?', 'What is COVID-19?', 'What is caused by SARS-COV2?', 'How is COVID-19 spread?', 'Where was COVID-19 discovered?','How does coronavirus spread?' ] proposed_answers = [ 'Coronaviruses (CoVs) are common human and animal pathogens that can transmit zoonotically and cause severe respiratory disease syndromes. ', 'In December 2019, a novel coronavirus, called COVID-19, was discovered in Wuhan, China, and has spread to different cities in China as well as to 24 other countries.', 'Coronavirus Disease 2019 (COVID-19) is an emerging disease with a rapid increase in cases and deaths since its first identification in Wuhan, China, in December 2019.', 'COVID-19 is a viral respiratory illness caused by a new coronavirus called SARS-CoV-2.', 'Coronavirus disease (COVID-19) is caused by SARS-COV2 and represents the causative agent of a potentially fatal disease that is of great global public health concern.', 'First, although COVID-19 is spread by the airborne route, air disinfection of cities and communities is not known to be effective for disease control and needs to be stopped.', 'In December 2019, a novel coronavirus, called COVID-19, was discovered in Wuhan, China, and has spread to different cities in China as well as to 24 other countries.', 'The new coronavirus was reported to spread via droplets, contact and natural aerosols from human-to-human.' ] myquery_list = [ "How long can the coronavirus survive on surfaces?", "What means COVID-19?", "Is COVID19 worse than flue?", "When the vaccine will be ready?", "Whats the proteins that consist COVID-19?", "Whats the symptoms of COVID-19?", "How can I prevent COVID-19?", "What treatments are available for COVID-19?", "Is hand sanitizer effective against COVID-19?", "Am I at risk for serious complications from COVID-19 if I smoke cigarettes?", "Are there any FDA-approved drugs (medicines) for COVID-19?", "How are people tested?", "Why is the disease being called coronavirus disease 2019, COVID-19?", "Am I at risk for COVID-19 from mail, packages, or products?", "What is community spread?", "How can I protect myself?", "What is a novel coronavirus?", "Was Harry Potter a good magician?" ] ``` # Results dataframes ``` resultsDf = pd.DataFrame(columns=['Number of papers','Embeddings creation time']) queriesDf = pd.DataFrame(columns=['Query','Proposed_answer','Model_answer','Cosine_similarity']) queriesDf['Query'] = query_list queriesDf['Proposed_answer'] = proposed_answers myQueriesDf = pd.DataFrame(columns=['Query','Model_answer','Cosine_similarity']) myQueriesDf['Query'] = myquery_list queriesDf ``` # SBERT --- ``` !pip install -U sentence-transformers ``` # Selecting transformer and Cross Encoder ``` from sentence_transformers import SentenceTransformer, util, CrossEncoder import torch import time encoder = SentenceTransformer('msmarco-distilbert-base-v2') cross_encoder = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-6') ``` # Initializing corpus ``` corpus = list(CORD19Dictionary.keys()) ``` # Creating the embeddings Encoding the papers ``` %%time corpus_embeddings = encoder.encode(corpus, convert_to_tensor=True, show_progress_bar=True,device='cuda') ``` # Saving corpus as tensors to drive ``` corpus_embeddings_path = r"/content/drive/My Drive/AI_4/corpus_embeddings_6000_CrossEncoder.pt" torch.save(corpus_embeddings,corpus_embeddings_path) ``` # Loading embeddings if have been created and saved --- ``` corpus_embeddings_path = r"/content/drive/My Drive/AI_4/corpus_embeddings_6000_CrossEncoder.pt" with open(corpus_embeddings_path, 'rb') as f: corpus_embeddings = torch.load(f) ``` # Evaluation --- ``` import re from nltk import tokenize from termcolor import colored def paperTitle(answer,SentenceMap): record = SentenceMap[answer] print("Paper title:",record[1]) print("Paper id: ",record[0]) def evaluation(query_list,top_k,resultsDf): query_answers = [] scores = [] for query in query_list: #Encode the query using the bi-encoder and find potentially relevant corpus start_time = time.time() question_embedding = encoder.encode(query, convert_to_tensor=True,device='cuda') hits = util.semantic_search(question_embedding, corpus_embeddings, top_k=top_k) hits = hits[0] # Get the hits for the first query #Now, score all retrieved corpus with the cross_encoder cross_inp = [[query, corpus[hit['corpus_id']]] for hit in hits] cross_scores = cross_encoder.predict(cross_inp) #Sort results by the cross-encoder scores for idx in range(len(cross_scores)): hits[idx]['cross-score'] = cross_scores[idx] hits = sorted(hits, key=lambda x: x['cross-score'], reverse=True) end_time = time.time() #Output of top-5 hits print("\n\n======================\n\n") print("Query:",colored(query,'green') ) print("Results (after {:.3f} seconds):".format(end_time - start_time)) iter=0 for hit in hits[0:top_k]: print("\n-> ",iter+1) answer = ' '.join([re.sub(r"^\[.*\]", "", x) for x in corpus[hit['corpus_id']].split()]) if len(tokenize.word_tokenize(answer)) > 1: print("Score: {:.4f}".format(hit['cross-score'])) paperTitle(corpus[hit['corpus_id']],CORD19Dictionary) print("Anser size: ",len(tokenize.word_tokenize(answer))) print("Anser: ") if iter==0: query_answers.append(answer) scores.append(hit['cross-score'].item()) iter+=1 print(colored(answer,'yellow')) resultsDf['Model_answer'] = query_answers resultsDf['Cosine_similarity'] = scores top_k = 3 evaluation(query_list,top_k,queriesDf) top_k = 3 evaluation(myquery_list,top_k,myQueriesDf) ``` # Overall results ## 6000 papers with no summarization --- ### Time needed for creating the embeddings: - CPU times: - user 13min 10s - sys: 5min 40s - total: 18min 51s - Wall time: 18min 26s ### Remarks Best results among the notebooks so far, almost 5/7 questions are answered and from mine 7/17. I expected better results since Cross Encoders enhance much the performance of Sentence Bert. __Top-k__ Top-2 and 3 have lots of answers, as I noticed that are better that the first one. Also good results and with some tunning would be nearly to the wanted. ### Results ``` with pd.option_context('display.max_colwidth', None): display(queriesDf) with pd.option_context('display.max_colwidth', None): display(myQueriesDf) ``` ## 9000 papers with no summarization --- Session crashed due to RAM ## 6000 papers with paraphrase-distilroberta-base-v1 model and summarization --- ### Time needed for creating the embeddings: - CPU times: - user: 1min 18s - sys: 22.8 s - total: 1min 37s - Wall time: 1min 37s ### Remarks Not good results. From these results I think that the BERT summarizer parameters were not the appropriate and I should experiment with them. I shouldn't have so strict summarization and I may over summarized the papers. __Top-k__ Not good. ### Results ``` with pd.option_context('display.max_colwidth', None): display(queriesDf) with pd.option_context('display.max_colwidth', None): display(myQueriesDf) ``` ## 9000 papers with summarization --- ### Time needed for creating the embeddings: - CPU times: - user: 1min 48s - sys: 32.6 s - total: 2min 20s - Wall time: 2min 16s ### Remarks Again not good results and this is due my summarization tunning. ** Again I didn't have the time to re run and process again. ### Results ``` with pd.option_context('display.max_colwidth', None): display(queriesDf) with pd.option_context('display.max_colwidth', None): display(myQueriesDf) ``` # References [1] https://colab.research.google.com/drive/1l6stpYdRMmeDBK_vw0L5NitdiAuhdsAr?usp=sharing#scrollTo=D_hDi8KzNgMM [2] https://www.sbert.net/docs/package_reference/cross_encoder.html
github_jupyter
# Setup Before attending the workshp you should set up a scientific Python computing environment using the [Anaconda python distribution by Continuum Analytics](https://www.continuum.io/downloads). This page describes how. If this doesn't work, let [me](mailto:[email protected]) know and I will set you up with a virtual environment you can use on my server. ## Why Python? As is true in human language, there are hundreds of computer programming languages. While each has its own merit, the major languages for scientific computing are C, C++, R, MATLAB, Python, Java, and Fortran. MATLAB and Python are similar in syntax and typically read as if they were written in plain english. This makes both languages a useful tool for teaching but they are also very powerful languages and are very actively used in real-life research. MATLAB is proprietary while Python is open source. A benefit of being open source is that anyone can write and release Python packages. For science, there are many wonderful community-driven packages such as NumPy, SciPy, scikit-image, and Pandas just to name a few. ## Installing Python 3.7 with Anaconda There are several scientific Python distributions available for MacOS, Windows, and Linux. The most popular, [Anaconda](https://www.continuum.io/why-anaconda), is specifically designed for scientific computing and data science work. For this course, we will use the Anaconda Python 3.7 distribution. To install the correct version, follow the instructions below. 1. Navigate to the [Anaconda download page](https://www.anaconda.com/distribution/) and download the Python 3.7 graphical installer. 2. Launch the installer and follow the onscreen instructions. 3. Congratulations! You now have the beginnings of a scientific Python distribution. ## What is a Jupyter notebook? [Jupyter](http://jupyter.org/) is a browser-based system to write code, math, and text in the same document so you can clearly explain the concepts and practices used in your program. Jupyter is not only for Python, but can be used with R, Juila, MATLAB, and about 35 other languages as of this writing. All files are saved as a [JSON](http://www.json.org/) formatted text file with the extension `.ipynb`. ## How to launch the notebook A Jupyter Notebook server can either be launched from the command line or from a GUI program installed along with anaconda called Navigator. ### Launching from the Anaconda Navigator Installing Python 3 from Anaconda should also install a GUI application called [Anaconda Navigator](https://docs.continuum.io/anaconda/navigator). From here, you can launch several applications such as a QTconsole, the Spyder IDE, and a data visualization software called GlueViz. We are interested in the Jupyter Notebook application tab, which is shown boxed in red below: ![](http://www.rpgroup.caltech.edu/bige105/code/images/anaconda_navigator.png) By clicking on 'Launch', you will instantiate a Jupyter notebook server which should open in a new window. ### Launching from the terminal To launch a notebook server from the command line, simply open a terminal emulator (Terminal.app on OSX or gitbash on windows) and navigate to the directory you would like to set up a server by typing `cd path/to/folder` Once you are in the correct folder, you can launch a notebook server by typing: ``` jupyter notebook ``` This will open a screen in your default internet browser with a server containing your notebooks. Its address will be [`http://localhost:8888`](http://localhost:8888/) and is only available on your computer. **Note that once you start a server, you must keep the terminal window open.** This is where the 'guts' of the python kernel is. ## Interacting with the notebook If everything launched correctly, you should be able to see a screen which looks something like this: ![](http://www.rpgroup.caltech.edu/bige105/code/images/starting_notebook.png) To start a new python window, click on the right-hand side of the application window and select `New`. This will give you a bunch of options for new notebook kernels. In the above screen shot, there are two available Python kernels and one Matlab kernel. When starting a notebook, you should choose `Python 3` if it is available. If you have just a tab that says "Python", choose that one. Once you start a new notebook, you will be brought to the following screen. ![](http://www.rpgroup.caltech.edu/bige105/code/images/toolbars.png) Welcome to the Jupyter notebook! There are many available buttons for you to click. However, the three most important components of the notebook are highlighted in colored boxes. In blue is the name of the notebook. By clicking this, you can rename the notebook. In red is the cell formatting assignment. By default, it is registered as code, but it can also be set to markdown as described later. Finally, in purple, is the code cell. In this cell, you can type an execute Python code as well as text that will be formatted in a nicely readable format. ## Writing code All code you write in the notebook will be in the code cell. You can write single lines, to entire loops, to complete functions. As an example, we can write and evaluate a print statement in a code cell, as is shown below. To exectue the code, we can simply hit `shift + enter` while our cursor is in the code cell. ``` # This is a comment and is not read by Python print('Hello! This is the print function. Python will print this line below') ``` The box with the gray background contains the python code while the output is in the box with the white background. ## Next Steps Now that you have a Python environment up and running, proceed to the [Python] notebook to learn the basics of the language. *Note: This is a modified version of Griffin Chure's [Setting Up Python For Scientific Computing for Bi 1 - Principles of Biology](http://bi1.caltech.edu/code/t0a_setting_up_python.html). This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/).*
github_jupyter
### Distributed MCMC Retrieval This notebook runs the MCMC retrievals on a local cluster using `ipyparallel`. ``` import ipyparallel as ipp c = ipp.Client(profile='gold') lview = c.load_balanced_view() ``` ## Retrieval Setup ``` %%px %env ARTS_BUILD_PATH=/home/simonpf/build/arts %env ARTS_INCLUDE_PATH=/home/simonpf/src/atms_simulations/:/home/simonpf/src/arts/controlfiles %env ARTS_DATA_PATH=/home/simonpf/src/arts_xml/ %env OMP_NUM_THREADS=1 import sys sys.path.insert(1,"/home/simonpf/src/atms_simulations/") sys.path.insert(1, "/home/simonpf/src/typhon/") import os os.chdir("/home/simonpf/src/atms_simulations") # This is important otherwise engines just crash. import matplotlib; matplotlib.use("agg") from typhon.arts.workspace import Workspace import atms import numpy as np ws = Workspace() channels = [0,15,16,17,19] atms.setup_atmosphere(ws) atms.setup_sensor(ws, channels) atms.checks(ws) ws.yCalc() %%px from typhon.arts.workspace import Workspace import atms import numpy as np ws = Workspace() channels = [0,15,16,17,19] atms.setup_atmosphere(ws) atms.setup_sensor(ws, channels) atms.checks(ws) ws.yCalc() ``` ## A Priori State The simulations are based on the a priori assumptions, that the profiles of specific humidity, temperature and ozone vary independently and that the relative variations can be described by Log-Gaussian distributions. ``` %%px qt_mean = np.load("data/qt_mean.npy").ravel() qt_cov = np.load("data/qt_cov.npy") qt_cov_inv = np.linalg.inv(qt_cov) ``` ## Jumping Functions The jumping functions are used inside the MCMC iteration and propose new atmospheric states for specific humidity, temperature and ozone, respectively. The proposed states are generated from random walks that use scaled versions of the a priori covariances. ``` %%px import numpy as np from typhon.retrieval.mcmc import RandomWalk c = (1.0 / np.sqrt(qt_mean.size)) ** 2 rw_qt = RandomWalk(c * qt_cov) def j_qt(ws, x, revert = False): if revert: x_new = x else: x_new = rw_qt.step(x) q_new = (np.exp(x_new[14::-1]).reshape((15,))) q_new = atms.mmr2vmr(ws, q_new, "h2o") ws.vmr_field.value[0, :, 0, 0] = q_new ws.t_field.value[:, 0, 0] = x_new[:14:-1] ws.sst = np.maximum(ws.t_field.value[0, 0, 0], 270.0) return x_new ``` ## A Priori Distributions These functions return the likelihood (up to an additive constant) of a given state for each of the variables. Note that the states of specific humidity, temperature and ozone are given by the logs of the relative variations. ``` %%px def p_a_qt(x): dx = x - qt_mean l = - 0.5 * np.dot(dx, np.dot(qt_cov_inv, dx)) return l ``` ## Measurement Uncertainty We assume that uncertainty of the measured brightness temperatures can be described by independent Gaussian error with a standard deviation of $1 K$. ``` %%px covmat_y = np.diag(np.ones(len(channels))) covmat_y_inv = np.linalg.inv(covmat_y) def p_y(y, yf): dy = y - yf l = - 0.5 * np.dot(dy, np.dot(covmat_y_inv, dy)) return l ``` # Running MCMC ### The Simulated Measurement For the simulated measurement, we sample a state from the a priori distribution of atmsopheric states and simulate the measured brightness temperatures. A simple heuristic is applied to ensure that reasonable acceptance rates are obtained during the MCMC simulations. After the initial burn-in phase, 1000 simulation steps are performed. If the acceptance rates during this simulation are too low/high that covariance matrices of the corresponding random walks are scaled by a factor 0.1 / 9.0, respectively. ``` %%px def adapt_covariances(a): if (np.sum(a[:, 0]) / a.shape[0]) < 0.2: rw_qt.covmat *= 0.7 if (np.sum(a[:, 0]) / a.shape[0]) > 0.4: rw_qt.covmat *= 1.5 %%px from typhon.retrieval.mcmc import MCMC from atms import vmr2cd dist = atms.StateDistribution() n_burn_in = 500 n_prod = 5000 drop = 10 from typhon.retrieval.mcmc import MCMC from atms import vmr2cd def run_retrieval(i): # Reset covariance matrices. rw_qt.covmat = np.copy(c * qt_cov) # Generate True State dist.sample(ws) ws.yCalc() y_true = np.copy(ws.y) q_true = np.copy(ws.vmr_field.value[0, :, 0, 0].ravel()) t_true = np.copy(ws.t_field.value[:, 0, 0].ravel()) cwv_true = atms.vmr2cd(ws) dist.a_priori(ws) qt = np.zeros(qt_mean.size) # Add Noise y_true += np.random.randn(*y_true.shape) #try: mcmc = MCMC([[qt, p_a_qt, j_qt]], y_true, p_y, [vmr2cd]) qt_0 = dist.sample_factors() _, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, n_burn_in) hist_1, s_1, _, _ = mcmc.run(ws, n_prod) # Reset covariance matrices. rw_qt.covmat = np.copy(c * qt_cov) qt_0 = dist.sample_factors() _, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, n_burn_in) hist_2, s_2, _, _ = mcmc.run(ws, n_prod) # Reset covariance matrices. rw_qt.covmat = np.copy(c * qt_cov) qt_0 = dist.sample_factors() _, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, n_burn_in) hist_3, s_3, _, _ = mcmc.run(ws, n_prod) # Reset covariance matrices. rw_qt.covmat = np.copy(c * qt_cov) qt_0 = dist.sample_factors() _, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, n_burn_in) hist_4, s_4, _, _ = mcmc.run(ws, n_prod) # Reset covariance matrices. rw_qt.covmat = np.copy(c * qt_cov) qt_0 = dist.sample_factors() _, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, n_burn_in) hist_5, s_5, _, _ = mcmc.run(ws, n_prod) # Reset covariance matrices. rw_qt.covmat = np.copy(c * qt_cov) qt_0 = dist.sample_factors() _, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, n_burn_in) hist_6, s_6, _, _ = mcmc.run(ws, n_prod) # Reset covariance matrices. rw_qt.covmat = np.copy(c * qt_cov) qt_0 = dist.sample_factors() _, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, n_burn_in) hist_7, s_7, _, _ = mcmc.run(ws, n_prod) # Reset covariance matrices. rw_qt.covmat = np.copy(c * qt_cov) qt_0 = dist.sample_factors() _, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, 200) adapt_covariances(a) _, _, _, a = mcmc.run(ws, n_burn_in) hist_8, s_8, _, _ = mcmc.run(ws, n_prod) profiles_q = np.stack([hist_1[0][::drop, :15], hist_2[0][::drop, :15], hist_3[0][::drop, :15], hist_4[0][::drop, :15], hist_5[0][::drop, :15], hist_6[0][::drop, :15], hist_7[0][::drop, :15], hist_8[0][::drop, :15]]) profiles_t = np.stack([hist_1[0][::drop, 15:], hist_2[0][::drop, 15:], hist_3[0][::drop, 15:], hist_4[0][::drop, 15:], hist_5[0][::drop, 15:], hist_6[0][::drop, 15:], hist_7[0][::drop, 15:], hist_8[0][::drop, 15:]]) cwv = np.stack([s_1[::drop], s_2[::drop], s_3[::drop], s_4[::drop], s_5[::drop],s_6[::drop],s_7[::drop],s_8[::drop]], axis=0) return y_true, q_true, cwv_true, profiles_q, profiles_t, cwv ``` ## Running the Retrievals ``` import numpy as np ids = np.arange(3500) rs = lview.map_async(run_retrieval, ids) from atms import create_output_file root_group, v_y_true, v_cwv_true, v_cwv ,v_h2o = create_output_file("data/mcmc_retrievals_5.nc", 5, 15) for y_true, h2o_true, cwv_true, profiles_q, profiles_t, cwv in rs: if not y_true is None: t = v_cwv_true.shape[0] print("saving simulation: " + str(t)) steps=cwv.size v_y_true[t,:] = y_true ws.vmr_field.value[0,:,:,:] = h2o_true.reshape(-1,1,1) v_cwv_true[t] = cwv_true v_cwv[t, :steps] = cwv[:] v_h2o[t, :steps,:] = profiles_q.ravel().reshape(-1, 15) else: print("failure in simulation: " + str(t)) print(h2o_true) print(cwv_true) print(profiles) import matplotlib_settings import matplotlib.pyplot as plt root_group.close() root_group, v_y_true, v_cwv_true, v_cwv ,v_h2o = create_output_file("data/mcmc_retrievals_5.nc", 5, 27) for i in range(1000, 1100): plt.plot(v_cwv[i, :]) plt.gca().axhline(v_cwv_true[i], c = 'k', ls = '--') v_h2o[118, 250:500, :].shape plt.plot(np.mean(profs_t[2, 0:200], axis = 0), p) plt.plot(np.mean(profs_t[2, 200:400], axis = 0), p) plt.title("Temperature Profiles") plt.xlabel("T [K]") plt.ylabel("P [hPa]") plt.gca().invert_yaxis() p = np.load("data/p_grid.npy") profiles_t[1, :, :].shape plt.plot(np.mean(np.exp(profs_q[1, 0:200]) * 18.0 / 28.9, axis = 0), p) plt.plot(np.mean(np.exp(profs_q[1, 200:400]) * 18.0/ 28.9, axis = 0), p) plt.gca().invert_yaxis() plt.title("Water Vapor Profiles") plt.xlabel("$H_2O$ [mol / mol]") plt.ylabel("P [hPa]") ```
github_jupyter
# Introduction ![alt text](https://techcrunch.com/wp-content/uploads/2017/08/anti-hate.jpg) This notebook provides a demo to use the methods used in the paper with new data. If new to collaboratory ,please check the following [link](https://medium.com/lean-in-women-in-tech-india/google-colab-the-beginners-guide-5ad3b417dfa) to know how to run the code. ### Import the required libraries: ``` #import from gensim.test.utils import datapath, get_tmpfile from gensim.models import KeyedVectors from gensim.scripts.glove2word2vec import glove2word2vec import os import joblib import json import pandas as pd import numpy as np ###ipywigets from __future__ import print_function from ipywidgets import interact, interactive, fixed, interact_manual import ipywidgets as widgets from sklearn import * from sklearn.model_selection import * from sklearn.metrics import * import nltk nltk.download('stopwords') #copy the git clone address here !git clone https://github.com/binny-mathew/Countering_Hate_Speech.git #Best binary classifier was XGBclassifier #Best multilabel classifier was XGBclassifier best_binary_classifier = joblib.load('Countering_Hate_Speech/Best_model/XGB_classifier_task_1.joblib.pkl') best_multiclass_classifier = joblib.load('Countering_Hate_Speech/Best_model/XGB_classifier_task_3.joblib.pkl') best_black_classifier = joblib.load('Countering_Hate_Speech/Best_model/black_XGB_classifier_task_2.joblib.pkl') best_jew_classifier = joblib.load('Countering_Hate_Speech/Best_model/jew_XGB_classifier_task_2.joblib.pkl') best_lgbt_classifier = joblib.load('Countering_Hate_Speech/Best_model/lgbt_XGB_classifier_task_2.joblib.pkl') ``` ###Word Embeddings Loaded Here ``` ####downloading the word embeddings !wget http://nlp.stanford.edu/data/glove.840B.300d.zip !unzip glove.840B.300d.zip ####extracting the glove model file #import zipfile #archive = zipfile.ZipFile('glove.840B.300d.zip', 'r') GLOVE_MODEL_FILE ='glove.840B.300d.txt' import numpy as np ## change the embedding dimension according to the model EMBEDDING_DIM = 300 ###change the method type ### method two def loadGloveModel2(glove_file): tmp_file = get_tmpfile("test_crawl_200.txt") # call glove2word2vec script # default way (through CLI): python -m gensim.scripts.glove2word2vec --input <glove_file> --output <w2v_file> glove2word2vec(glove_file, tmp_file) model=KeyedVectors.load_word2vec_format(tmp_file) return model word2vec_model = loadGloveModel2(GLOVE_MODEL_FILE) ``` ## Dataset is loaded here ``` #@title Select the type of file used type_of_file = 'X.json' #@param ['X.json','X.csv'] ``` ### File type information If the file type is **.json** then each element should contain the following fields:- 1. Community 2. CounterSpeech 3. Category 4. commentText 5. id If the file type is **.csv** then it must have the following columns:- 1. Community 2. CounterSpeech 3. Category 4. commentText 5. id Note:- If you don't have the Category or Community add an dummy element or column ``` ####CHANGE THE PATH OF THE FILE path_of_file='Countering_Hate_Speech/Data/Counterspeech_Dataset.json' def convert_class_label(input_text): if input_text: return 'counter' else: return 'noncounter' if(type_of_file=='X.json'): with open(path_of_file) as fp: train_data = json.load(fp) pd_train = pd.DataFrame(columns=['id','class','community','category','text']) for count, each in enumerate(train_data): try: pd_train.loc[count] = [each['id'], convert_class_label(each['CounterSpeech']), each['Community'],each['Category'],each['commentText']] except: pass print('Training Data Loading Completed...') elif(type_of_file=='X.csv'): pd_train=pd.read_csv(path_of_the_file) pd_train.head() #@title How your dataframe should look like after extraction {display-mode: "form"} # This code will be hidden when the notebook is loaded. path_of_data_file='Countering_Hate_Speech/Data/Counterspeech_Dataset.json' def convert_class_label(input_text): if input_text: return 'counter' else: return 'noncounter' with open(path_of_data_file) as fp: train_data = json.load(fp) pd_train_sample = pd.DataFrame(columns=['id','class','community','category','text']) for count, each in enumerate(train_data): try: pd_train_sample.loc[count] = [each['id'], convert_class_label(each['CounterSpeech']), each['Community'],each['Category'],each['commentText']] except: pass print('Training Data Loading Completed...') pd_train_sample.head() pd_train['text'].replace('', np.nan, inplace=True) pd_train.dropna(subset=['text'], inplace=True) import sys ####features module has the necessary function for feature generation from Countering_Hate_Speech.utils import features from Countering_Hate_Speech.utils import multi_features ###tokenize module has the tokenization funciton from Countering_Hate_Speech.utils.tokenize import * ###helper prints confusion matrix and stores results from Countering_Hate_Speech.utils.helper import * ###common preprocessing imports from Countering_Hate_Speech.utils.commen_preprocess import * ``` #### Next few sections cover three different classifiers namely - * Binary classification * Multlabel classification * Cross community You can run the cells corresponding to the result you want to analyse. ### **Binary Classification** ``` X,y= features.combine_tf_rem_google_rem_embed(pd_train,word2vec_model) label_map = { 'counter': 0, 'noncounter': 1 } temp=[] for data in y: temp.append(label_map[data]) y=np.array(temp) y_pred=best_binary_classifier.predict(X) report = classification_report(y, y_pred) cm=confusion_matrix(y, y_pred) plt=plot_confusion_matrix(cm,normalize= True,target_names = ['counter','non_counter'],title = "Confusion Matrix") plt.savefig('Confusion_matrix.png') df_result=pandas_classification_report(y,y_pred) df_result.to_csv('Classification_Report.csv', sep=',') print("You can download the files from the file directory now ") ``` ### **Multilabel Classification** ``` import scipy pd_train_multilabel =pd_train.copy() pd_train_multilabel =pd_train_multilabel[pd_train_multilabel['category']!='Default'] list1=[[],[],[],[],[],[],[],[],[],[]] for ele in pd_train_multilabel['category']: temp=[] if type(ele) is int: ele =str(ele) for i in range(0,len(ele),2): temp.append(ord(ele[i])-ord('0')) #print(temp) if(len(temp)==0): print(temp) for i in range(0,10): if i+1 in temp: list1[i].append(1) else: list1[i].append(0) y_train=np.array([np.array(xi) for xi in list1]) ### final dataframe for the task created pd_train_multilabel = pd.DataFrame({'text':list(pd_train_multilabel['text']),'cat0':list1[0],'cat1':list1[1],'cat2':list1[2],'cat3':list1[3],'cat4':list1[4],'cat5':list1[5],'cat6':list1[6],'cat7':list1[7],'cat8':list1[8],'cat9':list1[9]}) ### drop the entries having blank entries pd_train_multilabel['text'].replace('', np.nan, inplace=True) pd_train_multilabel.dropna(subset=['text'], inplace=True) X,y= multi_features.combine_tf_rem_google_rem_embed(pd_train_multilabel,word2vec_model) path='multilabel_res' os.makedirs(path, exist_ok=True) X = np.array(X) y = np.array(y) y_pred = best_multiclass_classifier.predict(X) if(scipy.sparse.issparse(y_pred)): ham,acc,pre,rec,f1=calculate_score(y,y_pred.toarray()) accuracy_test=accuracy_score(y,y_pred.toarray()) else: ham,acc,pre,rec,f1=calculate_score(y,y_pred) accuracy_test=my_accuracy_score(y,y_pred) for i in range(10): df_result=pandas_classification_report(y[:,i],y_pred[:,i]) df_result.to_csv(path+'/report'+str(i)+'.csv') f = open(path+'/final_report.txt', "w") f.write("best_model") f.write("The hard metric score is :- " + str(accuracy_test)) f.write("The accuracy is :- " + str(acc)) f.write("The precision is :- " + str(pre)) f.write("The recall is :- " + str(rec)) f.write("The f1_score is :- " + str(f1)) f.write("The hamming loss is :-" + str(ham)) f.close() !zip -r mulitlabel_results.zip multilabel_res ``` ### **Cross CommunityClassification** ``` pd_cross=pd_train.copy() part_j=pd_cross.loc[pd_train['community']=='jews'] part_b=pd_cross.loc[pd_train['community']=='black'] part_l=pd_cross.loc[pd_train['community']=='lgbt'] X_black,y_black= features.combine_tf_rem_google_rem_embed(part_b,word2vec_model) X_jew,y_jew= features.combine_tf_rem_google_rem_embed(part_j,word2vec_model) X_lgbt,y_lgbt= features.combine_tf_rem_google_rem_embed(part_l,word2vec_model) label_map = { 'counter': 0, 'noncounter': 1 } temp=[] for data in y_black: temp.append(label_map[data]) y_black=np.array(temp) y_pred_black=best_black_classifier.predict(X_black) report = classification_report(y_black, y_pred_black) cm=confusion_matrix(y_black, y_pred_black) plt=plot_confusion_matrix(cm,normalize= True,target_names = ['counter','non_counter'],title = "Confusion Matrix") plt.savefig('black_Confusion_matrix.png') df_result=pandas_classification_report(y_black,y_pred_black) df_result.to_csv('black_Classification_Report.csv', sep=',') print("You can download the files from the file directory now ") label_map = { 'counter': 0, 'noncounter': 1 } temp=[] for data in y_jew: temp.append(label_map[data]) y_jew=np.array(temp) y_pred_jew=best_jew_classifier.predict(X_jew) report = classification_report(y_jew, y_pred_jew) cm=confusion_matrix(y_jew, y_pred_jew) plt=plot_confusion_matrix(cm,normalize= True,target_names = ['counter','non_counter'],title = "Confusion Matrix") plt.savefig('jew_Confusion_matrix.png') df_result=pandas_classification_report(y_jew,y_pred_jew) df_result.to_csv('jew_Classification_Report.csv', sep=',') print("You can download the files from the file directory now ") label_map = { 'counter': 0, 'noncounter': 1 } temp=[] for data in y_lgbt: temp.append(label_map[data]) y_lgbt=np.array(temp) y_pred_lgbt=best_lgbt_classifier.predict(X_lgbt) report = classification_report(y_lgbt, y_pred_lgbt) cm=confusion_matrix(y_lgbt, y_pred_lgbt) plt=plot_confusion_matrix(cm,normalize= True,target_names = ['counter','non_counter'],title = "Confusion Matrix") plt.savefig('lgbt_Confusion_matrix.png') df_result=pandas_classification_report(y_lgbt,y_pred_lgbt) df_result.to_csv('lgbt_Classification_Report.csv', sep=',') print("You can download the files from the file directory now ") ```
github_jupyter
``` import logging from gensim.models import ldaseqmodel from gensim.corpora import Dictionary, bleicorpus, textcorpus import numpy as np from gensim.matutils import hellinger import time import pickle import pyLDAvis import matplotlib.pyplot as plt from scipy.stats import entropy import pandas as pd from numpy.linalg import norm alldata_new = pickle.load(open('output/dtm_processed_output.p', 'rb')) # load data doc_year=alldata_new['docs_per_year'] doc_ids =[0]+list(np.cumsum(doc_year)) term_topic = alldata_new['term_topic']# term_topic is n_years*n_topics*n_terms terms = alldata_new['terms'] doc_topicyrs = alldata_new['doc_topic'] doc_topic = [] for year in range(len(term_topic)): doc_topic.append(alldata_new['doc_topic'][doc_ids[year]:doc_ids[year+1]])# doc_topic is nyear*n_docs given year*n_topics # rename topics by the hand-picked names topic_labels = pickle.load(open('topicnames.p','rb')) def totvar(p,q): maxdist=np.max(abs(p-q)) maxid=np.argmax(abs(p-q)) return [maxdist,maxid] def JSD(P, Q): _P = P / norm(P, ord=1) _Q = Q / norm(Q, ord=1) _M = 0.5 * (_P + _Q) dist=0.5 * (entropy(_P, _M) + entropy(_Q, _M)) return dist # entropy change within a topic -- which topic's content has changed most in the past years epsilon = 1e-15 ntopics = 20 topicdelta=np.empty((ntopics,len(term_topic))) # distance from previous year: jenson-shannon distance topicshift=np.empty(ntopics) # distance from 2000 to 2017 topicdelta_tv=np.empty((ntopics,len(term_topic))) # distance from previous year: total variance topicshift_tv=np.empty(ntopics) # distance from 2000 to 2017:total variance deltaterm=[] shiftterm=[] for k in range(ntopics): sftterms=[] for iyear in range(len(term_topic)): topic = term_topic[iyear][k] # why not using KL: 1) avoid asymetry 2) avoid inf topic = topic/sum(topic) topicdelta[k,iyear] = JSD(topic,term_topic[max(iyear-1,0)][k]) # jensen-shannon distance [topicdelta_tv[k,iyear],maxterm]=totvar(topic,term_topic[max(iyear-1,0)][k]) # maxterm: term of biggest change from previous year sftterms.append(terms[maxterm]) topicshift[k] = JSD(term_topic[-1][k],term_topic[0][k]) [topicshift_tv[k],maxterm]=totvar(term_topic[-1][k],term_topic[0][k]) shiftterm.append(terms[maxterm]) # biggest shift from 2017 to 2000 deltaterm.append(sftterms) # biggest delta from prev year: max term for every year deltaterm[4] shiftidx=np.argsort(-topicshift) for idx in shiftidx: print(topic_labels[idx]+': %.3f'%topicshift[idx]) print('total variance:') shiftidx=np.argsort(-topicshift_tv) for idx in shiftidx: print(topic_labels[idx]+': %.3f'%topicshift_tv[idx]+' max shift word:'+shiftterm[idx]) #TODO: get the raise and fall terms for each topic...just copy the other code; set the jsd as titles # calculate the topic distribution for each year (should correspond to the topic evolution trend...can't find that code right now) ntopics = len(topic_labels) ptop_years = [] entrop_years = [] for iyear in range(len(term_topic)): ptopics = np.zeros(ntopics) for doc in doc_topic[iyear]: ptopics+=doc ptopics = ptopics/sum(ptopics) ptop_years.append(ptopics) entrop_years.append(entropy(ptopics)) print(entrop_years) # plot the entropy change across years years = np.arange(len(term_topic))+2000 plt.plot(years,entrop_years,'-o') plt.xlabel('year') plt.title('entropy of topic distribution') plt.show() # could be done: find the paper with highest / lowest entropy; find the topic with highest/lowest entropy # KL-divergence across years kl_years = [] gap=1 for iyear in range(len(term_topic)-gap): # kl_years.append(entropy(ptop_years[iyear],ptop_years[iyear+gap])) kl_years.append(entropy(ptop_years[iyear+gap],ptop_years[iyear]))# sanity check: reverse the direction of KL. not differen plt.plot(years[gap:],kl_years,'-o') plt.xlabel('year') plt.title('KL div with the previous year') plt.show() # TODO: eye-balling the distribution overlayed ``` **tentative conclusion** - the diversity of topics seem to increase over years - 2002 has a relatively less diverse topic distribution while 2013 was pretty diverse. - the year-to-year difference has been decreasing across years...it's like the field is changing more slowly? doesn't make sense to me... ``` # entropy of topics for iyear in range(len(term_topic)): print('\n Year='+str(years[iyear])) entrop_terms=[] for k in range(ntopics): topic = term_topic[iyear][k] # already normalized entrop_terms.append(entropy(topic)) sorted_H = np.sort(entrop_terms) idx = np.argsort(entrop_terms) [print(topic_labels[idx[j]]+':'+str(sorted_H[j])) for j in range(len(idx))] # turns out the ranking of entropy is pretty stable over the years. sum(term_topic[iyear][3]) ```
github_jupyter
""" This file was created with the purpose of developing a random forest classifier to identify market squeeze This squeeze classification depends of the comparison of 2 indicators: 2 std of a 20 period bollinger bands and 2 atr of a 20 period keltner channel our definition of squeeze: when the upper bollinger band (bbup) is less or equal to upper keltner band (kcup) AND lower bollinger band (bblo) is above or equal to lower keltner channel (kclo) """ """ To develop the random forest model, a csv file was prepared extracting prices, bollinger bands and squeeze classification from tradestation. A custom squeeze_id indicator was developed in easylanguage to obtain a column with values ranging 0 or 1 depending upon the market being on a squeeze or not (based on the requirements specified above) """ ``` # Import libraries and dependencies import pandas as pd import numpy as np from pathlib import Path %matplotlib inline import warnings warnings.filterwarnings('ignore') csv_path = Path('../Resources/ts_squeeze_jpm.csv') csv_path ts_file_df = pd.read_csv(csv_path, parse_dates=[['Date', 'Time']]) ts_file_df.tail() # set index as Date_Time and drop MidLine.1 column (it is a duplicate of MidLine) ts_file_df.set_index(pd.to_datetime(ts_file_df['Date_Time'], infer_datetime_format=True), inplace=True) ts_file_df.drop(columns=['Date_Time', 'MidLine.1'], inplace=True) ts_file_df.head() # Set a variable list of features to feed to our model x_var_list = ['Open', 'High', 'Low', 'Close', 'Up', 'Down', 'kcup', 'kclo', 'MidLine', 'bbup', 'bblo', 'FastEMA', 'SlowEMA'] ts_file_df[x_var_list].head() # Shift DataFrame values by 1 ts_file_df[x_var_list] = ts_file_df[x_var_list].shift(1) ts_file_df[x_var_list].head() ts_file_df.head() ts_file_df.dropna(inplace=True) ts_file_df.head() # Construct training start and training end dates training_start = ts_file_df.index.min().strftime(format='%Y-%m-%d') training_end = '2019-01-11' # Construct test start and test end dates testing_start = '2019-01-12' testing_end = '2019-06-12' # Construct validating start and validating end dates vali_start = '2019-06-13' vali_end = '2020-01-12' # Confirming training, testing and validating dates print(f"Training Start: {training_start}") print(f"Training End: {training_end}") print(f"Testing Start: {testing_start}") print(f"Testing End: {testing_end}") print(f"validating Start: {vali_start}") print(f"validating end: {vali_end}") # Construct the X_train and y_train datasets X_train = ts_file_df[x_var_list][training_start:training_end] y_train = ts_file_df['squeeze'][training_start:training_end] X_train.head() y_train.tail() # Construct the X test and y test datasets X_test = ts_file_df[x_var_list][testing_start:testing_end] y_test = ts_file_df['squeeze'][testing_start:testing_end] X_test.head() y_test.head() # Construct the X valid and y validation datasets X_vali = ts_file_df[x_var_list][vali_start:vali_end] y_vali = ts_file_df['squeeze'][vali_start:vali_end] X_vali.head() y_vali.tail() # Import SKLearn library and Classes from sklearn.ensemble import RandomForestClassifier from sklearn.datasets import make_classification # Fit SKLearn regression with training datasets: model = RandomForestClassifier(n_estimators=1000, max_depth=5, random_state=1) model.fit(X_train, y_train) # Make predictions of "y" values from the X_test dataset predictions = model.predict(X_test) # Assemble actual y_test with predicted values compare_predict_df = y_test.to_frame() compare_predict_df["predict_squeeze"] = predictions compare_predict_df # Save the pre-trained model from joblib import dump, load dump(model, 'random_forest_model_squeeze.joblib') """ Below the exporting code to csv files """ X_testoutput_path = Path('../Resources/X_test.csv') X_test.to_csv(X_testoutput_path) model_results_path = Path('../Resources/results.csv') compare_predict_df.to_csv(model_results_path) # different datasets to csv files for reconfigurations X_valioutput_path = Path("../Resources/X_vali.csv") X_vali.to_csv(X_valioutput_path) y_valioutput_path = Path("../Resources/y_vali.csv") y_vali.to_csv(y_valioutput_path) ```
github_jupyter
## Outline * Recap of data * Feedforward network with Pytorch tensors and autograd * Using Pytorch's NN -> Functional, Linear, Sequential & Pytorch's Optim * Moving things to CUDA ``` import numpy as np import math import matplotlib.pyplot as plt import matplotlib.colors import pandas as pd from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score, mean_squared_error, log_loss from tqdm import tqdm_notebook import seaborn as sns import time from IPython.display import HTML import warnings warnings.filterwarnings('ignore') from sklearn.preprocessing import OneHotEncoder from sklearn.datasets import make_blobs import torch torch.manual_seed(0) my_cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", ["red","yellow","green"]) ``` ## Generate Dataset ``` data, labels = make_blobs(n_samples=1000, centers=4, n_features=2, random_state=0) print(data.shape, labels.shape) plt.scatter(data[:,0], data[:,1], c=labels, cmap=my_cmap) plt.show() X_train, X_val, Y_train, Y_val = train_test_split(data, labels, stratify=labels, random_state=0) print(X_train.shape, X_val.shape, labels.shape) ``` ## Using torch tensors and autograd ``` X_train, Y_train, X_val, Y_val = map(torch.tensor, (X_train, Y_train, X_val, Y_val)) print(X_train.shape, Y_train.shape) def model(x): a1 = torch.matmul(x, weights1) + bias1 # (N, 2) x (2, 2) -> (N, 2) h1 = a1.sigmoid() # (N, 2) a2 = torch.matmul(h1, weights2) + bias2 # (N, 2) x (2, 4) -> (N, 4) h2 = a2.exp()/a2.exp().sum(-1).unsqueeze(-1) # (N, 4) return h2 y_hat = torch.tensor([[0.1, 0.2, 0.3, 0.4], [0.8, 0.1, 0.05, 0.05]]) y = torch.tensor([2, 0]) (-y_hat[range(y_hat.shape[0]), y].log()).mean().item() (torch.argmax(y_hat, dim=1) == y).float().mean().item() def loss_fn(y_hat, y): return -(y_hat[range(y.shape[0]), y].log()).mean() def accuracy(y_hat, y): pred = torch.argmax(y_hat, dim=1) return (pred == y).float().mean() torch.manual_seed(0) weights1 = torch.randn(2, 2) / math.sqrt(2) weights1.requires_grad_() bias1 = torch.zeros(2, requires_grad=True) weights2 = torch.randn(2, 4) / math.sqrt(2) weights2.requires_grad_() bias2 = torch.zeros(4, requires_grad=True) learning_rate = 0.2 epochs = 10000 X_train = X_train.float() Y_train = Y_train.long() loss_arr = [] acc_arr = [] for epoch in range(epochs): y_hat = model(X_train) loss = loss_fn(y_hat, Y_train) loss.backward() loss_arr.append(loss.item()) acc_arr.append(accuracy(y_hat, Y_train)) with torch.no_grad(): weights1 -= weights1.grad * learning_rate bias1 -= bias1.grad * learning_rate weights2 -= weights2.grad * learning_rate bias2 -= bias2.grad * learning_rate weights1.grad.zero_() bias1.grad.zero_() weights2.grad.zero_() bias2.grad.zero_() plt.plot(loss_arr, 'r-') plt.plot(acc_arr, 'b-') plt.show() print('Loss before training', loss_arr[0]) print('Loss after training', loss_arr[-1]) ``` ## Using NN.Functional ``` import torch.nn.functional as F torch.manual_seed(0) weights1 = torch.randn(2, 2) / math.sqrt(2) weights1.requires_grad_() bias1 = torch.zeros(2, requires_grad=True) weights2 = torch.randn(2, 4) / math.sqrt(2) weights2.requires_grad_() bias2 = torch.zeros(4, requires_grad=True) learning_rate = 0.2 epochs = 10000 loss_arr = [] acc_arr = [] for epoch in range(epochs): y_hat = model(X_train) loss = F.cross_entropy(y_hat, Y_train) loss.backward() loss_arr.append(loss.item()) acc_arr.append(accuracy(y_hat, Y_train)) with torch.no_grad(): weights1 -= weights1.grad * learning_rate bias1 -= bias1.grad * learning_rate weights2 -= weights2.grad * learning_rate bias2 -= bias2.grad * learning_rate weights1.grad.zero_() bias1.grad.zero_() weights2.grad.zero_() bias2.grad.zero_() plt.plot(loss_arr, 'r-') plt.plot(acc_arr, 'b-') plt.show() print('Loss before training', loss_arr[0]) print('Loss after training', loss_arr[-1]) ``` ## Using NN.Parameter ``` import torch.nn as nn class FirstNetwork(nn.Module): def __init__(self): super().__init__() torch.manual_seed(0) self.weights1 = nn.Parameter(torch.randn(2, 2) / math.sqrt(2)) self.bias1 = nn.Parameter(torch.zeros(2)) self.weights2 = nn.Parameter(torch.randn(2, 4) / math.sqrt(2)) self.bias2 = nn.Parameter(torch.zeros(4)) def forward(self, X): a1 = torch.matmul(X, self.weights1) + self.bias1 h1 = a1.sigmoid() a2 = torch.matmul(h1, self.weights2) + self.bias2 h2 = a2.exp()/a2.exp().sum(-1).unsqueeze(-1) return h2 def fit(epochs = 1000, learning_rate = 1): loss_arr = [] acc_arr = [] for epoch in range(epochs): y_hat = fn(X_train) loss = F.cross_entropy(y_hat, Y_train) loss_arr.append(loss.item()) acc_arr.append(accuracy(y_hat, Y_train)) loss.backward() with torch.no_grad(): for param in fn.parameters(): param -= learning_rate * param.grad fn.zero_grad() plt.plot(loss_arr, 'r-') plt.plot(acc_arr, 'b-') plt.show() print('Loss before training', loss_arr[0]) print('Loss after training', loss_arr[-1]) fn = FirstNetwork() fit() ``` ## Using NN.Linear and Optim ``` class FirstNetwork_v1(nn.Module): def __init__(self): super().__init__() torch.manual_seed(0) self.lin1 = nn.Linear(2, 2) self.lin2 = nn.Linear(2, 4) def forward(self, X): a1 = self.lin1(X) h1 = a1.sigmoid() a2 = self.lin2(h1) h2 = a2.exp()/a2.exp().sum(-1).unsqueeze(-1) return h2 fn = FirstNetwork_v1() fit() from torch import optim def fit_v1(epochs = 1000, learning_rate = 1): loss_arr = [] acc_arr = [] opt = optim.SGD(fn.parameters(), lr=learning_rate) for epoch in range(epochs): y_hat = fn(X_train) loss = F.cross_entropy(y_hat, Y_train) loss_arr.append(loss.item()) acc_arr.append(accuracy(y_hat, Y_train)) loss.backward() opt.step() opt.zero_grad() plt.plot(loss_arr, 'r-') plt.plot(acc_arr, 'b-') plt.show() print('Loss before training', loss_arr[0]) print('Loss after training', loss_arr[-1]) fn = FirstNetwork_v1() fit_v1() ``` ## Using NN.Sequential ``` class FirstNetwork_v2(nn.Module): def __init__(self): super().__init__() torch.manual_seed(0) self.net = nn.Sequential( nn.Linear(2, 2), nn.Sigmoid(), nn.Linear(2, 4), nn.Softmax() ) def forward(self, X): return self.net(X) fn = FirstNetwork_v2() fit_v1() def fit_v2(x, y, model, opt, loss_fn, epochs = 1000): for epoch in range(epochs): loss = loss_fn(model(x), y) loss.backward() opt.step() opt.zero_grad() return loss.item() fn = FirstNetwork_v2() loss_fn = F.cross_entropy opt = optim.SGD(fn.parameters(), lr=1) fit_v2(X_train, Y_train, fn, opt, loss_fn) ``` ## Running it on GPUs ``` device = torch.device("cuda") X_train=X_train.to(device) Y_train=Y_train.to(device) fn = FirstNetwork_v2() fn.to(device) tic = time.time() print('Final loss', fit_v2(X_train, Y_train, fn, opt, loss_fn)) toc = time.time() print('Time taken', toc - tic) class FirstNetwork_v3(nn.Module): def __init__(self): super().__init__() torch.manual_seed(0) self.net = nn.Sequential( nn.Linear(2, 1024*4), nn.Sigmoid(), nn.Linear(1024*4, 4), nn.Softmax() ) def forward(self, X): return self.net(X) device = torch.device("cpu") X_train=X_train.to(device) Y_train=Y_train.to(device) fn = FirstNetwork_v3() fn.to(device) tic = time.time() print('Final loss', fit_v2(X_train, Y_train, fn, opt, loss_fn)) toc = time.time() print('Time taken', toc - tic) ``` ## Exercises 1. Try out a deeper neural network, eg. 2 hidden layers 2. Try out different parameters in the optimizer (eg. try momentum, nestrov) -> check `optim.SGD` docs 3. Try out other optimization methods (eg. RMSProp and Adam) which are supported in `optim` 4. Try out different initialisation methods which are supported in `nn.init`
github_jupyter
# Grid algorithm for the beta-binomial hierarchical model [Bayesian Inference with PyMC](https://allendowney.github.io/BayesianInferencePyMC) Copyright 2021 Allen B. Downey License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) ``` # If we're running on Colab, install PyMC and ArviZ import sys IN_COLAB = 'google.colab' in sys.modules if IN_COLAB: !pip install pymc3 !pip install arviz # PyMC generates a FutureWarning we don't need to deal with yet import warnings warnings.filterwarnings("ignore", category=FutureWarning) import seaborn as sns def plot_hist(sample, **options): """Plot a histogram of goals. sample: sequence of values """ sns.histplot(sample, stat='probability', discrete=True, alpha=0.5, **options) def plot_kde(sample, **options): """Plot a distribution using KDE. sample: sequence of values """ sns.kdeplot(sample, cut=0, **options) import matplotlib.pyplot as plt def legend(**options): """Make a legend only if there are labels.""" handles, labels = plt.gca().get_legend_handles_labels() if len(labels): plt.legend(**options) def decorate(**options): plt.gca().set(**options) legend() plt.tight_layout() def decorate_heads(ylabel='Probability'): """Decorate the axes.""" plt.xlabel('Number of heads (k)') plt.ylabel(ylabel) plt.title('Distribution of heads') legend() def decorate_proportion(ylabel='Likelihood'): """Decorate the axes.""" plt.xlabel('Proportion of heads (x)') plt.ylabel(ylabel) plt.title('Distribution of proportion') legend() from empiricaldist import Cdf def compare_cdf(pmf, sample): pmf.make_cdf().plot(label='grid') Cdf.from_seq(sample).plot(label='mcmc') print(pmf.mean(), sample.mean()) decorate() ``` ## The Grid Algorithm ``` import numpy as np from scipy.stats import gamma alpha = 4 beta = 0.5 qs = np.linspace(0.1, 25, 100) ps = gamma(alpha, scale=1/beta).pdf(qs) from empiricaldist import Pmf prior_alpha = Pmf(ps, qs) prior_alpha.normalize() prior_alpha.index.name = 'alpha' prior_alpha.shape prior_alpha.plot() prior_alpha.mean() qs = np.linspace(0.1, 25, 90) ps = gamma(alpha, scale=1/beta).pdf(qs) prior_beta = Pmf(ps, qs) prior_beta.normalize() prior_beta.index.name = 'beta' prior_beta.shape prior_beta.plot() prior_beta.mean() def make_hyper(prior_alpha, prior_beta): PA, PB = np.meshgrid(prior_alpha.ps, prior_beta.ps, indexing='ij') hyper = PA * PB return hyper hyper = make_hyper(prior_alpha, prior_beta) hyper.shape import pandas as pd from utils import plot_contour plot_contour(pd.DataFrame(hyper)) ``` ## Make Prior ``` from scipy.stats import beta as betadist xs = np.linspace(0.01, 0.99, 80) prior_x = Pmf(betadist.pdf(xs, 2, 2), xs) prior_x.plot() from scipy.stats import beta as betadist def make_prior(hyper, prior_alpha, prior_beta, xs): A, B, X = np.meshgrid(prior_alpha.qs, prior_beta.qs, xs, indexing='ij') ps = betadist.pdf(X, A, B) totals = ps.sum(axis=2) nc = hyper / totals shape = nc.shape + (1,) prior = ps * nc.reshape(shape) return prior xs = np.linspace(0.01, 0.99, 80) prior = make_prior(hyper, prior_alpha, prior_beta, xs) prior.sum() def marginal(joint, axis): axes = [i for i in range(3) if i != axis] return joint.sum(axis=tuple(axes)) prior_a = Pmf(marginal(prior, 0), prior_alpha.qs) prior_alpha.plot() prior_a.plot() prior_a.mean() prior_b = Pmf(marginal(prior, 1), prior_beta.qs) prior_beta.plot() prior_b.plot() prior_x = Pmf(marginal(prior, 2), xs) prior_x.plot() ``` ## The Update ``` from scipy.stats import binom n = 250 ks = 140 X, K = np.meshgrid(xs, ks) like_x = binom.pmf(K, n, X).prod(axis=0) like_x.shape plt.plot(xs, like_x) def update(prior, data): n, ks = data X, K = np.meshgrid(xs, ks) like_x = binom.pmf(K, n, X).prod(axis=0) posterior = prior * like_x posterior /= posterior.sum() return posterior data = 250, 140 posterior = update(prior, data) marginal_x = Pmf(marginal(posterior, 2), xs) marginal_x.plot() marginal_x.mean() marginal_alpha = Pmf(marginal(posterior, 0), prior_alpha.qs) marginal_alpha.plot() marginal_alpha.mean() marginal_beta = Pmf(marginal(posterior, 1), prior_beta.qs) marginal_beta.plot() marginal_beta.mean() ``` ## One coin with PyMC ``` import pymc3 as pm n = 250 with pm.Model() as model1: alpha = pm.Gamma('alpha', alpha=4, beta=0.5) beta = pm.Gamma('beta', alpha=4, beta=0.5) x1 = pm.Beta('x1', alpha, beta) k1 = pm.Binomial('k1', n=n, p=x1, observed=140) pred = pm.sample_prior_predictive(1000) ``` Here's the graphical representation of the model. ``` pm.model_to_graphviz(model1) from utils import kde_from_sample kde_from_sample(pred['alpha'], prior_alpha.qs).plot() prior_alpha.plot() kde_from_sample(pred['beta'], prior_beta.qs).plot() prior_beta.plot() kde_from_sample(pred['x1'], prior_x.qs).plot() prior_x.plot() ``` Now let's run the sampler. ``` with model1: trace1 = pm.sample(500) ``` Here are the posterior distributions for the two coins. ``` compare_cdf(marginal_alpha, trace1['alpha']) compare_cdf(marginal_beta, trace1['beta']) compare_cdf(marginal_x, trace1['x1']) ``` ## Two coins ``` def get_hyper(joint): return joint.sum(axis=2) posterior_hyper = get_hyper(posterior) posterior_hyper.shape prior2 = make_prior(posterior_hyper, prior_alpha, prior_beta, xs) data = 250, 110 posterior2 = update(prior2, data) marginal_alpha2 = Pmf(marginal(posterior2, 0), prior_alpha.qs) marginal_alpha2.plot() marginal_alpha2.mean() marginal_beta2 = Pmf(marginal(posterior2, 1), prior_beta.qs) marginal_beta2.plot() marginal_beta2.mean() marginal_x2 = Pmf(marginal(posterior2, 2), xs) marginal_x2.plot() marginal_x2.mean() ``` ## Two coins with PyMC ``` with pm.Model() as model2: alpha = pm.Gamma('alpha', alpha=4, beta=0.5) beta = pm.Gamma('beta', alpha=4, beta=0.5) x1 = pm.Beta('x1', alpha, beta) x2 = pm.Beta('x2', alpha, beta) k1 = pm.Binomial('k1', n=n, p=x1, observed=140) k2 = pm.Binomial('k2', n=n, p=x2, observed=110) ``` Here's the graph for this model. ``` pm.model_to_graphviz(model2) ``` Let's run the sampler. ``` with model2: trace2 = pm.sample(500) ``` And here are the results. ``` kde_from_sample(trace2['alpha'], marginal_alpha.qs).plot() marginal_alpha2.plot() trace2['alpha'].mean(), marginal_alpha2.mean() kde_from_sample(trace2['beta'], marginal_beta.qs).plot() marginal_beta2.plot() trace2['beta'].mean(), marginal_beta2.mean() kde_from_sample(trace2['x2'], marginal_x.qs).plot() marginal_x2.plot() ``` ## Heart Attack Data This example is based on [Chapter 10 of *Probability and Bayesian Modeling*](https://bayesball.github.io/BOOK/bayesian-hierarchical-modeling.html#example-deaths-after-heart-attack); it uses data on death rates due to heart attack for patients treated at various hospitals in New York City. We can use Pandas to read the data into a `DataFrame`. ``` import os filename = 'DeathHeartAttackManhattan.csv' if not os.path.exists(filename): !wget https://github.com/AllenDowney/BayesianInferencePyMC/raw/main/DeathHeartAttackManhattan.csv import pandas as pd df = pd.read_csv(filename) df ``` The columns we need are `Cases`, which is the number of patients treated at each hospital, and `Deaths`, which is the number of those patients who died. ``` # shuffled = df.sample(frac=1) data_ns = df['Cases'].values data_ks = df['Deaths'].values ``` Here's a hierarchical model that estimates the death rate for each hospital, and simultaneously estimates the distribution of rates across hospitals. ## Hospital Data with grid ``` alpha = 4 beta = 0.5 qs = np.linspace(0.1, 25, 100) ps = gamma(alpha, scale=1/beta).pdf(qs) prior_alpha = Pmf(ps, qs) prior_alpha.normalize() prior_alpha.index.name = 'alpha' qs = np.linspace(0.1, 50, 90) ps = gamma(alpha, scale=1/beta).pdf(qs) prior_beta = Pmf(ps, qs) prior_beta.normalize() prior_beta.index.name = 'beta' prior_beta.shape prior_alpha.plot() prior_beta.plot() prior_alpha.mean() hyper = make_hyper(prior_alpha, prior_beta) hyper.shape xs = np.linspace(0.01, 0.99, 80) prior = make_prior(hyper, prior_alpha, prior_beta, xs) prior.shape for data in zip(data_ns, data_ks): print(data) posterior = update(prior, data) hyper = get_hyper(posterior) prior = make_prior(hyper, prior_alpha, prior_beta, xs) marginal_alpha = Pmf(marginal(posterior, 0), prior_alpha.qs) marginal_alpha.plot() marginal_alpha.mean() marginal_beta = Pmf(marginal(posterior, 1), prior_beta.qs) marginal_beta.plot() marginal_beta.mean() marginal_x = Pmf(marginal(posterior, 2), prior_x.qs) marginal_x.plot() marginal_x.mean() ``` ## Hospital Data with PyMC ``` with pm.Model() as model4: alpha = pm.Gamma('alpha', alpha=4, beta=0.5) beta = pm.Gamma('beta', alpha=4, beta=0.5) xs = pm.Beta('xs', alpha, beta, shape=len(data_ns)) ks = pm.Binomial('ks', n=data_ns, p=xs, observed=data_ks) trace4 = pm.sample(500) ``` Here's the graph representation of the model, showing that the observable is an array of 13 values. ``` pm.model_to_graphviz(model4) ``` Here's the trace. ``` kde_from_sample(trace4['alpha'], marginal_alpha.qs).plot() marginal_alpha.plot() trace4['alpha'].mean(), marginal_alpha.mean() kde_from_sample(trace4['beta'], marginal_beta.qs).plot() marginal_beta.plot() trace4['beta'].mean(), marginal_beta.mean() trace_xs = trace4['xs'].transpose() trace_xs.shape kde_from_sample(trace_xs[-1], marginal_x.qs).plot() marginal_x.plot() trace_xs[-1].mean(), marginal_x.mean() xs = np.linspace(0.01, 0.99, 80) hyper = get_hyper(posterior) post_all = make_prior(hyper, prior_alpha, prior_beta, xs) def forget(posterior, data): n, ks = data X, K = np.meshgrid(xs, ks) like_x = binom.pmf(K, n, X).prod(axis=0) prior = posterior / like_x prior /= prior.sum() return prior def get_marginal_x(post_all, data): prior = forget(post_all, data) hyper = get_hyper(prior) prior = make_prior(hyper, prior_alpha, prior_beta, xs) posterior = update(prior, data) marginal_x = Pmf(marginal(posterior, 2), prior_x.qs) return marginal_x data = 270, 16 marginal_x = get_marginal_x(post_all, data) kde_from_sample(trace_xs[0], marginal_x.qs).plot() marginal_x.plot() trace_xs[0].mean(), marginal_x.mean() ``` ## One at a time ``` prior.shape, prior.sum() likelihood = np.empty((len(df), len(xs))) for i, row in df.iterrows(): n = row['Cases'] k = row['Deaths'] likelihood[i] = binom.pmf(k, n, xs) prod = likelihood.prod(axis=0) prod.shape i = 3 all_but_one = prod / likelihood[i] prior hyper_i = get_hyper(prior * all_but_one) hyper_i.sum() prior_i = make_prior(hyper_i, prior_alpha, prior_beta, xs) data = df.loc[i, 'Cases'], df.loc[i, 'Deaths'] data posterior_i = update(prior_i, data) marginal_alpha = Pmf(marginal(posterior_i, 0), prior_alpha.qs) marginal_beta = Pmf(marginal(posterior_i, 1), prior_beta.qs) marginal_x = Pmf(marginal(posterior_i, 2), prior_x.qs) compare_cdf(marginal_alpha, trace4['alpha']) compare_cdf(marginal_beta, trace4['beta']) compare_cdf(marginal_x, trace_xs[i]) ```
github_jupyter
``` from datascience import * import numpy as np %matplotlib inline import pandas as pd import matplotlib.pyplot as plt plt.style.use('seaborn') from scipy import stats from scipy.stats import norm import matplotlib matplotlib.__version__ import seaborn as sns sns.set(color_codes = True) #Data or Fe-based, Cuprates, Hydrides #There were no high T hydrides in the original data set features8 = pd.read_csv("https://raw.githubusercontent.com/9161AD/superconduct-/master/features_H_Cu_Fe2.csv") features8 len(features8) # We Remove the one outlier that contains Hg but no Cu to isolate the Hydrides #Already determined All Fe based SCs contain Cu features_Hydrides1 = features8[~features8.material_name.str.contains("Cu")] features_Hydrides2 = features_Hydrides1[~features_Hydrides1.material_name.str.contains("Hg")] features_Hydrides3 = features_Hydrides2[~features_Hydrides2.material_name.str.contains("Hf")] features_Hydrides4 = features_Hydrides3[~features_Hydrides3.material_name.str.contains("Hs")] features_Hydrides5 = features_Hydrides4[~features_Hydrides4.material_name.str.contains("Ho")] features_Hydrides6 = features_Hydrides5[~features_Hydrides5.material_name.str.contains("Fe")] features_Hydrides6.head() #Hydrides Groups Hydrides = features_Hydrides6.assign(Group='Hydride')[['Group'] + features_Hydrides6.columns.tolist()] Hydrides = Hydrides.drop(Hydrides.columns[1], axis=1) Hydrides.head() len(Hydrides) (len(Hydrides)/len(features8)) * 100 #9% Hydrides #Cuprate Groups --> Isolating Fe then picking out Cu features_Cuprates1 = features8[~features8.material_name.str.contains("Fe")] features_Cuprates2 = features_Cuprates1[features_Cuprates1.material_name.str.contains("Cu")] #Cuprates Groups Cuprates = features_Cuprates2.assign(Group='Cuprate')[['Group'] + features_Cuprates2.columns.tolist()] Cuprates = Cuprates.drop(Cuprates.columns[1], axis=1) Cuprates.head() len(Cuprates) len(Cuprates) (len(Cuprates)/len(features8)) * 100 #60 % Cuprates features_Fe = features8[features8.material_name.str.contains("Fe")] #Iron Groups Iron_Based = features_Fe.assign(Group='Iron-Based')[['Group'] + features_Fe.columns.tolist()] Iron_Based = Iron_Based.drop(Iron_Based.columns[1], axis=1) Iron_Based.head() len(Iron_Based) (len(Iron_Based)/len(features8)) * 100 # 7% Iron Based #Isolated 3 desired Classes Classes = Hydrides.append(Cuprates).append(Iron_Based) len(Classes) (len(Classes)) / 21263 * 100 #Now down to 5.66 % of dataset Box1 = sns.violinplot(x='Group', y='critical_temp', data=Classes) plt plt.title("Classes Critical Temperature Distributions", loc = "left") plt.xlabel("Class") plt.ylabel("Critical Temperature (K)") #Superposition of Jitter with Boxplot Box2 =sns.boxplot(x='Group', y='critical_temp', data = Classes) Box2 = sns.stripplot(x='Group', y = 'critical_temp', data= Classes, color = "orange", jitter = 0.2, size = 2.5) plt.title("Classes Critical Temperature Distributions", loc = "left") plt.xlabel("Class") plt.ylabel("Critical Temperature (K)") g = sns.pairplot(Classes, vars=["critical_temp", "number_of_elements"], hue = "Group") import seaborn as sns; sns.set(style="ticks", color_codes=True, hue = "Group") g = sns.pairplot(Classes) g #Normalized for all classes #features8.hist('critical_temp', bins = 16, range = (10,160), color = 'r', density=1) #plots.title('Critical Temperature for Iron-Based,Cuprates,Hydrides-- High T Superconductors') #plots.xlabel("Temperature (K)") #plots.ylabel("Count") import statsmodels.formula.api as smf #Begins groundwork for setting a linear regression model = 'critical_temp ~ %s'%(" + ".join(Classes.columns.values[2:])) #Multiple Regression Analysis on 3 combined classes linear_regression = smf.ols(model, data = Classes).fit() linear_regression.summary() import statsmodels.formula.api as smf #Begins groundwork for setting a linear regression model = 'critical_temp ~ %s'%(" + ".join(Hydrides.columns.values[2:])) #Train Test on Combined Classes #X contains predictors X1 = Classes.drop(['Group','material_name','critical_temp'],1) X1.head() #Make Y a true column vector containing the mpg for each superconductor Y1 = Classes[['critical_temp']] #Removed Material_names because they are not statistical predictors #, rather just labels Z1 = Classes[['Group', 'material_name']] from sklearn.model_selection import train_test_split # Split X and y into X_ #test size = 0.66 to match previous literature X1_train, X1_test, Y1_train, Y1_test = train_test_split(X1, Y1, test_size=0.66, random_state=1) from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split lineReg = LinearRegression() lineReg.fit(X1_train, Y1_train) lineReg.score(X1_test, Y1_test) #Recent Literature had 74% for full data set. I matched this as well #priro to splitting up by class #See how reducing to single classes affects correlation #Train Test on HYDRIDES #X2 contains predictors X2 = Hydrides.drop(['Group','material_name','critical_temp'],1) len(X2) #Make Y2 a true column vector containing the mpg for each superconductor Y2 = Hydrides[['critical_temp']] #Removed Material_names because they are not statistical predictors #, rather just labels Z2 = Hydrides[['Group', 'material_name']] from sklearn.model_selection import train_test_split # Split X and y into X_ #test size = 0.66 to match previous literature X2_train, X2_test, Y2_train, Y2_test = train_test_split(X2, Y2, test_size=0.66, random_state=1) from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split lineReg = LinearRegression() lineReg.fit(X2_train, Y2_train) lineReg.score(X2_test, Y2_test) #Test Cuprates Variable-3 #X2 contains predictors X3 = Cuprates.drop(['Group','material_name','critical_temp'],1) Y3 = Cuprates[['critical_temp']] Z3 = Cuprates[['Group', 'material_name']] from sklearn.model_selection import train_test_split X3_train, X3_test, Y3_train, Y3_test = train_test_split(X3, Y3, test_size=0.66, random_state=1) from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split lineReg = LinearRegression() lineReg.fit(X3_train, Y3_train) abs(lineReg.score(X3_test, Y3_test)) #Test Fe-Based Variable - 4 #X4 contains predictors X4 = Iron_Based.drop(['Group','material_name','critical_temp'],1) Y4 = Iron_Based[['critical_temp']] Z4 = Iron_Based[['Group', 'material_name']] from sklearn.model_selection import train_test_split X4_train, X4_test, Y4_train, Y4_test = train_test_split(X4, Y4, test_size=0.66, random_state=1) from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split lineReg = LinearRegression() lineReg.fit(X4_train, Y4_train) abs(lineReg.score(X4_test, Y4_test)) Groups = ['Hydrides', 'Cuprates', 'Iron-Based'] Number_Entries =[len(Hydrides),len(Cuprates),len(Iron_Based)] MR_Scores = [-0.78, 0.31, 0.27] Summary = pd.DataFrame({'Class': Groups, 'Number of Materials': Number_Entries, 'Coeffieicent of Multiple Determination': MR_Scores, }) Summary sns.lmplot(x='Number of Materials', y='MR_Scores', data=Summary) plt.ylim(-1,1) plt.xlim(0,1000) ```
github_jupyter
# BB84 Quantum Key Distribution (QKD) Protocol using Qiskit This notebook is a _demonstration_ of the BB84 Protocol for QKD using Qiskit. BB84 is a quantum key distribution scheme developed by Charles Bennett and Gilles Brassard in 1984 ([paper]). The first three sections of the paper are readable and should give you all the necessary information required. ![QKD Setup](https://raw.githubusercontent.com/deadbeatfour/quantum-computing-course/master/img/qkd.png) [paper]: http://researcher.watson.ibm.com/researcher/files/us-bennetc/BB84highest.pdf ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt # Importing standard Qiskit libraries from qiskit import QuantumCircuit, execute from qiskit.providers.aer import QasmSimulator from qiskit.visualization import * ``` ## Choosing bases and encoding states Alice generates two binary strings. One encodes the basis for each qubit: $0 \rightarrow$ Computational basis $1 \rightarrow$ Hadamard basis The other encodes the state: $0 \rightarrow|0\rangle$ or $|+\rangle $ $1 \rightarrow|1\rangle$ or $|-\rangle $ Bob also generates a binary string and uses the same convention to choose a basis for measurement ``` num_qubits = 32 alice_basis = np.random.randint(2, size=num_qubits) alice_state = np.random.randint(2, size=num_qubits) bob_basis = np.random.randint(2, size=num_qubits) print(f"Alice's State:\t {np.array2string(alice_state, separator='')}") print(f"Alice's Bases:\t {np.array2string(alice_basis, separator='')}") print(f"Bob's Bases:\t {np.array2string(bob_basis, separator='')}") ``` ## Creating the circuit Based on the following results: $X|0\rangle = |1\rangle$ $H|0\rangle = |+\rangle$ $ HX|0\rangle = |-\rangle$ Our algorithm to construct the circuit is as follows: 1. Whenever Alice wants to encode 1 in a qubit, she applies an $X$ gate to the qubit. To encode 0, no action is needed. 2. Wherever she wants to encode it in the Hadamard basis, she applies an $H$ gate. No action is necessary to encode a qubit in the computational basis. 3. She then _sends_ the qubits to Bob (symbolically represented in this circuit using wires) 4. Bob measures the qubits according to his binary string. To measure a qubit in the Hadamard basis, he applies an $H$ gate to the corresponding qubit and then performs a mesurement on the computational basis. ``` def make_bb84_circ(enc_state, enc_basis, meas_basis): ''' enc_state: array of 0s and 1s denoting the state to be encoded enc_basis: array of 0s and 1s denoting the basis to be used for encoding 0 -> Computational Basis 1 -> Hadamard Basis meas_basis: array of 0s and 1s denoting the basis to be used for measurement 0 -> Computational Basis 1 -> Hadamard Basis ''' num_qubits = len(enc_state) bb84_circ = QuantumCircuit(num_qubits) # Sender prepares qubits for index in range(len(enc_basis)): if enc_state[index] == 1: bb84_circ.x(index) if enc_basis[index] == 1: bb84_circ.h(index) bb84_circ.barrier() # Receiver measures the received qubits for index in range(len(meas_basis)): if meas_basis[index] == 1: bb84_circ.h(index) bb84_circ.barrier() bb84_circ.measure_all() return bb84_circ ``` ## Creating the key Alice and Bob only keep the bits where their bases match. The following outcomes are possible for each bit sent using the BB84 protocol | Alice's bit | Alice's basis | Alice's State | Bob's basis | Bob's outcome | Bob's bit | Probability | |---------------------- |------------------------ |------------------------ |---------------------- |------------------------ |-------------------- |-------------------- | | 0 | C | 0 | C | 0 | 0 | 1/8 | | 0 | C | 0 | H | + | 0 | 1/16 | | 0 | C | 0 | H | - | 1 | 1/16 | | 0 | H | + | C | 0 | 0 | 1/16 | | 0 | H | + | C | 1 | 1 | 1/16 | | 0 | H | + | H | + | 0 | 1/8 | | 1 | C | 1 | C | 1 | 1 | 1/8 | | 1 | C | 1 | H | + | 0 | 1/16 | | 1 | C | 1 | H | - | 1 | 1/16 | | 1 | H | - | C | 0 | 0 | 1/16 | | 1 | H | - | C | 1 | 1 | 1/16 | | 1 | H | - | H | - | 1 | 1/8 | \begin{align*} P_{\text{same basis}} &= P_A(C)\times P_B(C) + P_A(H)\times P_B(H)\\ &= \frac{1}{2} \times \frac{1}{2} + \frac{1}{2} \times \frac{1}{2} \\ &= \frac{1}{2} \end{align*} Thus, on average, only half of the total bits will be in the final key. It is also interesting to note that half of the key bits will be 0 and the other half will be 1 (again, on average) ``` bb84_circ = make_bb84_circ(alice_state, alice_basis, bob_basis) temp_key = execute(bb84_circ.reverse_bits(),backend=QasmSimulator(),shots=1).result().get_counts().most_frequent() key = '' for i in range(num_qubits): if alice_basis[i] == bob_basis[i]: # Only choose bits where Alice and Bob chose the same basis key += str(temp_key[i]) print(f'The length of the key is {len(key)}') print(f"The key contains {(key).count('0')} zeroes and {(key).count('1')} ones") print(f"Key: {key}") ```
github_jupyter
**[Back to Fan's Intro Stat Table of Content](https://fanwangecon.github.io/Stat4Econ/)** # Rescaling Standard Deviation and Covariance We have various tools at our disposal to summarize variables and the relationship between variables. Imagine that we have multiple toolboxes. This is the first one. There are two levels to this toolbox. ## Three Basic Tools Our three basic tools are: 1. (sample) Mean of X (or Y) 2. (sample) Standard Deviation of X (or Y) 3. (sample) Covariance of X and Y ## Two Rescaling Tools Additionally, we have two tools that combine the tools from the first level: 1. Coefficient of Variation = (Standard Deviation)/(Mean) 2. Correlation = (Covariance of X and Y)/((Standard Deviation of X)*(Standard Deviation of Y)) The tools on the second level rescale the standard deviation and covariance statistics. # Data Examples **The dataset, *EPIStateEduWage2017.csv*, can be downloaded [here](../data/EPIStateEduWage2017.csv).** ## College Education Share and Hourly Wage Two variables: 1. Fraction of individual with college degree in a state + this is in Fraction units, the minimum is 0.00, the maximum is 100 percent, which is 1.00 2. Average hourly salary in the state + this is in Dollar units ``` # Load in Data Tools # For Reading/Loading Data library(readr) library(tibble) library(dplyr) library(ggplot2) # Load in Data df_wgedu <- read_csv('../data/EPIStateEduWage2017.csv') ``` ## A Scatter Plot We can Visualize the Data with a Scatter Plot. There seems to be a positive relationship between the share of individuals in a state with a college education, and the average hourly salary in that state. While most states are along the trend line, we have some states, like WY, that are outliers. WY has a high hourly salary but low share with college education. ``` # Control Graph Size options(repr.plot.width = 5, repr.plot.height = 5) # Draw Scatter Plot # 1. specify x and y # 2. label each state # 3. add in trend line scatter <- ggplot(df_wgedu, aes(x=Share.College.Edu, y=Hourly.Salary)) + geom_point(size=1) + geom_text(aes(label=State), size=3, hjust=-.2, vjust=-.2) + geom_smooth(method=lm) + labs(title = 'Hourly Wage and College Share by States', x = 'Fraction with College Education', y = 'Hourly Wage', caption = 'Economic Policy Institute\n www.epi.org/data/') + theme_bw() print(scatter) ``` ## Standard Deviations and Coefficient of Variation The two variables above are in different units. We first calculate the mean, standard deviation, and covariance. With just these, it is hard to compare the standard deviation of the two variables, which are on different scales. The sample standard deviations for the two variables are: $0.051$ and $1.51$, in fraction and dollar units. Can we say the hourly salary has a larger standard deviation? But it is just a different scale. $1.51$ is a large number, but that does not mean that variable has greater variation than the fraction with college education variable. Converting the Statistics to Coefficient of Variations, now we have: $0.16$ and $0.09$. Because of the division, these are both in fraction units--standard deviations as a fraction of the mean. Now these are more comparable. ``` # We can compute the three basic statistics stats.msdv <- list( # Mean, SD and Var for the College Share variable Shr.Coll.Mean = mean(df_wgedu$Share.College.Edu), Shr.Coll.Std = sd(df_wgedu$Share.College.Edu), Shr.Coll.Var = var(df_wgedu$Share.College.Edu), # Mean, SD and Var for the Hourly Wage Variable Hr.Wage.Mean = mean(df_wgedu$Hourly.Salary), Hr.Wage.Std = sd(df_wgedu$Hourly.Salary), Hr.Wage.Var = var(df_wgedu$Hourly.Salary) ) # We can compute the three basic statistics stats.coefvari <- list( # Coefficient of Variation Shr.Coll.Coef.Variation = (stats.msdv$Shr.Coll.Std)/(stats.msdv$Shr.Coll.Mean), Hr.Wage.Coef.Variation = (stats.msdv$Hr.Wage.Std)/(stats.msdv$Hr.Wage.Mean) ) # Let's Print the Statistics we Computed as_tibble(stats.msdv) as_tibble(stats.coefvari) ``` ## Covariance and Correlation For covariance, hard to tell whether it is large or small. To make comparisons possible, we calculate the coefficient of variations and correlation statistics. The covariance we get is positive: $0.06$, but is this actually large positive relationship? $0.06$ seems like a small number. Rescaling covariance to correlation, the correlation between the two variables is: $0.78$. Since the correlation of two variable is below $-1$ and $+1$, we can now say actually the two variables are very positively related. A higher share of individuals with a college education is strongly positively correlated with a higher hourly salary. ``` # We can compute the three basic statistics states.covcor <- list( # Covariance between the two variables Shr.Wage.Cov = cov(df_wgedu$Hourly.Salary, df_wgedu$Share.College.Edu), # Correlation Shr.Wage.Cor = cor(df_wgedu$Hourly.Salary, df_wgedu$Share.College.Edu), Shr.Wage.Cor.Formula = (cov(df_wgedu$Hourly.Salary, df_wgedu$Share.College.Edu) /(stats.msdv$Shr.Coll.Std*stats.msdv$Hr.Wage.Std)) ) # Let's Print the Statistics we Computed as_tibble(states.covcor) ```
github_jupyter
Iterations, in programming, let coders repeat a set of instructions until a condition is met. Think about this as being stuck in a loop that will continue until something tells you to break out. ## While loop The `while` loop is one of two iteration types you'll learn about. In this loop, you must specify a condition first and then include the code that you want the loop to iterate over. The loop first checks if the condition is `True` and if it is, then it looks at the code inside the loop. When the condition becomes `False`, the code in the loop is skipped over and the program continues executing the rest of your code. If the condition in the loop is `False` to begin with, the code within the loop never executes. During a single loop, the program then goes through the loop and runs the code. Once the code is finished, it looks back at the condition to see if it is still `True`. It's necessary to change the variables in your loop to eventually have a condition that is `False`, or else an infinite loop will occur. As shown in the code below, to write a `while` loop, you must first type "while" and then the condition you'll check before every loop. End the line with a colon and be sure to indent the next line, which will be the actual loop. The code below prints out a countdown for a rocket. As you can see, the countdown variable in the condition section decreases in every loop until it reaches -1, at which point the condition is `False` and the loop ends. Predict what will happen when you run this code, then click the run button to verify you've understood. ``` countdown = 5 while countdown >= 0: print(countdown) countdown = countdown - 1 print("Lift Off") ``` In the following example, the condition is never met and the loop continues forever (if we don't stop it). In this code, the developer forgot to decrease the timer variable, so the condition is always true. ``` # Trying to find life outside our planet timer = 10 while timer > 0: print("Hello, I am from Earth") ``` This is an infinite loop and you must either wait for Python to terminate it or select the stop button at the top of the window. It's best to avoid infinite loops, if that wasn't already apparent. ## For loop `For` loops essentially perform the same task as `while` loops: they tend to focus on iterating a set number of times. `For` loops are great when you want to go through a list and look at every single element. In the code below, we make a list and then go through all the elements and print them out. ``` planets = "Mars", "Saturn", "Jupiter" for planet in planets: print(planet) ```
github_jupyter
My family know I like puzzles so they gave me this one recently: ![Boxed Snake Puzzle](snake-puzzle-boxed.jpg "Boxed Snake Puzzle") When you take it out the box it looks like this: ![Solved Snake Puzzle](snake-puzzle-solved.jpg "Solved Snake Puzzle") And very soon after it looked like this (which explains why I've christened the puzzle "the snake puzzle"): ![Flat Snake Puzzle](snake-puzzle-flat.jpg "Flat Snake Puzzle") The way it works is that there is a piece of elastic running through each block. On the majority of the blocks the elastic runs straight through, but on some of the it goes through a 90 degree bend. The puzzle is trying to make it back into a cube. After playing with it a while, I realised that it really is quite hard so I decided to write a program to solve it. The first thing to do is find a representation for the puzzle. Here is the one I chose. ``` # definition - number of straight bits, before 90 degree bend snake = [3,2,2,2,1,1,1,2,2,1,1,2,1,2,1,1,2] assert sum(snake) == 27 ``` If you look at the picture of it above where it is flattened you can see where the numbers came from. Start from the right hand side. That also gives us a way of calculating how many combinations there are. At each 90 degree joint, there are 4 possible rotations (ignoring the rotations of the 180 degree blocks) so there are ``` 4**len(snake) ``` 17 billion combinations. That will include some rotations and reflections, but either way it is a big number. However it is very easy to know when you've gone wrong with this kind of puzzle - as soon as you place a piece outside of the boundary of the 3x3x3 block you know it is wrong and should try something different. So how to represent the solution? The way I've chosen is to represent it as a 5x5x5 cube. This is larger than it needs to be but if we fill in the edges then we don't need to do any complicated comparisons to see if a piece is out of bounds. This is a simple trick but it saves a lot of code. I've also chosen to represent the 3d structure not as a 3d array but as a 1D array (or `list` in python speak) of length 5*5*5 = 125. To move in the `x` direction you add 1, to move in the `y` direction you add 5 and to move in the `z` direction you move 25. This simplifies the logic of the solver considerably - we don't need to deal with vectors. The basic definitions of the cube look like this: ``` N = 5 xstride=1 # number of pieces to move in the x direction ystride=N # number of pieces to move in the y direction zstride=N*N # number of pieces to move in the z direction ``` In our `list` we will represent empty space with `0` and space which can't be used with `-1`. ``` empty = 0 ``` Now define the empty cube with the boundary round the edges. ``` # Define cube as 5 x 5 x 5 with filled in edges but empty middle for # easy edge detection top = [-1]*N*N middle = [-1]*5 + [-1,0,0,0,-1]*3 + [-1]*5 cube = top + middle*3 + top ``` We're going to want a function to turn `x, y, z` co-ordinates into an index in the `cube` list. ``` def pos(x, y, z): """Convert x,y,z into position in cube list""" return x+y*ystride+z*zstride ``` So let's see what that cube looks like... ``` def print_cube(cube, margin=1): """Print the cube""" for z in range(margin,N-margin): for y in range(margin,N-margin): for x in range(margin,N-margin): v = cube[pos(x,y,z)] if v == 0: s = " . " else: s = "%02d " % v print(s, sep="", end="") print() print() print_cube(cube, margin = 0) ``` Normally we'll print it without the margin. Now let's work out how to place a segment. Assuming that the last piece was placed at `position` we want to place a segment of `length` in `direction`. Note the `assert` to check we aren't placing stuff on top of previous things, or out of the edges. ``` def place(cube, position, direction, length, piece_number): """Place a segment in the cube""" for _ in range(length): position += direction assert cube[position] == empty cube[position] = piece_number piece_number += 1 return position ``` Let's just try placing some segments and see what happens. ``` cube2 = cube[:] # copy the cube place(cube2, pos(0,1,1), xstride, 3, 1) print_cube(cube2) place(cube2, pos(3,1,1), ystride, 2, 4) print_cube(cube2) place(cube2, pos(3,3,1), zstride, 2, 6) print_cube(cube2) ``` The next thing we'll need is to undo a place. You'll see why in a moment. ``` def unplace(cube, position, direction, length): """Remove a segment from the cube""" for _ in range(length): position += direction cube[position] = empty unplace(cube2, pos(3,3,1), zstride, 2) print_cube(cube2) ``` Now let's write a function which returns whether a move is valid given a current `position` and a `direction` and a `length` of the segment we are trying to place. ``` def is_valid(cube, position, direction, length): """Returns True if a move is valid""" for _ in range(length): position += direction if cube[position] != empty: return False return True is_valid(cube2, pos(3,3,1), zstride, 2) is_valid(cube2, pos(3,3,1), zstride, 3) ``` Given `is_valid` it is now straight forward to work out what moves are possible at a given time, given a `cube` with a `position`, a `direction` and a `length` we are trying to place. ``` # directions next piece could go in directions = [xstride, -xstride, ystride, -ystride, zstride, -zstride] def moves(cube, position, direction, length): """Returns the valid moves for the current position""" valid_moves = [] for new_direction in directions: # Can't carry on in same direction, or the reverse of the same direction if new_direction == direction or new_direction == -direction: continue if is_valid(cube, position, new_direction, length): valid_moves.append(new_direction) return valid_moves moves(cube2, pos(3,3,1), ystride, 2) ``` So that is telling us that you can insert a segment of length 2 using a direction of `-xstride` or `zstride`. If you look at previous `print_cube()` output you'll see those are the only possible moves. Now we have all the bits to build a recursive solver. ``` def solve(cube, position, direction, snake, piece_number): """Recursive cube solver""" if len(snake) == 0: print("Solution") print_cube(cube) return length, snake = snake[0], snake[1:] valid_moves = moves(cube, position, direction, length) for new_direction in valid_moves: new_position = place(cube, position, new_direction, length, piece_number) solve(cube, new_position, new_direction, snake, piece_number+length) unplace(cube, position, new_direction, length) ``` This works by being passed in the `snake` of moves left. If there are no moves left then it must be solved, so we print the solution. Otherwise it takes the head off the `snake` with `length, snake = snake[0], snake[1:]` and makes the list of valid moves of that `length`. Then we `place` each move, and try to `solve` that cube using a recursive call to `solve`. We `unplace` the move so we can try again. This very quickly runs through all the possible solutions. ``` # Start just off the side position = pos(0,1,1) direction = xstride length = snake[0] # Place the first segment along one edge - that is the only possible place it can go position = place(cube, position, direction, length, 1) # Now solve! solve(cube, position, direction, snake[1:], length+1) ``` Wow! It came up with 2 solutions! However they are the same solution just rotated and reflected. But how do you use the solution? Starting from the correct end of the snake, place each piece into its corresponding number. Take the first layer of the solution as being the bottom (or top - whatever is easiest), the next layer is the middle and the one after the top. ![Flat Snake Puzzle Numbered](snake-puzzle-flat-numbered.jpg "Flat Snake Puzzle Numbered") After a bit of fiddling around you'll get... ![Solved Snake Puzzle](snake-puzzle-solved.jpg "Solved Snake Puzzle") I hope you enjoyed that introduction to puzzle solving with computer. If you want to try one yourselves, use the same technique to solve solitaire.
github_jupyter