# CaseLaw dataset to assist with Law Research - EDA

---

<dl>
  <dt>Acquiring the dataset</dt>
  <dd>We initially use a dataset of all US cases as a proof of concept for training.</dd>
  <dd>The cases are available in XML format; we will load them into MongoDB or Firebase depending on how unstructured the data turns out to be.</dd>
  <dd>Dataset URL: (https://case.law/)</dd>
  <dt>Research</dt>
  <dd>We are looking into <em>NLP</em>, <em>LSTM</em> and <em>Sentiment Analysis</em>.</dd>
</dl>

```
import jsonlines
from pymongo import MongoClient

client = MongoClient()
db = client.legal_ai
cases = db.cases

# Sanity check of the year-prefix comparison used below
some_date = '1820-01'
print(int(some_date[0:4]) < 1950)

# Insert every case decided after 1950 into MongoDB
id_saved = []
with jsonlines.open('../data.jsonl') as reader:
    for obj in reader:
        if int(obj['decision_date'][0:4]) > 1950:
            case_id = cases.insert_one(obj).inserted_id
            id_saved.append(case_id)

len(id_saved)
```

## Testing out Similarity Mechanism

---

### Setup

- Test PyDictionary to build keywords
- Construct a mechanism to extract keywords and store them in a searchable manner (a rough NLTK sketch is included at the end of this notebook).

---

### Search

- Build keywords out of the search query
- Search among the dataset keywords
- Nearest dates, highest weight and highest precedence show up first
- Pagination scroll continues the search.

```
# NLTK
```

## Transforming dataset

---

### Extract the first record and study it

- Identify the key elements that need to be transformed & list them
- Build a mechanism to transform one datapoint.

---

### Perform for entire dataset

- Run a loop and apply the same changes to every datapoint.

```
# Extracting the first element
first_case = cases.find_one()

import xml.etree.ElementTree as ET

root = ET.fromstring(first_case['casebody']['data'])
root
```

# Getting the case body cleaned into a separate field on the db

```
# Concatenate the text of the casebody elements, skipping footnote markers and
# author tags (tag names carry an XML namespace prefix, hence the index("}") slicing)
summary = ''
for child in root:
    for sub_child in child:
        if 'footnotemark' in sub_child.tag[sub_child.tag.index("}")+1:] or 'author' in sub_child.tag[sub_child.tag.index("}")+1:]:
            continue
        if sub_child.text:
            summary += sub_child.text + "\n"
print(summary)
```

# Do the same for all the files now!

```
all_cases = cases.find()
all_cases.count()

for each_case in all_cases:
    root = ET.fromstring(each_case['casebody']['data'])
    summary = ''
    for child in root:
        for sub_child in child:
            if 'footnotemark' in sub_child.tag[sub_child.tag.index("}")+1:] or 'author' in sub_child.tag[sub_child.tag.index("}")+1:]:
                continue
            if sub_child.text:
                summary += sub_child.text + "\n"
    myquery = {"_id": each_case['_id']}
    newvalues = {"$set": {"summary": summary}}
    cases.update_one(myquery, newvalues)
```

# Change Decision Date to MongoDB date format

```
import datetime

# Re-query, since the loop above exhausted the previous cursor
all_cases = cases.find()

for each_case in all_cases:
    # Dates come with varying precision, so try formats from most to least specific
    for fmt in ("%Y-%m-%d %H:%M:%S", "%Y-%m-%d", "%Y-%m", "%Y"):
        try:
            decision_date = datetime.datetime.strptime(each_case['decision_date'], fmt)
            break
        except ValueError:
            continue
    myquery = {"_id": each_case['_id']}
    newvalues = {"$set": {"decision_date": decision_date}}
    cases.update_one(myquery, newvalues)
```

# Elastic Search

```
import elasticsearch
from datetime import datetime

# Take only the latest cases (use a datetime, which pymongo can encode, rather than a date)
cutoff = datetime(2000, 1, 1)
all_cases = cases.find({"decision_date": {"$gte": cutoff}})
all_cases.count()
```
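The `# NLTK` cell above is still a placeholder. Below is a minimal sketch of what the keyword-extraction step of the similarity mechanism could look like, assuming the cleaned `summary` field produced above; the tokenizer, stop-word filtering and frequency cutoff are illustrative assumptions, not the final design.

```
# Hypothetical keyword extraction for the similarity mechanism (illustrative sketch only)
import nltk
from nltk.corpus import stopwords
from nltk.probability import FreqDist

nltk.download('punkt')
nltk.download('stopwords')

stop_words = set(stopwords.words('english'))

def extract_keywords(summary, top_n=20):
    """Return the top_n most frequent non-stopword tokens of a case summary."""
    tokens = [t.lower() for t in nltk.word_tokenize(summary) if t.isalpha()]
    tokens = [t for t in tokens if t not in stop_words]
    return [word for word, _ in FreqDist(tokens).most_common(top_n)]

# Store keywords next to each case so that searches can be matched against them later
first_case = cases.find_one()
cases.update_one({'_id': first_case['_id']},
                 {'$set': {'keywords': extract_keywords(first_case['summary'])}})
```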
TSG034 - Livy logs ================== Description ----------- Steps ----- ### Parameters ``` import re tail_lines = 500 pod = None # All container = 'hadoop-livy-sparkhistory' log_files = [ '/var/log/supervisor/log/livy*' ] expressions_to_analyze = [ re.compile(".{17} WARN "), re.compile(".{17} ERROR ") ] ``` ### Instantiate Kubernetes client ``` # Instantiate the Python Kubernetes client into 'api' variable import os from IPython.display import Markdown try: from kubernetes import client, config from kubernetes.stream import stream if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ: config.load_incluster_config() else: try: config.load_kube_config() except: display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.')) raise api = client.CoreV1Api() print('Kubernetes client instantiated') except ImportError: display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.')) raise ``` ### Get the namespace for the big data cluster Get the namespace of the Big Data Cluster from the Kuberenetes API. **NOTE:** If there is more than one Big Data Cluster in the target Kubernetes cluster, then either: - set \[0\] to the correct value for the big data cluster. - set the environment variable AZDATA\_NAMESPACE, before starting Azure Data Studio. ``` # Place Kubernetes namespace name for BDC into 'namespace' variable if "AZDATA_NAMESPACE" in os.environ: namespace = os.environ["AZDATA_NAMESPACE"] else: try: namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name except IndexError: from IPython.display import Markdown display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.')) display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.')) display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.')) raise print('The kubernetes namespace for your big data cluster is: ' + namespace) ``` ### Get tail for log ``` # Display the last 'tail_lines' of files in 'log_files' list pods = api.list_namespaced_pod(namespace) entries_for_analysis = [] for p in pods.items: if pod is None or p.metadata.name == pod: for c in p.spec.containers: if container is None or c.name == container: for log_file in log_files: print (f"- LOGS: '{log_file}' for CONTAINER: '{c.name}' in POD: '{p.metadata.name}'") try: output = stream(api.connect_get_namespaced_pod_exec, p.metadata.name, namespace, command=['/bin/sh', '-c', f'tail -n {tail_lines} {log_file}'], container=c.name, stderr=True, stdout=True) except Exception: print (f"FAILED to get LOGS for CONTAINER: {c.name} in POD: {p.metadata.name}") else: for line in output.split('\n'): for expression in expressions_to_analyze: if expression.match(line): entries_for_analysis.append(line) print(line) print("") print(f"{len(entries_for_analysis)} log entries found for further analysis.") ``` ### Analyze log entries and suggest relevant Troubleshooting Guides ``` # Analyze log entries and suggest further relevant troubleshooting guides from IPython.display import Markdown import os import json import requests import ipykernel import datetime from urllib.parse import urljoin from notebook import notebookapp def 
get_notebook_name(): """Return the full path of the jupyter notebook. Some runtimes (e.g. ADS) have the kernel_id in the filename of the connection file. If so, the notebook name at runtime can be determined using `list_running_servers`. Other runtimes (e.g. azdata) do not have the kernel_id in the filename of the connection file, therefore we are unable to establish the filename """ connection_file = os.path.basename(ipykernel.get_connection_file()) # If the runtime has the kernel_id in the connection filename, use it to # get the real notebook name at runtime, otherwise, use the notebook # filename from build time. try: kernel_id = connection_file.split('-', 1)[1].split('.')[0] except: pass else: for servers in list(notebookapp.list_running_servers()): try: response = requests.get(urljoin(servers['url'], 'api/sessions'), params={'token': servers.get('token', '')}, timeout=.01) except: pass else: for nn in json.loads(response.text): if nn['kernel']['id'] == kernel_id: return nn['path'] def load_json(filename): with open(filename, encoding="utf8") as json_file: return json.load(json_file) def get_notebook_rules(): """Load the notebook rules from the metadata of this notebook (in the .ipynb file)""" file_name = get_notebook_name() if file_name == None: return None else: j = load_json(file_name) if "azdata" not in j["metadata"] or \ "expert" not in j["metadata"]["azdata"] or \ "log_analyzer_rules" not in j["metadata"]["azdata"]["expert"]: return [] else: return j["metadata"]["azdata"]["expert"]["log_analyzer_rules"] rules = get_notebook_rules() if rules == None: print("") print(f"Log Analysis only available when run in Azure Data Studio. Not available when run in azdata.") else: print(f"Applying the following {len(rules)} rules to {len(entries_for_analysis)} log entries for analysis, looking for HINTs to further troubleshooting.") print(rules) hints = 0 if len(rules) > 0: for entry in entries_for_analysis: for rule in rules: if entry.find(rule[0]) != -1: print (entry) display(Markdown(f'HINT: Use [{rule[2]}]({rule[3]}) to resolve this issue.')) hints = hints + 1 print("") print(f"{len(entries_for_analysis)} log entries analyzed (using {len(rules)} rules). {hints} further troubleshooting hints made inline.") print('Notebook execution complete.') ```
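The analysis loop above only assumes that each entry of the `azdata.expert.log_analyzer_rules` notebook metadata is list-like, with the search string at index 0 and the hint title and link at indexes 2 and 3 (index 1 is not referenced here). A hypothetical entry, purely to illustrate that shape (the substring, guide name and path below are made up):

```
# Illustrative only - not a rule actually shipped in this notebook's metadata
log_analyzer_rules = [
    [
        "java.lang.OutOfMemoryError",               # rule[0]: substring searched for in each log entry
        "",                                         # rule[1]: not used by the loop above
        "TSG000 - Example troubleshooting guide",   # rule[2]: title shown in the HINT
        "../diagnose/tsg000-example.ipynb",         # rule[3]: link rendered in the HINT
    ],
]
```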
# Training and Evaluating Machine Learning Models in cuML This notebook explores several basic machine learning estimators in cuML, demonstrating how to train them and evaluate them with built-in metrics functions. All of the models are trained on synthetic data, generated by cuML's dataset utilities. 1. Random Forest Classifier 2. UMAP 3. DBSCAN 4. Linear Regression [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rapidsai/cuml/blob/branch-0.15/docs/source/estimator_intro.ipynb) ### Shared Library Imports ``` import cuml from cupy import asnumpy from joblib import dump, load ``` ## 1. Classification ### Random Forest Classification and Accuracy metrics The Random Forest algorithm classification model builds several decision trees, and aggregates each of their outputs to make a prediction. For more information on cuML's implementation of the Random Forest Classification model please refer to : https://docs.rapids.ai/api/cuml/stable/api.html#cuml.ensemble.RandomForestClassifier Accuracy score is the ratio of correct predictions to the total number of predictions. It is used to measure the performance of classification models. For more information on the accuracy score metric please refer to: https://en.wikipedia.org/wiki/Accuracy_and_precision For more information on cuML's implementation of accuracy score metrics please refer to: https://docs.rapids.ai/api/cuml/stable/api.html#cuml.metrics.accuracy.accuracy_score The cell below shows an end to end pipeline of the Random Forest Classification model. Here the dataset was generated by using sklearn's make_classification dataset. The generated dataset was used to train and run predict on the model. Random forest's performance is evaluated and then compared between the values obtained from the cuML and sklearn accuracy metrics. ``` from cuml.datasets.classification import make_classification from cuml.preprocessing.model_selection import train_test_split from cuml.ensemble import RandomForestClassifier as cuRF from sklearn.metrics import accuracy_score # synthetic dataset dimensions n_samples = 1000 n_features = 10 n_classes = 2 # random forest depth and size n_estimators = 25 max_depth = 10 # generate synthetic data [ binary classification task ] X, y = make_classification ( n_classes = n_classes, n_features = n_features, n_samples = n_samples, random_state = 0 ) X_train, X_test, y_train, y_test = train_test_split( X, y, random_state = 0 ) model = cuRF( max_depth = max_depth, n_estimators = n_estimators, seed = 0 ) trained_RF = model.fit ( X_train, y_train ) predictions = model.predict ( X_test ) cu_score = cuml.metrics.accuracy_score( y_test, predictions ) sk_score = accuracy_score( asnumpy( y_test ), asnumpy( predictions ) ) print( " cuml accuracy: ", cu_score ) print( " sklearn accuracy : ", sk_score ) # save dump( trained_RF, 'RF.model') # to reload the model uncomment the line below loaded_model = load('RF.model') ``` ## Clustering ### UMAP and Trustworthiness metrics UMAP is a dimensionality reduction algorithm which performs non-linear dimension reduction. It can also be used for visualization. For additional information on the UMAP model please refer to the documentation on https://docs.rapids.ai/api/cuml/stable/api.html#cuml.UMAP Trustworthiness is a measure of the extent to which the local structure is retained in the embedding of the model. 
Therefore, if a sample predicted by the model lied within the unexpected region of the nearest neighbors, then those samples would be penalized. For more information on the trustworthiness metric please refer to: https://scikit-learn.org/dev/modules/generated/sklearn.manifold.t_sne.trustworthiness.html the documentation for cuML's implementation of the trustworthiness metric is: https://docs.rapids.ai/api/cuml/stable/api.html#cuml.metrics.trustworthiness.trustworthiness The cell below shows an end to end pipeline of UMAP model. Here, the blobs dataset is created by cuml's equivalent of make_blobs function to be used as the input. The output of UMAP's fit_transform is evaluated using the trustworthiness function. The values obtained by sklearn and cuml's trustworthiness are compared below. ``` from cuml.datasets import make_blobs from cuml.manifold.umap import UMAP as cuUMAP from sklearn.manifold import trustworthiness import numpy as np n_samples = 1000 n_features = 100 cluster_std = 0.1 X_blobs, y_blobs = make_blobs( n_samples = n_samples, cluster_std = cluster_std, n_features = n_features, random_state = 0, dtype=np.float32 ) trained_UMAP = cuUMAP( n_neighbors = 10 ).fit( X_blobs ) X_embedded = trained_UMAP.transform( X_blobs ) cu_score = cuml.metrics.trustworthiness( X_blobs, X_embedded ) sk_score = trustworthiness( asnumpy( X_blobs ), asnumpy( X_embedded ) ) print(" cuml's trustworthiness score : ", cu_score ) print(" sklearn's trustworthiness score : ", sk_score ) # save dump( trained_UMAP, 'UMAP.model') # to reload the model uncomment the line below # loaded_model = load('UMAP.model') ``` ### DBSCAN and Adjusted Random Index DBSCAN is a popular and a powerful clustering algorithm. For additional information on the DBSCAN model please refer to the documentation on https://docs.rapids.ai/api/cuml/stable/api.html#cuml.DBSCAN We create the blobs dataset using the cuml equivalent of make_blobs function. Adjusted random index is a metric which is used to measure the similarity between two data clusters, and it is adjusted to take into consideration the chance grouping of elements. For more information on Adjusted random index please refer to: https://en.wikipedia.org/wiki/Rand_index The cell below shows an end to end model of DBSCAN. The output of DBSCAN's fit_predict is evaluated using the Adjusted Random Index function. The values obtained by sklearn and cuml's adjusted random metric are compared below. 
``` from cuml.datasets import make_blobs from cuml import DBSCAN as cumlDBSCAN from sklearn.metrics import adjusted_rand_score import numpy as np n_samples = 1000 n_features = 100 cluster_std = 0.1 X_blobs, y_blobs = make_blobs( n_samples = n_samples, n_features = n_features, cluster_std = cluster_std, random_state = 0, dtype=np.float32 ) cuml_dbscan = cumlDBSCAN( eps = 3, min_samples = 2) trained_DBSCAN = cuml_dbscan.fit( X_blobs ) cu_y_pred = trained_DBSCAN.fit_predict ( X_blobs ) cu_adjusted_rand_index = cuml.metrics.cluster.adjusted_rand_score( y_blobs, cu_y_pred ) sk_adjusted_rand_index = adjusted_rand_score( asnumpy(y_blobs), asnumpy(cu_y_pred) ) print(" cuml's adjusted random index score : ", cu_adjusted_rand_index) print(" sklearn's adjusted random index score : ", sk_adjusted_rand_index) # save and optionally reload dump( trained_DBSCAN, 'DBSCAN.model') # to reload the model uncomment the line below # loaded_model = load('DBSCAN.model') ``` ## Regression ### Linear regression and R^2 score Linear Regression is a simple machine learning model where the response y is modelled by a linear combination of the predictors in X. R^2 score is also known as the coefficient of determination. It is used as a metric for scoring regression models. It scores the output of the model based on the proportion of total variation of the model. For more information on the R^2 score metrics please refer to: https://en.wikipedia.org/wiki/Coefficient_of_determination For more information on cuML's implementation of the r2 score metrics please refer to : https://docs.rapids.ai/api/cuml/stable/api.html#cuml.metrics.regression.r2_score The cell below uses the Linear Regression model to compare the results between cuML and sklearn trustworthiness metric. For more information on cuML's implementation of the Linear Regression model please refer to : https://docs.rapids.ai/api/cuml/stable/api.html#linear-regression ``` from cuml.datasets import make_regression from cuml.preprocessing.model_selection import train_test_split from cuml.linear_model import LinearRegression as cuLR from sklearn.metrics import r2_score n_samples = 2**10 n_features = 100 n_info = 70 X_reg, y_reg = make_regression( n_samples = n_samples, n_features = n_features, n_informative = n_info, random_state = 123 ) X_reg_train, X_reg_test, y_reg_train, y_reg_test = train_test_split( X_reg, y_reg, train_size = 0.8, random_state = 10 ) cuml_reg_model = cuLR( fit_intercept = True, normalize = True, algorithm = 'eig' ) trained_LR = cuml_reg_model.fit( X_reg_train, y_reg_train ) cu_preds = trained_LR.predict( X_reg_test ) cu_r2 = cuml.metrics.r2_score( y_reg_test, cu_preds ) sk_r2 = r2_score( asnumpy( y_reg_test ), asnumpy( cu_preds ) ) print("cuml's r2 score : ", cu_r2) print("sklearn's r2 score : ", sk_r2) # save and reload dump( trained_LR, 'LR.model') # to reload the model uncomment the line below # loaded_model = load('LR.model') ```
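As a quick sanity check on the metric definitions referenced throughout this notebook, accuracy and R^2 can also be recomputed by hand from the arrays produced in the cells above; the sketch below assumes the classification variables (`predictions`, `y_test`) and regression variables (`cu_preds`, `y_reg_test`) are still in scope.

```
import numpy as np

# Accuracy: fraction of correct predictions (compare with cu_score / sk_score above)
manual_acc = (asnumpy(predictions) == asnumpy(y_test)).mean()

# R^2: 1 - SS_res / SS_tot (compare with cu_r2 / sk_r2 above)
y_true = asnumpy(y_reg_test)
y_hat = asnumpy(cu_preds)
manual_r2 = 1 - np.sum((y_true - y_hat) ** 2) / np.sum((y_true - np.mean(y_true)) ** 2)

print("manual accuracy :", manual_acc)
print("manual r2 score :", manual_r2)
```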
# Radiative Cores & Convective Envelopes Analysis of how magnetic fields influence the extent of radiative cores and convective envelopes in young, pre-main-sequence stars. Begin with some preliminaries. ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt from scipy.interpolate import interp1d ``` Load a standard and magnetic isochrone with equivalent ages. Here, the adopted age is 10 Myr to look specifically at the predicted internal structure of stars in Upper Scorpius. ``` # read standard 10 Myr isochrone iso_std = np.genfromtxt('../models/iso/std/dmestar_00010.0myr_z+0.00_a+0.00_phx.iso') # read standard 5 Myr isochrone iso_5my = np.genfromtxt('../models/iso/std/dmestar_00005.0myr_z+0.00_a+0.00_phx.iso') # read magnetic isochrone iso_mag = np.genfromtxt('../models/iso/mag/dmestar_00010.0myr_z+0.00_a+0.00_phx_magBeq.iso') ``` The magnetic isochrone is known to begin at a lower mass than the standard isochrone and both isochrones have gaps where individual models failed to converge. Gaps need not occur at the same masses along each isochrone. To overcome these inconsistencies, we can interpolate both isochrones onto a pre-defined mass domain. ``` masses = np.arange(0.09, 1.70, 0.01) # new mass domain # create an interpolation curve for a standard isochrone icurve = interp1d(iso_std[:,0], iso_std, axis=0, kind='cubic') # and transform to new mass domain iso_std_eq = icurve(masses) # create interpolation curve for standard 5 Myr isochrone icurve = interp1d(iso_5my[:,0], iso_5my, axis=0, kind='linear') # and transform to a new mass domain iso_5my_eq = icurve(masses) # create an interpolation curve for a magnetic isochrone icurve = interp1d(iso_mag[:,0], iso_mag, axis=0, kind='cubic') # and transform to new mass domain iso_mag_eq = icurve(masses) ``` Let's compare the interpolated isochrones to the original, just to be sure that the resulting isochrones are smooth. ``` plt.plot(10**iso_std[:, 1], iso_std[:, 3], '-', lw=4, color='red') plt.plot(10**iso_std_eq[:, 1], iso_std_eq[:, 3], '--', lw=4, color='black') plt.plot(10**iso_mag[:, 1], iso_mag[:, 3], '-', lw=4, color='blue') plt.plot(10**iso_mag_eq[:, 1], iso_mag_eq[:, 3], '--', lw=4, color='black') plt.grid() plt.xlim(2500., 8000.) plt.ylim(-2, 1.1) plt.xlabel('$T_{\\rm eff}\ [K]$', fontsize=20) plt.ylabel('$\\log(L / L_{\\odot})$', fontsize=20) ``` The interpolation appears to have worked well as there are no egregious discrepancies between the real and interpolated isochrones. We can now analyze the properties of the radiative cores and the convective envelopes. Beginning with the radiative core, we can look as a function of stellar properties, how much of the total stellar mass is contained in the radiative core. 
``` # as a function of stellar mass plt.plot(iso_std_eq[:, 0], 1.0 - iso_std_eq[:, -1]/iso_std_eq[:, 0], '--', lw=3, color='#333333') plt.plot(iso_5my_eq[:, 0], 1.0 - iso_5my_eq[:, -1]/iso_5my_eq[:, 0], '-.', lw=3, color='#333333') plt.plot(iso_mag_eq[:, 0], 1.0 - iso_mag_eq[:, -1]/iso_mag_eq[:, 0], '-' , lw=4, color='#01a9db') plt.grid() plt.xlabel('${\\rm Stellar Mass}\ [M_{\\odot}]$', fontsize=20) plt.ylabel('$M_{\\rm rad\ core}\ /\ M_{\\star}$', fontsize=20) # as a function of effective temperature plt.plot(10**iso_std_eq[:, 1], 1.0 - iso_std_eq[:, -1]/iso_std_eq[:, 0], '--', lw=3, color='#333333') plt.plot(10**iso_5my_eq[:, 1], 1.0 - iso_5my_eq[:, -1]/iso_5my_eq[:, 0], '-.', lw=3, color='#333333') plt.plot(10**iso_mag_eq[:, 1], 1.0 - iso_mag_eq[:, -1]/iso_mag_eq[:, 0], '-' , lw=4, color='#01a9db') plt.grid() plt.xlim(3000., 7000.) plt.xlabel('${\\rm Effective Temperature}\ [K]$', fontsize=20) plt.ylabel('$M_{\\rm rad\ core}\ /\ M_{\\star}$', fontsize=20) ``` Now let's look at the relative difference in radiative core mass as a function of these stellar properties. ``` # as a function of stellar mass (note, there is a minus sign switch b/c we tabulate # convective envelope mass) plt.plot(iso_mag_eq[:, 0], (iso_mag_eq[:, -1] - iso_std_eq[:, -1]), '-' , lw=4, color='#01a9db') plt.plot(iso_mag_eq[:, 0], (iso_mag_eq[:, -1] - iso_5my_eq[:, -1]), '--' , lw=4, color='#01a9db') plt.grid() plt.xlabel('${\\rm Stellar Mass}\ [M_{\\odot}]$', fontsize=20) plt.ylabel('$\\Delta M_{\\rm rad\ core}\ [M_{\\odot}]$', fontsize=20) ``` Analysis ``` # interpolate into the temperature domain Teffs = np.log10(np.arange(3050., 7000., 50.)) icurve = interp1d(iso_std[:, 1], iso_std, axis=0, kind='linear') iso_std_te = icurve(Teffs) icurve = interp1d(iso_5my[:, 1], iso_5my, axis=0, kind='linear') iso_5my_te = icurve(Teffs) icurve = interp1d(iso_mag[:, 1], iso_mag, axis=0, kind='linear') iso_mag_te = icurve(Teffs) # as a function of stellar mass # (note, there is a minus sign switch b/c we tabulate convective envelope mass) # # plotting: standard - magnetic where + implies plt.plot(10**Teffs, (iso_mag_te[:, 0] - iso_mag_te[:, -1] - iso_std_te[:, 0] + iso_std_te[:, -1]), '-' , lw=4, color='#01a9db') plt.plot(10**Teffs, (iso_mag_te[:, 0] - iso_mag_te[:, -1] - iso_5my_te[:, 0] + iso_5my_te[:, -1]), '--' , lw=4, color='#01a9db') np.savetxt('../models/rad_core_comp.txt', np.column_stack((iso_std_te, iso_mag_te)), fmt="%10.6f") np.savetxt('../models/rad_core_comp_dage.txt', np.column_stack((iso_5my_te, iso_mag_te)), fmt="%10.6f") plt.grid() plt.xlim(3000., 7000.) plt.xlabel('${\\rm Effective Temperature}\ [K]$', fontsize=20) plt.ylabel('$\\Delta M_{\\rm rad\ core}\ [M_{\\odot}]$', fontsize=20) ``` Stars are fully convective below 3500 K, regardless of whether there is magnetic inhibition of convection. On the other extreme, stars hotter than about 6500 K are approaching ignition of the CN-cycle, which coincides with the disappearnce of the outer convective envelope. However, delayed contraction means that stars of a given effective temperature have a higher mass in the magnetic case, which leads to a slight mass offset once the radiative core comprises nearly 100% of the star. Note that our use of the term "radiative core" is technically invalid in this regime due to the presence of a convective core.
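A note on the column convention used in the plots above: the isochrone files tabulate the total stellar mass in the first column and the convective-envelope mass in the last, so the radiative-core mass fraction is recovered as one minus their ratio. A small helper (assuming that same column layout) makes the quantity being plotted explicit:

```
def rad_core_mass_fraction(iso):
    """M_rad_core / M_star, assuming column 0 holds the stellar mass and the
    last column holds the convective-envelope mass (both in solar masses)."""
    return 1.0 - iso[:, -1] / iso[:, 0]

# equivalent to the curves plotted above
plt.plot(iso_mag_eq[:, 0], rad_core_mass_fraction(iso_mag_eq), '-', lw=2)
plt.xlabel('${\\rm Stellar Mass}\ [M_{\\odot}]$', fontsize=20)
plt.ylabel('$M_{\\rm rad\ core}\ /\ M_{\\star}$', fontsize=20)
```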
DIFAX Replication ================= This example replicates the traditional DIFAX images for upper-level observations. By: Kevin Goebbert Observation data comes from Iowa State Archive, accessed through the Siphon package. Contour data comes from the GFS 0.5 degree analysis. Classic upper-level data of Geopotential Height and Temperature are plotted. ``` import urllib.request from datetime import datetime, timedelta import cartopy.crs as ccrs import cartopy.feature as cfeature import matplotlib.pyplot as plt import metpy.calc as mpcalc import numpy as np import xarray as xr from metpy.plots import StationPlot from metpy.units import units from siphon.simplewebservice.iastate import IAStateUpperAir ``` Plotting High/Low Symbols ------------------------- A helper function to plot a text symbol (e.g., H, L) for relative maximum/minimum for a given field (e.g., geopotential height). ``` def plot_maxmin_points(lon, lat, data, extrema, nsize, symbol, color='k', plotValue=True, transform=None): """ This function will find and plot relative maximum and minimum for a 2D grid. The function can be used to plot an H for maximum values (e.g., High pressure) and an L for minimum values (e.g., low pressue). It is best to used filetered data to obtain a synoptic scale max/min value. The symbol text can be set to a string value and optionally the color of the symbol and any plotted value can be set with the parameter color. Parameters ---------- lon : 2D array Plotting longitude values lat : 2D array Plotting latitude values data : 2D array Data that you wish to plot the max/min symbol placement extrema : str Either a value of max for Maximum Values or min for Minimum Values nsize : int Size of the grid box to filter the max and min values to plot a reasonable number symbol : str Text to be placed at location of max/min value color : str Name of matplotlib colorname to plot the symbol (and numerical value, if plotted) plot_value : Boolean (True/False) Whether to plot the numeric value of max/min point Return ------ The max/min symbol will be plotted on the current axes within the bounding frame (e.g., clip_on=True) """ from scipy.ndimage.filters import maximum_filter, minimum_filter if (extrema == 'max'): data_ext = maximum_filter(data, nsize, mode='nearest') elif (extrema == 'min'): data_ext = minimum_filter(data, nsize, mode='nearest') else: raise ValueError('Value for hilo must be either max or min') if lon.ndim == 1: lon, lat = np.meshgrid(lon, lat) mxx, mxy = np.where(data_ext == data) for i in range(len(mxy)): ax.text(lon[mxx[i], mxy[i]], lat[mxx[i], mxy[i]], symbol, color=color, size=36, clip_on=True, horizontalalignment='center', verticalalignment='center', transform=transform) ax.text(lon[mxx[i], mxy[i]], lat[mxx[i], mxy[i]], '\n' + str(np.int(data[mxx[i], mxy[i]])), color=color, size=12, clip_on=True, fontweight='bold', horizontalalignment='center', verticalalignment='top', transform=transform) ax.plot(lon[mxx[i], mxy[i]], lat[mxx[i], mxy[i]], marker='o', markeredgecolor='black', markerfacecolor='white', transform=transform) ax.plot(lon[mxx[i], mxy[i]], lat[mxx[i], mxy[i]], marker='x', color='black', transform=transform) ``` Station Information ------------------- A helper function for obtaining radiosonde station information (e.g., latitude/longitude) requried to plot data obtained from each station. Original code by github user sgdecker. ``` def station_info(stid): r"""Provide information about weather stations. 
Parameters ---------- stid: str or iterable object containing strs The ICAO or IATA code(s) for which station information is requested. with_units: bool Whether to include units for values that have them. Default True. Returns ------- info: dict Information about the station(s) within a dictionary with these keys: 'state': Two-character ID of the state/province where the station is located, if applicable 'name': The name of the station 'lat': The latitude of the station [deg] 'lon': The longitude of the station [deg] 'elevation': The elevation of the station [m] 'country': Two-character ID of the country where the station is located Modified code from Steven Decker, Rutgers University """ # Provide a helper function for later usage def str2latlon(s): deg = float(s[:3]) mn = float(s[-3:-1]) if s[-1] == 'S' or s[-1] == 'W': deg = -deg mn = -mn return deg + mn / 60. # Various constants describing the underlying data url = 'https://www.aviationweather.gov/docs/metar/stations.txt' # file = 'stations.txt' state_bnds = slice(0, 2) name_bnds = slice(3, 19) icao_bnds = slice(20, 24) iata_bnds = slice(26, 29) lat_bnds = slice(39, 45) lon_bnds = slice(47, 54) z_bnds = slice(55, 59) cntry_bnds = slice(81, 83) # Generalize to any number of IDs if isinstance(stid, str): stid = [stid] # Get the station dataset infile = urllib.request.urlopen(url) data = infile.readlines() # infile = open(file, 'rb') # data = infile.readlines() state = [] name = [] lat = [] lon = [] z = [] cntry = [] for s in stid: s = s.upper() for line_bytes in data: line = line_bytes.decode('UTF-8') icao = line[icao_bnds] iata = line[iata_bnds] if len(s) == 3 and s in iata or len(s) == 4 and s in icao: state.append(line[state_bnds].strip()) name.append(line[name_bnds].strip()) lat.append(str2latlon(line[lat_bnds])) lon.append(str2latlon(line[lon_bnds])) z.append(float(line[z_bnds])) cntry.append(line[cntry_bnds]) break else: state.append('NA') name.append('NA') lat.append(np.nan) lon.append(np.nan) z.append(np.nan) cntry.append('NA') infile.close() return {'state': np.array(state), 'name': np.array(name), 'lat': np.array(lat), 'lon': np.array(lon), 'elevation': np.array(z), 'country': np.array(cntry), 'units': {'lat': 'deg', 'lon': 'deg', 'z': 'm'}} ``` Observation Data ---------------- Set a date and time for upper-air observations (should only be 00 or 12 UTC for the hour). Request all data from Iowa State using the Siphon package. The result is a pandas DataFrame containing all of the sounding data from all available stations. ``` # Set date for desired UPA data today = datetime.utcnow() # Go back one day to ensure data availability date = datetime(today.year, today.month, today.day, 0) - timedelta(days=1) # Request data using Siphon request for data from Iowa State Archive data = IAStateUpperAir.request_all_data(date) ``` Subset Observational Data ------------------------- From the request above will give all levels from all radisonde sites available through the service. For plotting a pressure surface map there is only need to have the data from that level. Below the data is subset and a few parameters set based on the level chosen. Additionally, the station information is obtained and latitude and longitude data is added to the DataFrame. 
``` level = 500 if (level == 925) | (level == 850) | (level == 700): cint = 30 def hght_format(v): return format(v, '.0f')[1:] elif level == 500: cint = 60 def hght_format(v): return format(v, '.0f')[:3] elif level == 300: cint = 120 def hght_format(v): return format(v, '.0f')[:3] elif level < 300: cint = 120 def hght_format(v): return format(v, '.0f')[1:4] # Create subset of all data for a given level data_subset = data.pressure == level df = data[data_subset] # Get station lat/lon from look-up file; add to Dataframe stn_info = station_info(list(df.station.values)) df.insert(10, 'latitude', stn_info['lat']) df.insert(11, 'longitude', stn_info['lon']) ``` Gridded Data ------------ Obtain GFS gridded output for contour plotting. Specifically, geopotential height and temperature data for the given level and subset for over North America. Data are smoothed for aesthetic reasons. ``` # Get GFS data and subset to North America for Geopotential Height and Temperature ds = xr.open_dataset('https://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GFS/Global_0p5deg_ana/' 'GFS_Global_0p5deg_ana_{0:%Y%m%d}_{0:%H}00.grib2'.format( date)).metpy.parse_cf() # Geopotential height and smooth hght = ds.Geopotential_height_isobaric.metpy.sel( vertical=level*units.hPa, time=date, lat=slice(70, 15), lon=slice(360-145, 360-50)) smooth_hght = mpcalc.smooth_n_point(hght, 9, 10) # Temperature, smooth, and convert to Celsius tmpk = ds.Temperature_isobaric.metpy.sel( vertical=level*units.hPa, time=date, lat=slice(70, 15), lon=slice(360-145, 360-50)) smooth_tmpc = (mpcalc.smooth_n_point(tmpk, 9, 10)).to('degC') ``` Create DIFAX Replication ------------------------ Plot the observational data and contours on a Lambert Conformal map and add features that resemble the historic DIFAX maps. ``` # Set up map coordinate reference system mapcrs = ccrs.LambertConformal( central_latitude=45, central_longitude=-100, standard_parallels=(30, 60)) # Set up station locations for plotting observations point_locs = mapcrs.transform_points( ccrs.PlateCarree(), df['longitude'].values, df['latitude'].values) # Start figure and set graphics extent fig = plt.figure(1, figsize=(17, 15)) ax = plt.subplot(111, projection=mapcrs) ax.set_extent([-125, -70, 20, 55]) # Add map features for geographic reference ax.add_feature(cfeature.COASTLINE.with_scale('50m'), edgecolor='grey') ax.add_feature(cfeature.LAND.with_scale('50m'), facecolor='white') ax.add_feature(cfeature.STATES.with_scale('50m'), edgecolor='grey') # Plot plus signs every degree lat/lon plus_lat = [] plus_lon = [] other_lat = [] other_lon = [] for x in hght.lon.values[::2]: for y in hght.lat.values[::2]: if (x % 5 == 0) | (y % 5 == 0): plus_lon.append(x) plus_lat.append(y) else: other_lon.append(x) other_lat.append(y) ax.scatter(other_lon, other_lat, s=5, marker='o', transform=ccrs.PlateCarree(), color='lightgrey', zorder=-1) ax.scatter(plus_lon, plus_lat, s=30, marker='+', transform=ccrs.PlateCarree(), color='lightgrey', zorder=-1) # Add gridlines for every 5 degree lat/lon ax.gridlines(linestyle='solid', ylocs=range(15, 71, 5), xlocs=range(-150, -49, 5)) # Start the station plot by specifying the axes to draw on, as well as the # lon/lat of the stations (with transform). We also the fontsize to 10 pt. stationplot = StationPlot(ax, df['longitude'].values, df['latitude'].values, clip_on=True, transform=ccrs.PlateCarree(), fontsize=10) # Plot the temperature and dew point to the upper and lower left, respectively, of # the center point. 
stationplot.plot_parameter('NW', df['temperature'], color='black') stationplot.plot_parameter('SW', df['dewpoint'], color='black') # A more complex example uses a custom formatter to control how the geopotential height # values are plotted. This is set in an earlier if-statement to work appropriate for # different levels. stationplot.plot_parameter('NE', df['height'], formatter=hght_format) # Add wind barbs stationplot.plot_barb(df['u_wind'], df['v_wind'], length=7, pivot='tip') # Plot Solid Contours of Geopotential Height cs = ax.contour(hght.lon, hght.lat, smooth_hght, range(0, 20000, cint), colors='black', transform=ccrs.PlateCarree()) clabels = plt.clabel(cs, fmt='%d', colors='white', inline_spacing=5, use_clabeltext=True) # Contour labels with black boxes and white text for t in cs.labelTexts: t.set_bbox({'facecolor': 'black', 'pad': 4}) t.set_fontweight('heavy') # Plot Dashed Contours of Temperature cs2 = ax.contour(hght.lon, hght.lat, smooth_tmpc, range(-60, 51, 5), colors='black', transform=ccrs.PlateCarree()) clabels = plt.clabel(cs2, fmt='%d', colors='white', inline_spacing=5, use_clabeltext=True) # Set longer dashes than default for c in cs2.collections: c.set_dashes([(0, (5.0, 3.0))]) # Contour labels with black boxes and white text for t in cs.labelTexts: t.set_bbox({'facecolor': 'black', 'pad': 4}) t.set_fontweight('heavy') # Plot filled circles for Radiosonde Obs ax.scatter(df['longitude'].values, df['latitude'].values, s=12, marker='o', color='black', transform=ccrs.PlateCarree()) # Use definition to plot H/L symbols plot_maxmin_points(hght.lon, hght.lat, smooth_hght.m, 'max', 50, symbol='H', color='black', transform=ccrs.PlateCarree()) plot_maxmin_points(hght.lon, hght.lat, smooth_hght.m, 'min', 25, symbol='L', color='black', transform=ccrs.PlateCarree()) # Add titles plt.title('Upper-air Observations at {}-hPa Analysis Heights/Temperature'.format(level), loc='left') plt.title(f'Valid: {date}', loc='right'); ```
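To make the level-dependent height formatting concrete: the custom `hght_format` functions defined in the subsetting cell abbreviate geopotential heights the way traditional upper-air charts do. A quick check of the `level = 500` branch (values are illustrative):

```
# With level = 500 the formatter keeps only the first three digits of the height in metres,
# so a 500-hPa height of 5640 m is plotted as '564'.
def hght_format(v):
    return format(v, '.0f')[:3]

print(hght_format(5640.0))   # -> '564'
print(hght_format(5880.0))   # -> '588'

# For 850 hPa the leading digit is dropped instead: 1520 m is plotted as '520'.
print(format(1520.0, '.0f')[1:])   # -> '520'
```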
# Description This notebook contains the interpretation of a cluster (which features/latent variables in the original data are useful to distinguish traits in the cluster). See section [LV analysis](#lv_analysis) below # Modules loading ``` %load_ext autoreload %autoreload 2 import pickle import re from pathlib import Path import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from IPython.display import HTML from clustering.methods import ClusterInterpreter from data.recount2 import LVAnalysis from data.cache import read_data import conf ``` # Settings ``` PARTITION_K = None PARTITION_CLUSTER_ID = None ``` # Load MultiPLIER summary ``` multiplier_model_summary = read_data(conf.MULTIPLIER["MODEL_SUMMARY_FILE"]) multiplier_model_summary.shape multiplier_model_summary.head() ``` # Load data ## Original data ``` INPUT_SUBSET = "z_score_std" INPUT_STEM = "projection-smultixcan-efo_partial-mashr-zscores" input_filepath = Path( conf.RESULTS["DATA_TRANSFORMATIONS_DIR"], INPUT_SUBSET, f"{INPUT_SUBSET}-{INPUT_STEM}.pkl", ).resolve() display(input_filepath) assert input_filepath.exists(), "Input file does not exist" input_filepath_stem = input_filepath.stem display(input_filepath_stem) data = pd.read_pickle(input_filepath) data.shape data.head() ``` ## Clustering partitions ``` CONSENSUS_CLUSTERING_DIR = Path( conf.RESULTS["CLUSTERING_DIR"], "consensus_clustering" ).resolve() display(CONSENSUS_CLUSTERING_DIR) input_file = Path(CONSENSUS_CLUSTERING_DIR, "best_partitions_by_k.pkl").resolve() display(input_file) best_partitions = pd.read_pickle(input_file) best_partitions.shape best_partitions.head() ``` # Functions ``` def show_cluster_stats(data, partition, cluster): cluster_traits = data[partition == cluster].index display(f"Cluster '{cluster}' has {len(cluster_traits)} traits") display(cluster_traits) ``` # LV analysis <a id="lv_analysis"></a> ## Associated traits ``` display(best_partitions.loc[PARTITION_K]) part = best_partitions.loc[PARTITION_K, "partition"] show_cluster_stats(data, part, PARTITION_CLUSTER_ID) ``` ## Associated latent variables ``` ci = ClusterInterpreter( threshold=1.0, max_features=20, max_features_to_explore=100, ) ci.fit(data, part, PARTITION_CLUSTER_ID) ci.features_ # save interpreter instance output_dir = Path( conf.RESULTS["CLUSTERING_INTERPRETATION"]["BASE_DIR"], "cluster_lvs", f"part{PARTITION_K}", ) output_dir.mkdir(exist_ok=True, parents=True) output_file = Path( output_dir, f"cluster_interpreter-part{PARTITION_K}_k{PARTITION_CLUSTER_ID}.pkl" ) display(output_file) ci.features_.to_pickle(output_file) ``` ## Top attributes Here we go through the list of associated latent variables and, for each, we show associated pathways (prior knowledge), top traits, top genes and the top tissues/cell types where those genes are expressed. 
``` for lv_idx, lv_info in ci.features_.iterrows(): display(HTML(f"<h2>LV{lv_idx}</h2>")) lv_name = lv_info["name"] lv_obj = lv_exp = LVAnalysis(lv_name, data) # show lv prior knowledge match (pathways) lv_pathways = multiplier_model_summary[ multiplier_model_summary["LV index"].isin((lv_name[2:],)) & ( (multiplier_model_summary["FDR"] < 0.05) | (multiplier_model_summary["AUC"] >= 0.75) ) ] display(lv_pathways) lv_data = lv_obj.get_experiments_data() display("") display(lv_obj.lv_traits.head(20)) display("") display(lv_obj.lv_genes.head(10)) lv_attrs = lv_obj.get_attributes_variation_score() _tmp = pd.Series(lv_attrs.index) lv_attrs = lv_attrs[ _tmp.str.match( "(?:cell.+type$)|(?:tissue$)|(?:tissue.+type$)", case=False, flags=re.IGNORECASE, ).values ].sort_values(ascending=False) display(lv_attrs) for _lva in lv_attrs.index: display(HTML(f"<h3>{_lva}</h3>")) display(lv_data[_lva].dropna().reset_index()["project"].unique()) with sns.plotting_context("paper", font_scale=1.0), sns.axes_style("whitegrid"): fig, ax = plt.subplots(figsize=(14, 8)) ax = lv_obj.plot_attribute(_lva, top_x_values=20) if ax is None: plt.close(fig) continue display(fig) plt.close(fig) ```
# Lecture 55: Adversarial Autoencoder for Classification ## Load Packages ``` %matplotlib inline import os import math import torch import itertools import torch.nn as nn import torch.optim as optim from IPython import display import torch.nn.functional as F import matplotlib.pyplot as plt import torchvision.datasets as dsets import torchvision.transforms as transforms print(torch.__version__) # This code has been updated for PyTorch 1.0.0 ``` ## Load Data ``` # MNIST Dataset dataset = dsets.MNIST(root='./MNIST', train=True, transform=transforms.ToTensor(), download=True) testset = dsets.MNIST(root='./MNIST', train=False, transform=transforms.ToTensor(), download=True) # Data Loader (Input Pipeline) data_loader = torch.utils.data.DataLoader(dataset=dataset, batch_size=100, shuffle=True) test_loader = torch.utils.data.DataLoader(dataset=testset, batch_size=100, shuffle=False) # Check availability of GPU use_gpu = torch.cuda.is_available() # use_gpu = False # Uncomment in case of GPU memory error if use_gpu: print('GPU is available!') device = "cuda" else: print('GPU is not available!') device = "cpu" ``` ## Defining network architecture ``` #Encoder class Q_net(nn.Module): def __init__(self,X_dim,N,z_dim): super(Q_net, self).__init__() self.lin1 = nn.Linear(X_dim, N) self.lin2 = nn.Linear(N, N) self.lin3gauss = nn.Linear(N, z_dim) def forward(self, x): x = F.dropout(self.lin1(x), p=0.25, training=self.training) x = F.relu(x) x = F.dropout(self.lin2(x), p=0.25, training=self.training) x = F.relu(x) x = self.lin3gauss(x) return x # Decoder class P_net(nn.Module): def __init__(self,X_dim,N,z_dim): super(P_net, self).__init__() self.lin1 = nn.Linear(z_dim, N) self.lin2 = nn.Linear(N, N) self.lin3 = nn.Linear(N, X_dim) def forward(self, x): x = F.dropout(self.lin1(x), p=0.25, training=self.training) x = F.relu(x) x = F.dropout(self.lin2(x), p=0.25, training=self.training) x = self.lin3(x) return torch.sigmoid(x) # Discriminator class D_net_gauss(nn.Module): def __init__(self,N,z_dim): super(D_net_gauss, self).__init__() self.lin1 = nn.Linear(z_dim, N) self.lin2 = nn.Linear(N, N) self.lin3 = nn.Linear(N, 1) def forward(self, x): x = F.dropout(self.lin1(x), p=0.2, training=self.training) x = F.relu(x) x = F.dropout(self.lin2(x), p=0.2, training=self.training) x = F.relu(x) return torch.sigmoid(self.lin3(x)) ``` ## Define optimizer ``` z_red_dims = 100 Q = Q_net(784,1000,z_red_dims).to(device) P = P_net(784,1000,z_red_dims).to(device) D_gauss = D_net_gauss(500,z_red_dims).to(device) # Set learning rates gen_lr = 0.0001 reg_lr = 0.00005 #encode/decode optimizers optim_P = optim.Adam(P.parameters(), lr=gen_lr) optim_Q_enc = optim.Adam(Q.parameters(), lr=gen_lr) #regularizing optimizers optim_Q_gen = optim.Adam(Q.parameters(), lr=reg_lr) optim_D = optim.Adam(D_gauss.parameters(), lr=reg_lr) ``` ## Test Data ``` num_test_samples = 100 test_noise = torch.randn(num_test_samples,z_red_dims).to(device) ``` ## Training ``` # create figure for plotting size_figure_grid = int(math.sqrt(num_test_samples)) fig, ax = plt.subplots(size_figure_grid, size_figure_grid, figsize=(6, 6)) for i, j in itertools.product(range(size_figure_grid), range(size_figure_grid)): ax[i,j].get_xaxis().set_visible(False) ax[i,j].get_yaxis().set_visible(False) data_iter = iter(data_loader) iter_per_epoch = len(data_loader) total_step = 5#5000 # Start training for step in range(total_step): # Reset the data_iter if (step+1) % iter_per_epoch == 0: data_iter = iter(data_loader) # Fetch the images and labels and convert them to variables 
images, labels = next(data_iter) images, labels = images.view(images.size(0), -1).to(device), labels.to(device) #reconstruction loss P.zero_grad() Q.zero_grad() D_gauss.zero_grad() z_sample = Q(images) #encode to z X_sample = P(z_sample) #decode to X reconstruction recon_loss = F.binary_cross_entropy(X_sample,images) recon_loss.backward() optim_P.step() optim_Q_enc.step() # Discriminator ## true prior is random normal (randn) ## this is constraining the Z-projection to be normal! Q.eval() z_real_gauss = torch.randn(images.size()[0], z_red_dims).to(device) D_real_gauss = D_gauss(z_real_gauss) z_fake_gauss = Q(images) D_fake_gauss = D_gauss(z_fake_gauss) D_loss = -torch.mean(torch.log(D_real_gauss) + torch.log(1 - D_fake_gauss)) D_loss.backward() optim_D.step() # Generator Q.train() z_fake_gauss = Q(images) D_fake_gauss = D_gauss(z_fake_gauss) G_loss = -torch.mean(torch.log(D_fake_gauss)) G_loss.backward() optim_Q_gen.step() P.eval() test_images = P(test_noise) P.train() if use_gpu: test_images = test_images.cpu().detach() for k in range(num_test_samples): i = k//10 j = k%10 ax[i,j].cla() ax[i,j].imshow(test_images[k,:].numpy().reshape(28, 28), cmap='Greys') display.clear_output(wait=True) display.display(plt.gcf()) ``` ## Classifier ``` #Encoder class Classifier(nn.Module): def __init__(self): super(Classifier, self).__init__() self.l1 = Q self.l2 = nn.Linear(100,10) def forward(self, x): x = self.l1(x) x = self.l2(x) return x net = Classifier().to(device) print(net) criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(net.parameters(), lr=1e-4) ``` ## Training ``` iterations = 10 for epoch in range(iterations): # loop over the dataset multiple times runningLoss = 0.0 for i, data in enumerate(data_loader, 0): # get the inputs inputs, labels = data inputs, labels = inputs.view(inputs.size(0), -1).to(device), labels.to(device) net.train() optimizer.zero_grad() # zeroes the gradient buffers of all parameters outputs = net(inputs) # forward loss = criterion(outputs, labels) # calculate loss loss.backward() # backpropagate the loss optimizer.step() correct = 0 total = 0 net.eval() with torch.no_grad(): for data in test_loader: inputs, labels = data inputs, labels = inputs.view(inputs.size(0), -1).to(device), labels.to(device) outputs = net(inputs) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels.data).sum() print('At Iteration : %d / %d ;Test Accuracy : %f'%(epoch + 1,iterations,100 * float(correct) /float(total))) print('Finished Training') ```
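The classifier above fine-tunes the adversarially pre-trained encoder `Q` together with the new linear head. If you instead wanted to evaluate the frozen AAE representation on its own, one possible variation (an assumption, not part of the lecture code) is to freeze the encoder parameters and optimize only the head:

```
# Variation (not from the lecture): train only the linear head on top of a frozen encoder
for param in net.l1.parameters():
    param.requires_grad = False

optimizer = optim.Adam(filter(lambda p: p.requires_grad, net.parameters()), lr=1e-4)
# ...then rerun the training loop above with this optimizer
```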
```
from collections import defaultdict
import pyspark.sql.types as stypes
import operator
import math

# Build (user, [(sku, score), ...]) pairs from the raw CSV export, skipping the header row.
# Interaction types are weighted: '1' -> 0.5, '2' -> 2, anything else -> 6.
r = (sc.textFile("gs://lbanor/dataproc_example/data/2017-11-01").zipWithIndex()
     .filter(lambda x: x[1] > 0)
     .map(lambda x: x[0].split(','))
     .map(lambda x: (x[0], (x[1], 0.5 if x[2] == '1' else 2 if x[2] == '2' else 6)))
     .groupByKey().mapValues(list)
     .flatMap(lambda x: aggregate_skus(x)))
print(r.collect()[:10])

d2 = spark.read.csv("gs://lbanor/dataproc_example/data/2017-11-01", header=True)

# Sanity check of the groupByKey pattern on a toy RDD
t = sc.parallelize([('1', 'sku0', 1), ('2', 'sku2', 2), ('1', 'sku1', 1)])
t.zipWithIndex().map(lambda x: (x[0][0], (x[0][1], x[0][2]))).groupByKey().mapValues(list).collect()[:10]

def aggregate_skus(row):
    """Aggregates skus from customers and their respective scores

    :type row: list
    :param row: list having values [user, (sku, score)]

    :rtype: list
    :returns: `yield` on [user, (sku, sum(score))]
    """
    d = defaultdict(float)
    for inner_row in row[1]:
        d[inner_row[0]] += inner_row[1]
    yield (row[0], list(d.items()))

# Quick look at the CSV read as a DataFrame (named differently so the aggregated RDD `r` is not clobbered)
rows = d2.rdd.collect()[:10]
rows[0].user

print(r.flatMap(lambda x: aggregate_skus(x)).collect()[:10])

# Persist the aggregated user -> (sku, score) matrix as JSON
r.toDF(schema=_load_users_matrix_schema()).write.json('gs://lbanor/dataproc_example/intermediary/2017-11-01')

def _load_users_matrix_schema():
    """Loads schema with data type [user, [(sku, score), (sku, score)]]

    :rtype: `pyspark.sql.type.StructType`
    :returns: schema specification for user -> (sku, score) data.
    """
    return stypes.StructType(fields=[
        stypes.StructField("user", stypes.StringType()),
        stypes.StructField('interactions', stypes.ArrayType(
            stypes.StructType(fields=[stypes.StructField('item', stypes.StringType()),
                                      stypes.StructField('score', stypes.FloatType())])))])

dir()

t = sc.parallelize([[0, [1, 2]], [0, [3]]])
print(t.collect())

t.write.json?

# Read the intermediary user -> (sku, score) matrix back in
t = spark.read.json('gs://lbanor/dataproc_example/intermediary/2017-11-02', schema=_load_users_matrix_schema())
t = spark.read.json('gs://lbanor/dataproc_example/intermediary/2017-11-02/*.gz')
t.rdd.map(lambda x: x).collect()[:10]
t.head(3)
t.rdd.reduceByKey(operator.add).collect()[:10]
print(t.rdd.reduceByKey(operator.add).collect())

# Keep only users that interacted with between 2 and 9 distinct skus
data = (t.rdd
        .reduceByKey(operator.add)
        .flatMap(lambda x: aggregate_skus(x))
        .filter(lambda x: len(x[1]) > 1 and len(x[1]) < 10))

def _process_scores(row):
    """After all user -> score aggregation is done, this method loops
    through each sku for a given user and yields its squared score so
    that we can compute the norm ``||c||`` for each sku column.

    :type row: list
    :param row: list of type [(user, (sku, score))]

    :rtype: tuple
    :returns: tuple of type (sku, (score ** 2))
    """
    for inner_row in row[1]:
        yield (inner_row[0], inner_row[1] ** 2)

# Column norm ||c|| for every sku
norms = {sku: norm for sku, norm in (data.flatMap(lambda x: _process_scores(x))
                                     .reduceByKey(operator.add)
                                     .map(lambda x: (x[0], math.sqrt(x[1])))
                                     .collect())}

data = (data
        .flatMap(lambda x: process_intersections(x, norms))
        .reduceByKey(operator.add)
        .collect()[:20])
data

def process_intersections(row, norms):
    for i in range(len(row[1])):
        for j in range(i + 1, len(row[1])):
            #yield row[1][i]
            yield ((row[1][i][0], row[1][j][0]),
                   row[1][i][1] * row[1][j][1] / (norms[row[1][i][0]] * norms[row[1][j][0]]))

re = t.rdd.flatMap(lambda x: process_intersections(x, norms))
```
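What the last few cells compute is effectively an item-item cosine similarity: each sku is treated as a column of user scores, `_process_scores` accumulates the squared scores that give each column norm, and `process_intersections` sums, over users, the product of two skus' scores divided by the product of their norms. A tiny pure-Python check of that identity, using made-up scores for two skus:

```
import math

# Hypothetical user -> score vectors for two skus (toy numbers, not from the dataset)
sku_a = {'u1': 0.5, 'u2': 2.0, 'u3': 6.0}
sku_b = {'u1': 2.0, 'u3': 0.5}

norm_a = math.sqrt(sum(v ** 2 for v in sku_a.values()))
norm_b = math.sqrt(sum(v ** 2 for v in sku_b.values()))

# Same quantity the RDD pipeline accumulates with reduceByKey(operator.add)
cosine = sum(sku_a[u] * sku_b[u] for u in sku_a.keys() & sku_b.keys()) / (norm_a * norm_b)
print(cosine)
```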
``` import os import json import tensorflow as tf import numpy as np import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from matplotlib import cm from tensor2tensor import problems from tensor2tensor import models from tensor2tensor.bin import t2t_decoder # To register the hparams set from tensor2tensor.utils import registry from tensor2tensor.utils import trainer_lib from tensor2tensor.data_generators import babi_qa ``` ## HParams ``` # HParams babi_task_id = 'qa3' subset = "1k" problem_name = 'babi_qa_sentence_task' + babi_task_id.replace("qa", "") + "_" + subset model_name = "babi_r_transformer" hparams_set = "r_transformer_act_step_position_timing_tiny" data_dir = '~/babi/data/' + problem_name # PUT THE MODEL YOU WANT TO LOAD HERE! CHECKPOINT = '~/babi/output/' + problem_name+ '/' + model_name + '/' + hparams_set + '/' print(CHECKPOINT) _TASKS = { 'qa1': 'qa1_single-supporting-fact', 'qa2': 'qa2_two-supporting-facts', 'qa3': 'qa3_three-supporting-facts', 'qa4': 'qa4_two-arg-relations', 'qa5': 'qa5_three-arg-relations', 'qa6': 'qa6_yes-no-questions', 'qa7': 'qa7_counting', 'qa8': 'qa8_lists-sets', 'qa9': 'qa9_simple-negation', 'qa10': 'qa10_indefinite-knowledge', 'qa11': 'qa11_basic-coreference', 'qa12': 'qa12_conjunction', 'qa13': 'qa13_compound-coreference', 'qa14': 'qa14_time-reasoning', 'qa15': 'qa15_basic-deduction', 'qa16': 'qa16_basic-induction', 'qa17': 'qa17_positional-reasoning', 'qa18': 'qa18_size-reasoning', 'qa19': 'qa19_path-finding', 'qa20': 'qa20_agents-motivations' } meta_data_filename = _TASKS[babi_task_id] + '-meta_data.json' metadata_path = os.path.join(data_dir, meta_data_filename) FLAGS = tf.flags.FLAGS FLAGS.data_dir = data_dir truncated_story_length = 130 if babi_task_id == 'qa3' else 70 with tf.gfile.GFile(metadata_path, mode='r') as f: metadata = json.load(f) max_story_length = metadata['max_story_length'] max_sentence_length = metadata['max_sentence_length'] max_question_length = metadata['max_question_length'] print(max_story_length) print(max_sentence_length) print(max_question_length) tf.reset_default_graph() class bAbiACTVisualizer(object): """Helper object for creating act visualizations.""" def __init__( self, hparams_set, model_name, data_dir, problem_name, beam_size=1): story, question, targets, samples, ponder_time = build_model( hparams_set, model_name, data_dir, problem_name, beam_size=beam_size) # Fetch the problem babi_problem = problems.problem(problem_name) encoders = babi_problem.feature_encoders(data_dir) self.story = story self.question = question self.targets = targets self.ponder_time = ponder_time self.samples = samples self.encoders = encoders def encode(self, story_str, question_str): """Input str to features dict, ready for inference.""" story_str = babi_qa._normalize_string(story_str) question_str = babi_qa._normalize_string(question_str) story = story_str.strip().split('.') story = [self.encoders[babi_qa.FeatureNames.STORY].encode(sentence) for sentence in story[-truncated_story_length:]] question = self.encoders[babi_qa.FeatureNames.QUESTION].encode(question_str) for sentence in story: for _ in range(max_sentence_length - len(sentence)): sentence.append(babi_qa.PAD) assert len(sentence) == max_sentence_length for _ in range(max_story_length - len(story)): story.append([babi_qa.PAD for _ in range(max_sentence_length)]) for _ in range(max_question_length - len(question)): question.append(babi_qa.PAD) assert len(story) == max_story_length assert len(question) == max_question_length story_flat = 
[token_id for sentence in story for token_id in sentence] batch_story = np.reshape(np.array(story_flat), [1, max_story_length, max_sentence_length, 1]) batch_question = np.reshape(np.array(question), [1, 1, max_question_length, 1]) return batch_story, batch_question def decode_story(self, integers): """List of ints to str.""" integers = np.squeeze(integers).tolist() story = [] for sent in integers: sent_decoded = self.encoders[babi_qa.FeatureNames.STORY].decode_list(sent) sent_decoded.append('.') story.append(sent_decoded) return story def decode_question(self, integers): """List of ints to str.""" integers = np.squeeze(integers).tolist() return self.encoders[babi_qa.FeatureNames.QUESTION].decode_list(integers) def decode_targets(self, integers): """List of ints to str.""" integers = np.squeeze(integers).tolist() return self.encoders["targets"].decode([integers]) def get_vis_data_from_string(self, sess, story_str, question_str): """Constructs the data needed for visualizing ponder_time. Args: sess: A tf.Session object. input_string: The input setence to be visulized. Returns: Tuple of ( output_string: The answer input_list: Tokenized input sentence. output_list: Tokenized answer. ponder_time: ponder_time matrices; ) """ encoded_story, encoded_question = self.encode(story_str, question_str) # Run inference graph to get the label. out = sess.run(self.samples, { self.story: encoded_story, self.question: encoded_question, }) # Run the decoded answer through the training graph to get the # ponder_time tensors. ponder_time = sess.run(self.ponder_time, { self.story: encoded_story, self.question: encoded_question, self.targets: np.reshape(out, [1, -1, 1, 1]), }) output = self.decode_targets(out) story_list = self.decode_story(encoded_story) question_list = self.decode_question(encoded_question) return story_list, question_list, output, ponder_time def build_model(hparams_set, model_name, data_dir, problem_name, beam_size=1): """Build the graph required to featch the ponder_times. Args: hparams_set: HParams set to build the model with. model_name: Name of model. data_dir: Path to directory contatining training data. problem_name: Name of problem. beam_size: (Optional) Number of beams to use when decoding a traslation. If set to 1 (default) then greedy decoding is used. Returns: Tuple of ( inputs: Input placeholder to feed in ids. targets: Targets placeholder to feed to th when fetching ponder_time. samples: Tensor representing the ids of the translation. ponder_time: Tensors representing the ponder_time. ) """ hparams = trainer_lib.create_hparams( hparams_set, data_dir=data_dir, problem_name=problem_name) babi_model = registry.model(model_name)( hparams, tf.estimator.ModeKeys.EVAL) story = tf.placeholder(tf.int32, shape=( 1, max_story_length, max_sentence_length, 1), name=babi_qa.FeatureNames.STORY) question = tf.placeholder(tf.int32, shape=( 1, 1, max_question_length, 1), name=babi_qa.FeatureNames.QUESTION) targets = tf.placeholder(tf.int32, shape=(1, 1, 1, 1), name='targets') babi_model({ babi_qa.FeatureNames.STORY: story, babi_qa.FeatureNames.QUESTION: question, 'targets': targets, }) # Must be called after building the training graph, so that the dict will # have been filled with the ponder_time tensors. BUT before creating the # interence graph otherwise the dict will be filled with tensors from # inside a tf.while_loop from decoding and are marked unfetchable. 
ponder_time = get_ponder_mats(babi_model) with tf.variable_scope(tf.get_variable_scope(), reuse=True): samples = babi_model.infer({ babi_qa.FeatureNames.STORY: story, babi_qa.FeatureNames.QUESTION: question, }, beam_size=beam_size)['outputs'] return story, question, targets, samples, ponder_time def get_ponder_mats(babi_model): """Get's the tensors representing the ponder_time from a build model. The ponder_time are stored in a dict on the Transformer object while building the graph. Args: babi_model: Transformer object to fetch the ponder_time from. Returns: Tuple of ponder_time matrices """ # print([n.name for n in tf.get_default_graph().as_graph_def().node]) attention_tensor_name = "babi_r_transformer/parallel_0_5/babi_r_transformer/body/encoder/r_transformer_act/while/self_attention/multihead_attention/dot_product_attention/attention_weights" ponder_time_tensor_name = "babi_r_transformer/parallel_0_5/babi_r_transformer/body/enc_ponder_times:0" ponder_time = tf.get_default_graph().get_tensor_by_name(ponder_time_tensor_name) return ponder_time ponder_visualizer = bAbiACTVisualizer(hparams_set, model_name, data_dir, problem_name, beam_size=1) tf.Variable(0, dtype=tf.int64, trainable=False, name='global_step') sess = tf.train.MonitoredTrainingSession( checkpoint_dir=CHECKPOINT, save_summaries_secs=0, ) if babi_task_id == 'qa1': # input_story = "John travelled to the hallway.Mary journeyed to the bathroom." # input_question = "Where is John?" #hallway input_story = "John travelled to the hallway.Mary journeyed to the bathroom.Daniel went back to the bathroom.John moved to the bedroom." input_question = "Where is Mary?" #bathroom elif babi_task_id == 'qa2': input_story = "Mary got the milk there.John moved to the bedroom.Sandra went back to the kitchen.Mary travelled to the hallway." input_question = "Where is the milk?" #hallway # input_story = "Mary got the milk there.John moved to the bedroom.Sandra went back to the kitchen.Mary travelled to the hallway.John got the football there.John went to the hallway." # input_question = "Where is the football?" #hallway elif babi_task_id == 'qa3': input_story = "Mary got the milk.John moved to the bedroom.Daniel journeyed to the office.John grabbed the apple there.John got the football.John journeyed to the garden.Mary left the milk.John left the football.Daniel moved to the garden.Daniel grabbed the football.Mary moved to the hallway.Mary went to the kitchen.John put down the apple there.John picked up the apple.Sandra moved to the hallway.Daniel left the football there.Daniel took the football.John travelled to the kitchen.Daniel dropped the football.John dropped the apple.John grabbed the apple.John went to the office.Sandra went back to the bedroom.Sandra took the milk.John journeyed to the bathroom.John travelled to the office.Sandra left the milk.Mary went to the bedroom.Mary moved to the office.John travelled to the hallway.Sandra moved to the garden.Mary moved to the kitchen.Daniel took the football.Mary journeyed to the bedroom.Mary grabbed the milk there.Mary discarded the milk.John went to the garden.John discarded the apple there." input_question = "Where was the apple before the bathroom?" 
#office # input_story = "Mary got the milk.John moved to the bedroom.Daniel journeyed to the office.John grabbed the apple there.John got the football.John journeyed to the garden.Mary left the milk.John left the football.Daniel moved to the garden.Daniel grabbed the football.Mary moved to the hallway.Mary went to the kitchen.John put down the apple there.John picked up the apple.Sandra moved to the hallway.Daniel left the football there.Daniel took the football.John travelled to the kitchen.Daniel dropped the football.John dropped the apple.John grabbed the apple.John went to the office.Sandra went back to the bedroom.Sandra took the milk.John journeyed to the bathroom.John travelled to the office.Sandra left the milk.Mary went to the bedroom.Mary moved to the office.John travelled to the hallway.Sandra moved to the garden.Mary moved to the kitchen.Daniel took the football.Mary journeyed to the bedroom.Mary grabbed the milk there.Mary discarded the milk.John went to the garden.John discarded the apple there.Sandra travelled to the bedroom.Daniel moved to the bathroom." # input_question = "Where was the apple before the hallway?" #office story_text, question_text, output, ponder_time = ponder_visualizer.get_vis_data_from_string(sess, input_story, input_question) # print(output) # print(story_text) # print(question_text) inp_text = [] for sent in story_text: inp_text.append(' '.join(sent)) inp_text.append(' '.join(question_text)) ponder_time = np.squeeze(np.array(ponder_time)).tolist() # print(ponder_time) def pad_remover(inp_text, ponder_time): pad_sent_index = [ i for i, sent in enumerate(inp_text) if sent.startswith('<pad>')] start = min(pad_sent_index) end = max(pad_sent_index) filtered_inp_text = inp_text[:start] + inp_text[end+1:] filtered_inp_text = [sent.replace('<pad> ', '') for sent in filtered_inp_text] filtered_ponder_time = ponder_time[:start] + ponder_time[end+1:] return filtered_inp_text, filtered_ponder_time filtered_inp_text, filtered_ponder_time = pad_remover(inp_text, ponder_time) for sent in filtered_inp_text: print(sent) print(output) print(filtered_ponder_time) df = pd.DataFrame( {'input': filtered_inp_text, 'ponder_time': filtered_ponder_time, }) f_size = (10,5) if babi_task_id == 'qa2': f_size = (15,5) if babi_task_id == 'qa3': f_size = (25,5) df.plot(kind='bar', x='input', y='ponder_time', rot=90, width=0.3, figsize=f_size, cmap='Spectral') ```
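To make the `pad_remover` helper defined above easier to sanity-check, here is a minimal usage sketch with toy inputs (the toy story and ponder values are made up; it assumes the `pad_remover` function from the cell above, that at least one `<pad>` sentence is present, and that the padded sentences form one contiguous block, which matches how stories are padded here).

```
# Minimal sanity check for the pad_remover helper defined above.
# Toy inputs only -- real inputs come from decode_story / decode_question.
toy_text = ['john went to the hallway .',
            '<pad> <pad> <pad> <pad>',
            '<pad> <pad> <pad> <pad>',
            'where is john ?']
toy_ponder = [2.0, 1.0, 1.0, 3.0]

# Note: pad_remover raises a ValueError if no sentence starts with '<pad>'.
clean_text, clean_ponder = pad_remover(toy_text, toy_ponder)
print(clean_text)    # ['john went to the hallway .', 'where is john ?']
print(clean_ponder)  # [2.0, 3.0]
```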
github_jupyter
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l06c03_exercise_flowers_with_transfer_learning_solution.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l06c03_exercise_flowers_with_transfer_learning_solution.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> # TensorFlow Hub [TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use. These models can either be used as is, or they can be used for Transfer Learning. Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs. Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/). Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from menu above. # Imports Some normal imports we've seen before. The new one is importing tensorflow_hub which this Colab will make heavy use of. ``` import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import tensorflow_hub as hub import tensorflow_datasets as tfds from tensorflow.keras import layers import logging logger = tf.get_logger() logger.setLevel(logging.ERROR) ``` # TODO: Download the Flowers Dataset using TensorFlow Datasets In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasets#tf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use `tfds.splits` to split this training set into to a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses the all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets. 
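Before calling `tfds.load`, it can help to peek at what the dataset ships with. The sketch below is optional and purely illustrative; the exact output depends on your `tensorflow_datasets` version, and split sizes may only be fully populated after the data has been downloaded.

```
# Optional: inspect the tf_flowers metadata before deciding how to re-split it.
# tf_flowers only ships a TRAIN split, which is why we carve our own 70/30
# split out of it in the next cell.
builder = tfds.builder('tf_flowers')
print(builder.info.splits)                   # available splits (TRAIN only)
print(builder.info.features['label'].names)  # the five flower class names
```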
``` (training_set, validation_set), dataset_info = tfds.load( 'tf_flowers', split=['train[:70%]', 'train[70%:]'], with_info=True, as_supervised=True, ) ``` # TODO: Print Information about the Flowers Dataset Now that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets. ``` num_classes = dataset_info.features['label'].num_classes num_training_examples = 0 num_validation_examples = 0 for example in training_set: num_training_examples += 1 for example in validation_set: num_validation_examples += 1 print('Total Number of Classes: {}'.format(num_classes)) print('Total Number of Training Images: {}'.format(num_training_examples)) print('Total Number of Validation Images: {} \n'.format(num_validation_examples)) ``` The images in the Flowers dataset are not all the same size. ``` for i, example in enumerate(training_set.take(5)): print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1])) ``` # TODO: Reformat Images and Create Batches In the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`. ``` IMAGE_RES = 224 def format_image(image, label): image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0 return image, label BATCH_SIZE = 32 train_batches = training_set.shuffle(num_training_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1) validation_batches = validation_set.map(format_image).batch(BATCH_SIZE).prefetch(1) ``` # Do Simple Transfer Learning with TensorFlow Hub Let's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset. ### TODO: Create a Feature Extractor In the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the documentation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, create a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter. ``` URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4" feature_extractor = hub.KerasLayer(URL, input_shape=(IMAGE_RES, IMAGE_RES, 3)) ``` ### TODO: Freeze the Pre-Trained Model In the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer. ``` feature_extractor.trainable = False ``` ### TODO: Attach a classification head In the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model. 
``` model = tf.keras.Sequential([ feature_extractor, layers.Dense(num_classes) ]) model.summary() ``` ### TODO: Train the model In the cell bellow train this model like any other, by first calling `compile` and then followed by `fit`. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs. ``` model.compile( optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) EPOCHS = 6 history = model.fit(train_batches, epochs=EPOCHS, validation_data=validation_batches) ``` You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet). # TODO: Plot Training and Validation Graphs In the cell below, plot the training and validation accuracy/loss graphs. ``` acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs_range = range(EPOCHS) plt.figure(figsize=(8, 8)) plt.subplot(1, 2, 1) plt.plot(epochs_range, acc, label='Training Accuracy') plt.plot(epochs_range, val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(1, 2, 2) plt.plot(epochs_range, loss, label='Training Loss') plt.plot(epochs_range, val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.show() ``` What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution. One reason for this is that validation performance is measured at the end of the epoch, but training performance is the average values across the epoch. The bigger reason though is that we're reusing a large part of MobileNet which is already trained on Flower images. # TODO: Check Predictions In the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names. ``` class_names = np.array(dataset_info.features['label'].names) print(class_names) ``` ### TODO: Create an Image Batch and Make Predictions In the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names. ``` image_batch, label_batch = next(iter(train_batches)) image_batch = image_batch.numpy() label_batch = label_batch.numpy() predicted_batch = model.predict(image_batch) predicted_batch = tf.squeeze(predicted_batch).numpy() predicted_ids = np.argmax(predicted_batch, axis=-1) predicted_class_names = class_names[predicted_ids] print(predicted_class_names) ``` ### TODO: Print True Labels and Predicted Indices In the cell below, print the true labels and the indices of predicted labels. 
``` print("Labels: ", label_batch) print("Predicted labels: ", predicted_ids) ``` # Plot Model Predictions ``` plt.figure(figsize=(10,9)) for n in range(30): plt.subplot(6,5,n+1) plt.subplots_adjust(hspace = 0.3) plt.imshow(image_batch[n]) color = "blue" if predicted_ids[n] == label_batch[n] else "red" plt.title(predicted_class_names[n].title(), color=color) plt.axis('off') _ = plt.suptitle("Model predictions (blue: correct, red: incorrect)") ``` # TODO: Perform Transfer Learning with the Inception Model Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) and click on `tf2-preview/inception_v3/feature_vector`. This feature vector corresponds to the Inception v3 model. In the cells below, use transfer learning to create a CNN that uses Inception v3 as the pretrained model to classify the images from the Flowers dataset. Note that Inception, takes as input, images that are 299 x 299 pixels. Compare the accuracy you get with Inception v3 to the accuracy you got with MobileNet v2. ``` IMAGE_RES = 299 (training_set, validation_set), dataset_info = tfds.load( 'tf_flowers', with_info=True, as_supervised=True, split=['train[:70%]', 'train[70%:]'], ) train_batches = training_set.shuffle(num_training_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1) validation_batches = validation_set.map(format_image).batch(BATCH_SIZE).prefetch(1) URL = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4" feature_extractor = hub.KerasLayer(URL, input_shape=(IMAGE_RES, IMAGE_RES, 3), trainable=False) model_inception = tf.keras.Sequential([ feature_extractor, tf.keras.layers.Dense(num_classes) ]) model_inception.summary() model_inception.compile( optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) EPOCHS = 6 history = model_inception.fit(train_batches, epochs=EPOCHS, validation_data=validation_batches) ```
github_jupyter
# Feature Engineering and Labeling We'll use the price-volume data and generate features that we can feed into a model. We'll use this notebook for all the coding exercises of this lesson, so please open this notebook in a separate tab of your browser. Please run the following code up to and including "Make Factors." Then continue on with the lesson. ``` import sys !{sys.executable} -m pip install --quiet -r requirements.txt import numpy as np import pandas as pd import time import matplotlib.pyplot as plt %matplotlib inline plt.style.use('ggplot') plt.rcParams['figure.figsize'] = (14, 8) ``` #### Registering data ``` import os import project_helper from zipline.data import bundles os.environ['ZIPLINE_ROOT'] = os.path.join(os.getcwd(), '..', '..', 'data', 'project_4_eod') ingest_func = bundles.csvdir.csvdir_equities(['daily'], project_helper.EOD_BUNDLE_NAME) bundles.register(project_helper.EOD_BUNDLE_NAME, ingest_func) print('Data Registered') from zipline.pipeline import Pipeline from zipline.pipeline.factors import AverageDollarVolume from zipline.utils.calendars import get_calendar universe = AverageDollarVolume(window_length=120).top(500) trading_calendar = get_calendar('NYSE') bundle_data = bundles.load(project_helper.EOD_BUNDLE_NAME) engine = project_helper.build_pipeline_engine(bundle_data, trading_calendar) universe_end_date = pd.Timestamp('2016-01-05', tz='UTC') universe_tickers = engine\ .run_pipeline( Pipeline(screen=universe), universe_end_date, universe_end_date)\ .index.get_level_values(1)\ .values.tolist() from zipline.data.data_portal import DataPortal data_portal = DataPortal( bundle_data.asset_finder, trading_calendar=trading_calendar, first_trading_day=bundle_data.equity_daily_bar_reader.first_trading_day, equity_minute_reader=None, equity_daily_reader=bundle_data.equity_daily_bar_reader, adjustment_reader=bundle_data.adjustment_reader) def get_pricing(data_portal, trading_calendar, assets, start_date, end_date, field='close'): end_dt = pd.Timestamp(end_date.strftime('%Y-%m-%d'), tz='UTC', offset='C') start_dt = pd.Timestamp(start_date.strftime('%Y-%m-%d'), tz='UTC', offset='C') end_loc = trading_calendar.closes.index.get_loc(end_dt) start_loc = trading_calendar.closes.index.get_loc(start_dt) return data_portal.get_history_window( assets=assets, end_dt=end_dt, bar_count=end_loc - start_loc, frequency='1d', field=field, data_frequency='daily') ``` # Make Factors - We'll use the same factors we have been using in the lessons about alpha factor research. Factors can be features that we feed into the model. 
``` from zipline.pipeline.factors import CustomFactor, DailyReturns, Returns, SimpleMovingAverage from zipline.pipeline.data import USEquityPricing factor_start_date = universe_end_date - pd.DateOffset(years=3, days=2) sector = project_helper.Sector() def momentum_1yr(window_length, universe, sector): return Returns(window_length=window_length, mask=universe) \ .demean(groupby=sector) \ .rank() \ .zscore() def mean_reversion_5day_sector_neutral(window_length, universe, sector): return -Returns(window_length=window_length, mask=universe) \ .demean(groupby=sector) \ .rank() \ .zscore() def mean_reversion_5day_sector_neutral_smoothed(window_length, universe, sector): unsmoothed_factor = mean_reversion_5day_sector_neutral(window_length, universe, sector) return SimpleMovingAverage(inputs=[unsmoothed_factor], window_length=window_length) \ .rank() \ .zscore() class CTO(Returns): """ Computes the overnight return, per hypothesis from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2554010 """ inputs = [USEquityPricing.open, USEquityPricing.close] def compute(self, today, assets, out, opens, closes): """ The opens and closes matrix is 2 rows x N assets, with the most recent at the bottom. As such, opens[-1] is the most recent open, and closes[0] is the earlier close """ out[:] = (opens[-1] - closes[0]) / closes[0] class TrailingOvernightReturns(Returns): """ Sum of trailing 1m O/N returns """ window_safe = True def compute(self, today, asset_ids, out, cto): out[:] = np.nansum(cto, axis=0) def overnight_sentiment(cto_window_length, trail_overnight_returns_window_length, universe): cto_out = CTO(mask=universe, window_length=cto_window_length) return TrailingOvernightReturns(inputs=[cto_out], window_length=trail_overnight_returns_window_length) \ .rank() \ .zscore() def overnight_sentiment_smoothed(cto_window_length, trail_overnight_returns_window_length, universe): unsmoothed_factor = overnight_sentiment(cto_window_length, trail_overnight_returns_window_length, universe) return SimpleMovingAverage(inputs=[unsmoothed_factor], window_length=trail_overnight_returns_window_length) \ .rank() \ .zscore() universe = AverageDollarVolume(window_length=120).top(500) sector = project_helper.Sector() pipeline = Pipeline(screen=universe) pipeline.add( momentum_1yr(252, universe, sector), 'Momentum_1YR') pipeline.add( mean_reversion_5day_sector_neutral_smoothed(20, universe, sector), 'Mean_Reversion_Sector_Neutral_Smoothed') pipeline.add( overnight_sentiment_smoothed(2, 10, universe), 'Overnight_Sentiment_Smoothed') all_factors = engine.run_pipeline(pipeline, factor_start_date, universe_end_date) all_factors.head() ``` #### Stop here and continue with the lesson section titled "Features". # Universal Quant Features * stock volatility: zipline has a custom factor called AnnualizedVolatility. The [source code is here](https://github.com/quantopian/zipline/blob/master/zipline/pipeline/factors/basic.py) and also pasted below: ``` class AnnualizedVolatility(CustomFactor): """ Volatility. The degree of variation of a series over time as measured by the standard deviation of daily returns. https://en.wikipedia.org/wiki/Volatility_(finance) **Default Inputs:** :data:`zipline.pipeline.factors.Returns(window_length=2)` # noqa Parameters ---------- annualization_factor : float, optional The number of time units per year. Defaults is 252, the number of NYSE trading days in a normal year. 
""" inputs = [Returns(window_length=2)] params = {'annualization_factor': 252.0} window_length = 252 def compute(self, today, assets, out, returns, annualization_factor): out[:] = nanstd(returns, axis=0) * (annualization_factor ** .5) ``` ``` from zipline.pipeline.factors import AnnualizedVolatility AnnualizedVolatility() ``` #### Quiz We can see that the returns `window_length` is 2, because we're dealing with daily returns, which are calculated as the percent change from one day to the following day (2 days). The `AnnualizedVolatility` `window_length` is 252 by default, because it's the one-year volatility. Try to adjust the call to the constructor of `AnnualizedVolatility` so that this represents one-month volatility (still annualized, but calculated over a time window of 20 trading days) #### Answer ``` # TODO ``` #### Quiz: Create one-month and six-month annualized volatility. Create `AnnualizedVolatility` objects for 20 day and 120 day (one month and six-month) time windows. Remember to set the `mask` parameter to the `universe` object created earlier (this filters the stocks to match the list in the `universe`). Convert these to ranks, and then convert the ranks to zscores. ``` # TODO volatility_20d # ... volatility_120d # ... ``` #### Add to the pipeline ``` pipeline.add(volatility_20d, 'volatility_20d') pipeline.add(volatility_120d, 'volatility_120d') ``` #### Quiz: Average Dollar Volume feature We've been using [AverageDollarVolume](http://www.zipline.io/appendix.html#zipline.pipeline.factors.AverageDollarVolume) to choose the stock universe based on stocks that have the highest dollar volume. We can also use it as a feature that is input into a predictive model. Use 20 day and 120 day `window_length` for average dollar volume. Then rank it and convert to a zscore. ``` """already imported earlier, but shown here for reference""" #from zipline.pipeline.factors import AverageDollarVolume # TODO: 20-day and 120 day average dollar volume adv_20d = # ... adv_120d = # ... ``` #### Add average dollar volume features to pipeline ``` pipeline.add(adv_20d, 'adv_20d') pipeline.add(adv_120d, 'adv_120d') ``` ### Market Regime Features We are going to try to capture market-wide regimes: Market-wide means we'll look at the aggregate movement of the universe of stocks. High and low dispersion: dispersion is looking at the dispersion (standard deviation) of the cross section of all stocks at each period of time (on each day). We'll inherit from [CustomFactor](http://www.zipline.io/appendix.html?highlight=customfactor#zipline.pipeline.CustomFactor). We'll feed in [DailyReturns](http://www.zipline.io/appendix.html?highlight=dailyreturns#zipline.pipeline.factors.DailyReturns) as the `inputs`. #### Quiz If the `inputs` to our market dispersion factor are the daily returns, and we plan to calculate the market dispersion on each day, what should be the `window_length` of the market dispersion class? #### Answer #### Quiz: market dispersion feature Create a class that inherits from `CustomFactor`. Override the `compute` function to calculate the population standard deviation of all the stocks over a specified window of time. **mean returns** $\mu = \sum_{t=0}^{T}\sum_{i=1}^{N}r_{i,t}$ **Market Dispersion** $\sqrt{\frac{1}{T} \sum_{t=0}^{T} \frac{1}{N}\sum_{i=1}^{N}(r_{i,t} - \mu)^2}$ Use [numpy.nanmean](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.nanmean.html) to calculate the average market return $\mu$ and to calculate the average of the squared differences. 
``` class MarketDispersion(CustomFactor): inputs = [DailyReturns()] window_length = # ... window_safe = True def compute(self, today, assets, out, returns): # TODO: calculate average returns mean_returns = # ... #TODO: calculate standard deviation of returns out[:] = # ... ``` #### Quiz Create the MarketDispersion object. Apply two separate smoothing operations using [SimpleMovingAverage](https://www.zipline.io/appendix.html?highlight=simplemovingaverage#zipline.pipeline.factors.SimpleMovingAverage). One with a one-month window, and another with a 6-month window. Add both to the pipeline. ``` # TODO: create MarketDispersion object dispersion = # ... # TODO: apply one-month simple moving average dispersion_20d = # ... # TODO: apply 6-month simple moving average dispersion_120d = # ... # Add to pipeline pipeline.add(dispersion_20d, 'dispersion_20d') pipeline.add(dispersion_120d, 'dispersion_120d') ``` #### Market volatility feature * High and low volatility We'll also build a class for market volatility, which inherits from [CustomFactor](http://www.zipline.io/appendix.html?highlight=customfactor#zipline.pipeline.CustomFactor). This will measure the standard deviation of the returns of the "market". In this case, we're approximating the "market" as the equal weighted average return of all the stocks in the stock universe. ##### Market return $r_{m,t} = \frac{1}{N}\sum_{i=1}^{N}r_{i,t}$ for each day $t$ in `window_length`. ##### Average market return Also calculate the average market return over the `window_length` $T$ of days: $\mu_{m} = \frac{1}{T}\sum_{t=1}^{T} r_{m,t}$ #### Standard deviation of market return Then calculate the standard deviation of the market return $\sigma_{m,t} = \sqrt{252 \times \frac{1}{N} \sum_{t=1}^{T}(r_{m,t} - \mu_{m})^2 } $ ##### Hints * Please use [numpy.nanmean](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.nanmean.html) so that it ignores null values. * When using `numpy.nanmean`: axis=0 will calculate one average for every column (think of it like creating a new row in a spreadsheet) axis=1 will calculate one average for every row (think of it like creating a new column in a spreadsheet) * The returns data in `compute` has one day in each row, and one stock in each column. * Notice that we defined a dictionary `params` that has a key `annualization_factor`. This `annualization_factor` can be used as a regular variable, and you'll be using it in the `compute` function. This is also done in the definition of AnnualizedVolatility (as seen earlier in the notebook). ``` class MarketVolatility(CustomFactor): inputs = [DailyReturns()] window_length = 1 # We'll want to set this in the constructor when creating the object. window_safe = True params = {'annualization_factor': 252.0} def compute(self, today, assets, out, returns, annualization_factor): # TODO """ For each row (each row represents one day of returns), calculate the average of the cross-section of stock returns So that market_returns has one value for each day in the window_length So choose the appropriate axis (please see hints above) """ mkt_returns = # ... # TODO # Calculate the mean of market returns mkt_returns_mu = # ... # TODO # Calculate the standard deviation of the market returns, then annualize them. out[:] = # ... # TODO: create market volatility features using one month and six-month windows market_vol_20d = # ... market_vol_120d = # ... 
# add market volatility features to pipeline pipeline.add(market_vol_20d, 'market_vol_20d') pipeline.add(market_vol_120d, 'market_vol_120d') ``` #### Stop here and continue with the lesson section "Sector and Industry" # Sector and Industry #### Add sector code Note that after we run the pipeline and get the data in a dataframe, we can work on enhancing the sector code feature with one-hot encoding. ``` pipeline.add(sector, 'sector_code') ``` #### Run pipeline to calculate features ``` all_factors = engine.run_pipeline(pipeline, factor_start_date, universe_end_date) all_factors.head() ``` #### One-hot encode sector Let's get all the unique sector codes. Then we'll use the `==` comparison operator to check when the sector code equals a particular value. This returns a series of True/False values. For some functions that we'll use in a later lesson, it's easier to work with numbers instead of booleans. We can convert the booleans to type int. So False becomes 0, and 1 becomes True. ``` sector_code_l = set(all_factors['sector_code']) sector_0 = all_factors['sector_code'] == 0 sector_0[0:5] sector_0_numeric = sector_0.astype(int) sector_0_numeric[0:5] ``` #### Quiz: One-hot encode sector Choose column names that look like "sector_code_0", "sector_code_1" etc. Store the values as 1 when the row matches the sector code of the column, 0 otherwise. ``` # TODO: one-hot encode sector and store into dataframe for s in sector_code_l: # ... all_factors.head() ``` #### Stop here and continue with the lesson section "Date Parts". # Date Parts * We will make features that might capture trader/investor behavior due to calendar anomalies. * We can get the dates from the index of the dataframe that is returned from running the pipeline. #### Accessing index of dates * Note that we can access the date index. using `Dataframe.index.get_level_values(0)`, since the date is stored as index level 0, and the asset name is stored in index level 1. This is of type [DateTimeIndex](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DatetimeIndex.html). ``` all_factors.index.get_level_values(0) ``` #### [DateTimeIndex attributes](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DatetimeIndex.html) * The `month` attribute is a numpy array with a 1 for January, 2 for February ... 12 for December etc. * We can use a comparison operator such as `==` to return True or False. * It's usually easier to have all data of a similar type (numeric), so we recommend converting booleans to integers. The numpy ndarray has a function `.astype()` that can cast the data to a specified type. For instance, `astype(int)` converts False to 0 and True to 1. ``` # Example print(all_factors.index.get_level_values(0).month) print(all_factors.index.get_level_values(0).month == 1) print( (all_factors.index.get_level_values(0).month == 1).astype(int) ) ``` ## Quiz * Create a numpy array that has 1 when the month is January, and 0 otherwise. Store it as a column in the all_factors dataframe. * Add another similar column to indicate when the month is December ``` # TODO: create a feature that indicate whether it's January all_factors['is_January'] = # ... # TODO: create a feature to indicate whether it's December all_factors['is_December'] = # ... ``` ## Weekday, quarter * add columns to the all_factors dataframe that specify the weekday, quarter and year. 
* As you can see in the [documentation for DateTimeIndex](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DatetimeIndex.html), `weekday`, `quarter`, and `year` are attributes that you can use here. ``` # we can see that 0 is for Monday, 4 is for Friday set(all_factors.index.get_level_values(0).weekday) # Q1, Q2, Q3 and Q4 are represented by integers too set(all_factors.index.get_level_values(0).quarter) ``` #### Quiz Add features for weekday, quarter and year. ``` # TODO all_factors['weekday'] = # ... all_factors['quarter'] = # ... all_factors['year'] = # ... ``` ## Start and end-of features * The start and end of the week, month, and quarter may have structural differences in trading activity. * [Pandas.date_range](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html) takes the start_date, end_date, and frequency. * The [frequency](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases) for end of month is `BM`. ``` # Example tmp = pd.date_range(start=factor_start_date, end=universe_end_date, freq='BM') tmp ``` #### Example Create a DatetimeIndex that stores the dates which are the last business day of each month. Use the `.isin` function, passing in these last days of the month, to create a series of booleans. Convert the booleans to integers. ``` last_day_of_month = pd.date_range(start=factor_start_date, end=universe_end_date, freq='BM') last_day_of_month tmp_month_end = all_factors.index.get_level_values(0).isin(last_day_of_month) tmp_month_end tmp_month_end_int = tmp_month_end.astype(int) tmp_month_end_int all_factors['month_end'] = tmp_month_end_int ``` #### Quiz: Start of Month Create a feature that indicates the first business day of each month. **Hint:** The frequency for first business day of the month uses the code `BMS`. ``` # TODO: month_start feature first_day_of_month = # pd.date_range() all_factors['month_start'] = # ... ``` #### Quiz: Quarter end and quarter start Create features for the last business day of each quarter, and first business day of each quarter. **Hint**: use `freq=BQ` for business day end of quarter, and `freq=BQS` for business day start of quarter. ``` # TODO: qtr_end feature last_day_qtr = # ... all_factors['qtr_end'] = # ... # TODO: qtr_start feature first_day_qtr = # ... all_factors['qtr_start'] = # ... ``` ## View all features ``` list(all_factors.columns) ``` Note that we can skip the sector_code feature, since we one-hot encoded it into separate features. ``` features = ['Mean_Reversion_Sector_Neutral_Smoothed', 'Momentum_1YR', 'Overnight_Sentiment_Smoothed', 'adv_120d', 'adv_20d', 'dispersion_120d', 'dispersion_20d', 'market_vol_120d', 'market_vol_20d', #'sector_code', # removed sector_code 'volatility_120d', 'volatility_20d', 'sector_code_0', 'sector_code_1', 'sector_code_2', 'sector_code_3', 'sector_code_4', 'sector_code_5', 'sector_code_6', 'sector_code_7', 'sector_code_8', 'sector_code_9', 'sector_code_10', 'sector_code_-1', 'is_January', 'is_December', 'weekday', 'quarter', 'year', 'month_start', 'qtr_end', 'qtr_start'] ``` #### Stop here and continue to the lesson section "Targets" # Targets (Labels) - We are going to try to predict the go forward 1-week return - Very important! Quantize the target. Why do we do this? - Makes it market neutral return - Normalizes changing volatility and dispersion over time - Make the target robust to changes in market regimes - The factor we create is the trailing 5-day return. 
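To build intuition for the quantization step described in the list above, here is a toy example using `pandas.qcut`, which buckets a cross-section of returns into equal-sized bins. This is only an illustration with made-up numbers; inside the pipeline, the conceptually similar operation is the zipline `.quantiles(...)` call used in the next cells.

```
# Toy illustration of quantizing a cross-section of weekly returns.
import pandas as pd   # already imported earlier in this notebook

toy_weekly_returns = pd.Series([0.04, -0.02, 0.01, 0.00, -0.05, 0.03])

# 2 quantiles: 0 = bottom half, 1 = top half of the cross-section.
print(pd.qcut(toy_weekly_returns, 2, labels=False).values)   # [1 0 1 0 0 1]

# 5 quantiles, matching the 5-quantile target the quiz below asks for.
print(pd.qcut(toy_weekly_returns, 5, labels=False).values)
```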
``` # we'll create a separate pipeline to handle the target pipeline_target = Pipeline(screen=universe) ``` #### Example We'll convert weekly returns into 2-quantiles. ``` return_5d_2q = Returns(window_length=5, mask=universe).quantiles(2) return_5d_2q pipeline_target.add(return_5d_2q, 'return_5d_2q') ``` #### Quiz Create another weekly return target that's converted to 5-quantiles. ``` # TODO: create a target using 5-quantiles return_5d_5q = # ... # TODO: add the feature to the pipeline # ... # Let's run the pipeline to get the dataframe targets_df = engine.run_pipeline(pipeline_target, factor_start_date, universe_end_date) targets_df.head() targets_df.columns ``` ## Solution [solution notebook](feature_engineering_solution.ipynb)
github_jupyter
# Introduction to Linear Algebra This is a tutorial designed to introduce you to the basics of linear algebra. Linear algebra is a branch of mathematics dedicated to studying the properties of matrices and vectors, which are used extensively in quantum computing to represent quantum states and operations on them. This tutorial doesn't come close to covering the full breadth of the topic, but it should be enough to get you comfortable with the main concepts of linear algebra used in quantum computing. This tutorial assumes familiarity with complex numbers; if you need a review of this topic, we recommend that you complete the [Complex Arithmetic](../ComplexArithmetic/ComplexArithmetic.ipynb) tutorial before tackling this one. This tutorial covers the following topics: * Matrices and vectors * Basic matrix operations * Operations and properties of complex matrices * Inner and outer vector products * Tensor product * Eigenvalues and eigenvectors If you need to look up some formulas quickly, you can find them in [this cheatsheet](https://github.com/microsoft/QuantumKatas/blob/main/quickref/qsharp-quick-reference.pdf). This notebook has several tasks that require you to write Python code to test your understanding of the concepts. If you are not familiar with Python, [here](https://docs.python.org/3/tutorial/index.html) is a good introductory tutorial for it. > The exercises use Python's built-in representation of complex numbers. Most of the operations (addition, multiplication, etc.) work as you expect them to. Here are a few notes on Python-specific syntax: > > * If `z` is a complex number, `z.real` is the real component, and `z.imag` is the coefficient of the imaginary component. > * To represent an imaginary number, put `j` after a real number: $3.14i$ would be `3.14j`. > * To represent a complex number, simply add a real number and an imaginary number. > * The built-in function `abs` computes the modulus of a complex number. > > You can find more information in the [official documentation](https://docs.python.org/3/library/cmath.html). Let's start by importing some useful mathematical functions and constants, and setting up a few things necessary for testing the exercises. **Do not skip this step.** Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac). ``` # Run this cell using Ctrl+Enter (⌘+Enter on Mac). from testing import exercise, create_empty_matrix from typing import List import math, cmath Matrix = List[List[complex]] ``` # Part I. Matrices and Basic Operations ## Matrices and Vectors A **matrix** is set of numbers arranged in a rectangular grid. Here is a $2$ by $2$ matrix: $$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$ $A_{i,j}$ refers to the element in row $i$ and column $j$ of matrix $A$ (all indices are 0-based). In the above example, $A_{0,1} = 2$. An $n \times m$ matrix will have $n$ rows and $m$ columns, like so: $$\begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1} \end{bmatrix}$$ A $1 \times 1$ matrix is equivalent to a scalar: $$\begin{bmatrix} 3 \end{bmatrix} = 3$$ Quantum computing uses complex-valued matrices: the elements of a matrix can be complex numbers. This, for example, is a valid complex-valued matrix: $$\begin{bmatrix} 1 & i \\ -2i & 3 + 4i \end{bmatrix}$$ Finally, a **vector** is an $n \times 1$ matrix. 
Here, for example, is a $3 \times 1$ vector: $$V = \begin{bmatrix} 1 \\ 2i \\ 3 + 4i \end{bmatrix}$$ Since vectors always have a width of $1$, vector elements are sometimes written using only one index. In the above example, $V_0 = 1$ and $V_1 = 2i$. ## Matrix Addition The easiest matrix operation is **matrix addition**. Matrix addition works between two matrices of the same size, and adds each number from the first matrix to the number in the same position in the second matrix: $$\begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1} \end{bmatrix} + \begin{bmatrix} y_{0,0} & y_{0,1} & \dotsb & y_{0,m-1} \\ y_{1,0} & y_{1,1} & \dotsb & y_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n-1,0} & y_{n-1,1} & \dotsb & y_{n-1,m-1} \end{bmatrix} = \begin{bmatrix} x_{0,0} + y_{0,0} & x_{0,1} + y_{0,1} & \dotsb & x_{0,m-1} + y_{0,m-1} \\ x_{1,0} + y_{1,0} & x_{1,1} + y_{1,1} & \dotsb & x_{1,m-1} + y_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} + y_{n-1,0} & x_{n-1,1} + y_{n-1,1} & \dotsb & x_{n-1,m-1} + y_{n-1,m-1} \end{bmatrix}$$ Similarly, we can compute $A - B$ by subtracting elements of $B$ from corresponding elements of $A$. Matrix addition has the following properties: * Commutativity: $A + B = B + A$ * Associativity: $(A + B) + C = A + (B + C)$ ### <span style="color:blue">Exercise 1</span>: Matrix addition. **Inputs:** 1. An $n \times m$ matrix $A$, represented as a two-dimensional list. 2. An $n \times m$ matrix $B$, represented as a two-dimensional list. **Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list. > When representing matrices as lists, each sub-list represents a row. > > For example, list `[[1, 2], [3, 4]]` represents the following matrix: > > $$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$ Fill in the missing code and run the cell below to test your work. <br/> <details> <summary><b>Need a hint? Click here</b></summary> A video explanation can be found <a href="https://www.youtube.com/watch?v=WR9qCSXJlyY">here</a>. </details> ``` @exercise def matrix_add(a : Matrix, b : Matrix) -> Matrix: # You can get the size of a matrix like this: rows = len(a) columns = len(a[0]) # You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer c = create_empty_matrix(rows, columns) # You can use a for loop to execute its body several times; # in this loop variable i will take on each value from 0 to n-1, inclusive for i in range(rows): # Loops can be nested for j in range(columns): # You can access elements of a matrix like this: x = a[i][j] y = b[i][j] # You can modify the elements of a matrix like this: c[i][j] = x + y return c ``` *Can't come up with a solution? 
See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-1:-Matrix-addition.).* ## Scalar Multiplication The next matrix operation is **scalar multiplication** - multiplying the entire matrix by a scalar (real or complex number): $$a \cdot \begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1} \end{bmatrix} = \begin{bmatrix} a \cdot x_{0,0} & a \cdot x_{0,1} & \dotsb & a \cdot x_{0,m-1} \\ a \cdot x_{1,0} & a \cdot x_{1,1} & \dotsb & a \cdot x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ a \cdot x_{n-1,0} & a \cdot x_{n-1,1} & \dotsb & a \cdot x_{n-1,m-1} \end{bmatrix}$$ Scalar multiplication has the following properties: * Associativity: $x \cdot (yA) = (x \cdot y)A$ * Distributivity over matrix addition: $x(A + B) = xA + xB$ * Distributivity over scalar addition: $(x + y)A = xA + yA$ ### <span style="color:blue">Exercise 2</span>: Scalar multiplication. **Inputs:** 1. A scalar $x$. 2. An $n \times m$ matrix $A$. **Output:** Return the $n \times m$ matrix $x \cdot A$. <br/> <details> <summary><b>Need a hint? Click here</b></summary> A video explanation can be found <a href="https://www.youtube.com/watch?v=TbaltFbJ3wE">here</a>. </details> ``` @exercise def scalar_mult(x : complex, a : Matrix) -> Matrix: # Fill in the missing code and run the cell to check your work. rows = len(a) columns = len(a[0]) c = create_empty_matrix(rows, columns) # You can use a for loop to execute its body several times; # in this loop variable i will take on each value from 0 to n-1, inclusive for i in range(rows): # Loops can be nested for j in range(columns): # You can access elements of a matrix like this: current_cell = a[i][j] # You can modify the elements of a matrix like this: c[i][j] = x * current_cell return c ``` *Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-2:-Scalar-multiplication.).* ## Matrix Multiplication **Matrix multiplication** is a very important and somewhat unusual operation. The unusual thing about it is that neither its operands nor its output are the same size: an $n \times m$ matrix multiplied by an $m \times k$ matrix results in an $n \times k$ matrix. That is, for matrix multiplication to be applicable, the number of columns in the first matrix must equal the number of rows in the second matrix. Here is how matrix product is calculated: if we are calculating $AB = C$, then $$C_{i,j} = A_{i,0} \cdot B_{0,j} + A_{i,1} \cdot B_{1,j} + \dotsb + A_{i,m-1} \cdot B_{m-1,j} = \sum_{t = 0}^{m-1} A_{i,t} \cdot B_{t,j}$$ Here is a small example: $$\begin{bmatrix} \color{blue} 1 & \color{blue} 2 & \color{blue} 3 \\ \color{red} 4 & \color{red} 5 & \color{red} 6 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = \begin{bmatrix} (\color{blue} 1 \cdot 1) + (\color{blue} 2 \cdot 2) + (\color{blue} 3 \cdot 3) \\ (\color{red} 4 \cdot 1) + (\color{red} 5 \cdot 2) + (\color{red} 6 \cdot 3) \end{bmatrix} = \begin{bmatrix} 14 \\ 32 \end{bmatrix}$$ Matrix multiplication has the following properties: * Associativity: $A(BC) = (AB)C$ * Distributivity over matrix addition: $A(B + C) = AB + AC$ and $(A + B)C = AC + BC$ * Associativity with scalar multiplication: $xAB = x(AB) = A(xB)$ > Note that matrix multiplication is **not commutative:** $AB$ rarely equals $BA$. 
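For instance, even for small matrices the order of multiplication matters:

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 3 \end{bmatrix}, \text{ but } \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\ 0 & 0 \end{bmatrix}$$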
Another very important property of matrix multiplication is that a matrix multiplied by a vector produces another vector. An **identity matrix** $I_n$ is a special $n \times n$ matrix which has $1$s on the main diagonal, and $0$s everywhere else: $$I_n = \begin{bmatrix} 1 & 0 & \dotsb & 0 \\ 0 & 1 & \dotsb & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dotsb & 1 \end{bmatrix}$$ What makes it special is that multiplying any matrix (of compatible size) by $I_n$ returns the original matrix. To put it another way, if $A$ is an $n \times m$ matrix: $$AI_m = I_nA = A$$ This is why $I_n$ is called an identity matrix - it acts as a **multiplicative identity**. In other words, it is the matrix equivalent of the number $1$. ### <span style="color:blue">Exercise 3</span>: Matrix multiplication. **Inputs:** 1. An $n \times m$ matrix $A$. 2. An $m \times k$ matrix $B$. **Output:** Return the $n \times k$ matrix equal to the matrix product $AB$. <br/> <details> <summary><strong>Need a hint? Click here</strong></summary> To solve this exercise, you will need 3 <code>for</code> loops: one to go over $n$ rows of the output matrix, one to go over $k$ columns, and one to add up $m$ products that form each element of the output: <pre> <code> for i in range(n): for j in range(k): sum = 0 for t in range(m): sum = sum + ... c[i][j] = sum </code> </pre> A video explanation can be found <a href="https://www.youtube.com/watch?v=OMA2Mwo0aZg">here</a>. </details> ``` @exercise def matrix_mult(a : Matrix, b : Matrix) -> Matrix: n = len(a) m = len(a[0]) k = len(b[0]) c = create_empty_matrix(n, k) def calc_sum_this_cell(i, j, m): sum_cell = 0 for t in range(m): sum_cell += a[i][t] * b[t][j] return sum_cell for i in range(n): for j in range(k): c[i][j] = calc_sum_this_cell(i, j, m) return c ``` *Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-3:-Matrix-multiplication.).* ## Inverse Matrices A square $n \times n$ matrix $A$ is **invertible** if it has an inverse $n \times n$ matrix $A^{-1}$ with the following property: $$AA^{-1} = A^{-1}A = I_n$$ In other words, $A^{-1}$ acts as the **multiplicative inverse** of $A$. Another, equivalent definition highlights what makes this an interesting property. For any matrices $B$ and $C$ of compatible sizes: $$A^{-1}(AB) = A(A^{-1}B) = B \\ (CA)A^{-1} = (CA^{-1})A = C$$ A square matrix has a property called the **determinant**, with the determinant of matrix $A$ being written as $|A|$. A matrix is invertible if and only if its determinant isn't equal to $0$. For a $2 \times 2$ matrix $A$, the determinant is defined as $|A| = (A_{0,0} \cdot A_{1,1}) - (A_{0,1} \cdot A_{1,0})$. For larger matrices, the determinant is defined through determinants of sub-matrices. You can learn more from [Wikipedia](https://en.wikipedia.org/wiki/Determinant) or from [Wolfram MathWorld](http://mathworld.wolfram.com/Determinant.html). ### <span style="color:blue">Exercise 4</span>: Matrix Inversion. **Input:** An invertible $2 \times 2$ matrix $A$. **Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$. <br/> <details> <summary><strong>Need a hint? Click here</strong></summary> Try to come up with a general method of doing it by hand first. If you get stuck, you may find <a href="https://en.wikipedia.org/wiki/Invertible_matrix#Inversion_of_2_%C3%97_2_matrices">this Wikipedia article</a> useful. For this exercise, $|A|$ is guaranteed to be non-zero. 
<br> A video explanation can be found <a href="https://www.youtube.com/watch?v=01c12NaUQDw">here</a>. </details> ``` @exercise def matrix_inverse(m : Matrix) -> Matrix: #inverse must be same size as original (and should be square, which we could verify) m_inverse = create_empty_matrix(len(m), len(m[0])) a = m[0][0] b = m[0][1] c = m[1][0] d = m[1][1] determinant_m = a * d - b * c if determinant_m != 0: m_inverse[0][0] = d / determinant_m m_inverse[0][1] = -b / determinant_m m_inverse[1][0] = -c / determinant_m m_inverse[1][1] = a / determinant_m return m_inverse ``` *Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-4:-Matrix-Inversion.).* ## Transpose The **transpose** operation, denoted as $A^T$, is essentially a reflection of the matrix across the diagonal: $(A^T)_{i,j} = A_{j,i}$. Given an $n \times m$ matrix $A$, its transpose is the $m \times n$ matrix $A^T$, such that if: $$A = \begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1} \end{bmatrix}$$ then: $$A^T = \begin{bmatrix} x_{0,0} & x_{1,0} & \dotsb & x_{n-1,0} \\ x_{0,1} & x_{1,1} & \dotsb & x_{n-1,1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{0,m-1} & x_{1,m-1} & \dotsb & x_{n-1,m-1} \end{bmatrix}$$ For example: $$\begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}^T = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}$$ A **symmetric** matrix is a square matrix which equals its own transpose: $A = A^T$. To put it another way, it has reflection symmetry (hence the name) across the main diagonal. For example, the following matrix is symmetric: $$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6 \end{bmatrix}$$ The transpose of a matrix product is equal to the product of transposed matrices, taken in reverse order: $$(AB)^T = B^TA^T$$ ### <span style="color:blue">Exercise 5</span>: Transpose. **Input:** An $n \times m$ matrix $A$. **Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$. <br/> <details> <summary><b>Need a hint? Click here</b></summary> A video explanation can be found <a href="https://www.youtube.com/watch?v=TZrKrNVhbjI">here</a>. </details> ``` @exercise def transpose(a : Matrix) -> Matrix: n = len(a) m = len(a[0]) #transpose of n x m is m x n transpose_of_a = create_empty_matrix(m, n) #for each row, make it a column for i in range(n): for j in range(m): transpose_of_a[j][i] = a[i][j] return transpose_of_a ``` *Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-5:-Transpose.).* ## Conjugate The next important single-matrix operation is the **matrix conjugate**, denoted as $\overline{A}$. 
This, as the name might suggest, involves taking the [complex conjugate](../ComplexArithmetic/ComplexArithmetic.ipynb#Complex-Conjugate) of every element of the matrix: if $$A = \begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1} \end{bmatrix}$$ Then: $$\overline{A} = \begin{bmatrix} \overline{x}_{0,0} & \overline{x}_{0,1} & \dotsb & \overline{x}_{0,m-1} \\ \overline{x}_{1,0} & \overline{x}_{1,1} & \dotsb & \overline{x}_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ \overline{x}_{n-1,0} & \overline{x}_{n-1,1} & \dotsb & \overline{x}_{n-1,m-1} \end{bmatrix}$$ The conjugate of a matrix product equals to the product of conjugates of the matrices: $$\overline{AB} = (\overline{A})(\overline{B})$$ ### <span style="color:blue">Exercise 6</span>: Conjugate. **Input:** An $n \times m$ matrix $A$. **Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$. > As a reminder, you can get the real and imaginary components of complex number `z` using `z.real` and `z.imag`, respectively. <details> <summary><b>Need a hint? Click here</b></summary> To calculate the conjugate of a matrix take the conjugate of each element, check the <a href="../ComplexArithmetic/ComplexArithmetic.ipynb#Exercise-4:-Complex-conjugate.">complex arithmetic tutorial</a> to see how to calculate the conjugate of a complex number. </details> ``` @exercise def conjugate(a : Matrix) -> Matrix: # result is same size n = len(a) m = len(a[0]) conjugate_of_a = create_empty_matrix(n, m) for i in range(n): for j in range(m): conjugate_of_a[i][j] = a[i][j].real + (-1)* a[i][j].imag * 1j #1j is i in python ugh return conjugate_of_a ``` *Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-6:-Conjugate.).* ## Adjoint The final important single-matrix operation is a combination of the above two. The **conjugate transpose**, also called the **adjoint** of matrix $A$, is defined as $A^\dagger = \overline{(A^T)} = (\overline{A})^T$. A matrix is known as **Hermitian** or **self-adjoint** if it equals its own adjoint: $A = A^\dagger$. For example, the following matrix is Hermitian: $$\begin{bmatrix} 1 & i \\ -i & 2 \end{bmatrix}$$ The adjoint of a matrix product can be calculated as follows: $$(AB)^\dagger = B^\dagger A^\dagger$$ ### <span style="color:blue">Exercise 7</span>: Adjoint. **Input:** An $n \times m$ matrix $A$. **Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$. > Don't forget, you can re-use functions you've written previously. ``` @exercise def adjoint(a : Matrix) -> Matrix: #first do transpose, then do conjugate #size of result will be m x n because of the transpose n = len(a) m = len(a[0]) adjoint_of_a = create_empty_matrix(m, n) #transpose - for each row, make it a column for i in range(n): for j in range(m): adjoint_of_a[j][i] = a[i][j] #conjugate let a + bi become a - bi for i in range(m): for j in range(n): adjoint_of_a[i][j] = adjoint_of_a[i][j].real + (-1)* adjoint_of_a[i][j].imag * 1j return adjoint_of_a ``` *Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-7:-Adjoint.).* ## Unitary Matrices **Unitary matrices** are very important for quantum computing. A matrix is unitary when it is invertible, and its inverse is equal to its adjoint: $U^{-1} = U^\dagger$. 
That is, an $n \times n$ square matrix $U$ is unitary if and only if $UU^\dagger = U^\dagger U = I_n$. For example, the following matrix is unitary: $$\begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{i}{\sqrt{2}} & \frac{-i}{\sqrt{2}} \\ \end{bmatrix}$$ ### <span style="color:blue">Exercise 8</span>: Unitary Verification. **Input:** An $n \times n$ matrix $A$. **Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't. > Because of inaccuracy when dealing with floating point numbers on a computer (rounding errors), you won't always get the exact result you are expecting from a long series of calculations. To get around this, Python has a function `approx` which can be used to check if two numbers are "close enough:" `a == approx(b)`. <br/> <details> <summary><strong>Need a hint? Click here</strong></summary> Keep in mind, you have only implemented matrix inverses for $2 \times 2$ matrices, and this exercise may give you larger inputs. There is a way to solve this without taking the inverse. </details> ``` from pytest import approx @exercise def is_matrix_unitary(a : Matrix) -> bool: #if a is unitary, then a multiplied by its adjoint yields I #this will automatically handle the zero matrix corner case #this is for square nxn matrix n = len(a) product_matrix = matrix_mult(a, adjoint(a)) #check whether product_matrix is I is_unitary = True for i in range(n): for j in range(n): #diagonal must be 1, all others must be zero #holy ugly code batman if (i == j and product_matrix[i][j] != approx(1)) or (i != j and product_matrix[i][j] != approx(0)): is_unitary = False break; return is_unitary ``` *Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-8:-Unitary-Verification.).* ## Next Steps Congratulations! At this point, you should understand enough linear algebra to be able to get started with the tutorials on [the concept of qubit](../Qubit/Qubit.ipynb) and on [single-qubit quantum gates](../SingleQubitGates/SingleQubitGates.ipynb). The next section covers more advanced matrix operations that help explain the properties of qubits and quantum gates. # Part II. Advanced Operations ## Inner Product The **inner product** is yet another important matrix operation that is only applied to vectors. Given two vectors $V$ and $W$ of the same size, their inner product $\langle V , W \rangle$ is defined as a product of matrices $V^\dagger$ and $W$: $$\langle V , W \rangle = V^\dagger W$$ Let's break this down so it's a bit easier to understand. A $1 \times n$ matrix (the adjoint of an $n \times 1$ vector) multiplied by an $n \times 1$ vector results in a $1 \times 1$ matrix (which is equivalent to a scalar). The result of an inner product is that scalar. To put it another way, to calculate the inner product of two vectors, take the corresponding elements $V_k$ and $W_k$, multiply the complex conjugate of $V_k$ by $W_k$, and add up those products: $$\langle V , W \rangle = \sum_{k=0}^{n-1}\overline{V_k}W_k$$ Here is a simple example: $$\langle \begin{bmatrix} -6 \\ 9i \end{bmatrix} , \begin{bmatrix} 3 \\ -8 \end{bmatrix} \rangle = \begin{bmatrix} -6 \\ 9i \end{bmatrix}^\dagger \begin{bmatrix} 3 \\ -8 \end{bmatrix} = \begin{bmatrix} -6 & -9i \end{bmatrix} \begin{bmatrix} 3 \\ -8 \end{bmatrix} = (-6) \cdot (3) + (-9i) \cdot (-8) = -18 + 72i$$ If you are familiar with the **dot product**, you will notice that it is equivalent to inner product for real-numbered vectors. 
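As a quick numeric check of the worked example above, here is a standalone sketch using Python's built-in complex numbers (the same representation used throughout this tutorial):

```
# Verify the example above: the inner product of [-6, 9i] and [3, -8]
# should equal -18 + 72i.
v = [-6, 9j]
w = [3, -8]
print(sum(vk.conjugate() * wk for vk, wk in zip(v, w)))   # (-18+72j)
```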
> We use our definition for these tutorials because it matches the notation used in quantum computing. You might encounter other sources which define the inner product a little differently: $\langle V , W \rangle = W^\dagger V = V^T\overline{W}$, in contrast to the $V^\dagger W$ that we use. These definitions are almost equivalent, with some differences in the scalar multiplication by a complex number.

An immediate application for the inner product is computing the **vector norm**. The norm of vector $V$ is defined as $||V|| = \sqrt{\langle V , V \rangle}$. This condenses the vector down to a single non-negative real value. If the vector represents coordinates in space, the norm happens to be the length of the vector. A vector is called **normalized** if its norm is equal to $1$.

The inner product has the following properties:

* Distributivity over addition: $\langle V + W , X \rangle = \langle V , X \rangle + \langle W , X \rangle$ and $\langle V , W + X \rangle = \langle V , W \rangle + \langle V , X \rangle$
* Partial associativity with scalar multiplication: $x \cdot \langle V , W \rangle = \langle \overline{x}V , W \rangle = \langle V , xW \rangle$
* Skew symmetry: $\langle V , W \rangle = \overline{\langle W , V \rangle}$
* Multiplying a vector by a unitary matrix **preserves the vector's inner product with itself** (and therefore the vector's norm): $\langle UV , UV \rangle = \langle V , V \rangle$

> Note that just like matrix multiplication, the inner product is **not commutative**: $\langle V , W \rangle$ won't always equal $\langle W , V \rangle$.

### <span style="color:blue">Exercise 9</span>: Inner product.

**Inputs:**

1. An $n \times 1$ vector $V$.
2. An $n \times 1$ vector $W$.

**Output:** Return a complex number - the inner product $\langle V , W \rangle$.

<br/>
<details>
<summary><b>Need a hint? Click here</b></summary>
A video explanation can be found <a href="https://www.youtube.com/watch?v=FCmH4MqbFGs">here</a>.
</details>

```
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
    n = len(v)
    conjugate_of_v = conjugate(v)
    inner_product = 0
    for k in range(n):
        inner_product += conjugate_of_v[k][0] * w[k][0]
    return inner_product
```

*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-9:-Inner-product.).*

### <span style="color:blue">Exercise 10</span>: Normalized vectors.

**Input:** A non-zero $n \times 1$ vector $V$.

**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$.

<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
You might need the square root function to solve this exercise. As a reminder, <a href=https://docs.python.org/3/library/math.html#math.sqrt>Python's square root function</a> is available in the <code>math</code> library.<br>
A video explanation can be found <a href="https://www.youtube.com/watch?v=7fn03DIW3Ak">here</a>. Note that when this method is used with complex vectors, you should take the modulus of the complex number for the division.
</details>

```
@exercise
def normalize(v : Matrix) -> Matrix:
    # inner_prod returns a complex number (with zero imaginary part), and
    # math.sqrt cannot handle the complex type, so take its modulus first.
    prod = inner_prod(v, v)
    modulus_of_prod = math.sqrt(prod.real**2 + prod.imag**2)
    norm = math.sqrt(modulus_of_prod)
    v_normalized = create_empty_matrix(len(v), 1)
    for k in range(len(v)):
        v_normalized[k][0] = v[k][0] / norm
    return v_normalized
```

*Can't come up with a solution?
See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-10:-Normalized-vectors.).* ## Outer Product The **outer product** of two vectors $V$ and $W$ is defined as $VW^\dagger$. That is, the outer product of an $n \times 1$ vector and an $m \times 1$ vector is an $n \times m$ matrix. If we denote the outer product of $V$ and $W$ as $X$, then $X_{i,j} = V_i \cdot \overline{W_j}$. Here is a simple example: outer product of $\begin{bmatrix} -3i \\ 9 \end{bmatrix}$ and $\begin{bmatrix} 9i \\ 2 \\ 7 \end{bmatrix}$ is: $$\begin{bmatrix} \color{blue} {-3i} \\ \color{blue} 9 \end{bmatrix} \begin{bmatrix} \color{red} {9i} \\ \color{red} 2 \\ \color{red} 7 \end{bmatrix}^\dagger = \begin{bmatrix} \color{blue} {-3i} \\ \color{blue} 9 \end{bmatrix} \begin{bmatrix} \color{red} {-9i} & \color{red} 2 & \color{red} 7 \end{bmatrix} = \begin{bmatrix} \color{blue} {-3i} \cdot \color{red} {(-9i)} & \color{blue} {-3i} \cdot \color{red} 2 & \color{blue} {-3i} \cdot \color{red} 7 \\ \color{blue} 9 \cdot \color{red} {(-9i)} & \color{blue} 9 \cdot \color{red} 2 & \color{blue} 9 \cdot \color{red} 7 \end{bmatrix} = \begin{bmatrix} -27 & -6i & -21i \\ -81i & 18 & 63 \end{bmatrix}$$ ### <span style="color:blue">Exercise 11</span>: Outer product. **Inputs:** 1. An $n \times 1$ vector $V$. 2. An $m \times 1$ vector $W$. **Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$. ``` @exercise def outer_prod(v : Matrix, w : Matrix) -> Matrix: #outer product equals v times adjoint of w return matrix_mult(v, adjoint(w)) ``` *Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-11:-Outer-product.).* ## Tensor Product The **tensor product** is a different way of multiplying matrices. Rather than multiplying rows by columns, the tensor product multiplies the second matrix by every element of the first matrix. 
Given $n \times m$ matrix $A$ and $k \times l$ matrix $B$, their tensor product $A \otimes B$ is an $(n \cdot k) \times (m \cdot l)$ matrix defined as follows: $$A \otimes B = \begin{bmatrix} A_{0,0} \cdot B & A_{0,1} \cdot B & \dotsb & A_{0,m-1} \cdot B \\ A_{1,0} \cdot B & A_{1,1} \cdot B & \dotsb & A_{1,m-1} \cdot B \\ \vdots & \vdots & \ddots & \vdots \\ A_{n-1,0} \cdot B & A_{n-1,1} \cdot B & \dotsb & A_{n-1,m-1} \cdot B \end{bmatrix} = \begin{bmatrix} A_{0,0} \cdot \color{red} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & b_{k-1,l-1} \end{bmatrix}} & \dotsb & A_{0,m-1} \cdot \color{blue} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & B_{k-1,l-1} \end{bmatrix}} \\ \vdots & \ddots & \vdots \\ A_{n-1,0} \cdot \color{blue} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & B_{k-1,l-1} \end{bmatrix}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & B_{k-1,l-1} \end{bmatrix}} \end{bmatrix} = \\ = \begin{bmatrix} A_{0,0} \cdot \color{red} {B_{0,0}} & \dotsb & A_{0,0} \cdot \color{red} {B_{0,l-1}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{0,0}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{0,l-1}} \\ \vdots & \ddots & \vdots & \dotsb & \vdots & \ddots & \vdots \\ A_{0,0} \cdot \color{red} {B_{k-1,0}} & \dotsb & A_{0,0} \cdot \color{red} {B_{k-1,l-1}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{k-1,0}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{k-1,l-1}} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ A_{n-1,0} \cdot \color{blue} {B_{0,0}} & \dotsb & A_{n-1,0} \cdot \color{blue} {B_{0,l-1}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{0,0}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{0,l-1}} \\ \vdots & \ddots & \vdots & \dotsb & \vdots & \ddots & \vdots \\ A_{n-1,0} \cdot \color{blue} {B_{k-1,0}} & \dotsb & A_{n-1,0} \cdot \color{blue} {B_{k-1,l-1}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{k-1,0}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{k-1,l-1}} \end{bmatrix}$$ Here is a simple example: $$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \otimes \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 1 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} & 2 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \\ 3 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} & 4 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 1 \cdot 5 & 1 \cdot 6 & 2 \cdot 5 & 2 \cdot 6 \\ 1 \cdot 7 & 1 \cdot 8 & 2 \cdot 7 & 2 \cdot 8 \\ 3 \cdot 5 & 3 \cdot 6 & 4 \cdot 5 & 4 \cdot 6 \\ 3 \cdot 7 & 3 \cdot 8 & 4 \cdot 7 & 4 \cdot 8 \end{bmatrix} = \begin{bmatrix} 5 & 6 & 10 & 12 \\ 7 & 8 & 14 & 16 \\ 15 & 18 & 20 & 24 \\ 21 & 24 & 28 & 32 \end{bmatrix}$$ Notice that the tensor product of two vectors is another vector: if $V$ is an $n \times 1$ vector, and $W$ is an $m \times 1$ vector, $V \otimes W$ is an $(n \cdot m) \times 1$ vector. The tensor product has the following properties: * Distributivity over addition: $(A + B) \otimes C = A \otimes C + B \otimes C$, $A \otimes (B + C) = A \otimes B + A \otimes C$ * Associativity with scalar multiplication: $x(A \otimes B) = (xA) \otimes B = A \otimes (xB)$ * Mixed-product property (relation with matrix multiplication): $(A \otimes B) (C \otimes D) = (AC) \otimes (BD)$ ### <span style="color:blue">Exercise 12</span>*: Tensor Product. **Inputs:** 1. An $n \times m$ matrix $A$. 
2. A $k \times l$ matrix $B$. **Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$. ``` @exercise def tensor_product(a : Matrix, b : Matrix) -> Matrix: n = len(a) m = len(a[0]) k = len(b) l = len(b[0]) result = create_empty_matrix(n*k, m*l) #for each element in a, which is n x m for arow in range(n): for acol in range(m): acurrent = a[arow][acol] #copy B elements into result, multiplying by acurrent as we go for brow in range(k): for bcol in range(l): bcurrent = b[brow][bcol] #trick is indices in result result[arow*k + brow][acol*l + bcol] = acurrent * bcurrent return result ``` *Can't come up with a solution? See the explained solution in the* <i><a href="./Workbook_LinearAlgebra.ipynb#Exercise-12*:-Tensor-Product.">Linear Algebra Workbook</a></i>. ## Next Steps At this point, you know enough to complete the tutorials on [the concept of qubit](../Qubit/Qubit.ipynb), [single-qubit gates](../SingleQubitGates/SingleQubitGates.ipynb), [multi-qubit systems](../MultiQubitSystems/MultiQubitSystems.ipynb), and [multi-qubit gates](../MultiQubitGates/MultiQubitGates.ipynb). The last part of this tutorial is a brief introduction to eigenvalues and eigenvectors, which are used for more advanced topics in quantum computing. Feel free to move on to the next tutorials, and come back here once you encounter eigenvalues and eigenvectors elsewhere. # Part III: Eigenvalues and Eigenvectors Consider the following example of multiplying a matrix by a vector: $$\begin{bmatrix} 1 & -3 & 3 \\ 3 & -5 & 3 \\ 6 & -6 & 4 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 4 \\ 4 \\ 8 \end{bmatrix}$$ Notice that the resulting vector is just the initial vector multiplied by a scalar (in this case 4). This behavior is so noteworthy that it is described using a special set of terms. Given a nonzero $n \times n$ matrix $A$, a nonzero vector $V$, and a scalar $x$, if $AV = xV$, then $x$ is an **eigenvalue** of $A$, and $V$ is an **eigenvector** of $A$ corresponding to that eigenvalue. The properties of eigenvalues and eigenvectors are used extensively in quantum computing. You can learn more about eigenvalues, eigenvectors, and their properties at [Wolfram MathWorld](http://mathworld.wolfram.com/Eigenvector.html) or on [Wikipedia](https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors). ### <span style="color:blue">Exercise 13</span>: Finding an eigenvalue. **Inputs:** 1. An $n \times n$ matrix $A$. 2. An eigenvector $V$ of matrix $A$. **Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector. <br/> <details> <summary><strong>Need a hint? Click here</strong></summary> Multiply the matrix by the vector, then divide the elements of the result by the elements of the original vector. Don't forget though, some elements of the vector may be $0$. </details> ``` @exercise def find_eigenvalue(a : Matrix, v : Matrix) -> float: #eigenvalue = AV / V #AV will be (nxn) * (n * 1) = n * 1, so can divide each element n = len(a) prod_av = matrix_mult(a, v) result = create_empty_matrix(n, 1) eigenvalue = 0 for i in range(n): if (v[i][0] != 0): result[i][0] = prod_av[i][0] / v[i][0] #find first non-zero result for eigenvalue if result[i][0] != 0: eigenvalue = result[i][0] break; return eigenvalue ``` *Can't come up with a solution? 
See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-13:-Finding-an-eigenvalue.).* ### <span style="color:blue">Exercise 14</span>**: Finding an eigenvector. **Inputs:** 1. A $2 \times 2$ matrix $A$. 2. An eigenvalue $x$ of matrix $A$. **Output:** Return any non-zero eigenvector of $A$ that is associated with $x$. <br/> <details> <summary><strong>Need a hint? Click here</strong></summary> A matrix and an eigenvalue will have multiple eigenvectors (infinitely many, in fact), but you only need to find one.<br/> Try treating the elements of the vector as variables in a system of two equations. Watch out for division by $0$! </details> ``` @exercise def find_eigenvector(a : Matrix, x : float) -> Matrix: result = create_empty_matrix(len(a), 1) return result ``` *Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-14**:-Finding-an-eigenvector.).*
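Exercise 14 above is left with only its starter stub. As a rough sketch (not the workbook's reference solution), one way to follow the hint is to solve $(A - xI)V = 0$ directly for the $2 \times 2$ case, reading a non-zero solution off a non-trivial row. The helper name `find_eigenvector_2x2` below is made up for illustration, and `numpy` is used only to cross-check the eigenvalue example from the text.

```
import numpy as np

def find_eigenvector_2x2(a, x):
    # Solve (A - xI)v = 0. For a valid eigenvalue x of a 2x2 matrix, satisfying
    # one non-trivial row automatically satisfies the other (characteristic equation).
    if a[0][1] != 0:
        return [[a[0][1]], [x - a[0][0]]]      # makes (a00 - x)*v0 + a01*v1 = 0
    if a[1][0] != 0:
        return [[x - a[1][1]], [a[1][0]]]      # makes a10*v0 + (a11 - x)*v1 = 0
    # Otherwise A is diagonal, so x equals a00 or a11 and a basis vector works
    return [[1], [0]] if a[0][0] == x else [[0], [1]]

print(find_eigenvector_2x2([[1, 0], [0, 2]], 2))   # e.g. [[0], [1]]

# Cross-check the worked example from the text: A [1 1 2]^T = 4 * [1 1 2]^T
a = np.array([[1, -3, 3], [3, -5, 3], [6, -6, 4]], dtype=float)
print(a @ np.array([1, 1, 2]))      # [4. 4. 8.]
print(np.linalg.eig(a)[0])          # the eigenvalues include 4
```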
github_jupyter
Author: Vo, Huynh Quang Nguyen

# Acknowledgments

The contents of this note are based on the lecture notes and materials from the sources below. All rights remain with their respective owners.

1. **Deep Learning** textbook by Dr Ian Goodfellow, Prof. Yoshua Bengio, and Prof. Aaron Courville. Available at: [Deep Learning textbook](https://www.deeplearningbook.org/)
2. **Machine Learning with Python** course given by Prof. Alexander Jung from Aalto University, Finland.
3. **Machine Learning** course by Prof. Andrew Ng. Available on Coursera: [Machine Learning](https://www.coursera.org/learn/machine-learning)
4. **Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow** by Aurélien Géron.

## Disclaimer

1. This lecture note serves as a summary of fundamental concepts that are commonly used in machine learning. Thus, we strongly recommend that it be used strictly as a reference:
    * For teachers, to decide which topics to include when organizing their own machine learning classes, and
    * For learners, to get an overview of machine learning.
2. This lecture note is the second of a two-part series about the fundamentals of data science and machine learning. Thus, we strongly recommend reading it after having finished the previous one.

# Overview of Machine Learning

## Components of Machine Learning

1. As mentioned in the previous note, machine learning (ML) programs are algorithms that are capable of learning from data. According to the definition from Tom Mitchell, which was also introduced in the previous note, an ML program is a program that "learns from **experience $\mathcal{E}$** with respect to some **task $\mathcal{T}$** and some **performance measure $\mathcal{P}$**, if its performance on $\mathcal{T}$, as measured by $\mathcal{P}$, improves with experience $\mathcal{E}$".
2. Let's dive into the details of each component mentioned in Mitchell's definition.

### Task

1. ML tasks are usually described in terms of how the machine learning system should process an **example**, which is a collection of features that have been quantitatively measured from some object or event that we want the machine learning system to process. An example is typically represented as a vector $\mathbf{x} \in \mathbb{R}^n$, where each entry $x_i$ of the vector is a feature (also known as a variable).
2. Here is a list of common tasks in ML. Note that we have already encountered most of them in data science.
    * **Classification**: In this type of task, the computer program is asked to specify which of $k$ categories some input belongs to. To solve this task, the learning algorithm is usually asked to produce a function $f : \mathbb{R}^n \rightarrow \{1, \dots, k\}$. When $y = f(\mathbf{x})$, the model assigns an input $\mathbf{x}$ to a category identified by numeric code $y$. A harder version of this task is **classification with missing inputs**, where not every measurement in the input is guaranteed to be provided.
    * **Regression**: In this type of task, the computer program is asked to predict a numerical value given some input. To solve this task, the learning algorithm is asked to output a function $f : \mathbb{R}^n \rightarrow \mathbb{R}$. This type of task is similar to classification, except that the format of the output is different.
    * **Machine translation**: In a machine translation task, the input already consists of a sequence of symbols in some language, and the computer program must convert this into a sequence of symbols in another language.
    * **Anomaly detection**: In this type of task, the computer program sifts through a set of events or objects, and flags some of them as being unusual or atypical.
    * **Synthesis and sampling**: In this type of task, the machine learning algorithm is asked to generate new examples that are similar to those in the training data.
    * **Imputation of missing values**: In this type of task, the machine learning algorithm is given a new example $\mathbf{x} \in \mathbb{R}^n$, but with some entries $x_i$ of $\mathbf{x}$ missing. The algorithm must provide a prediction of the values of the missing entries.
    * **Denoising**: In this type of task, the machine learning algorithm is given as input a corrupted example $\tilde{\mathbf{x}} \in \mathbb{R}^n$ obtained by an unknown corruption process from a clean example $\mathbf{x} \in \mathbb{R}^n$. The learner must predict the clean example $\mathbf{x}$ from its corrupted version $\tilde{\mathbf{x}}$, or more generally predict the conditional probability distribution $p(\mathbf{x} \mid \tilde{\mathbf{x}})$.
    * **Density estimation (or probability mass function estimation)**: In the density estimation problem, the machine learning algorithm is asked to learn a function $p_{\text{model}} : \mathbb{R}^n \rightarrow \mathbb{R}$, where $p_{\text{model}}(\mathbf{x})$ can be interpreted as a probability density function (if $\mathbf{x}$ is continuous) or a probability mass function (if $\mathbf{x}$ is discrete) on the space that the examples were drawn from. To do such a task well (we will specify exactly what that means when we discuss the performance measure $\mathcal{P}$), the algorithm needs to learn the structure of the data it has seen.
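To make the difference between the first two tasks concrete, here is a small illustrative sketch (not part of the original lecture note) that fits a classifier $f : \mathbb{R}^3 \rightarrow \{0, 1\}$ and a regressor $f : \mathbb{R}^3 \rightarrow \mathbb{R}$ on synthetic data; the choice of scikit-learn estimators is an assumption made purely for illustration.

```
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 examples, each a vector x in R^3

# Classification: learn a map from R^3 to the category codes {0, 1}
y_class = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y_class)
print(clf.predict(X[:5]))                     # predicted category codes

# Regression: learn a map from R^3 to a real number
y_reg = 2.0 * X[:, 0] - X[:, 2] + rng.normal(scale=0.1, size=100)
reg = LinearRegression().fit(X, y_reg)
print(reg.predict(X[:5]))                     # predicted real values
```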
github_jupyter
```
import pandas as pd
import numpy as np
from analysis_utils import *

PAREDAO = "paredao13"
CAND1_PATH = "data/paredao13/flay.csv"
CAND2_PATH = "data/paredao13/thelma.csv"
CAND3_PATH = "data/paredao13/babu.csv"
DATE = 3
IGNORE_HASHTAGS = ["#bbb20", "#redebbb", "#bbb2020"]

candidate1_df = pd.read_csv(CAND1_PATH)
candidate2_df = pd.read_csv(CAND2_PATH)
candidate3_df = pd.read_csv(CAND3_PATH)

cand1 = candidate1_df[["tweet", "sentiment", "date", "likes_count", "retweets_count", "hashtags"]]
cand2 = candidate2_df[["tweet", "sentiment", "date", "likes_count", "retweets_count", "hashtags"]]
cand3 = candidate3_df[["tweet", "sentiment", "date", "likes_count", "retweets_count", "hashtags"]]
```

# Flayslene (eliminated)

```
cand1["sentiment"].hist()
```

# Thelma

```
cand2["sentiment"].hist()
```

# Babu

```
cand3["sentiment"].hist()
```

# Absolute counts

```
candidates = {"flayslene": cand1, "thelma": cand2, "babu": cand3}

qtds_df = get_raw_quantities(candidates)
qtds_df

qtds_df.plot.bar(rot=45, color=['green', 'gray', 'red'])
```

# Percentages relative to each candidate's total tweets

```
pcts_df = get_pct_by_candidate(candidates)
pcts_df

pcts_df.plot.bar(rot=45, color=['green', 'gray', 'red'])
```

# Percentages relative to the total tweets per category

```
qtds_df_copy = qtds_df.copy()

qtds_df["positivos"] /= qtds_df["positivos"].sum()
qtds_df["neutros"] /= qtds_df["neutros"].sum()
qtds_df["negativos"] /= qtds_df["negativos"].sum()
qtds_df

qtds_df.plot.bar(rot=45, color=['green', 'gray', 'red'])
```

# Tweets per day

```
names = list(candidates.keys())
tweets_by_day_df = get_tweets_by_day(candidates[names[0]], names[0])
for name in names[1:]:
    current = get_tweets_by_day(candidates[name], name)
    tweets_by_day_df = tweets_by_day_df.append(current)

tweets_by_day_df.transpose().plot()
```

# Hashtag analysis

```
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (20,10)

unique_df = get_unique_hashtags(list(candidates.values()))
unique_df.drop(index=IGNORE_HASHTAGS, inplace=True)
unique_df.sort_values(by="quantidade", ascending=False).head(30).plot.bar(rot=45)

alias = {"flayslene": "flay", "thelma": "thelma", "babu": "babu"}
fica_fora_df = get_fica_fora_quantities(unique_df, alias)
fica_fora_df
```

# Feature selection

```
atributes_df = qtds_df_copy.join(pcts_df, rsuffix="_individual_pct")
atributes_df = atributes_df.join(qtds_df, rsuffix="_global_pct")
atributes_df = atributes_df.join(tweets_by_day_df)
atributes_df = atributes_df.join(fica_fora_df)

raw_participantes_info = get_participantes_info()[DATE]
print("Seguidores atualizados em:", raw_participantes_info["date"])
participantes_info = raw_participantes_info["infos"]
paredoes_info = get_paredoes_info()

followers = [participantes_info[participante]["seguidores"] for participante in atributes_df.index]
likes = [get_likes_count(candidates[participante]) for participante in atributes_df.index]
retweets = [get_retweets_count(candidates[participante]) for participante in atributes_df.index]

paredao_info = paredoes_info[PAREDAO]["candidatos"]
results_info = {candidate["nome"]: candidate["porcentagem"]/100 for candidate in paredao_info}
rejection = [results_info[participante] for participante in atributes_df.index]

atributes_df["likes"] = likes
atributes_df["retweets"] = retweets
atributes_df["seguidores"] = followers
atributes_df["rejeicao"] = rejection

atributes_df

atributes_df.to_csv("data/{}/paredao_atributes.csv".format(PAREDAO))
```
github_jupyter
## Installation ``` !pip install -q --upgrade transformers datasets tokenizers !pip install -q emoji pythainlp sklearn-pycrfsuite seqeval !rm -r thai2transformers thai2transformers_parent !git clone -b dev https://github.com/vistec-AI/thai2transformers/ !mv thai2transformers thai2transformers_parent !mv thai2transformers_parent/thai2transformers . !pip install accelerate==0.5.1 !apt install git-lfs !pip install sentencepiece ! git clone https://github.com/Bjarten/early-stopping-pytorch.git import sys sys.path.insert(0, '/content/early-stopping-pytorch') import os os.environ['CUDA_LAUNCH_BLOCKING'] = "1" ``` ## Importing the libraries ``` from datasets import load_dataset,Dataset,DatasetDict,load_from_disk from transformers import DataCollatorWithPadding,AutoModelForSequenceClassification, Trainer, TrainingArguments,AutoTokenizer,AutoModel,AutoConfig from transformers.modeling_outputs import SequenceClassifierOutput from thai2transformers.preprocess import process_transformers import torch import torch.nn as nn import pandas as pd import numpy as np from sklearn.metrics import classification_report from pytorchtools import EarlyStopping from google.colab import drive drive.mount('/content/drive') ``` ## Loading the dataset ``` data = load_from_disk('/content/drive/MyDrive/Fake news/News-Dataset/dataset') def clean_function(examples): examples['text'] = process_transformers(examples['text']) return examples data = data.map(clean_function) ``` ## Fine-tuning ``` checkpoint = "airesearch/wangchanberta-base-att-spm-uncased" tokenizer = AutoTokenizer.from_pretrained(checkpoint) tokenizer.model_max_len=416 def tokenize(batch): return tokenizer(batch["text"], truncation=True,max_length=416) tokenized_dataset = data.map(tokenize, batched=True) tokenized_dataset tokenized_dataset.set_format("torch",columns=["input_ids", "attention_mask", "labels"]) data_collator = DataCollatorWithPadding(tokenizer=tokenizer) class extract_tensor(nn.Module): def forward(self,x): # Output shape (batch, features, hidden) tensor, _ = x # Reshape shape (batch, hidden) return tensor[:, :] class CustomModel(nn.Module): def __init__(self,checkpoint,num_labels): super(CustomModel,self).__init__() self.num_labels = num_labels #Load Model with given checkpoint and extract its body self.model = model = AutoModel.from_pretrained(checkpoint,config=AutoConfig.from_pretrained(checkpoint, output_attentions=True,output_hidden_states=True)) self.dropout = nn.Dropout(0.1) self.classifier = nn.Sequential( nn.LSTM(768, 256, 1, batch_first=True), extract_tensor(), nn.Linear(256, 2) ) def forward(self, input_ids=None, attention_mask=None,labels=None): #Extract outputs from the body outputs = self.model(input_ids=input_ids, attention_mask=attention_mask) #Add custom layers sequence_output = self.dropout(outputs[0]) #outputs[0]=last hidden state logits = self.classifier(sequence_output[:,0,:].view(-1,768)) # calculate losses loss = None if labels is not None: loss_fct = nn.CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) return SequenceClassifierOutput(loss=loss, logits=logits, hidden_states=outputs.hidden_states,attentions=outputs.attentions) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model=CustomModel(checkpoint=checkpoint,num_labels=2).to(device) from torch.utils.data import DataLoader train_dataloader = DataLoader( tokenized_dataset["train"], shuffle=True, batch_size=8, collate_fn=data_collator ) eval_dataloader = DataLoader( tokenized_dataset["valid"], batch_size=8, 
collate_fn=data_collator ) from transformers import AdamW,get_scheduler optimizer = AdamW(model.parameters(), lr=5e-5) num_epochs = 50 num_training_steps = num_epochs * len(train_dataloader) lr_scheduler = get_scheduler( "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps, ) print(num_training_steps) from datasets import load_metric metric = load_metric("f1") from tqdm.auto import tqdm progress_bar_train = tqdm(range(num_training_steps)) progress_bar_eval = tqdm(range(num_epochs * len(eval_dataloader))) # to track the training loss as the model trains train_losses = [] # to track the validation loss as the model trains valid_losses = [] # to track the average training loss per epoch as the model trains avg_train_losses = [] # to track the average validation loss per epoch as the model trains avg_valid_losses = [] early_stopping = EarlyStopping(patience=7, verbose=True) for epoch in range(num_epochs): model.train() size = len(train_dataloader.dataset) for batch, X in enumerate(train_dataloader): X = {k: v.to(device) for k, v in X.items()} outputs = model(**X) loss = outputs.loss loss.backward() optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar_train.update(1) train_losses.append(loss.item()) model.eval() for batch, X in enumerate(eval_dataloader): X = {k: v.to(device) for k, v in X.items()} with torch.no_grad(): outputs = model(**X) loss = outputs.loss valid_losses.append(loss.item()) logits = outputs.logits predictions = torch.argmax(logits, dim=-1) metric.add_batch(predictions=predictions, references=X["labels"]) progress_bar_eval.update(1) # print training/validation statistics # calculate average loss over an epoch train_loss = np.average(train_losses) valid_loss = np.average(valid_losses) avg_train_losses.append(train_loss) avg_valid_losses.append(valid_loss) epoch_len = len(str(num_epochs)) loss_msg = (f'[{epoch+1:>{epoch_len}}/{num_epochs:>{epoch_len}}] ' + f'train_loss: {train_loss:.5f} ' + f'valid_loss: {valid_loss:.5f}') print(loss_msg) # clear lists to track next epoch train_losses = [] valid_losses = [] # early_stopping needs the validation loss to check if it has decresed, # and if it has, it will make a checkpoint of the current model early_stopping(valid_loss, model) if early_stopping.early_stop: print("Early stopping") break print(metric.compute()) print('\n') model.load_state_dict(torch.load('checkpoint.pt')) # visualize the loss as the network trained import matplotlib.pyplot as plt fig = plt.figure(figsize=(10,8)) plt.plot(range(1,len(avg_train_losses)+1),avg_train_losses, label='Training Loss') plt.plot(range(1,len(avg_valid_losses)+1),avg_valid_losses,label='Validation Loss') # find position of lowest validation loss minposs = avg_valid_losses.index(min(avg_valid_losses))+1 plt.axvline(minposs, linestyle='--', color='r',label='Early Stopping Checkpoint') plt.xlabel('epochs') plt.ylabel('loss') plt.ylim(0, 0.5) # consistent scale plt.xlim(0, len(avg_train_losses)+1) # consistent scale plt.grid(True) plt.legend() plt.tight_layout() plt.show() fig.savefig('loss_plot.png', bbox_inches='tight') ``` ## Test Result ``` preds = torch.empty(0).cuda() model.eval() test_dataloader = DataLoader( tokenized_dataset["test"], batch_size=8, collate_fn=data_collator ) for batch in test_dataloader: batch = {k: v.to(device) for k, v in batch.items()} with torch.no_grad(): outputs = model(**batch) logits = outputs.logits predictions = torch.argmax(logits, dim=-1) metric.add_batch(predictions=predictions, references=batch["labels"]) 
preds = torch.cat((preds, predictions), 0) metric.compute() text = tokenized_dataset["test"]["text"] y_true = tokenized_dataset["test"]["labels"] y_pred = preds.cpu() print(classification_report(y_true, y_pred, target_names=['true','fake'])) ``` ## Wrong Prediction ``` test_result = pd.DataFrame(zip(text, [int(x) for x in y_pred.tolist()], y_true.tolist()), columns=['text','pred','true']) wrong_prediction = test_result[test_result['pred'] != test_result['true']] wrong_prediction.head() ``` ## Confusion Matrix ``` import matplotlib.pyplot as plt from sklearn.metrics import confusion_matrix import seaborn as sn array = confusion_matrix(y_true, y_pred) df_cm = pd.DataFrame(array, range(2), range(2)) sn.heatmap(df_cm, annot=True, annot_kws={"size": 16}, fmt='g', cmap="flare") plt.show() torch.save(model, '/content/drive/MyDrive/Fake news/Model/sodabert-lstm') ```
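As a usage sketch, the model saved above can be reloaded for single-text inference. This assumes the notebook session still has `CustomModel` and `extract_tensor` defined (saving the full model object pickles references to those classes), and it reuses the checkpoint path, tokenizer and preprocessing from the cells above; the label mapping follows the `target_names=['true', 'fake']` convention used in the classification report.

```
import torch
from transformers import AutoTokenizer
from thai2transformers.preprocess import process_transformers

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("airesearch/wangchanberta-base-att-spm-uncased")

# Reload the full model object saved in the previous cell
loaded_model = torch.load('/content/drive/MyDrive/Fake news/Model/sodabert-lstm', map_location=device)
loaded_model.eval()

def predict(text):
    # Apply the same text cleaning used for the training data
    cleaned = process_transformers(text)
    encoded = tokenizer(cleaned, truncation=True, max_length=416, return_tensors="pt").to(device)
    with torch.no_grad():
        logits = loaded_model(input_ids=encoded["input_ids"],
                              attention_mask=encoded["attention_mask"]).logits
    # Index 0 = 'true', index 1 = 'fake', matching the classification report above
    return "fake" if logits.argmax(dim=-1).item() == 1 else "true"

print(predict("ตัวอย่างข้อความข่าว"))  # placeholder Thai text; replace with a real headline
```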
github_jupyter
# Pre-Processing Methods ``` %%capture !pip3 install sparqlwrapper # Common methods to retrieve data from Wikidata import time from SPARQLWrapper import SPARQLWrapper, JSON import pandas as pd import urllib.request as url import json from SPARQLWrapper import SPARQLWrapper wiki_sparql = SPARQLWrapper("https://query.wikidata.org/sparql") wiki_sparql.setReturnFormat(JSON) wiki_sparql.setTimeout(timeout=25) wiki_cache = {} def get_wikidata_label(entity): if (entity in cache): #print("use of cache!") return cache[entity] query = """ PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> PREFIX wd: <http://www.wikidata.org/entity/> SELECT * WHERE { wd:ENTITY rdfs:label ?label . FILTER (langMatches( lang(?label), "EN" ) ) } LIMIT 1 """ query_text = query.replace('ENTITY',entity) wiki_sparql.setQuery(query_text) result = "" while (result == ""): try: ret = wiki_sparql.queryAndConvert() if (len(ret["results"]["bindings"]) == 0): result = "-" for r in ret["results"]["bindings"]: result = r['label']['value'] except Exception as e: print("Error on wikidata query:",e) if "timed out" in str(e): result = "-" break cache[entity] = result return result def get_wikidata(query): if ("ASK" not in query) and ("LIMIT" not in query): query += " LIMIT 10" #print(query) key = query.replace(" ","_") if (key in cache): #print("use of cache!") return cache[key] wiki_sparql.setQuery(query) result = [] retries = 0 while (len(result) == 0) and (retries < 5): try: ret = wiki_sparql.queryAndConvert() #print(ret) if ("ASK" in query): result.append(str(ret['boolean'])) elif (len(ret["results"]["bindings"]) == 0): result.append("-") else: for r in ret["results"]["bindings"]: for k in r.keys(): tokens = r[k]['value'].split("/") result.append(tokens[len(tokens)-1]) except Exception as e: retries += 1 print("Error on wikidata query:",e) if "timed out" in str(e): result.append("-") break cache[key] = result return result def preprocess_questions(questions): rows = [] counter = 0 for question in data['questions']: if (counter % 1000 == 0): print("Queries processed:",counter, "Cache Size:",len(cache)) #print("#",question['question_id']) answer = question['query_answer'][0] subject_labels = [] subjects = [] predicates = [e.split(":")[1] for e in answer['sparql_template'].split(" ") if ":" in e] predicate_labels = [] for p in predicates: predicate_labels.append(get_wikidata_label(p.replace("*","").split("/")[0])) objects = get_wikidata(answer['sparql_query']) object_labels = [] for o in objects: if (len(o)>0) and (o[0]=="Q"): object_labels.append(get_wikidata_label(o)) else: object_labels.append(o) for entity in answer['entities']: subject_labels.append(entity['label']) subjects.append(entity['entity'].split(":")[1]) row = { 'subjects':subjects, 'predicates' : predicates, 'objects': objects, 'question': question['natural_language_question'], 'subject_labels':subject_labels, 'predicate_labels':predicate_labels, 'object_labels':object_labels } #print(row) rows.append(row) counter += 1 df = pd.DataFrame(rows) return df # Common methods to retrieve data from Wikidata import time from SPARQLWrapper import SPARQLWrapper, JSON import pandas as pd import urllib.request as url import json from SPARQLWrapper import SPARQLWrapper dbpedia_sparql = SPARQLWrapper("https://dbpedia.org/sparql/") dbpedia_sparql.setReturnFormat(JSON) dbpedia_sparql.setTimeout(timeout=60) dbpedia_cache = {} import hashlib def hash_text(text): hash_object = hashlib.md5(text.encode()) md5_hash = hash_object.hexdigest() return str(md5_hash) def 
get_dbpedia_label(entity,use_cache=True,verbose=False): key = entity+"_label" if (use_cache) and (key in dbpedia_cache): #print("use of cache!") return dbpedia_cache[key].copy() query = """ PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> PREFIX dbr: <http://dbpedia.org/resource/> select distinct ?label { <ENTITY> rdfs:label ?label . filter langMatches(lang(?label), 'en') } LIMIT 250 """ query_text = query.replace('ENTITY',entity) dbpedia_sparql.setQuery(query_text) result = [] while (len(result) == 0): try: if (verbose): print("SPARQL Query:",query_text) ret = dbpedia_sparql.queryAndConvert() if (verbose): print("SPARQL Response:",ret) for r in ret["results"]["bindings"]: id = entity value = id if ('label' in r) and ('value' in r['label']): value = r['label']['value'] if (' id ' not in value.lower()) and (' link ' not in value.lower()) and ('has abstract' not in value.lower()) and ('wiki' not in value.lower()) and ('instance of' not in value.lower()): result.append({'id':id, 'value':value}) except Exception as e: print("Error on SPARQL query:",e) break dbpedia_cache[key] = result #print(len(result),"properties found") return result def get_dbpedia_property_value(filter,use_cache=True,verbose=False): key = hash_text(filter) if (use_cache) and (key in dbpedia_cache): #print("use of cache!") return dbpedia_cache[key].copy() query = """ PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> PREFIX dbr: <http://dbpedia.org/resource/> select distinct ?object ?label { { FILTER } optional { ?object rdfs:label ?label . filter langMatches(lang(?label), 'en') } } LIMIT 250 """ query_text = query.replace('FILTER',filter) dbpedia_sparql.setQuery(query_text) result = [] while (len(result) == 0): try: if (verbose): print("SPARQL Query:",query_text) ret = dbpedia_sparql.queryAndConvert() if (verbose): print("SPARQL Response:",ret) for r in ret["results"]["bindings"]: id = r['object']['value'] value = id if ('label' in r) and ('value' in r['label']): value = r['label']['value'] if (' id ' not in value.lower()) and (' link ' not in value.lower()) and ('has abstract' not in value.lower()) and ('wiki' not in value.lower()) and ('instance of' not in value.lower()): result.append({'id':id, 'value':value}) except Exception as e: print("Error on SPARQL query:",e) break dbpedia_cache[key] = result #print(len(result),"properties found") return result def get_forward_dbpedia_property_value(entity,property,use_cache=True,verbose=False): query_filter ="<ENTITY> <PROPERTY> ?object" return get_dbpedia_property_value(query_filter.replace("ENTITY",entity).replace("PROPERTY",property),use_cache,verbose) def get_backward_dbpedia_property_value(entity,property,use_cache=True,verbose=False): query_filter ="?object <PROPERTY> <ENTITY>" return get_dbpedia_property_value(query_filter.replace("ENTITY",entity).replace("PROPERTY",property),use_cache,verbose) ``` # Datasets ## SimpleQuestions Dataset ### Wikidata SimpleQuestions ``` import pandas as pd df = pd.read_csv('https://raw.githubusercontent.com/askplatypus/wikidata-simplequestions/master/annotated_wd_data_test_answerable.txt', sep="\t", index_col=False, header=None, names=['subject','predicate','object','question']) df.head() ``` Retrieve labels from wikidata for subject, predicate and object: ``` object_labels = [] subject_labels = [] predicate_labels = [] for index, row in df.iterrows(): print(index,":",row) subject_labels.append(get_wikidata_label(row['subject'])) predicate_labels.append(get_wikidata_label(row['predicate'].replace("R","P"))) 
object_labels.append(get_wikidata_label(row['object'])) if (index % 100 == 0 ): print("Labels Identified:",index,"Cache Size:",len(cache)) index += 1 print(len(object_labels),"labels retrieved!") df['subject_label']=subject_labels df['predicate_label']=predicate_labels df['object_label']=object_labels df.to_csv('wsq-labels.csv') df.head() ``` ### SimpleDBpediaQuestions ``` # read dbpedia compatible SimpleQuestions import urllib.request as url import json import unidecode import pandas as pd def normalize(label): return unidecode.unidecode(label.strip()).lower() stream = url.urlopen("https://raw.githubusercontent.com/castorini/SimpleDBpediaQA/master/V1/test.json") content = stream.read() data = json.loads(content) ref_questions = [e.lower().strip() for e in pd.read_csv('data/wsq-labels.csv', index_col=0)['question'].tolist()] counter = 0 total = 0 rows = [] dbpedia_questions = [] for question in data['Questions']: total += 1 if (total % 100 == 0): print(total) question_query = question['Query'] if (question_query.lower().strip() in ref_questions): counter += 1 subject_val = question['Subject'] subject_label = '' ss = get_dbpedia_label(subject_val) if (len(ss) > 0): subject_label = ss[0]['value'] predicate = question['PredicateList'][0] property_val = predicate['Predicate'] property_label = '' pp = get_dbpedia_label(property_val) if (len(pp) > 0): property_label = pp[0]['value'] if (predicate['Direction'] == 'forward'): object_val = get_forward_dbpedia_property_value(subject_val,property_val) else: object_val = get_backward_dbpedia_property_value(subject_val,property_val) object_id = '' object_label = '' if len(object_val) > 0: object_id = object_val[0]['id'] object_label = object_val[0]['value'] row = {'subject':subject_val, 'predicate':property_val, 'object': object_id, 'question':question_query, 'subject_label':subject_label, 'property_label':property_label, 'object_label': object_label} rows.append(row) print("Total:",len(rows)) df = pd.DataFrame(rows) df.to_csv('dsq-labels.csv') df.head(10) ``` ## Wikidata QA Dataset From paper: https://arxiv.org/pdf/2107.02865v1.pdf ``` import urllib.request as url import json stream = url.urlopen("https://raw.githubusercontent.com/thesemanticwebhero/ElNeuKGQA/main/data/dataset_wikisparql.json") content = stream.read() data = json.loads(content) df = preprocess_questions(data) df.to_csv('wqa-labels.csv') df.head() df.describe(include='all') ``` ## LC-QuAD 2.0 Dataset From paper: https://arxiv.org/pdf/2107.02865v1.pdf ``` import urllib.request as url import json stream = url.urlopen("https://raw.githubusercontent.com/thesemanticwebhero/ElNeuKGQA/main/data/dataset_lcquad2.json") content = stream.read() data = json.loads(content) df = preprocess_questions(data) df.to_csv('lcquad2-labels.csv') df.head() ``` ## COVID-QA Dataset From paper: https://aclanthology.org/2020.nlpcovid19-acl.18.pdf ``` import urllib.request as url import json import pandas as pd stream = url.urlopen("https://raw.githubusercontent.com/sharonlevy/Open_Domain_COVIDQA/main/data/qa_test.json") content = stream.read() data = json.loads(content) rows = [] counter = 0 for item in data['data']: row = { 'article': item['title'], 'text' : item['context'], 'question': item['question'], 'answer': item['answers'][0]['text'] } rows.append(row) counter += 1 if (counter % 100 == 0 ): print("Questions processed:",counter) df = pd.DataFrame(rows) df.to_csv('covidqa-labels.csv') df.head() ```
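Each cell above exports one labelled CSV. As a small follow-up sketch (the file names are taken from the `to_csv` calls above, everything else is illustrative), the exported files can be reloaded and compared at a glance:

```
import pandas as pd

# File names come from the to_csv calls in the cells above
exported_files = {
    "SimpleQuestions (Wikidata)": "wsq-labels.csv",
    "SimpleDBpediaQA": "dsq-labels.csv",
    "Wikidata QA": "wqa-labels.csv",
    "LC-QuAD 2.0": "lcquad2-labels.csv",
    "COVID-QA": "covidqa-labels.csv",
}

for name, path in exported_files.items():
    df = pd.read_csv(path, index_col=0)
    print("{}: {} rows, columns = {}".format(name, len(df), list(df.columns)))
```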
github_jupyter
``` import matplotlib.pyplot as plt %matplotlib inline import numpy as np import numexpr as ne from scipy.ndimage import correlate1d from dphutils import scale import scipy.signal from timeit import Timer import pyfftw # test monkey patching (it doesn't work for rfftn) a = pyfftw.empty_aligned((512, 512), dtype='complex128') b = pyfftw.empty_aligned((512, 512), dtype='complex128') a[:] = np.random.randn(512, 512) + 1j*np.random.randn(512, 512) b[:] = np.random.randn(512, 512) + 1j*np.random.randn(512, 512) t = Timer(lambda: scipy.signal.fftconvolve(a, b, 'same')) print('Time with scipy.fftpack: %1.3f seconds' % t.timeit(number=10)) # Monkey patch in fftn and ifftn from pyfftw.interfaces.scipy_fftpack scipy.signal.signaltools.fftn = pyfftw.interfaces.scipy_fftpack.fftn scipy.signal.signaltools.ifftn = pyfftw.interfaces.scipy_fftpack.ifftn scipy.signal.signaltools.fftpack = pyfftw.interfaces.scipy_fftpack # can't monkey patch the rfft because it's used through np in the package. scipy.signal.fftconvolve(a, b, 'same') # We cheat a bit by doing the planning first # Turn on the cache for optimum performance pyfftw.interfaces.cache.enable() print('Time with monkey patched scipy_fftpack: %1.3f seconds' % t.timeit(number=10)) # Testing the best method to enforce positivity constraint. a = np.random.randn(1e3,1e3) print(a.max(), a.min()) %timeit a[a<0] = 0 print(a.max(), a.min()) a = np.random.randn(1e3,1e3) b=np.zeros_like(a) print(a.max(), a.min()) %timeit c = np.minimum(a,b) print(a.max(), a.min()) # testing speedups for numexpr a = np.random.randn(2**9,2**9) b = np.random.randn(2**9,2**9) %timeit a-b %timeit ne.evaluate("a-b") %timeit a/b %timeit ne.evaluate("a/b") # Standard Richardson-Lucy form skimage from skimage import color, data, restoration camera = color.rgb2gray(data.camera()) from scipy.signal import convolve2d psf = np.ones((5, 5)) / 25 camera = convolve2d(camera, psf, 'same') camera += 0.1 * camera.std() * np.random.poisson(size=camera.shape) deconvolved = restoration.richardson_lucy(camera, psf, 30, False) plt.matshow(camera, cmap='Greys_r') plt.matshow(deconvolved, cmap='Greys_r', vmin=camera.min(), vmax=camera.max()) # test monkey patching properly. from pyfftw.interfaces.numpy_fft import (ifftshift, fftshift, fftn, ifftn, rfftn, irfftn) from scipy.signal.signaltools import _rfft_lock, _rfft_mt_safe, _next_regular,_check_valid_mode_shapes,_centered def fftconvolve2(in1, in2, mode="full"): if in1.ndim == in2.ndim == 0: # scalar inputs return in1 * in2 elif not in1.ndim == in2.ndim: raise ValueError("in1 and in2 should have the same dimensionality") elif in1.size == 0 or in2.size == 0: # empty arrays return array([]) s1 = np.array(in1.shape) s2 = np.array(in2.shape) complex_result = (np.issubdtype(in1.dtype, complex) or np.issubdtype(in2.dtype, complex)) shape = s1 + s2 - 1 if mode == "valid": _check_valid_mode_shapes(s1, s2) # Speed up FFT by padding to optimal size for FFTPACK fshape = [_next_regular(int(d)) for d in shape] fslice = tuple([slice(0, int(sz)) for sz in shape]) # Pre-1.9 NumPy FFT routines are not threadsafe. For older NumPys, make # sure we only call rfftn/irfftn from one thread at a time. if not complex_result and (_rfft_mt_safe or _rfft_lock.acquire(False)): try: ret = (irfftn(rfftn(in1, fshape) * rfftn(in2, fshape), fshape)[fslice]. 
copy()) finally: if not _rfft_mt_safe: _rfft_lock.release() else: # If we're here, it's either because we need a complex result, or we # failed to acquire _rfft_lock (meaning rfftn isn't threadsafe and # is already in use by another thread). In either case, use the # (threadsafe but slower) SciPy complex-FFT routines instead. ret = ifftn(fftn(in1, fshape) * fftn(in2, fshape))[fslice].copy() if not complex_result: ret = ret.real if mode == "full": return ret elif mode == "same": return _centered(ret, s1) elif mode == "valid": return _centered(ret, s1 - s2 + 1) else: raise ValueError("Acceptable mode flags are 'valid'," " 'same', or 'full'.") %timeit scipy.signal.fftconvolve(camera, psf, 'same') %timeit fftconvolve2(camera, psf, 'same') def tv(im): """ Calculate the total variation image (1) Laasmaa, M.; Vendelin, M.; Peterson, P. Application of Regularized Richardson–Lucy Algorithm for Deconvolution of Confocal Microscopy Images. Journal of Microscopy 2011, 243 (2), 124–140. dx.doi.org/10.1111/j.1365-2818.2011.03486.x """ def m(a, b): ''' As described in (1) ''' return (sign(a)+sign(b))/2*minimum(abs(a), abs(b)) ndim = im.ndim g = np.zeros_like(p) i = 0 # g stores the gradients of out along each axis # e.g. g[0] is the first order finite difference along axis 0 for ax in range(ndim): a = 2*ax # backward difference g[a] = correlate1d(im, [-1, 1], ax) # forward difference g[a+1] = correlate1d(im, [-1, 1], ax, origin=-1) eps = finfo(float).eps oym, oyp, oxm, oxp = g return oxm*oxp/sqrt(oxp**2 +m(oyp,oym)**2+eps)+oym*oyp/sqrt(oyp**2 +m(oxp,oxm)**2+eps) def rl_update(convolve_method, kwargs): ''' A function that represents the core rl operation: $u^{(t+1)} = u^{(t)}\cdot\left(\frac{d}{u^{(t)}\otimes p}\otimes \hat{p}\right)$ Parameters ---------- image : ndarray original image to be deconvolved u_tm1 : ndarray previous u_t u_tp1 psf convolve_method ''' image = kwargs['image'] psf = kwargs['psf'] # use the prediction step to iterate on y_t = kwargs['y_t'] u_t = kwargs['u_t'] u_tm1 = kwargs['u_tm1'] g_tm1 = kwargs['g_tm1'] psf_mirror = psf[::-1, ::-1] blur = convolve_method(y_t, psf, 'same') relative_blur = ne.evaluate("image / blur") blur_blur = convolve_method(relative_blur, psf_mirror, 'same') u_tp1 = ne.evaluate("y_t*blur_blur") u_tp1[u_tp1 < 0] = 0 # update kwargs.update(dict( u_tm2 = u_tm1, u_tm1 = u_t, u_t = u_tp1, blur = blur_blur, g_tm2 = g_tm1, g_tm1 = ne.evaluate("u_tp1 - y_t") )) def richardson_lucy(image, psf, iterations=50, clip=False): """Richardson-Lucy deconvolution. Parameters ---------- image : ndarray Input degraded image (can be N dimensional). psf : ndarray The point spread function. iterations : int Number of iterations. This parameter plays the role of regularisation. clip : boolean, optional True by default. If true, pixel value of the result above 1 or under -1 are thresholded for skimage pipeline compatibility. Returns ------- im_deconv : ndarray The deconvolved image. Examples -------- >>> from skimage import color, data, restoration >>> camera = color.rgb2gray(data.camera()) >>> from scipy.signal import convolve2d >>> psf = np.ones((5, 5)) / 25 >>> camera = convolve2d(camera, psf, 'same') >>> camera += 0.1 * camera.std() * np.random.standard_normal(camera.shape) >>> deconvolved = restoration.richardson_lucy(camera, psf, 5, False) References ---------- .. [1] http://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution """ # Stolen from the dev branch of skimage because stable branch is slow # compute the times for direct convolution and the fft method. 
The fft is of # complexity O(N log(N)) for each dimension and the direct method does # straight arithmetic (and is O(n*k) to add n elements k times) direct_time = np.prod(image.shape + psf.shape) fft_time = np.sum([n*np.log(n) for n in image.shape + psf.shape]) # see whether the fourier transform convolution method or the direct # convolution method is faster (discussed in scikit-image PR #1792) time_ratio = 40.032 * fft_time / direct_time if time_ratio <= 1 or len(image.shape) > 2: convolve_method = fftconvolve2 else: convolve_method = convolve image = image.astype(np.float) psf = psf.astype(np.float) im_deconv = 0.5 * np.ones(image.shape) psf_mirror = psf[::-1, ::-1] rl_dict = dict( image=image, u_tm2=None, u_tm1=None, g_tm2=None, g_tm1=None, u_t=None, y_t=image, psf=psf ) for i in range(iterations): # d/(u_t \otimes p) rl_update(convolve_method, rl_dict) alpha = 0 if rl_dict['g_tm1'] is not None and rl_dict['g_tm2'] is not None and i > 1: alpha = (rl_dict['g_tm1'] * rl_dict['g_tm2']).sum()/(rl_dict['g_tm2']**2).sum() alpha = max(min(alpha,1),0) if alpha != 0: if rl_dict['u_tm1'] is not None: h1_t = rl_dict['u_t'] - rl_dict['u_tm1'] h1_t if rl_dict['u_tm2'] is not None: h2_t = rl_dict['u_t'] - 2 * rl_dict['u_tm1'] + rl_dict['u_tm2'] else: h2_t = 0 else: h1_t = 0 else: h2_t = 0 h1_t = 0 rl_dict['y_t'] = rl_dict['u_t']+alpha*h1_t+alpha**2/2*h2_t rl_dict['y_t'][rl_dict['y_t'] < 0] = 0 im_deconv = rl_dict['u_t'] if clip: im_deconv[im_deconv > 1] = 1 im_deconv[im_deconv < -1] = -1 return rl_dict deconvolved2 = richardson_lucy(camera, psf, 10) plt.matshow(camera, cmap='Greys_r') plt.matshow(np.real(deconvolved2['u_t']), cmap='Greys_r', vmin=camera.min(), vmax=camera.max()) %timeit deconvolved2 = richardson_lucy(camera, psf, 10) ```
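For comparison with the accelerated implementation above, here is a minimal, unaccelerated sketch of the core Richardson-Lucy update described in the `rl_update` docstring (divide the data by the current blurred estimate, convolve with the mirrored PSF, and multiply). It uses scipy's stock `fftconvolve` rather than the monkey-patched version, and it reuses the `camera` and `psf` arrays defined earlier in the notebook.

```
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_basic(image, psf, iterations=10, eps=1e-12):
    """Plain multiplicative Richardson-Lucy update, with no vector acceleration."""
    image = image.astype(float)
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(image.shape, 0.5)
    for _ in range(iterations):
        blur = fftconvolve(estimate, psf, mode='same')
        relative_blur = image / (blur + eps)                         # d / (u_t conv p)
        estimate *= fftconvolve(relative_blur, psf_mirror, 'same')   # times mirrored psf
        estimate[estimate < 0] = 0                                   # positivity constraint
    return estimate

basic_deconv = richardson_lucy_basic(camera, psf, iterations=10)
plt.matshow(basic_deconv, cmap='Greys_r', vmin=camera.min(), vmax=camera.max())
```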
github_jupyter
# Using matplotlib basemap to project California data ``` %matplotlib inline import pandas as pd, numpy as np, matplotlib.pyplot as plt from geopandas import GeoDataFrame from mpl_toolkits.basemap import Basemap from shapely.geometry import Point # define basemap colors land_color = '#F6F6F6' water_color = '#D2F5FF' coastline_color = '#333333' border_color = '#999999' # load the point data and select only points in california df = pd.read_csv('data/usa-latlong.csv') usa_points = GeoDataFrame(df) usa_points['geometry'] = usa_points.apply(lambda row: Point(row['longitude'], row['latitude']), axis=1) states = GeoDataFrame.from_file('data/states_21basic/states.shp') california = states[states['STATE_NAME']=='California']['geometry'] california_polygon = california.iloc[0] california_points = usa_points[usa_points.within(california_polygon)] # first define a transverse mercator projection map_width_m = 1000 * 1000 map_height_m = 1200 * 1000 target_crs = {'datum':'WGS84', 'ellps':'WGS84', 'proj':'tmerc', 'lon_0':-119, 'lat_0':37.5} # plot the map fig_width = 6 plt.figure(figsize=[fig_width, fig_width * map_height_m / float(map_width_m)]) m = Basemap(ellps=target_crs['ellps'], projection=target_crs['proj'], lon_0=target_crs['lon_0'], lat_0=target_crs['lat_0'], width=map_width_m, height=map_height_m, resolution='l', area_thresh=10000) m.drawcoastlines(color=coastline_color) m.drawcountries(color=border_color) m.fillcontinents(color=land_color, lake_color=water_color) m.drawstates(color=border_color) m.drawmapboundary(fill_color=water_color) x, y = m(np.array(california_points['longitude']), np.array(california_points['latitude'])) m.scatter(x, y, s=80, color='r', edgecolor='#333333', alpha=0.4, zorder=10) plt.show() # next define an albers projection for california target_crs = {'datum':'NAD83', 'ellps':'GRS80', 'proj':'aea', 'lat_1':35, 'lat_2':39, 'lon_0':-119, 'lat_0':37.5, 'x_0':map_width_m/2, 'y_0':map_height_m/2, 'units':'m'} # plot the map fig_width = 6 plt.figure(figsize=[fig_width, fig_width * map_height_m / float(map_width_m)]) m = Basemap(ellps=target_crs['ellps'], projection=target_crs['proj'], lat_1=target_crs['lat_1'], lat_2=target_crs['lat_2'], lon_0=target_crs['lon_0'], lat_0=target_crs['lat_0'], width=map_width_m, height=map_height_m, resolution='l', area_thresh=10000) m.drawcoastlines(color=coastline_color) m.drawcountries(color=border_color) m.fillcontinents(color=land_color, lake_color=water_color) m.drawstates(color=border_color) m.drawmapboundary(fill_color=water_color) x, y = m(np.array(california_points['longitude']), np.array(california_points['latitude'])) m.scatter(x, y, s=80, color='r', edgecolor='#333333', alpha=0.4, zorder=10) plt.show() ```
github_jupyter
## Summarize all common compounds and their percent strong scores ``` suppressPackageStartupMessages(library(dplyr)) suppressPackageStartupMessages(library(ggplot2)) suppressPackageStartupMessages(library(patchwork)) source("viz_themes.R") source("plotting_functions.R") source("data_functions.R") results_dir <- file.path("../1.Data-exploration/Profiles_level4/results/") # First, obtain the threshold to consider strong phenotype cell_painting_pr_df <- load_percent_strong(assay = "cellpainting", results_dir = results_dir) l1000_pr_df <- load_percent_strong(assay = "l1000", results_dir = results_dir) pr_df <- dplyr::bind_rows(cell_painting_pr_df, l1000_pr_df) pr_df$dose <- factor(pr_df$dose, levels = dose_order) threshold_df <- pr_df %>% dplyr::filter(type == 'non_replicate') %>% dplyr::group_by(assay, dose) %>% dplyr::summarise(threshold = quantile(replicate_correlation, 0.95)) threshold_plot_ready_df <- threshold_df %>% reshape2::dcast(dose ~ assay, value.var = "threshold") # Next, get the median pairwise correlations and determine if they pass the threshold cell_painting_comp_df <- load_median_correlation_scores(assay = "cellpainting", results_dir = results_dir) l1000_comp_df <- load_median_correlation_scores(assay = "l1000", results_dir = results_dir) # Note that the variable significant_compounds contains ALL compounds and a variable indicating if they pass the threshold significant_compounds_df <- cell_painting_comp_df %>% dplyr::left_join(l1000_comp_df, by = c("dose", "compound"), suffix = c("_cellpainting", "_l1000")) %>% tidyr::drop_na() %>% dplyr::left_join(threshold_df %>% dplyr::filter(assay == "Cell Painting"), by = "dose") %>% dplyr::left_join(threshold_df %>% dplyr::filter(assay == "L1000"), by = "dose", suffix = c("_cellpainting", "_l1000")) %>% dplyr::mutate( pass_cellpainting_thresh = median_replicate_score_cellpainting > threshold_cellpainting, pass_l1000_thresh = median_replicate_score_l1000 > threshold_l1000 ) %>% dplyr::mutate(pass_both = pass_cellpainting_thresh + pass_l1000_thresh) %>% dplyr::mutate(pass_both = ifelse(pass_both == 2, TRUE, FALSE)) %>% dplyr::select( compound, dose, median_replicate_score_cellpainting, median_replicate_score_l1000, pass_cellpainting_thresh, pass_l1000_thresh, pass_both ) # Count in how many doses the particular compound was reproducible cp_reprod_count_df <- significant_compounds_df %>% dplyr::filter(pass_cellpainting_thresh) %>% dplyr::group_by(compound) %>% dplyr::count() %>% dplyr::rename(cell_painting_num_reproducible = n) l1000_reprod_count_df <- significant_compounds_df %>% dplyr::filter(pass_l1000_thresh) %>% dplyr::group_by(compound) %>% dplyr::count() %>% dplyr::rename(l1000_num_reproducible = n) significant_compounds_df <- significant_compounds_df %>% dplyr::left_join(cp_reprod_count_df, by = "compound") %>% dplyr::left_join(l1000_reprod_count_df, by = "compound") %>% tidyr::replace_na(list(l1000_num_reproducible = 0, cell_painting_num_reproducible = 0)) %>% dplyr::mutate(total_reproducible = cell_painting_num_reproducible + l1000_num_reproducible) significant_compounds_df$dose <- factor(significant_compounds_df$dose, levels = dose_order) significant_compounds_df$compound <- tolower(significant_compounds_df$compound) print(length(unique(significant_compounds_df$compound))) # Output file for further use output_file <- file.path("data", "significant_compounds_by_threshold_both_assays.tsv.gz") significant_compounds_df %>% readr::write_tsv(output_file) print(dim(significant_compounds_df)) head(significant_compounds_df, 3) ```
github_jupyter
# Parameter Values In this notebook, we explain how parameter values are set for a model. Information on how to add parameter values is provided in our [online documentation](https://pybamm.readthedocs.io/en/latest/tutorials/add-parameter-values.html) ## Setting up parameter values ``` %pip install pybamm -q # install PyBaMM if it is not installed import pybamm import tests import numpy as np import os import matplotlib.pyplot as plt from pprint import pprint os.chdir(pybamm.__path__[0]+'/..') ``` In `pybamm`, the object that sets parameter values for a model is the `ParameterValues` class, which extends `dict`. This takes the values of the parameters as input, which can be either a dictionary, ``` param_dict = {"a": 1, "b": 2, "c": 3} parameter_values = pybamm.ParameterValues(param_dict) print("parameter values are {}".format(parameter_values)) ``` or a csv file, ``` f = open("param_file.csv", "w+") f.write( """ Name [units],Value a, 4 b, 5 c, 6 """ ) f.close() parameter_values = pybamm.ParameterValues("param_file.csv") print("parameter values are {}".format(parameter_values)) ``` or using one of the pre-set chemistries ``` print("Marquis2019 chemistry set is {}".format(pybamm.parameter_sets.Marquis2019)) chem_parameter_values = pybamm.ParameterValues(chemistry=pybamm.parameter_sets.Marquis2019) print("Negative current collector thickness is {} m".format( chem_parameter_values["Negative current collector thickness [m]"]) ) ``` We can input functions into the parameter values, either directly (note we bypass the check that the parameter already exists) ``` def cubed(x): return x ** 3 parameter_values.update({"cube function": cubed}, check_already_exists=False) print("parameter values are {}".format(parameter_values)) ``` or by using `pybamm.load_function` to load from a path to the function or just a name (in which case the whole directory is searched) ``` f = open("squared.py","w+") f.write( """ def squared(x): return x ** 2 """ ) f.close() parameter_values.update({"square function": pybamm.load_function("squared.py")}, check_already_exists=False) print("parameter values are {}".format(parameter_values)) ``` ## Setting parameters for an expression We represent parameters in models using the classes `Parameter` and `FunctionParameter`. These cannot be evaluated directly, ``` a = pybamm.Parameter("a") b = pybamm.Parameter("b") c = pybamm.Parameter("c") func = pybamm.FunctionParameter("square function", {"a": a}) expr = a + b * c try: expr.evaluate() except NotImplementedError as e: print(e) ``` However, the `ParameterValues` class can walk through an expression, changing an `Parameter` objects it sees to the appropriate `Scalar` and any `FunctionParameter` object to the appropriate `Function`, and the resulting expression can be evaluated ``` expr_eval = parameter_values.process_symbol(expr) print("{} = {}".format(expr_eval, expr_eval.evaluate())) func_eval = parameter_values.process_symbol(func) print("{} = {}".format(func_eval, func_eval.evaluate())) ``` If a parameter needs to be changed often (for example, for convergence studies or parameter estimation), the `InputParameter` class should be used. 
This is not fixed by parameter values, and its value can be set on evaluation (or on solve): ``` d = pybamm.InputParameter("d") expr = 2 + d expr_eval = parameter_values.process_symbol(expr) print("with d = {}, {} = {}".format(3, expr_eval, expr_eval.evaluate(inputs={"d": 3}))) print("with d = {}, {} = {}".format(5, expr_eval, expr_eval.evaluate(inputs={"d": 5}))) ``` ## Solving a model The code below shows the entire workflow of: 1. Proposing a toy model 2. Discretising and solving it first with one set of parameters, 3. then updating the parameters and solving again The toy model used is: $$\frac{\mathrm{d} u}{\mathrm{d} t} = -a u$$ with initial conditions $u(0) = b$. The model is first solved with $a = 3, b = 2$, then with $a = -1, b = 2$ ``` # Create model model = pybamm.BaseModel() u = pybamm.Variable("u") a = pybamm.Parameter("a") b = pybamm.Parameter("b") model.rhs = {u: -a * u} model.initial_conditions = {u: b} model.variables = {"u": u, "a": a, "b": b} # Set parameters, with a as an input ######################## parameter_values = pybamm.ParameterValues({"a": "[input]", "b": 2}) parameter_values.process_model(model) ############################################################# # Discretise using default discretisation disc = pybamm.Discretisation() disc.process_model(model) # Solve t_eval = np.linspace(0, 2, 30) ode_solver = pybamm.ScipySolver() solution = ode_solver.solve(model, t_eval, inputs={"a": 3}) # Post-process, so that u1 can be called at any time t (using interpolation) t_sol1 = solution.t u1 = solution["u"] # Solve again with different inputs ############################### solution = ode_solver.solve(model, t_eval, inputs={"a": -1}) t_sol2 = solution.t u2 = solution["u"] ################################################################### # Plot t_fine = np.linspace(0,t_eval[-1],1000) fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,4)) ax1.plot(t_fine, 2 * np.exp(-3 * t_fine), t_sol1, u1(t_sol1), "o") ax1.set_xlabel("t") ax1.legend(["2 * exp(-3 * t)", "u1"], loc="best") ax1.set_title("a = 3, b = 2") ax2.plot(t_fine, 2 * np.exp(t_fine), t_sol2, u2(t_sol2), "o") ax2.set_xlabel("t") ax2.legend(["2 * exp(t)", "u2"], loc="best") ax2.set_title("a = -1, b = 2") plt.tight_layout() plt.show() model.rhs ``` ## Printing parameter values In most models, it is useful to define dimensionless parameters, which are combinations of other parameters. However, since parameter objects must be processed by the `ParameterValues` class before they can be evaluated, it can be difficult to quickly check the value of a dimensionless parameter. You can print all of the dimensionless parameters in a model by using the `print_parameters` function. Note that the `print_parameters` function also gives the dependence of the parameters on C-rate (as some dimensionless parameters vary with C-rate), but we can ignore that here ``` a = pybamm.Parameter("a") b = pybamm.Parameter("b") parameter_values = pybamm.ParameterValues({"a": 4, "b": 3}) parameters = {"a": a, "b": b, "a + b": a + b, "a * b": a * b} param_eval = parameter_values.print_parameters(parameters) for name, (value,C_dependence) in param_eval.items(): print("{}: {}".format(name, value)) ``` If you provide an output file to `print_parameters`, the parameters will be printed to that output file.
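For example, a minimal sketch of writing the table to disk (the `output_file` keyword and the file name `parameters.txt` are assumptions based on the sentence above; the exact argument name may differ between PyBaMM releases, so check `help(parameter_values.print_parameters)` for your version):

```
# Hypothetical example: write the evaluated parameters to a text file and read it back.
param_eval = parameter_values.print_parameters(parameters, output_file="parameters.txt")
with open("parameters.txt") as f:
    print(f.read())
```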
github_jupyter
# Classifying Ionosphere structure using K nearest neigbours algorithm <hr> ### Nearest neighbors Amongst the standard machine algorithms, Nearest neighbors is perhaps one of the most intuitive algorithms. To predict the class of a new sample, we look through the training dataset for the samples that are most similar to our new sample. We take the most similar sample and predict the class that the majority of those samples have. As an example, we wish to predict the class of the '?', based on which class it is more similar to (represented here by having similar objects closer together). We find the five nearest neighbors, which are three triangles, one circle and one plus. There are more triangles than circles and plus, and hence the predicted class for the '?' is, therefore, a triangle. <img src = "images/knn.png"> [[image source]](https://github.com/rasbt/python-machine-learning-book/tree/master/images/image_gallery) Nearest neighbors can be used for nearly any dataset-however, since we will have to compute the distance between all pairs of samples, it can be very computationally expensive to do so. For example if there are 10 samples in the dataset, there are 45 unique distances to compute. However, if there are 1000 samples, there are nearly 500,000! #### Distance metrics If we have two samples, we need to know how close they are to each other. Further more, we need to answer questions such as are these two samples more similar than the other two? The most common distance metric that you might have heard of is Euclidean distance, which is the real-world distance. Formally, Euclidean distance is the square root of the sum of the squared distances for each feature. It is intuitive, albeit provides poor accuracy if some features have larger values than others. It also gives poor results when lots of features have a value of 0, i.e our data is 'sparse'. There are other distance metrics in use; two commonly employed ones are the Manhattan and Cosine distance. The Manhattan distance is the sum of the absolute differences in each feature (with no use of square distances). While the Manhattan distance does suffer if some features have larger values than others, the effect is not as dramatic as in the case of Euclidean. Regardless for the implementation of KNN algorithm here, we would consider the Euclidean distance. ## Dataset To understand KNNs, We will use the Ionosphere dataset, which is the recording of many high-frequency antennas. The aim of the antennas is to determine whether there is a structure in the ionosphere and a region in the upper atmosphere. Those that have a structure are classified as good, while those that do not are classified as bad. Our aim is to determine whether an image is good or bad. You can download the dataset from : http://archive.ics.uci.edu/ml/datasets/Ionosphere. Save the ionosphere.data file from the Data Folder to a folder named "data" on your computer. For each row in the dataset, there are 35 values. The first 34 are measurements taken from the 17 antennas (two values for each antenna). The last is either 'g' or 'b'; that stands for good and bad, respectively. 
``` import csv import numpy as np # Size taken from the dataset and is known X = np.zeros((351, 34), dtype='float') y = np.zeros((351,), dtype='bool') with open("data/Ionosphere/ionosphere.data", 'r') as input_file: reader = csv.reader(input_file) for i, row in enumerate(reader): # Get the data, converting each item to a float data = [float(datum) for datum in row[:-1]] # Set the appropriate row in our dataset X[i] = data # 1 if the class is 'g', 0 otherwise y[i] = row[-1] == 'g' ``` First, we load up the NumPy and csv modules. Then we create the X and y NumPy arrays to store the dataset in. The sizes of these arrays are known from the dataset. We take the first 34 values from each sample, turn each into a float, and save that to our dataset. Finally, we take the last value of the row and set the class. We set it to 1 (or True) if it is a good sample, and 0 if it is not. We now have a dataset of samples and features in X, and the corresponding classes in y. Estimators in scikit-learn have two main functions: fit() and predict(). We train the algorithm using the fit method and our training set. We evaluate it using the predict method on our testing set. First, we need to create these training and testing sets. As before, import and run the train_test_split function: ``` from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=14) print("There are {} samples in the training dataset".format(X_train.shape[0])) print("There are {} samples in the testing dataset".format(X_test.shape[0])) print("Each sample has {} features".format(X_train.shape[1])) ``` Then, we import the nearest neighbor class and create an instance for it using the default parameters. By default, the algorithm will choose the five nearest neighbors to predict the class of a testing sample: ``` from sklearn.neighbors import KNeighborsClassifier estimator = KNeighborsClassifier() ``` After creating our estimator, we must then fit it on our training dataset. For the nearest neighbor class, this records our dataset, allowing us to find the nearest neighbor for a new data point, by comparing that point to the training dataset: estimator.fit(X_train, y_train) We then fit the estimator on our training set and evaluate it on our testing set: ``` estimator.fit(X_train, y_train) y_predicted = estimator.predict(X_test) accuracy = np.mean(y_test == y_predicted) * 100 print("The accuracy is {0:.1f}%".format(accuracy)) ``` This scores 86.4 percent accuracy, which is impressive for a default algorithm and just a few lines of code! Most scikit-learn default parameters are chosen explicitly to work well with a range of datasets. However, you should always aim to choose parameters based on knowledge of the application and the experiment. ``` from sklearn.cross_validation import cross_val_score scores = cross_val_score(estimator, X, y, scoring='accuracy') average_accuracy = np.mean(scores) * 100 print("The average accuracy is {0:.1f}%".format(average_accuracy)) ``` Using cross-validation, this gives a slightly more modest result of 82.3 percent, but it is still quite good considering we have not yet tried setting better parameters. ### Tuning parameters Almost all data mining algorithms have parameters that the user can set. This is often a consequence of generalizing an algorithm so that it is applicable in a wide variety of circumstances. Setting these parameters can be quite difficult, as choosing good parameter values is often highly reliant on features of the dataset.
The nearest neighbor algorithm has several parameters, but the most important one is that of the number of nearest neighbors to use when predicting the class of an unseen attribution. In scikit-learn, this parameter is called n_neighbors. In the following figure, we show that when this number is too low, a randomly labeled sample can cause an error. In contrast, when it is too high, the actual nearest neighbors have a lower effect on the result. If we want to test a number of values for the n_neighbors parameter, for example, each of the values from 1 to 20, we can rerun the experiment many times by setting n_neighbors and observing the result: ``` avg_scores = [] all_scores = [] parameter_values = list(range(1, 21)) # Including 20 for n_neighbors in parameter_values: estimator = KNeighborsClassifier(n_neighbors=n_neighbors) scores = cross_val_score(estimator, X, y, scoring='accuracy') avg_scores.append(np.mean(scores)) all_scores.append(scores) ``` We compute and store the average in our list of scores. We also store the full set of scores for later analysis. We can then plot the relationship between the value of n_neighbors and the accuracy. ``` %matplotlib inline ``` We then import pyplot from the matplotlib library and plot the parameter values alongside average scores: ``` from matplotlib import pyplot as plt plt.figure(figsize=(32,20)) plt.plot(parameter_values, avg_scores, '-o', linewidth=5, markersize=24) #plt.axis([0, max(parameter_values), 0, 1.0]) ``` While there is a lot of variance, the plot shows a decreasing trend as the number of neighbors increases. ### Preprocessing using pipelines When taking measurements of real-world objects, we can often get features in very different ranges. Like we saw in the case of classifying Animal data using Naive Bayes, if we are measuring the qualities of an animal, we considered several features, as follows: * Number of legs: This is between the range of 0-8 for most animals, while some have many more! * Weight: This is between the range of only a few micrograms, all the way to a blue whale with a weight of 190,000 kilograms! * Number of hearts: This can be between zero to five, in the case of the earthworm. For a mathematical-based algorithm to compare each of these features, the differences in the scale, range, and units can be difficult to interpret. If we used the above features in many algorithms, the weight would probably be the most influential feature due to only the larger numbers and not anything to do with the actual effectiveness of the feature. One of the methods to overcome this is to use a process called preprocessing to normalize the features so that they all have the same range, or are put into categories like small, medium and large. Suddenly, the large difference in the types of features has less of an impact on the algorithm, and can lead to large increases in the accuracy. Preprocessing can also be used to choose only the more effective features, create new features, and so on. Preprocessing in scikit-learn is done through Transformer objects, which take a dataset in one form and return an altered dataset after some transformation of the data. These don't have to be numerical, as Transformers are also used to extract features-however, in this section, we will stick with preprocessing. An example We can show an example of the problem by breaking the Ionosphere dataset. While this is only an example, many real-world datasets have problems of this form. 
First, we create a copy of the array so that we do not alter the original dataset: ``` X_broken = np.array(X) ``` Next, we break the dataset by dividing every second feature by 10: ``` X_broken[:,::2] /= 10 ``` In theory, this should not have a great effect on the result. After all, the values for these features are still relatively the same. The major issue is that the scale has changed and the odd features are now larger than the even features. We can see the effect of this by computing the accuracy: ``` estimator = KNeighborsClassifier() original_scores = cross_val_score(estimator, X, y,scoring='accuracy') print("The original average accuracy for is {0:.1f}%".format(np.mean(original_scores) * 100)) broken_scores = cross_val_score(estimator, X_broken, y,scoring='accuracy') print("The 'broken' average accuracy for is {0:.1f}%".format(np.mean(broken_scores) * 100)) ``` This gives a score of 82.3 percent for the original dataset, which drops down to 71.5 percent on the broken dataset. We can fix this by scaling all the features to the range 0 to 1. ### Standard preprocessing The preprocessing we will perform for this experiment is called feature-based normalization through the MinMaxScaler class. ``` from sklearn.preprocessing import MinMaxScaler ``` This class takes each feature and scales it to the range 0 to 1. The minimum value is replaced with 0, the maximum with 1, and the other values somewhere in between. To apply our preprocessor, we run the transform function on it. While MinMaxScaler doesn't, some transformers need to be trained first in the same way that the classifiers do. We can combine these steps by running the fit_transform function instead: ``` X_transformed = MinMaxScaler().fit_transform(X) ``` Here, X_transformed will have the same shape as X. However, each column will have a maximum of 1 and a minimum of 0. There are various other forms of normalizing in this way, which is effective for other applications and feature types: * Ensure the sum of the values for each sample equals to 1, using sklearn. preprocessing.Normalizer * Force each feature to have a zero mean and a variance of 1, using sklearn. preprocessing.StandardScaler, which is a commonly used starting point for normalization * Turn numerical features into binary features, where any value above a threshold is 1 and any below is 0, using sklearn.preprocessing. Binarizer . We can now create a workflow by combining the code from the previous sections, using the broken dataset previously calculated: ``` X_transformed = MinMaxScaler().fit_transform(X_broken) estimator = KNeighborsClassifier() transformed_scores = cross_val_score(estimator, X_transformed, y,scoring='accuracy') print("The average accuracy for is {0:.1f}%".format(np.mean(transformed_scores) * 100)) ``` This gives us back our score of 82.3 percent accuracy. The MinMaxScaler resulted in features of the same scale, meaning that no features overpowered others by simply being bigger values. While the Nearest Neighbor algorithm can be confused with larger features, some algorithms handle scale differences better. In contrast, some are much worse! ### Pipelines As experiments grow, so does the complexity of the operations. We may split up our dataset, binarize features, perform feature-based scaling, perform sample-based scaling, and many more operations. Keeping track of all of these operations can get quite confusing and can result in being unable to replicate the result. 
Problems include forgetting a step, incorrectly applying a transformation, or adding a transformation that wasn't needed. Another issue is the order of the code. In the previous section, we created our X_transformed dataset and then created a new estimator for the cross validation. If we had multiple steps, we would need to track all of these changes to the dataset in the code. Pipelines are a construct that addresses these problems (and others, which we will see in the next chapter). Pipelines store the steps in your data mining workflow. They can take your raw data in, perform all the necessary transformations, and then create a prediction. This allows us to use pipelines in functions such as cross_val_score, where they expect an estimator. First, import the Pipeline object: ``` from sklearn.pipeline import Pipeline ``` Pipelines take a list of steps as input, representing the chain of the data mining application. The last step needs to be an Estimator, while all previous steps are Transformers. The input dataset is altered by each Transformer, with the output of one step being the input of the next step. Finally, the samples are classified by the last step's estimator. In our pipeline, we have two steps: 1. Use MinMaxScaler to scale the feature values from 0 to 1 2. Use KNeighborsClassifier as the classification algorithms Each step is then represented by a tuple ('name', step). We can then create our pipeline: ``` scaling_pipeline = Pipeline([('scale', MinMaxScaler()), ('predict', KNeighborsClassifier())]) ``` The key here is the list of tuples. The first tuple is our scaling step and the second tuple is the predicting step. We give each step a name: the first we call scale and the second we call predict, but you can choose your own names. The second part of the tuple is the actual Transformer or estimator object. Running this pipeline is now very easy, using the cross validation code from before: ``` scores = cross_val_score(scaling_pipeline, X_broken, y, scoring='accuracy') print("The pipeline scored an average accuracy for is {0:.1f}%".format(np.mean(transformed_scores) * 100)) ``` This gives us the same score as before (82.3 percent), which is expected, as we are effectively running the same steps. Setting up pipelines is a great way to ensure that the code complexity does not grow unmanageably. <hr> ### Notes: The right choice of k is crucial to find a good balance between over- and underfitting. We also have to make sure that we choose a distance metric that is appropriate for the features in the dataset. Often, while using the Euclidean distance measure, it is important to standardize the data so that each feature contributes equally to the distance. #### The curse of dimensionality It is important to mention that KNN is very susceptible to overfitting due to the curse of dimensionality. The curse of dimensionality describes the phenomenon where the feature space becomes increasingly sparse for an increasing number of dimensions of a fixed-size training dataset. Intuitively, we can think of even the closest neighbors being too far away in a high-dimensional space to give a good estimate. In models where regularization is not applicable such as decision trees and KNN, we can use feature selection and dimensionality reduction techniques to help us avoid the curse of dimensionality. #### Parametric versus nonparametric models Machine learning algorithms can be grouped into parametric and nonparametric models. 
Using parametric models, we estimate parameters from the training dataset to learn a function that can classify new data points without requiring the original training dataset anymore. Typical examples of parametric models are the perceptron, logistic regression, and the linear SVM. In contrast, nonparametric models can't be characterized by a fixed set of parameters, and the number of parameters grows with the training data. Two examples of nonparametric models that we have seen so far are the decision tree classifier/random forest and the kernel SVM. KNN belongs to a subcategory of nonparametric models that is described as instance-based learning. Models based on instance-based learning are characterized by memorizing the training dataset, and lazy learning is a special case of instance-based learning that is associated with no (zero) cost during the learning process _____ ### Summary In this chapter, we used several of scikit-learn's methods for building a standard workflow to run and evaluate data mining models. We introduced the Nearest Neighbors algorithm, which is already implemented in scikit-learn as an estimator. Using this class is quite easy; first, we call the fit function on our training data, and second, we use the predict function to predict the class of testing samples. We then looked at preprocessing by fixing poor feature scaling. This was done using a Transformer object and the MinMaxScaler class. These functions also have a fit method and then a transform, which takes a dataset as an input and returns a transformed dataset as an output. ___
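As a closing sketch (not part of the original chapter), the scaling pipeline and the parameter search can be combined so that MinMaxScaler is re-fit inside every cross-validation fold while `n_neighbors` is tuned. Note that `GridSearchCV` lives in `sklearn.grid_search` in the older scikit-learn versions that still ship `sklearn.cross_validation` (used above), and in `sklearn.model_selection` in newer releases.

```
# Tune n_neighbors inside a scaling pipeline (illustrative sketch).
from sklearn.model_selection import GridSearchCV  # use sklearn.grid_search on very old versions
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

knn_pipeline = Pipeline([('scale', MinMaxScaler()), ('predict', KNeighborsClassifier())])
param_grid = {'predict__n_neighbors': list(range(1, 21))}
grid = GridSearchCV(knn_pipeline, param_grid, scoring='accuracy', cv=5)
grid.fit(X_broken, y)

print("Best n_neighbors: {}".format(grid.best_params_['predict__n_neighbors']))
print("Best cross-validated accuracy: {0:.1f}%".format(grid.best_score_ * 100))
```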
github_jupyter
``` #load watermark %load_ext watermark %watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer,seaborn,keras,tflearn,bokeh,gensim from preamble import * %matplotlib inline ``` ## Algorithm Chains and Pipelines ``` from sklearn.svm import SVC from sklearn.datasets import load_breast_cancer from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler # load and split the data cancer = load_breast_cancer() X_train, X_test, y_train, y_test = train_test_split( cancer.data, cancer.target, random_state=0) # compute minimum and maximum on the training data scaler = MinMaxScaler().fit(X_train) # rescale the training data X_train_scaled = scaler.transform(X_train) svm = SVC() # learn an SVM on the scaled training data svm.fit(X_train_scaled, y_train) # scale the test data and score the scaled data X_test_scaled = scaler.transform(X_test) print("Test score: {:.2f}".format(svm.score(X_test_scaled, y_test))) ``` ### Parameter Selection with Preprocessing ``` from sklearn.model_selection import GridSearchCV # for illustration purposes only, don't use this code! param_grid = {'C': [0.001, 0.01, 0.1, 1, 10, 100], 'gamma': [0.001, 0.01, 0.1, 1, 10, 100]} grid = GridSearchCV(SVC(), param_grid=param_grid, cv=5) grid.fit(X_train_scaled, y_train) print("Best cross-validation accuracy: {:.2f}".format(grid.best_score_)) print("Best parameters: ", grid.best_params_) print("Test set accuracy: {:.2f}".format(grid.score(X_test_scaled, y_test))) mglearn.plots.plot_improper_processing() ``` ### Building Pipelines ``` from sklearn.pipeline import Pipeline pipe = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC())]) pipe.fit(X_train, y_train) print("Test score: {:.2f}".format(pipe.score(X_test, y_test))) ``` ### Using Pipelines in Grid-searches ``` param_grid = {'svm__C': [0.001, 0.01, 0.1, 1, 10, 100], 'svm__gamma': [0.001, 0.01, 0.1, 1, 10, 100]} grid = GridSearchCV(pipe, param_grid=param_grid, cv=5) grid.fit(X_train, y_train) print("Best cross-validation accuracy: {:.2f}".format(grid.best_score_)) print("Test set score: {:.2f}".format(grid.score(X_test, y_test))) print("Best parameters: {}".format(grid.best_params_)) mglearn.plots.plot_proper_processing() rnd = np.random.RandomState(seed=0) X = rnd.normal(size=(100, 10000)) y = rnd.normal(size=(100,)) from sklearn.feature_selection import SelectPercentile, f_regression select = SelectPercentile(score_func=f_regression, percentile=5).fit(X, y) X_selected = select.transform(X) print("X_selected.shape: {}".format(X_selected.shape)) from sklearn.model_selection import cross_val_score from sklearn.linear_model import Ridge print("Cross-validation accuracy (cv only on ridge): {:.2f}".format( np.mean(cross_val_score(Ridge(), X_selected, y, cv=5)))) pipe = Pipeline([("select", SelectPercentile(score_func=f_regression, percentile=5)), ("ridge", Ridge())]) print("Cross-validation accuracy (pipeline): {:.2f}".format( np.mean(cross_val_score(pipe, X, y, cv=5)))) ``` ### The General Pipeline Interface ``` def fit(self, X, y): X_transformed = X for name, estimator in self.steps[:-1]: # iterate over all but the final step # fit and transform the data X_transformed = estimator.fit_transform(X_transformed, y) # fit the last step self.steps[-1][1].fit(X_transformed, y) return self def predict(self, X): X_transformed = X for step in self.steps[:-1]: # iterate over all but the final step # transform the data X_transformed = step[1].transform(X_transformed) # predict using the last step 
return self.steps[-1][1].predict(X_transformed) ``` ![pipeline_illustration](images/pipeline.png) ### Convenient Pipeline creation with ``make_pipeline`` ``` from sklearn.pipeline import make_pipeline # standard syntax pipe_long = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC(C=100))]) # abbreviated syntax pipe_short = make_pipeline(MinMaxScaler(), SVC(C=100)) print("Pipeline steps:\n{}".format(pipe_short.steps)) from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA pipe = make_pipeline(StandardScaler(), PCA(n_components=2), StandardScaler()) print("Pipeline steps:\n{}".format(pipe.steps)) ``` #### Accessing step attributes ``` # fit the pipeline defined before to the cancer dataset pipe.fit(cancer.data) # extract the first two principal components from the "pca" step components = pipe.named_steps["pca"].components_ print("components.shape: {}".format(components.shape)) ``` #### Accessing Attributes in a Pipeline inside GridSearchCV ``` from sklearn.linear_model import LogisticRegression pipe = make_pipeline(StandardScaler(), LogisticRegression()) param_grid = {'logisticregression__C': [0.01, 0.1, 1, 10, 100]} X_train, X_test, y_train, y_test = train_test_split( cancer.data, cancer.target, random_state=4) grid = GridSearchCV(pipe, param_grid, cv=5) grid.fit(X_train, y_train) print("Best estimator:\n{}".format(grid.best_estimator_)) print("Logistic regression step:\n{}".format( grid.best_estimator_.named_steps["logisticregression"])) print("Logistic regression coefficients:\n{}".format( grid.best_estimator_.named_steps["logisticregression"].coef_)) ``` ### Grid-searching preprocessing steps and model parameters ``` from sklearn.datasets import load_boston boston = load_boston() X_train, X_test, y_train, y_test = train_test_split(boston.data, boston.target, random_state=0) from sklearn.preprocessing import PolynomialFeatures pipe = make_pipeline( StandardScaler(), PolynomialFeatures(), Ridge()) param_grid = {'polynomialfeatures__degree': [1, 2, 3], 'ridge__alpha': [0.001, 0.01, 0.1, 1, 10, 100]} grid = GridSearchCV(pipe, param_grid=param_grid, cv=5, n_jobs=-1) grid.fit(X_train, y_train) mglearn.tools.heatmap(grid.cv_results_['mean_test_score'].reshape(3, -1), xlabel="ridge__alpha", ylabel="polynomialfeatures__degree", xticklabels=param_grid['ridge__alpha'], yticklabels=param_grid['polynomialfeatures__degree'], vmin=0) print("Best parameters: {}".format(grid.best_params_)) print("Test-set score: {:.2f}".format(grid.score(X_test, y_test))) param_grid = {'ridge__alpha': [0.001, 0.01, 0.1, 1, 10, 100]} pipe = make_pipeline(StandardScaler(), Ridge()) grid = GridSearchCV(pipe, param_grid, cv=5) grid.fit(X_train, y_train) print("Score without poly features: {:.2f}".format(grid.score(X_test, y_test))) pipe = Pipeline([('preprocessing', StandardScaler()), ('classifier', SVC())]) from sklearn.ensemble import RandomForestClassifier param_grid = [ {'classifier': [SVC()], 'preprocessing': [StandardScaler(), None], 'classifier__gamma': [0.001, 0.01, 0.1, 1, 10, 100], 'classifier__C': [0.001, 0.01, 0.1, 1, 10, 100]}, {'classifier': [RandomForestClassifier(n_estimators=100)], 'preprocessing': [None], 'classifier__max_features': [1, 2, 3]}] X_train, X_test, y_train, y_test = train_test_split( cancer.data, cancer.target, random_state=0) grid = GridSearchCV(pipe, param_grid, cv=5) grid.fit(X_train, y_train) print("Best params:\n{}\n".format(grid.best_params_)) print("Best cross-validation score: {:.2f}".format(grid.best_score_)) print("Test-set score: 
{:.2f}".format(grid.score(X_test, y_test))) ``` ### Summary and Outlook ``` test complete ; Gopal ```
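One practical tip (a small sketch, not part of the original notebook): the step-name prefixes used in the grids above (`svm__C`, `classifier__max_features`, and so on) can be listed directly from the pipeline, which helps avoid typos in `param_grid` keys.

```
# List every parameter name that GridSearchCV will accept for this pipeline.
pipe = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC())])
for name in sorted(pipe.get_params().keys()):
    print(name)
```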
github_jupyter
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jfcrenshaw/pzflow/blob/main/examples/marginalization.ipynb) If running in Colab, to switch to GPU, go to the menu and select Runtime -> Change runtime type -> Hardware accelerator -> GPU. In addition, uncomment and run the following code: ``` # !pip install pzflow ``` ------------------- ## Marginalization during posterior calculation This example notebook demonstrates how to marginalize over missing variables during posterior calculation. We will use the Flow trained in the [redshift example](https://github.com/jfcrenshaw/pzflow/blob/main/examples/redshift_example.ipynb). ``` import jax.numpy as np import matplotlib.pyplot as plt from pzflow.examples import get_example_flow ``` First let's load the pre-trained flow, and use it to generate some samples: ``` flow = get_example_flow() samples = flow.sample(2, seed=123) samples ``` Remember that we can calculate posteriors for the data in samples. For example, let's plot redshift posteriors: ``` grid = np.linspace(0.25, 1.45, 100) pdfs = flow.posterior(samples, column="redshift", grid=grid) fig, axes = plt.subplots(1, 2, figsize=(5.5, 2), dpi=120, constrained_layout=True) for i, ax in enumerate(axes.flatten()): ax.plot(grid, pdfs[i], label="Redshift posterior") ztrue = samples["redshift"][i] ax.axvline(ztrue, c="C3", label="True redshift") ax.set( xlabel="redshift", xlim=(ztrue - 0.25, ztrue + 0.25), yticks=[] ) axes[0].legend( bbox_to_anchor=(0.55, 1.05, 1, 0.2), loc="lower left", mode="expand", borderaxespad=0, ncol=2, fontsize=8, ) plt.show() ``` But what if we have missing values? E.g. let's imagine that galaxy 1 wasn't observed in the u band, while galaxy 2 wasn't observed in the u or y bands. We will mark these non-observations with the value 99: ``` # make a new copy of the samples samples2 = samples.copy() # make the non-observations samples2.iloc[0, 1] = 99 samples2.iloc[1, 1] = 99 samples2.iloc[1, -1] = 99 # print the new samples samples2 ``` Now if we want to calculate posteriors, we can't simply call `flow.posterior()` as before because the flow will think that 99 is the actual value for those bands, rather than just a flag for a missing value. What we can do, however, is pass `marg_rules`, which is a dictionary of rules that tells the Flow how to marginalize over missing variables. `marg_rules` must include: - "flag": 99, which tells the posterior method that 99 is the flag for missing values - "u": callable, which returns an array of values for the u band over which to marginalize - "y": callable, which returns an array of values for the y band over which to marginalize "u" and "y" both map to callable, because you can use a function of the other values to decide what values of u and y to marginalize over. For example, maybe you expect the value of u to be close to the value of g. In which case you might use: ``` "u": lambda row: np.linspace(row["g"] - 1, row["g"] + 1, 100) ``` The only constraint is that regardless of the values of the other variables, the callable must *always* return an array of the same length. For this example, we won't make the marginalization rules a function of the other variables, but will instead return a fixed array. 
``` marg_rules = { "flag": 99, # tells the posterior method that 99 means missing value "u": lambda row: np.linspace(26, 28, 40), # the array of u values to marginalize over "y": lambda row: np.linspace(24, 26, 40), # the array of y values to marginalize over } pdfs2 = flow.posterior(samples2, column="redshift", grid=grid, marg_rules=marg_rules) fig, axes = plt.subplots(1, 2, figsize=(5.5, 2), dpi=120, constrained_layout=True) for i, ax in enumerate(axes.flatten()): ax.plot(grid, pdfs[i], label="Posterior w/ all bands") ax.plot(grid, pdfs2[i], label="Posterior w/ missing bands marginalized") ztrue = samples["redshift"][i] ax.axvline(ztrue, c="C3", label="True redshift") ax.set( xlabel="redshift", xlim=(ztrue - 0.25, ztrue + 0.25), yticks=[] ) axes[0].legend( bbox_to_anchor=(0, 1.05, 2, 0.2), loc="lower left", mode="expand", borderaxespad=0, ncol=3, fontsize=7.5, ) plt.show() ``` You can see that marginalizing over the bands (aka throwing out information), degrades the posteriors. Warning that marginalizing over fine grids quickly gets very computationally expensive, especially when you have rows in your data frame that are missing multiple values.
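As a final illustration (a sketch that is not part of the original notebook), here is what a conditional rule of the kind described above would look like, with the u-band grid centered on each galaxy's observed g magnitude; `conditional_marg_rules` and `pdfs3` are new names introduced here.

```
# A conditional marginalization rule: the u grid depends on the g band for each row,
# but always has the same length (40 points), as required.
conditional_marg_rules = {
    "flag": 99,
    "u": lambda row: np.linspace(row["g"] - 1, row["g"] + 1, 40),
    "y": lambda row: np.linspace(24, 26, 40),
}

pdfs3 = flow.posterior(samples2, column="redshift", grid=grid, marg_rules=conditional_marg_rules)
print(pdfs3.shape)
```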
github_jupyter
``` import numpy as np import matplotlib.pyplot as plt from scipy import ndimage as ndi import os from PIL import Image import PIL.ImageOps from skimage.morphology import watershed from skimage.feature import peak_local_max from skimage.filters import threshold_otsu from skimage.morphology import binary_closing from skimage.color import rgb2gray ``` Watershed with binarization first ``` arraydirectory= './edge_array/' photodirectory='./photos/' image=np.array(Image.open(photodirectory + '1449.jpg')) image = rgb2gray(image) thresh = threshold_otsu(image) img_bin = image > thresh image_closed=binary_closing(img_bin) # Now we want to separate the two objects in image # Generate the markers as local maxima of the distance to the background distance = ndi.distance_transform_edt(image_closed) local_maxi = peak_local_max(distance, indices=False) markers = ndi.label(local_maxi)[0] labels = watershed(-distance, markers, mask=image_closed) fig, axes = plt.subplots(ncols=3, figsize=(9, 3), sharex=True, sharey=True, subplot_kw={'adjustable': 'box-forced'}) ax = axes.ravel() ax[0].imshow(image_closed, cmap=plt.cm.gray, interpolation='nearest') ax[0].set_title('Overlapping objects') ax[1].imshow(-distance, cmap=plt.cm.gray, interpolation='nearest') ax[1].set_title('Distances') ax[2].imshow(labels, cmap=plt.cm.spectral, interpolation='nearest') ax[2].set_title('Separated objects') for a in ax: a.set_axis_off() fig.tight_layout() plt.show() ``` Watershed on image itself ``` arraydirectory= './edge_array/' photodirectory='./photos/' image=np.array(Image.open(photodirectory + '1449.jpg')) # Now we want to separate the two objects in image # Generate the markers as local maxima of the distance to the background distance = ndi.distance_transform_edt(image) local_maxi = peak_local_max(distance, indices=False) markers = ndi.label(local_maxi)[0] labels = watershed(-distance, markers, mask=image) fig, axes = plt.subplots(ncols=3, figsize=(9, 3), sharex=True, sharey=True, subplot_kw={'adjustable': 'box-forced'}) ax = axes.ravel() ax[0].imshow(image, cmap=plt.cm.gray, interpolation='nearest') ax[0].set_title('Overlapping objects') ax[1].imshow(-distance, cmap=plt.cm.gray, interpolation='nearest') ax[1].set_title('Distances') ax[2].imshow(labels, cmap=plt.cm.spectral, interpolation='nearest') ax[2].set_title('Separated objects') for a in ax: a.set_axis_off() fig.tight_layout() plt.show() ``` So we use Watershed on the binary picture. ``` arraydirectory= '../FeatureSampleFoodClassification/watershed_array/' photodirectory='../SampleFoodClassifier_Norm/' if not os.path.exists(arraydirectory): os.makedirs(arraydirectory) for fn in os.listdir(photodirectory): if os.path.isfile(photodirectory + fn) and '.jpg' in fn: img=np.array(Image.open(photodirectory + fn)) img = rgb2gray(img) thresh = threshold_otsu(img) img_bin = img > thresh img_closed=binary_closing(img_bin) # Now we want to separate the two objects in image # Generate the markers as local maxima of the distance to the background distance = ndi.distance_transform_edt(img_closed) local_maxi = peak_local_max(distance, indices=False) markers = ndi.label(local_maxi)[0] ws = watershed(-distance, markers, mask=img_closed) ws_flat=[item for sublist in ws for item in sublist] np.save(arraydirectory + fn,ws_flat) print('done') ```
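A quick sanity check on the saved arrays (a sketch, assuming at least one `.npy` file was written by the loop above; `example_file` is a new name introduced here):

```
# Reload one flattened watershed result and count the distinct segments
# (label 0 is treated as background).
import os
import numpy as np

example_file = next(f for f in os.listdir(arraydirectory) if f.endswith('.npy'))
ws_flat = np.load(os.path.join(arraydirectory, example_file))
labels = np.unique(ws_flat)
print('{}: {} segments'.format(example_file, len(labels[labels != 0])))
```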
github_jupyter
# Rerank with MonoT5 ``` !nvidia-smi from pygaggle.rerank.base import Query, Text from pygaggle.rerank.transformer import MonoT5 from trectools import TrecRun import ir_datasets monoT5Reranker = MonoT5() DIR='/mnt/ceph/storage/data-in-progress/data-teaching/theses/wstud-thesis-probst/retrievalExperiments/runs-ecir22/' DIR_v2='/mnt/ceph/storage/data-in-progress/data-teaching/theses/wstud-thesis-probst/retrievalExperiments/runs-marco-v2-ecir22/' def load_topics(version, file): import pandas as pd return pd.read_csv('../../Data/navigational-topics-and-qrels-ms-marco-v' + str(version) + '/' + file, sep='\t', names=['num', 'query']) df_popular_queries = load_topics(1, 'topics.msmarco-entrypage-popular.tsv') df_random_queries = load_topics(1, 'topics.msmarco-entrypage-random.tsv') df_popular_run = TrecRun(DIR + 'entrypage-popular/run.ms-marco-content.bm25-default.txt') df_random_run = TrecRun(DIR + 'entrypage-random/run.ms-marco-content.bm25-default.txt') df_popular_queries_v2 = load_topics(2, 'topics.msmarco-v2-entrypage-popular.tsv') df_random_queries_v2 = load_topics(2, 'topics.msmarco-v2-entrypage-random.tsv') df_popular_run_v2 = TrecRun(DIR_v2 + 'entrypage-popular/run.msmarco-doc-v2.bm25-default.txt') df_random_run_v2 = TrecRun(DIR_v2 + 'entrypage-random/run.msmarco-doc-v2.bm25-default.txt') df_popular_queries df_popular_run df_random_queries df_random_run df_random_run.run_data ``` # The actual reranking ``` def get_query_or_fail(df_queries, topic_number): ret = df_queries[df_queries['num'] == int(topic_number)] if len(ret) != 1: raise ValueError('Could not handle ' + str(topic_number)) return ret.iloc[0]['query'] marco_v1_doc_store = ir_datasets.load('msmarco-document').docs_store() marco_v2_doc_store = ir_datasets.load('msmarco-document-v2').docs_store() def get_doc_text(doc_id): if doc_id.startswith('msmarco_doc_'): ret = marco_v2_doc_store.get(doc_id) else: ret = marco_v1_doc_store.get(doc_id) return ret.title + ' ' + ret.body def docs_for_topic(df_run, topic_number): return df_run.run_data[df_run.run_data['query'] == int(topic_number)].docid def rerank_with_model(topic, df_queries, df_run, model): query = get_query_or_fail(df_queries, topic) print('rerank query ' + query) documents = [Text(get_doc_text(i), {'docid': i}, 0) for i in docs_for_topic(df_run, topic)[:100]] ret = sorted(model.rerank(Query(query), documents), key=lambda i: i.score, reverse=True) return [{'score': i.score, 'id': i.metadata['docid'], 'body': i.text} for i in ret] def rerank(file_name, df_run, df_queries, model, tag): from tqdm import tqdm with open(file_name, 'w') as out_file: for topic in tqdm(df_queries.num): for i in zip(range(100), rerank_with_model(topic, df_queries, df_run, model)): out_file.write(str(topic) + ' Q0 ' + i[1]['id'] + ' ' + str(i[0] + 1) + ' ' + str(i[1]['score']) + ' ' + tag + '\n') ``` # Marco V1 ``` rerank(DIR + 'entrypage-random/run.ms-marco-content.bm25-mono-t5.txt', df_random_run, df_random_queries.copy(), monoT5Reranker, 'mono-t5-at-bm25') rerank(DIR + 'entrypage-popular/run.ms-marco-content.bm25-mono-t5.txt', df_popular_run, df_popular_queries.copy(), monoT5Reranker, 'mono-t5-at-bm25') ``` # Marco V2 ``` rerank(DIR_v2 + 'entrypage-random/run.ms-marco-content.bm25-mono-t5.txt', df_random_run_v2, df_random_queries_v2.copy(), monoT5Reranker, 'mono-t5-at-bm25') rerank(DIR_v2 + 'entrypage-popular/run.ms-marco-content.bm25-mono-t5.txt', df_popular_run_v2, df_popular_queries_v2.copy(), monoT5Reranker, 'mono-t5-at-bm25') ``` # Rerank with MonoBERT ``` from pygaggle.rerank.transformer import 
MonoBERT monoBert = MonoBERT() rerank(DIR + 'entrypage-random/run.ms-marco-content.bm25-mono-bert.txt', df_random_run, df_random_queries.copy(), monoBert, 'mono-bert-at-bm25') rerank(DIR + 'entrypage-popular/run.ms-marco-content.bm25-mono-bert.txt', df_popular_run, df_popular_queries.copy(), monoBert, 'mono-bert-at-bm25') rerank(DIR_v2 + 'entrypage-random/run.ms-marco-content.bm25-mono-bert.txt', df_random_run_v2, df_random_queries_v2.copy(), monoBert, 'mono-bert-at-bm25') rerank(DIR_v2 + 'entrypage-popular/run.ms-marco-content.bm25-mono-bert.txt', df_popular_run_v2, df_popular_queries_v2.copy(), monoBert, 'mono-bert-at-bm25') ```
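To spot-check a reranked run, it can be loaded back with `trectools` (a sketch, not part of the original notebook; the `rank` and `score` column names are assumed to follow the standard `TrecRun.run_data` layout already used above):

```
# Peek at the top-3 reranked documents per topic for the popular entry-page run.
reranked = TrecRun(DIR + 'entrypage-popular/run.ms-marco-content.bm25-mono-t5.txt')
top3 = reranked.run_data.sort_values(['query', 'rank']).groupby('query').head(3)
print(top3[['query', 'docid', 'rank', 'score']].head(15))
```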
github_jupyter
# GitHub: the social network for developers, powered by Git _Author_: Hugo Ducommun _Date_: 30 May 2019 _GitHub_ is a platform for projects by motivated young developers who want to publish their work openly (open source). _GitHub_ is known for being convenient when working in a team. It lets every collaborator work on one and the same project without affecting the progress of the others. The website can also be used professionally through paid accounts. ## We often talk about `git`, so what exactly is it? **git** is the version-control (versioning) software that the _GitHub_ website uses. It makes it easy to access the history of earlier versions of a project and to keep files synchronized through a system of **branches**, which I will develop in the next section. In reality, git is the foundation of the _GitHub_ website. _GitHub_ added a graphical interface on top of git, which is otherwise mainly used from a terminal; this is why _GitHub_ is better known than the versioning software itself. For this reason, I will mainly study the git software here and add some extra information about _GitHub_. # How `git` works ### An illustrated introduction to the notion of a *branch* Git works with branches. Here is a small diagram that I found very expressive about how this works. ![git_branches](./img/git_branch.png) So we have two different types of *branches*, yes, two and not three as the diagram might suggest. There is the middle *branch*, called the **master branch**; this is where the official, working version of your project lives. Then there is a second type, called a **feature branch**, represented by the named branches in the diagram, hat and glasses (in fact, every branch except the master branch is a feature branch). The idea is simple: the project here is to add a hat and glasses to the base image of the octopus. One collaborator will take care of the hat (C1) and another of the glasses (C2). They will proceed as follows: 1. C1 and C2 copy the current project (the master branch) into a personal feature branch. 2. They make their changes and get them working (adding a hat or glasses). 3. They upload their changes back into the master branch to obtain a complete project. ### Technical terms Of course, this is a bit more complicated in a real project: there is much more to do than adding two accessories, and we have to work on the command line. But it is a good first approach to the concept. I will now detail some of the terms git uses. --- #### ![icon_repository](./img/icons-repository.png) #### Repository A repository is your project as a whole: the documents you edit and whose changes you track live there. The repository can be local or hosted on your dedicated server. --- #### Branch A branch is one of the branches copied from the master branch by default. Once the copy of master has been created, it is no longer affected by the changes made on the other branches of the project. In the diagram below, 'Copie de A' is a branch of 'Branche A'. Incidentally, to create that copy of branch A, the user had to **fork** it.
The command to create a new branch is: `git branch nomNouvelleBranche` The command `git branch --list` lets you see the list of all branches in the current repository. ![branch](./img/branch.png) --- #### ![icon_pull-request](./img/icons-pull-request.png) #### Pull request This term can be translated as a request to **merge**. It is when a collaborator wants to merge their branch into another one (usually the master branch) in order to apply changes such as bug fixes or new features to the target branch. The person responsible for the target branch is free to accept or refuse the **pull request**. ![pull-request](./img/pull_request.png) --- #### ![icon_fork](./img/icons-code-fork.png) #### Fork Fork (literally 'fourchette' in French) means copying an already existing branch. We often fork the master branch at the start so we can create our own branch and modify the project without impacting the master branch. --- #### ![icon_merge](./img/icons-merge-git.png) #### Merge Merge is roughly the opposite of fork: once we have changed everything we wanted, we can merge our branch into another one. This operation is usually protected by a pull request, otherwise anyone could modify any branch. If the merge succeeds, the changes from branch B are applied to branch A. The command to merge branch B is: `git merge brancheB` Careful: it is important to run this command from branch A! ![merge](./img/merge.png) --- #### ![icon_commit](./img/icons-commit-git.png) #### Commit Commit is the most common action you will perform with git. As its icon suggests, it corresponds to a change on the branch in question. When you have modified a branch locally, you must **commit** to record the changes, usually with an informative message so you can find old changes more easily later. The command to commit with an example message is: `git commit -m 'Add the cow-boy hat'` ![commit](./img/commit.png) --- #### ![icon_push](./img/icons-arrow-up.png) #### Push Sends all your commits to the dedicated server hosting the repository (the remote repository). In a way, you 'send' your files to your collaborators. The command to push is: `git push` --- #### ![icon_pull](./img/icons-down-arrow.png) #### Pull The opposite of push: you receive the files sent by your colleagues. Before every major work session, make sure you pull to see your team's progress. It downloads the repository's folders and files to your local machine. The command to pull is: `git pull` --- # Other git commands We have already seen a few commands in the technical terms above; here are the rest of the basic commands: * `git init`: initializes your folder as a git folder * `git clone URL`: clones an existing repository into the folder where you run the command (example URL: https://github.com/Bugnon/oc-2018.git) * `git status`: shows the status of the files in your repository. It lets you see where things stand. * `git add nomFichier`: adds files to the index (staging area). The command `git add *` adds all modified files.
* `git checkout nomBranche`: used to switch from one branch to another (basic usage) The first time you use git, you will have to register your username and your email. After running the `git init` command, which initializes your folder as a git folder, run the two commands below: * `git config --global user.name 'hugoducom'` * `git config --global user.email '[email protected]'` An advantage of Git is that it is particularly well documented online, because those who master it are generally quite active on forums. You can always find help on the various platforms, and also through the commands `git help nomCmd`, for example `git help checkout`. # Summary diagram of git and its commands ![git schematic](./img/git_schematic.png) _The `git fetch` command will not be covered here._ ### Publishing a file In short, when we want to publish a file, for example _index.html_, in our repository, we have to type, in this order: 1. `git pull` 2. `git add index.html` 3. `git commit -m 'Ajoute de mon fichier html'` 4. `git push` The `git pull` at the beginning avoids file conflicts when we push, by bringing our versioned working copy up to date. # Still not clear? Here is a practical example I am a young web developer who wants to share my first steps on a development platform like GitHub. So I create a folder on my desktop called "web". This is the folder I want to share on GitHub. I therefore download [Git](https://git-scm.com/downloads). It is a small project, so I will work only on the _master branch_ and will not create any other branch. After the installation, I right-click on my folder and choose **Git Bash Here**. A console opens, and this is where you will type your commands. Since I have read this notebook, I first type `git init`, then record my information with `git config`. ![init](./img/git-init.PNG) I then quietly start developing in this folder. I create my _index.html_ file. I start coding, and at some point I want to put what I have done online. So I run `git add index.html`, or `git add *` if I want to add every file in my web folder. Using the `git status` command, I notice that my files are ready to be committed to the local repository. Then `git commit -m 'Ajout de la première version de mon site'`. ![git add](./img/git-add.PNG) Now go to [GitHub](https://github.com/new) to create our repository. I log in and fill in the required information. ![github](./img/github.PNG) Next, we have to run the two commands that GitHub asks for: * `git remote add origin https://github.com/hugoducom/web.git` * `git push -u origin master` During the second command you may be asked for your GitHub login and password, so you need a GitHub account. ![git remote](./img/git-remote.PNG) The hardest part is done! Your repository is online on GitHub, well done! If you reload the page https://github.com/hugoducom/web you will find your _index.html_ file. ![win](./img/win.PNG) For the rest of your web development adventure, you simply need to follow the 'Publishing a file' section a bit higher up in this document.
Once you have understood how git works and are able to do everything from the command line, you can download applications that do the work for you, such as [GitHub Desktop](https://desktop.github.com/), which will greatly simplify file sharing throughout your development career. --- #### Sources: * https://gerardnico.com/code/version/git/branch * https://fr.wikipedia.org/wiki/GitHub * https://fr.wikipedia.org/wiki/Git * https://www.sebastien-gandossi.fr/blog/difference-entre-git-reset-et-git-rm-cached * https://www.youtube.com/watch?v=4o9qzbssfII
github_jupyter
<a href="https://colab.research.google.com/github/davemcg/scEiaD/blob/master/colab/cell_type_ML_labelling.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Auto Label Retinal Cell Types ## tldr You can take your (retina) scRNA data and fairly quickly use the scEiaD ML model to auto label your cell types. I say fairly quickly because it is *best* if you re-quantify your data with the same reference and counter (kallisto) that we use. You *could* try using your counts from cellranger/whatever....but uh...stuff might get weird. # Install scvi and kallisto-bustools ``` import sys import re #if True, will install via pypi, else will install from source stable = True IN_COLAB = "google.colab" in sys.modules if IN_COLAB and stable: !pip install --quiet scvi-tools[tutorials]==0.9.0 #!pip install --quiet python==3.8 pandas numpy scikit-learn xgboost==1.3 !pip install --quiet kb-python !pip install --quiet pandas numpy scikit-learn xgboost==1.3.1 ``` # Download our kallisto index As our example set is mouse, we use the Gencode vM25 transcript reference. The script that makes the idx and t2g file is [here](https://github.com/davemcg/scEiaD/raw/c3a9dd09a1a159b1f489065a3f23a753f35b83c9/src/build_idx_and_t2g_for_colab.sh). This is precomputed as it takes about 30 minutes and 32GB of memory. There's one more wrinkle worth noting: as scEiaD was built across human, mouse, and macaque unified gene names are required. We chose to use the *human* ensembl ID (e.g. CRX is ENSG00000105392) as the base gene naming system. (Download links): ``` # Mouse https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/gencode.vM25.transcripts.idx https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/vM25.tr2gX.humanized.tsv # Human https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/gencode.v35.transcripts.idx https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/v35.tr2gX.tsv ``` ``` %%time !wget -O idx.idx https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/gencode.vM25.transcripts.idx !wget -O t2g.txt https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/vM25.tr2gX.humanized.tsv ``` # Quantify with kbtools (Kallisto - Bustools wrapper) in one easy step. Going into the vagaries of turning a SRA deposit into a non-borked pair of fastq files is beyond the scope of this document. Plus I would swear a lot. So we just give an example set from a Human organoid retina 10x (version 2) experiment. The Pachter Lab has a discussion of how/where to get public data here: https://colab.research.google.com/github/pachterlab/kallistobustools/blob/master/notebooks/data_download.ipynb If you have your own 10X bam file, then 10X provides a very nice and simple tool to turn it into fastq file here: https://github.com/10XGenomics/bamtofastq To reduce run-time we have taken the first five million reads from this fastq pair. This will take ~3 minutes, depending on the internet speed between Google and our server You can also directly stream the file to improve wall-time, but I was getting periodic errors, so we are doing the simpler thing and downloading each fastq file here first. 
``` %%time !wget -O sample_1.fastq.gz https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/SRR11799731_1.head.fastq.gz !wget -O sample_2.fastq.gz https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/SRR11799731_2.head.fastq.gz !kb count --overwrite --h5ad -i idx.idx -g t2g.txt -x DropSeq -o output --filter bustools -t 2 \ sample_1.fastq.gz \ sample_2.fastq.gz ``` # Download models (and our xgboost functions for cell type labelling) The scVI model is the same that we use to create the data for plae.nei.nih.gov The xgboost model is a simplified version that *only* uses the scVI latent dims and omits the Early/Late/RPC cell types and collapses them all into "RPC" ``` !wget -O scVI_scEiaD.tgz https://hpc.nih.gov/~mcgaugheyd/scEiaD/2021_03_17/2021_03_17__scVI_scEiaD.tgz !tar -xzf scVI_scEiaD.tgz !wget -O celltype_ML_model.tar https://hpc.nih.gov/~mcgaugheyd/scEiaD/2021_03_17/2021_cell_type_ML_all.tar !tar -xf celltype_ML_model.tar !wget -O celltype_predictor.py https://raw.githubusercontent.com/davemcg/scEiaD/master/src/cell_type_predictor.py ``` # Python time ``` import anndata import sys import os import numpy as np import pandas as pd import random import scanpy as sc from scipy import sparse import scvi import torch # 2 cores sc.settings.n_jobs = 2 # set seeds random.seed(234) scvi.settings.seed = 234 # set some args org = 'mouse' n_epochs = 15 confidence = 0.5 ``` # Load adata And process (mouse processing requires a bit more jiggling that can be skipped if you have human data) ``` # load query data adata_query = sc.read_h5ad('output/counts_filtered/adata.h5ad') adata_query.layers["counts"] = adata_query.X.copy() adata_query.layers["counts"] = sparse.csr_matrix(adata_query.layers["counts"]) # Set scVI model path scVI_model_dir_path = 'scVIprojectionSO_scEiaD_model/n_features-5000__transform-counts__partition-universe__covariate-batch__method-scVIprojectionSO__dims-8/' # Read in HVG genes used in scVI model var_names = pd.read_csv(scVI_model_dir_path + '/var_names.csv', header = None) # cut down query adata object to use just the var_names used in the scVI model training if org.lower() == 'mouse': adata_query.var_names = adata_query.var['gene_name'] n_missing_genes = sum(~var_names[0].isin(adata_query.var_names)) dummy_adata = anndata.AnnData(X=sparse.csr_matrix((adata_query.shape[0], n_missing_genes))) dummy_adata.obs_names = adata_query.obs_names dummy_adata.var_names = var_names[0][~var_names[0].isin(adata_query.var_names)] adata_fixed = anndata.concat([adata_query, dummy_adata], axis=1) adata_query_HVG = adata_fixed[:, var_names[0]] ``` # Run scVI (trained on scEiaD data) Goal: get scEiaD batch corrected latent space for *your* data ``` adata_query_HVG.obs['batch'] = 'New Data' scvi.data.setup_anndata(adata_query_HVG, batch_key="batch") vae_query = scvi.model.SCVI.load_query_data( adata_query_HVG, scVI_model_dir_path ) # project scVI latent dims from scEiaD onto query data vae_query.train(max_epochs=n_epochs, plan_kwargs=dict(weight_decay=0.0)) # get the latent dims into the adata adata_query_HVG.obsm["X_scVI"] = vae_query.get_latent_representation() ``` # Get Cell Type predictions (this xgboost model does NOT use the organim or Age information, but as those field were often used by use, they got hard-coded in. So we will put dummy values in). 
``` # extract latent dimensions obs=pd.DataFrame(adata_query_HVG.obs) obsm=pd.DataFrame(adata_query_HVG.obsm["X_scVI"]) features = list(obsm.columns) obsm.index = obs.index.values obsm['Barcode'] = obsm.index obsm['Age'] = 1000 obsm['organism'] = 'x' # xgboost ML time from celltype_predictor import * CT_predictions = scEiaD_classifier_predict(inputMatrix=obsm, labelIdCol='ID', labelNameCol='CellType', trainedModelFile= os.getcwd() + '/2021_cell_type_ML_all', featureCols=features, predProbThresh=confidence) ``` # What do we have? ``` CT_predictions['CellType'].value_counts() ```
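If the label counts above look reasonable, it is convenient to carry the predictions back onto the AnnData object so they can be used with scanpy plotting and subsetting. Below is a minimal sketch of one way to do that; it assumes `CT_predictions` comes back with one row per barcode in the same order as the `obsm` table built above — check the output of `scEiaD_classifier_predict` on your own data before relying on that.

```
# Sketch: attach the predicted labels to the query AnnData and visualize them
# on the scVI latent space (assumes the row order of CT_predictions matches obsm)
adata_query_HVG.obs['CellType_scEiaD'] = CT_predictions['CellType'].values

# neighbors/UMAP on the batch-corrected scVI dims, colored by predicted cell type
sc.pp.neighbors(adata_query_HVG, use_rep="X_scVI")
sc.tl.umap(adata_query_HVG)
sc.pl.umap(adata_query_HVG, color='CellType_scEiaD')
```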
github_jupyter
## The QLBS model for a European option Welcome to your 2nd assignment in Reinforcement Learning in Finance. In this exercise you will arrive at an option price and the hedging portfolio via the standard toolkit of Dynamic Programming (DP). The QLBS model learns both the optimal option price and optimal hedge directly from trading data. **Instructions:** - You will be using Python 3. - Avoid using for-loops and while-loops, unless you are explicitly told to do so. - Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function. - After coding your function, run the cell right below it to check if your result is correct. - When encountering **```# dummy code - remove```** please replace this code with your own **After this assignment you will:** - Re-formulate option pricing and hedging methods using the language of Markov Decision Processes (MDP) - Set up forward simulation using Monte Carlo - Expand the optimal action (hedge) $a_t^\star(X_t)$ and optimal Q-function $Q_t^\star(X_t, a_t^\star)$ in basis functions with time-dependent coefficients Let's get started! ## About iPython Notebooks ## iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook. We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter. ``` #import warnings #warnings.filterwarnings("ignore") import numpy as np import pandas as pd from scipy.stats import norm import random import time import matplotlib.pyplot as plt import sys sys.path.append("..") import grading ### ONLY FOR GRADING. DO NOT EDIT ### submissions=dict() assignment_key="wLtf3SoiEeieSRL7rCBNJA" all_parts=["15mYc", "h1P6Y", "q9QW7","s7MpJ","Pa177"] ### ONLY FOR GRADING. DO NOT EDIT ### COURSERA_TOKEN = 'gF094cwtidz2YQpP' # the key provided to the Student under his/her email on submission page COURSERA_EMAIL = '[email protected]' # the email ``` ## Parameters for MC simulation of stock prices ``` S0 = 100 # initial stock price mu = 0.05 # drift sigma = 0.15 # volatility r = 0.03 # risk-free rate M = 1 # maturity T = 24 # number of time steps N_MC = 10000 # number of paths delta_t = M / T # time interval gamma = np.exp(- r * delta_t) # discount factor ``` ### Black-Scholes Simulation Simulate $N_{MC}$ stock price sample paths with $T$ steps by the classical Black-Scholes formula. $$dS_t=\mu S_tdt+\sigma S_tdW_t\quad\quad S_{t+1}=S_te^{\left(\mu-\frac{1}{2}\sigma^2\right)\Delta t+\sigma\sqrt{\Delta t}Z}$$ where $Z$ is a standard normal random variable. Based on simulated stock price $S_t$ paths, compute the state variable $X_t$ by the following relation. $$X_t=-\left(\mu-\frac{1}{2}\sigma^2\right)t\Delta t+\log S_t$$ Also compute $$\Delta S_t=S_{t+1}-e^{r\Delta t}S_t\quad\quad \Delta\hat{S}_t=\Delta S_t-\Delta\bar{S}_t\quad\quad t=0,...,T-1$$ where $\Delta\bar{S}_t$ is the sample mean of all values of $\Delta S_t$. Plots of 5 stock price $S_t$ and state variable $X_t$ paths are shown below. 
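A brief aside on where the exact-discretization update above comes from (added as a reading aid; it is not required for the graded parts). Applying Itô's lemma to $\log S_t$ under the Black-Scholes dynamics gives $$d\log S_t=\left(\mu-\frac{1}{2}\sigma^2\right)dt+\sigma dW_t,$$ so over one time step $\log S_{t+1}-\log S_t=\left(\mu-\frac{1}{2}\sigma^2\right)\Delta t+\sigma\sqrt{\Delta t}Z$, which exponentiates to the update formula used in the simulation. Substituting this into the definition of $X_t$ shows that the state variable follows a driftless random walk, $$X_{t+1}=X_t+\sigma\sqrt{\Delta t}Z,$$ which is exactly why $X_t$ is a convenient state variable for the MDP formulation.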
``` # make a dataset starttime = time.time() np.random.seed(42) # stock price S = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1)) S.loc[:,0] = S0 # standard normal random numbers RN = pd.DataFrame(np.random.randn(N_MC,T), index=range(1, N_MC+1), columns=range(1, T+1)) for t in range(1, T+1): S.loc[:,t] = S.loc[:,t-1] * np.exp((mu - 1/2 * sigma**2) * delta_t + sigma * np.sqrt(delta_t) * RN.loc[:,t]) delta_S = S.loc[:,1:T].values - np.exp(r * delta_t) * S.loc[:,0:T-1] delta_S_hat = delta_S.apply(lambda x: x - np.mean(x), axis=0) # state variable X = - (mu - 1/2 * sigma**2) * np.arange(T+1) * delta_t + np.log(S) # delta_t here is due to their conventions endtime = time.time() print('\nTime Cost:', endtime - starttime, 'seconds') # plot 10 paths step_size = N_MC // 10 idx_plot = np.arange(step_size, N_MC, step_size) plt.plot(S.T.iloc[:,idx_plot]) plt.xlabel('Time Steps') plt.title('Stock Price Sample Paths') plt.show() plt.plot(X.T.iloc[:,idx_plot]) plt.xlabel('Time Steps') plt.ylabel('State Variable') plt.show() ``` Define function *terminal_payoff* to compute the terminal payoff of a European put option. $$H_T\left(S_T\right)=\max\left(K-S_T,0\right)$$ ``` def terminal_payoff(ST, K): # ST final stock price # K strike payoff = max(K - ST, 0) return payoff type(delta_S) ``` ## Define spline basis functions ``` import bspline import bspline.splinelab as splinelab X_min = np.min(np.min(X)) X_max = np.max(np.max(X)) print('X.shape = ', X.shape) print('X_min, X_max = ', X_min, X_max) p = 4 # order of spline (as-is; 3 = cubic, 4: B-spline?) ncolloc = 12 tau = np.linspace(X_min,X_max,ncolloc) # These are the sites to which we would like to interpolate # k is a knot vector that adds endpoints repeats as appropriate for a spline of order p # To get meaninful results, one should have ncolloc >= p+1 k = splinelab.aptknt(tau, p) # Spline basis of order p on knots k basis = bspline.Bspline(k, p) f = plt.figure() # B = bspline.Bspline(k, p) # Spline basis functions print('Number of points k = ', len(k)) basis.plot() plt.savefig('Basis_functions.png', dpi=600) type(basis) X.values.shape ``` ### Make data matrices with feature values "Features" here are the values of basis functions at data points The outputs are 3D arrays of dimensions num_tSteps x num_MC x num_basis ``` num_t_steps = T + 1 num_basis = ncolloc # len(k) # data_mat_t = np.zeros((num_t_steps, N_MC,num_basis )) print('num_basis = ', num_basis) print('dim data_mat_t = ', data_mat_t.shape) t_0 = time.time() # fill it for i in np.arange(num_t_steps): x = X.values[:,i] data_mat_t[i,:,:] = np.array([ basis(el) for el in x ]) t_end = time.time() print('Computational time:', t_end - t_0, 'seconds') # save these data matrices for future re-use np.save('data_mat_m=r_A_%d' % N_MC, data_mat_t) print(data_mat_t.shape) # shape num_steps x N_MC x num_basis print(len(k)) ``` ## Dynamic Programming solution for QLBS The MDP problem in this case is to solve the following Bellman optimality equation for the action-value function. $$Q_t^\star\left(x,a\right)=\mathbb{E}_t\left[R_t\left(X_t,a_t,X_{t+1}\right)+\gamma\max_{a_{t+1}\in\mathcal{A}}Q_{t+1}^\star\left(X_{t+1},a_{t+1}\right)\space|\space X_t=x,a_t=a\right],\space\space t=0,...,T-1,\quad\gamma=e^{-r\Delta t}$$ where $R_t\left(X_t,a_t,X_{t+1}\right)$ is the one-step time-dependent random reward and $a_t\left(X_t\right)$ is the action (hedge). Detailed steps of solving this equation by Dynamic Programming are illustrated below. 
With this set of basis functions $\left\{\Phi_n\left(X_t^k\right)\right\}_{n=1}^N$, expand the optimal action (hedge) $a_t^\star\left(X_t\right)$ and optimal Q-function $Q_t^\star\left(X_t,a_t^\star\right)$ in basis functions with time-dependent coefficients. $$a_t^\star\left(X_t\right)=\sum_n^N{\phi_{nt}\Phi_n\left(X_t\right)}\quad\quad Q_t^\star\left(X_t,a_t^\star\right)=\sum_n^N{\omega_{nt}\Phi_n\left(X_t\right)}$$ Coefficients $\phi_{nt}$ and $\omega_{nt}$ are computed recursively backward in time for $t=T−1,...,0$. Coefficients for expansions of the optimal action $a_t^\star\left(X_t\right)$ are solved by $$\phi_t=\mathbf A_t^{-1}\mathbf B_t$$ where $\mathbf A_t$ and $\mathbf B_t$ are matrix and vector respectively with elements given by $$A_{nm}^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\Phi_m\left(X_t^k\right)\left(\Delta\hat{S}_t^k\right)^2}\quad\quad B_n^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\left[\hat\Pi_{t+1}^k\Delta\hat{S}_t^k+\frac{1}{2\gamma\lambda}\Delta S_t^k\right]}$$ $$\Delta S_t=S_{t+1} - e^{-r\Delta t} S_t\space \quad t=T-1,...,0$$ where $\Delta\hat{S}_t$ is the sample mean of all values of $\Delta S_t$. Define function *function_A* and *function_B* to compute the value of matrix $\mathbf A_t$ and vector $\mathbf B_t$. ## Define the option strike and risk aversion parameter ``` risk_lambda = 0.001 # risk aversion K = 100 # option stike # Note that we set coef=0 below in function function_B_vec. This correspond to a pure risk-based hedging ``` ### Part 1 Calculate coefficients $\phi_{nt}$ of the optimal action $a_t^\star\left(X_t\right)$ **Instructions:** - implement function_A_vec() which computes $A_{nm}^{\left(t\right)}$ matrix - implement function_B_vec() which computes $B_n^{\left(t\right)}$ column vector ``` # functions to compute optimal hedges def function_A_vec(t, delta_S_hat, data_mat, reg_param): """ function_A_vec - compute the matrix A_{nm} from Eq. (52) (with a regularization!) Eq. (52) in QLBS Q-Learner in the Black-Scholes-Merton article Arguments: t - time index, a scalar, an index into time axis of data_mat delta_S_hat - pandas.DataFrame of dimension N_MC x T data_mat - pandas.DataFrame of dimension T x N_MC x num_basis reg_param - a scalar, regularization parameter Return: - np.array, i.e. matrix A_{nm} of dimension num_basis x num_basis """ ### START CODE HERE ### (≈ 5-6 lines of code) # store result in A_mat for grading # # The cell above shows the equations we need # # Eq. (53) in QLBS Q-Learner in the Black-Scholes-Merton article we are trying to solve for # # Phi* = (At^-1)(Bt) # # # # This function solves for the A coeffecient, which is shown in the cell above, which is # # Eq. (52) in QLBS Q-Learner in the Black-Scholes-Merton article # # # # The article is located here # # https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3087076 # # Get the data matrix at this specific time index # Xt = data_mat[t,:,:] # # As shown in the description of the arguments in this function # # data_mat - pandas.DataFrame of dimension T x N_MC x num_basis # # # # We got Xt at a certain t time index, so # # Xt pandas.DataFrame of dimension N_MC x num_basis # # # # Therefore... # num_basis = Xt.shape[1] # # Now we need Delta S hat at this time index for the # # 'A' coefficient from the # # Eq. 
(52) in QLBS Q-Learner in the Black-Scholes-Merton article # # # # We are feed the parameter delta_S_hat into this function # # and # # delta_S_hat - pandas.DataFrame of dimension N_MC x T # # # # We what the delta_S_hat at this time index # # # # Therefore... # current_delta_S_hat = delta_S_hat.loc[:, t] # # The last term in the A coefficient calculation in the # # Eq. (52) in QLBS Q-Learner in the Black-Scholes-Merton article # # is delta_S_hat squared # # # # NOTE: There is .reshape(-1,1) which means that 1 for the columns # # MUST be respected, but the -1 for the rows means that whatever # # elements are left, fill it up to be whatever number. # current_delta_S_hat_squared = np.square(current_delta_S_hat).reshape( -1, 1) # # Now we have the terms to make up the equation. # # Eq. (52) in QLBS Q-Learner in the Black-Scholes-Merton article # # NOTE: The summation is not done in this function. # # NOTE: You do not see it in the equation # # Eq. (52) in QLBS Q-Learner in the Black-Scholes-Merton article # # but regularization is a technique used in Machine Learning. # # You add the term. # # np.eye() creates an identity matrix of size you specify. # # # # NOTE: When doing dot products, might have to transpose so the dimensions # # align. # A_mat = ( np.dot( Xt.T, Xt*current_delta_S_hat_squared ) # + # reg_param * np.eye(num_basis) ) X_mat = data_mat[t, :, :] num_basis_funcs = X_mat.shape[1] this_dS = delta_S_hat.loc[:, t] hat_dS2 = (this_dS ** 2).reshape(-1, 1) A_mat = np.dot(X_mat.T, X_mat * hat_dS2) + reg_param * np.eye(num_basis_funcs) ### END CODE HERE ### return A_mat def function_B_vec(t, Pi_hat, delta_S_hat=delta_S_hat, S=S, data_mat=data_mat_t, gamma=gamma, risk_lambda=risk_lambda): """ function_B_vec - compute vector B_{n} from Eq. (52) QLBS Q-Learner in the Black-Scholes-Merton article Arguments: t - time index, a scalar, an index into time axis of delta_S_hat Pi_hat - pandas.DataFrame of dimension N_MC x T of portfolio values delta_S_hat - pandas.DataFrame of dimension N_MC x T S - pandas.DataFrame of simulated stock prices of dimension N_MC x T data_mat - pandas.DataFrame of dimension T x N_MC x num_basis gamma - one time-step discount factor $exp(-r \delta t)$ risk_lambda - risk aversion coefficient, a small positive number Return: np.array() of dimension num_basis x 1 """ # coef = 1.0/(2 * gamma * risk_lambda) # override it by zero to have pure risk hedge ### START CODE HERE ### (≈ 5-6 lines of code) # store result in B_vec for grading # # Get the data matrix at this specific time index # Xt = data_mat[t,:,:] # # Computer the first term in the brackets. # first_term = Pi_hat[ :, t+1 ] * delta_S_hat.loc[:, t] # # NOTE: for the last term in the equation # # Eq. (52) QLBS Q-Learner in the Black-Scholes-Merton article # # # # would be # # last_term = 1.0/(2 * gamma * risk_lambda) * S.loc[:, t] # last_coefficient = 1.0/(2 * gamma * risk_lambda) # # # # But the instructions say make it equal override it by zero to have pure risk hedge # last_coefficient = 0 # last_term = last_coefficient * S.loc[:, t] # # Compute # second_factor = first_term + last_term # # Compute the equation # # NOTE: When doing dot products, might have to transpose so the dimensions # # align. 
# B_vec = np.dot(Xt.T, second_factor) tmp = Pi_hat.loc[:,t+1] * delta_S_hat.loc[:, t] X_mat = data_mat[t, :, :] # matrix of dimension N_MC x num_basis B_vec = np.dot(X_mat.T, tmp) ### END CODE HERE ### return B_vec ### GRADED PART (DO NOT EDIT) ### reg_param = 1e-3 np.random.seed(42) A_mat = function_A_vec(T-1, delta_S_hat, data_mat_t, reg_param) idx_row = np.random.randint(low=0, high=A_mat.shape[0], size=50) np.random.seed(42) idx_col = np.random.randint(low=0, high=A_mat.shape[1], size=50) part_1 = list(A_mat[idx_row, idx_col]) try: part1 = " ".join(map(repr, part_1)) except TypeError: part1 = repr(part_1) submissions[all_parts[0]]=part1 grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:1],all_parts,submissions) A_mat[idx_row, idx_col] ### GRADED PART (DO NOT EDIT) ### ### GRADED PART (DO NOT EDIT) ### np.random.seed(42) risk_lambda = 0.001 Pi = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1)) Pi.iloc[:,-1] = S.iloc[:,-1].apply(lambda x: terminal_payoff(x, K)) Pi_hat = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1)) Pi_hat.iloc[:,-1] = Pi.iloc[:,-1] - np.mean(Pi.iloc[:,-1]) B_vec = function_B_vec(T-1, Pi_hat, delta_S_hat, S, data_mat_t, gamma, risk_lambda) part_2 = list(B_vec) try: part2 = " ".join(map(repr, part_2)) except TypeError: part2 = repr(part_2) submissions[all_parts[1]]=part2 grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:2],all_parts,submissions) B_vec ### GRADED PART (DO NOT EDIT) ### ``` ## Compute optimal hedge and portfolio value Call *function_A* and *function_B* for $t=T-1,...,0$ together with basis function $\Phi_n\left(X_t\right)$ to compute optimal action $a_t^\star\left(X_t\right)=\sum_n^N{\phi_{nt}\Phi_n\left(X_t\right)}$ backward recursively with terminal condition $a_T^\star\left(X_T\right)=0$. Once the optimal hedge $a_t^\star\left(X_t\right)$ is computed, the portfolio value $\Pi_t$ could also be computed backward recursively by $$\Pi_t=\gamma\left[\Pi_{t+1}-a_t^\star\Delta S_t\right]\quad t=T-1,...,0$$ together with the terminal condition $\Pi_T=H_T\left(S_T\right)=\max\left(K-S_T,0\right)$ for a European put option. Also compute $\hat{\Pi}_t=\Pi_t-\bar{\Pi}_t$, where $\bar{\Pi}_t$ is the sample mean of all values of $\Pi_t$. Plots of 5 optimal hedge $a_t^\star$ and portfolio value $\Pi_t$ paths are shown below. 
``` starttime = time.time() # portfolio value Pi = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1)) Pi.iloc[:,-1] = S.iloc[:,-1].apply(lambda x: terminal_payoff(x, K)) Pi_hat = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1)) Pi_hat.iloc[:,-1] = Pi.iloc[:,-1] - np.mean(Pi.iloc[:,-1]) # optimal hedge a = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1)) a.iloc[:,-1] = 0 reg_param = 1e-3 # free parameter for t in range(T-1, -1, -1): A_mat = function_A_vec(t, delta_S_hat, data_mat_t, reg_param) B_vec = function_B_vec(t, Pi_hat, delta_S_hat, S, data_mat_t, gamma, risk_lambda) # print ('t = A_mat.shape = B_vec.shape = ', t, A_mat.shape, B_vec.shape) # coefficients for expansions of the optimal action phi = np.dot(np.linalg.inv(A_mat), B_vec) a.loc[:,t] = np.dot(data_mat_t[t,:,:],phi) Pi.loc[:,t] = gamma * (Pi.loc[:,t+1] - a.loc[:,t] * delta_S.loc[:,t]) Pi_hat.loc[:,t] = Pi.loc[:,t] - np.mean(Pi.loc[:,t]) a = a.astype('float') Pi = Pi.astype('float') Pi_hat = Pi_hat.astype('float') endtime = time.time() print('Computational time:', endtime - starttime, 'seconds') # plot 10 paths plt.plot(a.T.iloc[:,idx_plot]) plt.xlabel('Time Steps') plt.title('Optimal Hedge') plt.show() plt.plot(Pi.T.iloc[:,idx_plot]) plt.xlabel('Time Steps') plt.title('Portfolio Value') plt.show() ``` ## Compute rewards for all paths Once the optimal hedge $a_t^\star$ and portfolio value $\Pi_t$ are all computed, the reward function $R_t\left(X_t,a_t,X_{t+1}\right)$ could then be computed by $$R_t\left(X_t,a_t,X_{t+1}\right)=\gamma a_t\Delta S_t-\lambda Var\left[\Pi_t\space|\space\mathcal F_t\right]\quad t=0,...,T-1$$ with terminal condition $R_T=-\lambda Var\left[\Pi_T\right]$. Plot of 5 reward function $R_t$ paths is shown below. ``` # Compute rewards for all paths starttime = time.time() # reward function R = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1)) R.iloc[:,-1] = - risk_lambda * np.var(Pi.iloc[:,-1]) for t in range(T): R.loc[1:,t] = gamma * a.loc[1:,t] * delta_S.loc[1:,t] - risk_lambda * np.var(Pi.loc[1:,t]) endtime = time.time() print('\nTime Cost:', endtime - starttime, 'seconds') # plot 10 paths plt.plot(R.T.iloc[:, idx_plot]) plt.xlabel('Time Steps') plt.title('Reward Function') plt.show() ``` ## Part 2: Compute the optimal Q-function with the DP approach Coefficients for expansions of the optimal Q-function $Q_t^\star\left(X_t,a_t^\star\right)$ are solved by $$\omega_t=\mathbf C_t^{-1}\mathbf D_t$$ where $\mathbf C_t$ and $\mathbf D_t$ are matrix and vector respectively with elements given by $$C_{nm}^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\Phi_m\left(X_t^k\right)}\quad\quad D_n^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\left(R_t\left(X_t,a_t^\star,X_{t+1}\right)+\gamma\max_{a_{t+1}\in\mathcal{A}}Q_{t+1}^\star\left(X_{t+1},a_{t+1}\right)\right)}$$ Define function *function_C* and *function_D* to compute the value of matrix $\mathbf C_t$ and vector $\mathbf D_t$. **Instructions:** - implement function_C_vec() which computes $C_{nm}^{\left(t\right)}$ matrix - implement function_D_vec() which computes $D_n^{\left(t\right)}$ column vector ``` def function_C_vec(t, data_mat, reg_param): """ function_C_vec - calculate C_{nm} matrix from Eq. (56) (with a regularization!) Eq. 
(56) in QLBS Q-Learner in the Black-Scholes-Merton article Arguments: t - time index, a scalar, an index into time axis of data_mat data_mat - pandas.DataFrame of values of basis functions of dimension T x N_MC x num_basis reg_param - regularization parameter, a scalar Return: C_mat - np.array of dimension num_basis x num_basis """ ### START CODE HERE ### (≈ 5-6 lines of code) # your code here .... # C_mat = your code here ... X_mat = data_mat[t, :, :] num_basis_funcs = X_mat.shape[1] C_mat = np.dot(X_mat.T, X_mat) + reg_param * np.eye(num_basis_funcs) ### END CODE HERE ### return C_mat def function_D_vec(t, Q, R, data_mat, gamma=gamma): """ function_D_vec - calculate D_{nm} vector from Eq. (56) (with a regularization!) Eq. (56) in QLBS Q-Learner in the Black-Scholes-Merton article Arguments: t - time index, a scalar, an index into time axis of data_mat Q - pandas.DataFrame of Q-function values of dimension N_MC x T R - pandas.DataFrame of rewards of dimension N_MC x T data_mat - pandas.DataFrame of values of basis functions of dimension T x N_MC x num_basis gamma - one time-step discount factor $exp(-r \delta t)$ Return: D_vec - np.array of dimension num_basis x 1 """ ### START CODE HERE ### (≈ 5-6 lines of code) # your code here .... # D_vec = your code here ... X_mat = data_mat[t, :, :] D_vec = np.dot(X_mat.T, R.loc[:,t] + gamma * Q.loc[:, t+1]) ### END CODE HERE ### return D_vec ### GRADED PART (DO NOT EDIT) ### C_mat = function_C_vec(T-1, data_mat_t, reg_param) np.random.seed(42) idx_row = np.random.randint(low=0, high=C_mat.shape[0], size=50) np.random.seed(42) idx_col = np.random.randint(low=0, high=C_mat.shape[1], size=50) part_3 = list(C_mat[idx_row, idx_col]) try: part3 = " ".join(map(repr, part_3)) except TypeError: part3 = repr(part_3) submissions[all_parts[2]]=part3 grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:3],all_parts,submissions) C_mat[idx_row, idx_col] ### GRADED PART (DO NOT EDIT) ### ### GRADED PART (DO NOT EDIT) ### Q = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1)) Q.iloc[:,-1] = - Pi.iloc[:,-1] - risk_lambda * np.var(Pi.iloc[:,-1]) D_vec = function_D_vec(T-1, Q, R, data_mat_t,gamma) part_4 = list(D_vec) try: part4 = " ".join(map(repr, part_4)) except TypeError: part4 = repr(part_4) submissions[all_parts[3]]=part4 grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:4],all_parts,submissions) D_vec ### GRADED PART (DO NOT EDIT) ### ``` Call *function_C* and *function_D* for $t=T-1,...,0$ together with basis function $\Phi_n\left(X_t\right)$ to compute optimal action Q-function $Q_t^\star\left(X_t,a_t^\star\right)=\sum_n^N{\omega_{nt}\Phi_n\left(X_t\right)}$ backward recursively with terminal condition $Q_T^\star\left(X_T,a_T=0\right)=-\Pi_T\left(X_T\right)-\lambda Var\left[\Pi_T\left(X_T\right)\right]$. 
``` starttime = time.time() # Q function Q = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1)) Q.iloc[:,-1] = - Pi.iloc[:,-1] - risk_lambda * np.var(Pi.iloc[:,-1]) reg_param = 1e-3 for t in range(T-1, -1, -1): ###################### C_mat = function_C_vec(t,data_mat_t,reg_param) D_vec = function_D_vec(t, Q,R,data_mat_t,gamma) omega = np.dot(np.linalg.inv(C_mat), D_vec) Q.loc[:,t] = np.dot(data_mat_t[t,:,:], omega) Q = Q.astype('float') endtime = time.time() print('\nTime Cost:', endtime - starttime, 'seconds') # plot 10 paths plt.plot(Q.T.iloc[:, idx_plot]) plt.xlabel('Time Steps') plt.title('Optimal Q-Function') plt.show() ``` The QLBS option price is given by $C_t^{\left(QLBS\right)}\left(S_t,ask\right)=-Q_t\left(S_t,a_t^\star\right)$ ## Summary of the QLBS pricing and comparison with the BSM pricing Compare the QLBS price to European put price given by Black-Sholes formula. $$C_t^{\left(BS\right)}=Ke^{-r\left(T-t\right)}\mathcal N\left(-d_2\right)-S_t\mathcal N\left(-d_1\right)$$ ``` # The Black-Scholes prices def bs_put(t, S0=S0, K=K, r=r, sigma=sigma, T=M): d1 = (np.log(S0/K) + (r + 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t) d2 = (np.log(S0/K) + (r - 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t) price = K * np.exp(-r * (T-t)) * norm.cdf(-d2) - S0 * norm.cdf(-d1) return price def bs_call(t, S0=S0, K=K, r=r, sigma=sigma, T=M): d1 = (np.log(S0/K) + (r + 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t) d2 = (np.log(S0/K) + (r - 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t) price = S0 * norm.cdf(d1) - K * np.exp(-r * (T-t)) * norm.cdf(d2) return price ``` ## The DP solution for QLBS ``` # QLBS option price C_QLBS = - Q.copy() print('-------------------------------------------') print(' QLBS Option Pricing (DP solution) ') print('-------------------------------------------\n') print('%-25s' % ('Initial Stock Price:'), S0) print('%-25s' % ('Drift of Stock:'), mu) print('%-25s' % ('Volatility of Stock:'), sigma) print('%-25s' % ('Risk-free Rate:'), r) print('%-25s' % ('Risk aversion parameter: '), risk_lambda) print('%-25s' % ('Strike:'), K) print('%-25s' % ('Maturity:'), M) print('%-26s %.4f' % ('\nQLBS Put Price: ', C_QLBS.iloc[0,0])) print('%-26s %.4f' % ('\nBlack-Sholes Put Price:', bs_put(0))) print('\n') # plot 10 paths plt.plot(C_QLBS.T.iloc[:,idx_plot]) plt.xlabel('Time Steps') plt.title('QLBS Option Price') plt.show() ### GRADED PART (DO NOT EDIT) ### part5 = str(C_QLBS.iloc[0,0]) submissions[all_parts[4]]=part5 grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:5],all_parts,submissions) C_QLBS.iloc[0,0] ### GRADED PART (DO NOT EDIT) ### ``` ### make a summary picture ``` # plot: Simulated S_t and X_t values # optimal hedge and portfolio values # rewards and optimal Q-function f, axarr = plt.subplots(3, 2) f.subplots_adjust(hspace=.5) f.set_figheight(8.0) f.set_figwidth(8.0) axarr[0, 0].plot(S.T.iloc[:,idx_plot]) axarr[0, 0].set_xlabel('Time Steps') axarr[0, 0].set_title(r'Simulated stock price $S_t$') axarr[0, 1].plot(X.T.iloc[:,idx_plot]) axarr[0, 1].set_xlabel('Time Steps') axarr[0, 1].set_title(r'State variable $X_t$') axarr[1, 0].plot(a.T.iloc[:,idx_plot]) axarr[1, 0].set_xlabel('Time Steps') axarr[1, 0].set_title(r'Optimal action $a_t^{\star}$') axarr[1, 1].plot(Pi.T.iloc[:,idx_plot]) axarr[1, 1].set_xlabel('Time Steps') axarr[1, 1].set_title(r'Optimal portfolio $\Pi_t$') axarr[2, 0].plot(R.T.iloc[:,idx_plot]) axarr[2, 0].set_xlabel('Time Steps') axarr[2, 0].set_title(r'Rewards $R_t$') axarr[2, 1].plot(Q.T.iloc[:,idx_plot]) axarr[2, 
1].set_xlabel('Time Steps') axarr[2, 1].set_title(r'Optimal DP Q-function $Q_t^{\star}$') # plt.savefig('QLBS_DP_summary_graphs_ATM_option_mu=r.png', dpi=600) # plt.savefig('QLBS_DP_summary_graphs_ATM_option_mu>r.png', dpi=600) #plt.savefig('QLBS_DP_summary_graphs_ATM_option_mu>r.png', dpi=600) plt.savefig('r.png', dpi=600) plt.show() # plot convergence to the Black-Scholes values # lam = 0.0001, Q = 4.1989 +/- 0.3612 # 4.378 # lam = 0.001: Q = 4.9004 +/- 0.1206 # Q=6.283 # lam = 0.005: Q = 8.0184 +/- 0.9484 # Q = 14.7489 # lam = 0.01: Q = 11.9158 +/- 2.2846 # Q = 25.33 lam_vals = np.array([0.0001, 0.001, 0.005, 0.01]) # Q_vals = np.array([3.77, 3.81, 4.57, 7.967,12.2051]) Q_vals = np.array([4.1989, 4.9004, 8.0184, 11.9158]) Q_std = np.array([0.3612,0.1206, 0.9484, 2.2846]) BS_price = bs_put(0) # f, axarr = plt.subplots(1, 1) fig, ax = plt.subplots(1, 1) f.subplots_adjust(hspace=.5) f.set_figheight(4.0) f.set_figwidth(4.0) # ax.plot(lam_vals,Q_vals) ax.errorbar(lam_vals, Q_vals, yerr=Q_std, fmt='o') ax.set_xlabel('Risk aversion') ax.set_ylabel('Optimal option price') ax.set_title(r'Optimal option price vs risk aversion') ax.axhline(y=BS_price,linewidth=2, color='r') textstr = 'BS price = %2.2f'% (BS_price) props = dict(boxstyle='round', facecolor='wheat', alpha=0.5) # place a text box in upper left in axes coords ax.text(0.05, 0.95, textstr, fontsize=11,transform=ax.transAxes, verticalalignment='top', bbox=props) plt.savefig('Opt_price_vs_lambda_Markowitz.png') plt.show() ```
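As a closing sanity check (an addition to the original notebook, not part of the graded assignment), two cheap consistency tests can be run against objects that already exist above: the simulated terminal prices should match the lognormal mean $\mathbb{E}[S_M]=S_0e^{\mu M}$ up to Monte Carlo noise, and the helper functions `bs_put`/`bs_call` should satisfy put-call parity $C-P=S_0-Ke^{-rM}$. The snippet below is a sketch under those assumptions only.

```
# Sanity checks (sketch): Monte Carlo drift of S_T and put-call parity of the BS helpers
sim_mean_ST = S.iloc[:, -1].astype(float).mean()
print('simulated E[S_T] = %.4f vs closed form %.4f' % (sim_mean_ST, S0 * np.exp(mu * M)))

parity_lhs = bs_call(0) - bs_put(0)
parity_rhs = S0 - K * np.exp(-r * M)
print('put-call parity: C - P = %.4f vs S0 - K*exp(-r*M) = %.4f' % (parity_lhs, parity_rhs))
```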
github_jupyter
### Installation `devtools::install_github("zji90/SCRATdatahg19")` `source("https://raw.githubusercontent.com/zji90/SCRATdata/master/installcode.R")` ### Import packages ``` library(devtools) library(GenomicAlignments) library(Rsamtools) library(SCRATdatahg19) library(SCRAT) ``` ### Obtain Feature Matrix ``` start_time = Sys.time() metadata <- read.table('./input/metadata.tsv', header = TRUE, stringsAsFactors=FALSE,quote="",row.names=1) SCRATsummary <- function (dir = "", genome, bamfile = NULL, singlepair = "automated", removeblacklist = T, log2transform = T, adjustlen = T, featurelist = c("GENE", "ENCL", "MOTIF_TRANSFAC", "MOTIF_JASPAR", "GSEA"), customfeature = NULL, Genestarttype = "TSSup", Geneendtype = "TSSdown", Genestartbp = 3000, Geneendbp = 1000, ENCLclunum = 2000, Motifflank = 100, GSEAterm = "c5.bp", GSEAstarttype = "TSSup", GSEAendtype = "TSSdown", GSEAstartbp = 3000, GSEAendbp = 1000) { if (is.null(bamfile)) { bamfile <- list.files(dir, pattern = ".bam$") } datapath <- system.file("extdata", package = paste0("SCRATdata", genome)) bamdata <- list() for (i in bamfile) { filepath <- file.path(dir, i) if (singlepair == "automated") { bamfile <- BamFile(filepath) tmpsingle <- readGAlignments(bamfile) tmppair <- readGAlignmentPairs(bamfile) pairendtf <- testPairedEndBam(bamfile) if (pairendtf) { tmp <- tmppair startpos <- pmin(start(first(tmp)), start(last(tmp))) endpos <- pmax(end(first(tmp)), end(last(tmp))) id <- which(!is.na(as.character(seqnames(tmp)))) tmp <- GRanges(seqnames=as.character(seqnames(tmp))[id],IRanges(start=startpos[id],end=endpos[id])) } else { tmp <- GRanges(tmpsingle) } } else if (singlepair == "single") { tmp <- GRanges(readGAlignments(filepath)) } else if (singlepair == "pair") { tmp <- readGAlignmentPairs(filepath) startpos <- pmin(start(first(tmp)), start(last(tmp))) endpos <- pmax(end(first(tmp)), end(last(tmp))) id <- which(!is.na(as.character(seqnames(tmp)))) tmp <- GRanges(seqnames=as.character(seqnames(tmp))[id],IRanges(start=startpos[id],end=endpos[id])) } if (removeblacklist) { load(paste0(datapath, "/gr/blacklist.rda")) tmp <- tmp[-as.matrix(findOverlaps(tmp, gr))[, 1], ] } bamdata[[i]] <- tmp } bamsummary <- sapply(bamdata, length) allres <- NULL datapath <- system.file("extdata", package = paste0("SCRATdata", genome)) if ("GENE" %in% featurelist) { print("Processing GENE features") load(paste0(datapath, "/gr/generegion.rda")) if (Genestarttype == "TSSup") { grstart <- ifelse(as.character(strand(gr)) == "+", start(gr) - as.numeric(Genestartbp), end(gr) + as.numeric(Genestartbp)) } else if (Genestarttype == "TSSdown") { grstart <- ifelse(as.character(strand(gr)) == "+", start(gr) + as.numeric(Genestartbp), end(gr) - as.numeric(Genestartbp)) } else if (Genestarttype == "TESup") { grstart <- ifelse(as.character(strand(gr)) == "+", end(gr) - as.numeric(Genestartbp), start(gr) + as.numeric(Genestartbp)) } else if (Genestarttype == "TESdown") { grstart <- ifelse(as.character(strand(gr)) == "+", end(gr) + as.numeric(Genestartbp), start(gr) - as.numeric(Genestartbp)) } if (Geneendtype == "TSSup") { grend <- ifelse(as.character(strand(gr)) == "+", start(gr) - as.numeric(Geneendbp), end(gr) + as.numeric(Geneendbp)) } else if (Geneendtype == "TSSdown") { grend <- ifelse(as.character(strand(gr)) == "+", start(gr) + as.numeric(Geneendbp), end(gr) - as.numeric(Geneendbp)) } else if (Geneendtype == "TESup") { grend <- ifelse(as.character(strand(gr)) == "+", end(gr) - as.numeric(Geneendbp), start(gr) + as.numeric(Geneendbp)) } else if (Geneendtype == "TESdown") 
{ grend <- ifelse(as.character(strand(gr)) == "+", end(gr) + as.numeric(Geneendbp), start(gr) - as.numeric(Geneendbp)) } ngr <- names(gr) gr <- GRanges(seqnames = seqnames(gr), IRanges(start = pmin(grstart, grend), end = pmax(grstart, grend))) names(gr) <- ngr tmp <- sapply(bamdata, function(i) countOverlaps(gr, i)) tmp <- sweep(tmp, 2, bamsummary, "/") * 10000 if (log2transform) { tmp <- log2(tmp + 1) } if (adjustlen) { grrange <- end(gr) - start(gr) + 1 tmp <- sweep(tmp, 1, grrange, "/") * 1e+06 } tmp <- tmp[rowSums(tmp) > 0, , drop = F] allres <- rbind(allres, tmp) } if ("ENCL" %in% featurelist) { print("Processing ENCL features") load(paste0(datapath, "/gr/ENCL", ENCLclunum, ".rda")) tmp <- sapply(bamdata, function(i) countOverlaps(gr, i)) tmp <- sweep(tmp, 2, bamsummary, "/") * 10000 if (log2transform) { tmp <- log2(tmp + 1) } if (adjustlen) { grrange <- sapply(gr, function(i) sum(end(i) - start(i) + 1)) tmp <- sweep(tmp, 1, grrange, "/") * 1e+06 } tmp <- tmp[rowSums(tmp) > 0, , drop = F] allres <- rbind(allres, tmp) } if ("MOTIF_TRANSFAC" %in% featurelist) { print("Processing MOTIF_TRANSFAC features") load(paste0(datapath, "/gr/transfac1.rda")) gr <- flank(gr, as.numeric(Motifflank), both = T) tmp <- sapply(bamdata, function(i) countOverlaps(gr, i)) tmp <- sweep(tmp, 2, bamsummary, "/") * 10000 if (log2transform) { tmp <- log2(tmp + 1) } if (adjustlen) { grrange <- sapply(gr, function(i) sum(end(i) - start(i) + 1)) tmp <- sweep(tmp, 1, grrange, "/") * 1e+06 } tmp <- tmp[rowSums(tmp) > 0, , drop = F] allres <- rbind(allres, tmp) load(paste0(datapath, "/gr/transfac2.rda")) gr <- flank(gr, as.numeric(Motifflank), both = T) tmp <- sapply(bamdata, function(i) countOverlaps(gr, i)) tmp <- sweep(tmp, 2, bamsummary, "/") * 10000 if (log2transform) { tmp <- log2(tmp + 1) } if (adjustlen) { grrange <- sapply(gr, function(i) sum(end(i) - start(i) + 1)) tmp <- sweep(tmp, 1, grrange, "/") * 1e+06 } tmp <- tmp[rowSums(tmp) > 0, , drop = F] allres <- rbind(allres, tmp) if (genome %in% c("hg19", "hg38")) { load(paste0(datapath, "/gr/transfac3.rda")) gr <- flank(gr, as.numeric(Motifflank), both = T) tmp <- sapply(bamdata, function(i) countOverlaps(gr, i)) tmp <- sweep(tmp, 2, bamsummary, "/") * 10000 if (log2transform) { tmp <- log2(tmp + 1) } if (adjustlen) { grrange <- sapply(gr, function(i) sum(end(i) - start(i) + 1)) tmp <- sweep(tmp, 1, grrange, "/") * 1e+06 } tmp <- tmp[rowSums(tmp) > 0, , drop = F] allres <- rbind(allres, tmp) } } if ("MOTIF_JASPAR" %in% featurelist) { print("Processing MOTIF_JASPAR features") load(paste0(datapath, "/gr/jaspar1.rda")) gr <- flank(gr, as.numeric(Motifflank), both = T) tmp <- sapply(bamdata, function(i) countOverlaps(gr, i)) tmp <- sweep(tmp, 2, bamsummary, "/") * 10000 if (log2transform) { tmp <- log2(tmp + 1) } if (adjustlen) { grrange <- sapply(gr, function(i) sum(end(i) - start(i) + 1)) tmp <- sweep(tmp, 1, grrange, "/") * 1e+06 } tmp <- tmp[rowSums(tmp) > 0, , drop = F] allres <- rbind(allres, tmp) load(paste0(datapath, "/gr/jaspar2.rda")) gr <- flank(gr, as.numeric(Motifflank), both = T) tmp <- sapply(bamdata, function(i) countOverlaps(gr, i)) tmp <- sweep(tmp, 2, bamsummary, "/") * 10000 if (log2transform) { tmp <- log2(tmp + 1) } if (adjustlen) { grrange <- sapply(gr, function(i) sum(end(i) - start(i) + 1)) tmp <- sweep(tmp, 1, grrange, "/") * 1e+06 } tmp <- tmp[rowSums(tmp) > 0, , drop = F] allres <- rbind(allres, tmp) } if ("GSEA" %in% featurelist) { print("Processing GSEA features") for (i in GSEAterm) { load(paste0(datapath, "/gr/GSEA", i, ".rda")) 
allgr <- gr for (sgrn in names(allgr)) { gr <- allgr[[sgrn]] if (GSEAstarttype == "TSSup") { grstart <- ifelse(as.character(strand(gr)) == "+", start(gr) - as.numeric(GSEAstartbp), end(gr) + as.numeric(GSEAstartbp)) } else if (GSEAstarttype == "TSSdown") { grstart <- ifelse(as.character(strand(gr)) == "+", start(gr) + as.numeric(GSEAstartbp), end(gr) - as.numeric(GSEAstartbp)) } else if (GSEAstarttype == "TESup") { grstart <- ifelse(as.character(strand(gr)) == "+", end(gr) - as.numeric(GSEAstartbp), start(gr) + as.numeric(GSEAstartbp)) } else if (GSEAstarttype == "TESdown") { grstart <- ifelse(as.character(strand(gr)) == "+", end(gr) + as.numeric(GSEAstartbp), start(gr) - as.numeric(GSEAstartbp)) } if (GSEAendtype == "TSSup") { grend <- ifelse(as.character(strand(gr)) == "+", start(gr) - as.numeric(GSEAendbp), end(gr) + as.numeric(GSEAendbp)) } else if (GSEAendtype == "TSSdown") { grend <- ifelse(as.character(strand(gr)) == "+", start(gr) + as.numeric(GSEAendbp), end(gr) - as.numeric(GSEAendbp)) } else if (GSEAendtype == "TESup") { grend <- ifelse(as.character(strand(gr)) == "+", end(gr) - as.numeric(GSEAendbp), start(gr) + as.numeric(GSEAendbp)) } else if (GSEAendtype == "TESdown") { grend <- ifelse(as.character(strand(gr)) == "+", end(gr) + as.numeric(GSEAendbp), start(gr) - as.numeric(GSEAendbp)) } ngr <- names(gr) gr <- GRanges(seqnames = seqnames(gr), IRanges(start = pmin(grstart, grend), end = pmax(grstart, grend))) names(gr) <- ngr allgr[[sgrn]] <- gr } gr <- allgr tmp <- sapply(bamdata, function(i) countOverlaps(gr, i)) tmp <- sweep(tmp, 2, bamsummary, "/") * 10000 if (log2transform) { tmp <- log2(tmp + 1) } if (adjustlen) { grrange <- sapply(gr, function(i) sum(end(i) - start(i) + 1)) tmp <- sweep(tmp, 1, grrange, "/") * 1e+06 } tmp <- tmp[rowSums(tmp) > 0, , drop = F] allres <- rbind(allres, tmp) } } if ("Custom" %in% featurelist) { print("Processing custom features") gr <- read.table(customfeature, as.is = T, sep = "\t") gr <- GRanges(seqnames = gr[, 1], IRanges(start = gr[, 2], end = gr[, 3])) tmp <- sapply(bamdata, function(i) countOverlaps(gr, i)) tmp <- sweep(tmp, 2, bamsummary, "/") * 10000 if (log2transform) { tmp <- log2(tmp + 1) } if (adjustlen) { grrange <- end(gr) - start(gr) + 1 tmp <- sweep(tmp, 1, grrange, "/") * 1e+06 } tmp <- tmp[rowSums(tmp) > 0, , drop = F] allres <- rbind(allres, tmp) } allres } df_out <- SCRATsummary(dir = "./input/sc-bams_nodup/", genome = "hg19", featurelist="MOTIF_JASPAR", log2transform = FALSE, adjustlen = FALSE, removeblacklist=FALSE) end_time <- Sys.time() end_time - start_time dim(df_out) df_out[1:5,1:5] colnames(df_out) = sapply(strsplit(colnames(df_out), "\\."),'[',1) dim(df_out) df_out[1:5,1:5] if(! all(colnames(df_out) == rownames(metadata))){ df_out = df_out[,rownames(metadata)] dim(df_out) df_out[1:5,1:5] } dim(df_out) df_out[1:5,1:5] saveRDS(df_out, file = './output/feature_matrices/FM_SCRAT_buenrostro2018_no_blacklist.rds') sessionInfo() save.image(file = 'SCRAT_buenrostro2018.RData') ```
github_jupyter
# Assignment 2.1 - Neural Networks In this assignment you will implement and train a real neural network with your own hands! In a sense, this will be an extension of the previous assignment - we just need to stack several linear classifiers together! <img src="https://i.redd.it/n9fgba8b0qr01.png" alt="Stack_more_layers" width="400px"/> ``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline %load_ext autoreload %autoreload 2 from dataset import load_svhn, random_split_train_val from gradient_check import check_layer_gradient, check_layer_param_gradient, check_model_gradient from layers import FullyConnectedLayer, ReLULayer from model import TwoLayerNet from trainer import Trainer, Dataset from optim import SGD, MomentumSGD from metrics import multiclass_accuracy ``` # Loading the data And splitting it into training and validation sets. ``` def prepare_for_neural_network(train_X, test_X): train_flat = train_X.reshape(train_X.shape[0], -1).astype(np.float) / 255.0 test_flat = test_X.reshape(test_X.shape[0], -1).astype(np.float) / 255.0 # Subtract mean mean_image = np.mean(train_flat, axis = 0) train_flat -= mean_image test_flat -= mean_image return train_flat, test_flat train_X, train_y, test_X, test_y = load_svhn("data", max_train=10000, max_test=1000) train_X, test_X = prepare_for_neural_network(train_X, test_X) # Split train into train and val train_X, train_y, val_X, val_y = random_split_train_val(train_X, train_y, num_val = 1000) ``` # As always, we start with the building blocks We will implement the layers we need one by one. Each layer must implement: - a forward pass, which produces the layer's output from its input and stores whatever data it needs - a backward pass, which receives the gradient with respect to the layer's output and computes the gradients with respect to the input and to the parameters We start with ReLU, which has no parameters. ``` # TODO: Implement ReLULayer layer in layers.py # Note: you'll need to copy implementation of the gradient_check function from the previous assignment X = np.array([[1,-2,3], [-1, 2, 0.1] ]) assert check_layer_gradient(ReLULayer(), X) ``` Now let's implement the fully connected layer, which will have two parameter arrays: W (weights) and B (bias). For their parameters, all our layers will use a dedicated `Param` class, which stores the parameter values and the gradients of those parameters computed during the backward pass. This makes it possible to accumulate (sum) gradients coming from different parts of the loss function, for example from the cross-entropy loss and the regularization loss. ``` # TODO: Implement FullyConnected layer forward and backward methods assert check_layer_gradient(FullyConnectedLayer(3, 4), X) # TODO: Implement storing gradients for W and B assert check_layer_param_gradient(FullyConnectedLayer(3, 4), X, 'W') assert check_layer_param_gradient(FullyConnectedLayer(3, 4), X, 'B') ``` ## Building the neural network Now we will implement the simplest neural network with two fully connected layers and a ReLU non-linearity. Implement the `compute_loss_and_gradients` function: it should run the forward and backward passes through both layers to compute the gradients. Don't forget to zero out the gradients at the beginning of the function. 
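Before filling in `model.py`, it can help to see the whole forward/backward chain in one place. The snippet below is a self-contained, plain-numpy illustration of the flow that `compute_loss_and_gradients` has to follow (FC layer → ReLU → FC layer → softmax with cross-entropy, then backprop in reverse order). It deliberately does not use the assignment's `FullyConnectedLayer`/`Param` API and it omits the L2 regularization terms, so treat it as a sketch of the idea rather than the reference solution.

```
# Standalone illustration (plain numpy, NOT the assignment's API) of the
# forward/backward flow: FC1 -> ReLU -> FC2 -> softmax cross-entropy -> backprop.
import numpy as np

def softmax_cross_entropy(logits, y):
    # numerically stable softmax + mean cross-entropy; returns loss and d_logits
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    n = logits.shape[0]
    loss = -np.log(probs[np.arange(n), y]).mean()
    d_logits = probs.copy()
    d_logits[np.arange(n), y] -= 1
    return loss, d_logits / n

rng = np.random.RandomState(0)
Xb, yb = rng.randn(4, 6), rng.randint(0, 3, size=4)   # toy batch: 4 samples, 6 features, 3 classes
W1, b1 = 0.01 * rng.randn(6, 5), np.zeros(5)
W2, b2 = 0.01 * rng.randn(5, 3), np.zeros(3)

# forward pass
h_pre = Xb @ W1 + b1
h = np.maximum(h_pre, 0)                              # ReLU
logits = h @ W2 + b2
loss, d_logits = softmax_cross_entropy(logits, yb)

# backward pass (reverse order of the forward pass)
dW2, db2 = h.T @ d_logits, d_logits.sum(axis=0)
dh = d_logits @ W2.T
dh_pre = dh * (h_pre > 0)                             # gradient through ReLU
dW1, db1 = Xb.T @ dh_pre, dh_pre.sum(axis=0)
print(loss, dW1.shape, dW2.shape)
```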
``` # TODO: In model.py, implement compute_loss_and_gradients function model = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 3, reg = 0) loss = model.compute_loss_and_gradients(train_X[:2], train_y[:2]) # TODO Now implement backward pass and aggregate all of the params check_model_gradient(model, train_X[:2], train_y[:2]) ``` Now add regularization to the model - it should be added to the loss and make its own contribution to the gradients. ``` # TODO Now implement l2 regularization in the forward and backward pass model_with_reg = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 3, reg = 1e1) loss_with_reg = model_with_reg.compute_loss_and_gradients(train_X[:2], train_y[:2]) assert loss_with_reg > loss and not np.isclose(loss_with_reg, loss), \ "Loss with regularization (%2.4f) should be higher than without it (%2.4f)!" % (loss, loss_with_reg) check_model_gradient(model_with_reg, train_X[:2], train_y[:2]) ``` We will also implement the prediction function (computing the model's output) on new data. What accuracy value do we expect to see before training begins? ``` # Finally, implement predict function! # TODO: Implement predict function # What would be the value we expect? multiclass_accuracy(model_with_reg.predict(train_X[:30]), train_y[:30]) ``` # Let's finish the code for the training process ``` model = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 100, reg = 2e-3); dataset = Dataset(train_X, train_y, val_X, val_y); trainer = Trainer(model, dataset, SGD(), num_epochs=100, batch_size=100, learning_rate=5e-1, learning_rate_decay= 0.95); # TODO Implement missing pieces in Trainer.fit function # You should expect loss to go down and train and val accuracy go up for every epoch loss_history, train_history, val_history = trainer.fit() train_X[model.predict(train_X) != 1] train_y def ReLU(x): if x <= 0: return 0; else: return x; ReLU_vec = np.vectorize(ReLU); train_X[ReLU_vec(train_X) != 0] val_X_W = model.first.forward(val_X) val_X_W model.second.forward(model.ReLU.forward(val_X_W)) plt.plot(train_history) plt.plot(val_history) plt.plot(loss_history) ``` # Improving the training process We will implement several key optimizations needed for training modern neural networks. ## Reducing the learning rate (learning rate decay) One of the necessary optimizations when training neural networks is gradually reducing the learning rate over the course of training. One standard method is to reduce the learning rate every N epochs by a factor d (often called the decay). The values of N and d, as always, are hyperparameters and should be chosen based on performance on the validation data. In our case N will be equal to 1. ``` # TODO Implement learning rate decay inside Trainer.fit method # Decay should happen once per epoch model = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 100, reg = 1e-3) dataset = Dataset(train_X, train_y, val_X, val_y) trainer = Trainer(model, dataset, SGD(), num_epochs=10, batch_size=100, learning_rate=5e-1, learning_rate_decay=0.99) initial_learning_rate = trainer.learning_rate loss_history, train_history, val_history = trainer.fit() assert trainer.learning_rate < initial_learning_rate, "Learning rate should've been reduced" assert trainer.learning_rate > 0.5*initial_learning_rate, "Learning rate shouldn'tve been reduced that much!" 
``` # Momentum accumulation (Momentum SGD) Another big class of optimizations is the use of more efficient gradient descent methods. We will implement one of them - momentum accumulation (Momentum SGD). This method keeps a velocity of the movement, uses the gradient to change it at every step, and updates the weights proportionally to the value of the velocity. (Physical analogy: instead of a velocity, the gradients now define an acceleration, but there is also a friction force.) ``` velocity = momentum * velocity - learning_rate * gradient w = w + velocity ``` `momentum` here is a decay coefficient, which is also a hyperparameter (fortunately, there is often a good default value for it; the typical range is 0.8-0.99). A few useful links where the method is covered in more detail: http://cs231n.github.io/neural-networks-3/#sgd https://distill.pub/2017/momentum/ ``` # TODO: Implement MomentumSGD.update function in optim.py model = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 100, reg = 1e-3) dataset = Dataset(train_X, train_y, val_X, val_y) trainer = Trainer(model, dataset, MomentumSGD(), num_epochs=10, batch_size=30, learning_rate=5e-2, learning_rate_decay=0.99) # You should see even better results than before! loss_history, train_history, val_history = trainer.fit() ``` # Alright, let's finally train the network! ## The last test - overfitting on a small dataset A good way to check that everything is implemented correctly is to overfit the network on a small dataset. Our model has enough capacity to fit a small dataset perfectly, so we expect to quickly reach 100% accuracy on the training set. If that does not happen, a mistake was made somewhere! ``` data_size = 15 model = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 100, reg = 1e-1) dataset = Dataset(train_X[:data_size], train_y[:data_size], val_X[:data_size], val_y[:data_size]) trainer = Trainer(model, dataset, SGD(), learning_rate=1e-1, num_epochs=80, batch_size=5) # You should expect this to reach 1.0 training accuracy loss_history, train_history, val_history = trainer.fit() ``` Now let's find hyperparameters for which this process converges faster. If everything is implemented correctly, there are parameters for which the process converges in **20** epochs or even faster. Find them! ``` # Now, tweak some hyper parameters and make it train to 1.0 accuracy in 20 epochs or less model = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 100, reg = 0) dataset = Dataset(train_X[:data_size], train_y[:data_size], val_X[:data_size], val_y[:data_size]) # TODO: Change any hyperparameters or optimizers to reach training accuracy in 20 epochs trainer = Trainer(model, dataset, SGD(), learning_rate=1e-1, num_epochs=20, batch_size=3) loss_history, train_history, val_history = trainer.fit() ``` # And now, the main event! Train the best neural network you can! You can add and change parameters, change the number of neurons in the network's layers, and experiment however you like. Achieve accuracy better than **40%** on the validation set. 
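One possible way to organize that search is a small sweep over learning rate, regularization strength and hidden layer size, reusing the `TwoLayerNet`/`Dataset`/`Trainer` classes already imported in this notebook. The value grids below are arbitrary examples rather than recommended settings - treat this as a sketch.

```
# Hyperparameter sweep sketch (example value grids, adjust freely)
best_val_accuracy = 0
best_classifier = None
for lr in [1e-1, 5e-2]:
    for reg in [1e-3, 1e-4]:
        for hidden in [64, 128]:
            model = TwoLayerNet(n_input=train_X.shape[1], n_output=10,
                                hidden_layer_size=hidden, reg=reg)
            dataset = Dataset(train_X, train_y, val_X, val_y)
            trainer = Trainer(model, dataset, MomentumSGD(),
                              num_epochs=30, batch_size=64,
                              learning_rate=lr, learning_rate_decay=0.99)
            _, _, val_history = trainer.fit()
            if val_history[-1] > best_val_accuracy:
                best_val_accuracy = val_history[-1]
                best_classifier = model
print('best validation accuracy achieved: %f' % best_val_accuracy)
```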
``` # Let's train the best one-hidden-layer network we can learning_rates = 1e-4 reg_strength = 1e-3 learning_rate_decay = 0.999 hidden_layer_size = 128 num_epochs = 200 batch_size = 64 best_classifier = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 100, reg = 1e-3); dataset = Dataset(train_X, train_y, val_X, val_y); trainer = Trainer(best_classifier, dataset, MomentumSGD(), num_epochs=100, batch_size=100, learning_rate=1e-1, learning_rate_decay= 0.99); # TODO Implement missing pieces in Trainer.fit function # You should expect loss to go down and train and val accuracy go up for every epoch loss_history, train_history, val_history = trainer.fit(); best_val_accuracy = val_history[-1]; # TODO find the best hyperparameters to train the network # Don't hesitate to add new values to the arrays above, perform experiments, use any tricks you want # You should expect to get to at least 40% of validation accuracy # Save loss/train/history of the best classifier to the variables above print('best validation accuracy achieved: %f' % best_val_accuracy) plt.figure(figsize=(15, 7)) plt.subplot(211) plt.title("Loss") plt.plot(loss_history) plt.subplot(212) plt.title("Train/validation accuracy") plt.plot(train_history) plt.plot(val_history) ``` # As usual, let's see how our best model performs on the test data ``` test_pred = best_classifier.predict(test_X) test_accuracy = multiclass_accuracy(test_pred, test_y) print('Neural net test set accuracy: %f' % (test_accuracy, )) ```
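A per-class breakdown is a quick way to see where the remaining errors are concentrated. The sketch below uses the `test_pred`/`test_y` arrays computed above and assumes they are numpy arrays of integer labels 0-9.

```
# Per-class accuracy on the test set (sketch)
for cls in range(10):
    mask = (test_y == cls)
    if mask.sum() > 0:
        cls_acc = np.mean(test_pred[mask] == test_y[mask])
        print('class %d: %d samples, accuracy %.3f' % (cls, mask.sum(), cls_acc))
```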
github_jupyter
``` import pandas as pd import matplotlib.pyplot as plt import numpy as np import statistics from scipy import stats buldy_RGG_50_rep100_045 = pd.read_csv('Raw_data/Processed/proc_buldy_RGG_50_rep100_045.csv') del buldy_RGG_50_rep100_045['Unnamed: 0'] buldy_RGG_50_rep100_045 buldy_RGG_50_rep100_067 = pd.read_csv('proc_buldy_RGG_50_rep100_067.csv') del buldy_RGG_50_rep100_067['Unnamed: 0'] buldy_RGG_50_rep100_067 buldy_RGG_200_rep100_0685 = pd.read_csv('proc_buldy_RGG_200_rep100_0685.csv') del buldy_RGG_200_rep100_0685['Unnamed: 0'] buldy_RGG_200_rep100_0685 buldy_RGG_200_rep100_095 = pd.read_csv('proc_buldy_RGG_200_rep100_095.csv') del buldy_RGG_200_rep100_095['Unnamed: 0'] buldy_RGG_200_rep100_095 buldy_RGG_50_rep100_045_rgg_rgg_data = buldy_RGG_50_rep100_045.copy() buldy_RGG_50_rep100_045_rgg_rand_data = buldy_RGG_50_rep100_045.copy() buldy_RGG_50_rep100_045_rand_rgg_data = buldy_RGG_50_rep100_045.copy() buldy_RGG_50_rep100_045_rand_rand_data = buldy_RGG_50_rep100_045.copy() rgg_rgg_drop_list = [] rgg_rand_drop_list = [] rand_rgg_drop_list = [] rand_rand_drop_list = [] for i in range(400): if i % 4 == 0: rgg_rand_drop_list.append(i) rand_rgg_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 1: rgg_rgg_drop_list.append(i) rand_rgg_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 2: rgg_rgg_drop_list.append(i) rgg_rand_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 3: rgg_rgg_drop_list.append(i) rgg_rand_drop_list.append(i) rand_rgg_drop_list.append(i) buldy_RGG_50_rep100_045_rgg_rgg_data = buldy_RGG_50_rep100_045_rgg_rgg_data.drop(rgg_rgg_drop_list) buldy_RGG_50_rep100_045_rgg_rand_data = buldy_RGG_50_rep100_045_rgg_rand_data.drop(rgg_rand_drop_list) buldy_RGG_50_rep100_045_rand_rgg_data = buldy_RGG_50_rep100_045_rand_rgg_data.drop(rand_rgg_drop_list) buldy_RGG_50_rep100_045_rand_rand_data = buldy_RGG_50_rep100_045_rand_rand_data.drop(rand_rand_drop_list) buldy_RGG_50_rep100_045_rgg_rgg_data = buldy_RGG_50_rep100_045_rgg_rgg_data.reset_index(drop=True) buldy_RGG_50_rep100_045_rgg_rand_data = buldy_RGG_50_rep100_045_rgg_rand_data.reset_index(drop=True) buldy_RGG_50_rep100_045_rand_rgg_data = buldy_RGG_50_rep100_045_rand_rgg_data.reset_index(drop=True) buldy_RGG_50_rep100_045_rand_rand_data = buldy_RGG_50_rep100_045_rand_rand_data.reset_index(drop=True) buldy_RGG_50_rep100_045_rgg_rgg_data buldy_RGG_50_rep100_067_rgg_rgg_data = buldy_RGG_50_rep100_067.copy() buldy_RGG_50_rep100_067_rgg_rand_data = buldy_RGG_50_rep100_067.copy() buldy_RGG_50_rep100_067_rand_rgg_data = buldy_RGG_50_rep100_067.copy() buldy_RGG_50_rep100_067_rand_rand_data = buldy_RGG_50_rep100_067.copy() rgg_rgg_drop_list = [] rgg_rand_drop_list = [] rand_rgg_drop_list = [] rand_rand_drop_list = [] for i in range(400): if i % 4 == 0: rgg_rand_drop_list.append(i) rand_rgg_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 1: rgg_rgg_drop_list.append(i) rand_rgg_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 2: rgg_rgg_drop_list.append(i) rgg_rand_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 3: rgg_rgg_drop_list.append(i) rgg_rand_drop_list.append(i) rand_rgg_drop_list.append(i) buldy_RGG_50_rep100_067_rgg_rgg_data = buldy_RGG_50_rep100_067_rgg_rgg_data.drop(rgg_rgg_drop_list) buldy_RGG_50_rep100_067_rgg_rand_data = buldy_RGG_50_rep100_067_rgg_rand_data.drop(rgg_rand_drop_list) buldy_RGG_50_rep100_067_rand_rgg_data = buldy_RGG_50_rep100_067_rand_rgg_data.drop(rand_rgg_drop_list) buldy_RGG_50_rep100_067_rand_rand_data = 
buldy_RGG_50_rep100_067_rand_rand_data.drop(rand_rand_drop_list) buldy_RGG_50_rep100_067_rgg_rgg_data = buldy_RGG_50_rep100_067_rgg_rgg_data.reset_index(drop=True) buldy_RGG_50_rep100_067_rgg_rand_data = buldy_RGG_50_rep100_067_rgg_rand_data.reset_index(drop=True) buldy_RGG_50_rep100_067_rand_rgg_data = buldy_RGG_50_rep100_067_rand_rgg_data.reset_index(drop=True) buldy_RGG_50_rep100_067_rand_rand_data = buldy_RGG_50_rep100_067_rand_rand_data.reset_index(drop=True) buldy_RGG_50_rep100_067_rgg_rgg_data buldy_RGG_200_rep100_0685_rgg_rgg_data = buldy_RGG_200_rep100_0685.copy() buldy_RGG_200_rep100_0685_rgg_rand_data = buldy_RGG_200_rep100_0685.copy() buldy_RGG_200_rep100_0685_rand_rgg_data = buldy_RGG_200_rep100_0685.copy() buldy_RGG_200_rep100_0685_rand_rand_data = buldy_RGG_200_rep100_0685.copy() rgg_rgg_drop_list = [] rgg_rand_drop_list = [] rand_rgg_drop_list = [] rand_rand_drop_list = [] for i in range(400): if i % 4 == 0: rgg_rand_drop_list.append(i) rand_rgg_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 1: rgg_rgg_drop_list.append(i) rand_rgg_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 2: rgg_rgg_drop_list.append(i) rgg_rand_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 3: rgg_rgg_drop_list.append(i) rgg_rand_drop_list.append(i) rand_rgg_drop_list.append(i) buldy_RGG_200_rep100_0685_rgg_rgg_data = buldy_RGG_200_rep100_0685_rgg_rgg_data.drop(rgg_rgg_drop_list) buldy_RGG_200_rep100_0685_rgg_rand_data = buldy_RGG_200_rep100_0685_rgg_rand_data.drop(rgg_rand_drop_list) buldy_RGG_200_rep100_0685_rand_rgg_data = buldy_RGG_200_rep100_0685_rand_rgg_data.drop(rand_rgg_drop_list) buldy_RGG_200_rep100_0685_rand_rand_data = buldy_RGG_200_rep100_0685_rand_rand_data.drop(rand_rand_drop_list) buldy_RGG_200_rep100_0685_rgg_rgg_data = buldy_RGG_200_rep100_0685_rgg_rgg_data.reset_index(drop=True) buldy_RGG_200_rep100_0685_rgg_rand_data = buldy_RGG_200_rep100_0685_rgg_rand_data.reset_index(drop=True) buldy_RGG_200_rep100_0685_rand_rgg_data = buldy_RGG_200_rep100_0685_rand_rgg_data.reset_index(drop=True) buldy_RGG_200_rep100_0685_rand_rand_data = buldy_RGG_200_rep100_0685_rand_rand_data.reset_index(drop=True) buldy_RGG_200_rep100_0685_rgg_rgg_data buldy_RGG_200_rep100_095_rgg_rgg_data = buldy_RGG_200_rep100_095.copy() buldy_RGG_200_rep100_095_rgg_rand_data = buldy_RGG_200_rep100_095.copy() buldy_RGG_200_rep100_095_rand_rgg_data = buldy_RGG_200_rep100_095.copy() buldy_RGG_200_rep100_095_rand_rand_data = buldy_RGG_200_rep100_095.copy() rgg_rgg_drop_list = [] rgg_rand_drop_list = [] rand_rgg_drop_list = [] rand_rand_drop_list = [] for i in range(400): if i % 4 == 0: rgg_rand_drop_list.append(i) rand_rgg_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 1: rgg_rgg_drop_list.append(i) rand_rgg_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 2: rgg_rgg_drop_list.append(i) rgg_rand_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 3: rgg_rgg_drop_list.append(i) rgg_rand_drop_list.append(i) rand_rgg_drop_list.append(i) buldy_RGG_200_rep100_095_rgg_rgg_data = buldy_RGG_200_rep100_095_rgg_rgg_data.drop(rgg_rgg_drop_list) buldy_RGG_200_rep100_095_rgg_rand_data = buldy_RGG_200_rep100_095_rgg_rand_data.drop(rgg_rand_drop_list) buldy_RGG_200_rep100_095_rand_rgg_data = buldy_RGG_200_rep100_095_rand_rgg_data.drop(rand_rgg_drop_list) buldy_RGG_200_rep100_095_rand_rand_data = buldy_RGG_200_rep100_095_rand_rand_data.drop(rand_rand_drop_list) buldy_RGG_200_rep100_095_rgg_rgg_data = 
buldy_RGG_200_rep100_095_rgg_rgg_data.reset_index(drop=True) buldy_RGG_200_rep100_095_rgg_rand_data = buldy_RGG_200_rep100_095_rgg_rand_data.reset_index(drop=True) buldy_RGG_200_rep100_095_rand_rgg_data = buldy_RGG_200_rep100_095_rand_rgg_data.reset_index(drop=True) buldy_RGG_200_rep100_095_rand_rand_data = buldy_RGG_200_rep100_095_rand_rand_data.reset_index(drop=True) buldy_RGG_200_rep100_095_rgg_rgg_data stats.kstest(buldy_RGG_200_rep100_0685_rand_rgg_data['alive_nodes'], 'norm') stats.kstest(buldy_RGG_200_rep100_0685_rand_rand_data['alive_nodes'], 'norm') stats.mannwhitneyu(buldy_RGG_200_rep100_0685_rand_rgg_data['alive_nodes'], buldy_RGG_200_rep100_0685_rand_rand_data['alive_nodes']) stats.kstest(buldy_RGG_200_rep100_095_rgg_rgg_data['alive_nodes'], 'norm') stats.kstest(buldy_RGG_200_rep100_095_rgg_rand_data['alive_nodes'], 'norm') stats.mannwhitneyu(buldy_RGG_200_rep100_095_rgg_rgg_data['alive_nodes'], buldy_RGG_200_rep100_095_rgg_rand_data['alive_nodes']) ``` # Data Dividing Done # ----------------------------------------------------------------------------------------------- # Plotting Starts ## find_inter_thres ``` find_inter_thres_list = [] for col in find_inter_thres.columns: if col != 'rep': find_inter_thres_list.append(statistics.mean(find_inter_thres[col].values.tolist())) print(find_inter_thres_list) Xs = [0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1.0] plt.plot(Xs, [i/500 for i in find_inter_thres_list]) plt.xticks([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]) plt.axvline(x=0.7, color='r', linestyle='--') plt.savefig('find_inter_thres.png') ``` ## rep5_04_002 ``` rgg_rgg_dict = {} rgg_rand_dict = {} rand_rgg_dict = {} rand_rand_dict = {} for i in range(20): target = [i*5 + 0, i*5 + 1, i*5 + 2, i*5 + 3, i*5 + 4] temp_rgg_rgg = rgg_rgg_data[i*5 + 0 : i*5 + 5] temp_rgg_rand = rgg_rand_data[i*5 + 0 : i*5 + 5] temp_rand_rgg = rand_rgg_data[i*5 + 0 : i*5 + 5] temp_rand_rand = rand_rand_data[i*5 + 0 : i*5 + 5] if i == 0: rgg_rgg_dict['intra_thres'] = [statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())] rgg_rgg_dict['alive_nodes'] = [statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())] rgg_rgg_dict['init_mean_deg'] = [statistics.mean(temp_rgg_rgg['init_mean_deg'].values.tolist())] rgg_rand_dict['intra_thres'] = [statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())] rgg_rand_dict['alive_nodes'] = [statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())] rgg_rand_dict['init_mean_deg'] = [statistics.mean(temp_rgg_rand['init_mean_deg'].values.tolist())] rand_rgg_dict['intra_thres'] = [statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())] rand_rgg_dict['alive_nodes'] = [statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())] rand_rgg_dict['init_mean_deg'] = [statistics.mean(temp_rand_rgg['init_mean_deg'].values.tolist())] rand_rand_dict['intra_thres'] = [statistics.mean(temp_rand_rand['intra_thres'].values.tolist())] rand_rand_dict['alive_nodes'] = [statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())] rand_rand_dict['init_mean_deg'] = [statistics.mean(temp_rand_rand['init_mean_deg'].values.tolist())] else: rgg_rgg_dict['intra_thres'].append(statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())) rgg_rgg_dict['alive_nodes'].append(statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())) rgg_rgg_dict['init_mean_deg'].append(statistics.mean(temp_rgg_rgg['init_mean_deg'].values.tolist())) 
rgg_rand_dict['intra_thres'].append(statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())) rgg_rand_dict['alive_nodes'].append(statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())) rgg_rand_dict['init_mean_deg'].append(statistics.mean(temp_rgg_rand['init_mean_deg'].values.tolist())) rand_rgg_dict['intra_thres'].append(statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())) rand_rgg_dict['alive_nodes'].append(statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())) rand_rgg_dict['init_mean_deg'].append(statistics.mean(temp_rand_rgg['init_mean_deg'].values.tolist())) rand_rand_dict['intra_thres'].append(statistics.mean(temp_rand_rand['intra_thres'].values.tolist())) rand_rand_dict['alive_nodes'].append(statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())) rand_rand_dict['init_mean_deg'].append(statistics.mean(temp_rand_rand['init_mean_deg'].values.tolist())) plt.plot(rgg_rgg_dict['intra_thres'], rgg_rgg_dict['alive_nodes']) plt.plot(rgg_rgg_dict['intra_thres'], rgg_rand_dict['alive_nodes']) plt.plot(rgg_rgg_dict['intra_thres'], rand_rgg_dict['alive_nodes']) plt.plot(rgg_rgg_dict['intra_thres'], rand_rand_dict['alive_nodes']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('Mean Alive nodes') plt.show() p = 0.9 plt.plot([p * i for i in rgg_rgg_dict['init_mean_deg']], rgg_rgg_dict['alive_nodes']) plt.plot([p * i for i in rgg_rgg_dict['init_mean_deg']], rgg_rand_dict['alive_nodes']) plt.plot([p * i for i in rgg_rgg_dict['init_mean_deg']], rand_rgg_dict['alive_nodes']) plt.plot([p * i for i in rgg_rgg_dict['init_mean_deg']], rand_rand_dict['alive_nodes']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('Mean Alive nodes') plt.show() ``` ## att30_rep5_04_002 ``` rgg_rgg_2_dict = {} rgg_rand_2_dict = {} rand_rgg_2_dict = {} rand_rand_2_dict = {} for i in range(50): target = [i*5 + 0, i*5 + 1, i*5 + 2, i*5 + 3, i*5 + 4] temp_rgg_rgg = rgg_rgg_2_data[i*5 + 0 : i*5 + 5] temp_rgg_rand = rgg_rand_2_data[i*5 + 0 : i*5 + 5] temp_rand_rgg = rand_rgg_2_data[i*5 + 0 : i*5 + 5] temp_rand_rand = rand_rand_2_data[i*5 + 0 : i*5 + 5] if i == 0: rgg_rgg_2_dict['intra_thres'] = [statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())] rgg_rgg_2_dict['alive_nodes'] = [statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())] rgg_rgg_2_dict['init_mean_deg'] = [statistics.mean(temp_rgg_rgg['init_mean_deg'].values.tolist())] rgg_rand_2_dict['intra_thres'] = [statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())] rgg_rand_2_dict['alive_nodes'] = [statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())] rgg_rand_2_dict['init_mean_deg'] = [statistics.mean(temp_rgg_rand['init_mean_deg'].values.tolist())] rand_rgg_2_dict['intra_thres'] = [statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())] rand_rgg_2_dict['alive_nodes'] = [statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())] rand_rgg_2_dict['init_mean_deg'] = [statistics.mean(temp_rand_rgg['init_mean_deg'].values.tolist())] rand_rand_2_dict['intra_thres'] = [statistics.mean(temp_rand_rand['intra_thres'].values.tolist())] rand_rand_2_dict['alive_nodes'] = [statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())] rand_rand_2_dict['init_mean_deg'] = [statistics.mean(temp_rand_rand['init_mean_deg'].values.tolist())] else: rgg_rgg_2_dict['intra_thres'].append(statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())) rgg_rgg_2_dict['alive_nodes'].append(statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())) 
rgg_rgg_2_dict['init_mean_deg'].append(statistics.mean(temp_rgg_rgg['init_mean_deg'].values.tolist())) rgg_rand_2_dict['intra_thres'].append(statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())) rgg_rand_2_dict['alive_nodes'].append(statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())) rgg_rand_2_dict['init_mean_deg'].append(statistics.mean(temp_rgg_rand['init_mean_deg'].values.tolist())) rand_rgg_2_dict['intra_thres'].append(statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())) rand_rgg_2_dict['alive_nodes'].append(statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())) rand_rgg_2_dict['init_mean_deg'].append(statistics.mean(temp_rand_rgg['init_mean_deg'].values.tolist())) rand_rand_2_dict['intra_thres'].append(statistics.mean(temp_rand_rand['intra_thres'].values.tolist())) rand_rand_2_dict['alive_nodes'].append(statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())) rand_rand_2_dict['init_mean_deg'].append(statistics.mean(temp_rand_rand['init_mean_deg'].values.tolist())) plt.plot(rgg_rgg_2_dict['intra_thres'], rgg_rgg_2_dict['alive_nodes']) plt.plot(rgg_rgg_2_dict['intra_thres'], rgg_rand_2_dict['alive_nodes']) plt.plot(rgg_rgg_2_dict['intra_thres'], rand_rgg_2_dict['alive_nodes']) plt.plot(rgg_rgg_2_dict['intra_thres'], rand_rand_2_dict['alive_nodes']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('Mean Alive nodes') plt.show() p = 0.9 plt.plot([p * i for i in rgg_rgg_2_dict['init_mean_deg']], rgg_rgg_2_dict['alive_nodes']) plt.plot([p * i for i in rgg_rgg_2_dict['init_mean_deg']], rgg_rand_2_dict['alive_nodes']) plt.plot([p * i for i in rgg_rgg_2_dict['init_mean_deg']], rand_rgg_2_dict['alive_nodes']) plt.plot([p * i for i in rgg_rgg_2_dict['init_mean_deg']], rand_rand_2_dict['alive_nodes']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('Mean Alive nodes') plt.show() ``` ## buldy_RGG_rep30_03_0005 ``` buldy_RGG_rep30_03_0005_rgg_rgg_dict = {} buldy_RGG_rep30_03_0005_rgg_rand_dict = {} buldy_RGG_rep30_03_0005_rand_rgg_dict = {} buldy_RGG_rep30_03_0005_rand_rand_dict = {} for i in range(100): target = list(range(i*30, (i+1)*30)) temp_rgg_rgg = buldy_RGG_rep30_03_0005_rgg_rgg_data[i*30 : (i+1)*30] temp_rgg_rand = buldy_RGG_rep30_03_0005_rgg_rand_data[i*30 : (i+1)*30] temp_rand_rgg = buldy_RGG_rep30_03_0005_rand_rgg_data[i*30 : (i+1)*30] temp_rand_rand = buldy_RGG_rep30_03_0005_rand_rand_data[i*30 : (i+1)*30] rgg_rgg_alive = 0 rgg_rand_alive = 0 rand_rgg_alive = 0 rand_rand_alive = 0 for index in target: if (temp_rgg_rgg['alive_nodes'][index] != 0) and (temp_rgg_rgg['fin_larg_comp'][index] != 0): rgg_rgg_alive += 1 if (temp_rgg_rand['alive_nodes'][index] != 0) and (temp_rgg_rand['fin_larg_comp'][index] != 0): rgg_rand_alive += 1 if (temp_rand_rgg['alive_nodes'][index] != 0) and (temp_rand_rgg['fin_larg_comp'][index] != 0): rand_rgg_alive += 1 if (temp_rand_rand['alive_nodes'][index] != 0) and (temp_rand_rand['fin_larg_comp'][index] != 0): rand_rand_alive += 1 if i == 0: buldy_RGG_rep30_03_0005_rgg_rgg_dict['intra_thres'] = [statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())] buldy_RGG_rep30_03_0005_rgg_rgg_dict['alive_nodes'] = [statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())] buldy_RGG_rep30_03_0005_rgg_rgg_dict['init_mean_deg'] = [statistics.mean(temp_rgg_rgg['init_mean_deg'].values.tolist())] buldy_RGG_rep30_03_0005_rgg_rgg_dict['alive ratio'] = [rgg_rgg_alive / 30] buldy_RGG_rep30_03_0005_rgg_rand_dict['intra_thres'] = 
[statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())] buldy_RGG_rep30_03_0005_rgg_rand_dict['alive_nodes'] = [statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())] buldy_RGG_rep30_03_0005_rgg_rand_dict['init_mean_deg'] = [statistics.mean(temp_rgg_rand['init_mean_deg'].values.tolist())] buldy_RGG_rep30_03_0005_rgg_rand_dict['alive ratio'] = [rgg_rand_alive / 30] buldy_RGG_rep30_03_0005_rand_rgg_dict['intra_thres'] = [statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())] buldy_RGG_rep30_03_0005_rand_rgg_dict['alive_nodes'] = [statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())] buldy_RGG_rep30_03_0005_rand_rgg_dict['init_mean_deg'] = [statistics.mean(temp_rand_rgg['init_mean_deg'].values.tolist())] buldy_RGG_rep30_03_0005_rand_rgg_dict['alive ratio'] = [rand_rgg_alive / 30] buldy_RGG_rep30_03_0005_rand_rand_dict['intra_thres'] = [statistics.mean(temp_rand_rand['intra_thres'].values.tolist())] buldy_RGG_rep30_03_0005_rand_rand_dict['alive_nodes'] = [statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())] buldy_RGG_rep30_03_0005_rand_rand_dict['init_mean_deg'] = [statistics.mean(temp_rand_rand['init_mean_deg'].values.tolist())] buldy_RGG_rep30_03_0005_rand_rand_dict['alive ratio'] = [rand_rand_alive / 30] else: buldy_RGG_rep30_03_0005_rgg_rgg_dict['intra_thres'].append(statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())) buldy_RGG_rep30_03_0005_rgg_rgg_dict['alive_nodes'].append(statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())) buldy_RGG_rep30_03_0005_rgg_rgg_dict['init_mean_deg'].append(statistics.mean(temp_rgg_rgg['init_mean_deg'].values.tolist())) buldy_RGG_rep30_03_0005_rgg_rgg_dict['alive ratio'].append(rgg_rgg_alive / 30) buldy_RGG_rep30_03_0005_rgg_rand_dict['intra_thres'].append(statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())) buldy_RGG_rep30_03_0005_rgg_rand_dict['alive_nodes'].append(statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())) buldy_RGG_rep30_03_0005_rgg_rand_dict['init_mean_deg'].append(statistics.mean(temp_rgg_rand['init_mean_deg'].values.tolist())) buldy_RGG_rep30_03_0005_rgg_rand_dict['alive ratio'].append(rgg_rand_alive / 30) buldy_RGG_rep30_03_0005_rand_rgg_dict['intra_thres'].append(statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())) buldy_RGG_rep30_03_0005_rand_rgg_dict['alive_nodes'].append(statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())) buldy_RGG_rep30_03_0005_rand_rgg_dict['init_mean_deg'].append(statistics.mean(temp_rand_rgg['init_mean_deg'].values.tolist())) buldy_RGG_rep30_03_0005_rand_rgg_dict['alive ratio'].append(rand_rgg_alive / 30) buldy_RGG_rep30_03_0005_rand_rand_dict['intra_thres'].append(statistics.mean(temp_rand_rand['intra_thres'].values.tolist())) buldy_RGG_rep30_03_0005_rand_rand_dict['alive_nodes'].append(statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())) buldy_RGG_rep30_03_0005_rand_rand_dict['init_mean_deg'].append(statistics.mean(temp_rand_rand['init_mean_deg'].values.tolist())) buldy_RGG_rep30_03_0005_rand_rand_dict['alive ratio'].append(rand_rand_alive / 30) plt.plot(buldy_RGG_rep30_03_0005_rgg_rgg_dict['intra_thres'], buldy_RGG_rep30_03_0005_rgg_rgg_dict['alive_nodes']) plt.plot(buldy_RGG_rep30_03_0005_rgg_rgg_dict['intra_thres'], buldy_RGG_rep30_03_0005_rgg_rand_dict['alive_nodes']) plt.plot(buldy_RGG_rep30_03_0005_rgg_rgg_dict['intra_thres'], buldy_RGG_rep30_03_0005_rand_rgg_dict['alive_nodes']) plt.plot(buldy_RGG_rep30_03_0005_rgg_rgg_dict['intra_thres'], buldy_RGG_rep30_03_0005_rand_rand_dict['alive_nodes']) 
plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('Mean Alive nodes') plt.show() p = 0.9 plt.plot([p * i for i in buldy_RGG_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_rep30_03_0005_rgg_rgg_dict['alive_nodes']) plt.plot([p * i for i in buldy_RGG_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_rep30_03_0005_rgg_rand_dict['alive_nodes']) plt.plot([p * i for i in buldy_RGG_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_rep30_03_0005_rand_rgg_dict['alive_nodes']) plt.plot([p * i for i in buldy_RGG_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_rep30_03_0005_rand_rand_dict['alive_nodes']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('500 Nodes, 2 Layers, 50 attack size') plt.xlabel('p<k>') plt.ylabel('mean alive nodes') plt.savefig('buldy_RGG_rep30_03_0005.png') plt.show() p = 0.9 plt.plot([p * i for i in buldy_RGG_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_rep30_03_0005_rgg_rgg_dict['alive ratio']) plt.plot([p * i for i in buldy_RGG_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_rep30_03_0005_rgg_rand_dict['alive ratio']) plt.plot([p * i for i in buldy_RGG_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_rep30_03_0005_rand_rgg_dict['alive ratio']) plt.plot([p * i for i in buldy_RGG_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_rep30_03_0005_rand_rand_dict['alive ratio']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('500 Nodes, 2 Layers, 50 attack size') plt.xlabel('p<k>') plt.ylabel('alive ratio') plt.savefig('buldy_RGG_rep30_03_0005_ratio.png') plt.show() ``` ## buldy_RGG_100_rep30_03_0005 ``` buldy_RGG_100_rep30_03_0005_rgg_rgg_dict = {} buldy_RGG_100_rep30_03_0005_rgg_rand_dict = {} buldy_RGG_100_rep30_03_0005_rand_rgg_dict = {} buldy_RGG_100_rep30_03_0005_rand_rand_dict = {} for i in range(100): target = list(range(i*30, (i+1)*30)) temp_rgg_rgg = buldy_RGG_100_rep30_03_0005_rgg_rgg_data[i*30 : (i+1)*30] temp_rgg_rand = buldy_RGG_100_rep30_03_0005_rgg_rand_data[i*30 : (i+1)*30] temp_rand_rgg = buldy_RGG_100_rep30_03_0005_rand_rgg_data[i*30 : (i+1)*30] temp_rand_rand = buldy_RGG_100_rep30_03_0005_rand_rand_data[i*30 : (i+1)*30] rgg_rgg_alive = 0 rgg_rand_alive = 0 rand_rgg_alive = 0 rand_rand_alive = 0 for index in target: if (temp_rgg_rgg['alive_nodes'][index] != 0) and (temp_rgg_rgg['fin_larg_comp'][index] != 0): rgg_rgg_alive += 1 if (temp_rgg_rand['alive_nodes'][index] != 0) and (temp_rgg_rand['fin_larg_comp'][index] != 0): rgg_rand_alive += 1 if (temp_rand_rgg['alive_nodes'][index] != 0) and (temp_rand_rgg['fin_larg_comp'][index] != 0): rand_rgg_alive += 1 if (temp_rand_rand['alive_nodes'][index] != 0) and (temp_rand_rand['fin_larg_comp'][index] != 0): rand_rand_alive += 1 if i == 0: buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['intra_thres'] = [statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())] buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['alive_nodes'] = [statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())] buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['init_mean_deg'] = [statistics.mean(temp_rgg_rgg['init_mean_deg'].values.tolist())] buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['alive ratio'] = [rgg_rgg_alive / 30] buldy_RGG_100_rep30_03_0005_rgg_rand_dict['intra_thres'] = [statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())] buldy_RGG_100_rep30_03_0005_rgg_rand_dict['alive_nodes'] = [statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())] buldy_RGG_100_rep30_03_0005_rgg_rand_dict['init_mean_deg'] = 
[statistics.mean(temp_rgg_rand['init_mean_deg'].values.tolist())] buldy_RGG_100_rep30_03_0005_rgg_rand_dict['alive ratio'] = [rgg_rand_alive / 30] buldy_RGG_100_rep30_03_0005_rand_rgg_dict['intra_thres'] = [statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())] buldy_RGG_100_rep30_03_0005_rand_rgg_dict['alive_nodes'] = [statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())] buldy_RGG_100_rep30_03_0005_rand_rgg_dict['init_mean_deg'] = [statistics.mean(temp_rand_rgg['init_mean_deg'].values.tolist())] buldy_RGG_100_rep30_03_0005_rand_rgg_dict['alive ratio'] = [rand_rgg_alive / 30] buldy_RGG_100_rep30_03_0005_rand_rand_dict['intra_thres'] = [statistics.mean(temp_rand_rand['intra_thres'].values.tolist())] buldy_RGG_100_rep30_03_0005_rand_rand_dict['alive_nodes'] = [statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())] buldy_RGG_100_rep30_03_0005_rand_rand_dict['init_mean_deg'] = [statistics.mean(temp_rand_rand['init_mean_deg'].values.tolist())] buldy_RGG_100_rep30_03_0005_rand_rand_dict['alive ratio'] = [rand_rand_alive / 30] else: buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['intra_thres'].append(statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())) buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['alive_nodes'].append(statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())) buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['init_mean_deg'].append(statistics.mean(temp_rgg_rgg['init_mean_deg'].values.tolist())) buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['alive ratio'].append(rgg_rgg_alive / 30) buldy_RGG_100_rep30_03_0005_rgg_rand_dict['intra_thres'].append(statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())) buldy_RGG_100_rep30_03_0005_rgg_rand_dict['alive_nodes'].append(statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())) buldy_RGG_100_rep30_03_0005_rgg_rand_dict['init_mean_deg'].append(statistics.mean(temp_rgg_rand['init_mean_deg'].values.tolist())) buldy_RGG_100_rep30_03_0005_rgg_rand_dict['alive ratio'].append(rgg_rand_alive / 30) buldy_RGG_100_rep30_03_0005_rand_rgg_dict['intra_thres'].append(statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())) buldy_RGG_100_rep30_03_0005_rand_rgg_dict['alive_nodes'].append(statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())) buldy_RGG_100_rep30_03_0005_rand_rgg_dict['init_mean_deg'].append(statistics.mean(temp_rand_rgg['init_mean_deg'].values.tolist())) buldy_RGG_100_rep30_03_0005_rand_rgg_dict['alive ratio'].append(rand_rgg_alive / 30) buldy_RGG_100_rep30_03_0005_rand_rand_dict['intra_thres'].append(statistics.mean(temp_rand_rand['intra_thres'].values.tolist())) buldy_RGG_100_rep30_03_0005_rand_rand_dict['alive_nodes'].append(statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())) buldy_RGG_100_rep30_03_0005_rand_rand_dict['init_mean_deg'].append(statistics.mean(temp_rand_rand['init_mean_deg'].values.tolist())) buldy_RGG_100_rep30_03_0005_rand_rand_dict['alive ratio'].append(rand_rand_alive / 30) plt.plot(buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['intra_thres'], buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['alive_nodes']) plt.plot(buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['intra_thres'], buldy_RGG_100_rep30_03_0005_rgg_rand_dict['alive_nodes']) plt.plot(buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['intra_thres'], buldy_RGG_100_rep30_03_0005_rand_rgg_dict['alive_nodes']) plt.plot(buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['intra_thres'], buldy_RGG_100_rep30_03_0005_rand_rand_dict['alive_nodes']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('Mean Alive nodes') plt.show() p = 0.8 
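# p appears to be the fraction of nodes that survive the initial attack, (500 - 100) / 500 = 0.8,
# so the x-axis below shows the post-attack mean degree p<k>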
plt.plot([p * i for i in buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['alive_nodes']) plt.plot([p * i for i in buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_100_rep30_03_0005_rgg_rand_dict['alive_nodes']) plt.plot([p * i for i in buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_100_rep30_03_0005_rand_rgg_dict['alive_nodes']) plt.plot([p * i for i in buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_100_rep30_03_0005_rand_rand_dict['alive_nodes']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('500 Nodes, 2 Layers, 100 attack size') plt.xlabel('p<k>') plt.ylabel('mean alive nodes') plt.savefig('buldy_RGG_100_rep30_03_0005.png') plt.show() p = 0.8 plt.plot([p * i for i in buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['alive ratio']) plt.plot([p * i for i in buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_100_rep30_03_0005_rgg_rand_dict['alive ratio']) plt.plot([p * i for i in buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_100_rep30_03_0005_rand_rgg_dict['alive ratio']) plt.plot([p * i for i in buldy_RGG_100_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_100_rep30_03_0005_rand_rand_dict['alive ratio']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('500 Nodes, 2 Layers, 100 attack size') plt.xlabel('p<k>') plt.ylabel('alive ratio') plt.savefig('buldy_RGG_100_rep30_03_0005_ratio.png') plt.show() ``` ## buldy_RGG_200_rep30_03_0005 ``` buldy_RGG_200_rep30_03_0005_rgg_rgg_dict = {} buldy_RGG_200_rep30_03_0005_rgg_rand_dict = {} buldy_RGG_200_rep30_03_0005_rand_rgg_dict = {} buldy_RGG_200_rep30_03_0005_rand_rand_dict = {} for i in range(100): target = list(range(i*30, (i+1)*30)) temp_rgg_rgg = buldy_RGG_200_rep30_03_0005_rgg_rgg_data[i*30 : (i+1)*30] temp_rgg_rand = buldy_RGG_200_rep30_03_0005_rgg_rand_data[i*30 : (i+1)*30] temp_rand_rgg = buldy_RGG_200_rep30_03_0005_rand_rgg_data[i*30 : (i+1)*30] temp_rand_rand = buldy_RGG_200_rep30_03_0005_rand_rand_data[i*30 : (i+1)*30] rgg_rgg_alive = 0 rgg_rand_alive = 0 rand_rgg_alive = 0 rand_rand_alive = 0 for index in target: if (temp_rgg_rgg['alive_nodes'][index] != 0) and (temp_rgg_rgg['fin_larg_comp'][index] != 0): rgg_rgg_alive += 1 if (temp_rgg_rand['alive_nodes'][index] != 0) and (temp_rgg_rand['fin_larg_comp'][index] != 0): rgg_rand_alive += 1 if (temp_rand_rgg['alive_nodes'][index] != 0) and (temp_rand_rgg['fin_larg_comp'][index] != 0): rand_rgg_alive += 1 if (temp_rand_rand['alive_nodes'][index] != 0) and (temp_rand_rand['fin_larg_comp'][index] != 0): rand_rand_alive += 1 if i == 0: buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['intra_thres'] = [statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())] buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['alive_nodes'] = [statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())] buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['init_mean_deg'] = [statistics.mean(temp_rgg_rgg['init_mean_deg'].values.tolist())] buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['alive ratio'] = [rgg_rgg_alive / 30] buldy_RGG_200_rep30_03_0005_rgg_rand_dict['intra_thres'] = [statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())] buldy_RGG_200_rep30_03_0005_rgg_rand_dict['alive_nodes'] = [statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())] buldy_RGG_200_rep30_03_0005_rgg_rand_dict['init_mean_deg'] = 
[statistics.mean(temp_rgg_rand['init_mean_deg'].values.tolist())] buldy_RGG_200_rep30_03_0005_rgg_rand_dict['alive ratio'] = [rgg_rand_alive / 30] buldy_RGG_200_rep30_03_0005_rand_rgg_dict['intra_thres'] = [statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())] buldy_RGG_200_rep30_03_0005_rand_rgg_dict['alive_nodes'] = [statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())] buldy_RGG_200_rep30_03_0005_rand_rgg_dict['init_mean_deg'] = [statistics.mean(temp_rand_rgg['init_mean_deg'].values.tolist())] buldy_RGG_200_rep30_03_0005_rand_rgg_dict['alive ratio'] = [rand_rgg_alive / 30] buldy_RGG_200_rep30_03_0005_rand_rand_dict['intra_thres'] = [statistics.mean(temp_rand_rand['intra_thres'].values.tolist())] buldy_RGG_200_rep30_03_0005_rand_rand_dict['alive_nodes'] = [statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())] buldy_RGG_200_rep30_03_0005_rand_rand_dict['init_mean_deg'] = [statistics.mean(temp_rand_rand['init_mean_deg'].values.tolist())] buldy_RGG_200_rep30_03_0005_rand_rand_dict['alive ratio'] = [rand_rand_alive / 30] else: buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['intra_thres'].append(statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())) buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['alive_nodes'].append(statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())) buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['init_mean_deg'].append(statistics.mean(temp_rgg_rgg['init_mean_deg'].values.tolist())) buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['alive ratio'].append(rgg_rgg_alive / 30) buldy_RGG_200_rep30_03_0005_rgg_rand_dict['intra_thres'].append(statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())) buldy_RGG_200_rep30_03_0005_rgg_rand_dict['alive_nodes'].append(statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())) buldy_RGG_200_rep30_03_0005_rgg_rand_dict['init_mean_deg'].append(statistics.mean(temp_rgg_rand['init_mean_deg'].values.tolist())) buldy_RGG_200_rep30_03_0005_rgg_rand_dict['alive ratio'].append(rgg_rand_alive / 30) buldy_RGG_200_rep30_03_0005_rand_rgg_dict['intra_thres'].append(statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())) buldy_RGG_200_rep30_03_0005_rand_rgg_dict['alive_nodes'].append(statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())) buldy_RGG_200_rep30_03_0005_rand_rgg_dict['init_mean_deg'].append(statistics.mean(temp_rand_rgg['init_mean_deg'].values.tolist())) buldy_RGG_200_rep30_03_0005_rand_rgg_dict['alive ratio'].append(rand_rgg_alive / 30) buldy_RGG_200_rep30_03_0005_rand_rand_dict['intra_thres'].append(statistics.mean(temp_rand_rand['intra_thres'].values.tolist())) buldy_RGG_200_rep30_03_0005_rand_rand_dict['alive_nodes'].append(statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())) buldy_RGG_200_rep30_03_0005_rand_rand_dict['init_mean_deg'].append(statistics.mean(temp_rand_rand['init_mean_deg'].values.tolist())) buldy_RGG_200_rep30_03_0005_rand_rand_dict['alive ratio'].append(rand_rand_alive / 30) plt.plot(buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['intra_thres'], buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['alive_nodes']) plt.plot(buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['intra_thres'], buldy_RGG_200_rep30_03_0005_rgg_rand_dict['alive_nodes']) plt.plot(buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['intra_thres'], buldy_RGG_200_rep30_03_0005_rand_rgg_dict['alive_nodes']) plt.plot(buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['intra_thres'], buldy_RGG_200_rep30_03_0005_rand_rand_dict['alive_nodes']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('Mean Alive nodes') plt.show() p = 0.6 
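# p appears to be the surviving fraction after the initial attack, (500 - 200) / 500 = 0.6,
# matching the p<k> label on the x-axis below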
plt.plot([p * i for i in buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['alive_nodes']) plt.plot([p * i for i in buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_200_rep30_03_0005_rgg_rand_dict['alive_nodes']) plt.plot([p * i for i in buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_200_rep30_03_0005_rand_rgg_dict['alive_nodes']) plt.plot([p * i for i in buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_200_rep30_03_0005_rand_rand_dict['alive_nodes']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('500 Nodes, 2 Layers, 200 attack size') plt.xlabel('p<k>') plt.ylabel('mean alive nodes') plt.savefig('buldy_RGG_200_rep30_03_0005.png') plt.show() p = 0.6 plt.plot([p * i for i in buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['alive ratio']) plt.plot([p * i for i in buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_200_rep30_03_0005_rgg_rand_dict['alive ratio']) plt.plot([p * i for i in buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_200_rep30_03_0005_rand_rgg_dict['alive ratio']) plt.plot([p * i for i in buldy_RGG_200_rep30_03_0005_rgg_rgg_dict['init_mean_deg']], buldy_RGG_200_rep30_03_0005_rand_rand_dict['alive ratio']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('500 Nodes, 2 Layers, 200 attack size') plt.xlabel('p<k>') plt.ylabel('alive ratio') plt.savefig('buldy_RGG_200_rep30_03_0005_ratio.png') plt.show() ``` ## buldy_RGG_30_rep30_04_0007 ``` buldy_RGG_30_rep30_04_0007_rgg_rgg_dict = {} buldy_RGG_30_rep30_04_0007_rgg_rand_dict = {} buldy_RGG_30_rep30_04_0007_rand_rgg_dict = {} buldy_RGG_30_rep30_04_0007_rand_rand_dict = {} for i in range(100): target = list(range(i*30, (i+1)*30)) temp_rgg_rgg = buldy_RGG_30_rep30_04_0007_rgg_rgg_data[i*30 : (i+1)*30] temp_rgg_rand = buldy_RGG_30_rep30_04_0007_rgg_rand_data[i*30 : (i+1)*30] temp_rand_rgg = buldy_RGG_30_rep30_04_0007_rand_rgg_data[i*30 : (i+1)*30] temp_rand_rand = buldy_RGG_30_rep30_04_0007_rand_rand_data[i*30 : (i+1)*30] rgg_rgg_alive = 0 rgg_rand_alive = 0 rand_rgg_alive = 0 rand_rand_alive = 0 for index in target: if (temp_rgg_rgg['alive_nodes'][index] != 0) and (temp_rgg_rgg['fin_larg_comp'][index] != 0): rgg_rgg_alive += 1 if (temp_rgg_rand['alive_nodes'][index] != 0) and (temp_rgg_rand['fin_larg_comp'][index] != 0): rgg_rand_alive += 1 if (temp_rand_rgg['alive_nodes'][index] != 0) and (temp_rand_rgg['fin_larg_comp'][index] != 0): rand_rgg_alive += 1 if (temp_rand_rand['alive_nodes'][index] != 0) and (temp_rand_rand['fin_larg_comp'][index] != 0): rand_rand_alive += 1 if i == 0: buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['intra_thres'] = [statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())] buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['alive_nodes'] = [statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())] buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['init_mean_deg'] = [statistics.mean(temp_rgg_rgg['init_mean_deg'].values.tolist())] buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['alive ratio'] = [rgg_rgg_alive / 30] buldy_RGG_30_rep30_04_0007_rgg_rand_dict['intra_thres'] = [statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())] buldy_RGG_30_rep30_04_0007_rgg_rand_dict['alive_nodes'] = [statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())] buldy_RGG_30_rep30_04_0007_rgg_rand_dict['init_mean_deg'] = [statistics.mean(temp_rgg_rand['init_mean_deg'].values.tolist())] 
buldy_RGG_30_rep30_04_0007_rgg_rand_dict['alive ratio'] = [rgg_rand_alive / 30] buldy_RGG_30_rep30_04_0007_rand_rgg_dict['intra_thres'] = [statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())] buldy_RGG_30_rep30_04_0007_rand_rgg_dict['alive_nodes'] = [statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())] buldy_RGG_30_rep30_04_0007_rand_rgg_dict['init_mean_deg'] = [statistics.mean(temp_rand_rgg['init_mean_deg'].values.tolist())] buldy_RGG_30_rep30_04_0007_rand_rgg_dict['alive ratio'] = [rand_rgg_alive / 30] buldy_RGG_30_rep30_04_0007_rand_rand_dict['intra_thres'] = [statistics.mean(temp_rand_rand['intra_thres'].values.tolist())] buldy_RGG_30_rep30_04_0007_rand_rand_dict['alive_nodes'] = [statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())] buldy_RGG_30_rep30_04_0007_rand_rand_dict['init_mean_deg'] = [statistics.mean(temp_rand_rand['init_mean_deg'].values.tolist())] buldy_RGG_30_rep30_04_0007_rand_rand_dict['alive ratio'] = [rand_rand_alive / 30] else: buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['intra_thres'].append(statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())) buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['alive_nodes'].append(statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())) buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['init_mean_deg'].append(statistics.mean(temp_rgg_rgg['init_mean_deg'].values.tolist())) buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['alive ratio'].append(rgg_rgg_alive / 30) buldy_RGG_30_rep30_04_0007_rgg_rand_dict['intra_thres'].append(statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())) buldy_RGG_30_rep30_04_0007_rgg_rand_dict['alive_nodes'].append(statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())) buldy_RGG_30_rep30_04_0007_rgg_rand_dict['init_mean_deg'].append(statistics.mean(temp_rgg_rand['init_mean_deg'].values.tolist())) buldy_RGG_30_rep30_04_0007_rgg_rand_dict['alive ratio'].append(rgg_rand_alive / 30) buldy_RGG_30_rep30_04_0007_rand_rgg_dict['intra_thres'].append(statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())) buldy_RGG_30_rep30_04_0007_rand_rgg_dict['alive_nodes'].append(statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())) buldy_RGG_30_rep30_04_0007_rand_rgg_dict['init_mean_deg'].append(statistics.mean(temp_rand_rgg['init_mean_deg'].values.tolist())) buldy_RGG_30_rep30_04_0007_rand_rgg_dict['alive ratio'].append(rand_rgg_alive / 30) buldy_RGG_30_rep30_04_0007_rand_rand_dict['intra_thres'].append(statistics.mean(temp_rand_rand['intra_thres'].values.tolist())) buldy_RGG_30_rep30_04_0007_rand_rand_dict['alive_nodes'].append(statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())) buldy_RGG_30_rep30_04_0007_rand_rand_dict['init_mean_deg'].append(statistics.mean(temp_rand_rand['init_mean_deg'].values.tolist())) buldy_RGG_30_rep30_04_0007_rand_rand_dict['alive ratio'].append(rand_rand_alive / 30) plt.plot(buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['intra_thres'], buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['alive_nodes']) plt.plot(buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['intra_thres'], buldy_RGG_30_rep30_04_0007_rgg_rand_dict['alive_nodes']) plt.plot(buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['intra_thres'], buldy_RGG_30_rep30_04_0007_rand_rgg_dict['alive_nodes']) plt.plot(buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['intra_thres'], buldy_RGG_30_rep30_04_0007_rand_rand_dict['alive_nodes']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('Mean Alive nodes') plt.show() p = 0.9 plt.plot([p * i for i in buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['init_mean_deg']], 
buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['alive_nodes']) plt.plot([p * i for i in buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['init_mean_deg']], buldy_RGG_30_rep30_04_0007_rgg_rand_dict['alive_nodes']) plt.plot([p * i for i in buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['init_mean_deg']], buldy_RGG_30_rep30_04_0007_rand_rgg_dict['alive_nodes']) plt.plot([p * i for i in buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['init_mean_deg']], buldy_RGG_30_rep30_04_0007_rand_rand_dict['alive_nodes']) plt.title('300 Nodes, 2 Layers, 30 attack size') plt.xlabel('p<k>') plt.ylabel('mean alive nodes') plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.savefig('buldy_RGG_30_rep30_04_0007') plt.show() p = 0.9 plt.plot([p * i for i in buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['init_mean_deg']], buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['alive ratio']) plt.plot([p * i for i in buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['init_mean_deg']], buldy_RGG_30_rep30_04_0007_rgg_rand_dict['alive ratio']) plt.plot([p * i for i in buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['init_mean_deg']], buldy_RGG_30_rep30_04_0007_rand_rgg_dict['alive ratio']) plt.plot([p * i for i in buldy_RGG_30_rep30_04_0007_rgg_rgg_dict['init_mean_deg']], buldy_RGG_30_rep30_04_0007_rand_rand_dict['alive ratio']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('300 Nodes, 2 Layers, 30 attack size') plt.xlabel('p<k>') plt.ylabel('alive ratio') plt.savefig('buldy_RGG_30_rep30_04_0007_ratio') plt.show() ``` ## buldy_RGG_50_rep100_045 ``` buldy_RGG_50_rep100_045_far_dead_node = {} cum_far_dead_node = {'rgg_rgg': [], 'rgg_rand': [], 'rand_rgg': [], 'rand_rand': []} for index in range(len(buldy_RGG_50_rep100_045_rgg_rgg_data.columns) - 21): for j in range(100): if buldy_RGG_50_rep100_045_rgg_rgg_data['step%d_far_dead_node' % index][j] != 0: if i == 0: buldy_RGG_50_rep100_045_far_dead_node['rgg_rgg'] = [statistics.mean(buldy_RGG_50_rep100_045_rgg_rgg_data['step%d_far_dead_node' % index].values.tolist())] buldy_RGG_50_rep100_045_far_dead_node['rgg_rand'] = [statistics.mean(buldy_RGG_50_rep100_045_rgg_rand_data['step%d_far_dead_node' % index].values.tolist())] buldy_RGG_50_rep100_045_far_dead_node['rand_rgg'] = [statistics.mean(buldy_RGG_50_rep100_045_rand_rgg_data['step%d_far_dead_node' % index].values.tolist())] buldy_RGG_50_rep100_045_far_dead_node['rand_rand'] = [statistics.mean(buldy_RGG_50_rep100_045_rand_rand_data['step%d_far_dead_node' % index].values.tolist())] else: buldy_RGG_50_rep100_045_far_dead_node['rgg_rgg'].append(statistics.mean(buldy_RGG_50_rep100_045_rgg_rgg_data['step%d_far_dead_node' % index].values.tolist())) buldy_RGG_50_rep100_045_far_dead_node['rgg_rand'].append(statistics.mean(buldy_RGG_50_rep100_045_rgg_rand_data['step%d_far_dead_node' % index].values.tolist())) buldy_RGG_50_rep100_045_far_dead_node['rand_rgg'].append(statistics.mean(buldy_RGG_50_rep100_045_rand_rgg_data['step%d_far_dead_node' % index].values.tolist())) buldy_RGG_50_rep100_045_far_dead_node['rand_rand'].append(statistics.mean(buldy_RGG_50_rep100_045_rand_rand_data['step%d_far_dead_node' % index].values.tolist())) cum_far_dead_node = {'rgg_rgg': [], 'rgg_rand': [], 'rand_rgg': [], 'rand_rand': []} for index, row in buldy_RGG_50_rep100_045_rgg_rgg_data.iterrows(): cur_row = row.tolist() length = int((len(buldy_RGG_50_rep100_045_rgg_rgg_data.columns) - 21) / 3) temp = [] for i in range(length): if cur_row[(3*i) + 23] != 0: temp.append(cur_row[(3*i) + 23]) else: temp.append(temp[i-2]) cum_far_dead_node['rgg_rgg'].append(temp) 
print(cum_far_dead_node['rgg_rgg']) step_nums = [] step_nums.append(statistics.mean(rgg_rgg_data['cas_steps'].values.tolist())) step_nums.append(statistics.mean(rgg_rand_data['cas_steps'].values.tolist())) step_nums.append(statistics.mean(rand_rgg_data['cas_steps'].values.tolist())) step_nums.append(statistics.mean(rand_rand_data['cas_steps'].values.tolist())) index = np.arange(4) graph_types = ['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand'] plt.bar(index, step_nums, width=0.3, color='gray') plt.xticks(index, graph_types) plt.title('Number of steps') plt.savefig('The number of steps.png') plt.show() rgg_rgg_isol = [] rgg_rgg_unsupp = [] rgg_rand_isol = [] rgg_rand_unsupp = [] rand_rgg_isol = [] rand_rgg_unsupp = [] rand_rand_isol = [] rand_rand_unsupp =[] index = 1 for col_name in rgg_rgg_data: if col_name == ('step%d_isol' % index): rgg_rgg_isol.append(statistics.mean(rgg_rgg_data[col_name].values.tolist())) if col_name == ('step%d_unsupp' % index): rgg_rgg_unsupp.append(statistics.mean(rgg_rgg_data[col_name].values.tolist())) index += 1 index = 1 for col_name in rgg_rand_data: if col_name == ('step%d_isol' % index): rgg_rand_isol.append(statistics.mean(rgg_rand_data[col_name].values.tolist())) if col_name == ('step%d_unsupp' % index): rgg_rand_unsupp.append(statistics.mean(rgg_rand_data[col_name].values.tolist())) index += 1 index = 1 for col_name in rand_rgg_data: if col_name == ('step%d_isol' % index): rand_rgg_isol.append(statistics.mean(rand_rgg_data[col_name].values.tolist())) if col_name == ('step%d_unsupp' % index): rand_rgg_unsupp.append(statistics.mean(rand_rgg_data[col_name].values.tolist())) index += 1 index = 1 for col_name in rand_rand_data: if col_name == ('step%d_isol' % index): rand_rand_isol.append(statistics.mean(rand_rand_data[col_name].values.tolist())) if col_name == ('step%d_unsupp' % index): rand_rand_unsupp.append(statistics.mean(rand_rand_data[col_name].values.tolist())) index += 1 print(len(rgg_rgg_isol)) print(len(rgg_rgg_unsupp)) print(len(rgg_rand_isol)) print(len(rgg_rand_unsupp)) print(len(rand_rgg_isol)) print(len(rand_rgg_unsupp)) print(len(rand_rand_isol)) print(len(rand_rand_unsupp)) cum_rgg_rgg_isol = [] cum_rgg_rgg_unsupp = [] cum_rgg_rand_isol = [] cum_rgg_rand_unsupp = [] cum_rand_rgg_isol = [] cum_rand_rgg_unsupp = [] cum_rand_rand_isol = [] cum_rand_rand_unsupp = [] total = [] for i in range(len(rgg_rgg_isol)): if i == 0: total.append(rgg_rgg_isol[i]) total.append(rgg_rgg_unsupp[i]) else: total[0] += rgg_rgg_isol[i] total[1] += rgg_rgg_unsupp[i] cum_rgg_rgg_isol.append(total[0]) cum_rgg_rgg_unsupp.append(total[1]) total = [] for i in range(len(rgg_rand_isol)): if i == 0: total.append(rgg_rand_isol[i]) total.append(rgg_rand_unsupp[i]) else: total[0] += rgg_rand_isol[i] total[1] += rgg_rand_unsupp[i] cum_rgg_rand_isol.append(total[0]) cum_rgg_rand_unsupp.append(total[1]) total = [] for i in range(len(rand_rgg_isol)): if i == 0: total.append(rand_rgg_isol[i]) total.append(rand_rgg_unsupp[i]) else: total[0] += rand_rgg_isol[i] total[1] += rand_rgg_unsupp[i] cum_rand_rgg_isol.append(total[0]) cum_rand_rgg_unsupp.append(total[1]) total = [] for i in range(len(rand_rand_isol)): if i == 0: total.append(rand_rand_isol[i]) total.append(rand_rand_unsupp[i]) else: total[0] += rand_rand_isol[i] total[1] += rand_rand_unsupp[i] cum_rand_rand_isol.append(total[0]) cum_rand_rand_unsupp.append(total[1]) ``` ## Isolation vs Unsupport ``` plt.plot(range(len(cum_rgg_rgg_isol)), cum_rgg_rgg_isol) plt.plot(range(len(cum_rgg_rgg_isol)), cum_rgg_rgg_unsupp) 
plt.legend(['rgg_rgg_isol','rgg_rgg_unsupp']) plt.title('Isolation vs Unsupport: RGG-RGG') plt.savefig('Isolation vs Unsupport_RGG-RGG.png') plt.show() plt.plot(range(len(cum_rgg_rand_isol)), cum_rgg_rand_isol) plt.plot(range(len(cum_rgg_rand_isol)), cum_rgg_rand_unsupp) plt.legend(['rgg_rand_isol','rgg_rand_unsupp']) plt.title('Isolation vs Unsupport: RGG-Rand') plt.savefig('Isolation vs Unsupport_RGG-Rand.png') plt.show() plt.plot(range(len(cum_rand_rgg_isol)), cum_rand_rgg_isol) plt.plot(range(len(cum_rand_rgg_isol)), cum_rand_rgg_unsupp) plt.legend(['rand_rgg_isol','rand_rgg_unsupp']) plt.title('Isolation vs Unsupport: Rand-RGG') plt.savefig('Isolation vs Unsupport_Rand-RGG.png') plt.show() plt.plot(range(len(cum_rand_rand_isol)), cum_rand_rand_isol) plt.plot(range(len(cum_rand_rand_isol)), cum_rand_rand_unsupp) plt.legend(['rand_rand_isol','rand_rand_unsupp']) plt.title('Isolation vs Unsupport: Rand-Rand') plt.savefig('Isolation vs Unsupport_Rand-Rand.png') plt.show() df_len = [] df_len.append(list(rgg_rgg_isol)) df_len.append(list(rgg_rand_isol)) df_len.append(list(rand_rgg_isol)) df_len.append(list(rand_rand_isol)) max_df_len = max(df_len, key=len) x_val = list(range(len(max_df_len))) proc_isol = [] proc_unsupp = [] proc_isol.append(cum_rgg_rgg_isol) proc_isol.append(cum_rgg_rand_isol) proc_isol.append(cum_rand_rgg_isol) proc_isol.append(cum_rand_rand_isol) proc_unsupp.append(cum_rgg_rgg_unsupp) proc_unsupp.append(cum_rgg_rand_unsupp) proc_unsupp.append(cum_rand_rgg_unsupp) proc_unsupp.append(cum_rand_rand_unsupp) for x in x_val: if len(rgg_rgg_isol) <= x: proc_isol[0].append(cum_rgg_rgg_isol[len(rgg_rgg_isol) - 1]) proc_unsupp[0].append(cum_rgg_rgg_unsupp[len(rgg_rgg_isol) - 1]) if len(rgg_rand_isol) <= x: proc_isol[1].append(cum_rgg_rand_isol[len(rgg_rand_isol) - 1]) proc_unsupp[1].append(cum_rgg_rand_unsupp[len(rgg_rand_isol) - 1]) if len(rand_rgg_isol) <= x: proc_isol[2].append(cum_rand_rgg_isol[len(rand_rgg_isol) - 1]) proc_unsupp[2].append(cum_rand_rgg_unsupp[len(rand_rgg_isol) - 1]) if len(rand_rand_isol) <= x: proc_isol[3].append(cum_rand_rand_isol[len(rand_rand_isol) - 1]) proc_unsupp[3].append(cum_rand_rand_unsupp[len(rand_rand_isol) - 1]) plt.plot(x_val, proc_isol[0]) plt.plot(x_val, proc_isol[1]) plt.plot(x_val, proc_isol[2]) plt.plot(x_val, proc_isol[3]) plt.legend(['rgg_rgg_isol','rgg_rand_isol', 'rand_rgg_isol', 'rand_rand_isol']) plt.title('Isolation trend') plt.show() plt.plot(x_val, proc_unsupp[0]) plt.plot(x_val, proc_unsupp[1]) plt.plot(x_val, proc_unsupp[2]) plt.plot(x_val, proc_unsupp[3]) plt.legend(['rgg_rgg_unsupp','rgg_rand_unsupp', 'rand_rgg_unsupp', 'rand_rand_unsupp']) plt.title('Unsupport trend') plt.show() ``` ## Pie Chart ``` init_death = 150 labels = ['Alive nodes', 'Initial death', 'Dead nodes from isolation', 'Dead nodes from unsupport'] alive = [] alive.append(statistics.mean(rgg_rgg_data['alive_nodes'])) alive.append(statistics.mean(rgg_rand_data['alive_nodes'])) alive.append(statistics.mean(rand_rgg_data['alive_nodes'])) alive.append(statistics.mean(rand_rand_data['alive_nodes'])) tot_isol = [] tot_isol.append(statistics.mean(rgg_rgg_data['tot_isol_node'])) tot_isol.append(statistics.mean(rgg_rand_data['tot_isol_node'])) tot_isol.append(statistics.mean(rand_rgg_data['tot_isol_node'])) tot_isol.append(statistics.mean(rand_rand_data['tot_isol_node'])) tot_unsupp = [] tot_unsupp.append(statistics.mean(rgg_rgg_data['tot_unsupp_node'])) tot_unsupp.append(statistics.mean(rgg_rand_data['tot_unsupp_node'])) 
tot_unsupp.append(statistics.mean(rand_rgg_data['tot_unsupp_node'])) tot_unsupp.append(statistics.mean(rand_rand_data['tot_unsupp_node'])) deaths = [alive[0], init_death, tot_isol[0], tot_unsupp[0]] plt.pie(deaths, labels=labels, autopct='%.1f%%') plt.title('RGG-RGG death trend') plt.show() deaths = [alive[1], init_death, tot_isol[1], tot_unsupp[1]] plt.pie(deaths, labels=labels, autopct='%.1f%%') plt.title('RGG-Rand death trend') plt.show() deaths = [alive[2], init_death, tot_isol[2], tot_unsupp[2]] plt.pie(deaths, labels=labels, autopct='%.1f%%') plt.title('Rand-RGG death trend') plt.show() deaths = [alive[3], init_death, tot_isol[3], tot_unsupp[3]] plt.pie(deaths, labels=labels, autopct='%.1f%%') plt.title('Rand-Rand death trend') plt.show() ``` ## Compute the number of nodes ``` x_val = np.arange(4) labels = ['initial', 'final'] plt.bar(x_val, alive) plt.xticks(x_val, graph_types) plt.title('Alive nodes') plt.savefig('alive nodes.png') plt.show() ``` ## Compare the number of edges ``` init_intra = [] init_intra.append(statistics.mean(rgg_rgg_data['init_intra_edge'])) init_intra.append(statistics.mean(rgg_rand_data['init_intra_edge'])) init_intra.append(statistics.mean(rand_rgg_data['init_intra_edge'])) init_intra.append(statistics.mean(rand_rand_data['init_intra_edge'])) init_inter = [] init_inter.append(statistics.mean(rgg_rgg_data['init_inter_edge'])) init_inter.append(statistics.mean(rgg_rand_data['init_inter_edge'])) init_inter.append(statistics.mean(rand_rgg_data['init_inter_edge'])) init_inter.append(statistics.mean(rand_rand_data['init_inter_edge'])) init_supp = [] init_supp.append(statistics.mean(rgg_rgg_data['init_supp_edge'])) init_supp.append(statistics.mean(rgg_rand_data['init_supp_edge'])) init_supp.append(statistics.mean(rand_rgg_data['init_supp_edge'])) init_supp.append(statistics.mean(rand_rand_data['init_supp_edge'])) fin_intra = [] fin_intra.append(statistics.mean(rgg_rgg_data['fin_intra_edge'])) fin_intra.append(statistics.mean(rgg_rand_data['fin_intra_edge'])) fin_intra.append(statistics.mean(rand_rgg_data['fin_intra_edge'])) fin_intra.append(statistics.mean(rand_rand_data['fin_intra_edge'])) fin_inter = [] fin_inter.append(statistics.mean(rgg_rgg_data['fin_inter_edge'])) fin_inter.append(statistics.mean(rgg_rand_data['fin_inter_edge'])) fin_inter.append(statistics.mean(rand_rgg_data['fin_inter_edge'])) fin_inter.append(statistics.mean(rand_rand_data['fin_inter_edge'])) fin_supp = [] fin_supp.append(statistics.mean(rgg_rgg_data['fin_supp_edge'])) fin_supp.append(statistics.mean(rgg_rand_data['fin_supp_edge'])) fin_supp.append(statistics.mean(rand_rgg_data['fin_supp_edge'])) fin_supp.append(statistics.mean(rand_rand_data['fin_supp_edge'])) plt.bar(x_val-0.1, init_intra, width=0.2) plt.bar(x_val+0.1, fin_intra, width=0.2) plt.legend(labels) plt.xticks(x_val, graph_types) plt.title('Initial_intra_edge vs Final_intra_edge') plt.show() plt.bar(x_val-0.1, init_inter, width=0.2) plt.bar(x_val+0.1, fin_inter, width=0.2) plt.legend(labels) plt.xticks(x_val, graph_types) plt.title('Initial_inter_edge vs Final_inter_edge') plt.show() plt.bar(x_val-0.1, init_supp, width=0.2) plt.bar(x_val+0.1, fin_supp, width=0.2) plt.legend(labels) plt.xticks(x_val, graph_types) plt.title('Initial_support_edge vs Final_support_edge') plt.show() ``` ## Network Analysis ``` init_far = [] init_far.append(statistics.mean(rgg_rgg_data['init_far_node'])) init_far.append(statistics.mean(rgg_rand_data['init_far_node'])) init_far.append(statistics.mean(rand_rgg_data['init_far_node'])) 
init_far.append(statistics.mean(rand_rand_data['init_far_node'])) fin_far = [] fin_far.append(statistics.mean(rgg_rgg_data['fin_far_node'])) fin_far.append(statistics.mean(rgg_rand_data['fin_far_node'])) fin_far.append(statistics.mean(rand_rgg_data['fin_far_node'])) fin_far.append(statistics.mean(rand_rand_data['fin_far_node'])) plt.bar(x_val-0.1, init_far, width=0.2) plt.bar(x_val+0.1, fin_far, width=0.2) plt.legend(labels) plt.xticks(x_val, graph_types) plt.title('Initial_far_node vs Final_far_node') plt.show() init_clust = [] init_clust.append(statistics.mean(rgg_rgg_data['init_clust'])) init_clust.append(statistics.mean(rgg_rand_data['init_clust'])) init_clust.append(statistics.mean(rand_rgg_data['init_clust'])) init_clust.append(statistics.mean(rand_rand_data['init_clust'])) fin_clust = [] fin_clust.append(statistics.mean(rgg_rgg_data['fin_clust'])) fin_clust.append(statistics.mean(rgg_rand_data['fin_clust'])) fin_clust.append(statistics.mean(rand_rgg_data['fin_clust'])) fin_clust.append(statistics.mean(rand_rand_data['fin_clust'])) plt.bar(x_val-0.1, init_clust, width=0.2) plt.bar(x_val+0.1, fin_clust, width=0.2) plt.legend(labels) plt.xticks(x_val, graph_types) plt.title('Initial_clustering_coefficient vs Final_clustering_coefficient') plt.show() init_mean_deg = [] init_mean_deg.append(statistics.mean(rgg_rgg_data['init_mean_deg'])) init_mean_deg.append(statistics.mean(rgg_rand_data['init_mean_deg'])) init_mean_deg.append(statistics.mean(rand_rgg_data['init_mean_deg'])) init_mean_deg.append(statistics.mean(rand_rand_data['init_mean_deg'])) fin_mean_deg = [] fin_mean_deg.append(statistics.mean(rgg_rgg_data['fin_mean_deg'])) fin_mean_deg.append(statistics.mean(rgg_rand_data['fin_mean_deg'])) fin_mean_deg.append(statistics.mean(rand_rgg_data['fin_mean_deg'])) fin_mean_deg.append(statistics.mean(rand_rand_data['fin_mean_deg'])) plt.bar(x_val-0.1, init_mean_deg, width=0.2) plt.bar(x_val+0.1, fin_mean_deg, width=0.2) plt.legend(labels) plt.xticks(x_val, graph_types) plt.title('Initial_mean_degree vs Final_mean_degree') plt.show() init_larg_comp = [] init_larg_comp.append(statistics.mean(rgg_rgg_data['init_larg_comp'])) init_larg_comp.append(statistics.mean(rgg_rand_data['init_larg_comp'])) init_larg_comp.append(statistics.mean(rand_rgg_data['init_larg_comp'])) init_larg_comp.append(statistics.mean(rand_rand_data['init_larg_comp'])) fin_larg_comp = [] fin_larg_comp.append(statistics.mean(rgg_rgg_data['fin_larg_comp'])) fin_larg_comp.append(statistics.mean(rgg_rand_data['fin_larg_comp'])) fin_larg_comp.append(statistics.mean(rand_rgg_data['fin_larg_comp'])) fin_larg_comp.append(statistics.mean(rand_rand_data['fin_larg_comp'])) plt.bar(x_val-0.1, init_larg_comp, width=0.2) plt.bar(x_val+0.1, fin_larg_comp, width=0.2) plt.legend(labels) plt.xticks(x_val, graph_types) plt.title('Initial_largest_component_size vs Final_largest_component_size') plt.show() deg_assort = [] a = rgg_rgg_data['deg_assort'].fillna(0) b = rgg_rand_data['deg_assort'].fillna(0) c = rand_rgg_data['deg_assort'].fillna(0) d = rand_rand_data['deg_assort'].fillna(0) deg_assort.append(statistics.mean(a)) deg_assort.append(statistics.mean(b)) deg_assort.append(statistics.mean(c)) deg_assort.append(statistics.mean(d)) plt.bar(x_val, deg_assort) plt.xticks(x_val, graph_types) plt.title('Degree Assortativity') plt.show() dist_deg_cent = [] dist_deg_cent.append(statistics.mean(rgg_rgg_data['dist_deg_cent'])) dist_deg_cent.append(statistics.mean(rgg_rand_data['dist_deg_cent'])) 
dist_deg_cent.append(statistics.mean(rand_rgg_data['dist_deg_cent'])) dist_deg_cent.append(statistics.mean(rand_rand_data['dist_deg_cent'])) plt.bar(x_val, dist_deg_cent) plt.xticks(x_val, graph_types) plt.title('Distance to degree centre from the attack point') plt.show() dist_bet_cent = [] dist_bet_cent.append(statistics.mean(rgg_rgg_data['dist_bet_cent'])) dist_bet_cent.append(statistics.mean(rgg_rand_data['dist_bet_cent'])) dist_bet_cent.append(statistics.mean(rand_rgg_data['dist_bet_cent'])) dist_bet_cent.append(statistics.mean(rand_rand_data['dist_bet_cent'])) plt.bar(x_val, dist_bet_cent) plt.xticks(x_val, graph_types) plt.title('Distance to betweenness centre from the attack point') plt.show() ```
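## Aggregation helper

The per-dataset blocks above all repeat the same pattern: slice each group of repetitions, average a few columns, and count the runs whose final network still has surviving nodes and a non-empty largest component. A small helper along the lines below could replace most of that duplication. It is only a sketch: the function name and defaults are made up here, and it assumes the pandas data frames and imports used above are already in scope.

```
def summarize_groups(df, group_size, value_cols=('intra_thres', 'alive_nodes', 'init_mean_deg')):
    # average value_cols over consecutive blocks of group_size rows and
    # record the fraction of runs per block whose final network survived
    out = {col: [] for col in value_cols}
    out['alive ratio'] = []
    for g in range(len(df) // group_size):
        block = df.iloc[g * group_size:(g + 1) * group_size]
        for col in value_cols:
            out[col].append(block[col].mean())
        survived = ((block['alive_nodes'] != 0) & (block['fin_larg_comp'] != 0)).sum()
        out['alive ratio'].append(survived / group_size)
    return out

# hypothetical usage, equivalent to the rep30 aggregation above:
# buldy_RGG_rep30_03_0005_rgg_rgg_dict = summarize_groups(buldy_RGG_rep30_03_0005_rgg_rgg_data, 30)
```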
github_jupyter
# Access Computation This tutorial demonstrates how to compute access. ## Setup ``` import numpy as np import pandas as pd import plotly.graph_objs as go from ostk.mathematics.objects import RealInterval from ostk.physics.units import Length from ostk.physics.units import Angle from ostk.physics.time import Scale from ostk.physics.time import Instant from ostk.physics.time import Duration from ostk.physics.time import Interval from ostk.physics.time import DateTime from ostk.physics.time import Time from ostk.physics.coordinate.spherical import LLA from ostk.physics.coordinate.spherical import AER from ostk.physics.coordinate import Position from ostk.physics.coordinate import Frame from ostk.physics import Environment from ostk.physics.environment.objects.celestial_bodies import Earth from ostk.astrodynamics import Trajectory from ostk.astrodynamics.trajectory import Orbit from ostk.astrodynamics.trajectory.orbit.models import Kepler from ostk.astrodynamics.trajectory.orbit.models.kepler import COE from ostk.astrodynamics.trajectory.orbit.models import SGP4 from ostk.astrodynamics.trajectory.orbit.models.sgp4 import TLE from ostk.astrodynamics import Access from ostk.astrodynamics.access import Generator as AccessGenerator ``` --- ## Access An access represents an object-to-object visibility period. In this example, let's compute accesses between a fixed position on the ground and a satellite in LEO. ## Environment Let's setup an environment (which describes where planets are, etc...): ``` environment = Environment.default() ; ``` ### Origin Let's define a fixed ground position, using its geographic coordinates: ``` latitude = Angle.degrees(50.0) longitude = Angle.degrees(20.0) altitude = Length.meters(30.0) from_lla = LLA(latitude, longitude, altitude) from_position = Position.meters(from_lla.to_cartesian(Earth.equatorial_radius, Earth.flattening), Frame.ITRF()) ``` And derive a trajectory, fixed at that position: ``` from_trajectory = Trajectory.position(from_position) ``` ### Target Let's consider a satellite in **Low-Earth Orbit**. ``` earth = environment.access_celestial_object_with_name("Earth") ``` We can define its orbit with **Classical Orbital Elements**: ``` a = Earth.equatorial_radius + Length.kilometers(500.0) e = 0.000 i = Angle.degrees(97.8893) raan = Angle.degrees(100.372) aop = Angle.degrees(0.0) nu = Angle.degrees(0.0201851) coe = COE(a, e, i, raan, aop, nu) ``` ... and by using a **Keplerian** orbital model: ``` epoch = Instant.date_time(DateTime(2018, 1, 1, 0, 0, 0), Scale.UTC) keplerian_model = Kepler(coe, epoch, earth, Kepler.PerturbationType.J2) ``` Or with a **Two-Line Element** (TLE) set: ``` tle = TLE( "ISS (ZARYA)", "1 25544U 98067A 18268.86272795 .00002184 00000-0 40781-4 0 9990", "2 25544 51.6405 237.0010 0003980 205.4375 242.3358 15.53733046134172" ) ``` ... along with its associated **SGP4** orbital model: ``` sgp4_model = SGP4(tle) ``` Below, we select which orbital model to use: ``` orbital_model = keplerian_model # orbital_model = sgp4_model ``` We then obtain the satellite orbit (which is a **Trajectory** object): ``` satellite_orbit = Orbit(orbital_model, earth) ``` Alternatively, the **Orbit** class can provide some useful shortcuts (for usual orbit types): ``` epoch = Instant.date_time(DateTime(2018, 1, 1, 0, 0, 0), Scale.UTC) satellite_orbit = Orbit.sun_synchronous(epoch, Length.kilometers(500.0), Time(12, 0, 0), earth) ``` ### Access Now that the origin and the target trajectories are well defined, we can compute the **Access**. 
Let's first define an **analysis interval**: ``` start_instant = Instant.date_time(DateTime.parse("2018-01-01 00:00:00"), Scale.UTC) ; end_instant = Instant.date_time(DateTime.parse("2018-01-10 00:00:00"), Scale.UTC) ; interval = Interval.closed(start_instant, end_instant) ; ``` Then, using an **Access Generator**, we can compute the accesses within the intervals of interest: ``` azimuth_range = RealInterval.closed(0.0, 360.0) # [deg] elevation_range = RealInterval.closed(20.0, 90.0) # [deg] range_range = RealInterval.closed(0.0, 10000e3) # [m] # Access generator with Azimuth-Range-Elevation constraints access_generator = AccessGenerator.aer_ranges(azimuth_range, elevation_range, range_range, environment) accesses = access_generator.compute_accesses(interval, from_trajectory, satellite_orbit) ``` And format the output using a dataframe: ``` accesses_df = pd.DataFrame([[str(access.get_type()), repr(access.get_acquisition_of_signal()), repr(access.get_time_of_closest_approach()), repr(access.get_loss_of_signal()), float(access.get_duration().in_seconds())] for access in accesses], columns=['Type', 'AOS', 'TCA', 'LOS', 'Duration']) ``` ### Output Print accesses: ``` accesses_df ``` Let's calculate the geographic coordinate of the satellite, during access: ``` def compute_lla (state): lla = LLA.cartesian(state.get_position().in_frame(Frame.ITRF(), state.get_instant()).get_coordinates(), Earth.equatorial_radius, Earth.flattening) return [float(lla.get_latitude().in_degrees()), float(lla.get_longitude().in_degrees()), float(lla.get_altitude().in_meters())] def compute_aer (instant, from_lla, to_position): nedFrame = earth.get_frame_at(from_lla, Earth.FrameType.NED) fromPosition_NED = from_position.in_frame(nedFrame, instant) sunPosition_NED = to_position.in_frame(nedFrame, instant) aer = AER.from_position_to_position(fromPosition_NED, sunPosition_NED, True) return [float(aer.get_azimuth().in_degrees()), float(aer.get_elevation().in_degrees()), float(aer.get_range().in_meters())] def compute_time_lla_aer_state (state): instant = state.get_instant() lla = compute_lla(state) aer = compute_aer(instant, from_lla, state.get_position().in_frame(Frame.ITRF(), state.get_instant())) return [instant, lla[0], lla[1], lla[2], aer[0], aer[1], aer[2]] def compute_trajectory_geometry (aTrajectory, anInterval): return [compute_lla(state) for state in aTrajectory.get_states_at(anInterval.generate_grid(Duration.minutes(1.0)))] def compute_access_geometry (access): return [compute_time_lla_aer_state(state) for state in satellite_orbit.get_states_at(access.get_interval().generate_grid(Duration.seconds(1.0)))] satellite_orbit_geometry_df = pd.DataFrame(compute_trajectory_geometry(satellite_orbit, interval), columns=['Latitude', 'Longitude', 'Altitude']) satellite_orbit_geometry_df.head() access_geometry_dfs = [pd.DataFrame(compute_access_geometry(access), columns=['Time', 'Latitude', 'Longitude', 'Altitude', 'Azimuth', 'Elevation', 'Range']) for access in accesses] ; def get_max_elevation (df): return df.loc[df['Elevation'].idxmax()]['Elevation'] ``` And plot the geometries onto a map: ``` data = [] # Target geometry data.append( dict( type = 'scattergeo', lon = [float(longitude.in_degrees())], lat = [float(latitude.in_degrees())], mode = 'markers', marker = dict( size = 10, color = 'orange' ) ) ) # Orbit geometry data.append( dict( type = 'scattergeo', lon = satellite_orbit_geometry_df['Longitude'], lat = satellite_orbit_geometry_df['Latitude'], mode = 'lines', line = dict( width = 1, color = 'rgba(0, 0, 0, 0.1)', ) 
) ) # Access geometry for access_geometry_df in access_geometry_dfs: data.append( dict( type = 'scattergeo', lon = access_geometry_df['Longitude'], lat = access_geometry_df['Latitude'], mode = 'lines', line = dict( width = 1, color = 'red', ) ) ) layout = dict( title = None, showlegend = False, height = 1000, geo = dict( showland = True, landcolor = 'rgb(243, 243, 243)', countrycolor = 'rgb(204, 204, 204)', ), ) figure = go.Figure(data = data, layout = layout) figure.show() ``` ---
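As a quick follow-up, here is a minimal sketch (reusing the `get_max_elevation` helper and the dataframes built above; the column name is illustrative) that summarises the peak elevation reached during each access:

```
# Peak elevation reached during each access, appended to the access summary table
accesses_df['Max Elevation [deg]'] = [get_max_elevation(df) for df in access_geometry_dfs]
accesses_df
```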
github_jupyter
``` import tensorflow as tf from tensorflow.keras import models import numpy as np import matplotlib.pyplot as plt class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): #creating a callback function that activates if the accuracy is greater than 60% if(logs.get('accuracy')>0.99): print("\nim maxed out baby, too goated!") self.model.stop_training = True path = "mnist.npz" mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data(path=path) callbacks = myCallback() x_train = x_train / 255.0 x_train = x_train.reshape(60000, 28, 28, 1) x_test = x_test.reshape(10000, 28, 28, 1) x_test = x_test / 255.0 model = tf.keras.models.Sequential([ #convolution part # creates a convolution layer with 64 filters with 3 by 3 dimensions # sets activation function to relu, with drops all negative values # sets input shape to 28 by 28 array, same as before, 1 denotes that the image is gray-scale, only 1 color channel tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)), # create a max pooling layer with a 2 by 2 pooling filter # means that the largest pixel value with be chosen out of every 4 pixels tf.keras.layers.MaxPooling2D(2, 2), # insert another set of convolutions and pooling so that the network can learn another set of convolutions # then pooling layer is added so that the images can get smaller again # this reduces number of dense layers needed tf.keras.layers.Conv2D(64, (3,3), activation='relu'), #deep neural network part tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) model.summary() #generates summary of parameters so we can see images journey throughout the network model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) #the model is much slower now when compiling #this is because there are 64 filters that are getting passed on each image multiple times, so the computation is much heavier #but our accuracy is much better now, hitting 99.7% on the first epoch model.fit(x_test, y_test, epochs=10, callbacks=[callbacks]) print(y_test[:100]) f, axarr = plt.subplots(3,4) FIRST_IMAGE=0 #0th element is 7 SECOND_IMAGE=11 #7th element is 9 THIRD_IMAGE=26 #26th element is 7 CONVOLUTION_NUMBER = 1 layer_outputs = [layer.output for layer in model.layers] activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs) #looking at effect that the convolution has on our model for x in range(4): f1 = activation_model.predict(x_test[FIRST_IMAGE].reshape(1, 28, 28, 1))[x] axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno') axarr[0,x].grid(False) f2 = activation_model.predict(x_test[SECOND_IMAGE].reshape(1, 28, 28, 1))[x] axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno') axarr[1,x].grid(False) f3 = activation_model.predict(x_test[THIRD_IMAGE].reshape(1, 28, 28, 1))[x] axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno') axarr[2,x].grid(False) ```
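As a complementary check, here is a minimal sketch (assuming the same compiled `model` and the arrays prepared above) that fits on the 60,000 training images and then reports accuracy on the held-out 10,000 test images:

```
# Fit on the training split, then measure generalisation on the held-out test split
history = model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])
test_loss, test_acc = model.evaluate(x_test, y_test)
print("Held-out test accuracy: {:.4f}".format(test_acc))
```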
github_jupyter
# TV Script Generation In this project, you'll generate your own [Simpsons](https://en.wikipedia.org/wiki/The_Simpsons) TV scripts using RNNs. You'll be using part of the [Simpsons dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data) of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at [Moe's Tavern](https://simpsonswiki.com/wiki/Moe's_Tavern). ## Get the Data The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.. ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper data_dir = './data/simpsons/moes_tavern_lines.txt' text = helper.load_data(data_dir) # Ignore notice, since we don't use it for analysing the data text = text[81:] # test by Lingchen Zhu print("First 100 characters in text: {}".format(text[0:100])) words = text.split() print("First 10 words in text after splitting: {}".format(words[0:10])) charset = sorted(set(text)) # set up an ordered set of unique characters in text print("Number of unique characters in text: {}".format(len(charset))) print(charset) vocab = sorted(set(words)) print("Number of unique words in text (before pre-processing): {}".format(len(vocab))) ``` ## Explore the Data Play around with `view_sentence_range` to view different parts of the data. ``` view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()}))) scenes = text.split('\n\n') print('Number of scenes: {}'.format(len(scenes))) sentence_count_scene = [scene.count('\n') for scene in scenes] print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene))) sentences = [sentence for scene in scenes for sentence in scene.split('\n')] print('Number of lines: {}'.format(len(sentences))) word_count_sentence = [len(sentence.split()) for sentence in sentences] print('Average number of words in each line: {}'.format(np.average(word_count_sentence))) print() print('The sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ``` ## Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation ### Lookup Table To create a word embedding, you first need to transform the words to ids. 
In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call `vocab_to_int` - Dictionary to go from the id to the word, we'll call `int_to_vocab` Return these dictionaries in the following tuple `(vocab_to_int, int_to_vocab)` ``` import numpy as np import problem_unittests as tests def create_lookup_tables(text): """ Create lookup tables for vocabulary :param text: The text of tv scripts split into words :return: A tuple of dicts (vocab_to_int, int_to_vocab) """ # TODO: Implement Function vocab = sorted(set(text)) # set up an ordered set of unique elements in text int_to_vocab = dict(enumerate(vocab)) # set up a dictionary with int keys and char values vocab_to_int = {c: i for i, c in enumerate(vocab)} # set up a dictionary with char keys and int values return vocab_to_int, int_to_vocab """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_create_lookup_tables(create_lookup_tables) ``` ### Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||". ``` def token_lookup(): """ Generate a dict to turn punctuation into a token. :return: Tokenize dictionary where the key is the punctuation and the value is the token """ # TODO: Implement Function token_dict = {'.' : "||period||", ',' : "||comma||", '"' : "||quotation_mark||", ';' : "||semicolon||", '!' : "||exclamation_mark||", '?' : "||question_mark||", '(' : "||left_parentheses||", ')' : "||right_parentheses||", '--' : "||dash||", '\n': "||return||"} return token_dict """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_tokenize(token_lookup) ``` ## Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data # load text, split text into words, set up vocabulary <-> int lookup tables and save data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables) ``` # Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
``` """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import numpy as np import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() # test by Lingchen Zhu print("Number of total words in the vocabulary (after-preprocessing): {}".format(len(int_text))) print("Number of unique words in the vocabulary (after pre-processing): {}".format(len(vocab_to_int))) ``` ## Build the Neural Network You'll build the components necessary to build a RNN by implementing the following functions below: - get_inputs - get_init_cell - get_embed - build_rnn - build_nn - get_batches ### Check the Version of TensorFlow and Access to GPU ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.3'), 'Please use TensorFlow version 1.3 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ``` ### Input Implement the `get_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) `name` parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple `(Input, Targets, LearningRate)` ``` def get_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate) """ # TODO: Implement Function Input = tf.placeholder(tf.int32, [None, None], name='input') Targets = tf.placeholder(tf.int32, [None, None], name='targets') LearningRate = tf.placeholder(tf.float32, name='learning_rate') return Input, Targets, LearningRate """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_inputs(get_inputs) ``` ### Build RNN Cell and Initialize Stack one or more [`BasicLSTMCells`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell) in a [`MultiRNNCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell). - The Rnn size should be set using `rnn_size` - Initalize Cell State using the MultiRNNCell's [`zero_state()`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell#zero_state) function - Apply the name "initial_state" to the initial state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity) Return the cell and initial state in the following tuple `(Cell, InitialState)` ``` def get_init_cell(batch_size, rnn_size): """ Create an RNN Cell and initialize it. 
:param batch_size: Size of batches :param rnn_size: Size of RNNs :return: Tuple (cell, initialize state) """ # TODO: Implement Function num_layers = 2 keep_prob = 0.6 def build_single_lstm_layer(rnn_size, keep_prob): lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) lstm_with_dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return lstm_with_dropout Cell = tf.contrib.rnn.MultiRNNCell([build_single_lstm_layer(rnn_size, keep_prob) for l in range(num_layers)]) InitialState = Cell.zero_state(batch_size, tf.float32) InitialState = tf.identity(InitialState, name='initial_state') return Cell, InitialState """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_init_cell(get_init_cell) ``` ### Word Embedding Apply embedding to `input_data` using TensorFlow. Return the embedded sequence. ``` def get_embed(input_data, vocab_size, embed_dim): """ Create embedding for <input_data>. :param input_data: TF placeholder for text input. :param vocab_size: Number of words in vocabulary. :param embed_dim: Number of embedding dimensions :return: Embedded input. """ # TODO: Implement Function embed = tf.contrib.layers.embed_sequence(input_data, vocab_size, embed_dim) return embed """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_embed(get_embed) ``` ### Build RNN You created a RNN Cell in the `get_init_cell()` function. Time to use the cell to create a RNN. - Build the RNN using the [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) - Apply the name "final_state" to the final state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity) Return the outputs and final_state state in the following tuple `(Outputs, FinalState)` ``` def build_rnn(cell, inputs): """ Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) """ # TODO: Implement Function Outputs, FinalState = tf.nn.dynamic_rnn(cell, inputs, initial_state=None, dtype=tf.float32) FinalState = tf.identity(FinalState, name='final_state') return Outputs, FinalState """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_rnn(build_rnn) ``` ### Build the Neural Network Apply the functions you implemented above to: - Apply embedding to `input_data` using your `get_embed(input_data, vocab_size, embed_dim)` function. - Build RNN using `cell` and your `build_rnn(cell, inputs)` function. - Apply a fully connected layer with a linear activation and `vocab_size` as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState) ``` def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim): """ Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :param embed_dim: Number of embedding dimensions :return: Tuple (Logits, FinalState) """ # TODO: Implement Function input_embed = get_embed(input_data, vocab_size, embed_dim) rnn_output, FinalState = build_rnn(cell, input_embed) Logits = tf.contrib.layers.fully_connected(rnn_output, vocab_size, activation_fn=None, weights_initializer=tf.truncated_normal_initializer(stddev=0.1), biases_initializer=tf.zeros_initializer()) return Logits, FinalState """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_nn(build_nn) ``` ### Batches Implement `get_batches` to create batches of input and targets using `int_text`. 
The batches should be a Numpy array with the shape `(number of batches, 2, batch size, sequence length)`. Each batch contains two elements: - The first element is a single batch of **input** with the shape `[batch size, sequence length]` - The second element is a single batch of **targets** with the shape `[batch size, sequence length]` If you can't fill the last batch with enough data, drop the last batch. For example, `get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2)` would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2], [ 7 8], [13 14]] # Batch of targets [[ 2 3], [ 8 9], [14 15]] ] # Second Batch [ # Batch of Input [[ 3 4], [ 9 10], [15 16]] # Batch of targets [[ 4 5], [10 11], [16 17]] ] # Third Batch [ # Batch of Input [[ 5 6], [11 12], [17 18]] # Batch of targets [[ 6 7], [12 13], [18 1]] ] ] ``` Notice that the last target value in the last batch is the first input value of the first batch. In this case, `1`. This is a common technique used when creating sequence batches, although it is rather unintuitive. ``` def get_batches(int_text, batch_size, seq_length): """ Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch :param seq_length: The length of sequence :return: Batches as a Numpy array """ # TODO: Implement Function n_word_per_batch = batch_size * seq_length # number of words per batch n_batch = len(int_text) // n_word_per_batch # number of batches x_data = np.array(int_text[:n_batch * n_word_per_batch]) # keep only enough words to make full batches y_data = np.roll(x_data, -1) # shift the text to left by one place x_batches = np.split(x_data.reshape((batch_size, seq_length * n_batch)), n_batch, axis=1) y_batches = np.split(y_data.reshape((batch_size, seq_length * n_batch)), n_batch, axis=1) Batches = np.array(list(zip(x_batches, y_batches))) return Batches """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_batches(get_batches) # test by Lingchen Zhu test_batches = get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) print("test_batches.shape = {}".format(test_batches.shape)) print(test_batches) ``` ## Neural Network Training ### Hyperparameters Tune the following parameters: - Set `num_epochs` to the number of epochs. - Set `batch_size` to the batch size. - Set `rnn_size` to the size of the RNNs. - Set `embed_dim` to the size of the embedding. - Set `seq_length` to the length of sequence. - Set `learning_rate` to the learning rate. - Set `show_every_n_batches` to the number of batches the neural network should print progress. ``` # Number of Epochs num_epochs = 100 # Batch Size batch_size = 256 # RNN Size rnn_size = 1024 # Embedding Dimension Size embed_dim = 300 # Sequence Length seq_length = 20 # Learning Rate learning_rate = 0.005 # Show stats for every n number of batches show_every_n_batches = 10 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ save_dir = './save' ``` ### Build the Graph Build the graph using the neural network you implemented. 
``` """ DON'T MODIFY ANYTHING IN THIS CELL """ from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size) logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim) # Probabilities for generating words probs = tf.nn.softmax(logits, name='probs') # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ``` ## Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the [forums](https://discussions.udacity.com/) to see if anyone is having the same problem. ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ batches = get_batches(int_text, batch_size, seq_length) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(num_epochs): state = sess.run(initial_state, {input_text: batches[0][0]}) for batch_i, (x, y) in enumerate(batches): feed = { input_text: x, targets: y, initial_state: state, lr: learning_rate} train_loss, state, _ = sess.run([cost, final_state, train_op], feed) # Show every <show_every_n_batches> batches if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format( epoch_i, batch_i, len(batches), train_loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_dir) print('Model Trained and Saved') ``` ## Save Parameters Save `seq_length` and `save_dir` for generating a new TV script. ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params((seq_length, save_dir)) ``` # Checkpoint ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() seq_length, load_dir = helper.load_params() ``` ## Implement Generate Functions ### Get Tensors Get tensors from `loaded_graph` using the function [`get_tensor_by_name()`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name). 
Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple `(InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)` ``` def get_tensors(loaded_graph): """ Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) """ # TODO: Implement Function InputTensor = loaded_graph.get_tensor_by_name('input:0') InitialStateTensor = loaded_graph.get_tensor_by_name('initial_state:0') FinalStateTensor = loaded_graph.get_tensor_by_name('final_state:0') ProbsTensor = loaded_graph.get_tensor_by_name('probs:0') return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_tensors(get_tensors) ``` ### Choose Word Implement the `pick_word()` function to select the next word using `probabilities`. ``` def pick_word(probabilities, int_to_vocab): """ Pick the next word in the generated text :param probabilities: Probabilites of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word """ # TODO: Implement Function # greedy search: may result in the network "getting stuck" and picking the same word over and over # idx_max_prob = np.argmax(probabilities) # word_predict = int_to_vocab[idx_max_prob] top_n = 5 # number of the next word with highest probabilities probabilities[np.argsort(probabilities)[:-top_n]] = 0 # suppress small probabilities to zeros probabilities = probabilities / np.sum(probabilities) # normalize the remaining large probabilities idx_max_prob_random = np.random.choice(len(int_to_vocab), 1, p=probabilities)[0] # generates a random sample index from range(len(int_to_vocab)) with probabilities word_predict = int_to_vocab[idx_max_prob_random] return word_predict """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_pick_word(pick_word) ``` ## Generate TV Script This will generate the TV script for you. Set `gen_length` to the length of TV script you want to generate. 
``` gen_length = 500 # homer_simpson, moe_szyslak, or Barney_Gumble prime_word = 'moe_szyslak' """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_dir + '.meta') loader.restore(sess, load_dir) # Get Tensors from loaded model input_text, initial_state, final_state, probs = get_tensors(loaded_graph) # Sentences generation setup gen_sentences = [prime_word + ':'] prev_state = sess.run(initial_state, {input_text: np.array([[1]])}) # Generate sentences for n in range(gen_length): # Dynamic Input dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]] dyn_seq_length = len(dyn_input[0]) # Get Prediction probabilities, prev_state = sess.run( [probs, final_state], {input_text: dyn_input, initial_state: prev_state}) pred_word = pick_word(probabilities[0][dyn_seq_length-1], int_to_vocab) gen_sentences.append(pred_word) # Remove tokens tv_script = ' '.join(gen_sentences) for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' tv_script = tv_script.replace(' ' + token.lower(), key) tv_script = tv_script.replace('\n ', '\n') tv_script = tv_script.replace('( ', '(') print(tv_script) ``` # The TV Script is Nonsensical It's okay if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned in the beginning of this project, this is a subset of [another dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data). We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course. # Submitting This Project When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
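Optionally, here is a short sketch (assuming `tv_script` from the generation cell above; the filename is illustrative) for keeping a copy of the generated script alongside the notebook:

```
# Save the generated script to a text file for reference
with open('generated_tv_script.txt', 'w') as f:
    f.write(tv_script)
```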
github_jupyter
## Importing Libraries & getting Data ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings('ignore') data = pd.read_csv("dataset/winequalityN.csv") data.head() data.info() data.describe() data.columns columns = ['type', 'fixed acidity', 'volatile acidity', 'citric acid','residual sugar', 'chlorides', 'free sulfur dioxide','total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol','quality'] data['type'] sns.countplot(data=data ,x="quality") ``` ## Handling Missing Values ``` sns.heatmap(data.isnull(), yticklabels=False, cmap="viridis", cbar=False) data.isnull().values.sum() # replacing missing values with mean data = data.fillna(data.mean()) sns.heatmap(data.isnull(), yticklabels=False, cmap="viridis", cbar=False) # as 'type' is categorical variable ,remove it from the list of our feature columns labels = data.pop('type') cat_columns = ['fixed acidity', 'volatile acidity', 'citric acid','residual sugar', 'chlorides', 'free sulfur dioxide','total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol','quality'] data.head() ``` ## Scaling & Encoding ``` from sklearn.preprocessing import MinMaxScaler , LabelEncoder def scale_data(data): scaler = MinMaxScaler(feature_range=(0,1)) X = np.array(data) X = scaler.fit_transform(X) return X , scaler def encode_data(labels): y = np.array(labels) le = LabelEncoder() y = le.fit_transform(y) return y , le # another way to encode # labels.type = labels.type.apply(lambda x: 0 if x == "red" else 1) X , scaler = scale_data(data) print(X) print(scaler.inverse_transform(X)) y , le = encode_data(labels) print(y) print(le.inverse_transform(y)) ``` ## EDA ``` plt.figure(figsize=(10,10)) sns.heatmap(data.corr() , annot=True) plt.show() ``` ### For Handling Outliers ``` def univariate(var): sns.boxplot(data=data , y=var) plt.show() cat_columns univariate('fixed acidity') univariate('volatile acidity') univariate('citric acid') univariate('pH') univariate('sulphates') univariate('alcohol') univariate('total sulfur dioxide') univariate('chlorides') univariate('residual sugar') ``` ### Density and pH ``` sns.displot(data ,x="density" ,color='r',col="quality") sns.displot(data, x="pH", color='g', col="quality") ``` ## Bivariate Analysis ``` data['quality'].describe() ``` ### Numerical variables vs Target variable ``` for i in cat_columns: fig , ax = plt.subplots(1,3,figsize=(20,5)) plt.subplots_adjust(hspace=1) sns.barplot(data=data , y=i ,x="quality" , ax=ax[0]) sns.lineplot(data=data, y=i, x="quality", ax=ax[1]) sns.violinplot(data=data, y=i, x="quality", ax=ax[2]) ``` ## Model building with Random Forest classifier ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.33, random_state=42) from sklearn.ensemble import RandomForestClassifier rfc = RandomForestClassifier() rfc.fit(X_train , y_train) y_predicted = rfc.predict(X_test) y_predicted[:15] , y_test[:15] ``` ## Evaluation ``` from sklearn.metrics import accuracy_score ,confusion_matrix print("Accuracy :" , (accuracy_score(y_predicted , y_test))) sns.heatmap(confusion_matrix(y_predicted ,y_test),annot=True ,cmap='Purples' ,fmt='.4g') ```
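To see which inputs the classifier relied on most, here is a short sketch (assuming the fitted `rfc` and the `cat_columns` list defined above) of the random forest's feature importances:

```
# Rank features by how much the random forest relied on them
importances = pd.Series(rfc.feature_importances_, index=cat_columns).sort_values(ascending=False)
print(importances)
importances.plot(kind='bar', title='Random forest feature importances')
```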
github_jupyter
``` from fastai import * from fastai.vision import * from fastai.callbacks import * from fastai.utils.mem import * from fastai.vision.gan import * from PIL import Image import numpy as np import torch import torch.nn.functional as F import torch.nn as nn from torch.utils.data import DataLoader from torch.utils.data.dataset import TensorDataset import pdb path = Path()/'data'/'horse2zebra' ``` # Custom DataBunch Object ``` import fastai.vision.image as im class DoubleImage(ItemBase): def __init__(self, img1, img2): self.img1,self.img2 = img1,img2 self.data = [(-1+2*img1.data),(-1+2*img2.data)] def apply_tfms(self, tfms, **kwargs): self.img1 = self.img1.apply_tfms(tfms, **kwargs) self.img2 = self.img2.apply_tfms(tfms, **kwargs) self.data = [-1+2*self.img1.data,-1+2*self.img2.data] return self def __repr__(self)->str: return f'{self.__class__.__name__}' def to_one(self): tensor = 0.5+torch.cat(self.data,2)/2 return im.Image(tensor) class DoubleImageList(ImageList): def __init__(self, items, itemsB=None, **kwargs): super().__init__(items, **kwargs) self.itemsB = itemsB self.copy_new.append('itemsB') def get(self, i): img1 = super().get(i) fn = self.itemsB[random.randint(0, len(self.itemsB)-1)] return DoubleImage(img1, open_image(fn)) def reconstruct(self, t:Tensor): return t @classmethod def from_folders(cls, path, folderA, folderB, **kwargs): itemsB = ImageList.from_folder(path/folderB).items res = super().from_folder(path/folderA, itemsB=itemsB, **kwargs) res.path = path return res def transform(self, tfms:Optional[Tuple[TfmList,TfmList]]=(None,None), **kwargs): "Set `tfms` to be applied to the xs of the train and validation set." if not tfms: tfms=(None,None) assert is_listy(tfms) and len(tfms) == 2, "Please pass a list of two lists of transforms (train and valid)." self.train.transform(tfms[0], **kwargs) self.valid.transform(tfms[1], **kwargs) if self.test: self.test.transform(tfms[1], **kwargs) return self def show_xys(self, xs, ys, figsize:Tuple[int,int]=(12,6), **kwargs): "Show the `xs` and `ys` on a figure of `figsize`. `kwargs` are passed to the show method." rows = int(math.sqrt(len(xs))) fig, axs = plt.subplots(rows,rows,figsize=figsize) for i, ax in enumerate(axs.flatten() if rows > 1 else [axs]): xs[i] = DoubleImage((xs[i][0]/2+0.5),(xs[i][1]/2+0.5)) xs[i].to_one().show(ax=ax, **kwargs) plt.tight_layout() #UNTESTED def show_xyzs(self, xs, ys, zs, figsize:Tuple[int,int]=None, **kwargs): """Show `xs` (inputs), `ys` (targets) and `zs` (predictions) on a figure of `figsize`. `kwargs` are passed to the show method.""" figsize = ifnone(figsize, (12,3*len(xs))) fig,axs = plt.subplots(len(xs), 2, figsize=figsize) fig.suptitle('Ground truth / Predictions', weight='bold', size=14) for i,(x,z) in enumerate(zip(xs,zs)): x.to_one().show(ax=axs[i,0], **kwargs) z.to_one().show(ax=axs[i,1], **kwargs) data = DoubleImageList.from_folders(path, 'horse', 'zebra').split_by_rand_pct(0.2).label_from_folder() data = ImageDataBunch.create_from_ll(data, bs=1, size=224) data.show_batch() ``` # MultiUnet Trainer ``` class UnetBlock(nn.Module): "A quasi-UNet block, using `PixelShuffle_ICNR upsampling`." 
def __init__(self, up_in_c:int, x_in_c:int, hook:Hook, final_div:bool=True, blur:bool=False, leaky:float=None, self_attention:bool=False): super().__init__() self.hook = hook self.shuf = PixelShuffle_ICNR(up_in_c, up_in_c//2, blur=blur, leaky=leaky) self.bn = batchnorm_2d(x_in_c) ni = up_in_c//2 + x_in_c nf = ni if final_div else ni//2 self.conv1 = conv_layer(ni, nf, leaky=leaky) self.conv2 = conv_layer(nf, nf, leaky=leaky, self_attention=self_attention) self.relu = relu(leaky=leaky) def forward(self, up_in:Tensor) -> Tensor: s = self.hook.stored up_out = self.shuf(up_in) ssh = s.shape[-2:] if ssh != up_out.shape[-2:]: up_out = F.interpolate(up_out, s.shape[-2:], mode='nearest') cat_x = self.relu(torch.cat([up_out, self.bn(s)], dim=1)) return self.conv2(self.conv1(cat_x)) def _get_sfs_idxs(sizes:Sizes) -> List[int]: "Get the indexes of the layers where the size of the activation changes." feature_szs = [size[-1] for size in sizes] sfs_idxs = list(np.where(np.array(feature_szs[:-1]) != np.array(feature_szs[1:]))[0]) if feature_szs[0] != feature_szs[1]: sfs_idxs = [0] + sfs_idxs return sfs_idxs class UpBlock(nn.Module): def __init__(self, ni, nf): super(UpBlock, self).__init__() self.bn = batchnorm_2d(nf) self.conv = Conv2dBlock(nf, nf, ks=5, stride=1, norm="bn", activation="relu", padding=2) self.shuf = PixelShuffle_ICNR(ni, nf, blur=False, leaky=None) self.relu = nn.ReLU() def forward(self, xb, body=None): up_out = self.shuf(xb) if(body is not None): ssh = body.shape[-2:] if ssh != up_out.shape[-2:]: up_out = F.interpolate(up_out, body.shape[-2:], mode='nearest') up_out = self.relu(up_out+self.bn(body)) xb = self.conv(up_out) return xb class Conv2dBlock(nn.Module): def __init__(self, ni, nf, ks, stride, norm, activation, padding=1): super(Conv2dBlock, self).__init__() self.pad = nn.ZeroPad2d(padding) norm_dim = nf if norm == 'bn': self.norm = nn.BatchNorm2d(norm_dim) elif norm == 'in': #self.norm = nn.InstanceNorm2d(norm_dim, track_running_stats=True) self.norm = nn.InstanceNorm2d(norm_dim) elif norm == 'ln': self.norm = LayerNorm(norm_dim) elif norm == 'adain': self.norm = AdaptiveInstanceNorm2d(norm_dim) elif norm == 'none': self.norm = None if activation == 'relu': self.activation = nn.ReLU(inplace=True) elif activation == 'lrelu': self.activation = nn.LeakyReLU(0.2, inplace=True) elif activation == 'prelu': self.activation = nn.PReLU() elif activation == 'selu': self.activation = nn.SELU(inplace=True) elif activation == 'tanh': self.activation = nn.Tanh() elif activation == 'none': self.activation = None self.conv = nn.Conv2d(ni, nf, ks, stride) def forward(self, x): x = self.conv(self.pad(x)) if self.norm: x = self.norm(x) if self.activation: x = self.activation(x) return x class LayerNorm(nn.Module): def __init__(self, num_features, eps=1e-5, affine=True): super(LayerNorm, self).__init__() self.num_features = num_features self.affine = affine self.eps = eps if self.affine: self.gamma = nn.Parameter(torch.Tensor(num_features).uniform_()) self.beta = nn.Parameter(torch.zeros(num_features)) def forward(self, x): shape = [-1] + [1] * (x.dim() - 1) # print(x.size()) if x.size(0) == 1: # These two lines run much faster in pytorch 0.4 than the two lines listed below. 
mean = x.view(-1).mean().view(*shape) std = x.view(-1).std().view(*shape) else: mean = x.view(x.size(0), -1).mean(1).view(*shape) std = x.view(x.size(0), -1).std(1).view(*shape) x = (x - mean) / (std + self.eps) if self.affine: shape = [1, -1] + [1] * (x.dim() - 2) x = x * self.gamma.view(*shape) + self.beta.view(*shape) return x class ResBlocks(nn.Module): def __init__(self, num_blocks, dim, norm='in', activation='relu', padding=1): super(ResBlocks, self).__init__() self.model = [] for i in range(num_blocks): self.model += [ResBlock(dim, norm=norm, activation=activation, padding=padding)] self.model = nn.Sequential(*self.model) def forward(self, x): return self.model(x) class ResBlock(nn.Module): def __init__(self, dim, norm='in', activation='relu', padding=1): super(ResBlock, self).__init__() self.model = [] self.model += [Conv2dBlock(dim, dim, 3, 1, norm, activation, padding)] self.model += [Conv2dBlock(dim, dim, 3, 1, norm, activation, padding)] self.model = nn.Sequential(*self.model) def forward(self, x): return self.model(x) + x class MultiUnet(nn.Module): def __init__(self, arch:Callable, pretrained:bool=True, cut=None): super().__init__() self.relu = relu(leaky=None) self.bodyA = create_body(arch, pretrained, cut=-3) self.bodyB = create_body(arch, pretrained, cut=-3) self.sfs_szs = model_sizes(self.bodyA, size=(224,224)) self.sfs_idxs = list(reversed(_get_sfs_idxs(self.sfs_szs))) self.sfsA = hook_outputs([self.bodyA[i] for i in self.sfs_idxs]) x = dummy_eval(self.bodyA, (224, 224)).detach() self.sfsB = hook_outputs([self.bodyB[i] for i in self.sfs_idxs]) x = dummy_eval(self.bodyB, (224, 224)).detach() unet_blocksA = [] x = torch.tensor([]) x = x.new_full((1, 512, 7, 7), 0) up_in_c = [] x_in_c = [] for i,idx in enumerate(self.sfs_idxs): up_in_c.append(int(x.shape[1])) x_in_c.append(int(self.sfs_szs[idx][1])) not_final = i!=len(self.sfs_idxs)-1 block = UnetBlock(int(x.shape[1]), int(self.sfs_szs[idx][1]), self.sfsA[i], final_div=not_final, blur=False, self_attention=False).eval() x = block(x) #DecoderA self.UpBlockA1 = UpBlock(256, 128) self.UpBlockA2 = UpBlock(128, 64) self.UpBlockA3 = UpBlock(64, 64) self.finalDecoderA = nn.Sequential(PixelShuffle_ICNR(64), conv_layer(64, 3)) self.ResA = ResBlocks(4, 256, 'in', 'relu', padding=1) #DecoderB self.UpBlockB1 = UpBlock(256, 128) self.UpBlockB2 = UpBlock(128, 64) self.UpBlockB3 = UpBlock(64, 64) self.ResB = ResBlocks(4, 256, 'in', 'relu', padding=1) self.finalDecoderB = nn.Sequential(PixelShuffle_ICNR(64), conv_layer(64, 3)) #Shared Layers self.sharedEncoderLayer = conv_layer(256, 512, stride=2) self.middleConv = nn.Sequential(nn.BatchNorm2d(512), nn.ReLU(512), conv_layer(512, 512*2, stride=1), nn.Conv2d(512*2, 512, 3, stride=1)) self.UpShared = UpBlock(512, 256) #Tan layer self.tanLayer = nn.Tanh() def EncoderA(self, xb): result = self.bodyA(xb) return result def EncoderB(self, xb): result = self.bodyB(xb) return result def sharedEncoder(self, xb): result = self.sharedEncoderLayer(xb) return result def MiddleConv(self, xb): result = self.middleConv(xb) return result def sharedDecoder(self, xb): return self.UpShared(xb, None) def DecoderA(self, xb, body): xb = self.ResA(xb) xb = self.UpBlockA1(xb, body[0].stored) xb = self.UpBlockA2(xb, body[1].stored) xb = self.UpBlockA3(xb, body[2].stored) return self.finalDecoderA(xb) def DecoderB(self, xb, body): xb = self.ResB(xb) xb = self.UpBlockB1(xb, body[0].stored) xb = self.UpBlockB2(xb, body[1].stored) xb = self.UpBlockB3(xb, body[2].stored) return self.finalDecoderB(xb) def forward(self, a, 
b, *pred): #get initial encodings of both a,b = self.EncoderA(a), self.EncoderB(b) #put both through shared encoder and middle conv a,b = self.sharedEncoder(a), self.sharedEncoder(b) a,b = self.middleConv(a), self.middleConv(b) #put images through shared decoder a,b = self.sharedDecoder(a), self.sharedDecoder(b) #Get images that are supposed to be aToA, bToB = self.DecoderA(a, body=self.sfsA),self.DecoderB(b, body=self.sfsB) #Get switched images aToB, bToA = self.DecoderB(a, body=self.sfsA), self.DecoderA(b, body=self.sfsB) allIm = torch.cat((self.tanLayer(aToA), self.tanLayer(bToB), self.tanLayer(aToB), self.tanLayer(bToA)), 0) return allIm ``` # Critic ``` def conv_and_res(ni, nf): return nn.Sequential(res_block(ni), conv_layer(ni, nf, stride=2, bias=True, use_activ=False, leaky=0.1)) class MultiUNITDiscriminator(nn.Module): def __init__(self): super(MultiUNITDiscriminator, self).__init__() self.convs = nn.Sequential( nn.Conv2d(3, 64, 3, 2, 1), conv_and_res(64, 128), conv_and_res(128, 256), conv_and_res(256, 512), nn.Conv2d(512, 1, 3, stride=1), Flatten() ) def forward(self, not_switched, switched, down=2): not_switched = self.convs(not_switched) switched = self.convs(switched) return (not_switched,switched) class critic_loss(nn.Module): #a is 0 and b is 1 for predictions def forward(self, output, garbage): pred_winter = output[0] pred_summer = output[1] targWin = pred_winter.new_zeros(*pred_winter.size()) targSum = pred_summer.new_ones(*pred_summer.size()) result_winter = F.mse_loss(pred_winter, targWin) result_summer = F.mse_loss(pred_summer, targSum) return result_winter + result_summer critic_learner = Learner(data, MultiUNITDiscriminator(), loss_func=critic_loss(), wd=1e-3) #critic_learner.fit_one_cycle(4, wd=0.1) #critic_learner.save('critic') #critic_learner.load('criticV5-h2z-zfirst') critic_learner.load('criticV5-sum2win-wfirst') #critic_learner.load('criticV5-an2la') ``` # Gan Wrapper ``` class GANLearner(Learner): "A `Learner` suitable for GANs." def __init__(self, data:DataBunch, generator:nn.Module, critic:nn.Module, gen_loss_func:LossFunction, crit_loss_func:LossFunction, n_crit=None, n_gen=None, switcher:Callback=None, gen_first:bool=False, switch_eval:bool=True, show_img:bool=True, clip:float=None, **learn_kwargs): print('in GANLearner') gan = GANModule(generator, critic) loss_func = GANLoss(gen_loss_func, crit_loss_func, gan) switcher = ifnone(switcher, partial(FixedGANSwitcher, n_crit=n_crit, n_gen=n_gen)) super().__init__(data, gan, loss_func=loss_func, callback_fns=[switcher], **learn_kwargs) trainer = GANTrainer(self, clip=clip, switch_eval=switch_eval, show_img=show_img) self.gan_trainer = trainer self.callbacks.append(trainer) class GANModule(nn.Module): "Wrapper around a `generator` and a `critic` to create a GAN." def __init__(self, generator:nn.Module=None, critic:nn.Module=None, gen_mode:bool=True): super().__init__() print('in GANModule') self.gen_mode = gen_mode if generator: self.generator,self.critic = generator,critic def forward(self, *args): return self.generator(*args) if self.gen_mode else self.critic(*args) def switch(self, gen_mode:bool=None): "Put the model in generator mode if `gen_mode`, in critic mode otherwise." self.gen_mode = (not self.gen_mode) if gen_mode is None else gen_mode class GANLoss(GANModule): "Wrapper around `loss_funcC` (for the critic) and `loss_funcG` (for the generator)." 
def __init__(self, loss_funcG:Callable, loss_funcC:Callable, gan_model:GANModule): super().__init__() print('in GANLoss') self.loss_funcG,self.loss_funcC,self.gan_model = loss_funcG,loss_funcC,gan_model def generator(self, output, x_a, x_b): "Evaluate the `output` with the critic then uses `self.loss_funcG` to combine it with `target`." output = torch.split(output, 2, dim=0) x_a_recon, x_b_recon = torch.split(output[0], 1, dim=0) x_ab, x_ba = torch.split(output[1], 1, dim=0) fake_pred_x_aa, fake_pred_x_bb = self.gan_model.critic(x_a_recon, x_b_recon) fake_pred_x_ab, fake_pred_x_ba = self.gan_model.critic(x_ab, x_ba) cycled_output = self.gan_model.generator(x_ba, x_ab) cycle_a = cycled_output[3] cycle_b = cycled_output[2] return self.loss_funcG(x_a, x_b, x_a_recon, x_b_recon, cycle_a, cycle_b, fake_pred_x_ab, fake_pred_x_ba) def critic(self, real_pred, b, c): fake = self.gan_model.generator(b.requires_grad_(False), c.requires_grad_(False)).requires_grad_(True) fake = torch.split(fake, 2, dim=0) fake_ns = torch.split(fake[0], 1, dim=0) fake_s = torch.split(fake[1], 1, dim=0) fake_pred_aToA, fake_pred_bToB = self.gan_model.critic(fake_ns[0], fake_ns[1]) fake_pred_aToB, fake_pred_bToA = self.gan_model.critic(fake_s[0], fake_s[1]) return self.loss_funcC(real_pred[0], real_pred[1], fake_pred_aToA, fake_pred_bToB, fake_pred_aToB, fake_pred_bToA) class GANTrainer(LearnerCallback): "Handles GAN Training." _order=-20 def __init__(self, learn:Learner, switch_eval:bool=False, clip:float=None, beta:float=0.98, gen_first:bool=False, show_img:bool=True): super().__init__(learn) self.switch_eval,self.clip,self.beta,self.gen_first,self.show_img = switch_eval,clip,beta,gen_first,show_img self.generator,self.critic = self.model.generator,self.model.critic def _set_trainable(self): train_model = self.generator if self.gen_mode else self.critic loss_model = self.generator if not self.gen_mode else self.critic requires_grad(train_model, True) requires_grad(loss_model, False) if self.switch_eval: train_model.train() loss_model.eval() def on_train_begin(self, **kwargs): "Create the optimizers for the generator and critic if necessary, initialize smootheners." if not getattr(self,'opt_gen',None): self.opt_gen = self.opt.new([nn.Sequential(*flatten_model(self.generator))]) else: self.opt_gen.lr,self.opt_gen.wd = self.opt.lr,self.opt.wd if not getattr(self,'opt_critic',None): self.opt_critic = self.opt.new([nn.Sequential(*flatten_model(self.critic))]) else: self.opt_critic.lr,self.opt_critic.wd = self.opt.lr,self.opt.wd self.gen_mode = self.gen_first self.switch(self.gen_mode) self.closses,self.glosses = [],[] self.smoothenerG,self.smoothenerC = SmoothenValue(self.beta),SmoothenValue(self.beta) self.recorder.add_metric_names(['gen_loss', 'disc_loss']) self.imgs,self.titles = [],[] def on_train_end(self, **kwargs): "Switch in generator mode for showing results." self.switch(gen_mode=True) def on_batch_begin(self, last_input, last_target, **kwargs): "Clamp the weights with `self.clip` if it's not None, return the correct input." if self.gen_mode: self.last_input = last_input if self.clip is not None: for p in self.critic.parameters(): p.data.clamp_(-self.clip, self.clip) test = {'last_input':last_input,'last_target':last_input} #print(test) return test def on_backward_begin(self, last_loss, last_output, **kwargs): "Record `last_loss` in the proper list." 
last_loss = last_loss.detach().cpu() if self.gen_mode: self.smoothenerG.add_value(last_loss) self.glosses.append(self.smoothenerG.smooth) self.last_gen = last_output.detach().cpu() last_gen_split = torch.split(self.last_gen, 1, 0) self.last_critic_preds_ns = self.gan_trainer.critic(last_gen_split[0].cuda(), last_gen_split[1].cuda()) self.last_critic_preds_s = self.gan_trainer.critic(last_gen_split[2].cuda(), last_gen_split[3].cuda()) else: self.smoothenerC.add_value(last_loss) self.closses.append(self.smoothenerC.smooth) def on_epoch_begin(self, epoch, **kwargs): "Put the critic or the generator back to eval if necessary." self.switch(self.gen_mode) def on_epoch_end(self, pbar, epoch, last_metrics, **kwargs): "Put the various losses in the recorder and show a sample image." if not hasattr(self, 'last_gen') or not self.show_img: return data = self.learn.data inputBPre = torch.unbind(self.last_input[1], dim=0) aToA = im.Image(self.last_gen[0]/2+0.5) bToB = im.Image(self.last_gen[1]/2+0.5) aToB = im.Image(self.last_gen[2]/2+0.5) bToA = im.Image(self.last_gen[3]/2+0.5) self.imgs.append(aToA) self.imgs.append(aToB) self.imgs.append(bToB) self.imgs.append(bToA) self.titles.append(f'Epoch {epoch}-A to A') self.titles.append(f'Epoch {epoch}-A to B') self.titles.append(f'Epoch {epoch}-B to B') self.titles.append(f'Epoch {epoch}-B to A') pbar.show_imgs(self.imgs, self.titles) return add_metrics(last_metrics, [getattr(self.smoothenerG,'smooth',None),getattr(self.smoothenerC,'smooth',None)]) def switch(self, gen_mode:bool=None): "Switch the model, if `gen_mode` is provided, in the desired mode." self.gen_mode = (not self.gen_mode) if gen_mode is None else gen_mode self.opt.opt = self.opt_gen.opt if self.gen_mode else self.opt_critic.opt self._set_trainable() self.model.switch(gen_mode) self.loss_func.switch(gen_mode) class FixedGANSwitcher(LearnerCallback): "Switcher to do `n_crit` iterations of the critic then `n_gen` iterations of the generator." def __init__(self, learn:Learner, n_crit=5, n_gen=1): super().__init__(learn) self.n_crit,self.n_gen = 1,1 def on_train_begin(self, **kwargs): "Initiate the iteration counts." self.n_c,self.n_g = 0,0 def on_batch_end(self, iteration, **kwargs): "Switch the model if necessary." 
if self.learn.gan_trainer.gen_mode: self.n_g += 1 n_iter,n_in,n_out = self.n_gen,self.n_c,self.n_g else: self.n_c += 1 n_iter,n_in,n_out = self.n_crit,self.n_g,self.n_c target = n_iter if isinstance(n_iter, int) else n_iter(n_in) if target == n_out: self.learn.gan_trainer.switch() self.n_c,self.n_g = 0,0 ``` # Training ``` class disc_loss(nn.Module): #a is 0 and b is 1 for predictions def forward(self, real_pred_a, real_pred_b, aToA, bToB, aToB, bToA): loss = 0 #Real Image Predictions loss += F.mse_loss(real_pred_a, real_pred_a.new_zeros(*real_pred_a.size())) loss += F.mse_loss(real_pred_b, real_pred_b.new_zeros(*real_pred_b.size())) #Translated Predictions loss += F.mse_loss(aToB, aToB.new_zeros(*aToB.size())) loss += F.mse_loss(bToA, bToA.new_ones(*bToA.size())) return loss class gen_loss(nn.Module): def content_similar(self, input, target): return F.l1_loss(input, target)*(10) def should_look_like_a(self, input_fake_pred): target = input_fake_pred.new_zeros(*input_fake_pred.size()) return F.mse_loss(input_fake_pred, target) def should_look_like_b(self, input_fake_pred): target = input_fake_pred.new_ones(*input_fake_pred.size()) return F.mse_loss(input_fake_pred, target) def forward(self, x_a, x_b, x_a_recon, x_b_recon, x_a_cycled, x_b_cycled, fake_pred_x_ab, fake_pred_x_ba): loss = 0 x_a, x_b, x_a_recon, x_b_recon = torch.unbind(x_a, dim=0)[0], torch.unbind(x_b, dim=0)[0], torch.unbind(x_a_recon, dim=0)[0], torch.unbind(x_a_recon, dim=0)[0] loss += self.should_look_like_a(fake_pred_x_ba) loss += self.should_look_like_b(fake_pred_x_ab) loss += self.content_similar(x_a, x_a_recon)*(0.5) loss += self.content_similar(x_b, x_b_recon)*(0.5) loss += self.content_similar(x_a, x_a_cycled) loss += self.content_similar(x_b, x_b_cycled) return loss ``` # GAN Training ``` generator = MultiUnet(models.resnet34) multiGan = GANLearner(data, generator=generator, critic=critic_learner.model, gen_loss_func=gen_loss(), crit_loss_func=disc_loss(), opt_func=partial(optim.Adam, betas=(0.5,0.99))) multiGan.fit_one_cycle(100, 1e-4) multiGan.load('v5-trial1') ``` # Results ``` #Show input images rows=2 x,y = next(iter(data.train_dl)) beforeA = torch.unbind(x[0], dim=0)[0].cpu() beforeA = im.Image(beforeA/2+0.5) beforeB = torch.unbind(x[1], dim=0)[0].cpu() beforeB = im.Image(beforeB/2+0.5) images = [beforeA, beforeB] fig, axs = plt.subplots(1,2,figsize=(8,8)) for i, ax in enumerate(axs.flatten() if rows > 1 else [axs]): images[i].show(ax=ax) plt.tight_layout() #Show results pred = multiGan.gan_trainer.generator(x[0], x[1], True) predAA = pred[0] predBB = pred[1] predAB = pred[2] predBA = pred[3] predAA = im.Image(predAA.detach()/2+0.5) predBB = im.Image(predBB.detach()/2+0.5) predAB = im.Image(predAB.detach()/2+0.5) predBA = im.Image(predBA.detach()/2+0.5) images = [predAA, predAB, predBB, predBA] titles = ["A to A", "A to B", "B to B", "B to A"] fig, axs = plt.subplots(2,2,figsize=(8,8)) for i, ax in enumerate(axs.flatten() if rows > 1 else [axs]): images[i].show(ax=ax, title=titles[i]) plt.tight_layout() ```
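Finally, a small sketch (assuming `predAB` from the cell above; the filename and the PIL alias are illustrative) for exporting one of the translated images to disk:

```
# Convert the fastai Image tensor (C, H, W in [0, 1]) to a PIL image and save it
from PIL import Image as PILImage
arr = (predAB.data.cpu().numpy().transpose(1, 2, 0) * 255).clip(0, 255).astype('uint8')
PILImage.fromarray(arr).save('aToB_sample.png')
```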
github_jupyter
# Self-Driving Car Engineer Nanodegree ## Project: **Finding Lane Lines on the Road** *** In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right. In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project. --- Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image. **Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".** --- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.** --- <figure> <img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> </figcaption> </figure> <p></p> <figure> <img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p> </figcaption> </figure> **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. 
Also, consult the forums for more troubleshooting tips.** ## Import Packages ``` #importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 %matplotlib inline ``` ## Read in an Image ``` #reading in an image image = mpimg.imread('test_images/solidWhiteRight.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimensions:', image.shape) plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray') ``` ## Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:** `cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images `cv2.cvtColor()` to grayscale or change color `cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image **Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** ## Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson! ``` import math def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies a Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. `vertices` should be a numpy array of integer points. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def draw_lines(img, lines, color=[255, 0, 0], thickness=2): """ NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the line segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4). Think about things like separating line segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). 
If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ for line in lines: for x1,y1,x2,y2 in line: cv2.line(img, (x1, y1), (x2, y2), color, thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. """ lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines) return line_img # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., γ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + γ NOTE: initial_img and img must be the same shape! """ return cv2.addWeighted(initial_img, α, img, β, γ) ``` ## Test Images Build your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.** ``` import os os.listdir("test_images/") ``` ## Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report. Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. ``` # TODO: Build your pipeline that will draw lane lines on the test_images # then save them to the test_images_output directory. ``` ## Test on Videos You know what's cooler than drawing lanes over images? Drawing lanes over video! We can test our solution on two provided videos: `solidWhiteRight.mp4` `solidYellowLeft.mp4` **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** **If you get an error that looks like this:** ``` NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download() ``` **Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.** ``` # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # NOTE: The output you return should be a color image (3 channel) for processing video below # TODO: put your pipeline here, # you should return the final output (image where lines are drawn on lanes) return result ``` Let's try the one with the solid white lane on the right first ... 
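Before running the video cell below, the `process_image()` stub above still needs a body. Here is one minimal sketch assembled from the helper functions defined earlier; the Canny thresholds, region-of-interest vertices, and Hough parameters are rough starting guesses rather than tuned values, so adjust them against the test images first.

```
def process_image(image):
    # Minimal sketch: grayscale -> blur -> Canny -> ROI mask -> Hough segments -> overlay
    gray = grayscale(image)
    blur = gaussian_blur(gray, kernel_size=5)
    edges = canny(blur, low_threshold=50, high_threshold=150)
    h, w = image.shape[0], image.shape[1]
    # Rough trapezoid around the lane area; tune these fractions for your camera view
    vertices = np.array([[(0.05 * w, h), (0.45 * w, 0.60 * h),
                          (0.55 * w, 0.60 * h), (0.95 * w, h)]], dtype=np.int32)
    masked = region_of_interest(edges, vertices)
    line_img = hough_lines(masked, rho=2, theta=np.pi / 180, threshold=20,
                           min_line_len=20, max_line_gap=100)
    # Blend the detected segments onto the original frame
    return weighted_img(line_img, image)
```

The same function can be reused on the still images in `test_images/` before moving on to the clips.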
``` white_output = 'test_videos_output/solidWhiteRight.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5) clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! %time white_clip.write_videofile(white_output, audio=False) ``` Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. ``` HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output)) ``` ## Improve the draw_lines() function **At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".** **Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky! ``` yellow_output = 'test_videos_output/solidYellowLeft.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5) clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output)) ``` ## Writeup and Submission If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. ## Optional Challenge Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! 
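If you have not yet modified `draw_lines()` as described in the section above, here is one hedged sketch of the slope-based averaging and extrapolation idea; the 0.5 slope cutoff and the 0.6-image-height top of the lane region are illustrative values to tune. Because `hough_lines()` looks up `draw_lines` by name at call time, re-running the pipeline after redefining it will pick up the new version, which may also help on the challenge clip below.

```
def draw_lines(img, lines, color=[255, 0, 0], thickness=10):
    # Sketch: split segments into left/right lanes by slope sign, average each
    # group's slope and intercept, then extrapolate from the bottom of the image
    # up to a fixed height near the top of the region of interest.
    if lines is None:
        return
    left, right = [], []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue  # skip vertical segments
            slope = (y2 - y1) / (x2 - x1)
            if abs(slope) < 0.5:
                continue  # drop near-horizontal noise
            (left if slope < 0 else right).append((slope, y1 - slope * x1))
    y_bottom = img.shape[0]
    y_top = int(0.6 * img.shape[0])
    for group in (left, right):
        if not group:
            continue
        slope = np.mean([s for s, b in group])
        intercept = np.mean([b for s, b in group])
        x_bottom = int((y_bottom - intercept) / slope)
        x_top = int((y_top - intercept) / slope)
        cv2.line(img, (x_bottom, y_bottom), (x_top, y_top), color, thickness)
```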
``` challenge_output = 'test_videos_output/challenge.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5) clip3 = VideoFileClip('test_videos/challenge.mp4') challenge_clip = clip3.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output)) ```
github_jupyter
``` import os os.environ['CUDA_VISIBLE_DEVICES'] = '3' import numpy as np import tensorflow as tf import json with open('dataset-bpe.json') as fopen: data = json.load(fopen) train_X = data['train_X'] train_Y = data['train_Y'] test_X = data['test_X'] test_Y = data['test_Y'] EOS = 2 GO = 1 vocab_size = 32000 train_Y = [i + [2] for i in train_Y] test_Y = [i + [2] for i in test_Y] from tensor2tensor.utils import beam_search def pad_second_dim(x, desired_size): padding = tf.tile([[[0.0]]], tf.stack([tf.shape(x)[0], desired_size - tf.shape(x)[1], tf.shape(x)[2]], 0)) return tf.concat([x, padding], 1) class Translator: def __init__(self, size_layer, num_layers, embedded_size, learning_rate, beam_width = 5): def cells(size_layer = size_layer, reuse=False): return tf.nn.rnn_cell.LSTMCell(size_layer,initializer=tf.orthogonal_initializer(),reuse=reuse) def attention(encoder_out, seq_len, reuse=False): attention_mechanism = tf.contrib.seq2seq.LuongAttention(num_units = size_layer, memory = encoder_out, memory_sequence_length = seq_len) return tf.contrib.seq2seq.AttentionWrapper( cell = tf.nn.rnn_cell.MultiRNNCell([cells(reuse=reuse) for _ in range(num_layers)]), attention_mechanism = attention_mechanism, attention_layer_size = size_layer) self.X = tf.placeholder(tf.int32, [None, None]) self.Y = tf.placeholder(tf.int32, [None, None]) self.X_seq_len = tf.count_nonzero(self.X, 1, dtype = tf.int32) self.Y_seq_len = tf.count_nonzero(self.Y, 1, dtype = tf.int32) batch_size = tf.shape(self.X)[0] embeddings = tf.Variable(tf.random_uniform([vocab_size, embedded_size], -1, 1)) encoder_out = tf.nn.embedding_lookup(embeddings, self.X) for n in range(num_layers): (out_fw, out_bw), (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn( cell_fw = cells(size_layer // 2), cell_bw = cells(size_layer // 2), inputs = encoder_out, sequence_length = self.X_seq_len, dtype = tf.float32, scope = 'bidirectional_rnn_%d'%(n)) encoder_out = tf.concat((out_fw, out_bw), 2) bi_state_c = tf.concat((state_fw.c, state_bw.c), -1) bi_state_h = tf.concat((state_fw.h, state_bw.h), -1) bi_lstm_state = tf.nn.rnn_cell.LSTMStateTuple(c=bi_state_c, h=bi_state_h) encoder_state = tuple([bi_lstm_state] * num_layers) main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1]) decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1) dense = tf.layers.Dense(vocab_size) with tf.variable_scope('decode'): decoder_cells = attention(encoder_out, self.X_seq_len) states = decoder_cells.zero_state(batch_size, tf.float32).clone(cell_state=encoder_state) training_helper = tf.contrib.seq2seq.TrainingHelper( inputs = tf.nn.embedding_lookup(embeddings, decoder_input), sequence_length = self.Y_seq_len, time_major = False) training_decoder = tf.contrib.seq2seq.BasicDecoder( cell = decoder_cells, helper = training_helper, initial_state = states, output_layer = dense) training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode( decoder = training_decoder, impute_finished = True, maximum_iterations = tf.reduce_max(self.Y_seq_len)) self.training_logits = training_decoder_output.rnn_output with tf.variable_scope('decode', reuse=True): encoder_out_tiled = tf.contrib.seq2seq.tile_batch(encoder_out, beam_width) encoder_state_tiled = tf.contrib.seq2seq.tile_batch(encoder_state, beam_width) X_seq_len_tiled = tf.contrib.seq2seq.tile_batch(self.X_seq_len, beam_width) decoder_cell = attention(encoder_out_tiled, X_seq_len_tiled, reuse=True) states = decoder_cell.zero_state(batch_size * beam_width, tf.float32).clone( cell_state = encoder_state_tiled) 
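            # Note (added comment): the encoder outputs, encoder state, and sequence
            # lengths were tiled `beam_width` times above so that every beam hypothesis
            # attends over the same encoder context. The BeamSearchDecoder below keeps
            # the `beam_width` highest-scoring partial translations at each step, and
            # `fast_result` later takes `predicted_ids[:, :, 0]`, the top-ranked beam.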
predicting_decoder = tf.contrib.seq2seq.BeamSearchDecoder( cell = decoder_cell, embedding = embeddings, start_tokens = tf.tile(tf.constant([GO], dtype=tf.int32), [batch_size]), end_token = EOS, initial_state = states, beam_width = beam_width, output_layer = dense, length_penalty_weight = 0.0) predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode( decoder = predicting_decoder, impute_finished = False, maximum_iterations = 2 * tf.reduce_max(self.X_seq_len)) self.fast_result = predicting_decoder_output.predicted_ids[:, :, 0] masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32) self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits, targets = self.Y, weights = masks) self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost) y_t = tf.argmax(self.training_logits,axis=2) y_t = tf.cast(y_t, tf.int32) self.prediction = tf.boolean_mask(y_t, masks) mask_label = tf.boolean_mask(self.Y, masks) correct_pred = tf.equal(self.prediction, mask_label) correct_index = tf.cast(correct_pred, tf.float32) self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) size_layer = 512 num_layers = 2 embedded_size = 256 learning_rate = 1e-3 batch_size = 128 epoch = 20 tf.reset_default_graph() sess = tf.InteractiveSession() model = Translator(size_layer, num_layers, embedded_size, learning_rate) sess.run(tf.global_variables_initializer()) pad_sequences = tf.keras.preprocessing.sequence.pad_sequences batch_x = pad_sequences(train_X[:10], padding='post') batch_y = pad_sequences(train_Y[:10], padding='post') sess.run([model.fast_result, model.cost, model.accuracy], feed_dict = {model.X: batch_x, model.Y: batch_y}) import tqdm for e in range(epoch): pbar = tqdm.tqdm( range(0, len(train_X), batch_size), desc = 'minibatch loop') train_loss, train_acc, test_loss, test_acc = [], [], [], [] for i in pbar: index = min(i + batch_size, len(train_X)) batch_x = pad_sequences(train_X[i : index], padding='post') batch_y = pad_sequences(train_Y[i : index], padding='post') feed = {model.X: batch_x, model.Y: batch_y} accuracy, loss, _ = sess.run([model.accuracy,model.cost,model.optimizer], feed_dict = feed) train_loss.append(loss) train_acc.append(accuracy) pbar.set_postfix(cost = loss, accuracy = accuracy) pbar = tqdm.tqdm( range(0, len(test_X), batch_size), desc = 'minibatch loop') for i in pbar: index = min(i + batch_size, len(test_X)) batch_x = pad_sequences(test_X[i : index], padding='post') batch_y = pad_sequences(test_Y[i : index], padding='post') feed = {model.X: batch_x, model.Y: batch_y,} accuracy, loss = sess.run([model.accuracy,model.cost], feed_dict = feed) test_loss.append(loss) test_acc.append(accuracy) pbar.set_postfix(cost = loss, accuracy = accuracy) print('epoch %d, training avg loss %f, training avg acc %f'%(e+1, np.mean(train_loss),np.mean(train_acc))) print('epoch %d, testing avg loss %f, testing avg acc %f'%(e+1, np.mean(test_loss),np.mean(test_acc))) from tensor2tensor.utils import bleu_hook results = [] for i in tqdm.tqdm(range(0, len(test_X), batch_size)): index = min(i + batch_size, len(test_X)) batch_x = pad_sequences(test_X[i : index], padding='post') feed = {model.X: batch_x} p = sess.run(model.fast_result,feed_dict = feed) result = [] for row in p: result.append([i for i in row if i > 3]) results.extend(result) rights = [] for r in test_Y: rights.append([i for i in r if i > 3]) bleu_hook.compute_bleu(reference_corpus = rights, translation_corpus = results) ```
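The notebook never persists the trained weights, so they are lost when the session closes. A minimal sketch using the standard TF1 saver (the checkpoint directory name here is arbitrary):

```
import os

os.makedirs('translator-checkpoint', exist_ok=True)
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'translator-checkpoint/model.ckpt')
```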
github_jupyter
# Workshop 2: Regression and Neural Networks https://github.com/Imperial-College-Data-Science-Society/workshops 1. Introduction to Data Science 2. **Regression and Neural Networks** 3. Classifying Character and Organ Images 4. Demystifying Causality and Causal Inference 5. A Primer to Data Engineering 6. Natural Language Processing (NLP) by using Attention 7. Art and Music using GANs 8. Probabilistic Programming in Practice 9. Missing Data in Supervised Learning ## Today ## You can access the material via: - Binder - Local Jupyter Notebook with a suitable virtual environment and dependencies installed - The PDF slides - Following my slides on MS Teams ![alt-text](icdss.jpeg) # Projects Thoughts? ![https://www.google.com/imgres?imgurl=https%3A%2F%2Fi.pinimg.com%2Foriginals%2Ffb%2Fa9%2F08%2Ffba908499343b32f308b2013dbabd459.jpg&imgrefurl=https%3A%2F%2Fwww.pinterest.com%2Fpin%2F131097039136709839%2F&tbnid=q7dzCXhTlV80sM&vet=12ahUKEwigmeXByL7sAhVM-RQKHVQwAg8QMygIegUIARDnAQ..i&docid=LLUQneqIq-d8RM&w=852&h=480&q=lightbulb%20idea&ved=2ahUKEwigmeXByL7sAhVM-RQKHVQwAg8QMygIegUIARDnAQ](bulb.png) References I used to prepare this session: - Past ICDSS workshops - Patrick Rebischini's notes: http://www.stats.ox.ac.uk/~rebeschi/teaching/AFoL/20/index.html - https://fleuret.org/ee559/ - https://en.wikipedia.org/wiki/Ordinary_least_squares - https://www.astroml.org/book_figures/chapter9/fig_neural_network.html - https://github.com/pytorch/examples/blob/master/mnist/main.py - Lakshminarayanan et al. (2016) http://papers.nips.cc/paper/5234-mondrian-forests-efficient-online-random-forests.pdf - Garnelo et al. (2018) https://arxiv.org/pdf/1807.01622.pdf Other recommended reading: - Regression and Other Stories by Andrew Gelman, Jennifer Hill and Aki Vehtari - Elements of Statistical Learning ## Introduction Suppose we have some $(x_1, y_1),\ldots,(x_{100},y_{100})$ that is generated by $y=2x + \text{noise}$. ``` import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 1, 30) noise = 0.1*np.random.normal(size=30) y = 2*x + noise plt.scatter(x, y) plt.plot(x, 2*x) plt.xlabel("x") plt.ylabel("y") ``` In practice, we don't know the underlying data-generating process, but rather we can pose **"hypotheses"** as to how the data is generated. For example in the above example: - **Linear Models:** $y = x\beta + \sigma\mathcal{N}(0,1)$ where $\beta$ represents the gradient of the slope and $\sigma$ is the amplitude of the noise. - **Nonparametric models:**$y = f(x) + \sigma\mathcal{N}(0,1)$ where $f$ is some function in some hypothesis function space $\mathcal{H}$. E.g. $f(x) = x\beta$ - Neural networks - Regression trees, random forests - Gaussian processes - $y = \sum_{i=1}^T w_i\times f_i(x) + g(x)$ where $w_i$ represent weights, $f_i\in\mathcal{H}$ and $g(x)$ represents the noise for value $x$. - etc... Once we have a hypothesis we can **estimate $f$** using many different tools! ## Out of sample prediction## Given $x_*$, the prediction would be $f_*(x_*)$, where $f_*$ is the estimated function of $f$. But first, to formulate the hypothesis, we need to scrutinate the data via exploratory data analysis. For the data above, clearly a linear model (straight line) plus some small Gaussian noise is sufficient. So the task is just to estimate $\beta$ and $\sigma$ so that the line **fits the dots well**. ## General setting In practice, we have to deal with data coming in different formats and possible generating processes. E.g. 
from the exponential family: - **Count data**: $y_i\sim\text{Poisson}(f(x_i))$ or $y_i\sim\text{NegativeBinomial}(f(x_i))$ - Football goals, disease infection counts - **Binomial or Multinomial**: $y_i\sim\text{Binomial}(n, f(x_i))$, $y_i\sim\text{Multinomial}(f_1(x_i),\ldots, f_k(x_i))$ etc... - Coin toss outcomes, customer subscription outcome, classifying digits or characters - **Gamma**: $y_i\sim \text{Gamma}(k, f(x_i))$ - Rainfall ## Gaussian noise regression For illustration purposes, let's focus on regression in the setting $y=f(x) + \sigma \mathcal{N}(0,1)$ for $f\in\mathcal{H}$ and $\sigma\geq0$. ## Foundations of Statistical Learning Previously, I mentioned that we need to build a function that **fits the dots well**. There are 2 types of notions in statistical learning: **prediction** and **estimation**. We will use the following notation: - $n$ training points - $X_1,\ldots,X_n $ are *features* in a *feature space* $\mathcal{X}$. Could be a mixture of categorial or continuous features. - $Y_1,\ldots,Y_n $ are labels/response in a space $\mathcal{Y}$ (e.g. $\mathbb{R}$ or $\mathbb{R}^k$) - **[For your interest:]** Some probability space $(\mathcal{X}, \mathcal{B}, \mathbb{P})$ where we can measure probabilities of events in the set $\mathcal{B}$. e.g. the set of all possible cointoss outcomes is a set $\mathcal{B}$ - **Hypothesis space** $\mathcal{H}\subset \mathcal{C}:=\{f: f:\mathcal{X}\rightarrow\mathcal{Y}\}$: Restriction of the types of functions we want to use. e.g. for a type of neural network, the multilayer perceptron (MLP) with $m$ layers, we have $\mathcal{H}:= \{f:\mathcal{X}\rightarrow\mathcal{Y}: f(\cdot) = f(\cdot; \sigma_1,\ldots,\sigma_m, W_1,\ldots,W_m), \text{ where }\sigma_i, W_i \text{ are the activation functions and weights} \}$. - **Prediction Loss function** $\ell:\mathcal{H}\times\mathcal{X}\times\mathcal{Y}\rightarrow \mathbb{R}_+$: To define what **fits the dots well** means. ## Prediction We want to pick $f$ such that it minimises the **expected or population risk** when a new independent datapoint $(X, Y)$ comes in $$ f_* := \text{argmin}_{f\in\mathcal{C}} \mathbb{E}_{\mathbb{P}}\left[ \ell(f, X, Y) \right] := \text{argmin}_{f\in\mathcal{C}} r(f) $$ We denote $f_*$ is the **optimum**, which is unknown. We want to construct an approximation to $f_*$ based on the $n$ training points and the hypothesis $\mathcal{H}$ that controls the complexity of $f$. This approximation is close to $$ f_{**}:= \text{argmin}_{f\in\mathcal{H}} \mathbb{E}_{\mathbb{P}}\left[ \ell(f, X, Y) \right] $$ Define the **excess risk** as $$ r(f) - r(f_{**}) = [r(f) - r(f_*)] + [r(f_*) - r(f_{**})], $$ where $f\in \mathcal{H}$. **The goal of statistical learning for prediction is to minimise the excess risk** with respect to the sample size $n$ and the space of functions $\mathcal{H}$. Note that the decomposition yields an approximation and estimation error. Difficult to do in practice, so we need **empirical risk minimisation** via the observed training set $(X_i,Y_i)_{i=1}^n$ as a proxy for the expected/population risk: $$ R(f):= \frac{1}{n}\sum_{i=1}^n \ell(f, X_i, Y_i) ,\quad f_*^R := \text{argmin}_{f\in\mathcal{H}} R(f) $$ to minimise $$ r(f) - r(f_{**}). $$ ## Key takeaways and Bigger Picture:## - It is important to understand the tradeoff between optimisation and statistical errors. - Optimisation is only 1 part of the inequality, and vice versa for statistical modelling errors. More details in Rebischini's notes! 
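To make empirical risk minimisation concrete, here is a small sketch that evaluates the empirical $\ell_2$ risk $R(f)$ on the toy data generated at the start of this notebook for a few candidate slopes (the grid of slopes is arbitrary and purely illustrative):

```
candidate_slopes = [0.5, 1.0, 1.5, 2.0, 2.5]
for beta in candidate_slopes:
    # Empirical risk with the squared-error loss for f(x) = beta * x
    R = np.mean((beta * x - y) ** 2)
    print("f(x) = {:.1f} x  ->  R(f) = {:.4f}".format(beta, R))
# The minimiser over this grid should sit near the true slope of 2 used to generate the data.
```

The OLS fit in the next section finds the exact minimiser of this empirical risk in closed form.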
## Estimation We need: - Some training set of size $n$ generated by $f_*\in\mathcal{H}$ - Loss function $\ell:\mathcal{H}\times\mathcal{H}\rightarrow \mathbb{R}_+$ Return: - An algorithm that returns an estimate of $f_*$ that minimises and controls $\ell(f,f_*)$ based on the $n$ training points and $\mathcal{H}$. ## Back to Gaussian noise regression There are lots of ways we can pose this problem. One way is to use - $\ell(f, X, Y) = ||f(X) - y||_2^2 = (f(X) - y)^2$ - the **\ell_2 loss** - $\ell(f, X, Y) = |f(X) - y|$ - the **\ell_1 loss** - This yields the **mean squared error (MSE)** $R(f) = \frac{1}{n}\sum_{i=1}^n (f(x_i) - y_i)^2$ In theory, these give $$\ell_2: f_{**}(x) = E[Y|X=x]$$ $$\ell_1: f_{**}(x) = \text{Median}[Y|X=x]$$ Depending on the situation, we can either use approximate gradient-based methods (e.g. gradient descent), Monte Carlo methods or the analytical maximum likelihood estimation (MLE). ## Linear regression $$ y = X\beta_0 + \sigma\mathcal{N}(0,1) $$ $\beta_0 = (1,\beta_{0,1},\ldots,\beta_{0,d-1})^T$ - the 1 represents the intercept. We also call this **ordinary least squares**: - Assume that $X$ is full rank $$\hat{\beta} = \text{argmin}_{\beta} ||y- X\beta ||_2^2 \iff X^T(y - X\beta) = 0 \iff \hat{\beta} = X(X^TX)^{-1}X^T y \sim \mathcal{N}(X\beta_0, \sigma^2 X(X^TX)^{-1}X^T)$$ Geometrically: $y - X\hat{\beta} \perp X\beta_0 \iff \hat{\beta}$ minimises $||y-X\beta ||_2^2$ https://en.wikipedia.org/wiki/Ordinary_least_squares: ![](ols.png) Can also solve this via gradient descent: - Remember excess risk <= approximationLoss + statisticalLoss ``` import statsmodels.api as sm # fit the model m = sm.OLS(y, sm.tools.add_constant(x)) res = m.fit() print(res.summary(), "\n sigma~", np.sqrt(sum(res.resid**2) / (30 - 2))) ``` We can see that our algorithm manages to estimate the parameters of the models pretty well: - $\hat{\beta}\approx 2$ with $95\%$ confidence intervals [1.945, 2.091] - $const\approx 0$ with $95\%$ confidence intervals [-0.048, 0.036] - $\hat{\sigma}^2 \approx 0.01$ - **95% confidence intervals** = if I sample the data infinitely many times and estimate infinitely many confidence intervals, I will expect that 95% of the time the confidence intervals will contain the true, unknown parameter value. Given $x_*$ as a test point, the prediction would be $\hat{y} = x_*^T \hat{\beta}$. ``` # Fit of the OLS estimator x_test = np.linspace(1, 2, 10) noise = 0.1*np.random.normal(size=10) y_test = 2*x_test + noise plt.figure(figsize=(4,4)) plt.scatter(x, y) plt.plot(x, res.predict(sm.add_constant(x))) pred_int_train = res.get_prediction(sm.add_constant(x)).conf_int() plt.plot(x, pred_int_train[:,0], 'r--', lw=2); plt.plot(x, pred_int_train[:,1], 'r--', lw=2) # the prediction intervals. Note that htey can be larger plt.scatter(x_test, y_test) plt.plot(x_test, res.predict(sm.add_constant(x_test))) pred_int_test = res.get_prediction(sm.add_constant(x_test)).conf_int() plt.plot(x_test, pred_int_test[:,0], 'r--', lw=2); plt.plot(x_test, pred_int_test[:,1], 'r--', lw=2) plt.xlabel("x"); plt.ylabel("y") ``` ## Other regression methods - Regression trees: Classification and Regression Trees (CART) - XGBoost: Tree-boosting algorithm widely used in production pipelines for firms like Amazon - Random forest: Most popular tree-based algorithm - Mondrian Forest: Nice statistical and online properties ## Regression tree A tree is a histogram or step function. $f(x) = \sum_{k=1}^K \beta_k I(x\in \Omega_k)$. Example of a tree (Lakshminarayanan et al. 
(2016)) ![alt text](mondrian.png) ``` from sklearn.tree import DecisionTreeRegressor # Fit regression model m_tree = DecisionTreeRegressor(max_depth=5) m_tree.fit(np.expand_dims(x, 1), y) y_pred = m_tree.predict(np.expand_dims(x_test, 1)) plt.figure(figsize=(4,4)) plt.scatter(x, y) plt.plot(x, m_tree.predict(np.expand_dims(x, 1))) # the prediction intervals. Note that htey can be larger plt.scatter(x_test, y_test) plt.plot(x_test, y_pred) plt.xlabel("x") plt.ylabel("y") ``` ## XGBoost ``` import xgboost as xgb num_round = 10 m_xgb = xgb.XGBRegressor(objective ='reg:squarederror', n_estimators=1000) m_xgb.fit(np.expand_dims(x, 1), y) plt.figure(figsize=(4,4)) plt.scatter(x, y) plt.plot(x, m_xgb.predict(np.expand_dims(x, 1))) # the prediction intervals. Note that htey can be larger plt.scatter(x_test, y_test) plt.plot(x_test, m_xgb.predict(np.expand_dims(x_test, 1))) plt.xlabel("x") plt.ylabel("y") ``` ## Random Forest This essentially uses bagging: $$\hat{f}(x) = \frac{1}{T}\sum_{t=1}^T \hat{f}_t(x)$$, where $\hat{f}_t$ are trained regression trees from randomly sampled (with replacement) sets $\{(x_j, y_j)_j\}_t$ using random feature subsets. ``` from sklearn.ensemble import RandomForestRegressor from sklearn.datasets import make_regression m_rf = RandomForestRegressor(max_depth=2, random_state=0) m_rf.fit(np.expand_dims(x, 1), y) m_rf.predict(np.expand_dims(x_test, 1)) plt.figure(figsize=(4,4)) plt.scatter(x, y) plt.plot(x, m_rf.predict(np.expand_dims(x, 1))) # the prediction intervals. Note that htey can be larger plt.scatter(x_test, y_test) plt.plot(x_test,m_rf.predict(np.expand_dims(x_test, 1))) plt.xlabel("x") plt.ylabel("y") ``` ## Neural Networks Neural networks are essentially parametric functions that are composed of **layers of neurons**. https://www.astroml.org/book_figures/chapter9/fig_neural_network.html ![alt text](fig_neural_network_1.png) - Multilayer Perceptron (MLP): $f(x) = f_n\circ\cdots f_1(x)$ with $f_j = W_j x + b_j$ with weights $W_j$ and biases $b_j$. - Can also have other useful layers like max-pooling, batch normalisation, attention and convolution (feature extraction). - Parameter optimisation via gradient-based methods such as stochastic gradient descent. Using the backpropagation trick, can allow for efficient optimisation. Optimisation speed can be enhanced using multiple GPU or TPU memory. **Key applications:** - Image processing: classification, denoising, inpainting, generation - Function approximations for complex models and algorithms - Time series, recommendation engines **Key issues:** - Overparameterisation: regularisation and sparsity - Feature engineering - Vanishing gradient: batch normalisation and dropout ## Image Classification ``` # https://github.com/pytorch/examples/blob/master/mnist/main.py # Code is in the folder in the main.py script # Don't run it during the session - might take a while! 
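# Note (added comment): main.py in this folder defines the small CNN (`Net`) imported
# below; running it trains that model and saves the weights. Here a pretrained
# checkpoint (kmnist_cnn.pt) is loaded instead, so this cell stays fast.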
# %run main.py # we will use pretrained models from torchvision import torch from torchvision import datasets, transforms transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ]) # https://github.com/rois-codh/kmnist # We will use the Kuzushiji-MNIST dataset dataset2 = datasets.KMNIST('../data', train=False, transform=transform, download=True) test_loader = torch.utils.data.DataLoader(dataset2, batch_size=16) for img_batch, label_batch in test_loader: print(img_batch.shape, label_batch) break ``` ![alt text](kmnist_examples.png) ``` from torchvision.utils import make_grid # Read images into torch.Tensor all_imgs = img_batch # Visualize sample on a grid img_grid = make_grid(all_imgs, nrow=4) plt.figure(figsize=(5,5)) plt.imshow(img_grid.permute(1, 2, 0).numpy()) from main import Net model = Net() # load some model I pretrained on GPU memory into CPU memory model.load_state_dict(torch.load("kmnist_cnn.pt", map_location=torch.device('cpu'))) model.eval() model(img_batch).argmax(dim=1, keepdim=True), label_batch ``` ## Image Inpainting https://arxiv.org/pdf/1807.01622.pdf ![alt text](np.png) ## High-dimensional regression Overview: - Classical statistics: $d < \infty$, $n\rightarrow\infty$ - Non-asymptotic: $d<\infty, n<\infty$ - Non-parametric: $d\rightarrow \infty, n<\infty$ - Asymptotic: $d\rightarrow \infty, n\rightarrow\infty$ In the realm of high-dimensional statistics, we usually have $d>n$ or e.g. $d= \mathcal{O}(n^\alpha)$, where $d$ is the number of features and $n$ is the number of data points. This happens when you have lots of features and the actual data generating features are **sparse**, i.e. $d$ is large but a small $d_0$ is used or are important for the regression. Therefore the usual linear regression assumption that **$X$ is full rank** will not hold. We can, however, introduce regularisation and use the Least-angle regression (LARS; Efron, Hastie, Johnstone and Tibshirani (2004)) algorithm to fit our model. The Lasso $$ \text{minimise } || y - X\beta||_2^2, \quad \text{subject to } \sum_{j=1}^d |\beta_j| \leq t $$ We now work with a diabetes dataset: Suppose we have a large number of features. We want to **select** the ones that can represent the sparsity. ``` import numpy as np import matplotlib.pyplot as plt from sklearn import linear_model from sklearn import datasets X, y = datasets.load_diabetes(return_X_y=True) print("Computing regularization path using the LARS ...") _, _, coefs = linear_model.lars_path(X, y, method='lasso', verbose=True) xx = np.sum(np.abs(coefs.T), axis=1) xx /= xx[-1] plt.plot(xx, coefs.T) ymin, ymax = plt.ylim() plt.vlines(xx, ymin, ymax, linestyle='dashed') plt.xlabel('|coef| / max|coef|') plt.ylabel('Coefficients') plt.title('LASSO Path') plt.axis('tight') plt.show() ``` ## Importance Notice! We are proposal a constitution change so that we can better deliver quality events to you! Please do participate in our upcoming general meeting (even just coming for a vote will be very helpful!). Some of the changes: - Introduction of more official committee roles - Update of the manifesto e.g. societal goals, motto - we were only founded 3 years ago! More details to come! Thank you for your attention! https://github.com/Imperial-College-Data-Science-Society/workshops 1. Introduction to Data Science 2. **Regression and Neural Networks** 3. Classifying Character and Organ Images 4. Demystifying Causality and Causal Inference 5. A Primer to Data Engineering 6. 
Natural Language Processing (NLP) by using Attention 7. Art and Music using GANs 8. Probabilistic Programming in Practice 9. Missing Data in Supervised Learning ![alt-text](icdss.jpeg)
github_jupyter
``` %matplotlib widget from pathlib import Path from collections import namedtuple import matplotlib.pyplot as plt import numpy as np from numpy.linalg import svd import imageio from scipy import ndimage import h5py import stempy.io as stio import stempy.image as stim # Set up Cori paths ncemhub = Path('/global/cfs/cdirs/ncemhub/4Dcamera/') scratch = Path('/global/cscratch1/sd/percius/') # Set up mothership6 paths hdd1 = Path('/mnt/hdd1') def com_sparse_iterative(electron_events, scan_dimensions, crop_to=(576,576)): # Iterative version. Will be replaced in a future stempy release # Calculate the center of mass as a test com2 = np.zeros((2, scan_dimensions[0]*scan_dimensions[1]), np.float32) for ii, ev in enumerate(electron_events): if len(ev) > 0: x,y = np.unravel_index(ev,(576,576)) mm = len(ev) comx0 = np.sum(x) / mm comy0 = np.sum(y) / mm # Crop around the center keep = (x > comx0-crop_to[0]) & (x <= comx0+crop_to[0]) & (y > comy0-crop_to[1]) & (y <= comy0+crop_to[1]) x = x[keep] y = y[keep] mm = len(x) if mm > 0: comx = np.sum(x) comy = np.sum(y) comx = comx / mm comy = comy / mm else: comx = comx0 comy = comy0 com2[:, ii] = (comy,comx) else: com2[:, ii] = (np.nan, np.nan) com2 = com2.reshape((2, scan_dimensions[0], scan_dimensions[1])) return com2 def planeFit(points): """ p, n = planeFit(points) Given an array, points, of shape (d,...) representing points in d-dimensional space, fit an d-dimensional plane to the points. Return a point, p, on the plane (the point-cloud centroid), and the normal, n. """ points = np.reshape(points, (np.shape(points)[0], -1)) # Collapse trialing dimensions assert points.shape[0] <= points.shape[1], "There are only {} points in {} dimensions.".format(points.shape[1], points.shape[0]) ctr = points.mean(axis=1) x = points - ctr[:,np.newaxis] M = np.dot(x, x.T) # Could also use np.cov(x) here. 
return ctr, svd(M)[0][:,-1] # Close all previous windows to avoid too many windows plt.close('all') # Load a sparse vacuum 4D camera data set scan_num = 105 threshold = 4.0 data_dir = Path('2020.11.23') fname = hdd1 / data_dir / Path('data_scan{}_th{}_electrons.h5'.format(scan_num, threshold)) vacuum_scan = stio.load_electron_counts(fname) print('File: {}'.format(fname)) print('Initial scan dimensions = {}'.format(vacuum_scan.scan_dimensions)) # Show the summed diffraction pattern dp = stim.calculate_sum_sparse(vacuum_scan.data, vacuum_scan.frame_dimensions) fg,ax = plt.subplots(1,1) ax.imshow(dp) # Calculate the com iteratively #com2 = stim.com_sparse(vacuum_scan.data, vacuum_scan.frame_dimensions) com2 = com_sparse_iterative(vacuum_scan.data, vacuum_scan.scan_dimensions, crop_to=(30, 30)) # These will be removed in a future release print('Remove the code below in a future release.') # Nan values to average value np.nan_to_num(com2[0,],copy=False,nan=np.nanmean(com2[0,])) np.nan_to_num(com2[1,],copy=False,nan=np.nanmean(com2[1,])); com2 = com2.reshape((2,*vacuum_scan.scan_dimensions[::-1])) # Remove the outliers by median filtering com2_filt = np.zeros_like(com2) com2_filt[0,] = ndimage.median_filter(com2[0,], size=(3,3)) com2_filt[1,] = ndimage.median_filter(com2[1,], size=(3,3)) com2_median = np.median(com2_filt,axis=(1,2)) fg,ax = plt.subplots(1, 2,sharex=True,sharey=True) ax[0].imshow(com2_filt[0,]-com2_median[0],cmap='bwr',vmin=-25,vmax=25) ax[1].imshow(com2_filt[1,]-com2_median[1],cmap='bwr',vmin=-25,vmax=25) # Fit the COMs to planes to smooth it out YY, XX = np.mgrid[0:com2.shape[1],0:com2.shape[2]] planeCOM0 = planeFit(np.stack((YY,XX,com2_filt[0,]))) planeCOM1 = planeFit(np.stack((YY,XX,com2_filt[1,]))) print(planeCOM0) print(planeCOM1) # Generate points on the plane to fit the dataset size YY, XX = np.mgrid[0:vacuum_scan.scan_dimensions[1], 0:vacuum_scan.scan_dimensions[0]] normal = planeCOM0[1] d = np.dot(-planeCOM0[0], normal) # calculate corresponding z z0 = (-normal[0]*YY - normal[1]*XX - d)/normal[2] normal = planeCOM1[1] d = np.dot(-planeCOM1[0], normal) # calculate corresponding z z1 = (-normal[0]*YY - normal[1]*XX - d)/normal[2] fg,ax = plt.subplots(2,2) ax[0,0].imshow(com2_filt[0,],cmap='bwr') ax[0,1].imshow(z0, cmap='bwr') ax[1,0].imshow(com2_filt[1,],cmap='bwr') ax[1,1].imshow(z1, cmap='bwr'); # Test centering on the vacuum scan itself vacuum_scan_centered = namedtuple('ElectronCountedData', ['data', 'scan_dimensions', 'frame_dimensions']) vacuum_scan_centered.scan_dimensions = vacuum_scan.scan_dimensions vacuum_scan_centered.frame_dimensions = vacuum_scan.frame_dimensions vacuum_scan_centered.data = [] z0_round = np.round(z0).astype(np.int32) - int(z0.mean()) z1_round = np.round(z1).astype(np.int32) - int(z1.mean()) for ev, x, y in zip(vacuum_scan.data, z0_round.ravel(), z1_round.ravel()): evx, evy = np.unravel_index(ev, (576,576)) evx_centered = evx - y evy_centered = evy - x keep = (evx_centered < 576) & (evx_centered >= 0) * (evy_centered < 576) & (evy_centered >= 0) evx_centered = evx_centered[keep] evy_centered = evy_centered[keep] vacuum_scan_centered.data.append(np.ravel_multi_index((evx_centered,evy_centered), (576,576))) vacuum_scan_centered.data = np.array(vacuum_scan_centered.data, dtype=object) dp = stim.calculate_sum_sparse(vacuum_scan.data, vacuum_scan.frame_dimensions) dp2 = stim.calculate_sum_sparse(vacuum_scan_centered.data, vacuum_scan_centered.frame_dimensions) fg,ax = plt.subplots(1,2,sharex=True,sharey=True) ax[0].imshow(dp) ax[1].imshow(dp2) # 
Compare com_filtered to plane fit # Nan values to average value np.nan_to_num(com2[0,],copy=False,nan=np.nanmean(com2[0,])) np.nan_to_num(com2[1,],copy=False,nan=np.nanmean(com2[1,])) fg,ax = plt.subplots(2,2) ax[0,0].imshow(z0,cmap='bwr') ax[0,1].imshow(z1,cmap='bwr') ax[1,0].imshow(com2[0,]-z0,cmap='bwr') ax[1,1].imshow(com2[1,]-z1,cmap='bwr') ``` # Apply to experiment from a sample ``` # Load a sparse 4D camera data set scan_num =102 threshold = 4.0 data_dir = Path('2020.11.23') fname = hdd1 / data_dir / Path('data_scan{}_th{}_electrons.h5'.format(scan_num, threshold)) #fname = Path.home() / Path('data/temp/data_scan{scan_num}_th{}_electrons.h5'.format(scan_num, threshold)) experiment = stio.load_electron_counts(fname) print('File: {}'.format(fname)) print('Initial scan dimensions = {}'.format(experiment.scan_dimensions)) # Generate points on the plane to fit the dataset size factor = (experiment.scan_dimensions[0] / vacuum_scan.scan_dimensions[0], experiment.scan_dimensions[1] / vacuum_scan.scan_dimensions[1]) # Generate positions between vacuum positions YY, XX = np.mgrid[0:experiment.scan_dimensions[0], 0:experiment.scan_dimensions[1]] YY = YY.astype('<f4') / factor[1] XX = XX.astype('<f4') / factor[0] normal = planeCOM0[1] d = np.dot(-planeCOM0[0], normal) # calculate corresponding z z0 = (-normal[0]*YY - normal[1]*XX - d)/normal[2] normal = planeCOM1[1] d = np.dot(-planeCOM1[0], normal) # calculate corresponding z z1 = (-normal[0]*YY - normal[1]*XX - d)/normal[2] # Round to integers z0_round = np.round(z0 - z0.mean()).astype(np.int64) z1_round = np.round(z1 - z1.mean()).astype(np.int64) fg,ax = plt.subplots(2,2) ax[0,0].imshow(z0,cmap='bwr') ax[0,1].imshow(z0_round, cmap='bwr') ax[1,0].imshow(z1,cmap='bwr') ax[1,1].imshow(z1_round, cmap='bwr'); # Use the fitted plane from the vacuum scan to recenter the events scan_centered = [] for ev, x, y in zip(experiment.data, z0_round.ravel(), z1_round.ravel()): evx, evy = np.unravel_index(ev, (576,576)) evx_centered = evx - y # need to flip x and y evy_centered = evy - x # Some events will get pushed off the detetor by the shift. Remove them keep = (evx_centered < 576) & (evx_centered >= 0) & (evy_centered < 576) & (evy_centered >= 0) evx_centered = evx_centered[keep] evy_centered = evy_centered[keep] scan_centered.append(np.ravel_multi_index((evx_centered,evy_centered), (576,576))) scan_centered = np.array(scan_centered, dtype=object) # Create a stempy counted data namedtuple experiment_centered = namedtuple('ElectronCountedData', ['data', 'scan_dimensions', 'frame_dimensions']) experiment_centered.data = scan_centered experiment_centered.scan_dimensions = experiment.scan_dimensions[::1] experiment_centered.frame_dimensions = experiment.frame_dimensions dp = stim.calculate_sum_sparse(experiment.data, experiment.frame_dimensions) dp2 = stim.calculate_sum_sparse(experiment_centered.data, experiment_centered.frame_dimensions) fg,ax = plt.subplots(2,1,sharex=True,sharey=True) ax[0].imshow(dp) ax[1].imshow(np.log(dp2+0.1)) # Save to a stempy dataset out_name = fname.with_name('data_scan{}_th{}_electrons_centered.h5'.format(scan_num,threshold)) stio.save_electron_counts(out_name,experiment_centered) print(out_name) ```
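As a quick sanity check (a sketch, not part of the original workflow), one could rerun the iterative centre-of-mass routine defined at the top of this notebook on the centred events and refit the planes. If the descan correction worked, the fitted plane normals should be nearly parallel to the COM axis, and any remaining structure in the COM maps should reflect the sample's differential phase contrast rather than a linear descan ramp. Note that this loops over every probe position, so it can take a while on a full scan.

```
com_check = com_sparse_iterative(experiment_centered.data,
                                 experiment_centered.scan_dimensions,
                                 crop_to=(30, 30))
np.nan_to_num(com_check[0,], copy=False, nan=np.nanmean(com_check[0,]))
np.nan_to_num(com_check[1,], copy=False, nan=np.nanmean(com_check[1,]))
YY_chk, XX_chk = np.mgrid[0:experiment_centered.scan_dimensions[0],
                          0:experiment_centered.scan_dimensions[1]]
print(planeFit(np.stack((YY_chk, XX_chk, com_check[0,])))[1])
print(planeFit(np.stack((YY_chk, XX_chk, com_check[1,])))[1])
```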
github_jupyter
``` # input # pmid list: ../../data/ft_info/ft_id_lst.csv # (ft json file) ../../data/raw_data/ft/ # (ft abs file) ../../data/raw_data/abs/ # result file at ../../data/raw_data/ft/T0 (all section) # ../../data/raw_data/ft/T1 (no abs), etc # setp 1 download full-text import pandas as pd import pickle import os # get pmid list tar_lst = pd.read_csv("../../data/ft_info/ft_id_lst.csv", dtype=str) tar_lst.head() tar_lst.PMID # _f = os.path.join('../../data/ft_info/', 'PMID_lst') # with open(_f, 'rb' ) as fp: # PMID_lst = pickle.load(fp) # print(len(PMID_lst)) nl = list(tar_lst.PMID.values) import numpy as np np.random.shuffle(nl) nl # PMID_lst # if we only have pmid, we can obatin pmcid by: # # get pmcid # # !wget https://ftp.ncbi.nlm.nih.gov/pub/pmc/PMC-ids.csv.gz # # !gzip -d PMC-ids.csv.gz # _pmc_id_map = pd.read_csv("../../data/ft_info/PMC-ids.csv", dtype=str) # pmc_id_map = _pmc_id_map[['PMCID', 'PMID']] # pmc_id_map = pmc_id_map[pmc_id_map.notnull()] # tar_lst = pmc_id_map[pmc_id_map['PMID'].isin(pmid_l)] import sys sys.path.insert(1, '..') # import download_data # # downloading full-text # tar_id_lst = list(tar_lst.PMCID.values) # tar_dir = '../../data/raw_data/ft/' # url_prefix = "https://www.ncbi.nlm.nih.gov/research/pubtator-api/publications/export/biocjson?pmcids=" # _type='json' # cores=3 # hit_rec = download_data.download_from_lst_hd(tar_id_lst, tar_dir, url_prefix, _type, cores) # # downloading abs (as some full-text have no abstract) # tar_id_lst = list(tar_lst.PMID.values) # tar_dir = '../../data/raw_data/abs/' # url_prefix = "https://www.ncbi.nlm.nih.gov/research/pubtator-api/publications/export/pubtator?pmids=" # _type='abs' # cores=3 # hit_rec = download_data.download_from_lst_hd(tar_id_lst, tar_dir, url_prefix, _type, cores) s_df = pd.read_csv('../../data/ft_info/ft_500_n.tsv', sep = '\t') print('annotated table shape', s_df.shape) s_df = s_df[s_df['2nd_label'] != 0][['pmid', 'geneId', 'diseaseId', '2nd_label']] s_df.rename(columns={'2nd_label':'label'}, inplace=True) s_df.to_csv('../../data/ft_info/labels_n.tsv', sep=',', index=False) %load_ext autoreload %autoreload 2 import parse_data import subprocess # original all sections # in_pmid_d = '/mnt/bal31/jhsu/old/data/ptc/raw_ft/abs_off/' # in_pmcid_d = '/mnt/bal31/jhsu/old/data/ptc/raw_ft/ft/' # import subprocess # _lst = list(tar_lst.PMID.values) # for i in list(_lst): # cmd = 'cp ' + '/mnt/bal31/jhsu/old/data/ptc/raw_ft/abs_off/' + i + ' ' + '../../data/raw_data/abs/' + i # subprocess.check_call(cmd, shell=True) # _lst = list(tar_lst.PMCID.values) # for i in list(_lst): # cmd = 'cp ' + '/mnt/bal31/jhsu/old/data/ptc/raw_ft/ft/' + i + ' ' + '../../data/raw_data/ft/' + i # subprocess.check_call(cmd, shell=True) !mkdir -p ../../data/raw_data/ft/T0 out_dir = '../../data/raw_data/ft/T0/' tar_id_lst = list(tar_lst.PMCID.values) in_pmid_d = '../../data/raw_data/abs/' in_pmcid_d = '../../data/raw_data/ft/' parse_t = 'ft' # 'ft' or 'abs' if_has_s_f = True # if have section file ori_tar=['TITLE', 'ABSTRACT', 'INTRO', 'METHODS', 'RESULTS', 'DISCUSS', 'CONCL'] is_SeFi = True parse_data.parse_data_lst_hd(tar_id_lst, in_pmid_d, in_pmcid_d, parse_t, out_dir, if_has_s_f, ori_tar, is_SeFi) #optional normalize the annotation cmd = 'python ../normalize_ann.py ' + '--in_f ' + out_dir + 'anns.txt' + ' ' + '--out_f ' + out_dir + 'anns_n.txt' print(cmd) subprocess.check_call(cmd, shell=True) !mkdir -p ../../data/raw_data/ft/T1 out_dir = '../../data/raw_data/ft/T1/' tar_id_lst = list(tar_lst.PMCID.values) in_pmid_d = '../../data/raw_data/abs/' 
in_pmcid_d = '../../data/raw_data/ft/' parse_t = 'ft' # 'ft' or 'abs' if_has_s_f = True # if have section file ori_tar=['TITLE', '#', 'INTRO', 'METHODS', 'RESULTS', 'DISCUSS', 'CONCL'] is_SeFi = True parse_data.parse_data_lst_hd(tar_id_lst, in_pmid_d, in_pmcid_d, parse_t, out_dir, if_has_s_f, ori_tar, is_SeFi) #optional normalize the annotation cmd = 'python ../normalize_ann.py ' + '--in_f ' + out_dir + 'anns.txt' + ' ' + '--out_f ' + out_dir + 'anns_n.txt' print(cmd) subprocess.check_call(cmd, shell=True) !mkdir -p ../../data/raw_data/ft/T2 out_dir = '../../data/raw_data/ft/T2/' #ori_tar=['TITLE', 'ABSTRACT', 'INTRO', 'METHODS', 'RESULTS', 'DISCUSS', 'CONCL'] ori_tar=['TITLE', 'ABSTRACT', '#', 'METHODS', 'RESULTS', 'DISCUSS', 'CONCL'] tar_id_lst = list(tar_lst.PMCID.values) in_pmid_d = '../../data/raw_data/abs/' in_pmcid_d = '../../data/raw_data/ft/' parse_t = 'ft' # 'ft' or 'abs' if_has_s_f = True # if have section file is_SeFi = True parse_data.parse_data_lst_hd(tar_id_lst, in_pmid_d, in_pmcid_d, parse_t, out_dir, if_has_s_f, ori_tar, is_SeFi) #optional normalize the annotation cmd = 'python ../normalize_ann.py ' + '--in_f ' + out_dir + 'anns.txt' + ' ' + '--out_f ' + out_dir + 'anns_n.txt' print(cmd) subprocess.check_call(cmd, shell=True) !mkdir -p ../../data/raw_data/ft/T3 out_dir = '../../data/raw_data/ft/T3/' #ori_tar=['TITLE', 'ABSTRACT', 'INTRO', 'METHODS', 'RESULTS', 'DISCUSS', 'CONCL'] ori_tar=['TITLE', 'ABSTRACT', 'INTRO', '#', 'RESULTS', 'DISCUSS', 'CONCL'] tar_id_lst = list(tar_lst.PMCID.values) in_pmid_d = '../../data/raw_data/abs/' in_pmcid_d = '../../data/raw_data/ft/' parse_t = 'ft' # 'ft' or 'abs' if_has_s_f = True # if have section file is_SeFi = True parse_data.parse_data_lst_hd(tar_id_lst, in_pmid_d, in_pmcid_d, parse_t, out_dir, if_has_s_f, ori_tar, is_SeFi) #optional normalize the annotation cmd = 'python ../normalize_ann.py ' + '--in_f ' + out_dir + 'anns.txt' + ' ' + '--out_f ' + out_dir + 'anns_n.txt' print(cmd) subprocess.check_call(cmd, shell=True) !mkdir -p ../../data/raw_data/ft/T4 out_dir = '../../data/raw_data/ft/T4/' #ori_tar=['TITLE', 'ABSTRACT', 'INTRO', 'METHODS', 'RESULTS', 'DISCUSS', 'CONCL'] ori_tar=['TITLE', 'ABSTRACT', 'INTRO', 'METHODS', '#', 'DISCUSS', 'CONCL'] tar_id_lst = list(tar_lst.PMCID.values) in_pmid_d = '../../data/raw_data/abs/' in_pmcid_d = '../../data/raw_data/ft/' parse_t = 'ft' # 'ft' or 'abs' if_has_s_f = True # if have section file is_SeFi = True parse_data.parse_data_lst_hd(tar_id_lst, in_pmid_d, in_pmcid_d, parse_t, out_dir, if_has_s_f, ori_tar, is_SeFi) #optional normalize the annotation cmd = 'python ../normalize_ann.py ' + '--in_f ' + out_dir + 'anns.txt' + ' ' + '--out_f ' + out_dir + 'anns_n.txt' print(cmd) subprocess.check_call(cmd, shell=True) !mkdir -p ../../data/raw_data/ft/T5 out_dir = '../../data/raw_data/ft/T5/' #ori_tar=['TITLE', 'ABSTRACT', 'INTRO', 'METHODS', 'RESULTS', 'DISCUSS', 'CONCL'] ori_tar=['TITLE', 'ABSTRACT', 'INTRO', 'METHODS', 'RESULTS', '#', 'CONCL'] tar_id_lst = list(tar_lst.PMCID.values) in_pmid_d = '../../data/raw_data/abs/' in_pmcid_d = '../../data/raw_data/ft/' parse_t = 'ft' # 'ft' or 'abs' if_has_s_f = True # if have section file is_SeFi = True parse_data.parse_data_lst_hd(tar_id_lst, in_pmid_d, in_pmcid_d, parse_t, out_dir, if_has_s_f, ori_tar, is_SeFi) #optional normalize the annotation cmd = 'python ../normalize_ann.py ' + '--in_f ' + out_dir + 'anns.txt' + ' ' + '--out_f ' + out_dir + 'anns_n.txt' print(cmd) subprocess.check_call(cmd, shell=True) !mkdir -p ../../data/raw_data/ft/T5 out_dir 
= '../../data/raw_data/ft/T5/' #ori_tar=['TITLE', 'ABSTRACT', 'INTRO', 'METHODS', 'RESULTS', 'DISCUSS', 'CONCL'] ori_tar=['TITLE', 'ABSTRACT', 'INTRO', 'METHODS', 'RESULTS', 'DISCUSS', '#'] tar_id_lst = list(tar_lst.PMCID.values) in_pmid_d = '../../data/raw_data/abs/' in_pmcid_d = '../../data/raw_data/ft/' parse_t = 'ft' # 'ft' or 'abs' if_has_s_f = True # if have section file is_SeFi = True parse_data.parse_data_lst_hd(tar_id_lst, in_pmid_d, in_pmcid_d, parse_t, out_dir, if_has_s_f, ori_tar, is_SeFi) #optional normalize the annotation cmd = 'python ../normalize_ann.py ' + '--in_f ' + out_dir + 'anns.txt' + ' ' + '--out_f ' + out_dir + 'anns_n.txt' print(cmd) subprocess.check_call(cmd, shell=True) !mkdir -p ../../data/raw_data/ft/T6 out_dir = '../../data/raw_data/ft/T6/' #ori_tar=['TITLE', 'ABSTRACT', 'INTRO', 'METHODS', 'RESULTS', 'DISCUSS', 'CONCL'] ori_tar=['TITLE', 'ABSTRACT', 'INTRO', '#', 'RESULTS', 'DISCUSS', '#'] tar_id_lst = list(tar_lst.PMCID.values) in_pmid_d = '../../data/raw_data/abs/' in_pmcid_d = '../../data/raw_data/ft/' parse_t = 'ft' # 'ft' or 'abs' if_has_s_f = True # if have section file is_SeFi = True parse_data.parse_data_lst_hd(tar_id_lst, in_pmid_d, in_pmcid_d, parse_t, out_dir, if_has_s_f, ori_tar, is_SeFi) #optional normalize the annotation cmd = 'python ../normalize_ann.py ' + '--in_f ' + out_dir + 'anns.txt' + ' ' + '--out_f ' + out_dir + 'anns_n.txt' print(cmd) subprocess.check_call(cmd, shell=True) # import json # _pmcid = 'PMC7102640' # _pmid = '32171866' # abs_f_path = in_pmid_d + _pmid # print(_pmcid, end=', ') # with open(in_pmcid_d + _pmcid, encoding='utf-8') as f: # data = json.load(f) # rst = parse_data.parse_doc(data, abs_f_path, ori_tar, is_SeFi) # rst !mkdir -p ../../data/raw_data/ft/T7 out_dir = '../../data/raw_data/ft/T7/' ori_tar=['TITLE', 'ABSTRACT', 'INTRO', 'METHODS', 'RESULTS', 'DISCUSS', 'CONCL'] # ori_tar=['TITLE', 'ABSTRACT', 'INTRO', 'METHODS', 'RESULTS', 'DISCUSS', 'CONCL'] # ori_tar=['TITLE', 'ABSTRACT', '#', '#', '#', '#', '#'] tar_id_lst = list(tar_lst.PMCID.values) in_pmid_d = '../../data/raw_data/abs/' in_pmcid_d = '../../data/raw_data/ft/' parse_t = 'ft' # 'ft' or 'abs' if_has_s_f = True # if have section file is_SeFi = True info_l = parse_data.parse_data_lst_hd(tar_id_lst, in_pmid_d, in_pmcid_d, parse_t, out_dir, if_has_s_f, ori_tar, is_SeFi) # #optional normalize the annotation # cmd = 'python ../normalize_ann.py ' + '--in_f ' + out_dir + 'anns.txt' + ' ' + '--out_f ' + out_dir + 'anns_n.txt' # print(cmd) # subprocess.check_call(cmd, shell=True) info_l info_df = pd.DataFrame(info_l, columns=['pmid', 'pmcid', 'ttl_l', 'abs_l', 'par_l', 'txt_l', 'g_#', 'd_#', 'gd_p', 'gd_vp']) info_df.describe() # READING Sentences and tokenizer import argparse import sys import os import pandas as pd import numpy as np from raw import load_documents_vis from raw_handler import init_parser, loading_tokenizer from IPython.display import display, clear_output, HTML import ipywidgets as widgets import sys sys.argv = [''] parser = init_parser() args = parser.parse_args() args.ori_tokenizer = loading_tokenizer(args) args.token_voc_l = len(args.ori_tokenizer) print('tokenizer size %d' % (args.token_voc_l)) # RENET2 input data dir, target GDA file, etc args.raw_data_dir = out_dir args.fix_snt_n, args.fix_token_n = 400, 54 print('fix input sentences# %d, tokens# %d, batch size %d' % (args.fix_snt_n, args.fix_token_n, args.batch_size)) args.no_cache_file = True text_path = os.path.join(args.raw_data_dir, args.file_name_doc) sentence_path = 
os.path.join(args.raw_data_dir, args.file_name_snt) ner_path = os.path.join(args.raw_data_dir, args.file_name_ann) all_ori_seq, ner_df, session_info, ori_ner, all_sentence, all_session_map = load_documents_vis(text_path, sentence_path, ner_path, args.ori_tokenizer, args) def get_token_l(snt_l): tokens_s = 0 snt_s = 0 for snt in snt_l: tokens = tokenize(snt) tokens_l = len(tokens) tokens_s += tokens_l snt_s += 1 # print(tokens_l, tokens) return tokens_s, snt_s # all_sentence['16963499'] get_token_l(all_sentence['16963499']) t_s, t_t = 0, 0 for k, v in all_sentence.items(): _a, _b = get_token_l(v) t_s += _a t_t += _b # print(token_s) # break print(t_s / 500, t_t/500) token_s / 500 from utils.tokenizer import tokenize tokenize(all_sentence['16963499'][0]) all_sentence['16963499'][0] 12.4*16.1 PMID32171866 ```
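As a final check (a small sketch, not part of the original pipeline), it can be useful to confirm what was actually written into each of the ablation directories T0-T7 created above:

```
from pathlib import Path

for t_dir in sorted(Path('../../data/raw_data/ft/').glob('T[0-9]')):
    files = sorted(p.name for p in t_dir.iterdir())
    print(t_dir.name, files)
```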
github_jupyter
``` from __future__ import print_function import warnings warnings.filterwarnings(action='ignore') import keras from keras.datasets import cifar10 from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, BatchNormalization import os batch_size = 16 num_classes = 10 epochs = 15 (x_train, y_train), (x_test, y_test) = cifar10.load_data() print('x_train shape:', x_train.shape) print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') print(y_train) y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) y_train x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 print('Using real-time data augmentation.') # This will do preprocessing and realtime data augmentation: datagen = ImageDataGenerator( featurewise_center=False, # set input mean to 0 over the dataset samplewise_center=False, # set each sample mean to 0 featurewise_std_normalization=False, # divide inputs by std of the dataset samplewise_std_normalization=False, # divide each input by its std zca_whitening=False, # apply ZCA whitening zca_epsilon=1e-06, # epsilon for ZCA whitening rotation_range=60, # randomly rotate images in the range (degrees, 0 to 180) # randomly shift images horizontally (fraction of total width) width_shift_range=0.1, # randomly shift images vertically (fraction of total height) height_shift_range=0.3, shear_range=0., # set range for random shear zoom_range=1.5, # set range for random zoom channel_shift_range=0., # set range for random channel shifts # set mode for filling points outside the input boundaries fill_mode='nearest', cval=0., # value used for fill_mode = "constant" horizontal_flip=True, # randomly flip images vertical_flip=True, # randomly flip images # set rescaling factor (applied before any other transformation) rescale=None, # set function that will be applied on each input preprocessing_function=None, # image data format, either "channels_first" or "channels_last" data_format=None, # fraction of images reserved for validation (strictly between 0 and 1) validation_split=0.0) # Compute quantities required for feature-wise normalization # (std, mean, and principal components if ZCA whitening is applied). 
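# Note (added comment): with featurewise_center, featurewise_std_normalization and
# zca_whitening all disabled above, datagen.fit() has no dataset statistics it
# strictly needs to estimate, though calling it is harmless. The augmentation
# ranges above (60-degree rotations, zoom_range=1.5, vertical flips) are quite
# aggressive for 32x32 CIFAR-10 images and are worth tuning if accuracy stalls.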
datagen.fit(x_train) filepath = "./savemodels/cifar10-model-{epoch:02d}-{val_accuracy:.2f}.hdf5" checkpoint = keras.callbacks.ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max') model = Sequential() model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:])) model.add(Activation('relu')) model.add(Conv2D(32, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Conv2D(64, (3, 3), padding='same')) model.add(Activation('relu')) model.add(Conv2D(64, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(512)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes)) model.add(Activation('softmax')) model.summary() opt = keras.optimizers.rmsprop(lr=0.0001, decay=1e-6) model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy']) history = model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size), epochs=epochs, validation_data=(x_test, y_test), workers=4, callbacks=[checkpoint]) epochs_range = range(15) validation_accuracy = history.history['val_accuracy'] training_accuracy = history.history['accuracy'] import matplotlib.pyplot as plt plt.plot(epochs_range, training_accuracy, 'b+', label='training accuracy') plt.plot(epochs_range, validation_accuracy, 'bo', label='validation accuracy') plt.xlabel('Epochs') plt.ylabel('Validation accuracy') plt.legend() plt.show() scores = model.evaluate(x_test, y_test, verbose=1) print('Test loss:', scores[0]) print('Test accuracy:', scores[1]) ```
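The `ModelCheckpoint` callback above writes the best-scoring weights into `./savemodels/`. As an optional follow-up sketch, you can reload and re-evaluate that checkpoint; the exact filename depends on which epoch scored best, so the most recently saved file is taken, which with `save_best_only=True` is the best one so far.

```
import glob

best_path = sorted(glob.glob('./savemodels/cifar10-model-*.hdf5'), key=os.path.getmtime)[-1]
best_model = keras.models.load_model(best_path)
print(best_path)
print(best_model.evaluate(x_test, y_test, verbose=0))
```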
github_jupyter
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # TensorFlow 2 quickstart for beginners <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/quickstart/beginner"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/id/tutorials/quickstart/beginner.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/id/tutorials/quickstart/beginner.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/id/tutorials/quickstart/beginner.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Note: This document was translated by the TensorFlow community. There is no guarantee that the translation is accurate, and the most up-to-date reference is the [official English documentation](https://www.tensorflow.org/?hl=en), since community translations are provided on a best-effort basis. If you have suggestions to improve this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review community translations, contact the [docs@tensorflow.org list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs). This quickstart uses [Keras](https://www.tensorflow.org/guide/keras/overview) to: 1. Build a neural network that classifies images. 2. Train that neural network. 3. And, finally, evaluate the model's accuracy. This is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Python programs run directly in the browser, which is a great way to learn and use TensorFlow. To follow this tutorial, run the notebook in Google Colab by clicking the button at the top of this page. 1. In the Colab page, connect to a Python runtime: in the menu at the top right, select *CONNECT*. 2. To run all of the notebook's code cells: select *Runtime* > *Run all*. Download and install TensorFlow 2, then import TensorFlow into your program: ``` from __future__ import absolute_import, division, print_function, unicode_literals # Install TensorFlow try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass import tensorflow as tf ``` Load and prepare the [MNIST dataset](http://yann.lecun.com/exdb/mnist/).
Ubah sampel dari bilangan bulat menjadi angka floating-point (desimal): ``` mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 ``` Build model `tf.keras.Sequential` dengan cara menumpuk lapisan layer. Untuk melatih data, pilih fungsi untuk mengoptimalkan dan fungsi untuk menghitung kerugian: ``` model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) ``` Melatih dan mengevaluasi model: ``` model.fit(x_train, y_train, epochs=5) model.evaluate(x_test, y_test, verbose=2) ``` Penggolong gambar tersebut, sekarang dilatih untuk akurasi ~ 98% pada dataset ini. Untuk mempelajari lebih lanjut, baca [tutorial TensorFlow](https://www.tensorflow.org/tutorials/).
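As a small optional follow-up (not part of the original quickstart), the trained model can also be inspected on individual digits. This is a minimal sketch that assumes the `model`, `x_test`, and `y_test` objects defined above are still in memory.

```
import numpy as np

# The final layer uses a softmax activation, so model.predict returns
# per-class probabilities directly.
probabilities = model.predict(x_test[:5])
predicted_digits = np.argmax(probabilities, axis=1)

print('Predicted digits:', predicted_digits)
print('Actual digits:   ', y_test[:5])
```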
github_jupyter
# Interpreting Neural Network Weights Neural nets (especially deep neural nets) are some of the most powerful machine learning algorithms available. However, it can be difficult to understand (intuitively) how they work. In the first part of this notebook, I highlight the connection between neural networks and template matching--a simple technique for classification that is popular in computer vision and signal processing. I find this observation is helpful for intution about how neural nets classify images--I hope you find this useful too! In the second part of the notebook, I point out that for convolutional neural nets it can be helpful to think of weights as sliding filters (e.g. edge detecting filters) which in the early network layers detect low-level image features . ## Template Matching [Template matching](https://en.wikipedia.org/wiki/Template_matching) is used in computer vision to compare images. It does this by treating images as vectors and computing their dot product: very similar images give a large dot product, and very disimilar images give a small (or negative) dot product. Why? Mathematically, if you represent images as vectors, you can compute the difference between two images $I_1$ and $I_2$ like $$|I_1 - I_2 |^2 = |I_1|^2 + |I_2|^2 - 2 \, I_1 \cdot I_2$$ Note that the dot product $I_1 \cdot I_2$ between two images is largest when the difference $|I_1 - I_2|$ between images is smallest, and vice versa. For example, here's a template for each digit: ``` import matplotlib.pyplot as plt import cv2 templates = [] for i in range(10): img = cv2.imread("templates/{}.png".format(str(i)), cv2.IMREAD_GRAYSCALE) if img is None: raise Exception("Error: Failed to load image {}".format(i)) templates.append(img) plt.subplot(2, 5, 1 + i) plt.imshow(img, cmap=plt.get_cmap('gray')) plt.show() ``` We can illustrate template matching by computing the dot products between digit 1 and every other digit. To make the results more robust, we compute the normalized dot product $$ \frac{I_1 \cdot I_2}{|I_1| |I_2|}$$ (It's important to normalize the dot product--otherwise brighter images will give stronger matches than darker ones, and that would not make sense.) ``` img = templates[1] for i in range(10): template = templates[i] print(" Normalized dot product between 1 and {} = {}".format(i, cv2.matchTemplate(img, template, cv2.TM_CCORR_NORMED )[0][0] )) ``` We can see that 1 is more strongly correlated to 1 than any other digit. That's the principle behind template matching--it can measure image similarity. Unfortunately, template matching is not robust to changes in image shapes, sizes, rotations, or partial occlusion. However, neural nets can be very robust to such image defects--that's why they are more powerful. ## Viewing Neural Network Weights as Templates In a neuron inside neural net, inputs $x$ (a vector) are combined with weights $W$ (another vector) to generate an output. Pictorally <img src="figs/neuron.png" style="width: 250px;"> The output of the neuron is computed by $$ f( W \cdot x + b) $$ where $f$ is called the *activation function* and b is called the *bias term*. Most important for this discussion, the dot product $W \cdot x$ resembles template matching. As we will show, in very simple neural nets (sometimes called *linear classifiers*) we can interpret the weights $W$ as templates--the neural net learns how to perform template matching! We want to make a linear classifier to recognize digits 0-9. We implement a softmax architecture (shown below) with 10 outputs. 
For example, if digit 7 is recognized, neuron 7 will have an output close to 1 and the remaining neurons will have outputs close to 0. (FYI, this means we will have to one-hot encode the labels before training.) The input (the image) is $x$, which we draw as a flattened (1d) vector. There are 10 weight vectors $W_0$ - $W_9$ (one for each neuron). <img src="figs/nnet.png" style="width: 400px;"> We write the $i^{\mathrm th}$ output as $$ \mathrm{output}_i = f( W_{i} \cdot x + b_i)$$ As we said, we expect each weight vector $W_i$ learned during training will be a template for digit $i$. Let's train the neural net on the MNIST data set (a set of 70,000 images of hand-written digits 0-9.) We'll use Keras to implement and train the neural net. ``` #Developed with Keras 2.0.2 and the tensorflow backend #Load the MNIST data set of handwritten digits 0-9: import numpy as np from keras.datasets import mnist from keras.utils import np_utils (x_train, y_train), (x_test, y_test) = mnist.load_data() #Invert the image colors, convert to float, and normalize values: x_train = (255 - x_train).astype('float')/255.0 x_test = (255 - x_test).astype('float')/255.0 # plot first 5 images for i in range(5): plt.subplot(1, 5, 1+i) plt.imshow(x_train[i], cmap=plt.get_cmap('gray')) plt.show() #Let's flatten the images to 1-d vectors for processing image_shape = (28, 28) num_pixels = 28*28 x_train = x_train.reshape(x_train.shape[0], num_pixels) x_test = x_test.reshape(x_test.shape[0], num_pixels) #Now let's one-hot encode the target variable before training y_train = np_utils.to_categorical(y_train) y_test = np_utils.to_categorical(y_test) num_classes = y_test.shape[1] ``` We can see that after 1-hot encoding, each digit label becomes a 10-d vector. For example, for the digit 1, there is a 1 in position 1 and 0's in all other positions of the 10-d vector: ``` y_train[3] ``` Now let us create the neural net and train it ``` from keras.models import Sequential from keras.layers import Dense from keras.utils import np_utils from keras.utils import plot_model # fix random seed for reproducibility seed = 123 np.random.seed(seed) # Define the model: def linear_classifier_model(): """single layer, 10 output classes""" # create model model = Sequential() # one layer model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax', input_shape=(num_pixels,))) # Compile model model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model #Instantiate model model = linear_classifier_model() #Train model model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, batch_size=200, verbose=2) # Final evaluation of the model scores = model.evaluate(x_test, y_test, verbose=0) print("Baseline Error: %.2f%%" % (100-scores[1]*100)) ``` Keras lets us easily visualize the model to check that it has the correct architecture ``` plot_model(model, to_file='figs/model.png', show_shapes=True) ``` <img src="figs/model.png"> Finally, let us grab the weights from layer dense_1, unflatten the shape, and graph them: ``` layer_dense_1 = model.get_layer('dense_1') weights1 = layer_dense_1.get_weights()[0] bias1 = layer_dense_1.get_weights()[1] #Cast shape to 2d weights1 = weights1.reshape(28, 28, 10) #lot the weights for the first 4 digits for i in range(4): plt.subplot(1, 4, 1 + i) plt.imshow(weights1[:, :, i], cmap=plt.get_cmap('gray')) plt.show() ``` We can see that indeed the learned weights resember digits 0, 1, 2, 3, ... just as we expected of template matching. 
For further details, take a look at [http://cs231n.github.io/linear-classify/](http://cs231n.github.io/linear-classify/) ## Filters in Convolutional Neural Nets In convolutional neural networks, it is common to use small (3x3 or 5x5) sliding convolutional layers instead of large, fully-connected layers. In that case, it may be more helpful to think of the weights as sliding filters to detect low-level features such as edges, textures, and blobs. Indeed, the learned weights often resemble standard image processing filters. Let us try to see this. First, let us reshape (unflatten) the data so the images are again rectangular: ``` from keras import backend as K K.set_image_data_format('channels_last') #specify image format img_shape = (28, 28, 1) x_train = x_train.reshape(x_train.shape[0], 28, 28, 1) x_test = x_test.reshape(x_test.shape[0], 28, 28, 1) ``` Now let us define the neural net: ``` from keras.layers import Dropout from keras.layers import Flatten from keras.layers.convolutional import Convolution2D from keras.layers.convolutional import MaxPooling2D def covnet_model(): """Convolutional neural net""" model = Sequential() model.add(Convolution2D(32, kernel_size = 7, strides=1, padding='valid', input_shape=(28, 28, 1), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.2)) #flatten 3d tensors into 1d vector. model.add(Flatten()) #Add 128 neuron feed forward hidden layer model.add(Dense(128, activation='relu')) #Add output, 10 neurons for 10 target classes, activation is softmax so outputs are #probabilities model.add(Dense(num_classes, activation='softmax')) # Compile model model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model model_covnet = covnet_model() ``` Let us plot the network architecture ``` plot_model(model_covnet, to_file='figs/model2.png', show_shapes=True) ``` <img src="figs/model2.png" style="width: 350px;"> Now let us train the network ``` model_covnet.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, batch_size=200, verbose=2) # Final evaluation of the model scores = model_covnet.evaluate(x_test, y_test, verbose=0) print("Baseline Error: %.2f%%" % (100-scores[1]*100)) ``` Let us now plot 16 convolutional weights (16 filters) learned in the first convolutional layer: ``` layer_conv2d_4 = model_covnet.get_layer('conv2d_1') weights2 = layer_conv2d_4.get_weights()[0] bias2 = layer_conv2d_4.get_weights()[1] #Cast shape to 2d weights2 = weights2.reshape(7, 7, 32) #Now plot the weights for the first 16 filters plt.figure(figsize=(7,5)) for i in range(16): plt.subplot(4, 4, 1 + i) plt.imshow(weights2[:, :, i], cmap=plt.get_cmap('gray')) plt.show() ``` For comparison, let's plot Sobel filters which are used in computer vision to detect horizontal and vertical edges. We see that the neural net weights look similar to Sobel filters. ``` sobelx = np.array([[1, 2, 0, -2, -1], [4, 8, 0, -8, -4], [6, 12, 0, -12, -6], [4, 8, 0, -8, -4], [1, 2, 0, -2, -1]]).astype('float32')/12.0 sobely = sobelx.transpose() plt.figure(figsize=(2, 2)) plt.subplot(1, 2, 1) plt.imshow(sobelx, cmap=plt.get_cmap('gray')) plt.subplot(1, 2, 2) plt.imshow(sobely, cmap=plt.get_cmap('gray')) plt.show() ```
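To make the connection between the learned kernels and classical filters more concrete, one optional check (not in the original notebook) is to convolve a single test image with a learned kernel and with the Sobel filter and compare the responses. The sketch below assumes `x_test`, `weights2`, `sobelx`, and `plt` from the cells above and that `scipy` is available; note that convolution layers technically apply cross-correlation, but for a visual comparison the distinction does not matter much.

```
from scipy.signal import convolve2d

# Recover a single 28x28 test image (x_test was reshaped to (n, 28, 28, 1) above).
img = x_test[0].reshape(28, 28)

# Response of the first learned 7x7 kernel and of the Sobel-x filter.
learned_response = convolve2d(img, weights2[:, :, 0], mode='same')
sobel_response = convolve2d(img, sobelx, mode='same')

plt.figure(figsize=(7, 3))
plt.subplot(1, 3, 1)
plt.imshow(img, cmap=plt.get_cmap('gray'))
plt.title('input')
plt.subplot(1, 3, 2)
plt.imshow(learned_response, cmap=plt.get_cmap('gray'))
plt.title('learned filter')
plt.subplot(1, 3, 3)
plt.imshow(sobel_response, cmap=plt.get_cmap('gray'))
plt.title('Sobel x')
plt.show()
```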
github_jupyter
# Qcodes example with Alazar ATS 9360 ``` # import all necessary things %matplotlib nbagg import qcodes as qc import qcodes.instrument.parameter as parameter import qcodes.instrument_drivers.AlazarTech.ATS9360 as ATSdriver import qcodes.instrument_drivers.AlazarTech.ATS_acquisition_controllers as ats_contr ``` First lets list all the Alazar boards connected to this machine. In most cases this will probably only be a single one ``` ATSdriver.AlazarTech_ATS.find_boards() ats_inst = ATSdriver.AlazarTech_ATS9360(name='Alazar1') # Print all information about this Alazar card ats_inst.get_idn() ``` The Alazar is unusual compared to other instruments as it works together with an acquisition controller. The acquisition controller takes care of post processing and the driver takes care of the communication with the card. At the moment QCoDeS only ships with some very basic acquisition controllers. Here we use a controller that allows us to perform a demodulation of the signal acquired ``` # Instantiate an acquisition controller (In this case we are doing a simple DFT) and # provide the name of the name of the alazar card that this controller should control acquisition_controller = ats_contr.Demodulation_AcquisitionController(name='acquisition_controller', demodulation_frequency=10e6, alazar_name='Alazar1') ``` The parameters on the Alazar card are set in a slightly unusual way. As communicating with the card is slow and multiple parameters needs to be set with the same command we use a context manager that takes care of syncing all the paramters to the card after we set them. ``` with ats_inst.syncing(): ats_inst.clock_source('INTERNAL_CLOCK') ats_inst.sample_rate(1_000_000_000) ats_inst.clock_edge('CLOCK_EDGE_RISING') ats_inst.decimation(1) ats_inst.coupling1('DC') ats_inst.coupling2('DC') ats_inst.channel_range1(.4) ats_inst.channel_range2(.4) ats_inst.impedance1(50) ats_inst.impedance2(50) ats_inst.trigger_operation('TRIG_ENGINE_OP_J') ats_inst.trigger_engine1('TRIG_ENGINE_J') ats_inst.trigger_source1('EXTERNAL') ats_inst.trigger_slope1('TRIG_SLOPE_POSITIVE') ats_inst.trigger_level1(160) ats_inst.trigger_engine2('TRIG_ENGINE_K') ats_inst.trigger_source2('DISABLE') ats_inst.trigger_slope2('TRIG_SLOPE_POSITIVE') ats_inst.trigger_level2(128) ats_inst.external_trigger_coupling('DC') ats_inst.external_trigger_range('ETR_2V5') ats_inst.trigger_delay(0) ats_inst.timeout_ticks(0) ats_inst.aux_io_mode('AUX_IN_AUXILIARY') # AUX_IN_TRIGGER_ENABLE for seq mode on ats_inst.aux_io_param('NONE') # TRIG_SLOPE_POSITIVE for seq mode on ``` This command is specific to this acquisition controller. The kwargs provided here are being forwarded to instrument acquire function This way, it becomes easy to change acquisition specific settings from the ipython notebook ``` acquisition_controller.update_acquisitionkwargs(#mode='NPT', samples_per_record=1024, records_per_buffer=70, buffers_per_acquisition=1, #channel_selection='AB', #transfer_offset=0, #external_startcapture='ENABLED', #enable_record_headers='DISABLED', #alloc_buffers='DISABLED', #fifo_only_streaming='DISABLED', #interleave_samples='DISABLED', #get_processed_data='DISABLED', allocated_buffers=1, #buffer_timeout=1000 ) ``` Getting the value of the parameter `acquisition` of the instrument `acquisition_controller` performes the entire acquisition protocol. 
This again depends on the specific implementation of the acquisition controller ``` acquisition_controller.acquisition() # make a snapshot of the 'ats_inst' instrument ats_inst.snapshot() ``` Finally show that this instrument also works within a loop ``` dummy = parameter.ManualParameter(name="dummy") data = qc.Loop(dummy[0:50:1]).each(acquisition_controller.acquisition).run(name='AlazarTest') qc.MatPlot(data.acquisition_controller_acquisition) ```
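A couple of optional follow-ups, sketched under the assumption that the instruments above are still connected: the acquisition settings can be updated and re-run at any point with the same `update_acquisitionkwargs`/`acquisition` calls used earlier, and it is good practice to close the instrument connections at the end of a session so the Alazar card is released. The specific record length below is only an example value.

```
# Sketch: double the record length and run one more acquisition.
acquisition_controller.update_acquisitionkwargs(samples_per_record=2048,
                                                records_per_buffer=70,
                                                buffers_per_acquisition=1,
                                                allocated_buffers=1)
acquisition_controller.acquisition()

# Release the instruments at the end of the session.
acquisition_controller.close()
ats_inst.close()
```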
github_jupyter
[//]: #![idaes_icon](idaes_icon.png) <img src="idaes_icon.png" width="100"> <h1><center>Welcome to the IDAES Stakeholder Workshop</center></h1> Welcome and thank you for taking the time to attend today's workshop. Today we will introduce you to the fundamentals of working with the IDAES process modeling toolset, and we will demonstrate how these tools can be applied for optimization applications. Today's workshop will be conducted using Jupyter Notebooks which provide an online, interactive Python environment for you to use (without the need for installing anything). Before we get started on some actual examples, let's make sure that everything is working correctly. The cell below contains a command to run a simple test script that will test that everything we will need for today is working properly. You can execute a cell by pressing `Shift+Enter`. ``` run "notebook_test_script.py" ``` If everything worked properly, you should see a message saying `All good!` and a summary of all the checks that were run. If you don't see this, please contact someone for assistance. ## Outline of Workshop Today's workshop is divided into four modules which will take you through the steps of setting up a flowsheet within the IDAES framework. Welcome Module (this one): * Introduction to Jupyter notebooks and Python * Introduction to Pyomo Module 1 will cover: * how to import models from the core IDAES model library, * how to create a model for a single unit operation, * how to define feed and operating conditions, * how to initialize and solve a single unit model, * some ways we can manipulate the model and examine the results. Module 2 will demonstrate: * how to combine unit models together to form flowsheets, * tools to initialize and solve flowsheets with recycle loops, * how to optimize process operating conditions to meet product specifications. Module 3 will demonstrate: * how to build new unit models using the IDAES tools, * how to include new unit models into flowsheets. ## Introduction to Jupyter Notebooks and Python In this short notebook, we will briefly describe the uses of Jupyter notebooks like this one, and provide you with the necessary background in Python for this workshop. We will cover `if` statements, looping, array-like containers called lists and dictionaries, as well as the use of some external packages for working with data. There are many additional tutorials online to learn more about the Python syntax. In Python, variables do not need to be declared before they are used. You can simply define a new variable using `x = 5`. <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> In the cell below, assign a value of 5 to the variable x. Don't forget to type Shift+Enter to execute the line.</div> You can easily see the value of a variable using the built-in `print` function. For example, to print the value of `x` use `print(x)`. <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Write the code to print the value of x. Don't forget to hit Shift+Enter to execute the cell. </div> <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Now change the value of the x variable to 8 and execute the cell. </div> ### Jupyter notebooks and execution order <div class="alert alert-block alert-warning"> <b>Note:</b> When using Jupyter notebooks, it is very important to know that the cells can be executed out of order (intentionally or not). The state of the environment (e.g., values of variables, imports, etc.) is defined by the execution order. 
</div> <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> To see this concept, select the cell above that contained the print statement and execute the cell again using Shift+Enter. </div> You should see that the value `8` is now printed. This may seem problematic if you are used to programming in environments where the state is linked to the order of the commands as *written*, not as *executed*. **Again, notice that the state of the environment is determined by the execution order.** Note also that the square brackets to the left of the cell show the order that cells were executed. If you scroll to the top, you should see that the code cells show an execution order of `[1]`, `[2]`, `[5]`, and `[4]`, indicating the actual execution order. There are some useful menu commands at the top of the Jupyter notebook to help with these problems and make sure you retain the execution order as expected. Some important commands to remember: * You can clear the current state with the menu item `Kernel | Restart & Clear Output` * It is often useful to clear the state using the menu command just described, and then execute all the lines **above the currently selected cell** using `Cell | Run All Above`. * You can clear all the state and re-run the entire notebook using `Kernel | Restart & Run All`. To show the use of these commands, complete the following. <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> <ul> <li>Clear the current state (using Kernel | Restart & Clear Output). You should notice that the square brackets that listed the execution order are all now empty.</li> <li>Select the cell immediately below this text <li>Re-run all the code up to this point (Cell | Run All Above). You should now see that the square brackets indicate the expected execution order.</li> <li>Print the value of x again using the print function. You should see the value 8 printed, while the earlier cell printing x shows the value of 5 as expected.</li> </ul> </div> ``` print(x) ``` ### Python `if` statements In the code below, we show an example of an `if` statement in Python. ```python temp = 325 # some other code if temp > 320: print('temperature is too high') elif x < 290: print('temperature is too low') else: print('temperature is just right') ``` <div class="alert alert-block alert-warning"> <b>Note:</b> You will notice that there are no braces to separate blocks in the if-else tree. In Python, indentation is used to delineate blocks of code throughout Python (e.g., if statements, for loops, functions, etc.). The indentation in the above example is not only to improve legibility of the code. It is necessary for the code to run correctly. As well, the number of spaces required to define the indentation is arbitrary, but it must be consistent throughout the code. For example, we could use 3 spaces (instead of the 4 used in the example above, but we could not use 3 for one of the blocks and 4 for another). </div> Using the syntax above for the `if` statement, write the following code. <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> <ul> <li>set the value of the variable T_degC to 20</li> <li>convert this from degrees Celsius to degrees Fahrenheit (use variable name T_degF)</li> <li>write an `if` statement that prints a message if the degrees Fahrenheit are below 70</li> </ul> </div> ``` T_degC = 20 # some other code T_degF = (T_degC * 9.0/5.0) + 32.0 # Todo: put the if statement here ``` ### Python list containers Now we will illustrate the use of lists in Python. 
Lists are similar to vectors or arrays in other languages. A list in Python is indexed by integers from 0 up to the length of the array minus 1. The list can contain standard types (int, float, string), or other objects. In the next inline exercise, we will create a list that contains the values from 0 to 50 by steps of 5 using a for loop. Note that the python function `range(n)` can be used to iterate from 0 to (n-1) in a for loop. Also note that lists have an `append` method which adds an entry to the end of the list (e.g., if the list `l` currently has 5 elements, then `l.append('temp')` will add the string "temp" as the sixth element). Print the new list after the for loop. If this is done correctly, you should see: `[0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50]` printed after the cell. <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Complete the code block below to create the desired list and print the result. </div> ``` # Create a list with the values 0 to 50 with steps of 5. xlist = list() for i in range(11): # Todo: use the append method of list to append the correct value print(xlist) # Todo: print the value of xlist to verify the results ``` Python provides a short-hand notation for building lists called *list comprehensions*. An example of a list comprehension that creates all even numbers from 0 to 40 is: ```python values = [q*2 for q in range(21)] ``` Note also that list comprehensions can include if clauses. For example, we could also implement the above example with the following code: ```python values = [q for q in range(41) if q % 2 == 0] ``` Note that `%` is the modulus operator (it returns the remainder of the division). Therefore, in the above code, `q % 2` returns 0 if the value in `q` is exactly divisible by 2 (i.e., an even number). <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> In the cell below, create the same xlist that we created previously, but use the list comprehension notation. Verify that this result is correct by printing it. </div> ``` # Todo: define the list comprehension print(xlist) ``` You can easily check the length of a list using the python `len(l)` function. <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Print the length of `xlist`. It should be 11. </div> ``` # Todo: print the len of the list ``` If you have a list of values or objects, it is easy to iterate through that list in a for loop. In the next inline exercise, we will create another list, `ylist` where each of the values is equal to the corresponding value in `xlist` squared. That is, $y_i = x_i^2$. <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Modify the code below to create ylist as described above. Print the values in ylist to check the result. </div> ``` ylist = list() # Todo: define the for loop to add elements to ylist using the values in xlist print(ylist) ``` This same task could have been done with a list comprehension (using much less code). <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Write the list comprehension to compute the values of ylist. Print the values in ylist to check the result. </div> ``` # Todo: create ylist using a list comprehension and print the result ``` ### Python dictionary containers Another valuable data structure in Python are *dictionaries*. Dictionaries are an associative array; that is, a map from keys to values or objects. The keys can be *almost* anything, including floats, integers, and strings. 
The code below shows an example of creating a dictionary (here, to store the areas of some of the states). <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Execute the lines below to see the areas dictionary. </div> ``` areas = dict() areas['South Dakota'] = 199742 areas['Oklahoma'] = 181035 print(areas) ``` Dictionaries can contain mixed types (i.e., it is valid to add `areas['Texas'] = 'Really big!'`) but this may lead to unpredictable behavior if the different types are unexpected in other parts of the code. You can loop through dictionaries in different ways. For example, ```python d = {'A': 2, 'B': 4, 'D': 16} for k in d.keys(): # loop through the keys in the dictionary # access the value with d[k] print('key=', k, 'value=', d[k]) for v in d.values(): # loop through the values in the dictionary, ignoring the keys print('value=', v) for k,v in d.items(): # loop through the entries in the dictionary, retrieving both # the key and the value print('key=', k, 'value=', v) ``` <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> The areas listed above for the two states are in square kilometers. Modify the loop below to create a new dictionary that contains the areas in square miles. Print the new dictionary to verify the correct behavior. Note that 1 kilometer is equal to 0.62137 miles. </div> ``` areas_mi = dict() for state_name, area in areas.items(): # Todo: convert the area to sq. mi and assign to the areas_mi dict. print(areas_mi) ``` Python also supports dictionary comprehensions much like list comprehensions. For example: ```python d = {'A': 2, 'B': 4, 'D': 16} d2 = {k:v**2 for k,v in d.items()} ``` <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Redo the conversion from square kilometers to square miles using a dictionary comprehension. </div> ``` # Todo: define areas_mi using a dictionary comprehension and print the result ``` ### Matplotlib for generating figures We will now briefly explore the use of the `matplotlib` package to generate figures. Before we do this, we will introduce some other helpful tools. Another effective way to create a list of evenly spaced numbers (e.g., for plotting or other computation) is to use the `linspace` function from the `numpy` package (more information [here](https://www.numpy.org/devdocs/)). Let's import the `numpy` package and use linspace function to create a list of 15 evenly spaced intervals (that is, 16 points) from 0 to 50 and store this in `xlist`. We will also create the `ylist` that corresponds to the square of the values in `xlist`. Note, we must first import the `numpy` package. <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Execute the next two cells to see the output. </div> ``` import numpy as np xlist = list(np.linspace(0,50,16)) ylist = [x**2 for x in xlist] print(xlist) print(ylist) ``` This printed output is not a very effective way to communicate these results. Let's use matplotlib to create a figure of x versus y. A full treatment of the `matplotlib` package is beyond the scope of this tutorial, and further documentation can be found [here](https://matplotlib.org/). For now, we will import the plotting capability and show how to generate a straightforward figure. You can consult the documentation for matplotlib for further details. <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Execute the next two cells to see the output. 
</div> ``` import matplotlib.pyplot as plt plt.plot(xlist, ylist) plt.title('Embedded x vs y figure') plt.xlabel('x') plt.ylabel('y') plt.legend(['data']) plt.show() ``` Next, we will use what you have learned so far to create a plot of `sin(x)` for `x` from 0 to $2 \pi$ with 100 points. Note, you can get the `sin` function and the value for $\pi$ from the `math` package. <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Execute the import statement in the next cell, and then complete the missing code in the following cell to create the figure discussed above. </div> ``` import math x = list(np.linspace(0,2*math.pi, 100)) # Todo: create the list for y # Todo: Generate the figure ``` ### Importing and exporting data using Pandas Often, it is useful to output the data in a general format so it can be imported into other tools or presented in a familiar application. Python makes this easy with many great packages already available. The next code shows how to use the `pandas` package to create a dataframe and export the data to a csv file that we can import to excel. You could also consult [pandas documentation](http://pandas.pydata.org/pandas-docs/stable/) to see how to export the data directly to excel. <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Execute the code below that shows how to import some data into a DataFrame from the Pandas package and then export this data to a csv file. </div> ``` import pandas as pd df_sin = pd.DataFrame({'x': x, 'sin(x) (radians)': y}) print(df_sin) df_sin.to_csv('sin_data.csv') ``` If you go back to the browser tab that showed all the Jupyter notebook files and refresh, you will now see that there is a csv file with the x and y data. You can consult the Pandas documentation do learn about the many data analysis and statistical features of the `pandas` package. ### Further Information Further information of the packages mentioned above can be found using the following links: * [numpy](https://www.numpy.org/devdocs/) * [matplotlib](https://matplotlib.org/) * [pandas](http://pandas.pydata.org/pandas-docs/stable/) ## Introduction to Pyomo [Pyomo](www.pyomo.org) is an object-oriented, python-based package for equation-oriented (or *algebraic*) modeling and optimization, and the IDAES framework is built upon the Pyomo package. IDAES extends the Pyomo package and defines a class heirarchy for flowsheet based modeling, including definition of property packages, unit models, and flowsheets. The use of IDAES does not require extensive knowledge about Pyomo, however, it can be beneficial to have some familiarity with the Pyomo package for certain tasks: * IDAES models are open, and you can interrogating the underlying Pyomo model to view the variables, constraints, and objective functions defined in the model. * You can use Pyomo components to define your objective function or to create additional constraints. * Since IDAES models **are** Pyomo models, any advanced meta-algorithms or analysis tools that can be developed and/or used on a Pyomo model can also be used on an IDAES model. A full tutorial on Pyomo is beyond the scope of this workshop, however in this section we will briefly cover the commands required to specify an objective function or add a constraint to an existing model. In the next cell, we will create a Pyomo model, and add a couple of variables to that model. When using IDAES, you will define a flowsheet and the addition of variables and model equations will be handled by the IDAES framework. 
<div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Execute the following cell to create a Pyomo model with some variables that will be used later. </div> ``` from pyomo.environ import ConcreteModel, Var model = ConcreteModel() model.x = Var() model.y = Var() ``` The Pyomo syntax to define a scalar objective function is shown below. This defines the objective function as $x^2$. By default Pyomo models (and IDAES models) seek to *minimize* the objective function. ```python model.obj = Objective(expr=model.x**2) ``` To maximize a quantity, include the keyword argument `sense=maximize` as in the following: ```python model.obj = Objective(expr=model.y, sense=maximize) ``` Note that `Objective` and `maximize` would need to be imported from `pyomo.environ`. The Pyomo syntax to define a scalar constraint is shown below. This code defines the equality constraint $x^2 + y^2 = 1$. ```python model.on_unit_circle_con = Constraint(expr=model.x**2 + model.y**2 == 1) ``` Pyomo also supports inequalities. For example, the code for the inequality constraint $x^2 + y^2 \le 1$ is given as the following. ```python model.inside_unit_circle_con = Constraint(expr=model.x**2 + model.y**2 <= 1) ``` Note that, as before, we would need to include the appropriate imports. In this case `Constraint` would need to be imported from `pyomo.environ`. Using the syntax shown above, we will now add the objective function: $\min x^2 + y^2$ and the constraint $x + y = 1$. <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Complete the missing code in the cell below. If this is done correctly, after executing the cell, you should see the log output from the solver and the printed solution should show that x, y, and the objective value are all equal to 0.5. </div> ``` from pyomo.environ import Objective, Constraint, value, SolverFactory # Todo: add the objective function here # Todo: add the constraint here # now solve the problem status = SolverFactory('ipopt').solve(model, tee=True) # tee=True shows the solver log # print the values of x, y, and the objective function at the solution # Note that the results are automatically stored in the model variables print('x =', value(model.x)) print('y =', value(model.y)) print('obj =', value(model.obj)) ``` Notice that the code above also imported the `value` function. This is a Pyomo function that should be used to retrieve the value of variables in Pyomo (or IDAES) models. Note that you can display the complete list of all variables, objectives, and constraints (with their expressions) using `model.pprint()`. The `display` method is similar to the `pprint` method except that is shows the *values* of the constraints and objectives instead of the underlying expressions. The `pprint` and `display` methods can also be used on individual components. <div class="alert alert-block alert-info"> <b>Inline Exercise:</b> Execute the lines of code below to see the output from pprint and display for a Pyomo model. </div> ``` print('*** Output from model.pprint():') model.pprint() print() print('*** Output from model.display():') model.display() ```
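To round the Pyomo section off, here is a small self-contained sketch (not one of the workshop exercises) that combines the maximization syntax and the inequality constraint described above into a second model; it assumes the same `ipopt` solver used earlier is available.

```
from pyomo.environ import ConcreteModel, Var, Objective, Constraint, maximize, value, SolverFactory

model2 = ConcreteModel()
model2.x = Var(initialize=0.5)
model2.y = Var(initialize=0.5)

# Maximize y while staying inside the unit circle.
model2.obj = Objective(expr=model2.y, sense=maximize)
model2.inside_unit_circle_con = Constraint(expr=model2.x**2 + model2.y**2 <= 1)

SolverFactory('ipopt').solve(model2)

print('x =', value(model2.x))
print('y =', value(model2.y))  # expect y close to 1 and x close to 0
```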
github_jupyter
# Lazy Mode and Logging So far, we have seen Ibis in interactive mode. Interactive mode (also known as eager mode) makes Ibis return the results of an operation immediately. In most cases, instead of using interactive mode, it makes more sense to use the default lazy mode. In lazy mode, Ibis won't be executing the operations automatically, but instead, will generate an expression to be executed at a later time. Let's see this in practice, starting with the same example as in previous tutorials - the geography database. ``` !curl -LsS -o $TEMPDIR/geography.db 'https://storage.googleapis.com/ibis-tutorial-data/geography.db' import os import tempfile import ibis connection = ibis.sqlite.connect(os.path.join(tempfile.gettempdir(), 'geography.db')) countries = connection.table('countries') ``` In previous tutorials, we set interactive mode to `True`, and we obtained the result of every operation. ``` ibis.options.interactive = True countries['name', 'continent', 'population'].limit(3) ``` But now let's see what happens if we leave the `interactive` option to `False` (the default), and we operate in lazy mode. ``` ibis.options.interactive = False countries['name', 'continent', 'population'].limit(3) ``` What we find is the graph of the expressions that would return the desired result instead of the result itself. Let's analyze the expressions in the graph: - We query the `countries` table (all rows and all columns) - We select the `name`, `continent` and `population` columns - We limit the results to only the first `3` rows Now consider that the data is in a database, possibly in a different host than the one executing Ibis. Also consider that the results returned to the user need to be moved to the memory of the host executing Ibis. When using interactive (or eager) mode, if we perform one operation at a time, we would do: - We would move all the rows and columns from the backend (database, big data system, etc.) into memory - Once in memory, we would discard all the columns but `name`, `continent` and `population` - After that, we would discard all the rows, except the first `3` This is not very efficient. If you consider that the table can have millions of rows, backed by a big data system like Spark or Impala, this may not even be possible (not enough memory to load all the data). The solution is to use lazy mode. In lazy mode, instead of obtaining the results after each operation, we build an expression (a graph) of all the operations that need to be done. After all the operations are recorded, the graph is sent to the backend which will perform the operation in an efficient way - only moving to memory the required data. You can think of this as writing a shopping list and requesting someone to go to the supermarket and buy everything you need once the list is complete. As opposed as getting someone to bring all the products of the supermarket to your home and then return everything you don't want. Let's continue with our example, save the expression in a variable `countries_expression`, and check its type. ``` countries_expression = countries['name', 'continent', 'population'].limit(3) type(countries_expression) ``` The type is an Ibis `TableExpr`, since the result is a table (in a broad way, you can consider it a dataframe). Now we have our query instructions (our expression, fetching only 3 columns and 3 rows) in the variable `countries_expression`. At this point, nothing has been requested from the database. 
We have defined what we want to extract, but we didn't request it from the database yet. We can continue building our expression if we haven't finished yet. Or once we are done, we can simply request it from the database using the method `.execute()`. ``` countries_expression.execute() ``` We can build other types of expressions, for example, one that instead of returning a table, returns a columns. ``` population_in_millions = (countries['population'] / 1_000_000).name('population_in_millions') population_in_millions ``` If we check its type, we can see how it is a `FloatingColumn` expression. ``` type(population_in_millions) ``` We can combine the previous expression to be a column of a table expression. ``` countries['name', 'continent', population_in_millions].limit(3) ``` Since we are in lazy mode (not interactive), those expressions don't request any data from the database unless explicitly requested with `.execute()`. ## Logging queries For SQL backends (and for others when it makes sense), the query sent to the database can be logged. This can be done by setting the `verbose` option to `True`. ``` ibis.options.verbose = True countries['name', 'continent', population_in_millions].limit(3).execute() ``` By default, the logging is done to the terminal, but we can process the query with a custom function. This allows us to save executed queries to a file, save to a database, send them to a web service, etc. For example, to save queries to a file, we can write a custom function that given a query, saves it to a log file. ``` import os import datetime import tempfile from pathlib import Path def log_query_to_file(query: str) -> None: """ Log queries to `data/tutorial_queries.log`. Each file is a query. Line breaks in the query are represented with the string '\n'. A timestamp of when the query is executed is added. """ dirname = Path(tempfile.gettempdir()) fname = dirname / 'tutorial_queries.log' query_in_a_single_line = query.replace('\n', r'\n') with fname.open(mode='a') as f: f.write(f'{query_in_a_single_line}\n') ``` Then we can set the `verbose_log` option to the custom function, execute one query, wait one second, and execute another query. ``` import time ibis.options.verbose_log = log_query_to_file countries.execute() time.sleep(1.) countries['name', 'continent', population_in_millions].limit(3).execute() ``` This has created a log file in `data/tutorial_queries.log` where the executed queries have been logged. ``` !cat -n data/tutorial_queries.log ```
github_jupyter
## Mutual information ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.feature_selection import mutual_info_classif, mutual_info_regression from sklearn.feature_selection import SelectKBest, SelectPercentile ``` ## Read Data ``` data = pd.read_csv('../UNSW_Train.csv') data.shape data.head() ``` ### Train - Test Split ``` # separate train and test sets X_train, X_test, y_train, y_test = train_test_split( data.drop(labels=['is_intrusion'], axis=1), data['is_intrusion'], test_size=0.2, random_state=0) X_train.shape, X_test.shape ``` ### Determine Mutual Information ``` mi = mutual_info_classif(X_test, y_test) mi # 1. Let's capture the above array in a pandas series # 2. Add the variable names in the index # 3. Sort the features based on their mutual information value # 4. And make a var plot mi = pd.Series(mi) mi.index = X_test.columns mi.sort_values(ascending=False).plot.bar(figsize=(20, 6)) plt.ylabel('Mutual Information') ``` ### Select top k features based on MI ``` # select the top 15 features based on their mutual information value sel_ = SelectKBest(mutual_info_classif, k=15).fit(X_test, y_test) # display features X_test.columns[sel_.get_support()] # to remove the rest of the features: X_train = sel_.transform(X_train) X_test = sel_.transform(X_test) X_train.shape, X_test.shape ``` ## Standardize Data ``` from sklearn.preprocessing import StandardScaler scaler = StandardScaler().fit(X_train) X_train = scaler.transform(X_train) ``` ## Classifiers ``` from sklearn import linear_model from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from catboost import CatBoostClassifier ``` ## Metrics Evaluation ``` from sklearn.metrics import accuracy_score from sklearn.metrics import confusion_matrix from sklearn.metrics import roc_curve, f1_score from sklearn import metrics from sklearn.model_selection import cross_val_score ``` ### Logistic Regression ``` %%time clf_LR = linear_model.LogisticRegression(n_jobs=-1, random_state=42, C=25).fit(X_train, y_train) pred_y_test = clf_LR.predict(X_test) print('Accuracy:', accuracy_score(y_test, pred_y_test)) f1 = f1_score(y_test, pred_y_test) print('F1 Score:', f1) fpr, tpr, thresholds = roc_curve(y_test, pred_y_test) print('FPR:', fpr[1]) print('TPR:', tpr[1]) ``` ### Naive Bayes ``` %%time clf_NB = GaussianNB(var_smoothing=1e-08).fit(X_train, y_train) pred_y_testNB = clf_NB.predict(X_test) print('Accuracy:', accuracy_score(y_test, pred_y_testNB)) f1 = f1_score(y_test, pred_y_testNB) print('F1 Score:', f1) fpr, tpr, thresholds = roc_curve(y_test, pred_y_testNB) print('FPR:', fpr[1]) print('TPR:', tpr[1]) ``` ### Random Forest ``` %%time clf_RF = RandomForestClassifier(random_state=0,max_depth=100,n_estimators=1000).fit(X_train, y_train) pred_y_testRF = clf_RF.predict(X_test) print('Accuracy:', accuracy_score(y_test, pred_y_testRF)) f1 = f1_score(y_test, pred_y_testRF, average='weighted', zero_division=0) print('F1 Score:', f1) fpr, tpr, thresholds = roc_curve(y_test, pred_y_testRF) print('FPR:', fpr[1]) print('TPR:', tpr[1]) ``` ### KNN ``` %%time clf_KNN = KNeighborsClassifier(algorithm='ball_tree',leaf_size=1,n_neighbors=5,weights='uniform').fit(X_train, y_train) pred_y_testKNN = clf_KNN.predict(X_test) print('accuracy_score:', accuracy_score(y_test, pred_y_testKNN)) f1 = f1_score(y_test, pred_y_testKNN) print('f1:', f1) fpr, tpr, thresholds = roc_curve(y_test, 
pred_y_testKNN) print('fpr:', fpr[1]) print('tpr:', tpr[1]) ``` ### CatBoost ``` %%time clf_CB = CatBoostClassifier(random_state=0,depth=7,iterations=50,learning_rate=0.04).fit(X_train, y_train) pred_y_testCB = clf_CB.predict(X_test) print('Accuracy:', accuracy_score(y_test, pred_y_testCB)) f1 = f1_score(y_test, pred_y_testCB, average='weighted', zero_division=0) print('F1 Score:', f1) fpr, tpr, thresholds = roc_curve(y_test, pred_y_testCB) print('FPR:', fpr[1]) print('TPR:', tpr[1]) ``` ## Model Evaluation ``` data = pd.read_csv('../UNSW_Test.csv') data.shape # Create feature matrix X and target vextor y y_eval = data['is_intrusion'] X_eval = data.drop(columns=['is_intrusion']) X_eval = sel_.transform(X_eval) X_eval.shape ``` ### Model Evaluation - Logistic Regression ``` modelLR = linear_model.LogisticRegression(n_jobs=-1, random_state=42, C=25) modelLR.fit(X_train, y_train) # Predict on the new unseen test data y_evalpredLR = modelLR.predict(X_eval) y_predLR = modelLR.predict(X_test) train_scoreLR = modelLR.score(X_train, y_train) test_scoreLR = modelLR.score(X_test, y_test) print("Training accuracy is ", train_scoreLR) print("Testing accuracy is ", test_scoreLR) from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score print('Performance measures for test:') print('--------') print('Accuracy:', test_scoreLR) print('F1 Score:',f1_score(y_test, y_predLR)) print('Precision Score:',precision_score(y_test, y_predLR)) print('Recall Score:', recall_score(y_test, y_predLR)) print('Confusion Matrix:\n', confusion_matrix(y_test, y_predLR)) ``` ### Cross validation - Logistic Regression ``` from sklearn.model_selection import cross_val_score from sklearn import metrics accuracy = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='accuracy') print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2)) f = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='f1') print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2)) precision = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='precision') print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2)) recall = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='recall') print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2)) ``` ### Model Evaluation - Naive Bayes ``` modelNB = GaussianNB(var_smoothing=1e-08) modelNB.fit(X_train, y_train) # Predict on the new unseen test data y_evalpredNB = modelNB.predict(X_eval) y_predNB = modelNB.predict(X_test) train_scoreNB = modelNB.score(X_train, y_train) test_scoreNB = modelNB.score(X_test, y_test) print("Training accuracy is ", train_scoreNB) print("Testing accuracy is ", test_scoreNB) from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score print('Performance measures for test:') print('--------') print('Accuracy:', test_scoreNB) print('F1 Score:',f1_score(y_test, y_predNB)) print('Precision Score:',precision_score(y_test, y_predNB)) print('Recall Score:', recall_score(y_test, y_predNB)) print('Confusion Matrix:\n', confusion_matrix(y_test, y_predNB)) ``` ### Cross validation - Naive Bayes ``` from sklearn.model_selection import cross_val_score from sklearn import metrics accuracy = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='accuracy') print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2)) f = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='f1') print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2)) precision = 
cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='precision') print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2)) recall = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='recall') print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2)) ``` ### Model Evaluation - Random Forest ``` modelRF = RandomForestClassifier(random_state=0,max_depth=100,n_estimators=1000) modelRF.fit(X_train, y_train) # Predict on the new unseen test data y_evalpredRF = modelRF.predict(X_eval) y_predRF = modelRF.predict(X_test) train_scoreRF = modelRF.score(X_train, y_train) test_scoreRF = modelRF.score(X_test, y_test) print("Training accuracy is ", train_scoreRF) print("Testing accuracy is ", test_scoreRF) from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score print('Performance measures for test:') print('--------') print('Accuracy:', test_scoreRF) print('F1 Score:', f1_score(y_test, y_predRF, average='weighted', zero_division=0)) print('Precision Score:', precision_score(y_test, y_predRF, average='weighted', zero_division=0)) print('Recall Score:', recall_score(y_test, y_predRF, average='weighted', zero_division=0)) print('Confusion Matrix:\n', confusion_matrix(y_test, y_predRF)) ``` ### Cross validation - Random Forest ``` from sklearn.model_selection import cross_val_score from sklearn import metrics accuracy = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='accuracy') print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2)) f = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='f1') print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2)) precision = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='precision') print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2)) recall = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='recall') print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2)) ``` ### Model Evaluation - KNN ``` modelKNN = KNeighborsClassifier(algorithm='ball_tree',leaf_size=1,n_neighbors=5,weights='uniform') modelKNN.fit(X_train, y_train) # Predict on the new unseen test data y_evalpredKNN = modelKNN.predict(X_eval) y_predKNN = modelKNN.predict(X_test) train_scoreKNN = modelKNN.score(X_train, y_train) test_scoreKNN = modelKNN.score(X_test, y_test) print("Training accuracy is ", train_scoreKNN) print("Testing accuracy is ", test_scoreKNN) from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score print('Performance measures for test:') print('--------') print('Accuracy:', test_scoreKNN) print('F1 Score:', f1_score(y_test, y_predKNN)) print('Precision Score:', precision_score(y_test, y_predKNN)) print('Recall Score:', recall_score(y_test, y_predKNN)) print('Confusion Matrix:\n', confusion_matrix(y_test, y_predKNN)) ``` ### Cross validation - KNN ``` from sklearn.model_selection import cross_val_score from sklearn import metrics accuracy = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='accuracy') print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2)) f = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='f1') print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2)) precision = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='precision') print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2)) recall = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='recall') print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), 
recall.std() * 2)) ``` ### Model Evaluation - CatBoost ``` modelCB = CatBoostClassifier(random_state=0,depth=7,iterations=50,learning_rate=0.04) modelCB.fit(X_train, y_train) # Predict on the new unseen test data y_evalpredCB = modelCB.predict(X_eval) y_predCB = modelCB.predict(X_test) train_scoreCB = modelCB.score(X_train, y_train) test_scoreCB = modelCB.score(X_test, y_test) print("Training accuracy is ", train_scoreCB) print("Testing accuracy is ", test_scoreCB) from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score print('Performance measures for test:') print('--------') print('Accuracy:', test_scoreCB) print('F1 Score:',f1_score(y_test, y_predCB, average='weighted', zero_division=0)) print('Precision Score:',precision_score(y_test, y_predCB, average='weighted', zero_division=0)) print('Recall Score:', recall_score(y_test, y_predCB, average='weighted', zero_division=0)) print('Confusion Matrix:\n', confusion_matrix(y_test, y_predCB)) ``` ### Cross validation - CatBoost ``` from sklearn.model_selection import cross_val_score from sklearn import metrics accuracy = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='accuracy') f = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='f1') precision = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='precision') recall = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='recall') print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2)) print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2)) print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2)) print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2)) ```
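To compare the five classifiers side by side, the held-out test scores computed above can be collected into a single summary table. This is a small sketch assuming the `test_score*` variables from the evaluation cells are still in memory.

```
import pandas as pd

summary = pd.DataFrame({
    'model': ['Logistic Regression', 'Naive Bayes', 'Random Forest', 'KNN', 'CatBoost'],
    'test_accuracy': [test_scoreLR, test_scoreNB, test_scoreRF, test_scoreKNN, test_scoreCB],
})

print(summary.sort_values('test_accuracy', ascending=False))
```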
github_jupyter
# Plotting In this notebook, I'll develop a function to plot subjects and their labels. ``` from astropy.coordinates import SkyCoord import astropy.io.fits import astropy.wcs import h5py import matplotlib.pyplot as plt from matplotlib.pyplot import cm import numpy import skimage.exposure import sklearn.neighbors import sklearn.pipeline import sklearn.preprocessing CROWDASTRO_H5_PATH = 'data/crowdastro.h5' PATCH_DIAMETER = 200 FITS_CONVENTION = 1 ARCMIN = 1 / 60 IMAGE_SIZE = 200 * 200 NORRIS_DAT_PATH = 'data/norris_2006_atlas_classifications_ra_dec_only.dat' TRAINING_H5_PATH = 'data/training.h5' with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5: N_ASTRO = 5 if f_h5.attrs['ir_survey'] == 'wise' else 6 %matplotlib inline ``` ## Displaying radio images Radio images look pretty terrible, so let's run a filter over them to make them a little easier to see. I'll use skimage and try a few different ones. Let's get an example and look at the basic output. ``` with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5: image = f_h5['/atlas/cdfs/numeric'][250, 2 : 2 + PATCH_DIAMETER ** 2].reshape((PATCH_DIAMETER, PATCH_DIAMETER)) plt.imshow(image, cmap='gray') plt.show() ``` It's hard to make out any features. Now, let's run some filters on it. ``` fig = plt.figure(figsize=(18, 27)) def subplot_imshow_hist(i, fig, im, title): ax = fig.add_subplot(6, 3, i) ax.imshow(im, cmap='gray') ax.set_title(title) ax.axis('off') ax = fig.add_subplot(6, 3, i + 3) ax.hist(im.ravel(), bins=256, histtype='step', color='black') ax.ticklabel_format(axis='y', style='scientific', scilimits=(0, 0)) subplot_imshow_hist(1, fig, image, 'Default') subplot_imshow_hist(2, fig, skimage.exposure.equalize_adapthist(image, clip_limit=0.01), 'Adaptive equalisation') subplot_imshow_hist(3, fig, skimage.exposure.equalize_hist(image), 'Histogram equalisation') subplot_imshow_hist(7, fig, skimage.exposure.rescale_intensity(image, in_range=tuple(numpy.percentile(image, (0.75, 99.25)))), 'Constant stretching 0.75 - 99.25') subplot_imshow_hist(8, fig, skimage.exposure.rescale_intensity(image, in_range=tuple(numpy.percentile(image, (1, 99)))), 'Constant stretching 1 - 99') subplot_imshow_hist(9, fig, skimage.exposure.rescale_intensity(image, in_range=tuple(numpy.percentile(image, (2, 98)))), 'Constant stretching 2 - 98') subplot_imshow_hist(13, fig, numpy.sqrt(image - image.min()), 'Square root') subplot_imshow_hist(14, fig, numpy.log(image - image.min() + 1e-5), 'Logarithm + 1e-5') subplot_imshow_hist(15, fig, numpy.log(image + 1), 'Logarithm + 1') ``` Square root looks good, so let's blitz that over some random images and see how it looks. ``` fig = plt.figure(figsize=(18, 25)) with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5: indices = numpy.arange(f_h5['/atlas/cdfs/numeric'].shape[0]) numpy.random.seed(10000) numpy.random.shuffle(indices) for j, i in enumerate(indices[:3]): image = f_h5['/atlas/cdfs/numeric'][i, 2 : 2 + PATCH_DIAMETER ** 2].reshape((PATCH_DIAMETER, PATCH_DIAMETER)) subplot_imshow_hist(j + 1, fig, numpy.sqrt(image - image.min()), str(i)) ``` ## Plotting IR objects This is an extremely unpleasant operation: We have to find the pixel coordinates of each IR location, which are all specified in RA/DEC. 
``` from crowdastro.config import config with astropy.io.fits.open(config['data_sources']['atlas_image'], ignore_blank=True) as atlas_image: wcs = astropy.wcs.WCS(atlas_image[0].header).dropaxis(3).dropaxis(2) def ra_dec_to_pixels(subject_coords, coords): offset, = wcs.all_world2pix([subject_coords], FITS_CONVENTION) # The coords are of the middle of the subject. coords = wcs.all_world2pix(coords, FITS_CONVENTION) coords -= offset coords[:, 0] /= config['surveys']['atlas']['mosaic_scale_x'] * 424 / 200 coords[:, 1] /= config['surveys']['atlas']['mosaic_scale_y'] * 424 / 200 coords += [40, 40] return coords with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5: i = 296 image = f_h5['/atlas/cdfs/numeric'][i, 2 : 2 + PATCH_DIAMETER ** 2].reshape( (PATCH_DIAMETER, PATCH_DIAMETER))[60:140, 60:140] radio_coords = f_h5['/atlas/cdfs/numeric'][i, :2] nearby = f_h5['/atlas/cdfs/numeric'][i, 2 + PATCH_DIAMETER ** 2:] < ARCMIN ir_coords = f_h5['/swire/cdfs/numeric'][nearby, :2] ir_coords = ra_dec_to_pixels(radio_coords, ir_coords) plt.imshow(numpy.sqrt(image - image.min()), cmap='gray') plt.scatter(ir_coords[:, 0], ir_coords[:, 1]) ``` ## Displaying classifications The simplest thing we can do is to just highlight the host galaxies, so let's load up the Norris et al. classifications and have a look. ``` # Load labels. with h5py.File(TRAINING_H5_PATH, 'r') as training_h5: crowdsourced_labels = training_h5['labels'].value with h5py.File(CROWDASTRO_H5_PATH, 'r') as crowdastro_h5: ir_names = crowdastro_h5['/swire/cdfs/string'].value ir_positions = crowdastro_h5['/swire/cdfs/numeric'].value[:, :2] ir_tree = sklearn.neighbors.KDTree(ir_positions) with open(NORRIS_DAT_PATH, 'r') as norris_dat: norris_coords = [r.strip().split('|') for r in norris_dat] norris_labels = numpy.zeros((len(ir_positions))) for ra, dec in norris_coords: # Find a neighbour. skycoord = SkyCoord(ra=ra, dec=dec, unit=('hourangle', 'deg')) ra = skycoord.ra.degree dec = skycoord.dec.degree ((dist,),), ((ir,),) = ir_tree.query([(ra, dec)]) if dist < 0.1: norris_labels[ir] = 1 with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5: i = 250 image = f_h5['/atlas/cdfs/numeric'][i, 2 : 2 + PATCH_DIAMETER ** 2].reshape( (PATCH_DIAMETER, PATCH_DIAMETER))[60:140, 60:140] radio_coords = f_h5['/atlas/cdfs/numeric'][i, :2] nearby = f_h5['/atlas/cdfs/numeric'][i, 2 + PATCH_DIAMETER ** 2:] < ARCMIN ir_coords = f_h5['/swire/cdfs/numeric'][nearby, :2] ir_coords = ra_dec_to_pixels(radio_coords, ir_coords) plt.imshow(numpy.sqrt(image - image.min()), cmap='gray') plt.scatter(ir_coords[:, 0], ir_coords[:, 1]) labels = norris_labels[nearby].astype(bool) nearby_hosts = ir_coords[labels] plt.scatter(nearby_hosts[:, 0], nearby_hosts[:, 1], c='red') ``` What about displaying classifications from my classifier? 
``` from crowdastro.classifier import RGZClassifier from sklearn.ensemble import RandomForestClassifier with h5py.File(TRAINING_H5_PATH, 'r') as f_h5: classifier = RGZClassifier(f_h5['features'].value, N_ASTRO) classifier.train(numpy.arange(f_h5['features'].shape[0]), norris_labels) with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5: i = 250 image = f_h5['/atlas/cdfs/numeric'][i, 2 : 2 + PATCH_DIAMETER ** 2].reshape( (PATCH_DIAMETER, PATCH_DIAMETER))[60:140, 60:140] radio_coords = f_h5['/atlas/cdfs/numeric'][i, :2] nearby = f_h5['/atlas/cdfs/numeric'][i, 2 + PATCH_DIAMETER ** 2:] < ARCMIN ir_coords = f_h5['/swire/cdfs/numeric'][nearby, :2] ir_coords = ra_dec_to_pixels(radio_coords, ir_coords) vec = f_h5['/atlas/cdfs/numeric'][i, :] probs = classifier.predict_probabilities(vec)[nearby] nearby_norris = ir_coords[norris_labels[nearby].astype('bool')] nearby_rgz = ir_coords[crowdsourced_labels[nearby].astype('bool')] plt.figure(figsize=(15, 15)) base_size = 200 plt.imshow(numpy.sqrt(image - image.min()), cmap='gray') plt.scatter(ir_coords[:, 0], ir_coords[:, 1], s=probs * base_size, c=probs, marker='o', cmap='cool') plt.scatter(nearby_norris[:, 0], nearby_norris[:, 1], s=base_size, c='green', marker='*') plt.axis('off') # plt.scatter(nearby_rgz[:, 0], nearby_rgz[:, 1], s=50, c='cyan', marker='x', alpha=0.5) plt.xlim((0, 80)) plt.ylim((0, 80)) ``` ## Plotting a committee If we have multiple classifiers, how should we output their predictions? ``` with h5py.File(TRAINING_H5_PATH, 'r') as f_h5: classifiers = [RGZClassifier(f_h5['features'], N_ASTRO) for _ in range(10)] for classifier in classifiers: subset = numpy.arange(f_h5['features'].shape[0]) numpy.random.shuffle(subset) subset = subset[:len(subset) // 50] subset = sorted(subset) classifier.train(list(subset), norris_labels[subset]) with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5: i = 250 image = f_h5['/atlas/cdfs/numeric'][i, 2 : 2 + PATCH_DIAMETER ** 2].reshape( (PATCH_DIAMETER, PATCH_DIAMETER))[60:140, 60:140] radio_coords = f_h5['/atlas/cdfs/numeric'][i, :2] nearby = f_h5['/atlas/cdfs/numeric'][i, 2 + PATCH_DIAMETER ** 2:] < ARCMIN ir_coords = f_h5['/swire/cdfs/numeric'][nearby, :2] ir_coords = ra_dec_to_pixels(radio_coords, ir_coords) vec = f_h5['/atlas/cdfs/numeric'][i, :] probs = [classifier.predict_probabilities(vec)[nearby] for classifier in classifiers] # Set all but the top n predictions to zero. n = 1 for probs_ in probs: top_n = sorted(probs_, reverse=True)[:n] for j, prob in enumerate(probs_): if prob not in top_n: probs_[j] = 0 plt.figure(figsize=(10, 10)) base_size = 200 plt.imshow(numpy.sqrt(image - image.min()), cmap='gray') colours = cm.rainbow(numpy.linspace(0, 1, 10)) for colour, probs_ in zip(colours, probs): plt.scatter(ir_coords[:, 0] + numpy.random.normal(size=ir_coords.shape[0], scale=0.5), ir_coords[:, 1] + numpy.random.normal(size=ir_coords.shape[0], scale=0.5), s=probs_ * base_size, marker='x', c=colour, alpha=1) plt.axis('off') plt.xlim((0, 80)) plt.ylim((0, 80)) ``` These classifiers have really low diversity because of the way I divided up the data, but this should work fine. 
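One hedged way to raise that diversity, reusing the same `RGZClassifier` interface from the cells above, is to train each committee member on a bootstrap resample (indices drawn with replacement) instead of a disjoint slice. This is a sketch of the idea, not a tuned setup:

```
# Bootstrap-resampled committee for higher diversity.
with h5py.File(TRAINING_H5_PATH, 'r') as f_h5:
    n_examples = f_h5['features'].shape[0]
    committee = [RGZClassifier(f_h5['features'], N_ASTRO) for _ in range(10)]
    for member in committee:
        # Draw with replacement so members see different, overlapping subsets.
        drawn = numpy.random.choice(n_examples, size=n_examples // 10, replace=True)
        subset = sorted(set(drawn))  # h5py fancy indexing wants sorted, unique indices
        member.train(list(subset), norris_labels[subset])
```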
``` def plot_points_on_background(points, background, noise=False, base_size=200): plt.imshow(background, cmap='gray') colours = cm.rainbow(numpy.linspace(0, 1, len(points))) for colour, (x, y) in zip(colours, points): if noise: x += numpy.random.normal(scale=0.5) y += numpy.random.normal(scale=0.5) plt.scatter(x, y, marker='o', c=colour, s=base_size) plt.axis('off') plt.xlim((0, background.shape[0])) plt.ylim((0, background.shape[1])) def plot_classifications(atlas_vector, ir_matrix, labels, base_size=200): image = atlas_vector[2 : 2 + PATCH_DIAMETER ** 2].reshape((PATCH_DIAMETER, PATCH_DIAMETER) )[60:140, 60:140] radio_coords = atlas_vector[:2] nearby = atlas_vector[2 + PATCH_DIAMETER ** 2:] < ARCMIN labels = labels[nearby] ir_coords = ir_matrix[nearby, :2][labels.astype(bool)] ir_coords = ra_dec_to_pixels(radio_coords, ir_coords) plot_points_on_background(ir_coords, image, base_size=base_size) with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5: i = 250 atlas_vector = f_h5['/atlas/cdfs/numeric'][i, :] ir_coords = f_h5['/swire/cdfs/numeric'] plot_classifications(atlas_vector, ir_coords, norris_labels) ``` ## Bringing it all together We want to plot classifications, RGZ labels, and Norris labels in the same row. ``` def plot_classifications_row(atlas_vector, ir_matrix, classifier_labels, rgz_labels, norris_labels, base_size=200): plt.subplot(1, 3, 1) plt.title('Classifier') plot_classifications(atlas_vector, ir_matrix, classifier_labels, base_size=base_size) plt.subplot(1, 3, 2) plt.title('RGZ') plot_classifications(atlas_vector, ir_matrix, rgz_labels, base_size=base_size) plt.subplot(1, 3, 3) plt.title('Norris') plot_classifications(atlas_vector, ir_matrix, norris_labels, base_size=base_size) with h5py.File(TRAINING_H5_PATH, 'r') as f_h5: classifier = RGZClassifier(f_h5['features'].value, N_ASTRO) classifier.train(numpy.arange(f_h5['features'].shape[0]), norris_labels) with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5: i = 250 vec = f_h5['/atlas/cdfs/numeric'][i, :] mat = f_h5['/swire/cdfs/numeric'] probs = classifier.predict_probabilities(vec) labels = numpy.zeros(probs.shape) labels[probs.argmax()] = 1 plt.figure(figsize=(20, 10)) plot_classifications_row(vec, mat, labels, crowdsourced_labels, norris_labels, base_size=200) ```
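A small follow-up sketch that loops the comparison row over a few subjects and writes each figure to disk; the subject indices and output filenames are placeholders, and everything else reuses objects defined above.

```
# Render classifier / RGZ / Norris comparisons for a few subjects and save them.
subject_ids = [100, 150, 250]  # placeholder choices

with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5:
    for i in subject_ids:
        vec = f_h5['/atlas/cdfs/numeric'][i, :]
        mat = f_h5['/swire/cdfs/numeric']
        probs = classifier.predict_probabilities(vec)
        labels = numpy.zeros(probs.shape)
        labels[probs.argmax()] = 1

        plt.figure(figsize=(20, 10))
        plot_classifications_row(vec, mat, labels, crowdsourced_labels,
                                 norris_labels, base_size=200)
        plt.savefig('comparison_{}.png'.format(i))
        plt.close()
```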
github_jupyter
# Single Beam This notebook will run the ISR simulator with a set of data created from a function that makes test data. The results along with error bars are plotted below. ``` %matplotlib inline import matplotlib.pyplot as plt import os,inspect from SimISR import Path import scipy as sp from SimISR.utilFunctions import readconfigfile,makeconfigfile from SimISR.IonoContainer import IonoContainer,MakeTestIonoclass from SimISR.runsim import main as runsim from SimISR.analysisplots import analysisdump import seaborn as sns ``` ## Set up Config Files Setting up a configuration files and the directory needed to run the simulation. The simualtor assumes that for each simulation there is a dedicated directory to save out data along the different processing stages. The simulator also assumes that there is a configuration file which is created in the following cell using a default one that comes with the code base. The only parameter the user should have to set is the number of pulses. ``` # set the number of pulses npulses = 2000 curloc = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) testpath = os.path.join(os.path.split(curloc)[0],'Testdata','Notebookexample1') if not os.path.isdir(testpath): os.mkdir(testpath) defaultpath = os.path.join(os.path.split(curloc)[0],'Test') defcon = os.path.join(defaultpath,'statsbase.ini') (sensdict,simparams) = readconfigfile(defcon) tint = simparams['IPP']*npulses ratio1 = tint/simparams['Tint'] simparams['Tint']=ratio1 * simparams['Tint'] simparams['Fitinter'] = ratio1 * simparams['Fitinter'] simparams['TimeLim'] = tint simparams['startfile']='startfile.h5' makeconfigfile(os.path.join(testpath,'stats.ini'),simparams['Beamlist'],sensdict['Name'],simparams) ``` ## Make Input Data This section will create a set of input parmeters that can be used to create ISR Data. It uses a function MakeTestIonoclass which will create a set of plasma parameters that varies with altitude depending on the the function inputs. This data is put into an ionocontainer class, which is used as a container class to move data between the radarData class, fitter class and plotting modules. It has a standard format so any radar data or plasma parameters for the simulator can be saved in this. A start file is also made which will be used as the starting parameter values used in the fitter. The starting points for the fitter use a nearest neighbor in space to what is found in the start file. ``` finalpath = os.path.join(testpath,'Origparams') if not os.path.isdir(finalpath): os.mkdir(finalpath) z = (50.+sp.arange(120)*5.) nz = len(z) coords = sp.column_stack((sp.zeros((nz,2)),z)) Icont1=MakeTestIonoclass(testv=False,testtemp=True,N_0=1e11,z_0=250.0,H_0=50.0,coords=coords,times =sp.array([[0,1e6]])) Icontstart = MakeTestIonoclass(testv=False,testtemp=False,N_0=1e11,z_0=250.0,H_0=50.0,coords=coords,times =sp.array([[0,1e6]])) finalfile = os.path.join(finalpath,'0 stats.h5') Icont1.saveh5(finalfile) Icontstart.saveh5(os.path.join(testpath,'startfile.h5')) ``` ## Run Simulation The simulation is run through the submodule runsim and its main function, renamed in this as runsim. This function will call all of the neccesary classes and functions to run the simulator. It will save out the data based off of an internal set of file names. This function must get a configuration file and a list of functionalities it is to perform. Below the runsim function will create spectra form the plasma parameters, create radar data and then fit it. 
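Before launching the run in the next cell, a quick hedged sanity check can re-read the configuration file written earlier (same `readconfigfile` helper; the key names are assumed to round-trip through `makeconfigfile` unchanged) to confirm the derived timing values:

```
# Re-read stats.ini and confirm the scaled timing parameters.
(sensdict_chk, simparams_chk) = readconfigfile(os.path.join(testpath, 'stats.ini'))
print('IPP (s): ', simparams_chk['IPP'])
print('Integration time Tint (s): ', simparams_chk['Tint'])
print('Total simulated time (s): ', simparams_chk['TimeLim'])
```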
``` functlist = ['spectrums','radardata','fitting'] config = os.path.join(testpath,'stats.ini') runsim(functlist,testpath,config,True) ``` ## Plotting The data is plotted along with error bars derived from the fitter. ``` sns.set_style("whitegrid") sns.set_context("notebook") fig1,axmat =plt.subplots(1,3,figsize = (16,7),sharey=True) axvec = axmat.flatten() fittedfile = os.path.join(testpath,'Fitted','fitteddata.h5') fitiono = IonoContainer.readh5(fittedfile) paramlist = ['Ne','Te','Ti'] indlist =[sp.argwhere(ip==fitiono.Param_Names)[0][0] for ip in paramlist] n_indlist =[sp.argwhere(('n'+ip)==fitiono.Param_Names)[0][0] for ip in paramlist] altin =Icont1.Cart_Coords[:,2] altfit = fitiono.Cart_Coords[:,2] in_ind=[[1,0],[1,1],[0,1]] pbounds = [[1e10,1.2e11],[200.,3000.],[200.,2500.],[-100.,100.]] for i,iax in enumerate(axvec): iinind = in_ind[i] ifitind = indlist[i] n_ifitind = n_indlist[i] #plot input indata = Icont1.Param_List[:,0,iinind[0],iinind[1]] iax.plot(indata,altin) #plot fitted data fitdata = fitiono.Param_List[:,0,ifitind] fit_error = fitiono.Param_List[:,0,n_ifitind] ploth=iax.plot(fitdata,altfit)[0] iax.set_xlim(pbounds[i]) iax.errorbar(fitdata,altfit,xerr=fit_error,fmt='-o',color=ploth.get_color()) iax.set_title(paramlist[i]) ```
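As a rough, hedged summary of fit quality (reusing `paramlist`, `in_ind`, `indlist`, `altin`, `altfit`, and the two ionocontainers from the plotting cell above, and keeping the notebook's scipy-as-numpy aliases), each fitted profile can be matched to the input truth at the nearest altitude:

```
# Nearest-altitude comparison of fitted parameters against the input truth.
for i, name in enumerate(paramlist):
    truth = Icont1.Param_List[:, 0, in_ind[i][0], in_ind[i][1]]
    fit = fitiono.Param_List[:, 0, indlist[i]]
    # For each input altitude, take the closest fitted altitude.
    nearest = sp.array([sp.argmin(sp.absolute(altfit - z)) for z in altin])
    frac_err = sp.mean(sp.absolute(fit[nearest] - truth) / sp.absolute(truth))
    print('{0}: mean absolute fractional error = {1:.3f}'.format(name, frac_err))
```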
github_jupyter
``` import numpy as np import matplotlib.pyplot as plt from xentropy import dihedrals from astropy import units as au ``` # single Gaussian distro ## create artificial data ``` data= np.random.randn(100000)*30 ``` ## perform kde ``` dih_ent = dihedrals.dihedralEntropy(data=data,verbose=True) dih_ent.calculate() ``` ## plot normalized histogram and kde ``` f,axs = plt.subplots(ncols=2,figsize=(12,6)) axs[0].hist(data,180,density=True, label="histogram") xs, ys = dih_ent.pdf_x_deg,dih_ent.pdf_deg axs[0].plot(xs,ys, lw=5,alpha=.7, label="XEntropy KDE\nS = {:.3f} J/(mol*K)".format(dih_ent.entropy)) axs[0].set(xlabel="artif. dihedrals / degree", ylabel="prob. density / degree$^{-1}$") axs[1].hist(data/180*np.pi,180,density=True, label="histogram") xs, ys = dih_ent.pdf_x,dih_ent.pdf axs[1].plot(xs,ys, lw=5,alpha=.7, label="XEntropy KDE\nS = {:.3f} J/(mol*K)".format(dih_ent.entropy)) axs[1].set(xlabel="artif. dihedrals / radian", ylabel="prob. density / radian$^{-1}$") for ax in axs: ax.legend(loc="upper right") ``` # Gaussians of variable width ## create artificial data ``` data= [np.random.randn(100000)*20, np.random.randn(100000)*30, np.random.randn(100000)*40, np.random.randn(100000)*50] ``` ## perform kde ``` dih_ent = dihedrals.dihedralEntropy(data=data,verbose=True,input_unit="degree") dih_ent.calculate() ``` ## plot normalized histogram and kde ``` f,axs = plt.subplots(2,2,figsize=(12,12),sharex=True,sharey=True) for ax,dat,xs,ys,S in zip(axs.flatten(),data,dih_ent.pdf_x_deg,dih_ent.pdf_deg, dih_ent.entropy): ax.hist(dat,180,density=True, label="histogram") ax.plot(xs,ys, lw=5,alpha=.7, label="XEntropy KDE\nS = {:.3f} J/(mol*K)".format(S)) ax.set(xlabel="artificial dihedrals", ylabel="probability density") ax.legend() f.tight_layout() ``` # binodal distributions ## create artificial data ``` def binodal_data(n_samples=1001,w1=10,w2=10): n1 = n_samples//2 n2 = n_samples-n1 p1 = np.random.randn(n1)*w1-90 p2 = np.random.randn(n2)*w2+90 return np.concatenate([p1,p2]) data= [binodal_data(100000,5,25), binodal_data(100000,15,25), binodal_data(100000,25,25), binodal_data(100000,35,25)] ``` ## perform kde ``` dih_ent = dihedrals.dihedralEntropy(data=data, verbose=False, input_unit="degree") dih_ent.calculate() ``` ## plot normalized histogram and kde ``` f,axs = plt.subplots(2,2,figsize=(12,12),sharex=True,sharey=True) for ax,dat,xs,ys,S in zip(axs.flatten(), data,dih_ent.pdf_x_deg, dih_ent.pdf_deg, dih_ent.entropy): ax.hist(dat,180,density=True, label="histogram") ax.plot(xs,ys, lw=5,alpha=.7, label="XEntropy KDE\nS = {:.3f} J/(mol*K)".format(S)) ax.set(xlabel="artificial dihedrals", ylabel="probability density") ax.legend() f.tight_layout() ``` # shifted binodal distributions ## create artificial data ``` def binodal_data(n_samples=1001,w1=10,w2=10): n1 = n_samples//2 n2 = n_samples-n1 p1 = np.random.randn(n1)*w1 p2 = np.random.randn(n2)*w2+180 return np.divmod(np.concatenate([p1,p2]),360)[1]-180 data= [binodal_data(100000,5,25), binodal_data(100000,15,25), binodal_data(100000,25,25), binodal_data(100000,35,25)] ``` ## perform kde ``` dih_ent = dihedrals.dihedralEntropy(data=data, verbose=False, input_unit="degree") dih_ent.calculate() ``` ## plot normalized histogram and kde ``` f,axs = plt.subplots(2,2,figsize=(12,12),sharex=True,sharey=True) for ax,dat,xs,ys,S in zip(axs.flatten(),data,dih_ent.pdf_x_deg,dih_ent.pdf_deg, dih_ent.entropy): ax.hist(dat,180,density=True, label="histogram") ax.plot(xs,ys, lw=5,alpha=.7, label="XEntropy KDE\nS = {:.3f} J/(mol*K)".format(S)) 
ax.set(xlabel="artificial dihedrals", ylabel="probability density") ax.legend() f.tight_layout() ``` # trinodal distributions (butane-like) ## create artificial data ``` def trinodal_data(n_samples=1001,w1=20,w2=20,w3=20): n1 = int(n_samples*2/5) n2 = int((n_samples-n1)/2) n3 = n_samples-n1-n2 p1 = np.random.randn(n1)*w1 p2 = np.random.randn(n2)*w2-120 p3 = np.random.randn(n3)*w3+120 return np.concatenate([p1,p2,p3]) data= trinodal_data(100000) ``` ## perform kde ``` dih_ent = dihedrals.dihedralEntropy(data=data, verbose=False, input_unit="degree") dih_ent.calculate() ``` ## plot normalized histogram and kde ``` f,axs = plt.subplots() xs, ys = dih_ent.pdf_x_deg,dih_ent.pdf_deg axs.hist(data,180,density=True, label="histogram") axs.plot(xs,ys, lw=5,alpha=.7, label="XEntropy KDE\nS = {:.3f} J/(mol*K)".format(dih_ent.entropy)) axs.set(xlabel="artificial dihedrals", ylabel="probability density") axs.legend() ```
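As a rough, hedged cross-check of the KDE entropies above: assuming the convention S = -R ∫ p(φ) ln p(φ) dφ with the dihedral φ in radians (which may not match XEntropy's internal reference state exactly), a plain normalized histogram gives a comparable number:

```
from scipy.constants import R  # molar gas constant, J/(mol*K)

def histogram_entropy(samples_deg, bins=180):
    """Crude -R * sum(p ln p) * dx estimate from a normalized histogram in radians."""
    samples_rad = np.asarray(samples_deg) / 180.0 * np.pi
    density, edges = np.histogram(samples_rad, bins=bins,
                                  range=(-np.pi, np.pi), density=True)
    dx = edges[1] - edges[0]
    nonzero = density > 0
    return -R * np.sum(density[nonzero] * np.log(density[nonzero])) * dx

print("Histogram entropy estimate: {:.3f} J/(mol*K)".format(histogram_entropy(data)))
```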
github_jupyter
``` """ This notebook contains codes to run hyper-parameter tuning using a genetic algorithm. Use another notebook if you wish to use *grid search* instead. # Under development. """ import os, sys import numpy as np import pandas as pd import tensorflow as tf import sklearn from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler import matplotlib import matplotlib.pyplot as plt from pprint import pprint from typing import Dict, List import datetime import sys sys.path.append("../") # If this notebook file is not placed under in /notebook/ directory, # adding directory "../" might not correly add the project directory. # If adding "../" does not solve the importing problem, we need to setup # the directory mannually. try: import constants except ModuleNotFoundError: core_dir = input("Directory of core files >>> ") if not core_dir.endswith("/"): core_dir += "/" sys.path.append(core_dir) import constants from core.tools.metrics import * import core.tools.visualize as visualize from core.tools.time_series import * from core.tools.data_import import * import core.tools.rnn_prepare as rnn_prepare import core.tools.param_set_generator as param_set_generator import core.ga.genetic_hpt as genetic_hpt import core.models.stacked_lstm as stacked_lstm import core.training.hps_methods as hps_methods # data preparation phase. pprint(constants.DATA_DIR) choice = None while choice is None or choice not in constants.DATA_DIR.keys(): if choice is not None: print("Invalid data location received, try again...") choice = input("Select Dataset >>> ") # choice = "a" FILE_DIR = constants.DATA_DIR[choice] print(f"Dataset chosen: {FILE_DIR}") print("Avaiable configuration files found: ") for cf in os.listdir("../hps_configs"): if cf.endswith("config.py"): print("\t" + cf) config_name = input("Select config file >>> ") if config_name.endswith(".py"): config_name = config_name[:-3] # config_name = "mac_config" exec(f"import hps_configs.{config_name} as config") # print("Reading configuration file...") # for att in dir(config): # if att.endswith("_config"): # print(f"\tLoading: {att}") # exec(f"globals().update(config.{att})") def obj_func(param) -> float: df_ready = rnn_prepare.prepare_dataset( file_dir=FILE_DIR, periods=int(param["PERIODS"]), order=int(param["ORDER"]), remove=None, verbose=False ) # Split dataset. (X_train, X_val, X_test, y_train, y_val, y_test) = rnn_prepare.split_dataset( raw=df_ready, train_ratio=param["TRAIN_RATIO"], val_ratio=param["VAL_RATIO"], lags=param["LAGS"] ) # The gross dataset excluding the test set. # Excluding the test set for isolation purpose. data_feed = { "X_train": X_train, "X_val": X_val, "y_train": y_train, "y_val": y_val, } ep = param["epochs"] ckpts = range(int(ep * 0.95), ep) # Take the final 5% epochs. 
tf.reset_default_graph() model = stacked_lstm.StackedLSTM( param=param, prediction_checkpoints=ckpts, verbose=False ) ret_pack = model.fit(data=data_feed, ret=["mse_val"]) return float(np.mean(list(ret_pack["mse_val"].values()))) total_gen = 30 init_size = 10 ignore_set = ( "PERIODS", "ORDER", "TRAIN_RATIO", "VAL_RATIO", "num_outputs", "num_inputs", "report_periods", "tensorboard_path", "model_path", "fig_path" ) optimizer = genetic_hpt.GeneticHPT( gene_pool=config.main, pop_size=init_size, eval_func=obj_func, mode="min", retain=0.5, shot_prob=0.05, mutate_prob=0.05, verbose=False, ignore=ignore_set ) # sample_param = {'LAGS': 6, # 'ORDER': 1, # 'PERIODS': 1, # 'TRAIN_RATIO': 0.8, # 'VAL_RATIO': 0.1, # 'clip_grad': None, # 'epochs': 500, # 'fig_path': '/Volumes/Intel/debug/model_figs/', # 'learning_rate': 0.1, # 'model_path': '/Volumes/Intel/debug/saved_models/', # 'num_inputs': 1, # 'num_neurons': (32, 64), # 'num_outputs': 1, # 'num_time_steps': None, # 'report_periods': 10, # 'tensorboard_path': '/Volumes/Intel/debug/tensorboard/'} class HiddenPrints: def __enter__(self): self._original_stdout = sys.stdout sys.stdout = open(os.devnull, 'w') def __exit__(self, exc_type, exc_val, exc_tb): sys.stdout.close() sys.stdout = self._original_stdout start_time = datetime.datetime.now() # Training best_rec = list() worst_rec = list() print("Initial evaluation gen=0...") optimizer.evaluate(verbose=True) print(f"\nBest fitted entity validatiton MSE: {optimizer.population[0][1]: 0.7f}\ \nWorst fitted entity validation MSE: {optimizer.population[-1][1]: 0.7f}") for gen in range(total_gen): print(f"Generation: [{gen + 1}/{total_gen}]") optimizer.select() optimizer.evolve() optimizer.evaluate(verbose=True) print(f"\nBest fitted entity validation MSE: {optimizer.population[0][1]: 0.7f}\ \nWorst fitted entity validation MSE: {optimizer.population[-1][1]: 0.7f}") best_rec.append(optimizer.population[0][1]) worst_rec.append(optimizer.population[-1][1]) print(f"Final generation best fitted entity: {optimizer.population[0][0]}\ \nwith valudation set MSE (fitness): {optimizer.population[0][1]}") end_time = datetime.datetime.now() print(f"Time taken: {str(end_time - start_time)}") ```
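A quick visual of the run above, using the `best_rec` and `worst_rec` lists filled in during the generation loop (matplotlib is already imported as `plt`):

```
# Fitness trajectory across generations.
plt.figure(figsize=(10, 5))
plt.plot(best_rec, label='best validation MSE')
plt.plot(worst_rec, label='worst validation MSE')
plt.xlabel('generation')
plt.ylabel('validation MSE (fitness)')
plt.yscale('log')
plt.legend()
plt.show()
```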
github_jupyter
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Demo-of-RISE-for-slides-with-Jupyter-notebooks-(Python)" data-toc-modified-id="Demo-of-RISE-for-slides-with-Jupyter-notebooks-(Python)-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Demo of RISE for slides with Jupyter notebooks (Python)</a></span><ul class="toc-item"><li><span><a href="#Title-2" data-toc-modified-id="Title-2-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Title 2</a></span><ul class="toc-item"><li><span><a href="#Title-3" data-toc-modified-id="Title-3-1.1.1"><span class="toc-item-num">1.1.1&nbsp;&nbsp;</span>Title 3</a></span><ul class="toc-item"><li><span><a href="#Title-4" data-toc-modified-id="Title-4-1.1.1.1"><span class="toc-item-num">1.1.1.1&nbsp;&nbsp;</span>Title 4</a></span></li></ul></li></ul></li><li><span><a href="#Text" data-toc-modified-id="Text-1.2"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Text</a></span></li><li><span><a href="#Maths" data-toc-modified-id="Maths-1.3"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Maths</a></span></li><li><span><a href="#And-code" data-toc-modified-id="And-code-1.4"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>And code</a></span></li></ul></li><li><span><a href="#More-demo-of-Markdown-code" data-toc-modified-id="More-demo-of-Markdown-code-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>More demo of Markdown code</a></span><ul class="toc-item"><li><span><a href="#Lists" data-toc-modified-id="Lists-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Lists</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Images" data-toc-modified-id="Images-2.1.0.1"><span class="toc-item-num">2.1.0.1&nbsp;&nbsp;</span>Images</a></span></li><li><span><a href="#And-Markdown-can-include-raw-HTML" data-toc-modified-id="And-Markdown-can-include-raw-HTML-2.1.0.2"><span class="toc-item-num">2.1.0.2&nbsp;&nbsp;</span>And Markdown can include raw HTML</a></span></li></ul></li></ul></li></ul></li><li><span><a href="#End-of-this-demo" data-toc-modified-id="End-of-this-demo-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>End of this demo</a></span></li></ul></div> # Demo of RISE for slides with Jupyter notebooks (Python) - This document is an example of a slideshow, written in a [Jupyter notebook](https://www.jupyter.org/) with the [RISE extension](https://github.com/damianavila/RISE). > By [Lilian Besson](http://perso.crans.org/besson/), Sept.2017. --- ## Title 2 ### Title 3 #### Title 4 ##### Title 5 ##### Title 6 ## Text With text, *emphasis*, **bold**, ~~striked~~, `inline code` and > *Quote.* > > -- By a guy. ## Maths With inline math $\sin(x)^2 + \cos(x)^2 = 1$ and equations: $$\sin(x)^2 + \cos(x)^2 = \left(\frac{\mathrm{e}^{ix} - \mathrm{e}^{-ix}}{2i}\right)^2 + \left(\frac{\mathrm{e}^{ix} + \mathrm{e}^{-ix}}{2}\right)^2 = \frac{-\mathrm{e}^{2ix}-\mathrm{e}^{-2ix}+2 \; ++\mathrm{e}^{2ix}+\mathrm{e}^{-2ix}+2}{4} = 1.$$ ## And code In Markdown: ```python from sys import version print(version) ``` And in a executable cell (with Python 3 kernel) : ``` from sys import version print(version) ``` # More demo of Markdown code ## Lists - Unordered - lists - are easy. And 1. and ordered also ! Just 2. start lines by `1.`, `2.` etc 3. or simply `1.`, `1.`, ... 
#### Images With a HTML `<img/>` tag or the `![alt](url)` Markdown code: <img width="100" src="agreg/images/dooku.jpg"/> ![agreg/images/dooku.jpg](agreg/images/dooku.jpg) ``` # https://gist.github.com/dm-wyncode/55823165c104717ca49863fc526d1354 """Embed a YouTube video via its embed url into a notebook.""" from functools import partial from IPython.display import display, IFrame width, height = (560, 315, ) def _iframe_attrs(embed_url): """Get IFrame args.""" return ( ('src', 'width', 'height'), (embed_url, width, height, ), ) def _get_args(embed_url): """Get args for type to create a class.""" iframe = dict(zip(*_iframe_attrs(embed_url))) attrs = { 'display': partial(display, IFrame(**iframe)), } return ('YouTubeVideo', (object, ), attrs, ) def youtube_video(embed_url): """Embed YouTube video into a notebook. Place this module into the same directory as the notebook. >>> from embed import youtube_video >>> youtube_video(url).display() """ YouTubeVideo = type(*_get_args(embed_url)) # make a class return YouTubeVideo() # return an object ``` #### And Markdown can include raw HTML <center><span style="color: green;">This is a centered span, colored in green.</span></center> Iframes are disabled by default, but by using the IPython internals we can include let say a YouTube video: ``` youtube_video("https://www.youtube.com/embed/FNg5_2UUCNU").display() print(2**2021) ``` # End of this demo - See [here for more notebooks](https://github.com/Naereen/notebooks/)! - This document, like my other notebooks, is distributed [under the MIT License](https://lbesson.mit-license.org/).
github_jupyter
``` # If you run on colab uncomment the following line #!pip install git+https://github.com/clementchadebec/benchmark_VAE.git import torch import torchvision.datasets as datasets %load_ext autoreload %autoreload 2 mnist_trainset = datasets.MNIST(root='../../data', train=True, download=True, transform=None) train_dataset = mnist_trainset.data[:-10000].reshape(-1, 1, 28, 28) / 255. eval_dataset = mnist_trainset.data[-10000:].reshape(-1, 1, 28, 28) / 255. from pythae.models import VAEGAN, VAEGANConfig from pythae.trainers import CoupledOptimizerAdversarialTrainer, CoupledOptimizerAdversarialTrainerConfig from pythae.pipelines.training import TrainingPipeline from pythae.models.nn.benchmarks.mnist import Encoder_VAE_MNIST, Decoder_AE_MNIST, LayeredDiscriminator_MNIST config = CoupledOptimizerAdversarialTrainerConfig( output_dir='my_model', learning_rate=1e-4, batch_size=100, num_epochs=100, ) model_config = VAEGANConfig( input_dim=(1, 28, 28), latent_dim=16, adversarial_loss_scale=0.8, reconstruction_layer= 3, margin=0.4, equilibrium= 0.68 ) model = VAEGAN( model_config=model_config, encoder=Encoder_VAE_MNIST(model_config), decoder=Decoder_AE_MNIST(model_config) ) pipeline = TrainingPipeline( training_config=config, model=model ) pipeline( train_data=train_dataset, eval_data=eval_dataset ) import os last_training = sorted(os.listdir('my_model'))[-1] trained_model = VAEGAN.load_from_folder(os.path.join('my_model', last_training, 'final_model')) from pythae.samplers import NormalSampler # create normal sampler normal_samper = NormalSampler( model=trained_model ) # sample gen_data = normal_samper.sample( num_samples=25 ) import matplotlib.pyplot as plt # show results with normal sampler fig, axes = plt.subplots(nrows=5, ncols=5, figsize=(10, 10)) for i in range(5): for j in range(5): axes[i][j].imshow(gen_data[i*5 +j].cpu().squeeze(0), cmap='gray') axes[i][j].axis('off') plt.tight_layout(pad=0.) from pythae.samplers import GaussianMixtureSampler, GaussianMixtureSamplerConfig # set up gmm sampler config gmm_sampler_config = GaussianMixtureSamplerConfig( n_components=10 ) # create gmm sampler gmm_sampler = GaussianMixtureSampler( sampler_config=gmm_sampler_config, model=trained_model ) # fit the sampler gmm_sampler.fit(train_dataset) # sample gen_data = gmm_sampler.sample( num_samples=25 ) # show results with gmm sampler fig, axes = plt.subplots(nrows=5, ncols=5, figsize=(10, 10)) for i in range(5): for j in range(5): axes[i][j].imshow(gen_data[i*5 +j].cpu().squeeze(0), cmap='gray') axes[i][j].axis('off') plt.tight_layout(pad=0.) ``` ## ... the other samplers work the same
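A small hedged utility to keep the generated digits: assuming `gen_data` is a `(25, 1, 28, 28)` torch tensor as returned by the samplers above, `torchvision.utils.save_image` writes the same 5x5 grid to disk (the filename is a placeholder):

```
from torchvision.utils import save_image

# Save the last batch of generated digits as a single 5x5 grid image.
save_image(gen_data.cpu(), 'vaegan_gmm_samples.png', nrow=5)
```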
github_jupyter
# **PointRend - Image Segmentation as Rendering** **Authors: Alexander Kirillov, Yuxin Wu, Kaiming H,e Ross Girshick - Facebook AI Research (FAIR)** **Official Github**: https://github.com/facebookresearch/detectron2/tree/main/projects/PointRend --- **Edited By Su Hyung Choi (Key Summary & Code Practice)** If you have any issues on this scripts, please PR to the repository below. **[Github: @JonyChoi - Computer Vision Paper Reviews]** https://github.com/jonychoi/Computer-Vision-Paper-Reviews Edited Jan 10 2022 --- ### **Abstract** <table> <tbody> <tr> <td> <p> <i>We present a new method for efficient high-quality image segmentation of objects and scenes. By analogizing classical computer graphics methods for efficient rendering with over- and undersampling challenges faced in pixel labeling tasks, we develop a unique perspective of image segmentation as a rendering problem. From this vantage, we present the PointRend (Point-based Rendering) neural network module: a module that performs point-based segmentation predictions at adaptively selected locations based on an iterative subdivision algorithm. PointRend can be flexibly applied to both instance and semantic segmentation tasks by building on top of existing state-ofthe-art models. While many concrete implementations of the general idea are possible, we show that a simple design already achieves excellent results. Qualitatively, PointRend outputs crisp object boundaries in regions that are oversmoothed by previous methods. Quantitatively, PointRend yields significant gains on COCO and Cityscapes, for both instance and semantic segmentation. PointRend’s efficiency enables output resolutions that are otherwise impractical in terms of memory or computation compared to existing approaches. Code has been made available at https:// github.com/facebookresearch/detectron2/ tree/master/projects/PointRend.</i> </p> </td> </tr> </tbody> </table> ### **Introduction** <table> <tbody> <tr> <td> <p> Image segmentation tasks involve mapping pixels sampled on a regular grid to a label map, or a set of label maps, on the same grid. For semantic segmentation, the label map indicates the predicted category at each pixel. In the case of instance segmentation, a binary foreground vs. background map is predicted for each detected object. The modern tools of choice for these tasks are built on convolutional neural networks (CNNs) [27, 26]. </p> <table> <tbody> <tr> <td> <img src="./imgs/figure1.png" width="300" /> </td> <td> <img src="./imgs/figure1_description.png" width="350" /> </td> </tr> </tbody> </table> <p> CNNs for image segmentation typically operate on regular grids: the input image is a regular grid of pixels, their hidden representations are feature vectors on a regular grid, and their outputs are label maps on a regular grid. Regular grids are convenient, but not necessarily computationally ideal for image segmentation. The label maps predicted by these networks should be mostly smooth, i.e., neighboring pixels often take the same label, because highfrequency regions are restricted to the sparse boundaries between objects. A regular grid will unnecessarily oversample the smooth areas while simultaneously undersampling object boundaries. The result is excess computation in smooth regions and blurry contours (Fig. 1, upper-left). 
Image segmentation methods often predict labels on a low-resolution regular grid, e.g., 1/8-th of the input [35] for semantic segmentation, or 28×28 [19] for instance segmentation, as a compromise between undersampling and oversampling. </p> <p> Analogous sampling issues have been studied for decades in computer graphics. For example, a renderer maps a model (e.g., a 3D mesh) to a rasterized image, i.e. a regular grid of pixels. While the output is on a regular grid, computation is not allocated uniformly over the grid. Instead, a common graphics strategy is to compute pixel values at an irregular subset of adaptively selected points in the image plane. The classical subdivision technique of [48], as an example, yields a quadtree-like sampling pattern that efficiently renders an anti-aliased, high-resolution image. </p> <p> The central idea of this paper is to view image segmentation as a rendering problem and to adapt classical ideas from computer graphics to efficiently “render” highquality label maps (see Fig. 1, bottom-left). We encapsulate this computational idea in a new neural network module, called PointRend, that uses a subdivision strategy to adaptively select a non-uniform set of points at which to compute labels. PointRend can be incorporated into popular meta-architectures for both instance segmentation (e.g., Mask R-CNN [19]) and semantic segmentation (e.g., FCN [35]). Its subdivision strategy efficiently computes high-resolution segmentation maps using an order of magnitude fewer floating-point operations than direct, dense computation. </p> <img src="./imgs/figure2.png" /> <p> PointRend is a general module that admits many possible implementations. Viewed abstractly, a PointRend module accepts one or more typical CNN feature maps f(xi, yi) that are defined over regular grids, and outputs high-resolution predictions p(x0i, y0i) over a finer grid. Instead of making excessive predictions over all points on the output grid, PointRend makes predictions only on carefully selected points. To make these predictions, it extracts a point-wise feature representation for the selected points by interpolating f, and uses a small point head subnetwork to predict output labels from the point-wise features. We will present a simple and effective PointRend implementation. </p> <p> We evaluate PointRend on instance and semantic segmentation tasks using the COCO [29] and Cityscapes [9] benchmarks. Qualitatively, PointRend efficiently computes sharp boundaries between objects, as illustrated in Fig. 2 and Fig. 8. We also observe quantitative improvements even though the standard intersection-over-union based metrics for these tasks (mask AP and mIoU) are biased towards object-interior pixels and are relatively insensitive to boundary improvements. PointRend improves strong Mask RCNN and DeepLabV3 [5] models by a significant margin. </p> </td> </tr> </tbody> </table> ### **2. Related Work** <table> <tbody> <tr> <td> <p> <strong>Rendering</strong> algorithms in computer graphics output a regular grid of pixels. However, they usually compute these pixel values over a non-uniform set of points. Efficient procedures like subdivision [48] and adaptive sampling [38, 42] refine a coarse rasterization in areas where pixel values have larger variance. Ray-tracing renderers often use oversampling [50], a technique that samples some points more densely than the output grid to avoid aliasing effects. Here, we apply classical subdivision to image segmentation. </p> <p> Non-uniform grid representations. 
Computation on regular grids is the dominant paradigm for 2D image analysis, but this is not the case for other vision tasks. In 3D shape recognition, large 3D grids are infeasible due to cubic scaling. Most CNN-based approaches do not go beyond coarse 64×64×64 grids [12, 8]. Instead, recent works consider more efficient non-uniform representations such as meshes [47, 14], signed distance functions [37], and octrees [46]. Similar to a signed distance function, PointRend can compute segmentation values at any point. </p> <p> Recently, Marin et al. [36] propose an efficient semantic segmentation network based on non-uniform subsampling of the input image prior to processing with a standard semantic segmentation network. PointRend, in contrast, focuses on non-uniform sampling at the output. It may be possible to combine the two approaches, though [36] is currently unproven for instance segmentation. </p> <p> <strong>Instance segmentation</strong> methods based on the Mask RCNN meta-architecture [19] occupy top ranks in recent challenges [32, 3]. These region-based architectures typically predict masks on a 28×28 grid irrespective of object size. This is sufficient for small objects, but for large objects it produces undesirable “blobby” output that oversmooths the fine-level details of large objects (see Fig. 1, top-left). Alternative, bottom-up approaches group pixels to form object masks [31, 1, 25]. These methods can produce more detailed output, however, they lag behind regionbased approaches on most instance segmentation benchmarks [29, 9, 40]. TensorMask [7], an alternative slidingwindow method, uses a sophisticated network design to predict sharp high-resolution masks for large objects, but its accuracy also lags slightly behind. In this paper, we show that a region-based segmentation model equipped with PointRend can produce masks with fine-level details while improving the accuracy of region-based approaches. </p> <p> <strong>Semantic segmentation.</strong> Fully convolutional networks (FCNs) [35] are the foundation of modern semantic segmentation approaches. They often predict outputs that have lower resolution than the input grid and use bilinear upsampling to recover the remaining 8-16× resolution. Results may be improved with dilated/atrous convolutions that replace some subsampling layers [4, 5] at the expense of more memory and computation. </p> <table> <tbody> <tr> <td> <img src="./imgs/figure3.png" width="300" /> </td> <td> <img src="./imgs/figure3_description.png" width="290" /> </td> </tr> </tbody> </table> <p> Alternative approaches include encoder-decoder achitectures [6, 24, 44, 45] that subsample the grid representation in the encoder and then upsample it in the decoder, using skip connections [44] to recover filtered details. Current approaches combine dilated convolutions with an encoderdecoder structure [6, 30] to produce output on a 4× sparser grid than the input grid before applying bilinear interpolation. In our work, we propose a method that can efficiently predict fine-level details on a grid as dense as the input grid. </p> </tr> </tbody> </table> ### **3. Method** <table> <tbody> <tr> <td> <p> We analogize image segmentation (of objects and/or scenes) in computer vision to image rendering in computer graphics. Rendering is about displaying a model (e.g., a 3D mesh) as a regular grid of pixels, i.e., an image. 
While the output representation is a regular grid, the underlying physical entity (e.g., the 3D model) is continuous and its physical occupancy and other attributes can be queried at any real-value point on the image plane using physical and geometric reasoning, such as ray-tracing. </p> <p> Analogously, in computer vision, we can think of an image segmentation as the occupancy map of an underlying continuous entity, and the segmentation output, which is a regular grid of predicted labels, is “rendered” from it. The entity is encoded in the network’s feature maps and can be accessed at any point by interpolation. A parameterized function, that is trained to predict occupancy from these interpolated point-wise feature representations, is the counterpart to physical and geometric reasoning. </p> <p> Based on this analogy, we propose PointRend (Pointbased Rendering) as a methodology for image segmentation using point representations. A PointRend module accepts one or more typical CNN feature maps of C channels f ∈ R C×H×W , each defined over a regular grid (that is typically 4× to 16× coarser than the image grid), and outputs predictions for the K class labels p ∈ R K×H0×W0 over a regular grid of different (and likely higher) resolution. A PointRend module consists of three main components: (i) A point selection strategy chooses a small number of real-value points to make predictions on, avoiding excessive computation for all pixels in the high-resolution output grid. (ii) For each selected point, a point-wise feature representation is extracted. Features for a real-value point are computed by bilinear interpolation of f, using the point’s 4 nearest neighbors that are on the regular grid of f. As a result, it is able to utilize sub-pixel information encoded in the channel dimension of f to predict a segmentation that has higher resolution than f. (iii) A point head: a small neural network trained to predict a label from this point-wise feature representation, independently for each point. </p> <p> The PointRend architecture can be applied to instance segmentation (e.g., on Mask R-CNN [19]) and semantic segmentation (e.g., on FCNs [35]) tasks. For instance segmentation, PointRend is applied to each region. It computes masks in a coarse-to-fine fashion by making predictions over a set of selected points (see Fig. 3). For semantic segmentation, the whole image can be considered as a single region, and thus without loss of generality we will describe PointRend in the context of instance segmentation. We discuss the three main components in more detail next. </p> </td> </tr> </tbody> </table> ### **3.1. Point Selection for Inference and Training** <table> <thead> <tr> <th> Point Selection for Inference and Training </th> </tr> </thead> <tbody> <tr> <td> <p> At the core of our method is the idea of flexibly and adaptively selecting points in the image plane at which to predict segmentation labels. Intuitively, these points should be located more densely near high-frequency areas, such as object boundaries, analogous to the anti-aliasing problem in ray-tracing. We develop this idea for inference and training. </p> <p> <strong>Inference.</strong> Our selection strategy for inference is inspired by the classical technique of adaptive subdivision [48] in computer graphics. 
The technique is used to efficiently render high resolutions images (e.g., via ray-tracing) by computing only at locations where there is a high chance that the value is significantly different from its neighbors; for all other locations the values are obtained by interpolating already computed output values (starting from a coarse grid). </p> <p> For each region, we iteratively “render” the output mask in a coarse-to-fine fashion. The coarsest level prediction is made on the points on a regular grid (e.g., by using a standard coarse segmentation prediction head). In each iteration, PointRend upsamples its previously predicted segmentation using bilinear interpolation and then selects the N most uncertain points (e.g., those with probabilities closest to 0.5 for a binary mask) on this denser grid. PointRend then computes the point-wise feature representation (described shortly in §3.2) for each of these N points and predicts their labels. This process is repeated until the segmentation is upsampled to a desired resolution. One step of this procedure is illustrated on a toy example in Fig. 4. </p> <table> <tbody> <tr> <td> <img src="./imgs/figure4.png" width="400"/> </td> <td> <img src="./imgs/figure5.png" width="380"/> </td> </tr> </tbody> </table> <p> With a desired output resolution of M×M pixels and a starting resolution of M0×M0, PointRend requires no more than N log2 M M0 point predictions. This is much smaller than M×M, allowing PointRend to make high-resolution predictions much more effectively. For example, if M0 is 7 and the desired resolutions is M=224, then 5 subdivision steps are preformed. If we select N=282 points at each step, PointRend makes predictions for only 282 ·4.25 points, which is 15 times smaller than 2242 . Note that fewer than N log2 M M0 points are selected overall because in the first subdivision step only 142 points are available. </p> <p> <strong>Training.</strong> During training, PointRend also needs to select points at which to construct point-wise features for training the point head. In principle, the point selection strategy can be similar to the subdivision strategy used in inference. However, subdivision introduces sequential steps that are less friendly to training neural networks with backpropagation. Instead, for training we use a non-iterative strategy based on random sampling. </p> <p> The sampling strategy selects N points on a feature map to train on.1 It is designed to bias selection towards uncertain regions, while also retaining some degree of uniform coverage, using three principles. (i) Over generation: we over-generate candidate points by randomly sampling kN points (k>1) from a uniform distribution. (ii) Importance sampling: we focus on points with uncertain coarse predictions by interpolating the coarse prediction values at all kN points and computing a taskspecific uncertainty estimate (defined in §4 and §5). The most uncertain βN points (β ∈ [0, 1]) are selected from the kN candidates. (iii) Coverage: the remaining (1 − β)N points are sampled from a uniform distribution. We illustrate this procedure with different settings, and compare it to regular grid selection, in Fig. 5. </p> <p> At training time, predictions and loss functions are only computed on the N sampled points (in addition to the coarse segmentation), which is simpler and more efficient than backpropagation through subdivision steps. This design is similar to the parallel training of RPN + Fast R-CNN in a Faster R-CNN system [13], whose inference is sequential. 
</p> </td> </tr> </tbody> </table> ### **3.2. Point-wise Representation and Point Head** <table> <tbody> <tr> <td> <p> PointRend constructs point-wise features at selected points by combining (e.g., concatenating) two feature types, fine-grained and coarse prediction features, described next. </p> <p> <strong>Fine-grained features.</strong> To allow PointRend to render fine segmentation details we extract a feature vector at each sampled point from CNN feature maps. Because a point is a real-value 2D coordinate, we perform bilinear interpolation on the feature maps to compute the feature vector, following standard practice [22, 19, 10]. Features can be extracted from a single feature map (e.g., res2 in a ResNet); they can also be extracted from multiple feature maps (e.g., res2 to res5, or their feature pyramid [28] counterparts) and concatenated, following the Hypercolumn method [17]. </p> <p> <strong>Coarse prediction features.</strong> The fine-grained features enable resolving detail, but are also deficient in two regards. First, they do not contain region-specific information and thus the same point overlapped by two instances’ bounding boxes will have the same fine-grained features. Yet, the point can only be in the foreground of one instance. Therefore, for the task of instance segmentation, where different regions may predict different labels for the same point, additional region-specific information is needed. </p> <p> Second, depending on which feature maps are used for the fine-grained features, the features may contain only relatively low-level information (e.g., we will use res2 with DeepLabV3). In this case, a feature source with more contextual and semantic information can be helpful. This issue affects both instance and semantic segmentation. </p> <p> Based on these considerations, the second feature type is a coarse segmentation prediction from the network, i.e., a K-dimensional vector at each point in the region (box) representing a K-class prediction. The coarse resolution, by design, provides more globalized context, while the channels convey the semantic classes. These coarse predictions are similar to the outputs made by the existing architectures, and are supervised during training in the same way as existing models. For instance segmentation, the coarse prediction can be, for example, the output of a lightweight 7×7 resolution mask head in Mask R-CNN. For semantic segmentation, it can be, for example, predictions from a stride 16 feature map. </p> <p> <strong>Point head.</strong> Given the point-wise feature representation at each selected point, PointRend makes point-wise segmentation predictions using a simple multi-layer perceptron (MLP). This MLP shares weights across all points (and all regions), analogous to a graph convolution [23] or a PointNet [43]. Since the MLP predicts a segmentation label for each point, it can be trained by standard task-specific segmentation losses (described in §4 and §5). </p> </td> </tr> </tbody> </table> ### **4. Experiments: Instance Segmentation** <table> <tbody> <tr> <td> <p> <strong>Datasets.</strong> We use two standard instance segmentation datasets: COCO [29] and Cityscapes [9]. We report the standard mask AP metric [29] using the median of 3 runs for COCO and 5 for Cityscapes (it has higher variance). </p> <p> COCO has 80 categories with instance-level annotation. We train on train2017 (∼118k images) and report results on val2017 (5k images). 
As noted in [16], the COCO ground-truth is often coarse and AP for the dataset may not fully reflect improvements in mask quality. Therefore we supplement COCO results with AP measured using the 80 COCO category subset of LVIS [16], denoted by AP*. </p> <p> The LVIS annotations have significantly higher quality. Note that for AP? we use the same models trained on COCO and simply re-evaluate their predictions against the higherquality LVIS annotations using the LVIS evaluation API. Cityscapes is an ego-centric street-scene dataset with 8 categories, 2975 train images, and 500 validation images. The images are higher resolution compared to COCO (1024×2048 pixels) and have finer, more pixel-accurate ground-truth instance segmentations. </p> <p> <strong>Architecture.</strong> Our experiments use Mask R-CNN with a ResNet-50 [20] + FPN [28] backbone. The default mask head in Mask R-CNN is a region-wise FCN, which we denote by “4× conv”.2 We use this as our baseline for comparison. For PointRend, we make appropriate modifications to this baseline, as described next. </p> <p> <strong>Lightweight, coarse mask prediction head.</strong> To compute the coarse prediction, we replace the 4× conv mask head with a lighter weight design that resembles Mask R-CNN’s box head and produces a 7×7 mask prediction. Specifically, for each bounding box, we extract a 14×14 feature map from the P2 level of the FPN using bilinear interpolation. The features are computed on a regular grid inside the bounding box (this operation can seen as a simple version of RoIAlign). Next, we use a stride-two 2×2 convolution layer with 256 output channels followed by ReLU [39], which reduces the spatial size to 7×7. Finally, similar to Mask R-CNN’s box head, an MLP with two 1024-wide hidden layers is applied to yield a 7×7 mask prediction for each of the K classes. ReLU is used on the MLP’s hidden layers and the sigmoid activation function is applied to its outputs. </p> <p> <strong>PointRend.</strong> At each selected point, a K-dimensional feature vector is extracted from the coarse prediction head’s output using bilinear interpolation. PointRend also interpolates a 256-dimensional feature vector from the P2 level of the FPN. This level has a stride of 4 w.r.t. the input image. These coarse prediction and fine-grained feature vectors are concatenated. We make a K-class prediction at selected points using an MLP with 3 hidden layers with 256 channels. In each layer of the MLP, we supplement the 256 output channels with the K coarse prediction features to make the input vector for the next layer. We use ReLU inside the MLP and apply sigmoid to its output. </p> <p> <strong>Training.</strong> We use the standard 1× training schedule and data augmentation from Detectron2 [49] by default (full details are in the appendix). For PointRend, we sample 142 points using the biased sampling strategy described in the §3.1 with k=3 and β=0.75. We use the distance between 0.5 and the probability of the ground truth class interpolated from the coarse prediction as the point-wise uncertainty measure. For a predicted box with ground-truth class c, we sum the binary cross-entropy loss for the c-th MLP output over the 142 points. The lightweight coarse prediction head uses the average cross-entropy loss for the mask predicted for class c, i.e., the same loss as the baseline 4× conv head. We sum all losses without any re-weighting. 
</p> <p> During training, Mask R-CNN applies the box and mask heads in parallel, while during inference they run as a cascade. We found that training as a cascade does not improve the baseline Mask R-CNN, but PointRend can benefit from it by sampling points inside more accurate boxes, slightly improving overall performance (∼0.2% AP, absolute). </p> <p> <strong>Inference.</strong> For inference on a box with predicted class c, unless otherwise specified, we use the adaptive subdivision technique to refine the coarse 7×7 prediction for class c to the 224×224 in 5 steps. At each step, we select and update (at most) the N=282 most uncertain points based on the absolute difference between the predictions and 0.5. </p> </td> </tr> </tbody> </table> ### **4.1. Main Results** <table> <tbody> <tr> <td> <table> <tbody> <tr> <td> <img src="./imgs/table1.png" width="520" /> </td> <td> <img src="./imgs/figure6.png" width="470" /> </td> </tr> </tbody> </table> <table width="500"> <tbody> <tr> <td> <img src="./imgs/table2.png" width="500" /> <img src="./imgs/table2_description.png" width="500" /> </td> </tr> </tbody> </table> <p> We compare PointRend to the default 4× conv head in Mask R-CNN in Table 1. PointRend outperforms the default head on both datasets. The gap is larger when evaluating the COCO categories using the LVIS annotations (AP*) and for Cityscapes, which we attribute to the superior annotation quality in these datasets. Even with the same output resolution PointRend outperforms the baseline. The difference between 28×28 and 224×224 is relatively small because AP uses intersection-over-union [11] and, therefore, is heavily biased towards object-interior pixels and less sensitive to the boundary quality. Visually, however, the difference in boundary quality is obvious, see Fig. 6. </p> <p> Subdivision inference allows PointRend to yield a high resolution 224×224 prediction using more than 30 times less compute (FLOPs) and memory than the default 4× conv head needs to output the same resolution (based on taking a 112×112 RoIAlign input), see Table 2. PointRend makes high resolution output feasible in the Mask R-CNN framework by ignoring areas of an object where a coarse prediction is sufficient (e.g., in the areas far away from object boundaries). In terms of wall-clock runtime, our unoptimized implementation outputs 224×224 masks at ∼13 fps, which is roughly the same frame-rate as a 4× conv head modified to output 56×56 masks (by doubling the default RoIAlign size), a design that actually has lower COCO AP compared to the 28×28 4× conv head (34.5% vs. 35.2%). </p> <table> <tbody> <tr> <td> <img src="./imgs/table3.png" width="400" /> </td> <td> <img src="./imgs/figure7.png" width="390" /> </td> </tr> </tbody> </table> <p> Table 3 shows PointRend subdivision inference with different output resolutions and number of points selected at each subdivision step. Predicting masks at a higher resolution can improve results. Though AP saturates, visual improvements are still apparent when moving from lower (e.g., 56×56) to higher (e.g., 224×224) resolution outputs, see Fig. 7. AP also saturates with the number of points sampled in each subdivision step because points are selected in the most ambiguous areas first. Additional points may make predictions in the areas where a coarse prediction is already sufficient. For objects with complex boundaries, however, using more points may be beneficial. 
</p> <table> <tbody> <tr> <td> <img src="./imgs/table4.png" width="400" /> </td> <td> <img src="./imgs/table5.png" width="400" /> </td> </tr> </tbody> </table> </td> </tr> </tbody> </table> ### **4.2. Ablation Experiments** <table> <tbody> <tr> <td> <p> We conduct a number of ablations to analyze PointRend. In general we note that it is robust to the exact design of the point head MLP. Changes of its depth or width do not show any significant difference in our experiments. </p> <p> <strong>Point selection during training.</strong> During training we select 142 points per object following the biased sampling strategy (§3.1). Sampling only 142 points makes training computationally and memory efficient and we found that using more points does not improve results. Surprisingly, sampling only 49 points per box still maintains AP, though we observe an increased variance in AP. </p> <p> Table 4 shows PointRend performance with different selection strategies during training. Regular grid selection achieves similar results to uniform sampling. Whereas biasing sampling toward ambiguous areas improves AP. However, a sampling strategy that is biased too heavily towards boundaries of the coarse prediction (k>10 and β close to 1.0) decreases AP. Overall, we find a wide range of parameters 2<k<5 and 0.75<β<1.0 delivers similar results. </p> <p> <strong>Larger models, longer training.</strong> Training ResNet-50 + FPN (denoted R50-FPN) with the 1× schedule under-fits on COCO. In Table 5 we show that the PointRend improvements over the baseline hold with both longer training schedule and larger models (see the appendix for details). </p> <img src="./imgs/figure8.png" /> </td> </tr> </tbody> </table> ### **5. Experiments: Semantic Segmentation** <table> <tbody> <tr> <td> <p> PointRend is not limited to instance segmentation and can be extended to other pixel-level recognition tasks. Here, we demonstrate that PointRend can benefit two semantic segmentation models: DeeplabV3 [5], which uses dilated convolutions to make prediction on a denser grid, and SemanticFPN [24], a simple encoder-decoder architecture. </p> <p> <strong>Dataset.</strong> We use the Cityscapes [9] semantic segmentation set with 19 categories, 2975 training images, and 500 validation images. We report the median mIoU of 5 trials. </p> <p> <strong>Implementation details.</strong> We reimplemented DeeplabV3 and SemanticFPN following their respective papers. SemanticFPN uses a standard ResNet-101 [20], whereas DeeplabV3 uses the ResNet-103 proposed in [5].3 We follow the original papers’ training schedules and data augmentation (details are in the appendix). </p> <p> We use the same PointRend architecture as for instance segmentation. Coarse prediction features come from the (already coarse) output of the semantic segmentation model. Fine-grained features are interpolated from res2 for DeeplabV3 and from P2 for SemanticFPN. During training we sample as many points as there are on a stride 16 feature map of the input (2304 for deeplabV3 and 2048 for SemanticFPN). We use the same k=3, β=0.75 point selection strategy. During inference, subdivision uses N=8096 (i.e., the number of points in the stride 16 map of a 1024×2048 image) until reaching the input image resolution. To measure prediction uncertainty we use the same strategy during training and inference: the difference between the most confident and second most confident class probabilities. 
</p> <table> <tbody> <tr> <td> <img src="./imgs/table6.png" width="500" /> </td> <td> <img src="./imgs/figure9.png" width="400" /> </td> </tr> </tbody> </table> <table> <tbody> <tr> <td> <img src="./imgs/table7.png" width="500" /> </td> </tr> </tbody> </table> <p> <strong>DeeplabV3.</strong> In Table 6 we compare DeepLabV3 to DeeplabV3 with PointRend. The output resolution can also be increased by 2× at inference by using dilated convolutions in res4 stage, as described in [5]. Compared to both, PointRend has higher mIoU. Qualitative improvements are also evident, see Fig. 8. By sampling points adaptively, PointRend reaches 1024×2048 resolution (i.e. 2M points) by making predictions for only 32k points, see Fig. 9. </p> <p> <strong>SemanticFPN.</strong> Table 7 shows that SemanticFPN with PointRend improves over both 8× and 4× output stride variants without PointRend. </p> </td> </tr> </tbody> </table> ### **Appendix A. Instance Segmentation Details** <table> <thead> <tr> <th> Appendix A. Instance Segmentation Details </th> </tr> </thead> <tbody> <tr> <td> <p> We use SGD with 0.9 momentum; a linear learning rate warmup [15] over 1000 updates starting from a learning rate of 0.001 is applied; weight decay 0.0001 is applied; horizontal flipping and scale train-time data augmentation; the batch normalization (BN) [21] layers from the ImageNet pre-trained models are frozen (i.e., BN is not used); no testtime augmentation is used. </p> <p> <strong>COCO [29]:</strong> 16 images per mini-batch; the training schedule is 60k / 20k / 10k updates at learning rates of 0.02 / 0.002 / 0.0002 respectively; training images are resized randomly to a shorter edge from 640 to 800 pixels with a step of 32 pixels and inference images are resized to a shorter edge size of 800 pixels. </p> <p> <strong>Cityscapes [9]:</strong> 8 images per mini-batch the training schedule is 18k / 6k updates at learning rates of 0.01 / 0.001 respectively; training images are resized randomly to a shorter edge from 800 to 1024 pixels with a step of 32 pixels and inference images are resized to a shorter edge size of 1024 pixels. </p> <p> <strong>Longer schedule:</strong> The 3× schedule for COCO is 210k / 40k / 20k updates at learning rates of 0.02 / 0.002 / 0.0002, respectively; all other details are the same as the setting described above. </p> </td> </tr> </tbody> </table> ### **Appendix B. 
Semantic Segmentation Details** <table> <tbody> <tr> <td> <p> <strong>DeeplabV3 [5]:</strong> We use SGD with 0.9 momentum with 16 images per mini-batch cropped to a fixed 768×768 size; the training schedule is 90k updates with a poly learning rate [34] update strategy, starting from 0.01; a linear learning rate warmup [15] over 1000 updates starting from a learning rate of 0.001 is applied; the learning rate for ASPP and the prediction convolution are multiplied by 10; weight decay of 0.0001 is applied; random horizontal flipping and scaling of 0.5× to 2.0× with a 32 pixel step is used as training data augmentation; BN is applied to 16 images minibatches; no test-time augmentation is used; </p> <p> <strong>SemanticFPN [24]:</strong> We use SGD with 0.9 momentum with 32 images per mini-batch cropped to a fixed 512×1024 size; the training schedule is 40k / 15k / 10k updates at learning rates of 0.01 / 0.001 / 0.0001 respectively; a linear learning rate warmup [15] over 1000 updates starting from a learning rate of 0.001 is applied; weight decay 0.0001 is applied; horizontal flipping, color augmentation [33], and crop bootstrapping [2] are used during training; scale traintime data augmentation resizes an input image from 0.5× to 2.0× with a 32 pixel step; BN layers are frozen (i.e., BN is not used); no test-time augmentation is used. </p> </td> </tr> </tbody> </table> ### **Appendix C. AP* Computation** <table> <tbody> <tr> <td> <p> The first version (v1) of this paper on arXiv has an error in COCO mask AP evaluated against the LVIS annotations [16] (AP* ). The old version used an incorrect list of the categories not present in each evaluation image, which resulted in lower AP* values. </p> </td> </tr> </tbody> </table> ### **References** - [1] Anurag Arnab and Philip HS Torr. Pixelwise instance segmentation with a dynamically instantiated network. In CVPR, 2017. 3 - [2] Samuel Rota Bulo, Lorenzo Porzi, and Peter Kontschieder. ` In-place activated batchnorm for memory-optimized training of DNNs. In CVPR, 2018. 9 - [3] Kai Chen, Jiangmiao Pang, Jiaqi Wang, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jianping Shi, Wanli Ouyang, et al. Hybrid task cascade for instance segmentation. In CVPR, 2019. 3 - [4] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. PAMI, 2018. 3 - [5] Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. arXiv:1706.05587, 2017. 2, 3, 8, 9 - [6] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018. 3 - [7] Xinlei Chen, Ross Girshick, Kaiming He, and Piotr Dollar. ´ TensorMask: A foundation for dense object segmentation. In ICCV, 2019. 3 - [8] Christopher B Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3D-R2N2: A unified approach for single and multi-view 3D object reconstruction. In ECCV, 2016. 3 - [9] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes dataset for semantic urban scene understanding. In CVPR, 2016. 2, 3, 5, 8, 9 - [10] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In ICCV, 2017. 
5 - [11] Mark Everingham, SM Ali Eslami, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The PASCAL visual object classes challenge: A retrospective. IJCV, 2015. 6 - [12] Rohit Girdhar, David F Fouhey, Mikel Rodriguez, and Abhinav Gupta. Learning a predictable and generative vector representation for objects. In ECCV, 2016. 3 - [13] Ross Girshick. Fast R-CNN. In ICCV, 2015. 5 - [14] Georgia Gkioxari, Jitendra Malik, and Justin Johnson. Mesh R-CNN. In ICCV, 2019. 3 9 - [15] Priya Goyal, Piotr Dollar, Ross Girshick, Pieter Noord- ´ huis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv:1706.02677, 2017. 9 - [16] Agrim Gupta, Piotr Dollar, and Ross Girshick. LVIS: A dataset for large vocabulary instance segmentation. In ICCV, 2019. 5, 6, 7, 9 - [17] Bharath Hariharan, Pablo Arbelaez, Ross Girshick, and Ji- ´ tendra Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015. 5 - [18] Kaiming He, Ross Girshick, and Piotr Dollar. Rethinking ´ imagenet pre-training. In ICCV, 2019. 7 - [19] Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Gir- ´ shick. Mask R-CNN. In ICCV, 2017. 1, 2, 3, 4, 5, 6 - [20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. 2, 5, 8 - [21] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. 9 - [22] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. In NIPS, 2015. 5 - [23] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. ICLR, 2017. 5 - [24] Alexander Kirillov, Ross Girshick, Kaiming He, and Piotr Dollar. Panoptic feature pyramid networks. In ´ CVPR, 2019. 3, 8, 9 - [25] Alexander Kirillov, Evgeny Levinkov, Bjoern Andres, Bogdan Savchynskyy, and Carsten Rother. InstanceCut: from edges to instances with multicut. In CVPR, 2017. 3 - [26] Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012. 1 - [27] Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1989. 1 - [28] Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, ´ Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In CVPR, 2017. 2, 5 - [29] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C Lawrence ´ Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014. 2, 3, 5, 9 - [30] Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L Yuille, and Li Fei-Fei. Autodeeplab: Hierarchical neural architecture search for semantic image segmentation. In CVPR, 2019. 3 - [31] Shu Liu, Jiaya Jia, Sanja Fidler, and Raquel Urtasun. SGN: Sequential grouping networks for instance segmentation. In CVPR, 2017. 3 - [32] Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, and Jiaya Jia. Path aggregation network for instance segmentation. In CVPR, 2018. 3 - [33] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. SSD: Single shot multibox detector. In ECCV, 2016. 9 - [34] Wei Liu, Andrew Rabinovich, and Alexander C Berg. 
Parsenet: Looking wider to see better. arXiv:1506.04579, 2015. 9 - [35] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. 1, 2, 3, 4 - [36] Dmitrii Marin, Zijian He, Peter Vajda, Priyam Chatterjee, Sam Tsai, Fei Yang, and Yuri Boykov. Efficient segmentation: Learning downsampling near semantic boundaries. In ICCV, 2019. 3 - [37] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In CVPR, s2019. 3 - [38] Don P Mitchell. Generating antialiased images at low sampling densities. ACM SIGGRAPH Computer Graphics, 1987. 2 - [39] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010. 6 - [40] Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulo, and ` Peter Kontschieder. The mapillary vistas dataset for semantic understanding of street scenes. In CVPR, 2017. 3 - [41] Paphio. Jo-Wilfried Tsonga - [19]. CC BY-NC-SA 2.0. https://www.flickr.com/photos/paphio/ 2855627782/, 2008. 1 - [42] Matt Pharr, Wenzel Jakob, and Greg Humphreys. Physically based rendering: From theory to implementation, chapter 7. Morgan Kaufmann, 2016. 2 - [43] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In CVPR, 2017. 5 - [44] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. UNet: Convolutional networks for biomedical image segmentation. In MICCAI, 2015. 3 - [45] Ke Sun, Yang Zhao, Borui Jiang, Tianheng Cheng, Bin Xiao, Dong Liu, Yadong Mu, Xinggang Wang, Wenyu Liu, and Jingdong Wang. High-resolution representations for labeling pixels and regions. arXiv:1904.04514, 2019. 3 - [46] Maxim Tatarchenko, Alexey Dosovitskiy, and Thomas Brox. Octree generating networks: Efficient convolutional architectures for high-resolution 3D outputs. In ICCV, 2017. 3 - [47] Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2Mesh: Generating 3D mesh models from single RGB images. In ECCV, 2018. 3 - [48] Turner Whitted. An improved illumination model for shaded display. In ACM SIGGRAPH Computer Graphics, 1979. 2, 4 - [49] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https://github. com/facebookresearch/detectron2, 2019. 6 [50] Kun Zhou, Qiming Hou, Rui Wang, and Baining Guo. Realtime kd-tree construction on graphics hardware. In ACM Transactions on Graphics (TOG), 2008. 2
github_jupyter
<div style='background-image: url("share/baku.jpg") ; padding: 0px ; background-size: cover ; border-radius: 15px ; height: 250px; background-position: 0% 80%'> <div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.9) ; width: 50% ; height: 150px"> <div style="position: relative ; top: 50% ; transform: translatey(-50%)"> <div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.9) ; line-height: 100%">ObsPy Tutorial</div> <div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.7)">Handling Event Metadata</div> </div> </div> </div> image: User:Abbaszade656 / Wikimedia Commons / <a href="http://creativecommons.org/licenses/by-sa/4.0/">CC-BY-SA-4.0</a> ## Workshop for the "Training in Network Management Systems and Analytical Tools for Seismic" ### Baku, October 2018 Seismo-Live: http://seismo-live.org ##### Authors: * Lion Krischer ([@krischer](https://github.com/krischer)) * Tobias Megies ([@megies](https://github.com/megies)) --- ![](images/obspy_logo_full_524x179px.png) ``` %matplotlib inline import matplotlib.pyplot as plt plt.style.use('ggplot') plt.rcParams['figure.figsize'] = 12, 8 ``` - for event metadata, the de-facto standard is [QuakeML (an xml document structure)](https://quake.ethz.ch/quakeml/) - QuakeML files can be read using **`read_events()`** ``` import obspy catalog = obspy.read_events("./data/south_napa_with_some_aftershocks.xml") print(catalog) ``` - **`read_events()`** function returns a **`Catalog`** object, which is a collection of **`Event`** objects. ``` print(type(catalog)) print(type(catalog[0])) event = catalog[0] print(event) ``` - Event objects are again collections of other resources. - the nested ObsPy Event class structure (Catalog/Event/Origin/Magnitude/FocalMechanism/...) is closely modelled after QuakeML <img src="images/Event.svg" width=90%> ``` print(type(event.origins)) print(type(event.origins[0])) print(event.origins[0]) print(type(event.magnitudes)) print(type(event.magnitudes[0])) print(event.magnitudes[0]) # try event.<Tab> to get an idea what "children" elements event has ``` - The Catalog object contains some convenience methods to make working with events easier. - for example, the included events can be filtered with various keys. ``` largest_magnitude_events = catalog.filter("magnitude >= 4.0") print(largest_magnitude_events) ``` - There is a basic preview plot using the matplotlib basemap module. ``` catalog.plot(projection="local", resolution="i", label="magnitude"); ``` - a (modified) Catalog can be output to file in a number of different formats. ``` largest_magnitude_events.write("/tmp/large_events.xml", format="QUAKEML") !ls -l /tmp/large_events.xml ``` - the event type classes can be used to build up Events/Catalogs/Picks/.. 
from scratch in custom processing work flows and to share them with other researchers in the de facto standard format QuakeML ``` from obspy import UTCDateTime from obspy.core.event import Catalog, Event, Origin, Magnitude from obspy.geodetics import FlinnEngdahl cat = Catalog() cat.description = "Just a fictitious toy example catalog built from scratch" e = Event() e.event_type = "not existing" o = Origin() o.time = UTCDateTime(2014, 2, 23, 18, 0, 0) o.latitude = 47.6 o.longitude = 12.0 o.depth = 10000 o.depth_type = "operator assigned" o.evaluation_mode = "manual" o.evaluation_status = "preliminary" o.region = FlinnEngdahl().get_region(o.longitude, o.latitude) m = Magnitude() m.mag = 7.2 m.magnitude_type = "Mw" m2 = Magnitude() m2.mag = 7.4 m2.magnitude_type = "Ms" # also included could be: custom picks, amplitude measurements, station magnitudes, # focal mechanisms, moment tensors, ... # make associations, put everything together cat.append(e) e.origins = [o] e.magnitudes = [m, m2] m.origin_id = o.resource_id m2.origin_id = o.resource_id print(cat) cat.write("/tmp/my_custom_events.xml", format="QUAKEML") !cat /tmp/my_custom_events.xml ```
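As a final check, the QuakeML file written above can be read back in to confirm that the round trip preserved the event, its origin, and both magnitudes. A minimal sketch (assuming the previous cell was executed so `/tmp/my_custom_events.xml` exists):

```
import obspy

# Read the custom QuakeML file back into a Catalog and inspect it
cat_reloaded = obspy.read_events("/tmp/my_custom_events.xml")

event = cat_reloaded[0]
print(len(cat_reloaded))                  # expected: 1 event
print(event.origins[0].time)              # expected: 2014-02-23T18:00:00
print([m.mag for m in event.magnitudes])  # expected: [7.2, 7.4]
```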
github_jupyter
# Simple Analysis with Pandas and Numpy ***ABSTRACT*** * If a donor gives aid for a project that the recipient government would have undertaken anyway, then the aid is financing some expenditure other than the intended project. The notion that aid in this sense may be "fungible," while long recognized, has recently been receiving some empirical support. The paper "What Does Aid to Africa Finance?" focuses on Sub-Saharan Africa—the region with the largest GDP share of aid—and presents results that indicate that aid may be partially fungible, and suggests some reasons why. This database contains data used for the analysis. #### Import Libraries & Load the data ``` import pandas as pd import numpy as np print('OK') df = pd.read_csv('data.csv') df.head(-5) df.info() df_new = df.copy() df1 = df_new.sample(frac = 0.25, random_state = 0) df_new = df_new.drop(df1.index) df1.head(3) df2 = df_new.sample(frac = 0.25, random_state = 0) df_new = df_new.drop(df2.index) df2.head(3) df3 = df_new.sample(frac = 0.25, random_state = 0) df3.head(3) df4 = df_new.drop(df3.index) # since all subsets' indexes were dropped df4.head(3) ``` ### Missing Values * **Interpolation** is a type of estimation, a method of constructing new data points within the range of a discrete set of known data points while **imputation** is replacing the missing data of the mean of the column. ``` df3.isnull().sum() df3[df3['popn'].isnull() == True] df3['popn'].fillna(df3['popn'].mean(), inplace = True) # that's called imputation df3.isnull().sum() df1.isna().sum() df1['popn'].fillna(df1['popn'].interpolate(), inplace = True) df1.isna().sum() ``` ##### When to use interpolation or imputation? * Data has linear relationship = Interpolation otherwise imputation. ### Combine Data ``` df5 = df1.join(df2, lsuffix = '_left') # _left indicates columns from left hand side df5 # NaN = df1 is larger than df2 # Concat df6 = pd.concat([df1,df2], axis = 0) # 0 indicates rows df6 ``` #### Inner Join <img src="https://cdn.sqltutorial.org/wp-content/uploads/2016/03/SQL-INNER-JOIN.png"/> ``` df7 = pd.merge(df1,df2, on = 'year') df7 ``` #### Full Outer Inclusive Join <img src="https://cdn.sqltutorial.org/wp-content/uploads/2016/07/SQL-FULL-OUTER-JOIN.png"/> ``` df8 = pd.merge(df1,df2, how = 'outer') df8 ``` #### Left Inclusive Join <img src="https://cdn.sqltutorial.org/wp-content/uploads/2016/03/SQL-LEFT-JOIN.png"/> ``` df9 = pd.merge(df1,df2, how = 'left') df9 ``` #### Right Inclusive Join <img src="https://www.dofactory.com/img/sql/sql-right-join.png"/> ``` df10 = pd.merge(df1,df2, how = 'right') df10.head(5) ``` ### Sorting Data ``` df1.sort_values(by = ['agrgdp'], ascending = True) df1 df1.sort_index(axis = 0, ascending =True) ``` ### Selecting and Slicing Data ``` df1[['countryc', 'year']] df1.iloc[:,1:8].head() ``` ### Grouping & Aggregating ``` df1.groupby(['year', 'infmort']).agg(np.mean) df1.groupby(['schsec']).groups ```
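The grouped aggregations above can be made more readable with named aggregation, which labels each output column explicitly. A small sketch (assuming pandas 0.25 or newer, and that the `year` and `agrgdp` columns used earlier are present in `df1`):

```
# Named aggregation: one row per year, with clearly labelled statistics
summary = df1.groupby('year').agg(
    mean_agrgdp=('agrgdp', 'mean'),
    max_agrgdp=('agrgdp', 'max'),
    n_rows=('agrgdp', 'size'),
)
summary.head()
```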
github_jupyter
# Word vectors (FastText) for Baseline #### Create Spacy model from word vectors ```bash python -m spacy init-model en output/cord19_docrel/spacy/en_cord19_fasttext_300d --vectors-loc output/cord19_docrel/cord19.fasttext.w2v.txt python -m spacy init-model en output/acl_docrel/spacy/en_acl_fasttext_300d --vectors-loc output/acl_docrel/acl.fasttext.w2v.txt ``` ``` import gensim import json import os import requests import pickle import pandas as pd import logging from pathlib import Path from tqdm import tqdm_notebook as tqdm from smart_open import open from nlp import load_dataset import nlp import acl.utils from trainer_cli import ExperimentArguments ``` ## CORD19 ``` data_dir = Path('./output/cord19_docrel') experiment_args = ExperimentArguments( nlp_dataset='./datasets/cord19_docrel/cord19_docrel.py', nlp_cache_dir='./data/nlp_cache', doc_id_col='doi', doc_a_col='from_doi', doc_b_col='to_doi', cv_fold=1, ) docs_ds = load_dataset(experiment_args.nlp_dataset, name='docs', cache_dir=experiment_args.nlp_cache_dir, split=nlp.Split('docs')) # Extract tokens from each document and create token file. tokens_count = 0 with open(data_dir / 'tokens.txt', 'w') as f: for idx, doc in docs_ds.data.to_pandas().iterrows(): text = acl.utils.get_text_from_doc(doc) for token in gensim.utils.simple_preprocess(text, min_len=2, max_len=15): f.write(token + ' ') tokens_count += 1 f.write('\n') print(f'Total tokens: {tokens_count:,}') import fasttext model = fasttext.train_unsupervised(str(data_dir / 'tokens.txt'), model='skipgram', lr=0.05, # learning rate [0.05] dim=300, # size of word vectors [100] ws=5, # size of the context window [5] epoch=5, # number of epochs [5] thread=4, # number of threads [number of cpus] ) model.save_model(str(data_dir / 'cord19.fasttext.bin')) from gensim.models.wrappers import FastText ft_model = FastText.load_fasttext_format(str(data_dir / 'cord19.fasttext.bin')) ft_model.wv.save_word2vec_format(data_dir / 'cord19.fasttext.w2v.txt') # Unset del ft_model del model del docs_ds del experiment_args del data_dir ``` ## ACL ``` data_dir = Path('./output/acl_docrel') experiment_args = ExperimentArguments( nlp_dataset='./datasets/acl_docrel/acl_docrel.py', nlp_cache_dir='./data/nlp_cache', doc_id_col='s2_id', doc_a_col='from_s2_id', doc_b_col='to_s2_id', cv_fold=1, ) docs_ds = load_dataset(experiment_args.nlp_dataset, name='docs', cache_dir=experiment_args.nlp_cache_dir, split=nlp.Split('docs')) # Extract tokens from each document and create token file. tokens_count = 0 with open(data_dir / 'tokens.txt', 'w') as f: for idx, doc in docs_ds.data.to_pandas().iterrows(): text = acl.utils.get_text_from_doc(doc) for token in gensim.utils.simple_preprocess(text, min_len=2, max_len=15): f.write(token + ' ') tokens_count += 1 f.write('\n') # Total tokens: 2,194,010 print(f'Total tokens: {tokens_count:,}') import fasttext model = fasttext.train_unsupervised(str(data_dir / 'tokens.txt'), model='skipgram', lr=0.05, # learning rate [0.05] dim=300, # size of word vectors [100] ws=5, # size of the context window [5] epoch=5, # number of epochs [5] thread=4, # number of threads [number of cpus] ) model.save_model(str(data_dir / 'acl.fasttext.bin')) from gensim.models.wrappers import FastText ft_model = FastText.load_fasttext_format(str(data_dir / 'acl.fasttext.bin')) ft_model.wv.save_word2vec_format(data_dir / 'acl.fasttext.w2v.txt') ```
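As a quick sanity check of the trained vectors, the exported word2vec-format file can be reloaded and queried for nearest neighbours. A sketch (the probe token `parsing` is only an example and is assumed to be in the ACL vocabulary):

```
from gensim.models import KeyedVectors

# Load the 300d vectors exported above (text word2vec format)
wv = KeyedVectors.load_word2vec_format(str(data_dir / 'acl.fasttext.w2v.txt'))

# Nearest neighbours of an example in-vocabulary token
print(wv.most_similar('parsing', topn=5))
```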
github_jupyter
``` !pip install tf-nightly-2.0-preview import tensorflow as tf import numpy as np import matplotlib.pyplot as plt print(tf.__version__) def plot_series(time, series, format="-", start=0, end=None): plt.plot(time[start:end], series[start:end], format) plt.xlabel("Time") plt.ylabel("Value") plt.grid(False) def trend(time, slope=0): return slope * time def seasonal_pattern(season_time): """Just an arbitrary pattern, you can change it if you wish""" return np.where(season_time < 0.1, np.cos(season_time * 6 * np.pi), 2 / np.exp(9 * season_time)) def seasonality(time, period, amplitude=1, phase=0): """Repeats the same pattern at each period""" season_time = ((time + phase) % period) / period return amplitude * seasonal_pattern(season_time) def noise(time, noise_level=1, seed=None): rnd = np.random.RandomState(seed) return rnd.randn(len(time)) * noise_level time = np.arange(10 * 365 + 1, dtype="float32") baseline = 10 series = trend(time, 0.1) baseline = 10 amplitude = 40 slope = 0.005 noise_level = 3 # Create the series series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude) # Update with noise series += noise(time, noise_level, seed=51) split_time = 3000 time_train = time[:split_time] x_train = series[:split_time] time_valid = time[split_time:] x_valid = series[split_time:] window_size = 20 batch_size = 32 shuffle_buffer_size = 1000 plot_series(time, series) def windowed_dataset(series, window_size, batch_size, shuffle_buffer): dataset = tf.data.Dataset.from_tensor_slices(series) dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True) dataset = dataset.flat_map(lambda window: window.batch(window_size + 1)) dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1])) dataset = dataset.batch(batch_size).prefetch(1) return dataset tf.keras.backend.clear_session() tf.random.set_seed(51) np.random.seed(51) tf.keras.backend.clear_session() dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size) model = tf.keras.models.Sequential([ tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1), input_shape=[None]), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x: x * 10.0) ]) lr_schedule = tf.keras.callbacks.LearningRateScheduler( lambda epoch: 1e-8 * 10**(epoch / 20)) optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9) model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) history = model.fit(dataset, epochs=100, callbacks=[lr_schedule]) plt.semilogx(history.history["lr"], history.history["loss"]) plt.axis([1e-8, 1e-4, 0, 30]) tf.keras.backend.clear_session() tf.random.set_seed(51) np.random.seed(51) tf.keras.backend.clear_session() dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size) model = tf.keras.models.Sequential([ tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1), input_shape=[None]), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x: x * 100.0) ]) model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(lr=1e-5, momentum=0.9),metrics=["mae"]) history = model.fit(dataset,epochs=500,verbose=1) forecast = [] results = [] for time in range(len(series) - window_size): forecast.append(model.predict(series[time:time + 
window_size][np.newaxis]))

forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]

plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)

tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()

import matplotlib.image as mpimg
import matplotlib.pyplot as plt

#-----------------------------------------------------------
# Retrieve the MAE and loss recorded for each training epoch
#-----------------------------------------------------------
mae = history.history['mae']
loss = history.history['loss']

epochs = range(len(loss))  # Get number of epochs

#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MAE / Loss")
plt.legend(["MAE", "Loss"])
plt.figure()

epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]

#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('Zoomed MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MAE / Loss")
plt.legend(["MAE", "Loss"])
plt.figure()
```
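The forecasting loop above calls `model.predict` once per window, which is slow for a long series. A batched alternative using the same `tf.data` windowing as training is sketched below; note it yields one more window than the loop above, hence the `-1` in the example slice.

```
def model_forecast(model, series, window_size):
    # Window the full series the same way as in training, then predict
    # all windows in batches instead of one predict() call per step
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window_size, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size))
    ds = ds.batch(32).prefetch(1)
    return model.predict(ds)

# Usage mirroring the slicing above (output shape: (num_windows, 1)):
# batched = model_forecast(model, series, window_size)
# results = batched[split_time - window_size:-1, 0]
```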
github_jupyter
<table> <tr><td align="right" style="background-color:#ffffff;"> <img src="../images/logo.jpg" width="20%" align="right"> </td></tr> <tr><td align="right" style="color:#777777;background-color:#ffffff;font-size:12px;"> Abuzer Yakaryilmaz | April 30, 2019 (updated) </td></tr> <tr><td align="right" style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;"> This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr> </table> $ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ $ \newcommand{\dot}[2]{ #1 \cdot #2} $ $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ $ \newcommand{\mypar}[1]{\left( #1 \right)} $ $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ $ \newcommand{\onehalf}{\frac{1}{2}} $ $ \newcommand{\donehalf}{\dfrac{1}{2}} $ $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ $ \newcommand{\vzero}{\myvector{1\\0}} $ $ \newcommand{\vone}{\myvector{0\\1}} $ $ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $ $ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ <h2> <font color="blue"> Solutions for </font>Rotation Automata</h2> <a id="task1"></a> <h3> Task 1 </h3> Do the same task given above by using different angles. Test at least three different angles. Please modify the code above. <h3>Solution</h3> Any odd multiple of $ \frac{\pi}{16} $ works: $ i \frac{\pi}{16} $, where $ i \in \{1,3,5,7,\ldots\} $ <a id="task2"></a> <h3> Task 2 </h3> Let $ \mathsf{p} = 11 $. Determine an angle of rotation such that when the length of stream is a multiple of $ \sf p $, then we observe only state $ 0 $, and we can also observe state $ 1 $, otherwise. Test your rotation by using a quantum circuit. Execute the circuit for all streams of lengths from 1 to 11. <h3>Solution</h3> We can pick any angle $ k\frac{2\pi}{11} $ for $ k \in \{1,\ldots,10\} $. 
``` from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer from math import pi from random import randrange # the angle of rotation r = randrange(1,11) print("the picked angle is",r,"times of 2pi/11") print() theta = r*2*pi/11 # we read streams of length from 1 to 11 for i in range(1,12): # quantum circuit with one qubit and one bit qreg = QuantumRegister(1) creg = ClassicalRegister(1) mycircuit = QuantumCircuit(qreg,creg) # the stream of length i for j in range(i): mycircuit.ry(2*theta,qreg[0]) # apply one rotation for each symbol # we measure after reading the whole stream mycircuit.measure(qreg[0],creg[0]) # execute the circuit 1000 times job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=1000) counts = job.result().get_counts(mycircuit) print("stream of lenght",i,"->",counts) ``` <a id="task3"></a> <h3> Task 3 </h3> List down 10 possible different angles for Task 2, where each angle should be between 0 and $2\pi$. <h3>Solution</h3> Any angle $ k\frac{2\pi}{11} $ for $ k \in \{1,\ldots,10\} $. <a id="task4"></a> <h3> Task 4 </h3> For each stream of length from 1 to 10, experimentially determine the best angle of rotation by using your circuit. <h3>Solution</h3> ``` from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer from math import pi from random import randrange # for each stream of length from 1 to 10 for i in range(1,11): # we try each angle of the form k*2*pi/11 for k=1,...,10 # we try to find the best k for which we observe 1 the most number_of_one_state = 0 best_k = 1 all_outcomes_for_i = "length "+str(i)+"-> " for k in range(1,11): theta = k*2*pi/11 # quantum circuit with one qubit and one bit qreg = QuantumRegister(1) creg = ClassicalRegister(1) mycircuit = QuantumCircuit(qreg,creg) # the stream of length i for j in range(i): mycircuit.ry(2*theta,qreg[0]) # apply one rotation for each symbol # we measure after reading the whole stream mycircuit.measure(qreg[0],creg[0]) # execute the circuit 10000 times job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000) counts = job.result().get_counts(mycircuit) all_outcomes_for_i = all_outcomes_for_i + str(k)+ ":" + str(counts['1']) + " " if int(counts['1']) > number_of_one_state: number_of_one_state = counts['1'] best_k = k print(all_outcomes_for_i) print("for length",i,", the best k is",best_k) print() ``` <a id="task5"></a> <h3> Task 5 </h3> Let $ \mathsf{p} = 31 $. Create a circuit with three quantum states and three classical states. Rotate the qubits with angles $ 3\frac{2\pi}{31} $, $ 7\frac{2\pi}{31} $, and $ 11\frac{2\pi}{31} $, respectively. Execute your circuit for all streams of lengths from 1 to 30. Check whether the number of state $ \ket{000} $ is less than half or not. 
<i>Note that whether a key is in dictionary or not can be checked as follows:</i> if '000' in counts.keys(): c = counts['000'] else: c = 0 <h3>Solution</h3> ``` from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer from math import pi from random import randrange # the angles of rotations theta1 = 3*2*pi/31 theta2 = 7*2*pi/31 theta3 = 11*2*pi/31 # we read streams of length from 1 to 30 for i in range(1,31): # quantum circuit with three qubits and three bits qreg = QuantumRegister(3) creg = ClassicalRegister(3) mycircuit = QuantumCircuit(qreg,creg) # the stream of length i for j in range(i): # apply rotations for each symbol mycircuit.ry(2*theta1,qreg[0]) mycircuit.ry(2*theta2,qreg[1]) mycircuit.ry(2*theta3,qreg[2]) # we measure after reading the whole stream mycircuit.measure(qreg,creg) # execute the circuit N times N = 1000 job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=N) counts = job.result().get_counts(mycircuit) print(counts) if '000' in counts.keys(): c = counts['000'] else: c = 0 print('000 is observed',c,'times out of',N) percentange = round(c/N*100,1) print("the ratio of 000 is ",percentange,"%") print() ``` <a id="task6"></a> <h3> Task 6 </h3> Let $ \mathsf{p} = 31 $. Create a circuit with three quantum states and three classical states. Rotate the qubits with random angles of the form $ k\frac{2\pi}{31}, $ where $ k \in \{1,\ldots,30\}.$ Execute your circuit for all streams of lengths from 1 to 30. Calculate the maximum percentage of observing the state $ \ket{000} $. Repeat this task for a few times. <i>Note that whether a key is in dictionary or not can be checked as follows:</i> if '000' in counts.keys(): c = counts['000'] else: c = 0 <h3>Solution</h3> ``` from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer from math import pi from random import randrange # randomly picked angles of rotations k1 = randrange(1,31) theta1 = k1*2*pi/31 k2 = randrange(1,31) theta2 = k2*2*pi/31 k3 = randrange(1,31) theta3 = k3*2*pi/31 print("k1 =",k1,"k2 =",k2,"k3 =",k3) print() max_percentange = 0 # we read streams of length from 1 to 30 for i in range(1,31): # quantum circuit with three qubits and three bits qreg = QuantumRegister(3) creg = ClassicalRegister(3) mycircuit = QuantumCircuit(qreg,creg) # the stream of length i for j in range(i): # apply rotations for each symbol mycircuit.ry(2*theta1,qreg[0]) mycircuit.ry(2*theta2,qreg[1]) mycircuit.ry(2*theta3,qreg[2]) # we measure after reading the whole stream mycircuit.measure(qreg,creg) # execute the circuit N times N = 1000 job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=N) counts = job.result().get_counts(mycircuit) # print(counts) if '000' in counts.keys(): c = counts['000'] else: c = 0 # print('000 is observed',c,'times out of',N) percentange = round(c/N*100,1) if max_percentange < percentange: max_percentange = percentange # print("the ration of 000 is ",percentange,"%") # print() print("max percentage is",max_percentange) ``` <a id="task7"></a> <h3> Task 7 </h3> Repeat Task 6 by using four and five qubits. 
<h3>Solution</h3> ``` from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer from math import pi from random import randrange number_of_qubits = 4 #number_of_qubits = 5 # randomly picked angles of rotations theta = [] for i in range(number_of_qubits): k = randrange(1,31) print("k",str(i),"=",k) theta += [k*2*pi/31] # print(theta) # we count the number of zeros zeros = '' for i in range(number_of_qubits): zeros = zeros + '0' print("zeros = ",zeros) print() max_percentange = 0 # we read streams of length from 1 to 30 for i in range(1,31): # quantum circuit with qubits and bits qreg = QuantumRegister(number_of_qubits) creg = ClassicalRegister(number_of_qubits) mycircuit = QuantumCircuit(qreg,creg) # the stream of length i for j in range(i): # apply rotations for each symbol for k in range(number_of_qubits): mycircuit.ry(2*theta[k],qreg[k]) # we measure after reading the whole stream mycircuit.measure(qreg,creg) # execute the circuit N times N = 1000 job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=N) counts = job.result().get_counts(mycircuit) # print(counts) if zeros in counts.keys(): c = counts[zeros] else: c = 0 # print('000 is observed',c,'times out of',N) percentange = round(c/N*100,1) if max_percentange < percentange: max_percentange = percentange # print("the ration of 000 is ",percentange,"%") # print() print("max percentage is",max_percentange) ```
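The simulated percentages can also be cross-checked analytically: after a stream of length $i$, a qubit rotated by angle $\theta_j$ per symbol is in state $\cos(i\theta_j)\ket{0} + \sin(i\theta_j)\ket{1}$, so the probability of the all-zeros outcome is $\prod_j \cos^2(i\theta_j)$. A small sketch of that check, reusing the `theta` list and `number_of_qubits` from the cell above (a sketch, not part of the original solution):

```
from math import cos, pi
from random import randrange

# If the cell above was not run, pick fresh random angles of the form k*2*pi/31:
# theta = [randrange(1,31)*2*pi/31 for _ in range(number_of_qubits)]

max_prob = 0
for i in range(1,31):  # stream lengths 1..30
    # exact probability of observing the all-zeros state after i rotations
    prob_zeros = 1.0
    for t in theta:
        prob_zeros *= cos(i*t)**2
    max_prob = max(max_prob, prob_zeros)

print("max analytic percentage of all zeros:", round(max_prob*100,1))
```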
github_jupyter
``` import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns %matplotlib inline isolados = pd.read_csv('data/01 - geral_normalizada.csv') isolados.sample(5) df = pd.read_csv('data/02 - reacoes_normalizada.csv', names=['Ano','CCR','Composto','Resultado'], header=None, index_col=0) df.sample(5) # junta os termos em um unico identificador do isolado: df["Isolado"] = "UFT."+df['Ano'].astype(str)+"(L)"+df['CCR'].astype(str) # remove as colunas que não serao mais uteis: del df['Ano'] del df['CCR'] # define nova coluna de indice: df.set_index(['Isolado'], inplace=True) # compostos = [] # for i in range(1,81): # compostos.append(i) # compostos df.sample(5) # converte coluna de resultados em linhas: df = df.pivot(columns='Composto') # salva estado atual do dataframe em arquivo CSV: df.to_csv('03 - reacoes_formatadas.csv') df.sample(5) ``` ### Análise Exploratória de Dados: 1. Informações Gerais 2. Tratamento de valores Nulos 3. Questions to answer: * How many features do you have? * How many observations do you have? * What is the data type of each feature? ``` # dimensoes - linhas e colunas df.shape # informações diversas: # df.info() # descricao dos dados sob analise: df.describe(include='object') # mostra valores faltantes dentro de uma amostra dos dados: sns.heatmap(df.isnull()) plt.show() # plota cada característica categorica: for column in df.select_dtypes(include='object'): if df[column].nunique() < 10: sns.countplot(y=column, data=df) plt.show() ``` Are there so many missing values for a variable that you should drop that variable from your dataset? ``` # remocao das colunas com maioria dos valores nulos: #df.isnull().sum() df[df.columns[df.isnull().any()]].isnull().sum() * 100 / df.shape[0] # mostra colunas com qtd de valores nulos maior que 50% dos possiveis registros: total = df.isnull().sum().sort_values(ascending=False) percent = (df.isnull().sum()/df.isnull().count()).sort_values(ascending=False) missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Missing Percent']) missing_data['Missing Percent'] = missing_data['Missing Percent'].apply(lambda x: x * 100) missing_data.loc[missing_data['Missing Percent'] > 50] df.sample(5) # o dataframe é o mesmo, só eliminei manualmente o identificador "Resultado" e ajeitei o header do arquivo: data = pd.read_csv('data/03.1 - reacoes_formatadas.csv', index_col=0) data.sample(5) from pandas_profiling import ProfileReport relatorio_inicial = ProfileReport(data, title="Reações - Relatorio Inicial", correlations={"cramers": {"calculate": False}}) relatorio_inicial.to_widgets() relatorio_inicial.to_file("01 - relatorio-inicial_reacoes.html") ``` Deleta (arbitrariamente) colunas com menos de 50% de preenchimento dos dados: As colunas com mais de 50% de valores nulos são, respectivamente: 1, 39, 40, 45, 46, 48, 64, 67, 68, 69, 78 e 79; Logo, ... ``` # TODO: automatizar/melhorar isso aqui: colunas_excluir = [ '1', '39', '40', '45', '46', '48', '64', '67', '68', '69', '78', '79' ] # for coluna in colunas_excluir: data.drop(columns=colunas_excluir, axis=0, inplace=True) # dimensao atual, sem os 12 compostos removidos por pela alta taxa de valores em branco: data.shape ``` Verificar colunas com baixa cardinalidade (com cardinalidade igual a 1). Colunas cujos valores não possuem variação não contribuem para algoritmo de agrupamento; ``` # apenas um coluna apresenta cardinalidade == 1. Coluna correspondente ao composto 48. #Já removido na operação anterior por apresentar 92% dos dados faltantes. 
# salva estado atual da tabela: # salva estado atual do dataframe em arquivo CSV: data.to_csv('03.1 - reacoes_col_removidas.csv') # ainda temos valores nulos, vamos vê-los: # mostra colunas com qtd de valores nulos maior que 40% dos possiveis registros: total = data.isnull().sum().sort_values(ascending=False) percent = (data.isnull().sum()/data.isnull().count()).sort_values(ascending=False) missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Missing Percent']) missing_data['Missing Percent'] = missing_data['Missing Percent'].apply(lambda x: x * 100) missing_data.loc[missing_data['Missing Percent'] > 40] ``` ### 4 - Como tratar essa grande qtd de valores faltantes??? 1. Ignorar, tratar values faltantes com o vector de zeros do **Dummie Enconding**; 2. Remover todos e tratar registros como linhas de uma matriz 'banguela'; 3. Converter esses valores para uma representação qualquer: fill in the missing values --converter e depois remover; 4. Usar interpolação (**Impute**): decidir, de maneira inteligente o que deverá substituir o espaço em branco --geralmente ecolhe o valor com maior frequência; ``` # 4.1. Ignorar valores nulos e tratar tudo com o One Hot Encoding: from sklearn.preprocessing import LabelEncoder, OneHotEncoder data.columns data.head() # colunas para transformar: todas, exceto a coluna de índice (coluna zero) # data[:-1] data_encoded = pd.get_dummies(data, prefix=data.columns, prefix_sep=' ', dummy_na=False) # a diferenca esta no drop_firt=True # este parametro permite que o dataframe fique mais enxuto gerando k-1 registros no One-Hot-Encoding: # o uso do drop_first permite que o encoding utilizado seja o verdadeiro dunny encoding (sem el o metodo faz o one hot encoding) # data_encoded2 = pd.get_dummies(data, prefix=data.columns, prefix_sep=' ', dummy_na=False, drop_first=True) data_encoded.head(7) data_encoded.shape # não é um metodo .shape() e sim uma propriedade .shape # salva estado atual do dataframe em arquivo CSV: data_encoded.to_csv('04.1 - reacoes_one-hot_encoded.csv') # gera o dataframe resultante do verdadeiro dummy_encoding: data_encoded2 = pd.get_dummies(data, prefix=data.columns, prefix_sep=' ', dummy_na=False, drop_first=True) # salva estado atual do dataframe em arquivo CSV: data_encoded2.to_csv('04.2 - reacoes_dummy_encoded.csv') data_encoded2.head() # 4.2 - TODO: construção da Matriz 'Banguela': data.sample(6) # 4.3 - TODO: # 4.4 - TODO: # geração do relatório do Pandas Profiling, novamente: relatorio_final = ProfileReport(data_encoded2, title="Reações - Relatorio Final", correlations={"cramers": {"calculate": False}}) relatorio_final.to_widgets() ```
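One of the open TODOs above (item 4.4, imputing each missing value with the most frequent value of its column) could be sketched with scikit-learn's `SimpleImputer`. This is only an assumption about the intended approach, not the author's implementation:

```
from sklearn.impute import SimpleImputer
import pandas as pd

# 4.4 - impute each composto (column) with its most frequent category
imputer = SimpleImputer(strategy='most_frequent')
data_imputed = pd.DataFrame(imputer.fit_transform(data),
                            columns=data.columns,
                            index=data.index)

# sanity check: no missing values should remain
print(data_imputed.isnull().sum().sum())
```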
github_jupyter
# GSD: Rpb1 orthologs in 1011 genomes collection This collects Rpb1 gene and protein sequences from a collection of natural isolates of sequenced yeast genomes from [Peter et al 2017](https://www.ncbi.nlm.nih.gov/pubmed/29643504), and then estimates the count of the heptad repeats. It builds directly on the notebook [here](GSD%20Rpb1_orthologs_in_PB_genomes.ipynb), which descends from [Searching for coding sequences in genomes using BLAST and Python](../Searching%20for%20coding%20sequences%20in%20genomes%20using%20BLAST%20and%20Python.ipynb). It also builds on the notebooks shown [here](https://nbviewer.jupyter.org/github/fomightez/cl_sq_demo-binder/blob/master/notebooks/GSD/GSD%20Add_Supplemental_data_info_to_nt_count%20data%20for%201011_cerevisiae_collection.ipynb) and [here](https://github.com/fomightez/patmatch-binder). Reference for sequence data: [Genome evolution across 1,011 Saccharomyces cerevisiae isolates. Peter J, De Chiara M, Friedrich A, Yue JX, Pflieger D, Bergström A, Sigwalt A, Barre B, Freel K, Llored A, Cruaud C, Labadie K, Aury JM, Istace B, Lebrigand K, Barbry P, Engelen S, Lemainque A, Wincker P, Liti G, Schacherer J. Nature. 2018 Apr;556(7701):339-344. doi: 10.1038/s41586-018-0030-5. Epub 2018 Apr 11. PMID: 29643504](https://www.ncbi.nlm.nih.gov/pubmed/29643504) ----- ## Overview ![overview of steps](../../imgs/ortholog_mining_summarized.png) ## Preparation Get scripts and sequence data necessary. **DO NOT 'RUN ALL'. AN INTERACTION IS NECESSARY AT CELL FIVE. AFTER THAT INTERACTION, THE REST BELOW IT CAN BE RUN.** (Caveat: right now this is written for genes with no introns. Only a few hundred have in yeast and that is the organism in this example. Intron presence would only become important when trying to translate in late stages of this workflow.) ``` gene_name = "RPB1" size_expected = 5202 get_seq_from_link = False link_to_FASTA_of_gene = "https://gist.githubusercontent.com/fomightez/f46b0624f1d8e3abb6ff908fc447e63b/raw/625eaba76bb54e16032f90c8812350441b753a0c/uz_S288C_YOR270C_VPH1_coding.fsa" #**Possible future enhancement would be to add getting the FASTA of the gene from Yeastmine with just systematic id** ``` Get the `blast_to_df` script by running this commands. ``` import os file_needed = "blast_to_df.py" if not os.path.isfile(file_needed): !curl -O https://raw.githubusercontent.com/fomightez/sequencework/master/blast-utilities/blast_to_df.py import pandas as pd ``` **Now to get the entire collection or a subset of the 1011 genomes, the next cell will need to be edited.** I'll probably leave it with a small set for typical running purposes. However, to make it run fast, try the 'super-tiny' set with just two. ``` # Method to get ALL the genomes. TAKES A WHILE!!! # (ca. 1 hour and 15 minutes to download alone? + Extracting is a while.) # Easiest way to minotor extracting step is to open terminal, cd to # `GENOMES_ASSEMBLED`, & use `ls | wc -l` to count files extracted. 
#!curl -O http://1002genomes.u-strasbg.fr/files/1011Assemblies.tar.gz #!tar xzf 1011Assemblies.tar.gz #!rm 1011Assemblies.tar.gz # Small development set !curl -OL https://www.dropbox.com/s/f42tiygq9tr1545/medium_setGENOMES_ASSEMBLED.tar.gz !tar xzf medium_setGENOMES_ASSEMBLED.tar.gz # Tiny development set #!curl -OL https://www.dropbox.com/s/txufq2jflkgip82/tiny_setGENOMES_ASSEMBLED.tar.gz #!tar xzf tiny_setGENOMES_ASSEMBLED.tar.gz #!mv tiny_setGENOMES_ASSEMBLED GENOMES_ASSEMBLED #define directory with genomes genomes_dirn = "GENOMES_ASSEMBLED" ``` Before process the list of all of them, fix one that has an file name mismatch with what the description lines have. Specifically, the assembly file name is `CDH.re.fa`, but the FASTA-entries inside begin `CDH-3`. Simple file name mismatch. So next cell will change that file name to match. ``` import os import sys file_with_issues = "CDH.re.fa" if os.path.isfile("GENOMES_ASSEMBLED/"+file_with_issues): sys.stderr.write("\nFile with name non-matching entries ('{}') observed and" " fixed.".format(file_with_issues)) !mv GENOMES_ASSEMBLED/CDH.re.fa GENOMES_ASSEMBLED/CDH_3.re.fa #pause and then check if file with original name is there still because # it means this was attempted too soon and need to start over. import time time.sleep(12) #12 seconds if os.path.isfile("GENOMES_ASSEMBLED/"+file_with_issues): sys.stderr.write("\n***PROBLEM. TRIED THIS CELL BEFORE FINISHED UPLOADING.\n" "DELETE FILES ASSOCIATED AND START ALL OVER AGAIN WITH UPLOAD STEP***.") else: sys.stderr.write("\nFile '{}' not seen and so nothing done" ". Seems wrong.".format(file_with_issues)) sys.exit(1) # Get SGD gene sequence in FASTA format to search for best matches in the genomes import sys gene_filen = gene_name + ".fsa" if get_seq_from_link: !curl -o {gene_filen} {link_to_FASTA_of_gene} else: !touch {gene_filen} sys.stderr.write("\nEDIT THE FILE '{}' TO CONTAIN " "YOUR GENE OF INTEREST (FASTA-FORMATTED)" ".".format(gene_filen)) sys.exit(0) ``` **I PUT CONTENTS OF FILE `S288C_YDL140C_RPO21_coding.fsa` downloaded from [here](https://www.yeastgenome.org/locus/S000002299/sequence) as 'RPB1.fsa'.** Now you are prepared to run BLAST to search each PacBio-sequenced genomes for the best match to a gene from the Saccharomyces cerevisiae strain S288C reference sequence. ## Use BLAST to search the genomes for matches to the gene in the reference genome at SGD SGD is the [Saccharomyces cerevisiae Genome Database site](http:yeastgenome.org) and the reference genome is from S288C. This is going to go through each genome and make a database so it is searchable and then search for matches to the gene. The information on the best match will be collected. One use for that information will be collecting the corresponding sequences later. Import the script that allows sending BLAST output to Python dataframes so that we can use it here. ``` from blast_to_df import blast_to_df # Make a list of all `genome.fa` files, excluding `genome.fa.nhr` and `genome.fa.nin` and `genome.fansq` # The excluding was only necessary because I had run some queries preliminarily in development. Normally, it would just be the `.re.fa` at the outset. 
fn_to_check = "re.fa" genomes = [] import os import fnmatch for file in os.listdir(genomes_dirn): if fnmatch.fnmatch(file, '*'+fn_to_check): if not file.endswith(".nhr") and not file.endswith(".nin") and not file.endswith(".nsq") : # plus skip hidden files if not file.startswith("._"): genomes.append(file) len(genomes) ``` Using the trick of putting `%%capture` on first line from [here](https://stackoverflow.com/a/23692951/8508004) to suppress the output from BLAST for many sequences from filling up cell. (You can monitor the making of files ending in `.nhr` for all the FASTA files in `GENOMES_ASSEMBLED` to monitor progress'.) ``` %%time %%capture SGD_gene = gene_filen dfs = [] for genome in genomes: !makeblastdb -in {genomes_dirn}/{genome} -dbtype nucl result = !blastn -query {SGD_gene} -db {genomes_dirn}/{genome} -outfmt "6 qseqid sseqid stitle pident qcovs length mismatch gapopen qstart qend sstart send qframe sframe frames evalue bitscore qseq sseq" -task blastn from blast_to_df import blast_to_df blast_df = blast_to_df(result.n) dfs.append(blast_df.head(1)) # merge the dataframes in the list `dfs` into one dataframe df = pd.concat(dfs) #Save the df filen_prefix = gene_name + "_orthologBLASTdf" df.to_pickle(filen_prefix+".pkl") df.to_csv(filen_prefix+'.tsv', sep='\t',index = False) #df ``` Computationally check if any genomes missing from the BLAST results list? ``` subjids = df.sseqid.tolist() #print (subjids) #print (subjids[0:10]) subjids = [x.split("-")[0] for x in subjids] #print (subjids) #print (subjids[0:10]) len_genome_fn_end = len(fn_to_check) + 1 # plus one to accound for the period that will be # between `fn_to_check` and strain_id`, such as `SK1.genome.fa` genome_ids = [x[:-len_genome_fn_end] for x in genomes] #print (genome_ids[0:10]) a = set(genome_ids) #print (a) print ("initial:",len(a)) r = set(subjids) print("results:",len(r)) print ("missing:",len(a-r)) if len(a-r): print("\n") print("ids missing:",a-r) #a - r ``` Sanity check: Report on how expected size compares to max size seen? ``` size_seen = df.length.max(0) print ("Expected size of gene:", size_expected) print ("Most frequent size of matches:", df.length.mode()[0]) print ("Maximum size of matches:", df.length.max(0)) ``` ## Collect the identified, raw sequences Get the expected size centered on the best match, plus a little flanking each because they might not exactly cover the entire open reading frame. (Although, the example here all look to be full size.) ``` # Get the script for extracting based on position (and install dependency pyfaidx) import os file_needed = "extract_subsequence_from_FASTA.py" if not os.path.isfile(file_needed): !curl -O https://raw.githubusercontent.com/fomightez/sequencework/master/Extract_from_FASTA/extract_subsequence_from_FASTA.py !pip install pyfaidx ``` For the next cell, I am going to use the trick of putting `%%capture` on first line from [here](https://stackoverflow.com/a/23692951/8508004) to suppress the output from the entire set making a long list of output. For ease just monitor the progress in a launched terminal with the following code run in the directory where this notebook will be because the generated files only moved into the `raw` directory as last step of cell: ls seq_extracted* | wc -l (**NOTE: WHEN RUNNING WITH THE FULL SET, THIS CELL BELOW WILL REPORT AROUND A DOZEN `FileNotFoundError:`/Exceptions. HOWEVER, THEY DON'T CAUSE THE NOTEBOOK ITSELF TO CEASE TO RUN. 
SO DISREGARD THEM FOR THE TIME BEING.** ) ``` %%capture size_expected = size_expected # use value from above, or alter at this point. #size_expected = df.length.max(0) #bp length of SGD coding sequence; should be equivalent and that way not hardcoded? extra_add_to_start = 51 #to allow for 'fuzziness' at starting end extra_add_to_end = 51 #to allow for 'fuzziness' at far end genome_fn_end = "re.fa" def midpoint(items): ''' takes a iterable of items and returns the midpoint (integer) of the first and second values ''' return int((int(items[0])+int(items[1]))/2) #midpoint((1,100)) def determine_pos_to_get(match_start,match_end): ''' Take the start and end of the matched region. Calculate midpoint between those and then center expected size on that to determine preliminary start and preliminary end to get. Add the extra basepairs to get at each end to allow for fuzziness/differences of actual gene ends for orthologs. Return the final start and end positions to get. ''' center_of_match = midpoint((match_start,match_end)) half_size_expected = int(size_expected/2.0) if size_expected % 2 != 0: half_size_expected += 1 start_pos = center_of_match - half_size_expected end_pos = center_of_match + half_size_expected start_pos -= extra_add_to_start end_pos += extra_add_to_end # Because of getting some flanking sequences to account for 'fuzziness', it # is possible the start and end can exceed possible. 'End' is not a problem # because the `extract_subsequence_from_FASTA.py` script will get as much as # it from the indicated sequence if a larger than possible number is # provided. However,'start' can become negative and because the region to # extract is provided as a string the dash can become a problem. Dealing # with it here by making sequence positive only. # Additionally, because I rely on center of match to position where to get, # part being cut-off due to absence on sequence fragment will shift center # of match away from what is actually center of gene and to counter-balance # add twice the amount to the other end. (Actually, I feel I should adjust # the start end likewise if the sequence happens to be shorter than portion # I would like to capture but I don't know length of involved hit yet and # that would need to be added to allow that to happen!<--TO DO) if start_pos < 0: raw_amount_missing_at_start = abs(start_pos)# for counterbalancing; needs # to be collected before `start_pos` adjusted start_pos = 1 end_pos += 2 * raw_amount_missing_at_start return start_pos, end_pos # go through the dataframe using information on each to come up with sequence file, # specific indentifier within sequence file, and the start and end to extract # store these valaues as a list in a dictionary with the strain identifier as the key. extracted_info = {} start,end = 0,0 for row in df.itertuples(): #print (row.length) start_to_get, end_to_get = determine_pos_to_get(row.sstart, row.send) posns_to_get = "{}-{}".format(start_to_get, end_to_get) record_id = row.sseqid strain_id = row.sseqid.split("-")[0] seq_fn = strain_id + "." 
+ genome_fn_end extracted_info[strain_id] = [seq_fn, record_id, posns_to_get] # Use the dictionary to get the sequences for id_ in extracted_info: #%run extract_subsequence_from_FASTA.py {*extracted_info[id_]} #unpacking doesn't seem to work here in `%run` %run extract_subsequence_from_FASTA.py {genomes_dirn}/{extracted_info[id_][0]} {extracted_info[id_][1]} {extracted_info[id_][2]} #package up the retrieved sequences archive_file_name = gene_name+"_raw_ortholog_seqs.tar.gz" # make list of extracted files using fnmatch fn_part_to_match = "seq_extracted" collected_seq_files_list = [] import os import sys import fnmatch for file in os.listdir('.'): if fnmatch.fnmatch(file, fn_part_to_match+'*'): #print (file) collected_seq_files_list.append(file) !tar czf {archive_file_name} {" ".join(collected_seq_files_list)} # use the list for archiving command sys.stderr.write("\n\nCollected RAW sequences gathered and saved as " "`{}`.".format(archive_file_name)) # move the collected raw sequences to a folder in preparation for # extracting encoding sequence from original source below !mkdir raw !mv seq_extracted*.fa raw ``` That archive should contain the "raw" sequence for each gene, even if the ends are a little different for each. At minimum the entire gene sequence needs to be there at this point; extra at each end is preferable at this point. You should inspect them as soon as possible and adjust the extra sequence to add higher or lower depending on whether the ortholog genes vary more or less, respectively. The reason they don't need to be perfect yet though is because next we are going to extract the longest open reading frame, which presumably demarcates the entire gene. Then we can return to use that information to clean up the collected sequences to just be the coding sequence. ## Collect protein translations of the genes and then clean up "raw" sequences to just be coding We'll assume the longest translatable frame in the collected "raw" sequences encodes the protein sequence for the gene orthologs of interest. Well base these steps on the [section '20.1.13 Identifying open reading frames'](http://biopython.org/DIST/docs/tutorial/Tutorial.html#htoc299) in the present version of the [Biopython Tutorial and Cookbook](http://biopython.org/DIST/docs/tutorial/Tutorial.html) (Last Update – 18 December 2018 (Biopython 1.73). (First run the next cell to get a script needed for dealing with the strand during the translation and gathering of thge encoding sequence.) ``` import os file_needed = "convert_fasta_to_reverse_complement.py" if not os.path.isfile(file_needed): !curl -O https://raw.githubusercontent.com/fomightez/sequencework/master/ConvertSeq/convert_fasta_to_reverse_complement.py ``` Now to perform the work described in the header to this section... For the next cell, I am going to use the trick of putting `%%capture` on first line from [here](https://stackoverflow.com/a/23692951/8508004) to suppress the output from the entire set making a long list of output. 
For ease just monitor the progress in a launched terminal with the following code run in the directory where this notebook will be: ls *_ortholog_gene.fa | wc -l ``` %%capture # find the featured open reading frame and collect presumed protein sequences # Collect the corresponding encoding sequence from the original source def len_ORF(items): # orf is fourth item in the tuples return len(items[3]) def find_orfs_with_trans(seq, trans_table, min_protein_length): ''' adapted from the present section '20.1.13 Identifying open reading frames' http://biopython.org/DIST/docs/tutorial/Tutorial.html#htoc299 in the present version of the [Biopython Tutorial and Cookbook at http://biopython.org/DIST/docs/tutorial/Tutorial.html (Last Update – 18 December 2018 (Biopython 1.73) Same as there except altered to sort on the length of the open reading frame. ''' answer = [] seq_len = len(seq) for strand, nuc in [(+1, seq), (-1, seq.reverse_complement())]: for frame in range(3): trans = str(nuc[frame:].translate(trans_table)) trans_len = len(trans) aa_start = 0 aa_end = 0 while aa_start < trans_len: aa_end = trans.find("*", aa_start) if aa_end == -1: aa_end = trans_len if aa_end-aa_start >= min_protein_length: if strand == 1: start = frame+aa_start*3 end = min(seq_len,frame+aa_end*3+3) else: start = seq_len-frame-aa_end*3-3 end = seq_len-frame-aa_start*3 answer.append((start, end, strand, trans[aa_start:aa_end])) aa_start = aa_end+1 answer.sort(key=len_ORF, reverse = True) return answer def generate_rcoutput_file_name(file_name,suffix_for_saving = "_rc"): ''' from https://github.com/fomightez/sequencework/blob/master/ConvertSeq/convert_fasta_to_reverse_complement.py Takes a file name as an argument and returns string for the name of the output file. The generated name is based on the original file name. Specific example ================= Calling function with ("sequence.fa", "_rc") returns "sequence_rc.fa" ''' main_part_of_name, file_extension = os.path.splitext( file_name) #from #http://stackoverflow.com/questions/541390/extracting-extension-from-filename-in-python if '.' in file_name: #I don't know if this is needed with the os.path.splitext method but I had it before so left it return main_part_of_name + suffix_for_saving + file_extension else: return file_name + suffix_for_saving + ".fa" def add_strand_to_description_line(file,strand="-1"): ''' Takes a file and edits description line to add strand info at end. Saves the fixed file ''' import sys output_file_name = "temp.txt" # prepare output file for saving so it will be open and ready with open(output_file_name, 'w') as output_file: # read in the input file with open(file, 'r') as input_handler: # prepare to give feeback later or allow skipping to certain start lines_processed = 0 for line in input_handler: lines_processed += 1 if line.startswith(">"): new_line = line.strip() + "; {} strand\n".format(strand) else: new_line = line # Send text to output output_file.write(new_line) # replace the original file with edited !mv temp.txt {file} # Feedback sys.stderr.write("\nIn {}, strand noted.".format(file)) table = 1 #sets translation table to standard nuclear, see # https://www.ncbi.nlm.nih.gov/Taxonomy/Utils/wprintgc.cgi min_pro_len = 80 #cookbook had the standard `100`. Feel free to adjust. prot_seqs_info = {} #collect as dictionary with strain_id as key. 
Values to # be list with source id as first item and protein length as second and # strand in source seq as third item, and start and end in source sequence as fourth and fifth, # and file name of protein and gene as sixth and seventh. # Example key and value pair: 'YPS138':['<source id>','<protein length>',-1,52,2626,'<gene file name>','<protein file name>'] gene_seqs_fn_list = [] prot_seqs_fn_list = [] from Bio import SeqIO for raw_seq_filen in collected_seq_files_list: #strain_id = raw_seq_filen[:-len_genome_fn_end] #if was dealing with source seq strain_id = raw_seq_filen.split("-")[0].split("seq_extracted")[1] record = SeqIO.read("raw/"+raw_seq_filen,"fasta") raw_seq_source_fn = strain_id + "." + genome_fn_end raw_seq_source_id = record.description.split(":")[0] orf_list = find_orfs_with_trans(record.seq, table, min_pro_len) orf_start, orf_end, strand, prot_seq = orf_list[0] #longest ORF seq for protein coding location_raw_seq = record.description.rsplit(":",1)[1] #get to use in calculating # the start and end position in original genome sequence. raw_loc_parts = location_raw_seq.split("-") start_from_raw_seq = int(raw_loc_parts[0]) end_from_raw_seq = int(raw_loc_parts[1]) length_extracted = len(record) #also to use in calculating relative original #Fix negative value. (Somehow Biopython can report negative value when hitting # end of sequence without encountering stop codon and negatives messes up # indexing later it seems.) if orf_start < 0: orf_start = 0 # Trim back to the first Methionine, assumed to be the initiating MET. # (THIS MIGHT BE A SOURCE OF EXTRA 'LEADING' RESIDUES IN SOME CASES & ARGUES # FOR LIMITING THE AMOUNT OF FLANKING SEQUENCE ADDED TO ALLOW FOR FUZINESS.) try: amt_resi_to_trim = prot_seq.index("M") except ValueError: sys.stderr.write("**ERROR**When searching for initiating methionine,\n" "no Methionine found in the traslated protein sequence.**ERROR**") sys.exit(1) prot_seq = prot_seq[amt_resi_to_trim:] len_seq_trimmed = amt_resi_to_trim * 3 # Calculate the adjusted start and end values for the untrimmed ORF adj_start = start_from_raw_seq + orf_start adj_end = end_from_raw_seq - (length_extracted - orf_end) # Adjust for trimming for appropriate strand. if strand == 1: adj_start += len_seq_trimmed #adj_end += 3 # turns out stop codon is part of numbering biopython returns elif strand == -1: adj_end -= len_seq_trimmed #adj_start -= 3 # turns out stop codon is part of numbering biopython returns else: sys.stderr.write("**ERROR**No strand match option detected!**ERROR**") sys.exit(1) # Collect the sequence for the actual gene encoding region from # the original sequence. This way the original numbers will # be put in the file. start_n_end_str = "{}-{}".format(adj_start,adj_end) %run extract_subsequence_from_FASTA.py {genomes_dirn}/{raw_seq_source_fn} {raw_seq_source_id} {start_n_end_str} # rename the extracted subsequence a more distinguishing name and notify g_output_file_name = strain_id +"_" + gene_name + "_ortholog_gene.fa" !mv {raw_seq_filen} {g_output_file_name} # because the sequence saved happens to # be same as raw sequence file saved previously, that name can be used to # rename new file. gene_seqs_fn_list.append(g_output_file_name) sys.stderr.write("\n\nRenamed gene file to " "`{}`.".format(g_output_file_name)) # Convert extracted sequence to reverse complement if translation was on negative strand. 
if strand == -1: %run convert_fasta_to_reverse_complement.py {g_output_file_name} # replace original sequence file with the produced file produced_fn = generate_rcoutput_file_name(g_output_file_name) !mv {produced_fn} {g_output_file_name} # add (after saved) onto the end of the description line for that `-1 strand` # No way to do this in my current version of convert sequence. So editing descr line. add_strand_to_description_line(g_output_file_name) #When settled on actual protein encoding sequence, fill out # description to use for saving the protein sequence. prot_descr = (record.description.rsplit(":",1)[0]+ " "+ gene_name + "_ortholog"+ "| " +str(len(prot_seq)) + " aas | from " + raw_seq_source_id + " " + str(adj_start) + "-"+str(adj_end)) if strand == -1: prot_descr += "; {} strand".format(strand) # save the protein sequence as FASTA chunk_size = 70 #<---amino acids per line to have in FASTA prot_seq_chunks = [prot_seq[i:i+chunk_size] for i in range( 0, len(prot_seq),chunk_size)] prot_seq_fa = ">" + prot_descr + "\n"+ "\n".join(prot_seq_chunks) p_output_file_name = strain_id +"_" + gene_name + "_protein_ortholog.fa" with open(p_output_file_name, 'w') as output: output.write(prot_seq_fa) prot_seqs_fn_list.append(p_output_file_name) sys.stderr.write("\n\nProtein sequence saved as " "`{}`.".format(p_output_file_name)) # at end store information in `prot_seqs_info` for later making a dataframe # and then text table for saving summary #'YPS138':['<source id>',<protein length>,-1,52,2626,'<gene file name>','<protein file name>'] prot_seqs_info[strain_id] = [raw_seq_source_id,len(prot_seq),strand,adj_start,adj_end, g_output_file_name,p_output_file_name] sys.stderr.write("\n******END OF A SET OF PROTEIN ORTHOLOG " "AND ENCODING GENE********") # use `prot_seqs_info` for saving a summary text table (first convert to dataframe?) table_fn_prefix = gene_name + "_orthologs_table" table_fn = table_fn_prefix + ".tsv" pkl_table_fn = table_fn_prefix + ".pkl" import pandas as pd info_df = pd.DataFrame.from_dict(prot_seqs_info, orient='index', columns=['descr_id', 'length', 'strand', 'start','end','gene_file','prot_file']) # based on # https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.from_dict.html and # note from Python 3.6 that `pd.DataFrame.from_items` is deprecated; #"Please use DataFrame.from_dict" info_df.to_pickle(pkl_table_fn) info_df.to_csv(table_fn, sep='\t') # keep index is default sys.stderr.write("Text file of associated details saved as '{}'.".format(table_fn)) # pack up archive of gene and protein sequences plus the table seqs_list = gene_seqs_fn_list + prot_seqs_fn_list + [table_fn,pkl_table_fn] archive_file_name = gene_name+"_ortholog_seqs.tar.gz" !tar czf {archive_file_name} {" ".join(seqs_list)} # use the list for archiving command sys.stderr.write("\nCollected gene and protein sequences" " (plus table of details) gathered and saved as " "`{}`.".format(archive_file_name)) ``` Save the tarballed archive to your local machine. ----- ## Estimate the count of the heptad repeats Make a table of the estimate of heptad repeats for each orthlogous protein sequence. ``` # get the 'patmatch results to dataframe' script !curl -O https://raw.githubusercontent.com/fomightez/sequencework/master/patmatch-utilities/patmatch_results_to_df.py ``` Using the trick of putting `%%capture` on first line from [here](https://stackoverflow.com/a/23692951/8508004) to suppress the output from `patmatch_results_to_df` function from filling up cell. 
``` %%time %%capture # Go through each protein sequence file and look for matches to heptad pattern # LATER POSSIBLE IMPROVEMENT. Translate pasted gene sequence and add SGD REF S228C as first in list `prot_seqs_fn_list`. Because # although this set of orthologs includes essentially S228C, other lists won't and best to have reference for comparing. heptad_pattern = "[YF]SP[TG]SP[STAGN]" # will catch repeats#2 through #26 of S288C according to Corden, 2013 PMID: 24040939 from patmatch_results_to_df import patmatch_results_to_df sum_dfs = [] raw_dfs = [] for prot_seq_fn in prot_seqs_fn_list: !perl ../../patmatch_1.2/unjustify_fasta.pl {prot_seq_fn} output = !perl ../../patmatch_1.2/patmatch.pl -p {heptad_pattern} {prot_seq_fn}.prepared os.remove(os.path.join(prot_seq_fn+".prepared")) #delete file made for PatMatch raw_pm_df = patmatch_results_to_df(output.n, pattern=heptad_pattern, name="CTD_heptad") raw_pm_df.sort_values('hit_number', ascending=False, inplace=True) sum_dfs.append(raw_pm_df.groupby('FASTA_id').head(1)) raw_dfs.append(raw_pm_df) sum_pm_df = pd.concat(sum_dfs, ignore_index=True) sum_pm_df.sort_values('hit_number', ascending=False, inplace=True) sum_pm_df = sum_pm_df[['FASTA_id','hit_number']] #make protein length into dictionary with ids as keys to map to FASTA_ids in # order to add protein length as a column in summary table length_info_by_id= dict(zip(info_df.descr_id,info_df.length)) sum_pm_df['prot_length'] = sum_pm_df['FASTA_id'].map(length_info_by_id) sum_pm_df = sum_pm_df.reset_index(drop=True) raw_pm_df = pd.concat(raw_dfs, ignore_index=True) ``` Because of use of `%%capture` to suppress output, need a separate cell to see results summary. (Only showing parts here because will add more useful information below.) ``` sum_pm_df.head() # don't show all yet since lots and want to make this dataframe more useful below sum_pm_df.tail() # don't show all yet since lots and want to make this dataframe more useful below ``` I assume that '+ 2' should be added to the hit_number for each based on S288C according to [Corden, 2013](https://www.ncbi.nlm.nih.gov/pubmed/24040939) (or `+1` like [Hsin and Manley, 2012](https://www.ncbi.nlm.nih.gov/pubmed/23028141)); however, that is something that could be explored further. WHAT ONES MISSING NOW? Computationally check if any genomes missing from the list of orthologs? ``` subjids = df.sseqid.tolist() #print (subjids) #print (subjids[0:10]) subjids = [x.split("-")[0] for x in subjids] #print (subjids) #print (subjids[0:10]) len_genome_fn_end = len(fn_to_check) + 1 # plus one to accound for the period that will be # between `fn_to_check` and strain_id`, such as `SK1.genome.fa` genome_ids = [x[:-len_genome_fn_end] for x in genomes] #print (genome_ids[0:10]) ortholg_ids = sum_pm_df.FASTA_id.tolist() ortholg_ids = [x.split("-")[0] for x in ortholg_ids] a = set(genome_ids) #print (a) print ("initial:",len(a)) r = set(subjids) print("BLAST results:",len(r)) print ("missing from BLAST:",len(a-r)) if len(a-r): #print("\n") print("ids missing in BLAST results:",a-r) #a - r print ("\n\n=====POST-BLAST=======\n\n") o = set(ortholg_ids) print("orthologs extracted:",len(o)) print ("missing post-BLAST:",len(r-o)) if len(r-o): print("\n") print("ids lost post-BLAST:",r-o) #r - o print ("\n\n\n=====SUMMARY=======\n\n") if len(a-r) and len(r-o): print("\nAll missing in end:",(a-r) | (r-o)) ``` ## Make the Summarizing Dataframe more informative Add information on whether a stretch of 'N's is present. Making the data suspect and fit to be filtered out. 
Distinguish between cases where it is in what corresponds to the last third of the protein vs. elsewhere, if possible. Plus whether stop codon is present at end of encoding sequence because such cases also probably should be filtered out. Add information from the supplemental data table so possible patterns can be assessed more easily. #### Add information about N stretches and stop codon ``` # Collect following information for each gene sequence: # N stretch of at least two or more present in first 2/3 of gene sequence # N stretch of at least two or more present in last 1/3 of gene sequence # stop codon encoded at end of sequence? import re min_number_Ns_in_row_to_collect = 2 pattern_obj = re.compile("N{{{},}}".format(min_number_Ns_in_row_to_collect), re.I) # adpated from # code worked out in `collapse_large_unknown_blocks_in_DNA_sequence.py`, which relied heavily on # https://stackoverflow.com/a/250306/8508004 def longest_stretch2ormore_found(string, pattern_obj): ''' Check if a string has stretches of Ns of length two or more. If it does, return the length of longest stretch. If it doesn't return zero. Based on https://stackoverflow.com/a/1155805/8508004 and GSD Assessing_ambiguous_nts_in_nuclear_PB_genomes.ipynb ''' longest_match = '' for m in pattern_obj.finditer(string): if len(m.group()) > len(longest_match): longest_match = m.group() if longest_match == '': return 0 else: return len(longest_match) def chunk(xs, n): '''Split the list, xs, into n chunks; from http://wordaligned.org/articles/slicing-a-list-evenly-with-python''' L = len(xs) assert 0 < n <= L s, r = divmod(L, n) chunks = [xs[p:p+s] for p in range(0, L, s)] chunks[n-1:] = [xs[-r-s:]] return chunks n_stretch_last_third_by_id = {} n_stretch_first_two_thirds_by_id = {} stop_codons = ['TAA','TAG','TGA'] stop_codon_presence_by_id = {} for fn in gene_seqs_fn_list: # read in sequence without using pyfaidx because small and not worth making indexing files lines = [] with open(fn, 'r') as seqfile: for line in seqfile: lines.append(line.strip()) descr_line = lines[0] seq = ''.join(lines[1:]) gene_seq_id = descr_line.split(":")[0].split(">")[1]#first line parsed for all in front of ":" and without caret # determine first two-thirds and last third chunks = chunk(seq,3) assert len(chunks) == 3, ("The sequence must be split in three parts'.") first_two_thirds = chunks[0] + chunks[1] last_third = chunks[-1] # Examine each part n_stretch_last_third_by_id[gene_seq_id] = longest_stretch2ormore_found(last_third,pattern_obj) n_stretch_first_two_thirds_by_id[gene_seq_id] = longest_stretch2ormore_found(first_two_thirds,pattern_obj) #print(gene_seq_id) #print (seq[-3:] in stop_codons) #stop_codon_presence_by_id[gene_seq_id] = seq[-3:] in stop_codons stop_codon_presence_by_id[gene_seq_id] = "+" if seq[-3:] in stop_codons else "-" # Add collected information to sum_pm_df sum_pm_df['NstretchLAST_THIRD'] = sum_pm_df['FASTA_id'].map(n_stretch_last_third_by_id) sum_pm_df['NstretchELSEWHERE'] = sum_pm_df['FASTA_id'].map(n_stretch_first_two_thirds_by_id) sum_pm_df['stop_codon'] = sum_pm_df['FASTA_id'].map(stop_codon_presence_by_id) # Safe to ignore any warnings about copy. I think because I swapped columns in and out # of sum_pm_df earlier perhaps. 
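# At this point `NstretchLAST_THIRD` and `NstretchELSEWHERE` hold the length of the longest
# run of two or more Ns found in the last third / first two-thirds of each gene sequence
# (0 means no such run), and `stop_codon` is '+' when the final codon is TAA, TAG or TGA.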
``` #### Add details on strains from the published supplemental information This section is based on [this notebook entitled 'GSD: Add Supplemental data info to nt count data for 1011 cerevisiae collection'](https://github.com/fomightez/cl_sq_demo-binder/blob/master/notebooks/GSD/GSD%20Add_Supplemental_data_info_to_nt_count%20data%20for%201011_cerevisiae_collection.ipynb). ``` !curl -OL https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-018-0030-5/MediaObjects/41586_2018_30_MOESM3_ESM.xls !pip install xlrd import pandas as pd #sum_pm_TEST_df = sum_pm_df.copy() supp_df = pd.read_excel('41586_2018_30_MOESM3_ESM.xls', sheet_name=0, header=3, skipfooter=31) supp_df['Standardized name'] = supp_df['Standardized name'].str.replace('SACE_','') suppl_info_dict = supp_df.set_index('Standardized name').to_dict('index') #Make new column with simplified strain_id tags to use for relating to supplemental table def add_id_tags(fasta_fn): return fasta_fn[:3] sum_pm_df["id_tag"] = sum_pm_df['FASTA_id'].apply(add_id_tags) ploidy_dict_by_id = {x:suppl_info_dict[x]['Ploidy'] for x in suppl_info_dict} aneuploidies_dict_by_id = {x:suppl_info_dict[x]['Aneuploidies'] for x in suppl_info_dict} eco_origin_dict_by_id = {x:suppl_info_dict[x]['Ecological origins'] for x in suppl_info_dict} clade_dict_by_id = {x:suppl_info_dict[x]['Clades'] for x in suppl_info_dict} sum_pm_df['Ploidy'] = sum_pm_df.id_tag.map(ploidy_dict_by_id) #Pandas docs has `Index.map` (uppercase `I`) but only lowercase works. sum_pm_df['Aneuploidies'] = sum_pm_df.id_tag.map(aneuploidies_dict_by_id) sum_pm_df['Ecological origin'] = sum_pm_df.id_tag.map(eco_origin_dict_by_id) sum_pm_df['Clade'] = sum_pm_df.id_tag.map(clade_dict_by_id) # remove the `id_tag` column add for relating details from supplemental to summary df sum_pm_df = sum_pm_df.drop('id_tag',1) # use following two lines when sure want to see all and COMMENT OUT BOTTOM LINE #with pd.option_context('display.max_rows', None, 'display.max_columns', None): # display(sum_pm_df) sum_pm_df ``` I assume that '+ 2' should be added to the hit_number for each based on S288C according to [Corden, 2013](https://www.ncbi.nlm.nih.gov/pubmed/24040939) (or `+1` like [Hsin and Manley, 2012](https://www.ncbi.nlm.nih.gov/pubmed/23028141)); however, that is something that could be explored further. ## Filter collected set to those that are 'complete' For plotting and summarizing with a good set of information, best to remove any where the identified ortholog gene has stretches of 'N's or lacks a stop codon. (Keep unfiltered dataframe around though.) 
``` sum_pm_UNFILTEREDdf = sum_pm_df.copy() #subset to those where there noth columns for Nstretch assessment are zero sum_pm_df = sum_pm_df[(sum_pm_df[['NstretchLAST_THIRD','NstretchELSEWHERE']] == 0).all(axis=1)] # based on https://codereview.stackexchange.com/a/185390 #remove any where there isn't a stop codon sum_pm_df = sum_pm_df.drop(sum_pm_df[sum_pm_df.stop_codon != '+'].index) ``` Computationally summarize result of filtering in comparison to previous steps: ``` subjids = df.sseqid.tolist() #print (subjids) #print (subjids[0:10]) subjids = [x.split("-")[0] for x in subjids] #print (subjids) #print (subjids[0:10]) len_genome_fn_end = len(fn_to_check) + 1 # plus one to accound for the period that will be # between `fn_to_check` and strain_id`, such as `SK1.genome.fa` genome_ids = [x[:-len_genome_fn_end] for x in genomes] #print (genome_ids[0:10]) ortholg_ids = sum_pm_UNFILTEREDdf.FASTA_id.tolist() ortholg_ids = [x.split("-")[0] for x in ortholg_ids] filtered_ids = sum_pm_df.FASTA_id.tolist() filtered_ids =[x.split("-")[0] for x in filtered_ids] a = set(genome_ids) #print (a) print ("initial:",len(a)) r = set(subjids) print("BLAST results:",len(r)) print ("missing from BLAST:",len(a-r)) if len(a-r): #print("\n") print("ids missing in BLAST results:",a-r) #a - r print ("\n\n=====POST-BLAST=======\n\n") o = set(ortholg_ids) print("orthologs extracted:",len(o)) print ("missing post-BLAST:",len(r-o)) if len(r-o): print("\n") print("ids lost post-BLAST:",r-o) #r - o print ("\n\n\n=====PRE-FILTERING=======\n\n") print("\nNumber before filtering:",len(sum_pm_UNFILTEREDdf)) if len(a-r) and len(r-o): print("\nAll missing in unfiltered:",(a-r) | (r-o)) print ("\n\n\n=====POST-FILTERING SUMMARY=======\n\n") f = set(filtered_ids) print("\nNumber left in filtered set:",len(sum_pm_df)) print ("Number removed by filtering:",len(o-f)) if len(a-r) and len(r-o) and len(o-f): print("\nAll missing in filtered:",(a-r) | (r-o) | (o-f)) # use following two lines when sure want to see all and COMMENT OUT BOTTOM LINE with pd.option_context('display.max_rows', None, 'display.max_columns', None): display(sum_pm_df) #sum_pm_df ``` I assume that '+ 2' should be added to the hit_number for each based on S288C according to [Corden, 2013](https://www.ncbi.nlm.nih.gov/pubmed/24040939) (or `+1` like [Hsin and Manley, 2012](https://www.ncbi.nlm.nih.gov/pubmed/23028141)); however, that is something that could be explored further. #### Archive the 'Filtered' set of sequences Above I saved all the gene and deduced protein sequences of the orthologs in a single archive. It might be useful to just have an archive of the 'filtered' set. ``` # pack up archive of gene and protein sequences for the 'filtered' set. # Include the summary table too. # This is different than the other sets I made because this 'filtering' was # done using the dataframe and so I don't have the file associations. The file names # though can be generated using the unfiltered file names for the genes and proteins # and sorting which ones don't remain in the filtered set using 3-letter tags at # the beginning of the entries in `FASTA_id` column to relate them. 
# Use the `FASTA_id` column of sum_pm_df to make a list of tags that remain in filtered set tags_remaining_in_filtered = [x[:3] for x in sum_pm_df.FASTA_id.tolist()] # Go through the gene and protein sequence list and collect those where the first # three letters match the tag gene_seqs_FILTfn_list = [x for x in gene_seqs_fn_list if x[:3] in tags_remaining_in_filtered] prot_seqs_FILTfn_list = [x for x in prot_seqs_fn_list if x[:3] in tags_remaining_in_filtered] # Save the files in those two lists along with the sum_pm_df (as tabular data and pickled form) patmatchsum_fn_prefix = gene_name + "_orthologs_patmatch_results_summary" patmatchsum_fn = patmatchsum_fn_prefix + ".tsv" pklsum_patmatch_fn = patmatchsum_fn_prefix + ".pkl" import pandas as pd sum_pm_df.to_pickle(pklsum_patmatch_fn) sum_pm_df.to_csv(patmatchsum_fn, sep='\t') # keep index is default FILTEREDseqs_n_df_list = gene_seqs_FILTfn_list + prot_seqs_FILTfn_list + [patmatchsum_fn,pklsum_patmatch_fn] archive_file_name = gene_name+"_ortholog_seqsFILTERED.tar.gz" !tar czf {archive_file_name} {" ".join(FILTEREDseqs_n_df_list)} # use the list for archiving command sys.stderr.write("\nCollected gene and protein sequences" " (plus table of details) for 'FILTERED' set gathered and saved as " "`{}`.".format(archive_file_name)) ``` Download the 'filtered' sequences to your local machine. ## Summarizing with filtered set Plot distribution. ``` %matplotlib inline import math import matplotlib.pyplot as plt import seaborn as sns sns.set() #Want an image file of the figure saved? saveplot = True saveplot_fn_prefix = 'heptad_repeat_distribution' #sns.distplot(sum_pm_df["hit_number"], kde=False, bins = max(sum_pm_df["hit_number"])); p= sns.countplot(sum_pm_df["hit_number"], order = list(range(sum_pm_df.hit_number.min(),sum_pm_df.hit_number.max()+1)), color="C0", alpha= 0.93) #palette="Blues"); # `order` to get those categories with zero # counts to show up from https://stackoverflow.com/a/45359713/8508004 p.set_xlabel("heptad repeats") #add percent above bars, based on code in middle of https://stackoverflow.com/a/33259038/8508004 ncount = len(sum_pm_df) for pat in p.patches: x=pat.get_bbox().get_points()[:,0] y=pat.get_bbox().get_points()[1,1] # note that this check on the next line was necessary to add when I went back to cases where there's # no counts for certain categories and so `y` was coming up `nan` for for thos and causing error # about needing positive value for the y value; `math.isnan(y)` based on https://stackoverflow.com/a/944733/8508004 if not math.isnan(y): p.annotate('{:.1f}%'.format(100.*y/(ncount)), (x.mean(), y), ha='center', va='bottom', size = 9, color='#333333') if saveplot: fig = p.get_figure() #based on https://stackoverflow.com/a/39482402/8508004 fig.savefig(saveplot_fn_prefix + '.png', bbox_inches='tight') fig.savefig(saveplot_fn_prefix + '.svg'); ``` However, with the entire 1011 collection, those at the bottom can not really be seen. The next plot shows this by limiting y-axis to 103. It should be possible to make a broken y-axis plot for this eventually but not right now as there is no automagic way. So for now will need to composite the two plots together outside. (Note that adding percents annotations makes height of this plot look odd in the notebook cell for now.) ``` %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns sns.set() #Want an image file of the figure saved? 
saveplot = True saveplot_fn_prefix = 'heptad_repeat_distributionLIMIT103' #sns.distplot(sum_pm_df["hit_number"], kde=False, bins = max(sum_pm_df["hit_number"])); p= sns.countplot(sum_pm_df["hit_number"], order = list(range(sum_pm_df.hit_number.min(),sum_pm_df.hit_number.max()+1)), color="C0", alpha= 0.93) #palette="Blues"); # `order` to get those categories with zero # counts to show up from https://stackoverflow.com/a/45359713/8508004 p.set_xlabel("heptad repeats") plt.ylim(0, 103) #add percent above bars, based on code in middle of https://stackoverflow.com/a/33259038/8508004 ncount = len(sum_pm_df) for pat in p.patches: x=pat.get_bbox().get_points()[:,0] y=pat.get_bbox().get_points()[1,1] # note that this check on the next line was necessary to add when I went back to cases where there's # no counts for certain categories and so `y` was coming up `nan` for those and causing error # about needing positive value for the y value; `math.isnan(y)` based on https://stackoverflow.com/a/944733/8508004 if not math.isnan(y): p.annotate('{:.1f}%'.format(100.*y/(ncount)), (x.mean(), y), ha='center', va='bottom', size = 9, color='#333333') if saveplot: fig = p.get_figure() #based on https://stackoverflow.com/a/39482402/8508004 fig.savefig(saveplot_fn_prefix + '.png') fig.savefig(saveplot_fn_prefix + '.svg'); ``` I assume that '+ 2' should be added to the hit_number for each based on S288C according to [Corden, 2013](https://www.ncbi.nlm.nih.gov/pubmed/24040939) (or `+1` like [Hsin and Manley, 2012](https://www.ncbi.nlm.nih.gov/pubmed/23028141)); however, that is something that could be explored further. ``` %matplotlib inline # above line works for JupyterLab which I was developing in. Try `%matplotlib notebook` for when in classic. # Visualization # This is loosely based on my past use of seaborn when making `plot_sites_position_across_chromosome.py` and related scripts. # For example, see `GC-clusters relative mito chromosome and feature` where I ran # `%run plot_sites_position_across_chromosome.py GC_df_for_merging.pkl -o strand_ofGCacross_mito_chrom` # add the strain info for listing that without chr info & add species information for coloring on that chromosome_id_prefix = "-" def FASTA_id_to_strain(FAid): ''' use FASTA_id column value to convert to strain_id and then return the strain_id ''' return FAid.split(chromosome_id_prefix)[0] sum_pm_df_for_plot = sum_pm_df.copy() sum_pm_df_for_plot['strain'] = sum_pm_df['FASTA_id'].apply(FASTA_id_to_strain) # sum_pm_df['species'] = sum_pm_df['FASTA_id'].apply(strain_to_species) # since need species for label plot strips # it is easier to add species column first and then use map instead of doing both at same with one `apply` # of a function or both separately, both with `apply` of two different function. # sum_pm_df['species'] = sum_pm_df['strain'].apply(strain_to_species) sum_pm_df_for_plot['species'] = 'cerevisiae' #Want an image file of the figure saved? saveplot = True saveplot_fn_prefix = 'heptad_repeats_by_strain' import matplotlib.pyplot as plt if len(sum_pm_df) > 60: plt.figure(figsize=(8,232)) else: plt.figure(figsize=(8,12)) import seaborn as sns sns.set() # Simple look - Comment out everything below to the next two lines to see it again. 
p = sns.stripplot(x="hit_number", y="strain", data=sum_pm_df_for_plot, marker="h", size=7.5, alpha=.98, palette="tab20b") p = sns.stripplot(x="hit_number", y="strain", data=sum_pm_df_for_plot, marker="D", size=9.5, alpha=.98, hue="Clade") # NOTE CANNOT JUST USE ONE WITH `hue` by 'Clase' because several don't Clades assigned in the supplemental data # and so those left off. This overlays the two and doesn't cause artifacts when size of first maker smaller. p.set_xlabel("heptad repeats") #p.set_xticklabels([" ","23"," ","24", " ", "25"]) # This was much easier than all the stuff I tried for `Adjusted` look below # and the only complaint I have with the results is that what I assume are the `minor` tick lines show up; still ended up # needing this when added `xticks = p.xaxis.get_major_ticks()` in order to not show decimals for ones I kept #p.set(xticks=[]) # this works to remove the ticks entirely; however, I want to keep major ticks ''' xticks = p.xaxis.get_major_ticks() #based on https://stackoverflow.com/q/50820043/8508004 for i in range(len(xticks)): #print (i) # WAS FOR DEBUGGING keep_ticks = [1,3,5] #harcoding essentially again, but at least it works if i not in keep_ticks: xticks[i].set_visible(False) ''' ''' # Highly Adjusted look - Comment out default look parts above. Ended up going with simple above because still couldn't get # those with highest number of repeats with combination I could come up with. sum_pm_df_for_plot["repeats"] = sum_pm_df_for_plot["hit_number"].astype(str) # when not here (use `x="hit_number"` in plot) or # tried `.astype('category')` get plotting of the 0.5 values too sum_pm_df_for_plot.sort_values('hit_number', ascending=True, inplace=True) #resorting again was necessary when # added `sum_pm_df["hit_number"].astype(str)` to get 'lower' to 'higher' as left to right for x-axis; otherwise # it was putting the first rows on the left, which happened to be the 'higher' repeat values #p = sns.catplot(x="repeats", y="strain", hue="species", data=sum_pm_df, marker="D", size=10, alpha=.98) #marker size ignored in catplot? p = sns.stripplot(x="repeats", y="strain", hue="species", data=sum_pm_df, marker="D", size=10, alpha=.98) #p = sns.stripplot(x="repeats", y="strain", hue="species", order = list(species_dict.keys()), data=sum_pm_df_for_plot, marker="D", # size=10, alpha=.98) # not fond of essentially harcoding to strain order but makes more logical sense to have # strains with most repeats at the top of the y-axis; adding `order` makes `sort` order be ignored p.set_xlabel("heptad repeats") sum_pm_df_for_plot.sort_values('hit_number', ascending=False, inplace=True) #revert to descending sort for storing df; ''' if saveplot: fig = p.get_figure() #based on https://stackoverflow.com/a/39482402/8508004 fig.savefig(saveplot_fn_prefix + '.png', bbox_inches='tight') fig.savefig(saveplot_fn_prefix + '.svg'); ``` (Hexagons are used for those without an assigned clade in [the supplemental data Table 1](https://www.nature.com/articles/s41586-018-0030-5) in the plot above.) I assume that '+ 2' should be added to the hit_number for each based on S288C according to [Corden, 2013](https://www.ncbi.nlm.nih.gov/pubmed/24040939) (or `+1` like [Hsin and Manley, 2012](https://www.ncbi.nlm.nih.gov/pubmed/23028141)); however, that is something that could be explored further. ``` %matplotlib inline # above line works for JupyterLab which I was developing in. Try `%matplotlib notebook` for when in classic. 
# Visualization # This is loosely based on my past use of seaborn when making `plot_sites_position_across_chromosome.py` and related scripts. # For example, see `GC-clusters relative mito chromosome and feature` where I ran # `%run plot_sites_position_across_chromosome.py GC_df_for_merging.pkl -o strand_ofGCacross_mito_chrom` # add the strain info for listing that without chr info & add species information for coloring on that chromosome_id_prefix = "-" def FASTA_id_to_strain(FAid): ''' use FASTA_id column value to convert to strain_id and then return the strain_id ''' return FAid.split(chromosome_id_prefix)[0] sum_pm_df_for_plot = sum_pm_df.copy() sum_pm_df_for_plot['strain'] = sum_pm_df['FASTA_id'].apply(FASTA_id_to_strain) # sum_pm_df['species'] = sum_pm_df['FASTA_id'].apply(strain_to_species) # since need species for label plot strips # it is easier to add species column first and then use map instead of doing both at same with one `apply` # of a function or both separately, both with `apply` of two different function. # sum_pm_df['species'] = sum_pm_df['strain'].apply(strain_to_species) sum_pm_df_for_plot['species'] = 'cerevisiae' #Want an image file of the figure saved? saveplot = True saveplot_fn_prefix = 'heptad_repeats_by_proteinlen' import matplotlib.pyplot as plt if len(sum_pm_df) > 60: plt.figure(figsize=(8,232)) else: plt.figure(figsize=(8,12)) import seaborn as sns sns.set() # Simple look - Comment out everything below to the next two lines to see it again. #p = sns.stripplot(x="hit_number", y="strain", data=sum_pm_df_for_plot, marker="h", size=7.5, alpha=.98, palette="tab20b") p = sns.stripplot(x="hit_number", y="strain", data=sum_pm_df_for_plot, marker="D", size=9.5, alpha=.98, hue="prot_length") # NOTE CANNOT JUST USE ONE WITH `hue` by 'Clase' because several don't Clades assigned in the supplemental data # and so those left off. This overlays the two and doesn't cause artifacts when size of first maker smaller. p.set_xlabel("heptad repeats") #p.set_xticklabels([" ","23"," ","24", " ", "25"]) # This was much easier than all the stuff I tried for `Adjusted` look below # and the only complaint I have with the results is that what I assume are the `minor` tick lines show up; still ended up # needing this when added `xticks = p.xaxis.get_major_ticks()` in order to not show decimals for ones I kept #p.set(xticks=[]) # this works to remove the ticks entirely; however, I want to keep major ticks ''' xticks = p.xaxis.get_major_ticks() #based on https://stackoverflow.com/q/50820043/8508004 for i in range(len(xticks)): #print (i) # WAS FOR DEBUGGING keep_ticks = [1,3,5] #harcoding essentially again, but at least it works if i not in keep_ticks: xticks[i].set_visible(False) ''' ''' # Highly Adjusted look - Comment out default look parts above. Ended up going with simple above because still couldn't get # those with highest number of repeats with combination I could come up with. 
sum_pm_df_for_plot["repeats"] = sum_pm_df_for_plot["hit_number"].astype(str) # when not here (use `x="hit_number"` in plot) or # tried `.astype('category')` get plotting of the 0.5 values too sum_pm_df_for_plot.sort_values('hit_number', ascending=True, inplace=True) #resorting again was necessary when # added `sum_pm_df["hit_number"].astype(str)` to get 'lower' to 'higher' as left to right for x-axis; otherwise # it was putting the first rows on the left, which happened to be the 'higher' repeat values #p = sns.catplot(x="repeats", y="strain", hue="species", data=sum_pm_df, marker="D", size=10, alpha=.98) #marker size ignored in catplot? p = sns.stripplot(x="repeats", y="strain", hue="species", data=sum_pm_df, marker="D", size=10, alpha=.98) #p = sns.stripplot(x="repeats", y="strain", hue="species", order = list(species_dict.keys()), data=sum_pm_df_for_plot, marker="D", # size=10, alpha=.98) # not fond of essentially harcoding to strain order but makes more logical sense to have # strains with most repeats at the top of the y-axis; adding `order` makes `sort` order be ignored p.set_xlabel("heptad repeats") sum_pm_df_for_plot.sort_values('hit_number', ascending=False, inplace=True) #revert to descending sort for storing df; ''' if saveplot: fig = p.get_figure() #based on https://stackoverflow.com/a/39482402/8508004 fig.savefig(saveplot_fn_prefix + '.png', bbox_inches='tight') fig.savefig(saveplot_fn_prefix + '.svg'); ``` I assume that '+ 2' should be added to the hit_number for each based on S288C according to [Corden, 2013](https://www.ncbi.nlm.nih.gov/pubmed/24040939) (or `+1` like [Hsin and Manley, 2012](https://www.ncbi.nlm.nih.gov/pubmed/23028141)); however, that is something that could be explored further. ## Make raw and summary data available for use elsewhere All the raw data is there for each strain in `raw_pm_df`. For example, the next cell shows how to view the data associated with the summary table for isolate ADK_8: ``` ADK_8_raw = raw_pm_df[raw_pm_df['FASTA_id'] == 'ADK_8-20587'].sort_values('hit_number', ascending=True).reset_index(drop=True) ADK_8_raw ``` The summary and raw data will be packaged up into one file in the cell below. One of the forms will be a tabular text data ('.tsv') files that can be opened in any spreadsheet software. 
``` # save summary and raw results for use elsewhere (or use `.pkl` files for reloading the pickled dataframe into Python/pandas) patmatch_fn_prefix = gene_name + "_orthologs_patmatch_results" patmatchsum_fn_prefix = gene_name + "_orthologs_patmatch_results_summary" patmatchsumFILTERED_fn_prefix = gene_name + "_orthologs_patmatch_results_summaryFILTERED" patmatch_fn = patmatch_fn_prefix + ".tsv" pkl_patmatch_fn = patmatch_fn_prefix + ".pkl" patmatchsumUNF_fn = patmatchsumFILTERED_fn_prefix + ".tsv" pklsum_patmatchUNF_fn = patmatchsumFILTERED_fn_prefix + ".pkl" patmatchsum_fn = patmatchsum_fn_prefix + ".tsv" pklsum_patmatch_fn = patmatchsum_fn_prefix + ".pkl" import pandas as pd sum_pm_df.to_pickle(pklsum_patmatch_fn) sum_pm_df.to_csv(patmatchsum_fn, sep='\t') # keep index is default sys.stderr.write("Text file of summary details after filtering saved as '{}'.".format(patmatchsum_fn)) sum_pm_UNFILTEREDdf.to_pickle(pklsum_patmatchUNF_fn) sum_pm_UNFILTEREDdf.to_csv(patmatchsumUNF_fn, sep='\t') # keep index is default sys.stderr.write("\nText file of summary details before filtering saved as '{}'.".format(patmatchsumUNF_fn)) raw_pm_df.to_pickle(pkl_patmatch_fn) raw_pm_df.to_csv(patmatch_fn, sep='\t') # keep index is default sys.stderr.write("\nText file of raw details saved as '{}'.".format(patmatchsum_fn)) # pack up archive dataframes pm_dfs_list = [patmatch_fn,pkl_patmatch_fn,patmatchsumUNF_fn,pklsum_patmatchUNF_fn, patmatchsum_fn,pklsum_patmatch_fn] archive_file_name = patmatch_fn_prefix+".tar.gz" !tar czf {archive_file_name} {" ".join(pm_dfs_list)} # use the list for archiving command sys.stderr.write("\nCollected pattern matching" " results gathered and saved as " "`{}`.".format(archive_file_name)) ``` Download the tarballed archive of the files to your computer. For now that archive doesn't include the figures generated from the plots because with a lot of strains they can get large. Download those if you want them. (Look for `saveplot_fn_prefix` settings in the code to help identify file names.) ---- ``` import time def executeSomething(): #code here print ('.') time.sleep(480) #60 seconds times 8 minutes while True: executeSomething() ```
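If the tables need to be picked up again in a later session, the pickled forms can be reloaded directly. A minimal sketch, assuming the `.pkl` files written above are still in the working directory and `gene_name` is defined as before:

```
# Reload the pickled summary and raw PatMatch dataframes saved above
import pandas as pd

sum_pm_reloaded = pd.read_pickle(gene_name + "_orthologs_patmatch_results_summary.pkl")
raw_pm_reloaded = pd.read_pickle(gene_name + "_orthologs_patmatch_results.pkl")
sum_pm_reloaded.head()
```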
github_jupyter
``` #pip install seaborn ``` # Import Libraries ``` %matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns ``` # Read the CSV and Perform Basic Data Cleaning ``` # Raw dataset drop NA df = pd.read_csv("../resources/train_predict.csv") # Drop the null columns where all values are null df1 = df.dropna(axis='columns', how='all') df1.head() #Reviewing the % of null values 100*df1.isnull().sum()/df.shape[0] # Drop the null rows data cleaning, making all column headers lowercase loan_df = df.dropna() loan_df.columns=df.columns.str.lower() loan_df.head() #Update column names loan_df.columns=['loan_id', 'gender', 'married', 'dependents', 'education','self_employed' , 'income', 'co_income' , 'loan_amount', 'loan_term', 'credit_history', 'property_area', 'loan_status'] #Test data_df after drop NAN loan_df.dtypes loan_df.shape #Reviewing data loan_df['dependents'].unique() #Reviewing data loan_df['self_employed'].unique() #Reviewing data loan_df['loan_term'].unique() #Reviewing data loan_df['credit_history'].unique() loan_df.describe() ``` # Select your features (columns) ``` # Set features. This will also be used as your x values. Removed 'loan_id', 'property_area' loan_features_df = loan_df[['gender', 'married', 'dependents', 'education','self_employed' , 'income', 'co_income' , 'loan_amount', 'loan_term', 'credit_history', 'loan_status']] loan_features_df.head() sns.countplot(y='gender', hue ='loan_status',data =loan_features_df) sns.countplot(y='married', hue ='loan_status',data =loan_features_df) sns.countplot(y='credit_history', hue ='loan_status',data =loan_features_df) sns.countplot(y='loan_term', hue ='loan_status',data =loan_features_df) ``` # Create a Train Test Split Use `loan_status` for the y values ``` y = loan_features_df[["loan_status"]] X = loan_features_df.drop(columns=["loan_status"]) print(X.shape, y.shape) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, stratify=y) #code to numberic Hold-> ‘Urban’: 3, ‘Semiurban’: 2,’Rural’: 1, code_numeric = {'Female': 1, 'Male': 2,'Yes': 1, 'No': 2, 'Graduate': 1, 'Not Graduate': 2, 'Y': 1, 'N': 0, '3+': 3} loan_features_df = loan_features_df.applymap(lambda s: code_numeric.get(s) if s in code_numeric else s) loan_features_df.info() ``` # Pre-processing Scale the data and perform some feature selection ``` # Scale Data from sklearn.preprocessing import StandardScaler # Create a StandardScater model and fit it to the training data X_scaler = StandardScaler().fit(X_train) #y_scaler = StandardScaler().fit(y_train) # to_categorical(y) # StandardScaler().fit(X) # Preprocessing #from sklearn.preprocessing import LabelEncoder from tensorflow.keras.utils import to_categorical # label_encoder = LabelEncoder() # label_encoder.fit(y_train) # encoded_y_train = label_encoder.transform(y_train) # encoded_y_test = label_encoder.transform(y_test) y_train_categorical = to_categorical(y_train) y_test_categorical = to_categorical(y_test) ``` # Train the Model ``` from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense model = Sequential() model.add(Dense(units=500, activation='relu', input_dim=10)) # model.add(Dense(units=100, activation='relu')) model.add(Dense(units=2, activation='softmax')) model.summary() model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) # Fit the model to the training data model.fit( X_scaled, y_train_categorical, epochs=100, 
    shuffle=True,
    verbose=2
)

# Apply the StandardScaler fitted above so the SVC trains on scaled features
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)

from sklearn.svm import SVC 
model = SVC(kernel='linear')
model.fit(X_train_scaled, y_train.values.ravel())

print(f"Training Data Score: {model.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {model.score(X_test_scaled, y_test)}")

from sklearn.metrics import classification_report
predictions = model.predict(X_test_scaled)
print(classification_report(y_test, predictions))
```

# Hyperparameter Tuning

Use `GridSearchCV` to tune the model's parameters

```
# Create the GridSearchCV model
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [1, 2, 10, 50],
              'gamma': [0.0001, 0.0005, 0.001, 0.005]}
grid = GridSearchCV(model, param_grid, verbose=3)

# Train the model with GridSearch on the scaled training data
grid.fit(X_train_scaled, y_train.values.ravel())

# Print the best parameters and the best cross-validation score
print(grid.best_params_)
print(grid.best_score_)
```

# Save the Model

```
import joblib

# Save the trained model to disk
# (if joblib fails to import, install it from the terminal/git-bash)
filename = 'finalized_Plant_model1.sav'
joblib.dump(model, filename)

# Load the model from disk and re-score it on the scaled test set
loaded_model = joblib.load(filename)
result = loaded_model.score(X_test_scaled, y_test)
print(result)
```
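One design note on the search above: because the scaler is fit on the full training set before cross-validation, each fold sees statistics computed partly from its own validation rows. A common alternative is to wrap the scaler and the SVC in a single `Pipeline` so scaling is redone inside every fold. A minimal sketch under that assumption, reusing `X_train`/`y_train` from the split above (and assuming the features are numerically encoded, since the `code_numeric` mapping earlier is applied to `loan_features_df` only after `X` and `y` are created):

```
# Scaling happens inside each CV fold when the scaler is part of the pipeline,
# and the grid refers to the SVC step via the 'svc__' prefix.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

pipe = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
pipe_param_grid = {'svc__C': [1, 2, 10, 50],
                   'svc__gamma': [0.0001, 0.0005, 0.001, 0.005]}

pipe_grid = GridSearchCV(pipe, pipe_param_grid, verbose=1)
pipe_grid.fit(X_train, y_train.values.ravel())

print(pipe_grid.best_params_)
print(pipe_grid.best_score_)
```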
github_jupyter
## Dependencies ``` import glob import numpy as np import pandas as pd from transformers import TFDistilBertModel from tokenizers import BertWordPieceTokenizer import tensorflow as tf from tensorflow.keras.models import Model from tensorflow.keras.layers import Dense, Input, Dropout, GlobalAveragePooling1D, Concatenate # Auxiliary functions # Transformer inputs def preprocess_test(text, context, tokenizer, max_seq_len): context_encoded = tokenizer.encode(context) context_encoded = context_encoded.ids[1:-1] encoded = tokenizer.encode(text) encoded.pad(max_seq_len) encoded.truncate(max_seq_len) input_ids = encoded.ids attention_mask = encoded.attention_mask token_type_ids = ([0] * 3) + ([1] * (max_seq_len - 3)) input_ids = [101] + context_encoded + [102] + input_ids # update input ids and attentions masks size input_ids = input_ids[:-3] attention_mask = [1] * 3 + attention_mask[:-3] x = [np.asarray(input_ids, dtype=np.int32), np.asarray(attention_mask, dtype=np.int32), np.asarray(token_type_ids, dtype=np.int32)] return x def get_data_test(df, tokenizer, MAX_LEN): x_input_ids = [] x_attention_masks = [] x_token_type_ids = [] for row in df.itertuples(): x = preprocess_test(getattr(row, "text"), getattr(row, "sentiment"), tokenizer, MAX_LEN) x_input_ids.append(x[0]) x_attention_masks.append(x[1]) x_token_type_ids.append(x[2]) x_data = [np.asarray(x_input_ids), np.asarray(x_attention_masks), np.asarray(x_token_type_ids)] return x_data def decode(pred_start, pred_end, text, tokenizer): offset = tokenizer.encode(text).offsets if pred_end >= len(offset): pred_end = len(offset)-1 decoded_text = "" for i in range(pred_start, pred_end+1): decoded_text += text[offset[i][0]:offset[i][1]] if (i+1) < len(offset) and offset[i][1] < offset[i+1][0]: decoded_text += " " return decoded_text ``` # Load data ``` test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv') print('Test samples: %s' % len(test)) display(test.head()) ``` # Model parameters ``` MAX_LEN = 128 base_path = '/kaggle/input/qa-transformers/distilbert/' base_model_path = base_path + 'distilbert-base-uncased-distilled-squad-tf_model.h5' config_path = base_path + 'distilbert-base-uncased-distilled-squad-config.json' input_base_path = '/kaggle/input/7-tweet-train-distilbert-lower-lower-v2/' tokenizer_path = input_base_path + 'vocab.txt' model_path_list = glob.glob(input_base_path + '*.h5') model_path_list.sort() print('Models to predict:') print(*model_path_list, sep = "\n") ``` # Tokenizer ``` tokenizer = BertWordPieceTokenizer(tokenizer_path , lowercase=True) ``` # Pre process ``` test['text'].fillna('', inplace=True) test["text"] = test["text"].apply(lambda x: x.lower()) x_test = get_data_test(test, tokenizer, MAX_LEN) ``` # Model ``` def model_fn(): input_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids') attention_mask = Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask') token_type_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='token_type_ids') base_model = TFDistilBertModel.from_pretrained(base_model_path, config=config_path, name="base_model") sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids}) last_state = sequence_output[0] x = GlobalAveragePooling1D()(last_state) y_start = Dense(MAX_LEN, activation='sigmoid', name='y_start')(x) y_end = Dense(MAX_LEN, activation='sigmoid', name='y_end')(x) model = Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[y_start, y_end]) return model ``` # Make predictions ``` 
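# Average the predicted start/end position scores over every checkpoint in
# `model_path_list`, i.e. a simple equal-weight ensemble of the fine-tuned folds.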
NUM_TEST_IMAGES = len(test) test_start_preds = np.zeros((NUM_TEST_IMAGES, MAX_LEN)) test_end_preds = np.zeros((NUM_TEST_IMAGES, MAX_LEN)) for model_path in model_path_list: print(model_path) model = model_fn() model.load_weights(model_path) test_preds = model.predict(x_test) test_start_preds += test_preds[0] / len(model_path_list) test_end_preds += test_preds[1] / len(model_path_list) ``` # Post process ``` test['start'] = test_start_preds.argmax(axis=-1) test['end'] = test_end_preds.argmax(axis=-1) test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], tokenizer), axis=1) ``` # Test set predictions ``` submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv') submission['selected_text'] = test["selected_text"] submission.to_csv('submission.csv', index=False) submission.head(10) ```
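Before submitting, a quick spot check of a few decoded spans against the lower-cased tweets can catch obvious offset problems. A small sketch using the `test` dataframe built above:

```
# Print a handful of tweets next to the span the model selected for them
for _, row in test[['sentiment', 'text', 'selected_text']].head(5).iterrows():
    print(row['sentiment'], '|', repr(row['text']))
    print('   selected ->', repr(row['selected_text']))
    print()
```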
github_jupyter
## Importing necessary libraries

```
import snscrape.modules.twitter as sntwitter
import pandas as pd
import itertools
import plotly.graph_objects as go
from datetime import datetime
```

## Creating a data frame called "df" for storing the data to be scraped. Here, "2019 elections" was the search keyword.

```
df = pd.DataFrame(itertools.islice(sntwitter.TwitterSearchScraper(
    '"2019 elections"').get_items(), 5000000))
```

## Reading the column names from the dataframe to check the attributes

```
df.columns
```

## Calculating the time taken to scrape the 5000000 tweets

Here our search parameters are modified to search for tweets around Abuja within __2017-01-01 to 2021-10-23__ using the keyword __2019 elections__.

__NB:__ we set the number of results to be returned to __5000000__ so we can get as many results (tweets) as possible.

```
# Set start time
start_time = datetime.now()

# Creating a dataframe called 'data' and storing the tweets
data = pd.DataFrame(itertools.islice(sntwitter.TwitterSearchScraper(
    '"2019 elections near:Abuja since:2017-01-01 until:2021-10-23"').get_items(), 5000000))

# Set end time
end_time = datetime.now()

# Printing the time taken to scrape these tweets
print('Duration: {}'.format(end_time - start_time))

# Keeping only date, id, content, username, and url in a dataframe called 'df'
df = data[['date', 'id', 'content', 'username', 'url']]

# If you don't have the transformers library installed, install it with:
# !pip install transformers
# (remember to remove the leading # in front of "pip install transformers")

# Importing the pipeline from Transformers
from transformers import pipeline

sentiment_classifier = pipeline('sentiment-analysis')

# Taking only 1000000 (20%) records and creating a new dataframe called df1
df1 = df.head(1000000)

# Passing the tweets into the sentiment pipeline and extracting the sentiment score and label
df1 = (df1.assign(sentiment = lambda x: x['content'].apply(lambda s: sentiment_classifier(s)))
       .assign(
    label = lambda x: x['sentiment'].apply(lambda s: (s[0]['label'])),
    score = lambda x: x['sentiment'].apply(lambda s: (s[0]['score']))))

df1.head()

# Checking the 1000th tweet, to see whether its sentiment label is "positive" or "negative"
df1['content'][1000]

# Visualizing the sentiments
fig = go.Figure()
fig.add_trace(go.Bar(x = df1["score"], y = df1["label"], orientation = "h"))
# set orientation to horizontal because we want to flip the x and y-axis
fig.update_layout(plot_bgcolor = "white")

fig.show()

# Taking the entire 5000000 (100%) records and creating a new dataframe called df2
df2 = df

# Passing the tweets into the sentiment pipeline and extracting the sentiment score and label
df2 = (df2.assign(sentiment = lambda x: x['content'].apply(lambda s: sentiment_classifier(s)))
       .assign(
    label = lambda x: x['sentiment'].apply(lambda s: (s[0]['label'])),
    score = lambda x: x['sentiment'].apply(lambda s: (s[0]['score']))))

df2.head()

# Visualizing the sentiments
fig1 = go.Figure()
fig1.add_trace(go.Bar(x = df2["score"], y = df2["label"], orientation = "h"))
# set orientation to horizontal because we want to flip the x and y-axis
fig1.update_layout(plot_bgcolor = "white")

fig1.show()

df2.to_csv('Abj-Elect-Tweets-Sentiment.csv', index=True)

df1.to_csv('Abj-Elect-Tweets-Sentiment1.csv', index=True)
```
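With millions of rows, a bar per tweet is hard to read; a rough aggregated view, assuming the `label` and `score` columns created above, counts tweets per sentiment class first:

```
# Count tweets per sentiment class and plot the counts instead of one bar per tweet
label_counts = df2['label'].value_counts()

fig2 = go.Figure()
fig2.add_trace(go.Bar(x = label_counts.values, y = label_counts.index, orientation = "h"))
fig2.update_layout(plot_bgcolor = "white")
fig2.show()
```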
github_jupyter
# Machine Learning Textbook, 3rd Edition

# HalvingGridSearchCV

### Warning: this notebook requires scikit-learn 0.24 or later.

```
# When running on Colab, install the latest version of scikit-learn.
!pip install --upgrade scikit-learn

import pandas as pd

df = pd.read_csv('https://archive.ics.uci.edu/ml/'
                 'machine-learning-databases'
                 '/breast-cancer-wisconsin/wdbc.data', header=None)

from sklearn.preprocessing import LabelEncoder

X = df.loc[:, 2:].values
y = df.loc[:, 1].values
le = LabelEncoder()
y = le.fit_transform(y)

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = \
    train_test_split(X, y,
                     test_size=0.20,
                     stratify=y,
                     random_state=1)
```

For comparison, print the result of running `GridSearchCV`.

```
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
import numpy as np

pipe_svc = make_pipeline(StandardScaler(),
                         SVC(random_state=1))

param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]

param_grid = [{'svc__C': param_range,
               'svc__kernel': ['linear']},
              {'svc__C': param_range,
               'svc__gamma': param_range,
               'svc__kernel': ['rbf']}]

gs = GridSearchCV(estimator=pipe_svc,
                  param_grid=param_grid,
                  cv=10,
                  n_jobs=-1)
gs = gs.fit(X_train, y_train)
print(gs.best_score_)
print(gs.best_params_)

print(np.sum(gs.cv_results_['mean_fit_time']))
```

`HalvingGridSearchCV`, added in scikit-learn 0.24, runs every parameter combination with a limited amount of resources, then picks the best candidates and gives them more resources, repeating the search iteratively. This approach is called SH (successive halving). The `resource` parameter of `HalvingGridSearchCV` defines the resource that is increased at each iteration. The default is `'n_samples'`, the number of samples. You can instead specify any parameter of the model being searched that takes a positive integer value, for example `n_estimators` of a random forest.

The `factor` parameter sets the proportion of candidates selected at each iteration. The default is 3, so only the best-performing third of the candidates is passed on to the next iteration. The `max_resources` parameter sets the maximum resources each candidate may use. The default is `'auto'`, which equals the number of samples when `resource='n_samples'`.

`min_resources` sets the minimum resources each candidate uses in the first iteration. With `resource='n_samples'` and `min_resources='smallest'`, this is `cv` $\times$ 2 for regression and `cv` $\times$ (number of classes) $\times$ 2 for classification; otherwise it is 1. With `min_resources='exhaust'`, it is the larger of the value just described and the quotient of `max_resources` divided by `factor`\*\*`n_required_iterations`. The default is `'exhaust'` (`n_required_iterations` is $\text{log}_{factor}(\text{number of candidates}) + 1$).

Finally, setting the `aggressive_elimination` parameter to `True` runs several early iterations without increasing the resources, so that no more than `factor` candidates remain at the last iteration. The default is `False`.

Because `HalvingGridSearchCV` is still experimental, you must import `enable_halving_search_cv` from the `sklearn.experimental` package before using it. Setting `verbose=1` lets you follow each iteration in detail.

```
from sklearn.experimental import enable_halving_search_cv
from sklearn.model_selection import HalvingGridSearchCV

hgs = HalvingGridSearchCV(estimator=pipe_svc,
                          param_grid=param_grid,
                          cv=10,
                          n_jobs=-1, verbose=1)
hgs = hgs.fit(X_train, y_train)
print(hgs.best_score_)
print(hgs.best_params_)
```

Looking at the output, the first iteration (iter: 0) cross-validates 72 candidates on 40 samples. From these, 72/3 = 24 candidates are selected for the second iteration (iter: 1), which uses 40 * 3 = 120 samples. In the same way, the third iteration (iter: 2) evaluates 8 candidates on 360 samples. The final score of 98.3% is slightly lower than that of `GridSearchCV`, and the parameter combination found is different as well.

Over the three iterations `HalvingGridSearchCV` performed 104 cross-validation runs in total. The time each one took is stored in `mean_fit_time` of the `cv_results_` attribute; comparing this with `GridSearchCV` shows it is more than five times faster.

```
print(np.sum(hgs.cv_results_['mean_fit_time']))
```

The number of samples and the number of candidates used at each iteration are stored in the `n_resources_` and `n_candidates_` attributes, respectively.

```
print('Resource list:', hgs.n_resources_)
print('Candidate list:', hgs.n_candidates_)
```
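The per-iteration bookkeeping is also exposed through `cv_results_`. A short sketch, assuming the fitted `hgs` object above and the `iter` and `n_resources` columns that the successive-halving estimators add to their results:

```
# Tabulate candidates, resources and the best mean test score per halving iteration
import pandas as pd

cv_df = pd.DataFrame(hgs.cv_results_)
print(cv_df.groupby('iter').agg(
    n_candidates=('params', 'size'),
    n_resources=('n_resources', 'max'),
    best_mean_test_score=('mean_test_score', 'max')))
```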
github_jupyter
# RadiusNeighborsRegressor with MinMaxScaler & Polynomial Features **This Code template is for the regression analysis using a RadiusNeighbors Regression and the feature rescaling technique MinMaxScaler along with Polynomial Features as a feature transformation technique in a pipeline** ### Required Packages ``` import warnings as wr import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.preprocessing import LabelEncoder from sklearn.pipeline import make_pipeline from sklearn.preprocessing import MinMaxScaler,PolynomialFeatures from sklearn.model_selection import train_test_split from sklearn.neighbors import RadiusNeighborsRegressor from sklearn.metrics import mean_squared_error, r2_score,mean_absolute_error wr.filterwarnings('ignore') ``` ### Initialization Filepath of CSV file ``` #filepath file_path= "" ``` List of features which are required for model training . ``` #x_values features=[] ``` Target feature for prediction. ``` #y_value target='' ``` ### Data Fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry. ``` df=pd.read_csv(file_path) #reading file df.head()#displaying initial entries print('Number of rows are :',df.shape[0], ',and number of columns are :',df.shape[1]) df.columns.tolist() ``` ### Data Preprocessing Since the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes. ``` def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) ``` #### Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ``` plt.figure(figsize = (15, 10)) corr = df.corr() mask = np.triu(np.ones_like(corr, dtype = bool)) sns.heatmap(corr, mask = mask, linewidths = 1, annot = True, fmt = ".2f") plt.show() correlation = df[df.columns[1:]].corr()[target][:] correlation ``` ### Feature Selections It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and target/outcome to Y. ``` #spliting data into X(features) and Y(Target) X=df[features] Y=df[target] ``` Calling preprocessing functions on the feature and target set. ``` x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=NullClearner(Y) X.head() ``` ### Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data. 
```
# we can choose random_state and test_size as per our requirement
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 1) # performing the data split
```
### Data Scaling
**Used MinMaxScaler**
* Transform features by scaling each feature to a given range.
* This estimator scales and translates each feature individually such that it is in the given range on the training set, e.g. between zero and one.

### Feature Transformation
**PolynomialFeatures:**
* Generate polynomial and interaction features.
* Generate a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree.

## Model
**RadiusNeighborsRegressor**

RadiusNeighborsRegressor implements learning based on the neighbors within a fixed radius of the query point, where the radius is a floating-point value specified by the user.

**Tuning parameters:**
* **radius:** Range of parameter space to use by default for radius_neighbors queries.
* **algorithm:** Algorithm used to compute the nearest neighbors.
* **leaf_size:** Leaf size passed to BallTree or KDTree.
* **p:** Power parameter for the Minkowski metric.
* **metric:** The distance metric to use for the tree.
* **weights:** Weight function used in prediction.
```
#training the RadiusNeighborsRegressor
model = make_pipeline(MinMaxScaler(),PolynomialFeatures(),RadiusNeighborsRegressor(radius=1.5))
model.fit(X_train,y_train)
```
#### Model Accuracy
For a regressor, the score() method returns the coefficient of determination (R²) of the prediction on the given test data and target, not a classification accuracy. The closer the value is to 1 (100% below), the more of the variance in the target the model explains.
```
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))

#prediction on testing set
prediction=model.predict(X_test)
```
### Model evaluation
**r2_score:** The r2_score function computes the coefficient of determination, i.e. the proportion of the variance in the target that is explained by our model.

**MAE:** The mean absolute error function calculates the average absolute distance between the real data and the predicted data.

**MSE:** The mean squared error function squares the errors before averaging them, which penalizes the model more heavily for large errors.
```
print('Mean Absolute Error:', mean_absolute_error(y_test, prediction))
print('Mean Squared Error:', mean_squared_error(y_test, prediction))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, prediction)))
print("R-squared score : ",r2_score(y_test,prediction))

#plotting actual and predicted values
red = plt.scatter(np.arange(0,80,5),prediction[0:80:5],color = "red")
green = plt.scatter(np.arange(0,80,5),y_test[0:80:5],color = "green")
plt.title("Comparison of Regression Algorithms")
plt.xlabel("Index of Candidate")
plt.ylabel("target")
plt.legend((red,green),('RadiusNeighborsRegressor', 'REAL'))
plt.show()
```
### Prediction Plot
Finally, we plot the first 20 test observations together with the model's predictions for them, with the record number on the x-axis and the target value on the y-axis.
``` plt.figure(figsize=(10,6)) plt.plot(range(20),y_test[0:20], color = "green") plt.plot(range(20),model.predict(X_test[0:20]), color = "red") plt.legend(["Actual","prediction"]) plt.title("Predicted vs True Value") plt.xlabel("Record number") plt.ylabel(target) plt.show() ``` #### Creator: Vipin Kumar , Github: [Profile](https://github.com/devVipin01)
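As a supplementary note on the tuning parameters listed in the Model section above, the sketch below shows how those options could be passed explicitly inside the same pipeline. The specific values are illustrative assumptions, not tuned recommendations.
```
# Sketch only: the tuning parameters from the Model section written out explicitly.
# The values below are illustrative assumptions, not tuned choices.
tuned_model = make_pipeline(
    MinMaxScaler(),
    PolynomialFeatures(degree=2),
    RadiusNeighborsRegressor(
        radius=1.5,          # neighborhood size used for each query point
        weights='distance',  # closer neighbors contribute more to the prediction
        algorithm='auto',    # let scikit-learn pick BallTree, KDTree or brute force
        leaf_size=30,        # passed to BallTree or KDTree
        p=2,                 # Minkowski power parameter (2 = Euclidean distance)
        metric='minkowski'))
tuned_model.fit(X_train, y_train)
print("R-squared score:", tuned_model.score(X_test, y_test))
```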
github_jupyter
# Read Washington Medicaid Fee Schedules The Washington state Health Care Authority website for fee schedules is [here](http://www.hca.wa.gov/medicaid/rbrvs/Pages/index.aspx). * Fee schedules come in Excel format * Fee schedules are *usually* biannual (January and July) * Publicly available fee schedules go back to January 2011 However, Washington's Medicaid fee schedules are a pain in the ass. They are publicly available as Microsoft Excel files but... * File names are not systematic * They do not read directly into R nicely (using either the `readxl` or `xlsx` packages) * Data lines start at different rows All these issues makes codifying difficult. As a workaround, the following steps were taken. 1. Excel files are saved locally 2. Excel files are converted to CSV 3. CSV files are version controlled in this repository (since they are not large) 4. CSV files are read into R The first 3 steps were done manually. The SHA for the commit of the CSV files is 5bde7f3e33e0c83bdace0ed0cf04553a41a8efb1 (5/5/2016). Step 4 is below. ``` files <- list.files(file.path(getwd(), "Data")) files files <- paste("Data", files, sep="/") ``` ## Physician-Related/Professional Services ``` library(data.table) readFS <- function (f, skip) { require(data.table, quietly=TRUE) for (i in 11:16) {if (grepl(sprintf("%d\\.csv", i), f)) {year <- as.numeric(sprintf("20%d", i))}} for (i in 1:12) { monname <- format(as.Date(sprintf("%d-%d-01", year, i)), format="%B") if (grepl(sprintf("_%02d", i), f) | grepl(tolower(monname), f, ignore.case=TRUE)) { mm <- i } } colClasses <- rep("character", 9) D <- data.table(read.csv(f, header=FALSE, colClasses=colClasses, skip=skip, na.strings=c(""), strip.white=TRUE)) old <- names(D) keep <- c("code_status_indicator", "code", "mod", "nfs_maximum_allowable", "fs_maximum_allowable", "pa_required", "global_days", "comments") if (length(old) > length(keep)) {new <- c(keep, old[(length(keep) + 1):length(old)])} else {new <- keep} setnames(D, old, new) D <- D[, effective_date := as.Date(sprintf("%d-%d-01", year, mm))] D[, c(keep, "effective_date"), with=FALSE] } fs <- rbindlist(list(readFS(file.path(getwd(), "Data/HCA_PREOH_January_1_2013.csv"), 9), readFS(file.path(getwd(), "Data/physician_010114.csv"), 9), readFS(file.path(getwd(), "Data/physician_010115.csv"), 9), readFS(file.path(getwd(), "Data/physician_010116.csv"), 10), readFS(file.path(getwd(), "Data/physician_040115.csv"), 9), readFS(file.path(getwd(), "Data/physician_040116.csv"), 10), readFS(file.path(getwd(), "Data/physician_070114.csv"), 9), readFS(file.path(getwd(), "Data/physician_070115.csv"), 10), readFS(file.path(getwd(), "Data/physician_100115.csv"), 10), readFS(file.path(getwd(), "Data/preoh_010112.csv"), 6), readFS(file.path(getwd(), "Data/preoh_01012011.csv"), 6), readFS(file.path(getwd(), "Data/preoh_070112.csv"), 9), readFS(file.path(getwd(), "Data/preoh_070113.csv"), 9), readFS(file.path(getwd(), "Data/preoh_07012011.csv"), 6))) str(fs) fs[, .N, effective_date][order(effective_date)] head(fs) tail(fs) ``` Rename object ``` fsPhysician <- fs ``` ## Ambulance Transportation ``` library(data.table) f <- file.path(getwd(), "Data/ambulance_transportation_022016.csv") D <- data.table(read.csv(f, header=TRUE, na.strings=c(""), strip.white=TRUE, stringsAsFactors=FALSE)) old <- names(D) new <- c("code_status_indicator", "code", "description", "fs_maximum_allowable", "limits") setnames(D, old, new) D <- D[, fs_maximum_allowable := as.numeric(gsub("[^0-9\\.]", "", fs_maximum_allowable))] D <- D[, effective_date := 
as.Date("2006-07-01")] str(D) D ```
github_jupyter
# Convolutional Neural Networks --- In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below. ![cifar data](https://github.com/lbleal1/deep-learning-v2-pytorch/blob/master/convolutional-neural-networks/cifar-cnn/notebook_ims/cifar_data.png?raw=true) ### Test for [CUDA](http://pytorch.org/docs/stable/cuda.html) Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation. ``` import torch import numpy as np # check if CUDA is available train_on_gpu = torch.cuda.is_available() if not train_on_gpu: print('CUDA is not available. Training on CPU ...') else: print('CUDA is available! Training on GPU ...') ``` --- ## Load the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html) Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. ``` from torchvision import datasets import torchvision.transforms as transforms from torch.utils.data.sampler import SubsetRandomSampler # number of subprocesses to use for data loading num_workers = 0 # how many samples per batch to load batch_size = 20 # percentage of training set to use as validation valid_size = 0.2 # convert data to a normalized torch.FloatTensor transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) ]) # choose the training and test datasets train_data = datasets.CIFAR10('data', train=True, download=True, transform=transform) test_data = datasets.CIFAR10('data', train=False, download=True, transform=transform) # obtain training indices that will be used for validation num_train = len(train_data) indices = list(range(num_train)) np.random.shuffle(indices) split = int(np.floor(valid_size * num_train)) train_idx, valid_idx = indices[split:], indices[:split] # define samplers for obtaining training and validation batches train_sampler = SubsetRandomSampler(train_idx) valid_sampler = SubsetRandomSampler(valid_idx) # prepare data loaders (combine dataset and sampler) train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, sampler=train_sampler, num_workers=num_workers) valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, sampler=valid_sampler, num_workers=num_workers) test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers) # specify the image classes classes = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] ``` ### Visualize a Batch of Training Data ``` import matplotlib.pyplot as plt %matplotlib inline # helper function to un-normalize and display an image def imshow(img): img = img / 2 + 0.5 # unnormalize plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image # obtain one batch of training images dataiter = iter(train_loader) images, labels = dataiter.next() images = images.numpy() # convert images to numpy for display # plot the images in the batch, along with the corresponding labels fig = plt.figure(figsize=(25, 4)) # display 20 images for idx in np.arange(20): ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[]) imshow(images[idx]) ax.set_title(classes[labels[idx]]) ``` ### 
View an Image in More Detail Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images. ``` rgb_img = np.squeeze(images[3]) channels = ['red channel', 'green channel', 'blue channel'] fig = plt.figure(figsize = (36, 36)) for idx in np.arange(rgb_img.shape[0]): ax = fig.add_subplot(1, 3, idx + 1) img = rgb_img[idx] ax.imshow(img, cmap='gray') ax.set_title(channels[idx]) width, height = img.shape thresh = img.max()/2.5 for x in range(width): for y in range(height): val = round(img[x][y],2) if img[x][y] !=0 else 0 ax.annotate(str(val), xy=(y,x), horizontalalignment='center', verticalalignment='center', size=8, color='white' if img[x][y]<thresh else 'black') ``` --- ## Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html) This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following: * [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as stack of filtered images. * [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer. * The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output. A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. ![layer conv](https://github.com/lbleal1/deep-learning-v2-pytorch/blob/master/convolutional-neural-networks/cifar-cnn/notebook_ims/2_layer_conv.png?raw=true) #### TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior. The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. #### Output volume for a convolutional layer To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)): > We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output. 
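To make the formula concrete, here is a small helper function (an illustrative addition, not part of the starter code) that evaluates `(W−F+2P)/S+1` for the examples above and for the 32x32 CIFAR-10 input used in this notebook.
```
# Helper to sanity-check convolutional layer sizes with (W - F + 2P) / S + 1.
# Purely illustrative; the calls below mirror the examples in the text.
def conv_output_size(W, F, S=1, P=0):
    return (W - F + 2 * P) // S + 1

print(conv_output_size(7, 3, S=1, P=0))   # 5, as in the example above
print(conv_output_size(7, 3, S=2, P=0))   # 3, as in the example above
print(conv_output_size(32, 3, S=1, P=1))  # 32: a 3x3 conv with padding 1 preserves the size
```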
``` import torch.nn as nn import torch.nn.functional as F # define the CNN architecture class Net(nn.Module): def __init__(self): super(Net, self).__init__() # convolutional layer (sees 32x32x3 image tensor) self.conv1 = nn.Conv2d(3, 16, 3, padding=1) # convolutional layer (sees 16x16x16 tensor) self.conv2 = nn.Conv2d(16, 32, 3, padding=1) self.pool = nn.MaxPool2d(2, 2) # convolutional layer (sees 8x8x32 tensor) self.conv3 = nn.Conv2d(32, 64, 3, padding=1) # max pooling layer self.pool = nn.MaxPool2d(2, 2) # linear layer (64 * 4 * 4 -> 500) self.fc1 = nn.Linear(64 * 4 * 4, 500) # linear layer (500 -> 10) self.fc2 = nn.Linear(500, 10) # dropout layer (p=0.25) self.dropout = nn.Dropout(0.25) def forward(self, x): # add sequence of convolutional and max pooling layers x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = self.pool(F.relu(self.conv3(x))) # flatten image input x = x.view(-1, 64 * 4 * 4) # add dropout layer x = self.dropout(x) # add 1st hidden layer, with relu activation function x = F.relu(self.fc1(x)) # add dropout layer x = self.dropout(x) # add 2nd hidden layer, with relu activation function x = self.fc2(x) return x # create a complete CNN model = Net() print(model) # move tensors to GPU if CUDA is available if train_on_gpu: model.cuda() ``` ### Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html) Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above, may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. #### TODO: Define the loss and optimizer and see how these choices change the loss over time. ``` import torch.optim as optim # specify loss function criterion = torch.nn.CrossEntropyLoss() # specify optimizer optimizer = optim.Adam(model.parameters(), lr=0.005) ``` --- ## Train the Network Remember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting. 
```
# number of epochs to train the model
n_epochs = 15 # you may increase this number to train a final model

valid_loss_min = np.Inf # track change in validation loss

train_losses, valid_losses = [], []

for epoch in range(1, n_epochs+1):

    # keep track of training and validation loss
    train_loss = 0.0
    valid_loss = 0.0

    ###################
    # train the model #
    ###################
    model.train()
    for data, target in train_loader:
        # move tensors to GPU if CUDA is available
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        # clear the gradients of all optimized variables
        optimizer.zero_grad()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the batch loss
        loss = criterion(output, target)
        # backward pass: compute gradient of the loss with respect to model parameters
        loss.backward()
        # perform a single optimization step (parameter update)
        optimizer.step()
        # update training loss
        train_loss += loss.item()*data.size(0)

    ######################
    # validate the model #
    ######################
    model.eval()
    for data, target in valid_loader:
        # move tensors to GPU if CUDA is available
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the batch loss
        loss = criterion(output, target)
        # update average validation loss
        valid_loss += loss.item()*data.size(0)

    # calculate average losses
    train_loss = train_loss/len(train_loader.dataset)
    valid_loss = valid_loss/len(valid_loader.dataset)

    # record the per-epoch averages (already averaged per sample above)
    train_losses.append(train_loss)
    valid_losses.append(valid_loss)

    # print training/validation statistics
    print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
        epoch, train_loss, valid_loss))

    # save model if validation loss has decreased
    if valid_loss <= valid_loss_min:
        print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
            valid_loss_min, valid_loss))
        torch.save(model.state_dict(), 'model_cifar.pt')
        valid_loss_min = valid_loss
```
### Load the Model with the Lowest Validation Loss
```
model.load_state_dict(torch.load('model_cifar.pt'))
```
---
## Test the Trained Network
Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
```
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0.
for i in range(10)) model.eval() # iterate over test data for data, target in test_loader: # move tensors to GPU if CUDA is available if train_on_gpu: data, target = data.cuda(), target.cuda() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the batch loss loss = criterion(output, target) # update test loss test_loss += loss.item()*data.size(0) # convert output probabilities to predicted class _, pred = torch.max(output, 1) # compare predictions to true label correct_tensor = pred.eq(target.data.view_as(pred)) correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy()) # calculate test accuracy for each object class for i in range(batch_size): label = target.data[i] class_correct[label] += correct[i].item() class_total[label] += 1 # average test loss test_loss = test_loss/len(test_loader.dataset) print('Test Loss: {:.6f}\n'.format(test_loss)) for i in range(10): if class_total[i] > 0: print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % ( classes[i], 100 * class_correct[i] / class_total[i], np.sum(class_correct[i]), np.sum(class_total[i]))) else: print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i])) print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % ( 100. * np.sum(class_correct) / np.sum(class_total), np.sum(class_correct), np.sum(class_total))) ``` ### Question: What are your model's weaknesses and how might they be improved? **Answer**: (double-click to edit and add an answer) ### Visualize Sample Test Results ``` # obtain one batch of test images dataiter = iter(test_loader) images, labels = dataiter.next() images.numpy() # move model inputs to cuda, if GPU available if train_on_gpu: images = images.cuda() # get sample outputs output = model(images) # convert output probabilities to predicted class _, preds_tensor = torch.max(output, 1) preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy()) # plot the images in the batch, along with predicted and true labels fig = plt.figure(figsize=(25, 4)) for idx in np.arange(20): ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[]) imshow(images.cpu()[idx]) ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]), color=("green" if preds[idx]==labels[idx].item() else "red")) ```
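One way to dig into the question about the model's weaknesses, beyond the per-class accuracy printed above, is to look at a confusion matrix. The sketch below is an optional addition, not part of the original exercise; it reuses the `model`, `test_loader` and `train_on_gpu` objects defined earlier.
```
# Optional sketch: a confusion matrix over the test set shows which classes get
# mistaken for which (rows = true class, columns = predicted class).
confusion = np.zeros((10, 10), dtype=int)
model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data = data.cuda()
        preds = model(data).argmax(dim=1).cpu()
        for t, p in zip(target, preds):
            confusion[t.item(), p.item()] += 1

print(confusion)
```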
github_jupyter
# **CatBoost**
### Based on the notebook from the "CatBoost on Big Data" (CatBoost на больших данных) webinar by the Karpov.Courses channel, presented by Alexander Savchenko

Repository with the source code: https://github.com/AlexKbit/pyspark-catboost-example
```
%%capture
!pip install pyspark==3.0.3

from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StringIndexer
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.sql import SparkSession
from pyspark.sql import DataFrame
from pyspark.sql.functions import col
from pyspark.sql.types import StructField, StructType

spark = SparkSession.builder\
    .master('local[*]')\
    .appName('CatBoostWithSpark')\
    .config("spark.jars.packages", "ai.catboost:catboost-spark_3.0_2.12:1.0.3")\
    .config("spark.executor.cores", "2")\
    .config("spark.task.cpus", "2")\
    .config("spark.driver.memory", "2g")\
    .config("spark.driver.memoryOverhead", "2g")\
    .config("spark.executor.memory", "2g")\
    .config("spark.executor.memoryOverhead", "2g")\
    .getOrCreate()
spark

import catboost_spark

schema_dataset = "col1 String, col2 String, col3 Double, col4 Double, col5 Double, target Integer"
df = spark.read.csv('/content/data.csv',sep=',',header=True,schema = schema_dataset)
df.printSchema()
print(df.describe().show())
print(df.show(7))

TARGET_LABEL = 'target'
evaluator = MulticlassClassificationEvaluator(
    labelCol=TARGET_LABEL,
    predictionCol="prediction",
    metricName='f1')

train_df, test_df = df.randomSplit([0.75, 0.25])
```
### Train CatBoost with Pool
```
col1_indexer = StringIndexer(inputCol='col1', outputCol="col1_index")
col2_indexer = StringIndexer(inputCol='col2', outputCol="col2_index")
features = ["col1_index", "col2_index", "col3", "col4", "col5"]
assembler = VectorAssembler(inputCols=features, outputCol='features')

def prepare_vector(df: DataFrame)-> DataFrame:
    result_df = col1_indexer.fit(df).transform(df)
    result_df = col2_indexer.fit(result_df).transform(result_df)
    result_df = assembler.transform(result_df)
    return result_df

train = prepare_vector(train_df)
test = prepare_vector(test_df)
print(train.show(7))

train_pool = catboost_spark.Pool(train.select(['features', TARGET_LABEL]))
train_pool.setLabelCol(TARGET_LABEL)
train_pool.setFeaturesCol('features')

classifier = catboost_spark.CatBoostClassifier(featuresCol='features', labelCol=TARGET_LABEL)
classifier.setIterations(50)
classifier.setDepth(5)

model = classifier.fit(train_pool)
predict = model.transform(test)
print(f'Model F1 = {evaluator.evaluate(predict)}')
print(predict.show(7))

model.saveNativeModel('catboost_native')
model.write().overwrite().save('catboost_spark')
```
### Pipeline model with CatBoost
```
col1_indexer = StringIndexer(inputCol='col1', outputCol="col1_index")
col2_indexer = StringIndexer(inputCol='col2', outputCol="col2_index")
features = ["col1_index", "col2_index", "col3", "col4", "col5"]
assembler = VectorAssembler(inputCols=features, outputCol='features')
classifier = catboost_spark.CatBoostClassifier(featuresCol='features', labelCol=TARGET_LABEL)
classifier.setIterations(50)
classifier.setDepth(5)

pipeline = Pipeline(stages=[col1_indexer, col2_indexer, assembler, classifier])
p_model = pipeline.fit(train_df)

print(test_df.show(7))
predictions = p_model.transform(test_df)
print(predictions.show(7))
print(f'Model F1 = {evaluator.evaluate(predictions)}')

type(p_model)
p_model.write().overwrite().save('catboost_pipeline')
```
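As a follow-up, the pipeline persisted above can be loaded back with the standard Spark ML API. The snippet below is a sketch that assumes the same Spark session and the 'catboost_pipeline' path written in the previous cell.
```
# Sketch: reload the saved pipeline and score data with it.
from pyspark.ml import PipelineModel

restored_model = PipelineModel.load('catboost_pipeline')
restored_predictions = restored_model.transform(test_df)
print(f'Restored model F1 = {evaluator.evaluate(restored_predictions)}')
```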
github_jupyter
---
### Universidad de Costa Rica
#### IE0405 - Modelos Probabilísticos de Señales y Sistemas
---
# `Py4` - *Data manipulation libraries*
> **Pandas**, in particular, is a useful data manipulation library that offers data structures for the analysis of numerical tables and time series. This is an introduction to the `DataFrame` object and other basic features.
---
## The Pandas library
To work with a large amount of data, it is desirable to have a set of tools that lets us perform common operations intuitively and efficiently. Pandas is the default solution for doing this in Python.
This guide is based on ["10 minutes to pandas"](https://pandas.pydata.org/docs/getting_started/10min.html).
```
import numpy as np
import pandas as pd
import datetime
```
---
## 4.1 - `Series`
In Python, a `Series` corresponds to a one-dimensional array that admits several data types (integers, words, floating-point numbers, Python objects, etc.) and whose elements are labeled with an index that the user can define or let Python create by default. So, to create a list of values and let Python label them, the following command is used:
```
s = pd.Series([1, 3, 5, np.nan, "modelos", 8.5])
print(s)
```
Using the numpy command `random.randn` to generate random data for the list, and if you want to assign indices other than the numeric ones, the following command is used:
```
s = pd.Series(np.random.randn(5), index = ['a', 'b', 'c', 'd', 'e'])
s
```
Once the `Series` is created, vector operations can be performed on it, or attributes such as a name can be added, as shown below:
```
d= pd.Series(s+s, name = 'suma')
d
```
---
## 4.2 - `DataFrame`
In Python, a `DataFrame` corresponds to a labeled two-dimensional array, similar to concatenating several `Series`, and it likewise admits several data types, something like a spreadsheet or an SQL table. The assignment of the labels can likewise be decided by the user, and Python will match the values; in case of differences in the sizes of the added lists, it will fill those gaps following common-sense rules. Below is an example with two `Series` of different sizes:
```
d = {'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
     'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df1 = pd.DataFrame(d)
df1
```
These indices can also indicate a timestamp, as shown in the following example:
```
dates = pd.date_range('20200501', periods=6)
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))
df
```
In the same way as `Series`, a `DataFrame` can be assigned like a dictionary, using different data types in each column, as shown below:
```
df2 = pd.DataFrame({'A': 1.,
                    'B': pd.Timestamp('20200521'),
                    'C': pd.Series(1, index=list(range(4)), dtype='float32'),
                    'D': np.array([3] * 4, dtype='int32'),
                    'E': pd.Categorical(["ceviche", "pizza", "nachos", "chifrijo"]),
                    'F': 'foo'})
df2
```
Once initialized, actions such as extracting, deleting and inserting can be performed in the same way as with dictionaries.
An example follows:
```
df2['E']
del df2['C']
df2
df2['A']=pd.Series(np.random.randn(4), index=list(range(4)))
df2['mayorA1']=df2['A']>1
df2
```
---
## 4.3 - Viewing data
In Python, data viewing lets you decide which data you want to see. For example, for the `DataFrame` called `df`, to see the first rows of data the `head` command is used:
```
df.head(2)
```
But if you only want to view the last three rows, the `tail` command is used:
```
df.tail(3)
```
If you only want to view the indices, use:
```
df.index
```
In addition, in the case of a `DataFrame` whose elements share the same data type, it can be transformed into a NumPy-compatible object:
```
df.to_numpy()
```
Even if the `DataFrame` has several data types, the data can also be transferred to a NumPy array:
```
df2.to_numpy()
```
However, if all the elements are of the same type, more functions can be executed, such as a quick review of the main statistical characteristics of each column:
```
df.describe()
```
Or the data can also be reordered using some reference column:
```
df.sort_values(by='B')
```
---
## 4.4 - Selecting data
In Python, selecting data with Pandas is more efficient than the expressions for selecting and getting data in NumPy. For example, to locate a row of data, the `loc` command can be used:
```
df2.loc[2]
```
A range of rows can also be selected at the same time:
```
df[0:3]
```
To get a specific position, the row and the column must be indicated with the `at` command:
```
df.at[dates[2], 'A']
```
The same element can likewise be located by position instead of by index, using the `iloc` command:
```
df.iloc[2, 0]
```
In the same way, the data that satisfy a certain boolean condition can be located:
```
df[df['A']>0]
```
---
## 4.5 - Operations on data
In Python, operations are executed over all the data, returning the output value by rows or by columns. For example, to calculate the statistical mean of the data in each column, the `mean` command is used as follows:
```
df.mean()
```
If instead you want to know the mean of the values by rows, the following variation is used:
```
df.mean(1)
```
Operations such as counting can also be applied to the data:
```
f = pd.Series(np.random.randint(0, 7, size=10))
f
f.value_counts()
```
There are also operations that can be applied to `Series` of words:
```
g = pd.Series(['ARbOL', 'BLanCO', 'AvE', 'BuRRo', np.nan])
g.str.lower()
```
---
## 4.6 - Merging data
In Python, to concatenate data the `concat()` command is used as follows:
```
df = pd.DataFrame(np.random.randn(10, 4))
pieces = [df[:3], df[3:7], df[7:]]
pd.concat(pieces)
```
---
## 4.7 - Grouping data
In Python, grouping refers to:
- Splitting the data into groups based on some criterion.
- Applying a function to each group independently.
- Combining the results into a data structure.
An example of grouping, applying a sum to the data, follows:
```
df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
                   'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
                   'C': np.random.randn(8),
                   'D': np.random.randn(8)})
df
df.groupby('A').sum()
df.groupby(['A', 'B']).sum()
```
---
## 4.8 - Reshaping data
In Python, one way to reshape the data is to compress it with the `stack` command:
```
stacked = df.stack()
stacked
```
The way the data is arranged can also be changed with pivot tables:
```
df=pd.DataFrame({'A': ['one', 'one', 'two', 'three']*3,
                 'B': ['A', 'B', 'C']*4,
                 'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar']*2,
                 'D': np.random.randn(12),
                 'E': np.random.randn(12)})
df
pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
```
---
## 4.9 - Time series
In Python, time series let you generate sequences with a fixed frequency over a span of time, for example:
```
dti = pd.date_range('1-5-2020', periods=3, freq='H')
dti
```
The time can then be converted to a different time zone. First the series is localized to UTC:
```
dti = dti.tz_localize('UTC')
dti
```
Or converted to the United States Pacific time zone:
```
dti.tz_convert('US/Pacific')
```
A time series can also be converted to a particular frequency:
```
idx = pd.date_range('2020-05-01', periods=5, freq='H')
ts = pd.Series(range(len(idx)), index=idx)
ts
ts.resample('2H').mean()
```
---
## 4.10 - Plots
In Python, the standard assignment is used to access the `matplotlib` API commands, with which a `Series` of data can be plotted:
```
import matplotlib.pyplot as plt
plt.close('all')
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/5/2020', periods=1000))
ts = ts.cumsum()
ts.plot()
```
`DataFrame` arrays can also be plotted, so that several curves are drawn in the same figure, as shown below:
```
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=['A', 'B', 'C', 'D'])
df=df.cumsum()
plt.figure()
df.plot()
plt.legend(loc='best')
```
---
## 4.11 - Importing and exporting data
In Python, the data can be written to a CSV file with the following command:
```
df.to_csv('modelos')
```
And its contents can be read back from Python using the command:
```
pd.read_csv('modelos')
```
---
### More information
* [Web page](https://www.google.com/)
* A book or something
* [w3schools](https://www.w3schools.com/python/) tutorial
---
---
**Universidad de Costa Rica**
Facultad de Ingeniería
Escuela de Ingeniería Eléctrica
---
github_jupyter
``` import os import sys import numpy as np import cv2 from data_loader import * from fbs_config import TrainFBSConfig, InferenceFBSConfig from fbs_dataset import FBSDataset from mrcnn import model as modellib from datahandler import DataHandler from sklearn.metrics import f1_score from scipy.ndimage import _ni_support from scipy.ndimage.morphology import distance_transform_edt, binary_erosion,\ generate_binary_structure from tqdm import tqdm from medpy.io import save from math import ceil, floor import skimage.color from skimage.morphology import cube, binary_closing from skimage.measure import label ROOT_DIR = os.path.abspath('../../../') sys.path.append(ROOT_DIR) DEFAULT_LOGS_DIR = os.path.join(ROOT_DIR, 'logs') DEFAULT_MODEL_DIR = os.path.join(DEFAULT_LOGS_DIR, 'mask_rcnn/kfold') kernel = np.ones((5,5),np.uint8) dh = DataHandler() def destiny_directory(dice_score, post_processing = False): if post_processing: pre = './data/eval_pp/mask_rcnn/' else: pre = './data/eval/mask_rcnn/' if dice_score >= 98: return pre + 'dice_98_100/' elif dice_score >= 96: return pre + 'dice_96_98/' elif dice_score >= 94: return pre + 'dice_94_96/' elif dice_score >= 92: return pre + 'dice_92_94/' elif dice_score >= 90: return pre + 'dice_90_92/' elif dice_score >= 88: return pre + 'dice_88_90/' elif dice_score >= 85: return pre + 'dice_85_88' elif dice_score >= 80: return pre + 'dice_80_85/' elif dice_score >= 70: return pre + 'dice_70_80/' elif dice_score >= 60: return pre + 'dice_60_70/' else: return pre + 'dice_less_60' def getFileName(fname): original_name = fname.split('/')[-1] original_name = original_name[:original_name.index('.')] return original_name image_files, mask_files = load_data_files('data/kfold_data/') skf = getKFolds(image_files, mask_files, n=10) kfold_indices = [] for train_index, test_index in skf.split(image_files, mask_files): kfold_indices.append({'train': train_index, 'val': test_index}) def getDataset(val_index): image_val_files = np.take(image_files, val_index) mask_val_files = np.take(mask_files, val_index) val_files = ([image_val_files], [mask_val_files]) dataset_val = FBSDataset() len_dataset_val = dataset_val.load_data(val_files) dataset_val.prepare() return dataset_val def getDiceScore(ground_truth, prediction): #convert to boolean values and flatten ground_truth = np.asarray(ground_truth, dtype=np.bool).flatten() prediction = np.asarray(prediction, dtype=np.bool).flatten() return f1_score(ground_truth, prediction) def hd(result, reference, voxelspacing=None, connectivity=1): hd1 = __surface_distances(result, reference, voxelspacing, connectivity).max() hd2 = __surface_distances(reference, result, voxelspacing, connectivity).max() hd = max(hd1, hd2) return hd def hd95(result, reference, voxelspacing=None, connectivity=1): hd1 = __surface_distances(result, reference, voxelspacing, connectivity) hd2 = __surface_distances(reference, result, voxelspacing, connectivity) hd95 = np.percentile(np.hstack((hd1, hd2)), 95) return hd95 def __surface_distances(result, reference, voxelspacing=None, connectivity=1): result = np.atleast_1d(result.astype(np.bool)) reference = np.atleast_1d(reference.astype(np.bool)) if voxelspacing is not None: voxelspacing = _ni_support._normalize_sequence(voxelspacing, result.ndim) voxelspacing = np.asarray(voxelspacing, dtype=np.float64) if not voxelspacing.flags.contiguous: voxelspacing = voxelspacing.copy() footprint = generate_binary_structure(result.ndim, connectivity) if 0 == np.count_nonzero(result): raise RuntimeError('The first supplied array does 
not contain any binary object.') if 0 == np.count_nonzero(reference): raise RuntimeError('The second supplied array does not contain any binary object.') result_border = result ^ binary_erosion(result, structure=footprint, iterations=1) reference_border = reference ^ binary_erosion(reference, structure=footprint, iterations=1) dt = distance_transform_edt(~reference_border, sampling=voxelspacing) sds = dt[result_border] return sds def evaluateMask(gt_mask, pred_mask): return getDiceScore(gt_mask, pred_mask), hd(gt_mask, pred_mask), hd95(gt_mask, pred_mask) import random def prepareForSaving(image): #image = np.swapaxes(image, -1, 0) image = np.moveaxis(image, 0, -1) return image def predictAll(inferenceFBSConfig, val_indices, post_processing = False): model = modellib.MaskRCNN(mode='inference', config=inferenceFBSConfig, model_dir=DEFAULT_MODEL_DIR) inferenceFBSConfig.display() print(DEFAULT_MODEL_DIR) weights_path = model.find_last() print('Loading weights from %s'%weights_path) model.load_weights(weights_path, by_name=True) dice_scores = [] hd_scores = [] hd95_scores = [] names = [] for image_index in tqdm(val_indices): #for saving fname = getFileName(image_files[image_index]) not_used_full_image, hdr = dh.getImageData(image_files[image_index]) dataset = getDataset(image_index) prediction = [] gt_mask = [] for img_id in dataset.image_ids: image, image_meta, class_ids, bbox, mask = modellib.load_image_gt( dataset, inferenceFBSConfig, img_id, use_mini_mask=False) results = model.detect([image], verbose=0) r = results[0] pred = r['masks'] if(len(pred.shape) > 2 and pred.shape[2] == 0): pred = np.zeros((pred.shape[0],pred.shape[1],1)) if(mask.shape[2] == 0): mask = np.zeros((pred.shape[0],pred.shape[1],1)) pred[pred>=0.5] = 1 pred[pred<0.5] = 0 pred = np.asarray(pred, dtype=np.uint8) pred = cv2.dilate(pred,kernel,iterations = 1) prediction.append(pred) gt_mask.append(mask) pred_mask = np.asarray(prediction) gt_mask = np.asarray(gt_mask) gt_mask = np.squeeze(gt_mask) pred_mask = np.squeeze(pred_mask) if post_processing: pred_mask = binary_closing(pred_mask, cube(2)) try: labels = label(pred_mask) pred_mask = (labels == np.argmax(np.bincount(labels.flat)[1:])+1).astype(int) except: pred_mask = pred_mask pred_mask = np.array(pred_mask, dtype=np.uint16) dice_score, hd_score, hd95_score = evaluateMask(np.squeeze(gt_mask), pred_mask) if dice_score == 0: dice_scores.append(dice_score) hd_scores.append(200) hd95_scores.append(200) names.append(fname) pred_mask = prepareForSaving(pred_mask) save_path = destiny_directory(int_dice_score, post_processing=post_processing) save_path = os.path.join(ROOT_DIR, save_path) save(pred, os.path.join(save_path, fname + '_mask_rcnn_' + str(int_dice_score) + '.nii'), hdr) continue names.append(fname) dice_scores.append(dice_score) hd_scores.append(hd_score) hd95_scores.append(hd95_score) int_dice_score = floor(dice_score * 100) pred_mask = prepareForSaving(pred_mask) save_path = destiny_directory(int_dice_score, post_processing=post_processing) save_path = os.path.join(ROOT_DIR, save_path) save(pred_mask, os.path.join(save_path, fname + '_mask_rcnn_' + str(int_dice_score) + '.nii'), hdr) return dice_scores, hd_scores, hd95_scores, names all_dice = [] all_hd = [] all_hd95 = [] all_names = [] for post_processing in [False, True]: for i in range(10):#len(kfold_indices)): configParams = {'da': True,'tl': True, 'mask_dim': 28, 'wl': True, 'kfold_i': i} trainFBSConfig = TrainFBSConfig(**configParams) inferenceFBSConfig = InferenceFBSConfig(**configParams) 
print(inferenceFBSConfig.display()) dice_scores, hd_scores, hd95_scores, names = predictAll(inferenceFBSConfig, kfold_indices[i]['val'], post_processing = post_processing) print('Finished K%d'%i) all_dice += dice_scores all_hd += hd_scores all_hd95 += hd95_scores all_names.extend(names) if post_processing: report_name = 'data/eval_pp/mask_rcnn/mask_rcnn_report.txt' else: report_name = 'data/eval/mask_rcnn/mask_rcnn_report.txt' report_name = os.path.join(ROOT_DIR, report_name) with open(report_name, 'w+') as f: for i in range(len(all_dice)): f.write("%s, %f, %f, %f\n"%(all_names[i], all_dice[i], all_hd[i], all_hd95[i])) f.write('\n') f.write('Final results for mask_rcnn\n') f.write('dice %f\n'%np.mean(all_dice)) f.write('hd %f\n'%np.mean(all_hd)) f.write('hd95 %f\n'%np.mean(all_hd95)) print('dice') for score in all_dice: print(score) print() print('hd') for score in all_hd: print(score) print() print('hd95') for score in all_hd95: print(score) ```
github_jupyter
<a name="top"></a> <div style="width:1000 px"> <div style="float:right; width:98 px; height:98px;"> <img src="https://raw.githubusercontent.com/Unidata/MetPy/master/src/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;"> </div> <h1>Intermediate NumPy</h1> <h3>Unidata Python Workshop</h3> <div style="clear:both"></div> </div> <hr style="height:2px;"> <div style="float:right; width:250 px"><img src="http://www.contribute.geeksforgeeks.org/wp-content/uploads/numpy-logo1.jpg" alt="NumPy Logo" style="height: 250px;"></div> ### Questions 1. How do we work with the multiple dimensions in a NumPy Array? 1. How can we extract irregular subsets of data? 1. How can we sort an array? ### Objectives 1. <a href="#indexing">Using axes to slice arrays</a> 1. <a href="#boolean">Index arrays using true and false</a> 1. <a href="#integers">Index arrays using arrays of indices</a> <a name="indexing"></a> ## 1. Using axes to slice arrays The solution to the last exercise in the Numpy Basics notebook introduces an important concept when working with NumPy: the axis. This indicates the particular dimension along which a function should operate (provided the function does something taking multiple values and converts to a single value). Let's look at a concrete example with `sum`: ``` # Convention for import to get shortened namespace import numpy as np # Create an array for testing a = np.arange(12).reshape(3, 4) a # This calculates the total of all values in the array np.sum(a) # Keep this in mind: a.shape # Instead, take the sum across the rows: np.sum(a, axis=0) # Or do the same and take the some across columns: np.sum(a, axis=1) ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Finish the code below to calculate advection. The trick is to figure out how to do the summation.</li> </ul> </div> ``` # Synthetic data temp = np.random.randn(100, 50) u = np.random.randn(100, 50) v = np.random.randn(100, 50) # Calculate the gradient components gradx, grady = np.gradient(temp) # Turn into an array of vectors: # axis 0 is x position # axis 1 is y position # axis 2 is the vector components grad_vec = np.dstack([gradx, grady]) print(grad_vec.shape) # Turn wind components into vector wind_vec = np.dstack([u, v]) # Calculate advection, the dot product of wind and the negative of gradient # DON'T USE NUMPY.DOT (doesn't work). Multiply and add. ``` <div class="alert alert-info"> <b>SOLUTION</b> </div> ``` # %load solutions/advection.py ``` <a href="#top">Top</a> <hr style="height:2px;"> <a name="boolean"></a> ## 2. 
Indexing Arrays with Boolean Values Numpy can easily create arrays of boolean values and use those to select certain values to extract from an array ``` # Create some synthetic data representing temperature and wind speed data np.random.seed(19990503) # Make sure we all have the same data temp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) + 50 + 2 * np.random.randn(100)) spd = (np.abs(10 * np.sin(np.linspace(0, 2 * np.pi, 100)) + 10 + 5 * np.random.randn(100))) %matplotlib inline import matplotlib.pyplot as plt plt.plot(temp, 'tab:red') plt.plot(spd, 'tab:blue'); ``` By doing a comparision between a NumPy array and a value, we get an array of values representing the results of the comparison between each element and the value ``` temp > 45 ``` We can take the resulting array and use this to index into the NumPy array and retrieve the values where the result was true ``` print(temp[temp > 45]) ``` So long as the size of the boolean array matches the data, the boolean array can come from anywhere ``` print(temp[spd > 10]) # Make a copy so we don't modify the original data temp2 = temp.copy() # Replace all places where spd is <10 with NaN (not a number) so matplotlib skips it temp2[spd < 10] = np.nan plt.plot(temp2, 'tab:red') ``` Can also combine multiple boolean arrays using the syntax for bitwise operations. **MUST HAVE PARENTHESES** due to operator precedence. ``` print(temp[(temp < 45) & (spd > 10)]) ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Heat index is only defined for temperatures >= 80F and relative humidity values >= 40%. Using the data generated below, use boolean indexing to extract the data where heat index has a valid value.</li> </ul> </div> ``` # Here's the "data" np.random.seed(19990503) # Make sure we all have the same data temp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) + 80 + 2 * np.random.randn(100)) rh = (np.abs(20 * np.cos(np.linspace(0, 4 * np.pi, 100)) + 50 + 5 * np.random.randn(100))) # Create a mask for the two conditions described above # good_heat_index = # Use this mask to grab the temperature and relative humidity values that together # will give good heat index values # temp[] ? # BONUS POINTS: Plot only the data where heat index is defined by # inverting the mask (using `~mask`) and setting invalid values to np.nan ``` <div class="alert alert-info"> <b>SOLUTION</b> </div> ``` # %load solutions/heat_index.py ``` <a href="#top">Top</a> <hr style="height:2px;"> <a name="integers"></a> ## 3. Indexing using arrays of indices You can also use a list or array of indices to extract particular values--this is a natural extension of the regular indexing. For instance, just as we can select the first element: ``` print(temp[0]) ``` We can also extract the first, fifth, and tenth elements: ``` print(temp[[0, 4, 9]]) ``` One of the ways this comes into play is trying to sort numpy arrays using `argsort`. This function returns the indices of the array that give the items in sorted order. So for our temp "data": ``` inds = np.argsort(temp) print(inds) ``` We can use this array of indices to pass into temp to get it in sorted order: ``` print(temp[inds]) ``` Or we can slice `inds` to only give the 10 highest temperatures: ``` ten_highest = inds[-10:] print(temp[ten_highest]) ``` There are other numpy arg functions that return indices for operating: ``` np.*arg*? ``` <a href="#top">Top</a> <hr style="height:2px;">
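As a quick illustration of two of those functions, using the same `temp` data from above:
```
# argmax/argmin return the index of the largest/smallest value
print(np.argmax(temp))  # index of the warmest value
print(np.argmin(temp))  # index of the coldest value
print(temp[np.argmax(temp)], temp.max())  # these two values match
```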
github_jupyter
<h1> 2c. Refactoring to add batching and feature-creation </h1> In this notebook, we continue reading the same small dataset, but refactor our ML pipeline in two small, but significant, ways: <ol> <li> Refactor the input to read data in batches. <li> Refactor the feature creation so that it is not one-to-one with inputs. </ol> The Pandas function in the previous notebook also batched, only after it had read the whole data into memory -- on a large dataset, this won't be an option. ``` import tensorflow as tf import numpy as np import shutil print(tf.__version__) ``` <h2> 1. Refactor the input </h2> Read data created in Lab1a, but this time make it more general and performant. Instead of using Pandas, we will use TensorFlow's Dataset API. ``` CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key'] LABEL_COLUMN = 'fare_amount' DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']] def read_dataset(filename, mode, batch_size = 512): def _input_fn(): def decode_csv(value_column): columns = tf.decode_csv(value_column, record_defaults = DEFAULTS) features = dict(zip(CSV_COLUMNS, columns)) label = features.pop(LABEL_COLUMN) return features, label # Create list of files that match pattern file_list = tf.gfile.Glob(filename) # Create dataset from file list dataset = tf.data.TextLineDataset(file_list).map(decode_csv) if mode == tf.estimator.ModeKeys.TRAIN: num_epochs = None # indefinitely dataset = dataset.shuffle(buffer_size = 10 * batch_size) else: num_epochs = 1 # end-of-input after this dataset = dataset.repeat(num_epochs).batch(batch_size) return dataset.make_one_shot_iterator().get_next() return _input_fn def get_train(): return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN) def get_valid(): return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL) def get_test(): return read_dataset('./taxi-test.csv', mode = tf.estimator.ModeKeys.EVAL) ``` <h2> 2. Refactor the way features are created. </h2> For now, pass these through (same as previous lab). However, refactoring this way will enable us to break the one-to-one relationship between inputs and features. ``` INPUT_COLUMNS = [ tf.feature_column.numeric_column('pickuplon'), tf.feature_column.numeric_column('pickuplat'), tf.feature_column.numeric_column('dropofflat'), tf.feature_column.numeric_column('dropofflon'), tf.feature_column.numeric_column('passengers'), ] def add_more_features(feats): # Nothing to add (yet!) return feats feature_cols = add_more_features(INPUT_COLUMNS) ``` <h2> Create and train the model </h2> Note that we train for num_steps * batch_size examples. ``` tf.logging.set_verbosity(tf.logging.INFO) OUTDIR = 'taxi_trained' shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time model = tf.estimator.LinearRegressor( feature_columns = feature_cols, model_dir = OUTDIR) model.train(input_fn = get_train(), steps = 100); # TODO: change the name of input_fn as needed ``` <h3> Evaluate model </h3> As before, evaluate on the validation data. We'll do the third refactoring (to move the evaluation into the training loop) in the next lab. ``` def print_rmse(model, name, input_fn): metrics = model.evaluate(input_fn = input_fn, steps = 1) print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss']))) print_rmse(model, 'validation', get_valid()) ``` Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
github_jupyter
``` import torch torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False import numpy as np import pickle from collections import namedtuple from tqdm import tqdm import torch torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torchvision import torchvision.transforms as transforms from adabound import AdaBound import matplotlib.pyplot as plt transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.MNIST(root='./data_mnist', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=200, shuffle=True, num_workers=4) testset = torchvision.datasets.MNIST(root='./data_mnist', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=200, shuffle=False, num_workers=4) device = 'cuda:0' optim_configs = { '1e-4': { 'optimizer': optim.Adam, 'kwargs': { 'lr': 1e-4, 'weight_decay': 0, 'betas': (0.9, 0.999), 'eps': 1e-08, 'amsgrad': False } }, '5e-3': { 'optimizer': optim.Adam, 'kwargs': { 'lr': 5e-3, 'weight_decay': 0, 'betas': (0.9, 0.999), 'eps': 1e-08, 'amsgrad': False } }, '1e-2': { 'optimizer': optim.Adam, 'kwargs': { 'lr': 1e-2, 'weight_decay': 0, 'betas': (0.9, 0.999), 'eps': 1e-08, 'amsgrad': False } }, '1e-3': { 'optimizer': optim.Adam, 'kwargs': { 'lr': 1e-3, 'weight_decay': 0, 'betas': (0.9, 0.999), 'eps': 1e-08, 'amsgrad': False } }, '5e-4': { 'optimizer': optim.Adam, 'kwargs': { 'lr': 5e-4, 'weight_decay': 0, 'betas': (0.9, 0.999), 'eps': 1e-08, 'amsgrad': False } }, } class MLP(nn.Module): def __init__(self, hidden_size=256): super(MLP, self).__init__() self.fc1 = nn.Linear(28 * 28, hidden_size) self.fc2 = nn.Linear(hidden_size, 10) def forward(self, x): x = x.view(-1, 28 * 28) x = F.relu(self.fc1(x)) x = self.fc2(x) return x criterion = nn.CrossEntropyLoss() hidden_sizes = [256, 512, 1024, 2048] for h_size in hidden_sizes: Stat = namedtuple('Stat', ['losses', 'accs']) train_results = {} test_results = {} for optim_name, optim_config in optim_configs.items(): torch.manual_seed(0) np.random.seed(0) train_results[optim_name] = Stat(losses=[], accs=[]) test_results[optim_name] = Stat(losses=[], accs=[]) net = MLP(hidden_size=h_size).to(device) optimizer = optim_config['optimizer'](net.parameters(), **optim_config['kwargs']) print(optimizer) for epoch in tqdm(range(100)): # loop over the dataset multiple times train_stat = { 'loss': .0, 'correct': 0, 'total': 0 } test_stat = { 'loss': .0, 'correct': 0, 'total': 0 } for i, data in enumerate(trainloader, 0): # get the inputs inputs, labels = data inputs = inputs.to(device) labels = labels.to(device) optimizer.zero_grad() outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() _, predicted = torch.max(outputs, 1) c = (predicted == labels).sum() # calculate train_stat['loss'] += loss.item() train_stat['correct'] += c.item() train_stat['total'] += labels.size()[0] train_results[optim_name].losses.append(train_stat['loss'] / (i + 1)) train_results[optim_name].accs.append(train_stat['correct'] / train_stat['total']) with torch.no_grad(): for i, data in enumerate(testloader, 0): inputs, labels = data inputs = inputs.to(device) labels = labels.to(device) outputs = net(inputs) loss = criterion(outputs, labels) _, predicted = torch.max(outputs, 1) c = (predicted == labels).sum() 
test_stat['loss'] += loss.item() test_stat['correct'] += c.item() test_stat['total'] += labels.size()[0] test_results[optim_name].losses.append(test_stat['loss'] / (i + 1)) test_results[optim_name].accs.append(test_stat['correct'] / test_stat['total']) # Save stat! stat = { 'train': train_results, 'test': test_results } with open(f'adam_stat_mlp_{h_size}.pkl', 'wb') as f: pickle.dump(stat, f) # Plot loss f, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 5)) for optim_name in optim_configs: if 'Bound' in optim_name: ax1.plot(train_results[optim_name].losses, '--', label=optim_name) else: ax1.plot(train_results[optim_name].losses, label=optim_name) ax1.set_ylabel('Training Loss') ax1.set_xlabel('# of Epcoh') ax1.legend() for optim_name in optim_configs: if 'Bound' in optim_name: ax2.plot(test_results[optim_name].losses, '--', label=optim_name) else: ax2.plot(test_results[optim_name].losses, label=optim_name) ax2.set_ylabel('Test Loss') ax2.set_xlabel('# of Epcoh') ax2.legend() plt.suptitle(f'Training Loss and Test Loss for MLP({h_size}) on MNIST', y=1.01) plt.tight_layout() plt.show() # Plot accuracy f, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 5)) for optim_name in optim_configs: if 'Bound' in optim_name: ax1.plot(train_results[optim_name].accs, '--', label=optim_name) else: ax1.plot(train_results[optim_name].accs, label=optim_name) ax1.set_ylabel('Training Accuracy %') ax1.set_xlabel('# of Epcoh') ax1.legend() for optim_name in optim_configs: if 'Bound' in optim_name: ax2.plot(test_results[optim_name].accs, '--', label=optim_name) else: ax2.plot(test_results[optim_name].accs, label=optim_name) ax2.set_ylabel('Test Accuracy %') ax2.set_xlabel('# of Epcoh') ax2.legend() plt.suptitle(f'Training Accuracy and Test Accuracy for MLP({h_size}) on MNIST', y=1.01) plt.tight_layout() plt.show() ```
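As an optional follow-up, the pickled stats can be reloaded to summarise the best test accuracy reached by each learning rate. This is a small sketch, not part of the original run: it assumes the `adam_stat_mlp_*.pkl` files written by the loop above are in the working directory, and it re-declares the `Stat` namedtuple so the pickles can be deserialised.

```
import pickle
from collections import namedtuple

# Must match the namedtuple used when the stats were pickled above.
Stat = namedtuple('Stat', ['losses', 'accs'])

for h_size in [256, 512, 1024, 2048]:
    with open(f'adam_stat_mlp_{h_size}.pkl', 'rb') as f:
        stat = pickle.load(f)
    print(f'MLP({h_size})')
    for lr_name, result in stat['test'].items():
        best_acc = max(result.accs)
        best_epoch = result.accs.index(best_acc)
        print(f'  lr={lr_name}: best test accuracy {best_acc:.4f} at epoch {best_epoch}')
```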
github_jupyter
<img alt="Colaboratory logo" height="45px" src="https://colab.research.google.com/img/colab_favicon.ico" align="left" hspace="10px" vspace="0px"> <h1>Welcome to Colaboratory!</h1> Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. With Colaboratory you can write and execute code, save and share your analyses, and access powerful computing resources, all for free from your browser. ## Running code Code cells can be executed in sequence by pressing Shift-ENTER. Try it now. ``` import math import tensorflow as tf from matplotlib import pyplot as plt print("Tensorflow version " + tf.__version__) a=1 b=2 a+b ``` ## Hidden cells Some cells contain code that is necessary but not interesting for the exercise at hand. These cells will typically be collapsed to let you focus at more interesting pieces of code. If you want to see their contents, double-click the cell. Wether you peek inside or not, **you must run the hidden cells for the code inside to be interpreted**. Try it now, the cell is marked **RUN ME**. ``` #@title "Hidden cell with boring code [RUN ME]" def display_sinusoid(): X = range(180) Y = [math.sin(x/10.0) for x in X] plt.plot(X, Y) display_sinusoid() ``` Did it work ? If not, run the collapsed cell marked **RUN ME** and try again! ## Accelerators Colaboratory offers free GPU and TPU (Tensor Processing Unit) accelerators. You can choose your accelerator in *Runtime > Change runtime type* The cell below is the standard boilerplate code that enables distributed training on GPUs or TPUs in Keras. ``` # Detect hardware try: # detect TPUs tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() # TPU detection strategy = tf.distribute.TPUStrategy(tpu) except ValueError: # detect GPUs strategy = tf.distribute.MirroredStrategy() # for GPU or multi-GPU machines (works on CPU too) #strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU #strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # for clusters of multi-GPU machines # How many accelerators do we have ? print("Number of accelerators: ", strategy.num_replicas_in_sync) # To use the selected distribution strategy: # with strategy.scope: # # --- define your (Keras) model here --- # # For distributed computing, the batch size and learning rate need to be adjusted: # global_batch_size = BATCH_SIZE * strategy.num_replicas_in_sync # num replcas is 8 on a single TPU or N when runing on N GPUs. # learning_rate = LEARNING_RATE * strategy.num_replicas_in_sync ``` ## License --- author: Martin Gorner<br> twitter: @martin_gorner --- Copyright 2020 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --- This is not an official Google product but sample code provided for an educational purpose
github_jupyter
# PyTorch # Intro to Neural Networks Lets use some simple models and try to match some simple problems ``` import numpy as np import torch import torch.nn as nn from tensorboardX import SummaryWriter import matplotlib.pyplot as plt ``` ### Data Loading Before we dive deep into the nerual net, lets take a brief aside to discuss data loading. Pytorch provides a Dataset class which is fairly easy to inherit from. We need only implement two methods for our data load: 9. __len__(self) -> return the size of our dataset 9. __getitem__(self, idx) -> return a data at a given index. The *real* benefit of implimenting a Dataset class comes from using the DataLoader class. For data sets which are too large to fit into memory (or more likely, GPU memory), the DataLoader class gives us two advantages: 9. Efficient shuffling and random sampling for batches 9. Data is loaded in a seperate *processes*. Number (2) above is *important*. The Python interpretter is single threaded only, enforced with a GIL (Global Interpreter Lock). Without (2), we waste valuable (and potentially expensive) processing time shuffling and sampling and building tensors. So lets invest a little time to build a Dataset and use the DataLoader. In or example below, we are going to mock a dataset with a simple function, this time: y = sin(x) + 0.01 * x^2 ``` fun = lambda x: np.sin(x) + 0.01 * x * x X = np.linspace(-3, 3, 100) Y = fun(X) plt.figure(figsize=(7,7)) plt.scatter(X,Y) plt.legend() plt.show() ``` ### Our First Neural Net Lets now build our first neural net. In this case, we'll take a classic approach with 2 fully connected hidden layers and a fully connected output layer. ``` class FirstNet(nn.Module): def __init__(self, input_size, hidden_size, num_classes): super(FirstNet, self).__init__() self.fc1 = nn.Linear(input_size, hidden_size) self.relu = nn.ReLU() self.fc2 = nn.Linear(hidden_size, num_classes) def forward(self, x): x = x.view(-1,1) out = self.fc1(x) out = self.relu(out) out = self.fc2(out) return out net = FirstNet(input_size=1, hidden_size=64, num_classes=1) print(net) ``` Lets look at a few key features of our net: 1) We have 2 fully connected layers, defined in our init function. 2) We define a *forward pass* method which is the prediction of the neural net given an input X 3) Note that we make a *view* of our input array. In our simple model, we expect a 1D X value, and we output a 1D Y value. For efficiency, we may wish to pass in *many* X values, particularly when training. Thus, we need to set up a *view* of our input array: Many 1D X values. -1 in this case indicates that the first dimension (number of X values) is inferred from the tensor's shape. ### Logging and Visualizing to TensorboardX Lets track the progress of our training and visualize in tensorboard (using tensorboardX). We'll also add a few other useful functions to help visualize things. To view the output, run: `tensorboard --logdir nb/run` ``` tbwriter = SummaryWriter() ``` ### Graph Visualization and Batching We will begin by adding a graph visualization to tensorboard. To do this, we need a valid input to our network. Our network is simple - floating point in, floating point out. *However*, pytorch expects us to *batch* our inputs - therefore it expects an *array* of inputs instead of a single input. There are many ways to work around this, I like "unsqueeze". ``` X = torch.FloatTensor([0.0]) tbwriter.add_graph(net, X) ``` ### Cuda IF you have a GPU available, your training will run much faster. 
Moving data back and forth between the CPU and the GPU is fairly straightforward - although it can be easy to forget. ``` use_cuda = torch.cuda.is_available() if use_cuda: net = net.cuda() def makeFig(iteration): X = np.linspace(-3, 3, 100, dtype=np.float32) X = torch.FloatTensor(X) if use_cuda: Y = net.forward(X.cuda()).cpu() else: Y = net.forward(X) fig = plt.figure() plt.plot(X.data.numpy(), Y.data.numpy()) plt.title('Prediciton at iter: {}'.format(iteration)) return fig def showFig(iteration): fig = makeFig(iteration) plt.show() plt.close() def logFig(iteration): fig = makeFig(iteration) fig.canvas.draw() raw = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') raw = raw.reshape(fig.canvas.get_width_height()[::-1] + (3,)) tbwriter.add_image('Prediction at iter: {}'.format(iteration), raw) plt.close() showFig(0) ``` Ok, we have a ways to go. Lets use our data loader and do some training. Here we will use MSE loss (mean squared error) and SGD optimizer. ``` %%time learning_rate = 0.01 num_epochs = 4000 if use_cuda: net = net.cuda() criterion = nn.MSELoss() optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate) net.train() X = np.linspace(-3, 3, 100) Y = fun(X) X = torch.FloatTensor(X) Y = torch.FloatTensor(Y).view(-1,1) if use_cuda: X = X.cuda() Y = Y.cuda() for epoch in range(num_epochs): pred = net.forward(X) loss = criterion(pred, Y) optimizer.zero_grad() loss.backward() optimizer.step() tbwriter.add_scalar("Loss", loss.data[0]) if (epoch % 100 == 99): print("Epoch: {:>4} Loss: {}".format(epoch, loss.data[0])) for name, param in net.named_parameters(): tbwriter.add_histogram(name, param.clone().cpu().data.numpy(), epoch) logFig(epoch) net.eval() showFig(0) ``` ## Conclusions We've written our first network, take a moment and play with some of our models here. Try inputting a different function into the functional dataset, such as: dataset = FunctionalDataset(lambda x: 1.0 if x > 0 else -1.0 Try experimenting with the network - change the number of neurons in the layer, or add more layers. Try changing the learning rate (and probably the number of epochs). And lastly, try disabling cuda (if you have a gpu). #### How well does the prediction match our input function? #### How long does it take to train? One last note: we are absolutely *over-fitting* our dataset here. In this example, that's ok. For real work, we will need to be more careful. Speaking of real work, lets do some real work identifying customer cohorts.
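Before moving on, here is one minimal sketch of the `FunctionalDataset` referenced in the conclusion above, for the (x, f(x)) setup used in this notebook. The class body and its defaults are assumptions for illustration, not the author's exact implementation.

```
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class FunctionalDataset(Dataset):
    """Sample (x, f(x)) pairs on a fixed grid."""
    def __init__(self, fun, low=-3.0, high=3.0, n=100):
        self.x = np.linspace(low, high, n, dtype=np.float32)
        self.y = np.array([fun(x) for x in self.x], dtype=np.float32)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return torch.tensor(self.x[idx]), torch.tensor(self.y[idx])

dataset = FunctionalDataset(lambda x: 1.0 if x > 0 else -1.0)
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=0)

for xb, yb in loader:
    print(xb.shape, yb.shape)   # torch.Size([16]) torch.Size([16])
    break
```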
github_jupyter
<a href="https://colab.research.google.com/github/WuilsonEstacio/Procesamiento-de-lenguaje-natural/blob/main/codigo_para_abrir_y_contar_palabras_de_archivos.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` # para leer un archivo archivo = open('/content/Hash.txt','r') for linea in archivo: print(linea) archivo.close() archivo="/content/Hash.txt" with open(archivo) as f: text=f.read() for char in "abcdefghijklmnopqrsrtuvwxyz": perc=100*count_char(text, char)/len(text) print("{0}-{1}%".format(char, round(perc, 2))) # Which of the following is the correct regular expression to extract all the phone numbers from the following chunk of text: import re patter = '[(]\d{3}[)]\s\d{3}[-]\d{4}' print(patter) re.findall(patter,archivo) #con este codigo se puede contar las palabras que hay en un archivo import numpy as np import pandas as pd def count_char(text, char): count=0 for c in text: if c == char: count +=1 return count # con esto cambiamos el contenido de Hash.txt y modificamos el escrito y lo guardamos file =open("/content/Hash.txt","w") file.write(""" Usted puede interponer demanda ante los jueces civiles del circuito que conocen en primera instancia de los procesos contenciosos de mayor cuantía por responsabilidad médica. Pretendiendo el pago de los perjuicios materiales """) file.close() filename="20-12-2020.txt" with open('/content/Hash.txt') as f: text=f.read() for char in "abcdefghijklmnopqrsrtuvwxyz": perc=100*count_char(text, char)/len(text) print("{0}-{1}%".format(char, round(perc, 2))) import numpy as np import pandas as pd filename=input("ingrese el nombre del archivo: ") with open( filename ) as f: text=f.read() filename = open("20-12-2020.txt","r") for linea in filename.readlines(): #str=filename.read() #print(len(str)) print(linea) filename.close() # importamos librerias import nltk nltk.download('cess_esp') # para preeentener from nltk.corpus import cess_esp as cess from nltk import UnigramTagger as ut # etiquetador por unigramas from nltk import BigramTagger as bt # etiquetador por bigramas # https://www.delftstack.com/es/howto/python-pandas/how-to-load-data-from-txt-with-pandas/#read_csv-m%25C3%25A9todo-para-cargar-los-datos-del-archivo-de-texto # una forma de leer el archivo con pandas import pandas as pd df = pd.read_csv( '/content/Hash.txt', sep=" ",header=None) print(df) # leemos el archivo import pandas as pd import numpy as np archivo = open('/content/Hash.txt','r') for linea in archivo: print(linea) archivo.close() # pip install win_unicode_console # Utilizado para vizualizar caracteres correctamente en consola import codecs import win_unicode_console from nltk.tokenize import sent_tokenize from nltk.tokenize import word_tokenize # Abrimos el archivo archivo = codecs.open('/content/Hash.txt', 'r', encoding='utf-8') texto = "" #Almacenamos el texto en una variable for linea in archivo: linea = linea.strip() texto = texto + " " + linea text = word_tokenize(texto) nltk.pos_tag(text) # etiquetado aplicado al text #Realizamos el Tokenizing con Sent_Tokenize() a cada una de las sentencias del texto # tokens = sent_tokenize(texto) ``` # **Test** 1. Si tenemos un dataset etiquetado donde la categoría adjetivo (ADJ) aparece un total de 500 veces entre todos los tokens, y de esas veces solamente la palabra "noble" le corresponde 200 veces, entonces podemos decir que: La probabilidad de emisión P(noble|ADJ) = 40% 2. 
El proceso mediante el cual un Modelo Markoviano Latente determina la secuencia de etiquetas más probable para una secuencia de palabras es: Usando el algoritmo de Viterbi para obtener la categoría más probable, palabra por palabra. 3. Dada una cadena de texto text en español, el procedimiento para asignar las etiquetas gramaticales con Stanza es a partir de un objeto nlp(text), donde: nlp = stanza.Pipeline('es', processors='tokenize,pos') 4. La ingeniería de atributos se usa para: Construir atributos particulares de palabras y textos que permitan dar un input más apropiado a un modelo de clasificación. 5. El problema de clasificación de texto pertenece a la categoría de Machine Learning supervisado porque: Durante el entrenamiento, el modelo tiene conocimiento de las etiquetas correctas que debería predecir. 6. En un modelo de clasificación por categorías gramaticales, el algoritmo de Viterbi se usa para: El proceso de decodificación: encontrar la secuencia de etiquetas más probable. 7. En un Modelo Markoviano Latente se necesitan los siguientes ingredientes: Matrices de transición, emisión y distribución inicial de estados. 8. En un problema de clasificación de emails entre SPAM y HAM, la métrica de recall tiene la siguiente interpretación: De todos los correos que realmente son SPAM, la fracción que el modelo logró identificar. 9. Para entrenar un clasificador de Naive Bayes en NLTK, se escribe en Python: nltk.NaiveBayesClassifier.train(data) 10. Si tienes un modelo de clasificación binaria que luego de entrenarlo, obtienes que el número de verdaderos positivos es 200 y el número de falsos positivos es 120, entonces la métrica de precisión de dicho modelo tiene un valor de: 200/320 11. Un algoritmo general de clasificación de texto: Es un algoritmo de Machine Learning supervisado. 12. El tokenizador por defecto en NLTK para el idioma inglés es: punkt 13. En una cadena de Markov se necesitan los siguientes elementos: Matriz de transiciones y distribución inicial de estados. 14. Entrenar un Modelo Markoviano Latente significa: Calcular las matrices de probabilidad de transición y emisión con un corpus de textos. 15. Una de las siguientes no es una categoría de ambigüedades del lenguaje: Vectorial 16. El suavizado de Laplace se usa en un algoritmo de clasificación con el objetivo de: Evitar probabilidades nulas y denominadores iguales a cero. 17. El clasificador de Naive Bayes es: Un clasificador probabilístico que hace uso de la regla de Bayes. 18. En la frase: "mi hermano es muy noble", la palabra noble hace referencia a: Un adjetivo 19. Con Naive Bayes preferimos hacer cálculos en espacio logarítmico para: Evitar productos de números demasiado pequeños para la precisión de máquina. 20. En un modelo MEMM: El proceso de decodificación es similar al de un HMM, y por lo tanto se puede usar un tipo de algoritmo de Viterbi. 21. El accuracy de entrenamiento de un modelo se calcula como: (número de veces que el modelo predice la categoría correcta) / (total de datos usados para entrenamiento) 22. Si tenemos una cadena de Markov para describir las probabilidades de transición en cuanto al clima de un dia para otro, y observamos la siguiente secuencia de estados día tras día: (frío, frío, caliente, frío, tibio, caliente, tibio, frío), entonces la probabilidad de transición P(caliente|frío) es: 50% 23. 
En un Modelo Markoviano Latente, el problema de calcular la secuencia de etiquetas más probable se expresa con la siguiente expresión matemática: $${\arg \max}_{(t^n)}\prod_i P(w_i \vert t_i)P(t_i \vert t_{i-1})$$ 24. Para un modelo de clasificación de palabras con Naive Bayes en NLTK, debemos entrenar el algoritmo usando: nltk.NaiveBayesClassifier.train(train_set) donde usamos una funcion que extrae atributos llamada atributos() y: train_set = [(atributos(palabra), categoría de la palabra), ...] 25. Dada una cadena de texto text en inglés, el procedimiento para asignar las etiquetas gramaticales con NLTK es: nltk.pos_tag(word_tokenize(text))
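As a quick complement to answer 25, here is a minimal sketch of English POS tagging with NLTK. The example sentence and the downloaded resources (`punkt`, `averaged_perceptron_tagger`) are assumptions for illustration.

```
import nltk
from nltk.tokenize import word_tokenize

nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

text = "The quick brown fox jumps over the lazy dog"
print(nltk.pos_tag(word_tokenize(text)))
# e.g. [('The', 'DT'), ('quick', 'JJ'), ('brown', 'NN'), ('fox', 'NN'), ('jumps', 'VBZ'), ...]
```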
github_jupyter
``` fuelNeeded = 42/1000 tank1 = 36/1000 tank2 = 6/1000 tank1 + tank2 >= fuelNeeded from decimal import Decimal fN = Decimal(fuelNeeded) t1 = Decimal(tank1) t2 = Decimal(tank2) t1 + t2 >= fN class Rational(object): def __init__ (self, num, denom): self.numerator = num self.denominator = denom def add(self, other): newNumerator = self.numerator * other.denominator + self.denominator * other.numerator newDenominator = self.denominator*other.denominator return Rational(newNumerator, newDenominator) r1 = Rational(36, 1000) r2 = Rational(6, 1000) import numpy as np from mayavi import mlab mlab.init_notebook() s = mlab.test_plot3d() s from numpy import pi, sin, cos, mgrid dphi, dtheta = pi/250.0, pi/250.0 [phi,theta] = mgrid[0:pi+dphi*1.5:dphi,0:2*pi+dtheta*1.5:dtheta] m0 = 4; m1 = 3; m2 = 2; m3 = 3; m4 = 6; m5 = 2; m6 = 6; m7 = 4; r = sin(m0*phi)**m1 + cos(m2*phi)**m3 + sin(m4*theta)**m5 + cos(m6*theta)**m7 x = r*sin(phi)*cos(theta) y = r*cos(phi) z = r*sin(phi)*sin(theta) #对该数据进行三维可视化 s = mlab.mesh(x, y, z) s mlab.savefig('example.png') import numpy as np from mayavi import mlab @mlab.animate(delay = 100) def updateAnimation(): t = 0.0 while True: ball.mlab_source.set(x = np.cos(t), y = np.sin(t), z = 0) t += 0.1 yield ball = mlab.points3d(np.array(1.), np.array(0.), np.array(0.)) updateAnimation() mlab.show() import numpy from mayavi import mlab def lorenz(x, y, z, s=10., r=28., b=8. / 3.): """The Lorenz system.""" u = s * (y - x) v = r * x - y - x * z w = x * y - b * z return u, v, w # Sample the space in an interesting region. x, y, z = numpy.mgrid[-50:50:100j, -50:50:100j, -10:60:70j] u, v, w = lorenz(x, y, z) fig = mlab.figure(size=(400, 300), bgcolor=(0, 0, 0)) # Plot the flow of trajectories with suitable parameters. f = mlab.flow(x, y, z, u, v, w, line_width=3, colormap='Paired') f.module_manager.scalar_lut_manager.reverse_lut = True f.stream_tracer.integration_direction = 'both' f.stream_tracer.maximum_propagation = 200 # Uncomment the following line if you want to hide the seed: #f.seed.widget.enabled = False # Extract the z-velocity from the vectors and plot the 0 level set # hence producing the z-nullcline. src = f.mlab_source.m_data e = mlab.pipeline.extract_vector_components(src) e.component = 'z-component' zc = mlab.pipeline.iso_surface(e, opacity=0.5, contours=[0, ], color=(0.6, 1, 0.2)) # When using transparency, hiding 'backface' triangles often gives better # results zc.actor.property.backface_culling = True # A nice view of the plot. mlab.view(140, 120, 113, [0.65, 1.5, 27]) mlab.savefig('example.png') import numpy as np import mayavi.mlab as mlab import moviepy.editor as mpy ```
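As a small follow-up sketch, the `Rational` class defined earlier in this notebook can answer the fuel question exactly, avoiding the floating-point comparison issue at the top. The cross-multiplication comparison below assumes positive denominators.

```
r1 = Rational(36, 1000)
r2 = Rational(6, 1000)
needed = Rational(42, 1000)

total = r1.add(r2)

# a/b >= c/d  is equivalent to  a*d >= c*b  (for positive denominators)
enough = total.numerator * needed.denominator >= needed.numerator * total.denominator
print(enough)   # True with exact arithmetic
```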
github_jupyter
# Week 7 worksheet: Spherically symmetric parabolic PDEs This worksheet contains a number of exercises covering only the numerical aspects of the course. Some parts, however, still require you to solve the problem by hand, i.e. with pen and paper. The rest needs you to write pythob code. It should usually be obvious which parts require which. #### Suggested reading You will see lists of links to further reading and resources throughout the worksheets, in sections titled **Learn more:**. These will include links to the Python documentation on the topic at hand, or links to relevant book sections or other online resources. Unless explicitly indicated, these are not mandatory reading, although of course we strongly recommend that you consult them! #### Displaying solutions Solutions will be released after the workshop, as a new `.txt` file in the same GitHub repository. After pulling the file to Noteable, **run the following cell** to create clickable buttons under each exercise, which will allow you to reveal the solutions. ## Note: This workbook expects to find a diretory called figures in the same folder as well as the scripts folder. Please make sure you download figures (and the files it contains) from the GitHub. ``` %run scripts/create_widgets.py W07 ``` *How it works: You will see cells located below each exercise, each containing a command starting with `%run scripts/show_solutions.py`. You don't need to run those yourself; the command above runs a script which automatically runs these specific cells for you. The commands in each of these cells each create the button for the corresponding exercise. The Python code to achieve this is contained in `scripts/show_solutions.py`, and relies on [IPython widgets](https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Basics.html) --- feel free to take a look at the code if you are curious.* ``` %%javascript MathJax.Hub.Config({ TeX: { equationNumbers: { autoNumber: "AMS" } } }); ``` ## Exercise 1 $$ \newcommand{\vect}[1]{\bm #1} \newcommand{\grad}{\nabla} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\pdderiv}[2]{\frac{\partial^2 #1}{\partial #2^2}} $$ Consider the spherically symmetric form of the heat conduction equation $$ \pdderiv{u}{r} + \frac{2}{r}\pderiv{u}{r} = \frac1\kappa\pderiv{u}{t} $$ ### Part a) Define $$ v(r,t) = r u(r,t) $$ and show that $v$ satisfies the standard one-dimensional heat conduction equation. What can we expect of a solution as $r\to\infty$? **Remarks:** - The worksheet requires understanding of the material from Analytical methods Part 6: Spherical coordinates - The material is applied in Analytical methods Example 5: Radially symmetric heat conduction example 9.24b ``` %run scripts/show_solutions.py W07_ex1_parta ``` ### Part b) Solve the equation in the annulus $a\le r\le b$ subject to the boundary conditions \begin{align*} u(a,t) &= T_0, \quad & t>0 \\ u(b,t) &= 0, \quad & t>0 \\ u(r,0) &= 0, & a\le r\le b \end{align*} Show that the solution has the form $$ T(r,t) = \frac{a T_0}{r} \left[\frac{b-r}{b-a} - \sum_{N=1}^\infty A_N e^{-\kappa\lambda^2 t} \sin\left(\frac{r-a}{b-a}N\pi\right) \right] $$ where $\lambda(b-a)=N\pi$. Evaluate the Fourier coefficients $A_N$. ``` %run scripts/show_solutions.py W07_ex1_partb ``` ### Part c) Modify the 1D solver from the Explicit-Parabolic Solver workbook so that it is solving the spherically symmetric form of the heat conduction equation, $$ \pdderiv{u}{r} + \frac{2}{r}\pderiv{u}{r} = \frac1\kappa\pderiv{u}{t}. 
$$

Remember that you will need to discretise the first derivative $\pderiv{u}{r}$ using the central 2nd order finite difference approximation and will then need to find the coefficients for the spherical form of the FTCS scheme.

Use this solver to solve the problem on an annulus where $a=0.1$, $b=1$ and $T_0=100$ Celsius. Compare your solution with the analytical solution from part (b).

```
%run scripts/show_solutions.py W07_ex1_partc
```
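For reference, a minimal FTCS sketch for the spherical equation is shown below. It is not the worksheet's official solution (use the solutions script for that), and the grid size, time step and `kappa` value are illustrative assumptions.

```
import numpy as np

a, b, T0, kappa = 0.1, 1.0, 100.0, 1.0
nr = 101
r = np.linspace(a, b, nr)
dr = r[1] - r[0]
dt = 0.4 * dr**2 / kappa          # keep kappa*dt/dr**2 below 1/2 for FTCS stability

u = np.zeros(nr)
u[0], u[-1] = T0, 0.0             # u(a,t) = T0,  u(b,t) = 0,  u(r,0) = 0 inside

for _ in range(20000):            # march long enough to approach the steady state
    u_rr = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dr**2
    u_r = (u[2:] - u[:-2]) / (2.0 * dr)
    u[1:-1] = u[1:-1] + kappa * dt * (u_rr + 2.0 / r[1:-1] * u_r)

u_steady = a * T0 / r * (b - r) / (b - a)   # steady-state limit of the part (b) solution
print('max |numerical - steady state| =', np.max(np.abs(u - u_steady)))
```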
github_jupyter
``` from pymongo import MongoClient import pandas as pd import datetime # Open Database and find history data collection client = MongoClient() db = client.test_database shdaily = db.indexdata # KDJ calculation formula def KDJCalculation(K1, D1, high, low, close): # input last K1, D1, max value, min value and current close value #设定KDJ基期值 #count = 9 #设定k、d平滑因子a、b,不过目前已经约定俗成,固定为1/3 a = 1.0/3 b = 1.0/3 # 取得过去count天的最低价格 low_price = low #low.min() #min(list1) # 取得过去count天的最高价格 high_price = high #high.max() #max(list1) # 取得当日收盘价格 current_close = close if high_price!=low_price: #计算未成熟随机值RSV(n)=(Ct-Ln)/(Hn-Ln)×100 RSV = (current_close-low_price)/(high_price-low_price)*100 else: RSV = 50 #当日K值=(1-a)×前一日K值+a×当日RSV K2=(1-a)*K1+a*RSV #当日D值=(1-a)×前一日D值+a×当日K值 D2=(1-b)*D1+b*K2 #计算J值 J2 = 3*K2-2*D2 #log.info("Daily K1: %s, D1: %s, K2: %s, D2: %s, J2: %s" % (K1,D1,K2,D2,J2)) return K1,D1,K2,D2,J2 # Put the first dataset in # List the data # initial Values K1 = 50 D1 = 50 # for each day, calculate data and insert into db for d in shdaily.find()[:10]: date = d['date'] datalist = pd.DataFrame(list(shdaily.find({'date':{"$lte": date}}).sort('date', -1))) data = datalist[:9] # get previous KDJ data from database K1 = data.ix[1]['KDJ_K'] D1 = data.ix[1]['KDJ_D'] high = data['high'].values low = data['low'].values close = data[:1]['close'].values K1,D1,K2,D2,J2 = KDJCalculation(K1,D1,max(high),min(low),close) d['KDJ_K'] = K2[0] d['KDJ_D'] = D2[0] d['KDJ_J'] = J2[0] # K1 = K2 # D1 = D2 print d #datalist = pd.DataFrame(list(shdaily.find().sort('date', -1))) #date1 = datetime.strptime("01/01/16", "%d/%m/%y") # List out the data before or equal a specific date #list(shdaily.find({'date':{"$lte":'2016-02-08'}}).sort('date', -1)) # Get last day KDJ data from database datalist = pd.DataFrame(list(shdaily.find({'date':{"$lte": '2016-02-10'}}).sort('date', -1))) data = datalist.ix[1] data['KDJ_K'] # Save data to db # data = datalist[:9] # data # K1 = 50 # D1 = 50 # high = data['high'].values # low = data['low'].values # close = data[:1]['close'].values # K1,D1,K2,D2,J2 = KDJCalculation(K1,D1,max(high),min(low),close) # Another KDJ Calculation based on dataframe def CalculateKDJ(stock_data): # Initiate KDJ parameters endday = pd.datetime.today() N1= 9 N2= 3 N3= 3 # Perform calculation #stock_data = get_price(stock, end_date=endday) low_list = pd.rolling_min(stock_data['LowPx'], N1) low_list.fillna(value=pd.expanding_min(stock_data['LowPx']), inplace=True) high_list = pd.rolling_max(stock_data['HighPx'], N1) high_list.fillna(value=pd.expanding_max(stock_data['HighPx']), inplace=True) #rsv = (stock_data['ClosingPx'] - low_list) / (high_list - low_list) * 100 rsv = (stock_data['ClosingPx'] - stock_data['LowPx']) / (stock_data['HighPx'] - stock_data['LowPx']) * 100 stock_data['KDJ_K'] = pd.ewma(rsv, com = N2) stock_data['KDJ_D'] = pd.ewma(stock_data['KDJ_K'], com = N3) stock_data['KDJ_J'] = 3 * stock_data['KDJ_K'] - 2 * stock_data['KDJ_D'] KDJ = stock_data[['KDJ_K','KDJ_D','KDJ_J']] return KDJ ```
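Note that `pd.rolling_min`, `pd.rolling_max` and `pd.ewma` used in `CalculateKDJ` were removed in newer pandas versions. Below is a sketch of the rolling-window KDJ computation that `CalculateKDJ` sets up with `low_list`/`high_list`, rewritten with the current `rolling`/`ewm` API and run on a small synthetic frame (the column names follow `CalculateKDJ`; the random data is only for illustration).

```
import numpy as np
import pandas as pd

# Synthetic price frame with the column names expected by CalculateKDJ.
rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0, 1, 60))
df = pd.DataFrame({
    'ClosingPx': close,
    'HighPx': close + rng.uniform(0.1, 1.0, 60),
    'LowPx': close - rng.uniform(0.1, 1.0, 60),
})

N1, N2, N3 = 9, 3, 3
low_list = df['LowPx'].rolling(N1, min_periods=1).min()     # replaces pd.rolling_min + expanding fill
high_list = df['HighPx'].rolling(N1, min_periods=1).max()   # replaces pd.rolling_max + expanding fill
rsv = (df['ClosingPx'] - low_list) / (high_list - low_list) * 100

df['KDJ_K'] = rsv.ewm(com=N2).mean()                        # replaces pd.ewma(rsv, com=N2)
df['KDJ_D'] = df['KDJ_K'].ewm(com=N3).mean()
df['KDJ_J'] = 3 * df['KDJ_K'] - 2 * df['KDJ_D']
print(df[['KDJ_K', 'KDJ_D', 'KDJ_J']].tail())
```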
github_jupyter
<a href="https://colab.research.google.com/github/julianox5/Desafios-Resolvidos-do-curso-machine-learning-crash-course-google/blob/master/numpy_para_machine_learning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Importando o numpy ``` import numpy as np ``` ## Preencher matrizes com números expecíficos Criando uma matriz com o numpy.array() ``` myArray = np.array([1,2,3,4,5,6,7,8,9,0]) print(myArray) ``` Criando uma matriz bidimensional 3 x 2 ``` matriz_bi = np.array([[6 , 5], [11 , 4], [5 , 9] ]) print(matriz_bi) ``` Prencher um matriz com uma sequência de numeros, numpy.arange() ``` metodArange = np.arange(5, 12) print(metodArange) ``` ## Preencher matrizes com sequência de números Numpy possui varias funções para preencher matrizes com números aleatórios em determinados intervalos. ***numpy.random.randint*** gera números inteiros aleatórios entre um valor baixo e alto. ``` aleatorio_randint = np.random.randint(low = 10, high=100, size=(10)) print(aleatorio_randint) ``` Criar valores aleatórios de ponto flutuante entre 0,0 e 1,0 use **numpy.random.random()** ``` float_random = np.random.random([10]) print(float_random) ``` O Numpy possui um truque chamado broadcasting que expande virtualmente o operando menor para dimensões compatíveis com a álgebra linear. ``` random_floats_2_e_3 = float_random + 2.0 print (random_floats_2_e_3) ``` ## Tarefa 1: Criar um conjunto de dados linear Seu objetivo é criar um conjunto de dados simples que consiste em um único recurso e um rótulo da seguinte maneira: 1. Atribua uma sequência de números inteiros de 6 a 20 (inclusive) a uma matriz NumPy denominada `feature`. 2.Atribua 15 valores a uma matriz NumPy denominada de labelmodo que: ``` label = (3)(feature) + 4 ``` Por exemplo, o primeiro valor para `label`deve ser: ``` label = (3)(6) + 4 = 22 ``` ``` feature = np.arange(6, 21) print(feature) label = (feature * 3) + 4 print(label) ``` ## Tarefa 2: adicionar algum ruído ao conjunto de dados Para tornar seu conjunto de dados um pouco mais realista, insira um pouco de ruído aleatório em cada elemento da labelmatriz que você já criou. Para ser mais preciso, modifique cada valor atribuído rótulo, adicionando um valor de ponto flutuante aleatório diferente entre -2 e +2. ão confie na transmissão. Em vez disso, crie um ruido na matriz com a mesma dimensão que rótulo. ``` noise = (np.random.random([15]) * 4) -2 print(noise) label += noise print(label) #@title Example form fields #@markdown Forms support many types of fields. no_type_checking = '' #@param string_type = 'example' #@param {type: "string"} slider_value = 142 #@param {type: "slider", min: 100, max: 200} number = 102 #@param {type: "number"} date = '2010-11-05' #@param {type: "date"} pick_me = "monday" #@param ['monday', 'tuesday', 'wednesday', 'thursday'] select_or_input = "apples" #@param ["apples", "bananas", "oranges"] {allow-input: true} #@markdown --- ```
github_jupyter
# Doom Deadly Corridor with Dqn The purpose of this scenario is to teach the agent to navigate towards his fundamental goal (the vest) and make sure he survives at the same time. ### Enviroment Map is a corridor with shooting monsters on both sides (6 monsters in total). A green vest is placed at the oposite end of the corridor.Reward is proportional (negative or positive) to change of the distance between the player and the vest. If player ignores monsters on the sides and runs straight for the vest he will be killed somewhere along the way. ### Action - MOVE_LEFT - MOVE_RIGHT - ATTACK - MOVE_FORWARD - MOVE_BACKWARD - TURN_LEFT - TURN_RIGHT ### Rewards - +dX for getting closer to the vest. - -dX for getting further from the vest. - -100 death penalty ## Step 1: Import the libraries ``` import numpy as np import random # Handling random number generation import time # Handling time calculation import cv2 import torch from vizdoom import * # Doom Environment import matplotlib.pyplot as plt from IPython.display import clear_output from collections import namedtuple, deque import math %matplotlib inline import sys sys.path.append('../../') from algos.agents import DQNAgent from algos.models import DQNCnn from algos.preprocessing.stack_frame import preprocess_frame, stack_frame ``` ## Step 2: Create our environment Initialize the environment in the code cell below. ``` def create_environment(): game = DoomGame() # Load the correct configuration game.load_config("doom_files/deadly_corridor.cfg") # Load the correct scenario (in our case defend_the_center scenario) game.set_doom_scenario_path("doom_files/deadly_corridor.wad") # Here our possible actions possible_actions = np.identity(7, dtype=int).tolist() return game, possible_actions game, possible_actions = create_environment() # if gpu is to be used device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print("Device: ", device) ``` ## Step 3: Viewing our Enviroment ``` print("The size of frame is: (", game.get_screen_height(), ", ", game.get_screen_width(), ")") print("No. of Actions: ", possible_actions) game.init() plt.figure() plt.imshow(game.get_state().screen_buffer.transpose(1, 2, 0)) plt.title('Original Frame') plt.show() game.close() ``` ### Execute the code cell below to play Pong with a random policy. 
``` def random_play(): game.init() game.new_episode() score = 0 while True: reward = game.make_action(possible_actions[np.random.randint(3)]) done = game.is_episode_finished() score += reward time.sleep(0.01) if done: print("Your total score is: ", score) game.close() break random_play() ``` ## Step 4:Preprocessing Frame ``` game.init() plt.figure() plt.imshow(preprocess_frame(game.get_state().screen_buffer.transpose(1, 2, 0), (0, -60, -40, 60), 84), cmap="gray") game.close() plt.title('Pre Processed image') plt.show() ``` ## Step 5: Stacking Frame ``` def stack_frames(frames, state, is_new=False): frame = preprocess_frame(state, (0, -60, -40, 60), 84) frames = stack_frame(frames, frame, is_new) return frames ``` ## Step 6: Creating our Agent ``` INPUT_SHAPE = (4, 84, 84) ACTION_SIZE = len(possible_actions) SEED = 0 GAMMA = 0.99 # discount factor BUFFER_SIZE = 100000 # replay buffer size BATCH_SIZE = 32 # Update batch size LR = 0.0001 # learning rate TAU = .1 # for soft update of target parameters UPDATE_EVERY = 100 # how often to update the network UPDATE_TARGET = 10000 # After which thershold replay to be started EPS_START = 0.99 # starting value of epsilon EPS_END = 0.01 # Ending value of epsilon EPS_DECAY = 100 # Rate by which epsilon to be decayed agent = DQNAgent(INPUT_SHAPE, ACTION_SIZE, SEED, device, BUFFER_SIZE, BATCH_SIZE, GAMMA, LR, TAU, UPDATE_EVERY, UPDATE_TARGET, DQNCnn) ``` ## Step 7: Watching untrained agent play ``` # watch an untrained agent game.init() score = 0 state = stack_frames(None, game.get_state().screen_buffer.transpose(1, 2, 0), True) while True: action = agent.act(state, 0.01) score += game.make_action(possible_actions[action]) done = game.is_episode_finished() if done: print("Your total score is: ", score) break else: state = stack_frames(state, game.get_state().screen_buffer.transpose(1, 2, 0), False) game.close() ``` ## Step 8: Loading Agent Uncomment line to load a pretrained agent ``` start_epoch = 0 scores = [] scores_window = deque(maxlen=20) ``` ## Step 9: Train the Agent with DQN ``` epsilon_by_epsiode = lambda frame_idx: EPS_END + (EPS_START - EPS_END) * math.exp(-1. * frame_idx /EPS_DECAY) plt.plot([epsilon_by_epsiode(i) for i in range(1000)]) def train(n_episodes=1000): """ Params ====== n_episodes (int): maximum number of training episodes """ game.init() for i_episode in range(start_epoch + 1, n_episodes+1): game.new_episode() state = stack_frames(None, game.get_state().screen_buffer.transpose(1, 2, 0), True) score = 0 eps = epsilon_by_epsiode(i_episode) while True: action = agent.act(state, eps) reward = game.make_action(possible_actions[action]) done = game.is_episode_finished() score += reward if done: agent.step(state, action, reward, state, done) break else: next_state = stack_frames(state, game.get_state().screen_buffer.transpose(1, 2, 0), False) agent.step(state, action, reward, next_state, done) state = next_state scores_window.append(score) # save most recent score scores.append(score) # save most recent score clear_output(True) fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() print('\rEpisode {}\tAverage Score: {:.2f}\tEpsilon: {:.2f}'.format(i_episode, np.mean(scores_window), eps), end="") game.close() return scores scores = train(5000) fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ``` ## Step 10: Watch a Smart Agent! 
``` game.init() score = 0 state = stack_frames(None, game.get_state().screen_buffer.transpose(1, 2, 0), True) while True: action = agent.act(state, 0.01) score += game.make_action(possible_actions[action]) done = game.is_episode_finished() if done: print("Your total score is: ", score) break else: state = stack_frames(state, game.get_state().screen_buffer.transpose(1, 2, 0), False) game.close() ```
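Optionally, the raw episode scores collected during training can be smoothed to make the learning trend easier to read. This is a small sketch that uses only the `scores` list returned by `train()` above; the window size is an arbitrary choice.

```
import numpy as np
import matplotlib.pyplot as plt

window = 20
if len(scores) >= window:
    smoothed = np.convolve(scores, np.ones(window) / window, mode='valid')
    plt.plot(np.arange(len(scores)), scores, alpha=0.3, label='raw score')
    plt.plot(np.arange(window - 1, len(scores)), smoothed, label=f'{window}-episode mean')
    plt.xlabel('Episode #')
    plt.ylabel('Score')
    plt.legend()
    plt.show()
```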
github_jupyter
# TRTR Dataset D ``` #import libraries import warnings warnings.filterwarnings("ignore") import numpy as np import pandas as pd import os print('Libraries imported!!') #define directory of functions and actual directory HOME_PATH = '' #home path of the project FUNCTIONS_DIR = 'EVALUATION FUNCTIONS/UTILITY' ACTUAL_DIR = os.getcwd() #change directory to functions directory os.chdir(HOME_PATH + FUNCTIONS_DIR) #import functions for data labelling analisys from utility_evaluation import DataPreProcessor from utility_evaluation import train_evaluate_model #change directory to actual directory os.chdir(ACTUAL_DIR) print('Functions imported!!') ``` ## 1. Read data ``` #read real dataset train_data = pd.read_csv(HOME_PATH + 'REAL DATASETS/TRAIN DATASETS/D_ContraceptiveMethod_Real_Train.csv') categorical_columns = ['wife_education','husband_education','wife_religion','wife_working','husband_occupation', 'standard_of_living_index','media_exposure','contraceptive_method_used'] for col in categorical_columns : train_data[col] = train_data[col].astype('category') train_data #read test data test_data = pd.read_csv(HOME_PATH + 'REAL DATASETS/TEST DATASETS/D_ContraceptiveMethod_Real_Test.csv') for col in categorical_columns : test_data[col] = test_data[col].astype('category') test_data target = 'contraceptive_method_used' #quick look at the breakdown of class values print('Train data') print(train_data.shape) print(train_data.groupby(target).size()) print('#####################################') print('Test data') print(test_data.shape) print(test_data.groupby(target).size()) ``` ## 2. Pre-process training data ``` target = 'contraceptive_method_used' categorical_columns = ['wife_education','husband_education','wife_religion','wife_working','husband_occupation', 'standard_of_living_index','media_exposure'] numerical_columns = train_data.select_dtypes(include=['int64','float64']).columns.tolist() categories = [np.array([0, 1, 2, 3]), np.array([0, 1, 2, 3]), np.array([0, 1]), np.array([0, 1]), np.array([0, 1, 2, 3]), np.array([0, 1, 2, 3]), np.array([0, 1])] data_preprocessor = DataPreProcessor(categorical_columns, numerical_columns, categories) x_train = data_preprocessor.preprocess_train_data(train_data.loc[:, train_data.columns != target]) y_train = train_data.loc[:, target] x_train.shape, y_train.shape ``` ## 3. Preprocess test data ``` x_test = data_preprocessor.preprocess_test_data(test_data.loc[:, test_data.columns != target]) y_test = test_data.loc[:, target] x_test.shape, y_test.shape ``` ## 4. Create a dataset to save the results ``` results = pd.DataFrame(columns = ['model','accuracy','precision','recall','f1']) results ``` ## 4. Train and evaluate Random Forest Classifier ``` rf_results = train_evaluate_model('RF', x_train, y_train, x_test, y_test) results = results.append(rf_results, ignore_index=True) rf_results ``` ## 5. Train and Evaluate KNeighbors Classifier ``` knn_results = train_evaluate_model('KNN', x_train, y_train, x_test, y_test) results = results.append(knn_results, ignore_index=True) knn_results ``` ## 6. Train and evaluate Decision Tree Classifier ``` dt_results = train_evaluate_model('DT', x_train, y_train, x_test, y_test) results = results.append(dt_results, ignore_index=True) dt_results ``` ## 7. Train and evaluate Support Vector Machines Classifier ``` svm_results = train_evaluate_model('SVM', x_train, y_train, x_test, y_test) results = results.append(svm_results, ignore_index=True) svm_results ``` ## 8. 
Train and evaluate Multilayer Perceptron Classifier ``` mlp_results = train_evaluate_model('MLP', x_train, y_train, x_test, y_test) results = results.append(mlp_results, ignore_index=True) mlp_results ``` ## 9. Save results file ``` results.to_csv('RESULTS/models_results_real.csv', index=False) results ```
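Optionally, the collected metrics can be ranked and visualised to compare the five classifiers at a glance. This is a sketch that assumes the metric columns returned by `train_evaluate_model` are numeric.

```
import matplotlib.pyplot as plt

ranked = results.sort_values('f1', ascending=False).reset_index(drop=True)
print(ranked)

ranked.plot(x='model', y=['accuracy', 'precision', 'recall', 'f1'], kind='bar', figsize=(8, 4))
plt.ylabel('score')
plt.title('TRTR Dataset D: models trained and tested on real data')
plt.tight_layout()
plt.show()
```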
github_jupyter
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Distributed training in TensorFlow <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/alpha/tutorials/distribute/keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> ## Overview The `tf.distribute.Strategy` API provides an abstraction for distributing your training across multiple processing units. The goal is to allow users to enable distributed training using existing models and training code, with minimal changes. This tutorial uses the `tf.distribute.MirroredStrategy`, which does in-graph replication with synchronous training on many GPUs on one machine. Essentially, it copies all of the model's variables to each processor. Then, it uses [all-reduce](http://mpitutorial.com/tutorials/mpi-reduce-and-allreduce/) to combine the gradients from all processors and applies the combined value to all copies of the model. `MirroredStategy` is one of several distribution strategy available in TensorFlow core. You can read about more strategies at [distribution strategy guide](../../guide/distribute_strategy.ipynb). ### Keras API This example uses the `tf.keras` API to build the model and training loop. For custom training loops, see [this tutorial](training_loops.ipynb). ## Import Dependencies ``` from __future__ import absolute_import, division, print_function # Import TensorFlow !pip install tensorflow-gpu==2.0.0-alpha0 import tensorflow_datasets as tfds import tensorflow as tf import os ``` ## Download the dataset Download the MNIST dataset and load it from [TensorFlow Datasets](https://www.tensorflow.org/datasets). This returns a dataset in `tf.data` format. Setting `with_info` to `True` includes the metadata for the entire dataset, which is being saved here to `ds_info`. Among other things, this metadata object includes the number of train and test examples. ``` datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True) mnist_train, mnist_test = datasets['train'], datasets['test'] ``` ## Define Distribution Strategy Create a `MirroredStrategy` object. This will handle distribution, and provides a context manager (`tf.distribute.MirroredStrategy.scope`) to build your model inside. 
``` strategy = tf.distribute.MirroredStrategy() print ('Number of devices: {}'.format(strategy.num_replicas_in_sync)) ``` ## Setup Input pipeline If a model is trained on multiple GPUs, the batch size should be increased accordingly so as to make effective use of the extra computing power. Moreover, the learning rate should be tuned accordingly. ``` # You can also do ds_info.splits.total_num_examples to get the total # number of examples in the dataset. num_train_examples = ds_info.splits['train'].num_examples num_test_examples = ds_info.splits['test'].num_examples BUFFER_SIZE = 10000 BATCH_SIZE_PER_REPLICA = 64 BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync ``` Pixel values, which are 0-255, [have to be normalized to the 0-1 range](https://en.wikipedia.org/wiki/Feature_scaling). Define this scale in a function. ``` def scale(image, label): image = tf.cast(image, tf.float32) image /= 255 return image, label ``` Apply this function to the training and test data, shuffle the training data, and [batch it for training](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch). ``` train_dataset = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE) ``` ## Create the model Create and compile the Keras model in the context of `strategy.scope`. ``` with strategy.scope(): model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy']) ``` ## Define the callbacks. The callbacks used here are: * *Tensorboard*: This callback writes a log for Tensorboard which allows you to visualize the graphs. * *Model Checkpoint*: This callback saves the model after every epoch. * *Learning Rate Scheduler*: Using this callback, you can schedule the learning rate to change after every epoch/batch. For illustrative purposes, add a print callback to display the *learning rate* in the notebook. ``` # Define the checkpoint directory to store the checkpoints checkpoint_dir = './training_checkpoints' # Name of the checkpoint files checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}") # Function for decaying the learning rate. # You can define any decay function you need. def decay(epoch): if epoch < 3: return 1e-3 elif epoch >= 3 and epoch < 7: return 1e-4 else: return 1e-5 # Callback for printing the LR at the end of each epoch. class PrintLR(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): print ('\nLearning rate for epoch {} is {}'.format(epoch + 1, model.optimizer.lr.numpy())) callbacks = [ tf.keras.callbacks.TensorBoard(log_dir='./logs'), tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix, save_weights_only=True), tf.keras.callbacks.LearningRateScheduler(decay), PrintLR() ] ``` ## Train and evaluate Now, train the model in the usual way, calling `fit` on the model and passing in the dataset created at the beginning of the tutorial. This step is the same whether you are distributing the training or not. ``` model.fit(train_dataset, epochs=10, callbacks=callbacks) ``` As you can see below, the checkpoints are getting saved. ``` # check the checkpoint directory !ls {checkpoint_dir} ``` To see how the model perform, load the latest checkpoint and call `evaluate` on the test data. 
Call `evaluate` as before using appropriate datasets. ``` model.load_weights(tf.train.latest_checkpoint(checkpoint_dir)) eval_loss, eval_acc = model.evaluate(eval_dataset) print ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc)) ``` To see the output, you can download and view the TensorBoard logs at the terminal. ``` $ tensorboard --logdir=path/to/log-directory ``` ``` !ls -sh ./logs ``` ## Export to SavedModel If you want to export the graph and the variables, SavedModel is the best way of doing this. The model can be loaded back with or without the scope. Moreover, SavedModel is platform agnostic. ``` path = 'saved_model/' tf.keras.experimental.export_saved_model(model, path) ``` Load the model without `strategy.scope`. ``` unreplicated_model = tf.keras.experimental.load_from_saved_model(path) unreplicated_model.compile( loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy']) eval_loss, eval_acc = unreplicated_model.evaluate(eval_dataset) print ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc)) ``` Load the model with `strategy.scope`. ``` with strategy.scope(): replicated_model = tf.keras.experimental.load_from_saved_model(path) replicated_model.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy']) eval_loss, eval_acc = replicated_model.evaluate(eval_dataset) print ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc)) ``` ## What's next? Read the [distribution strategy guide](../../guide/distribute_strategy.ipynb). Try the [Distributed Training with Custom Training Loops](training_loops.ipynb) tutorial. Note: `tf.distribute.Strategy` is actively under development and we will be adding more examples and tutorials in the near future. Please give it a try. We welcome your feedback via [issues on GitHub](https://github.com/tensorflow/tensorflow/issues/new).
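Before you go, one optional refinement hinted at in the input-pipeline section: when the global batch size grows with `strategy.num_replicas_in_sync`, the learning rate is often scaled as well. The sketch below recompiles the model defined above with a scaled rate; the base rate of 1e-3 is an illustrative assumption, not a recommendation from this tutorial.

```
base_learning_rate = 1e-3
scaled_learning_rate = base_learning_rate * strategy.num_replicas_in_sync

with strategy.scope():
  model.compile(loss='sparse_categorical_crossentropy',
                optimizer=tf.keras.optimizers.Adam(scaled_learning_rate),
                metrics=['accuracy'])
```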
github_jupyter
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/training/train-hyperparameter-tune-deploy-with-sklearn/train-hyperparameter-tune-deploy-with-sklearn.png) # Train and hyperparameter tune on Iris Dataset with Scikit-learn In this tutorial, we demonstrate how to use the Azure ML Python SDK to train a support vector machine (SVM) on a single-node CPU with Scikit-learn to perform classification on the popular [Iris dataset](https://archive.ics.uci.edu/ml/datasets/iris). We will also demonstrate how to perform hyperparameter tuning of the model using Azure ML's HyperDrive service. ## Prerequisites * Go through the [Configuration](../../../configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML Workspace ``` # Check core SDK version number import azureml.core print("SDK version:", azureml.core.VERSION) ``` ## Diagnostics Opt-in diagnostics for better experience, quality, and security of future releases. ``` from azureml.telemetry import set_diagnostics_collection set_diagnostics_collection(send_diagnostics=True) ``` ## Initialize workspace Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`. ``` from azureml.core.workspace import Workspace ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep = '\n') ``` ## Create AmlCompute You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, we use Azure ML managed compute ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)) for our remote training compute resource. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ``` from azureml.core.compute import ComputeTarget # choose a name for your cluster cluster_name = "cpu-cluster" compute_target = ComputeTarget(workspace=ws, name=cluster_name) print('Found existing compute target.') # use get_status() to get a detailed status for the current cluster. print(compute_target.get_status().serialize()) ``` The above code retrieves an existing CPU compute target. Scikit-learn does not support GPU computing. ## Train model on the remote compute Now that you have your data and training script prepared, you are ready to train on your remote compute. You can take advantage of Azure compute to leverage a CPU cluster. ### Create a project directory Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on. 
``` import os project_folder = './sklearn-iris' os.makedirs(project_folder, exist_ok=True) ``` ### Prepare training script Now you will need to create your training script. In this tutorial, the training script is already provided for you at `train_iris`.py. In practice, you should be able to take any custom training script as is and run it with Azure ML without having to modify your code. However, if you would like to use Azure ML's [tracking and metrics](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#metrics) capabilities, you will have to add a small amount of Azure ML code inside your training script. In `train_iris.py`, we will log some metrics to our Azure ML run. To do so, we will access the Azure ML Run object within the script: ```python from azureml.core.run import Run run = Run.get_context() ``` Further within `train_iris.py`, we log the kernel and penalty parameters, and the highest accuracy the model achieves: ```python run.log('Kernel type', np.string(args.kernel)) run.log('Penalty', np.float(args.penalty)) run.log('Accuracy', np.float(accuracy)) ``` These run metrics will become particularly important when we begin hyperparameter tuning our model in the "Tune model hyperparameters" section. Once your script is ready, copy the training script `train_iris.py` into your project directory. ``` import shutil shutil.copy('train_iris.py', project_folder) ``` ### Create an experiment Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this Scikit-learn tutorial. ``` from azureml.core import Experiment experiment_name = 'train_iris' experiment = Experiment(ws, name=experiment_name) ``` ### Create a Scikit-learn estimator The Azure ML SDK's Scikit-learn estimator enables you to easily submit Scikit-learn training jobs for single-node runs. The following code will define a single-node Scikit-learn job. ``` from azureml.train.sklearn import SKLearn script_params = { '--kernel': 'linear', '--penalty': 1.0, } estimator = SKLearn(source_directory=project_folder, script_params=script_params, compute_target=compute_target, entry_script='train_iris.py', pip_packages=['joblib'] ) ``` The `script_params` parameter is a dictionary containing the command-line arguments to your training script `entry_script`. ### Submit job Run your experiment by submitting your estimator object. Note that this call is asynchronous. ``` run = experiment.submit(estimator) ``` ## Monitor your run You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes. ``` from azureml.widgets import RunDetails RunDetails(run).show() run.cancel() ``` ## Tune model hyperparameters Now that we've seen how to do a simple Scikit-learn training run using the SDK, let's see if we can further improve the accuracy of our model. We can optimize our model's hyperparameters using Azure Machine Learning's hyperparameter tuning capabilities. ### Start a hyperparameter sweep First, we will define the hyperparameter space to sweep over. Let's tune the `kernel` and `penalty` parameters. In this example we will use random sampling to try different configuration sets of hyperparameters to maximize our primary metric, `Accuracy`. 
```
from azureml.train.hyperdrive.runconfig import HyperDriveRunConfig
from azureml.train.hyperdrive.sampling import RandomParameterSampling
from azureml.train.hyperdrive.run import PrimaryMetricGoal
from azureml.train.hyperdrive.parameter_expressions import choice

param_sampling = RandomParameterSampling( {
    "--kernel": choice('linear', 'rbf', 'poly', 'sigmoid'),
    "--penalty": choice(0.5, 1, 1.5)
    }
)

hyperdrive_run_config = HyperDriveRunConfig(estimator=estimator,
                                            hyperparameter_sampling=param_sampling,
                                            primary_metric_name='Accuracy',
                                            primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                                            max_total_runs=12,
                                            max_concurrent_runs=4)
```

Finally, launch the hyperparameter tuning job.

```
# start the HyperDrive run
hyperdrive_run = experiment.submit(hyperdrive_run_config)
```

## Monitor HyperDrive runs
You can monitor the progress of the runs with the following Jupyter widget.

```
RunDetails(hyperdrive_run).show()

hyperdrive_run.wait_for_completion(show_output=True)
```

### Find and register best model
When all jobs finish, we can find out the one that has the highest accuracy.

```
best_run = hyperdrive_run.get_best_run_by_primary_metric()
print(best_run.get_details()['runDefinition']['arguments'])
```

Now, let's list the model files uploaded during the run.

```
print(best_run.get_file_names())
```

We can then register the best run's model file (`model.joblib`) as a model named `sklearn-iris` in the workspace for deployment.

```
model = best_run.register_model(model_name='sklearn-iris', model_path='model.joblib')
```
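As a quick local sanity check (a sketch, not part of the original tutorial): assuming the `model.joblib` artifact listed above has been downloaded from the best run into the current working directory — for example with `best_run.download_file('model.joblib', 'model.joblib')` — it can be loaded with joblib and scored on the Iris dataset. Only the file name comes from the registration call above; the rest is an assumption.

```
# Sanity-check the tuned model locally (sketch; assumes model.joblib was downloaded)
import joblib
from sklearn.datasets import load_iris

clf = joblib.load('model.joblib')
X, y = load_iris(return_X_y=True)
print('Accuracy on the full Iris dataset:', clf.score(X, y))
```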
github_jupyter
# Combine DESI Imaging ccds for DR9 The eboss ccd files did not have the same dtype, therefore we could not easily combine them. We have to enfore a dtype to all of them. ``` # import modules import fitsio as ft import numpy as np from glob import glob # read files ccdsn = glob('/home/mehdi/data/templates/ccds/dr9/ccds-annotated-*.fits') print(ccdsn) # ccdfiles names prt_keep = ['camera', 'filter', 'fwhm', 'mjd_obs', 'exptime', 'ra', 'dec', 'ra0','ra1','ra2','ra3','dec0','dec1','dec2','dec3', 'galdepth', 'ebv', 'airmass', 'ccdskycounts', 'pixscale_mean', 'ccdzpt'] # read one file to check the columns d = ft.read(ccdsn[0], columns=prt_keep) print(d.dtype) # attrs for the general quicksip # 'crval1', 'crval2', 'crpix1', 'crpix2', 'cd1_1', # 'cd1_2', 'cd2_1', 'cd2_2', 'width', 'height' # dtype = np.dtype([('filter', 'S1'), ('exptime', '>f4'), ('mjd_obs', '>f8'), ('airmass', '>f4'),\ # ('fwhm', '>f4'), ('width', '>i2'), ('height', '>i2'), ('crpix1', '>f4'), ('crpix2', '>f4'),\ # ('crval1', '>f8'), ('crval2', '>f8'), ('cd1_1', '>f4'), ('cd1_2', '>f4'), ('cd2_1', '>f4'),\ # ('cd2_2', '>f4'), ('ra', '>f8'), ('dec', '>f8'), ('ccdzpt', '>f4'), ('ccdskycounts', '>f4'), # ('pixscale_mean', '>f4'), ('ebv', '>f4'), ('galdepth', '>f4')]) # # only read & combine the following columns # this is what the pipeline need to make the MJD maps prt_keep = ['camera', 'filter', 'fwhm', 'mjd_obs', 'exptime', 'ra', 'dec', 'ra0','ra1','ra2','ra3','dec0','dec1','dec2','dec3', 'galdepth', 'ebv', 'airmass', 'ccdskycounts', 'pixscale_mean', 'ccdzpt'] # camera could be different for 90prime, decam, mosaic -- we pick S7 dtype = np.dtype([('camera', '<U7'),('filter', '<U1'), ('exptime', '>f4'), ('mjd_obs', '>f8'), ('airmass', '>f4'), ('fwhm', '>f4'), ('ra', '>f8'), ('dec', '>f8'), ('ccdzpt', '>f4'), ('ccdskycounts', '>f4'), ('ra0', '>f8'), ('dec0', '>f8'), ('ra1', '>f8'), ('dec1', '>f8'), ('ra2', '>f8'), ('dec2', '>f8'), ('ra3', '>f8'), ('dec3', '>f8'), ('pixscale_mean', '>f4'), ('ebv', '>f4'), ('galdepth', '>f4')]) def fixdtype(data_in, indtype=dtype): m = data_in.size data_out = np.zeros(m, dtype=dtype) for name in dtype.names: data_out[name] = data_in[name].astype(dtype[name]) return data_out # # read each ccd file > fix its dtype > move on to the next ccds_data = [] for ccd_i in ccdsn: print('working on .... %s'%ccd_i) data_in = ft.FITS(ccd_i)[1].read(columns=prt_keep) #print(data_in.dtype) data_out = fixdtype(data_in) print('number of ccds in this file : %d'%data_in.size) print('number of different dtypes (before) : %d'%len(np.setdiff1d(dtype.descr, data_in.dtype.descr)), np.setdiff1d(dtype.descr, data_in.dtype.descr)) print('number of different dtypes (after) : %d'%len(np.setdiff1d(dtype.descr, data_out.dtype.descr)), np.setdiff1d(dtype.descr, data_out.dtype.descr)) ccds_data.append(data_out) ccds_data_c = np.concatenate(ccds_data) print('Total number of combined ccds : %d'%ccds_data_c.size) ft.write('/home/mehdi/data/templates/ccds/dr9/ccds-annotated-dr9-combined.fits', ccds_data_c, header=dict(NOTE='dr9 combined'), clobber=True) ```
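As a quick check (a sketch, not part of the original notebook), the combined catalog can be read back with `fitsio` to confirm that the number of rows and the column names match what was just written:

```
# Read the combined file back and compare against what we wrote
combined = ft.read('/home/mehdi/data/templates/ccds/dr9/ccds-annotated-dr9-combined.fits')
print('rows written : %d'%ccds_data_c.size)
print('rows read    : %d'%combined.size)
print('same columns : %s'%(combined.dtype.names == ccds_data_c.dtype.names))
```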
github_jupyter
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/work-with-data/dataprep/how-to-guides/join.png) # Join Copyright (c) Microsoft Corporation. All rights reserved.<br> Licensed under the MIT License.<br> In Data Prep you can easily join two Dataflows. ``` import azureml.dataprep as dprep ``` First, get the left side of the data into a shape that is ready for the join. ``` # get the first Dataflow and derive desired key column dflow_left = dprep.read_csv(path='https://dpreptestfiles.blob.core.windows.net/testfiles/BostonWeather.csv') dflow_left = dflow_left.derive_column_by_example(source_columns='DATE', new_column_name='date_timerange', example_data=[('11/11/2015 0:54', 'Nov 11, 2015 | 12AM-2AM'), ('2/1/2015 0:54', 'Feb 1, 2015 | 12AM-2AM'), ('1/29/2015 20:54', 'Jan 29, 2015 | 8PM-10PM')]) dflow_left = dflow_left.drop_columns(['DATE']) # convert types and summarize data dflow_left = dflow_left.set_column_types(type_conversions={'HOURLYDRYBULBTEMPF': dprep.TypeConverter(dprep.FieldType.DECIMAL)}) dflow_left = dflow_left.filter(expression=~dflow_left['HOURLYDRYBULBTEMPF'].is_error()) dflow_left = dflow_left.summarize(group_by_columns=['date_timerange'],summary_columns=[dprep.SummaryColumnsValue('HOURLYDRYBULBTEMPF', dprep.api.engineapi.typedefinitions.SummaryFunction.MEAN, 'HOURLYDRYBULBTEMPF_Mean')] ) # cache the result so the steps above are not executed every time we pull on the data import os from pathlib import Path cache_dir = str(Path(os.getcwd(), 'dataflow-cache')) dflow_left.cache(directory_path=cache_dir) dflow_left.head(5) ``` Now let's prepare the data for the right side of the join. ``` # get the second Dataflow and desired key column dflow_right = dprep.read_csv(path='https://dpreptestfiles.blob.core.windows.net/bike-share/*-hubway-tripdata.csv') dflow_right = dflow_right.keep_columns(['starttime', 'start station id']) dflow_right = dflow_right.derive_column_by_example(source_columns='starttime', new_column_name='l_date_timerange', example_data=[('2015-01-01 00:21:44', 'Jan 1, 2015 | 12AM-2AM')]) dflow_right = dflow_right.drop_columns('starttime') # cache the results dflow_right.cache(directory_path=cache_dir) dflow_right.head(5) ``` There are three ways you can join two Dataflows in Data Prep: 1. Create a `JoinBuilder` object for interactive join configuration. 2. Call ```join()``` on one of the Dataflows and pass in the other along with all other arguments. 3. Call ```Dataflow.join()``` method and pass in two Dataflows along with all other arguments. We will explore the builder object as it simplifies the determination of correct arguments. ``` # construct a builder for joining dataflow_l with dataflow_r join_builder = dflow_left.builders.join(right_dataflow=dflow_right, left_column_prefix='l', right_column_prefix='r') join_builder ``` So far the builder has no properties set except default values. From here you can set each of the options and preview its effect on the join result or use Data Prep to determine some of them. Let's start with determining appropriate column prefixes for left and right side of the join and lists of columns that would not conflict and therefore don't need to be prefixed. ``` join_builder.detect_column_info() join_builder ``` You can see that Data Prep has performed a pull on both Dataflows to determine the column names in them. 
Given that `dataflow_r` already had a column starting with `l_`, a new prefix was generated that would not collide with any column names already present. Additionally, columns in each Dataflow that won't conflict during the join remain unprefixed.

This approach to column naming is crucial for making the join robust to schema changes in the data. Let's say that at some time in the future the data consumed by the left Dataflow also contains an `l_date_timerange` column. Configured as above, the join will still run as expected and the new column will be prefixed with `l2_`, ensuring that if the column `l_date_timerange` is consumed by some other future transformation, it remains unaffected.

Note: `KEY_generated` is appended to both lists and is reserved for Data Prep use in case Autojoin is performed.

### Autojoin
Autojoin is a Data Prep feature that determines suitable join arguments given data on both sides. In some cases Autojoin can even derive a key column from a number of available columns in the data.
Here is how you can use Autojoin:

```
# generate join suggestions
join_builder.generate_suggested_join()

# list generated suggestions
join_builder.list_join_suggestions()
```

Now let's select the first suggestion and preview the result of the join.

```
# apply first suggestion
join_builder.apply_suggestion(0)

join_builder.preview(10)
```

Now, get our new joined Dataflow.

```
dflow_autojoined = join_builder.to_dataflow().drop_columns(['l_date_timerange'])
```

### Joining two Dataflows without pulling the data
If you don't want to pull on data and know what the join should look like, you can always use the join method on the Dataflow.

```
dflow_joined = dprep.Dataflow.join(left_dataflow=dflow_left,
                                   right_dataflow=dflow_right,
                                   join_key_pairs=[('date_timerange', 'l_date_timerange')],
                                   left_column_prefix='l2_',
                                   right_column_prefix='r_')
dflow_joined.head(5)

dflow_joined = dflow_joined.filter(expression=dflow_joined['r_start station id'] == '67')

df = dflow_joined.to_pandas_dataframe()
df
```
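A small follow-up sketch (only the `r_start station id` column name is taken from the filter above; the rest is generic pandas) to look at the shape and columns of the joined result, which also makes the `l2_`/`r_` prefixing visible:

```
# Quick look at the joined, filtered result (sketch)
print(df.shape)                # rows and columns for station 67
print(df.columns.tolist())     # shows which columns got the l2_/r_ prefixes
df['r_start station id'].value_counts()
```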
github_jupyter
# Credit Risk Resampling Techniques ``` import warnings warnings.filterwarnings('ignore') import numpy as np import pandas as pd from pathlib import Path from collections import Counter ``` # Read the CSV and Perform Basic Data Cleaning ``` columns = [ "loan_amnt", "int_rate", "installment", "home_ownership", "annual_inc", "verification_status", "issue_d", "loan_status", "pymnt_plan", "dti", "delinq_2yrs", "inq_last_6mths", "open_acc", "pub_rec", "revol_bal", "total_acc", "initial_list_status", "out_prncp", "out_prncp_inv", "total_pymnt", "total_pymnt_inv", "total_rec_prncp", "total_rec_int", "total_rec_late_fee", "recoveries", "collection_recovery_fee", "last_pymnt_amnt", "next_pymnt_d", "collections_12_mths_ex_med", "policy_code", "application_type", "acc_now_delinq", "tot_coll_amt", "tot_cur_bal", "open_acc_6m", "open_act_il", "open_il_12m", "open_il_24m", "mths_since_rcnt_il", "total_bal_il", "il_util", "open_rv_12m", "open_rv_24m", "max_bal_bc", "all_util", "total_rev_hi_lim", "inq_fi", "total_cu_tl", "inq_last_12m", "acc_open_past_24mths", "avg_cur_bal", "bc_open_to_buy", "bc_util", "chargeoff_within_12_mths", "delinq_amnt", "mo_sin_old_il_acct", "mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op", "mo_sin_rcnt_tl", "mort_acc", "mths_since_recent_bc", "mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl", "num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl", "num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0", "num_sats", "num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m", "num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75", "pub_rec_bankruptcies", "tax_liens", "tot_hi_cred_lim", "total_bal_ex_mort", "total_bc_limit", "total_il_high_credit_limit", "hardship_flag", "debt_settlement_flag" ] target = ["loan_status"] # Load the data file_path = Path('../Resources/LoanStats_2019Q1.csv.zip') df = pd.read_csv(file_path, skiprows=1)[:-2] df = df.loc[:, columns].copy() # Drop the null columns where all values are null df = df.dropna(axis='columns', how='all') # Drop the null rows df = df.dropna() # Remove the `Issued` loan status issued_mask = df['loan_status'] != 'Issued' df = df.loc[issued_mask] # convert interest rate to numerical df['int_rate'] = df['int_rate'].str.replace('%', '') df['int_rate'] = df['int_rate'].astype('float') / 100 # Convert the target column values to low_risk and high_risk based on their values x = {'Current': 'low_risk'} df = df.replace(x) x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period'], 'high_risk') df = df.replace(x) df.reset_index(inplace=True, drop=True) df.head() ``` # Split the Data into Training and Testing ``` # Create our features X = # YOUR CODE HERE # Create our target y = # YOUR CODE HERE X.describe() # Check the balance of our target values y['loan_status'].value_counts() # Create X_train, X_test, y_train, y_test # YOUR CODE HERE ``` ## Data Pre-Processing Scale the training and testing data using the `StandardScaler` from `sklearn`. Remember that when scaling the data, you only scale the features data (`X_train` and `X_testing`). ``` # Create the StandardScaler instance from sklearn.preprocessing import StandardScaler scaler = StandardScaler() # Fit the Standard Scaler with the training data # When fitting scaling functions, only train on the training dataset # YOUR CODE HERE # Scale the training and testing data # YOUR CODE HERE ``` # Oversampling In this section, you will compare two oversampling algorithms to determine which algorithm results in the best performance. 
You will oversample the data using the naive random oversampling algorithm and the SMOTE algorithm. For each algorithm, be sure to complete the following steps:

1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using the `classification_report_imbalanced` function from imbalanced-learn.

Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests.

### Naive Random Oversampling

```
# Resample the training data with the RandomOverSampler
# YOUR CODE HERE

# Train the Logistic Regression model using the resampled data
# YOUR CODE HERE

# Calculate the balanced accuracy score
# YOUR CODE HERE

# Display the confusion matrix
# YOUR CODE HERE

# Print the imbalanced classification report
# YOUR CODE HERE
```

### SMOTE Oversampling

```
# Resample the training data with SMOTE
# YOUR CODE HERE

# Train the Logistic Regression model using the resampled data
# YOUR CODE HERE

# Calculate the balanced accuracy score
# YOUR CODE HERE

# Display the confusion matrix
# YOUR CODE HERE

# Print the imbalanced classification report
# YOUR CODE HERE
```

# Undersampling

In this section, you will test an undersampling algorithm to determine whether it results in better performance than the oversampling algorithms above.

You will undersample the data using the Cluster Centroids algorithm and complete the following steps:

1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using the `classification_report_imbalanced` function from imbalanced-learn.

Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests.

```
# Resample the data using the ClusterCentroids resampler
# YOUR CODE HERE

# Train the Logistic Regression model using the resampled data
# YOUR CODE HERE

# Calculate the balanced accuracy score
# YOUR CODE HERE

# Display the confusion matrix
# YOUR CODE HERE

# Print the imbalanced classification report
# YOUR CODE HERE
```

# Combination (Over and Under) Sampling

In this section, you will test a combination over- and under-sampling algorithm to determine if it results in the best performance compared to the other sampling algorithms above.

You will resample the data using the SMOTEENN algorithm and complete the following steps:

1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using the `classification_report_imbalanced` function from imbalanced-learn.

Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests.

```
# Resample the training data with SMOTEENN
# YOUR CODE HERE

# Train the Logistic Regression model using the resampled data
# YOUR CODE HERE

# Calculate the balanced accuracy score
# YOUR CODE HERE

# Display the confusion matrix
# YOUR CODE HERE

# Print the imbalanced classification report
# YOUR CODE HERE
```
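For reference, here is one possible shape of the naive random oversampling section (a sketch only — the cells above are intentionally left as `# YOUR CODE HERE`, and this assumes `X_train`, `X_test`, `y_train`, `y_test` were created in the train/test split step, with the targets as 1-D arrays of the `loan_status` labels):

```
# One possible implementation of the Naive Random Oversampling section (sketch)
from collections import Counter
from imblearn.over_sampling import RandomOverSampler
from imblearn.metrics import classification_report_imbalanced
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score, confusion_matrix

ros = RandomOverSampler(random_state=1)
X_resampled, y_resampled = ros.fit_resample(X_train, y_train)
print(Counter(y_resampled))

model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
y_pred = model.predict(X_test)

print(balanced_accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report_imbalanced(y_test, y_pred))
```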
github_jupyter
# <font color='Purple'>Gravitational Wave Generation Array</font> A Phase Array of dumbells can make a detectable signal... #### To do: 1. Calculate the dumbell parameters for given mass and frequency 1. How many dumbells? 1. Far-field radiation pattern from many radiators. 1. Beamed GW won't be a plane wave. So what? 1. How much energy is lost to keep it spinning? 1. How do we levitate while spinning? ##### Related work on GW radiation 1. https://www.mit.edu/~iancross/8901_2019A/readings/Quadrupole-GWradiation-Ferrari.pdf 1. Wikipedia article on the GW Quadrupole formula (https://en.wikipedia.org/wiki/Quadrupole_formula) 1. MIT 8.901 lecture on GW radiation (http://www.mit.edu/~iancross/8901_2019A/lec005.pdf) ## <font color='Orange'>Imports, settings, and constants</font> ``` import numpy as np #import matplotlib as mpl import matplotlib.pyplot as plt #import multiprocessing as mproc #import scipy.signal as sig import scipy.constants as scc #import scipy.special as scsp #import sys, time from scipy.io import loadmat # http://www.astropy.org/astropy-tutorials/Quantities.html # http://docs.astropy.org/en/stable/constants/index.html from astropy import constants as ascon # Update the matplotlib configuration parameters: plt.rcParams.update({'text.usetex': False, 'lines.linewidth': 4, 'font.family': 'serif', 'font.serif': 'Georgia', 'font.size': 22, 'xtick.direction': 'in', 'ytick.direction': 'in', 'xtick.labelsize': 'medium', 'ytick.labelsize': 'medium', 'axes.labelsize': 'medium', 'axes.titlesize': 'medium', 'axes.grid.axis': 'both', 'axes.grid.which': 'both', 'axes.grid': True, 'grid.color': 'xkcd:beige', 'grid.alpha': 0.253, 'lines.markersize': 12, 'legend.borderpad': 0.2, 'legend.fancybox': True, 'legend.fontsize': 'small', 'legend.framealpha': 0.8, 'legend.handletextpad': 0.5, 'legend.labelspacing': 0.33, 'legend.loc': 'best', 'figure.figsize': ((12, 8)), 'savefig.dpi': 140, 'savefig.bbox': 'tight', 'pdf.compression': 9}) def setGrid(ax): ax.grid(which='major', alpha=0.6) ax.grid(which='major', linestyle='solid', alpha=0.6) cList = [(0, 0.1, 0.9), (0.9, 0, 0), (0, 0.7, 0), (0, 0.8, 0.8), (1.0, 0, 0.9), (0.8, 0.8, 0), (1, 0.5, 0), (0.5, 0.5, 0.5), (0.4, 0, 0.5), (0, 0, 0), (0.3, 0, 0), (0, 0.3, 0)] G = scc.G # N * m**2 / kg**2; gravitational constant c = scc.c ``` ## Terrestrial Dumbell (Current Tech) ``` sigma_yield = 9000e6 # Yield strength of annealed silicon [Pa] m_dumb = 100 # mass of the dumbell end [kg] L_dumb = 10 # Length of the dumbell [m] r_dumb = 1 # radius of the dumbell rod [m] rho_pb = 11.34e3 # density of lead [kg/m^3] r_ball = ((m_dumb / rho_pb)/(4/3 * np.pi))**(1/3) f_rot = 1e3 / 2 lamduh = c / f_rot v_dumb = 2*np.pi*(L_dumb/2) * f_rot a_dumb = v_dumb**2 / (L_dumb / 2) F = a_dumb * m_dumb stress = F / (np.pi * r_dumb**2) print('Ball radius is ' + '{:0.2f}'.format(r_ball) + ' m') print(r'Acceleration of ball = ' + '{:0.2g}'.format(a_dumb) + r' m/s^2') print('Stress = ' + '{:0.2f}'.format(stress/sigma_yield) + 'x Yield Stress') ``` #### Futuristic Dumbell ``` sigma_yield = 5000e9 # ultimate tensile strength of ??? 
                               # [Pa] (unit for sigma_yield above)
m_f    = 1000      # mass of the dumbell end [kg]
L_f    = 3000      # Length of the dumbell [m]
r_f    = 40        # radius of the dumbell rod [m]
rho_pb = 11.34e3   # density of lead [kg/m^3]
r_b = ((m_f / rho_pb)/(4/3 * np.pi))**(1/3)   # ball radius for the futuristic mass

f_f = 37e3 / 2
lamduh_f = c / f_f

v_f = 2*np.pi*(L_f/2) * f_f
a_f = v_f**2 / (L_f / 2)

F = a_f * m_f
stress = F / (np.pi * r_f**2)

print('Ball radius = ' + '{:0.2f}'.format(r_b) + ' m')
print('Acceleration of ball = ' + '{:0.2g}'.format(a_f) + ' m/s**2')
print('Stress = ' + '{:0.2f}'.format(stress/sigma_yield) + 'x Yield Stress')
```

## <font color='Navy'>Radiation of a dumbell</font>
The dumbell is levitated from its middle point using a magnet. So we can spin it at any frequency without friction.

The quadrupole formula for the strain from this rotating dumbell is:

$\ddot{I} = \omega^2 \frac{M R^2}{2}$

$\ddot{I} = \frac{1}{2} \sigma_{yield}~A~(L_{dumb} / 2)$

The resulting strain is:

$h = \frac{2 G}{c^4 r} \ddot{I}$

```
def h_of_f(omega_rotor, M_ball, d_earth_alien, L_rotor):
    I_ddot = 1/2 * M_ball * (L_rotor/2)**2 * (omega_rotor**2)
    h = (2*G)/(c**4 * d_earth_alien) * I_ddot
    return h

r = 2 * lamduh   # take the distance to be 2 x wavelength
#h_2020 = (2*G)/(c**4 * r) * (1/2 * m_dumb * (L_dumb/2)**2) * (2*np.pi*f_rot)**2
w_rot = 2 * np.pi * f_rot
h_2020 = h_of_f(w_rot, m_dumb, r, L_dumb)

d_ref = c * 3600*24*365 * 1000   # 1000 light years [m]
d = 1 * d_ref
h_2035 = h_of_f(w_rot, m_dumb, d, L_dumb)

print('Strain from a single (2018) dumbell is {h:0.3g} at a distance of {r:0.1f} km'.format(h=h_2020, r=r/1000))
print('Strain from a single (2018) dumbell is {h:0.3g} at a distance of {r:0.1f} kilo lt-years'.format(h=h_2035, r=d/d_ref))

r = 2 * lamduh_f   # take the distance to be 2 x wavelength
h_f = (2*G)/(c**4 * r) * (1/2 * m_f * (L_f/2)**2) * (2*np.pi*f_f)**2
h_2345 = h_of_f(2*np.pi*f_f, m_f, d, L_dumb)
N_rotors = 100e6

print("Strain from a single (alien) dumbell is {h:0.3g} at a distance of {r:0.1f} kilo lt-years".format(h=h_2345, r=d/d_ref))
print("Strain from many many (alien) dumbells is " + '{:0.3g}'.format(N_rotors*h_2345) + ' at ' + str(1) + ' k lt-yr')
```

## <font color='Navy'>Phased Array</font>
Beam pattern for a 2D grid of rotating dumbells

Treat them like point sources? Make an array and add up all the spherical waves
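A minimal numpy sketch of that idea (everything here is an assumption: a uniform grid of `N_side` × `N_side` identical, phase-locked point emitters with half-wavelength spacing, evaluated in the far field). The array factor is just the phased sum over emitter positions; the main lobe narrows as the aperture `N_side * d_spacing` grows.

```
# Far-field array factor for a uniform grid of coherent point sources (sketch)
N_side    = 100                   # emitters per side (assumption)
d_spacing = lamduh_f / 2          # half-wavelength spacing (assumption)
k         = 2 * np.pi / lamduh_f  # GW wavenumber

theta     = np.linspace(-0.01, 0.01, 1001)  # angle off boresight [rad]
positions = (np.arange(N_side) - (N_side - 1)/2) * d_spacing

# 1-D cut of the array factor; the full 2-D pattern of a square grid
# is the product of two such factors
AF = np.abs(np.exp(1j * k * np.outer(np.sin(theta), positions)).sum(axis=1)) / N_side

plt.figure()
plt.plot(theta, AF**2)
plt.xlabel('Angle off boresight [rad]')
plt.ylabel('Normalized power pattern')
plt.title('Array factor, %d x %d grid (1-D cut)' % (N_side, N_side))
```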
github_jupyter
# Introduction to Linear Regression *Adapted from Chapter 3 of [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/)* ||continuous|categorical| |---|---|---| |**supervised**|**regression**|classification| |**unsupervised**|dimension reduction|clustering| ## Motivation Why are we learning linear regression? - widely used - runs fast - easy to use (not a lot of tuning required) - highly interpretable - basis for many other methods ## Libraries Will be using [Statsmodels](http://statsmodels.sourceforge.net/) for **teaching purposes** since it has some nice characteristics for linear modeling. However, we recommend that you spend most of your energy on [scikit-learn](http://scikit-learn.org/stable/) since it provides significantly more useful functionality for machine learning in general. ``` # imports import pandas as pd import matplotlib.pyplot as plt # this allows plots to appear directly in the notebook %matplotlib inline ``` ## Example: Advertising Data Let's take a look at some data, ask some questions about that data, and then use linear regression to answer those questions! ``` # read data into a DataFrame data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0) data.head() ``` What are the **features**? - TV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars) - Radio: advertising dollars spent on Radio - Newspaper: advertising dollars spent on Newspaper What is the **response**? - Sales: sales of a single product in a given market (in thousands of widgets) ``` # print the shape of the DataFrame data.shape ``` There are 200 **observations**, and thus 200 markets in the dataset. ``` # visualize the relationship between the features and the response using scatterplots fig, axs = plt.subplots(1, 3, sharey=True) data.plot(kind='scatter', x='TV', y='Sales', ax=axs[0], figsize=(16, 8)) data.plot(kind='scatter', x='Radio', y='Sales', ax=axs[1]) data.plot(kind='scatter', x='Newspaper', y='Sales', ax=axs[2]) ``` ## Questions About the Advertising Data Let's pretend you work for the company that manufactures and markets this widget. The company might ask you the following: On the basis of this data, how should we spend our advertising money in the future? This general question might lead you to more specific questions: 1. Is there a relationship between ads and sales? 2. How strong is that relationship? 3. Which ad types contribute to sales? 4. What is the effect of each ad type of sales? 5. Given ad spending in a particular market, can sales be predicted? We will explore these questions below! ## Simple Linear Regression Simple linear regression is an approach for predicting a **quantitative response** using a **single feature** (or "predictor" or "input variable"). It takes the following form: $y = \beta_0 + \beta_1x$ What does each term represent? - $y$ is the response - $x$ is the feature - $\beta_0$ is the intercept - $\beta_1$ is the coefficient for x Together, $\beta_0$ and $\beta_1$ are called the **model coefficients**. To create your model, you must "learn" the values of these coefficients. And once we've learned these coefficients, we can use the model to predict Sales! 
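As a preview of the estimation described in the next section (a sketch, not part of the original lesson), the least squares coefficients for a single feature can be computed directly from the data; the values should match what Statsmodels reports below.

```
# closed-form least squares estimates for Sales ~ TV (sketch)
x = data.TV
y = data.Sales
beta_1 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
beta_0 = y.mean() - beta_1 * x.mean()
print(beta_0, beta_1)
```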
## Estimating ("Learning") Model Coefficients Generally speaking, coefficients are estimated using the **least squares criterion**, which means we are find the line (mathematically) which minimizes the **sum of squared residuals** (or "sum of squared errors"): <img src="08_estimating_coefficients.png"> What elements are present in the diagram? - The black dots are the **observed values** of x and y. - The blue line is our **least squares line**. - The red lines are the **residuals**, which are the distances between the observed values and the least squares line. How do the model coefficients relate to the least squares line? - $\beta_0$ is the **intercept** (the value of $y$ when $x$=0) - $\beta_1$ is the **slope** (the change in $y$ divided by change in $x$) Here is a graphical depiction of those calculations: <img src="08_slope_intercept.png"> Let's use **Statsmodels** to estimate the model coefficients for the advertising data: ``` # this is the standard import if you're using "formula notation" (similar to R) import statsmodels.formula.api as smf # create a fitted model in one line lm = smf.ols(formula='Sales ~ TV', data=data).fit() # print the coefficients lm.params ``` ## Interpreting Model Coefficients How do we interpret the TV coefficient ($\beta_1$)? - A "unit" increase in TV ad spending is **associated with** a 0.047537 "unit" increase in Sales. - Or more clearly: An additional $1,000 spent on TV ads is **associated with** an increase in sales of 47.537 widgets. Note that if an increase in TV ad spending was associated with a **decrease** in sales, $\beta_1$ would be **negative**. ## Using the Model for Prediction Let's say that there was a new market where the TV advertising spend was **$50,000**. What would we predict for the Sales in that market? $$y = \beta_0 + \beta_1x$$ $$y = 7.032594 + 0.047537 \times 50$$ ``` # manually calculate the prediction 7.032594 + 0.047537*50 ``` Thus, we would predict Sales of **9,409 widgets** in that market. Of course, we can also use Statsmodels to make the prediction: ``` # you have to create a DataFrame since the Statsmodels formula interface expects it X_new = pd.DataFrame({'TV': [50]}) X_new.head() # use the model to make predictions on a new value lm.predict(X_new) ``` ## Plotting the Least Squares Line Let's make predictions for the **smallest and largest observed values of x**, and then use the predicted values to plot the least squares line: ``` # create a DataFrame with the minimum and maximum values of TV X_new = pd.DataFrame({'TV': [data.TV.min(), data.TV.max()]}) X_new.head() # make predictions for those x values and store them preds = lm.predict(X_new) preds # first, plot the observed data data.plot(kind='scatter', x='TV', y='Sales') # then, plot the least squares line plt.plot(X_new, preds, c='red', linewidth=2) ``` ## Confidence in our Model **Question:** Is linear regression a high bias/low variance model, or a low bias/high variance model? **Answer:** High bias/low variance. Under repeated sampling, the line will stay roughly in the same place (low variance), but the average of those models won't do a great job capturing the true relationship (high bias). Note that low variance is a useful characteristic when you don't have a lot of training data! A closely related concept is **confidence intervals**. 
Statsmodels calculates 95% confidence intervals for our model coefficients, which are interpreted as follows: If the population from which this sample was drawn was **sampled 100 times**, approximately **95 of those confidence intervals** would contain the "true" coefficient. ``` # print the confidence intervals for the model coefficients lm.conf_int() ``` Keep in mind that we only have a **single sample of data**, and not the **entire population of data**. The "true" coefficient is either within this interval or it isn't, but there's no way to actually know. We estimate the coefficient with the data we do have, and we show uncertainty about that estimate by giving a range that the coefficient is **probably** within. Note that using 95% confidence intervals is just a convention. You can create 90% confidence intervals (which will be more narrow), 99% confidence intervals (which will be wider), or whatever intervals you like. ## Hypothesis Testing and p-values Closely related to confidence intervals is **hypothesis testing**. Generally speaking, you start with a **null hypothesis** and an **alternative hypothesis** (that is opposite the null). Then, you check whether the data supports **rejecting the null hypothesis** or **failing to reject the null hypothesis**. (Note that "failing to reject" the null is not the same as "accepting" the null hypothesis. The alternative hypothesis may indeed be true, except that you just don't have enough data to show that.) As it relates to model coefficients, here is the conventional hypothesis test: - **null hypothesis:** There is no relationship between TV ads and Sales (and thus $\beta_1$ equals zero) - **alternative hypothesis:** There is a relationship between TV ads and Sales (and thus $\beta_1$ is not equal to zero) How do we test this hypothesis? Intuitively, we reject the null (and thus believe the alternative) if the 95% confidence interval **does not include zero**. Conversely, the **p-value** represents the probability that the coefficient is actually zero: ``` # print the p-values for the model coefficients lm.pvalues ``` If the 95% confidence interval **includes zero**, the p-value for that coefficient will be **greater than 0.05**. If the 95% confidence interval **does not include zero**, the p-value will be **less than 0.05**. Thus, a p-value less than 0.05 is one way to decide whether there is likely a relationship between the feature and the response. (Again, using 0.05 as the cutoff is just a convention.) In this case, the p-value for TV is far less than 0.05, and so we **believe** that there is a relationship between TV ads and Sales. Note that we generally ignore the p-value for the intercept. ## How Well Does the Model Fit the data? The most common way to evaluate the overall fit of a linear model is by the **R-squared** value. R-squared is the **proportion of variance explained**, meaning the proportion of variance in the observed data that is explained by the model, or the reduction in error over the **null model**. (The null model just predicts the mean of the observed response, and thus it has an intercept and no slope.) R-squared is between 0 and 1, and higher is better because it means that more variance is explained by the model. 
Here's an example of what R-squared "looks like": <img src="08_r_squared.png"> You can see that the **blue line** explains some of the variance in the data (R-squared=0.54), the **green line** explains more of the variance (R-squared=0.64), and the **red line** fits the training data even further (R-squared=0.66). (Does the red line look like it's overfitting?) Let's calculate the R-squared value for our simple linear model: ``` # print the R-squared value for the model lm.rsquared ``` Is that a "good" R-squared value? It's hard to say. The threshold for a good R-squared value depends widely on the domain. Therefore, it's most useful as a tool for **comparing different models**. ## Multiple Linear Regression Simple linear regression can easily be extended to include multiple features. This is called **multiple linear regression**: $y = \beta_0 + \beta_1x_1 + ... + \beta_nx_n$ Each $x$ represents a different feature, and each feature has its own coefficient. In this case: $y = \beta_0 + \beta_1 \times TV + \beta_2 \times Radio + \beta_3 \times Newspaper$ Let's use Statsmodels to estimate these coefficients: ``` # create a fitted model with all three features lm = smf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data).fit() # print the coefficients lm.params ``` How do we interpret these coefficients? For a given amount of Radio and Newspaper ad spending, an **increase of $1000 in TV ad spending** is associated with an **increase in Sales of 45.765 widgets**. A lot of the information we have been reviewing piece-by-piece is available in the model summary output: ``` # print a summary of the fitted model lm.summary() ``` What are a few key things we learn from this output? - TV and Radio have significant **p-values**, whereas Newspaper does not. Thus we reject the null hypothesis for TV and Radio (that there is no association between those features and Sales), and fail to reject the null hypothesis for Newspaper. - TV and Radio ad spending are both **positively associated** with Sales, whereas Newspaper ad spending is **slightly negatively associated** with Sales. (However, this is irrelevant since we have failed to reject the null hypothesis for Newspaper.) - This model has a higher **R-squared** (0.897) than the previous model, which means that this model provides a better fit to the data than a model that only includes TV. ## Feature Selection How do I decide **which features to include** in a linear model? Here's one idea: - Try different models, and only keep predictors in the model if they have small p-values. - Check whether the R-squared value goes up when you add new predictors. What are the **drawbacks** to this approach? - Linear models rely upon a lot of **assumptions** (such as the features being independent), and if those assumptions are violated (which they usually are), R-squared and p-values are less reliable. - Using a p-value cutoff of 0.05 means that if you add 100 predictors to a model that are **pure noise**, 5 of them (on average) will still be counted as significant. - R-squared is susceptible to **overfitting**, and thus there is no guarantee that a model with a high R-squared value will generalize. 
Below is an example: ``` # only include TV and Radio in the model lm = smf.ols(formula='Sales ~ TV + Radio', data=data).fit() lm.rsquared # add Newspaper to the model (which we believe has no association with Sales) lm = smf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data).fit() lm.rsquared ``` **R-squared will always increase as you add more features to the model**, even if they are unrelated to the response. Thus, selecting the model with the highest R-squared is not a reliable approach for choosing the best linear model. There is alternative to R-squared called **adjusted R-squared** that penalizes model complexity (to control for overfitting), but it generally [under-penalizes complexity](http://scott.fortmann-roe.com/docs/MeasuringError.html). So is there a better approach to feature selection? **Cross-validation.** It provides a more reliable estimate of out-of-sample error, and thus is a better way to choose which of your models will best **generalize** to out-of-sample data. There is extensive functionality for cross-validation in scikit-learn, including automated methods for searching different sets of parameters and different models. Importantly, cross-validation can be applied to any model, whereas the methods described above only apply to linear models. ## Linear Regression in scikit-learn Let's redo some of the Statsmodels code above in scikit-learn: ``` # create X and y feature_cols = ['TV', 'Radio', 'Newspaper'] X = data[feature_cols] y = data.Sales # follow the usual sklearn pattern: import, instantiate, fit from sklearn.linear_model import LinearRegression lm = LinearRegression() lm.fit(X, y) # print intercept and coefficients print lm.intercept_ print lm.coef_ # pair the feature names with the coefficients zip(feature_cols, lm.coef_) # predict for a new observation lm.predict([100, 25, 25]) # calculate the R-squared lm.score(X, y) ``` Note that **p-values** and **confidence intervals** are not (easily) accessible through scikit-learn. ## Handling Categorical Predictors with Two Categories Up to now, all of our predictors have been numeric. What if one of our predictors was categorical? Let's create a new feature called **Size**, and randomly assign observations to be **small or large**: ``` import numpy as np # set a seed for reproducibility np.random.seed(12345) # create a Series of booleans in which roughly half are True nums = np.random.rand(len(data)) mask_large = nums > 0.5 # initially set Size to small, then change roughly half to be large data['Size'] = 'small' data.loc[mask_large, 'Size'] = 'large' data.head() ``` For scikit-learn, we need to represent all data **numerically**. If the feature only has two categories, we can simply create a **dummy variable** that represents the categories as a binary value: ``` # create a new Series called IsLarge data['IsLarge'] = data.Size.map({'small':0, 'large':1}) data.head() ``` Let's redo the multiple linear regression and include the **IsLarge** predictor: ``` # create X and y feature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge'] X = data[feature_cols] y = data.Sales # instantiate, fit lm = LinearRegression() lm.fit(X, y) # print coefficients zip(feature_cols, lm.coef_) ``` How do we interpret the **IsLarge coefficient**? For a given amount of TV/Radio/Newspaper ad spending, being a large market is associated with an average **increase** in Sales of 57.42 widgets (as compared to a Small market, which is called the **baseline level**). 
What if we had reversed the 0/1 coding and created the feature 'IsSmall' instead? The coefficient would be the same, except it would be **negative instead of positive**. As such, your choice of category for the baseline does not matter, all that changes is your **interpretation** of the coefficient. ## Handling Categorical Predictors with More than Two Categories Let's create a new feature called **Area**, and randomly assign observations to be **rural, suburban, or urban**: ``` # set a seed for reproducibility np.random.seed(123456) # assign roughly one third of observations to each group nums = np.random.rand(len(data)) mask_suburban = (nums > 0.33) & (nums < 0.66) mask_urban = nums > 0.66 data['Area'] = 'rural' data.loc[mask_suburban, 'Area'] = 'suburban' data.loc[mask_urban, 'Area'] = 'urban' data.head() ``` We have to represent Area numerically, but we can't simply code it as 0=rural, 1=suburban, 2=urban because that would imply an **ordered relationship** between suburban and urban (and thus urban is somehow "twice" the suburban category). Instead, we create **another dummy variable**: ``` # create three dummy variables using get_dummies, then exclude the first dummy column area_dummies = pd.get_dummies(data.Area, prefix='Area').iloc[:, 1:] # concatenate the dummy variable columns onto the original DataFrame (axis=0 means rows, axis=1 means columns) data = pd.concat([data, area_dummies], axis=1) data.head() ``` Here is how we interpret the coding: - **rural** is coded as Area_suburban=0 and Area_urban=0 - **suburban** is coded as Area_suburban=1 and Area_urban=0 - **urban** is coded as Area_suburban=0 and Area_urban=1 Why do we only need **two dummy variables, not three?** Because two dummies captures all of the information about the Area feature, and implicitly defines rural as the baseline level. (In general, if you have a categorical feature with k levels, you create k-1 dummy variables.) If this is confusing, think about why we only needed one dummy variable for Size (IsLarge), not two dummy variables (IsSmall and IsLarge). Let's include the two new dummy variables in the model: ``` # create X and y feature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge', 'Area_suburban', 'Area_urban'] X = data[feature_cols] y = data.Sales # instantiate, fit lm = LinearRegression() lm.fit(X, y) # print coefficients zip(feature_cols, lm.coef_) ``` How do we interpret the coefficients? - Holding all other variables fixed, being a **suburban** area is associated with an average **decrease** in Sales of 106.56 widgets (as compared to the baseline level, which is rural). - Being an **urban** area is associated with an average **increase** in Sales of 268.13 widgets (as compared to rural). **A final note about dummy encoding:** If you have categories that can be ranked (i.e., strongly disagree, disagree, neutral, agree, strongly agree), you can potentially use a single dummy variable and represent the categories numerically (such as 1, 2, 3, 4, 5). ## What Didn't We Cover? - Detecting collinearity - Diagnosing model fit - Transforming predictors to fit non-linear relationships - Interaction terms - Assumptions of linear regression - And so much more! You could certainly go very deep into linear regression, and learn how to apply it really, really well. It's an excellent way to **start your modeling process** when working a regression problem. 
However, it is limited by the fact that it can only make good predictions if there is a **linear relationship** between the features and the response, which is why more complex methods (with higher variance and lower bias) will often outperform linear regression. Therefore, we want you to understand linear regression conceptually, understand its strengths and weaknesses, be familiar with the terminology, and know how to apply it. However, we also want to spend time on many other machine learning models, which is why we aren't going deeper here. ## Resources - To go much more in-depth on linear regression, read Chapter 3 of [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), from which this lesson was adapted. Alternatively, watch the [related videos](http://www.dataschool.io/15-hours-of-expert-machine-learning-videos/) or read my [quick reference guide](http://www.dataschool.io/applying-and-interpreting-linear-regression/) to the key points in that chapter. - To learn more about Statsmodels and how to interpret the output, DataRobot has some decent posts on [simple linear regression](http://www.datarobot.com/blog/ordinary-least-squares-in-python/) and [multiple linear regression](http://www.datarobot.com/blog/multiple-regression-using-statsmodels/). - This [introduction to linear regression](http://people.duke.edu/~rnau/regintro.htm) is much more detailed and mathematically thorough, and includes lots of good advice. - This is a relatively quick post on the [assumptions of linear regression](http://pareonline.net/getvn.asp?n=2&v=8).
github_jupyter
# Testing Configurations The behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences. **Prerequisites** * You should have read the [chapter on grammars](Grammars.ipynb). * You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb). ## Configuration Options When we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation. One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, are _configuration options_ on the command line. As an example, consider the `grep` utility to find textual patterns in files. The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option: ``` !grep --help ``` All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything. ## Options in Python Let us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`). By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integers` arguments added earlier. Special actions (`actions`) allow to store specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. 
The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed: ``` import argparse def process_numbers(args=[]): parser = argparse.ArgumentParser(description='Process some integers.') parser.add_argument('integers', metavar='N', type=int, nargs='+', help='an integer for the accumulator') group = parser.add_mutually_exclusive_group(required=True) group.add_argument('--sum', dest='accumulate', action='store_const', const=sum, help='sum the integers') group.add_argument('--min', dest='accumulate', action='store_const', const=min, help='compute the minimum') group.add_argument('--max', dest='accumulate', action='store_const', const=max, help='compute the maximum') args = parser.parse_args(args) print(args.accumulate(args.integers)) ``` Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum: ``` process_numbers(["--min", "100", "200", "300"]) ``` Or compute the sum of three numbers: ``` process_numbers(["--sum", "1", "2", "3"]) ``` When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator: ``` import fuzzingbook_utils from ExpectError import ExpectError with ExpectError(print_traceback=False): process_numbers(["--sum", "--max", "1", "2", "3"]) ``` ## A Grammar for Configurations How can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments: ``` from Grammars import crange, srange, convert_ebnf_grammar, is_valid_grammar, START_SYMBOL, new_symbol PROCESS_NUMBERS_EBNF_GRAMMAR = { "<start>": ["<operator> <integers>"], "<operator>": ["--sum", "--min", "--max"], "<integers>": ["<integer>", "<integers> <integer>"], "<integer>": ["<digit>+"], "<digit>": crange('0', '9') } assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR) PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR) ``` We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another: ``` from GrammarCoverageFuzzer import GrammarCoverageFuzzer f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10) for i in range(3): print(f.fuzz()) ``` Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`: ``` f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10) for i in range(3): args = f.fuzz().split() print(args) process_numbers(args) ``` In a similar way, we can define grammars for any program to be tested; as well as define grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_ ## Mining Configuration Options In this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. 
In the case of Python programs, this means using the `argparse` module. Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar. ### Tracking Arguments Let us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method). ``` import sys import string def traceit(frame, event, arg): if event != "call": return method_name = frame.f_code.co_name if method_name != "add_argument": return locals = frame.f_locals print(method_name, locals) ``` What we get is a list of all calls to `add_argument()`, together with the method arguments passed: ``` sys.settrace(traceit) process_numbers(["--sum", "1", "2", "3"]) sys.settrace(None) ``` From the `args` argument, we can access the individual options and arguments to be defined: ``` def traceit(frame, event, arg): if event != "call": return method_name = frame.f_code.co_name if method_name != "add_argument": return locals = frame.f_locals print(locals['args']) sys.settrace(traceit) process_numbers(["--sum", "1", "2", "3"]) sys.settrace(None) ``` We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_arguments()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters. ### A Grammar Miner for Options and Arguments Let us now build a class that gathers all this information to create a grammar. We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options: ``` class ParseInterrupt(Exception): pass ``` The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined: ``` class OptionGrammarMiner(object): def __init__(self, function, log=False): self.function = function self.log = log ``` The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form ``` <start> ::= <option>* <arguments> <option> ::= <empty> <arguments> ::= <empty> ``` in which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution. 
``` class OptionGrammarMiner(OptionGrammarMiner): OPTION_SYMBOL = "<option>" ARGUMENTS_SYMBOL = "<arguments>" def mine_ebnf_grammar(self): self.grammar = { START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL], self.OPTION_SYMBOL: [], self.ARGUMENTS_SYMBOL: [] } self.current_group = self.OPTION_SYMBOL old_trace = sys.settrace(self.traceit) try: self.function() except ParseInterrupt: pass sys.settrace(old_trace) return self.grammar def mine_grammar(self): return convert_ebnf_grammar(self.mine_ebnf_grammar()) ``` The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate. Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group. ``` class OptionGrammarMiner(OptionGrammarMiner): def traceit(self, frame, event, arg): if event != "call": return if "self" not in frame.f_locals: return self_var = frame.f_locals["self"] method_name = frame.f_code.co_name if method_name == "add_argument": in_group = repr(type(self_var)).find("Group") >= 0 self.process_argument(frame.f_locals, in_group) elif method_name == "add_mutually_exclusive_group": self.add_group(frame.f_locals, exclusive=True) elif method_name == "add_argument_group": # self.add_group(frame.f_locals, exclusive=False) pass elif method_name == "parse_args": raise ParseInterrupt return None ``` The `process_arguments()` now analyzes the arguments passed and adds them to the grammar: * If the argument starts with `-`, it gets added as an optional element to the `<option>` list * Otherwise, it gets added to the `<argument>` list. The optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as EBNF operator. Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job. ``` class OptionGrammarMiner(OptionGrammarMiner): def process_argument(self, locals, in_group): args = locals["args"] kwargs = locals["kwargs"] if self.log: print(args) print(kwargs) print() for arg in args: self.process_arg(arg, in_group, kwargs) class OptionGrammarMiner(OptionGrammarMiner): def process_arg(self, arg, in_group, kwargs): if arg.startswith('-'): if not in_group: target = self.OPTION_SYMBOL else: target = self.current_group metavar = None arg = " " + arg else: target = self.ARGUMENTS_SYMBOL metavar = arg arg = "" if "nargs" in kwargs: nargs = kwargs["nargs"] else: nargs = 1 param = self.add_parameter(kwargs, metavar) if param == "": nargs = 0 if isinstance(nargs, int): for i in range(nargs): arg += param else: assert nargs in "?+*" arg += '(' + param + ')' + nargs if target == self.OPTION_SYMBOL: self.grammar[target].append(arg) else: self.grammar[target].append(arg) ``` The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule. 
``` import inspect class OptionGrammarMiner(OptionGrammarMiner): def add_parameter(self, kwargs, metavar): if "action" in kwargs: # No parameter return "" type_ = "str" if "type" in kwargs: given_type = kwargs["type"] # int types come as '<class int>' if inspect.isclass(given_type) and issubclass(given_type, int): type_ = "int" if metavar is None: if "metavar" in kwargs: metavar = kwargs["metavar"] else: metavar = type_ self.add_type_rule(type_) if metavar != type_: self.add_metavar_rule(metavar, type_) param = " <" + metavar + ">" return param ``` The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility. ``` class OptionGrammarMiner(OptionGrammarMiner): def add_type_rule(self, type_): if type_ == "int": self.add_int_rule() else: self.add_str_rule() def add_int_rule(self): self.grammar["<int>"] = ["(-)?<digit>+"] self.grammar["<digit>"] = crange('0', '9') def add_str_rule(self): self.grammar["<str>"] = ["<char>+"] self.grammar["<char>"] = srange( string.digits + string.ascii_letters + string.punctuation) def add_metavar_rule(self, metavar, type_): self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"] ``` The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in ``` <start> ::= <group><option>* <arguments> <group> ::= <empty> ``` and filled with the next calls to `add_argument()` within the group. ``` class OptionGrammarMiner(OptionGrammarMiner): def add_group(self, locals, exclusive): kwargs = locals["kwargs"] if self.log: print(kwargs) required = kwargs.get("required", False) group = new_symbol(self.grammar, "<group>") if required and exclusive: group_expansion = group if required and not exclusive: group_expansion = group + "+" if not required and exclusive: group_expansion = group + "?" if not required and not exclusive: group_expansion = group + "*" self.grammar[START_SYMBOL][0] = group_expansion + \ self.grammar[START_SYMBOL][0] self.grammar[group] = [] self.current_group = group ``` That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon. ``` miner = OptionGrammarMiner(process_numbers, log=True) process_numbers_grammar = miner.mine_ebnf_grammar() ``` Here is the extracted grammar: ``` process_numbers_grammar ``` The grammar properly identifies the group found: ``` process_numbers_grammar["<start>"] process_numbers_grammar["<group>"] ``` It also identifies a `--help` option provided not by us, but by the `argparse` module: ``` process_numbers_grammar["<option>"] ``` The grammar also correctly identifies the types of the arguments: ``` process_numbers_grammar["<arguments>"] process_numbers_grammar["<integers>"] ``` The rules for `int` are set as defined by `add_int_rule()` ``` process_numbers_grammar["<int>"] ``` We can take this grammar and convert it to BNF, such that we can fuzz with it right away: ``` assert is_valid_grammar(process_numbers_grammar) grammar = convert_ebnf_grammar(process_numbers_grammar) assert is_valid_grammar(grammar) f = GrammarCoverageFuzzer(grammar) for i in range(10): print(f.fuzz()) ``` Each and every invocation adheres to the rules as set forth in the `argparse` calls. 
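As a quick sanity check (not part of the original text), one could feed a few of these fuzzed invocations back into `process_numbers()` itself and confirm that `argparse` accepts them. `SystemExit` is caught because `argparse` exits both on `--help` and on invocations it rejects:

```
for i in range(3):
    invocation = f.fuzz()
    try:
        process_numbers(invocation.split())
    except SystemExit:
        # raised by argparse on --help or on rejected invocations
        pass
```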
By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar. ## Testing Autopep8 Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`: ``` !autopep8 --help ``` ### Autopep8 Setup We want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable: ``` import os def find_executable(name): for path in os.get_exec_path(): qualified_name = os.path.join(path, name) if os.path.exists(qualified_name): return qualified_name return None autopep8_executable = find_executable("autopep8") assert autopep8_executable is not None autopep8_executable ``` Next, we build a function that reads the contents of the file and executes it. ``` def autopep8(): executable = find_executable("autopep8") # First line has to contain "/usr/bin/env python" or like first_line = open(executable).readline() assert first_line.find("python") >= 0 contents = open(executable).read() exec(contents) ``` ### Mining an Autopep8 Grammar We can use the `autopep8()` function in our grammar miner: ``` autopep8_miner = OptionGrammarMiner(autopep8) ``` and extract a grammar for it: ``` autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar() ``` This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but setting up the argument parser – which is what we are interested in. The grammar options mined reflect precisely the options seen when providing `--help`: ``` print(autopep8_ebnf_grammar["<option>"]) ``` Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type: ``` autopep8_ebnf_grammar["<line>"] ``` The grammar miner has inferred that the argument to `autopep8` is a list of files: ``` autopep8_ebnf_grammar["<arguments>"] ``` which in turn all are strings: ``` autopep8_ebnf_grammar["<files>"] ``` As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.) ``` autopep8_ebnf_grammar["<arguments>"] = [" <files>"] autopep8_ebnf_grammar["<files>"] = ["foo.py"] assert is_valid_grammar(autopep8_ebnf_grammar) ``` ### Creating Autopep8 Options Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar: ``` autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar) assert is_valid_grammar(autopep8_grammar) ``` And we can use the grammar for fuzzing all options: ``` f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4) for i in range(20): print(f.fuzz()) ``` Let us apply these options on the actual program. 
We need a file `foo.py` that will serve as input: ``` def create_foo_py(): open("foo.py", "w").write(""" def twice(x = 2): return x + x """) create_foo_py() print(open("foo.py").read(), end="") ``` We see how `autopep8` fixes the spacing: ``` !autopep8 foo.py ``` Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar. ``` from Fuzzer import ProgramRunner ``` Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.) ``` f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5) for i in range(20): invocation = "autopep8" + f.fuzz() print("$ " + invocation) args = invocation.split() autopep8 = ProgramRunner(args) result, outcome = autopep8.run() if result.stderr != "": print(result.stderr, end="") ``` Our `foo.py` file now has been formatted in place a number of times: ``` print(open("foo.py").read(), end="") ``` We don't need it anymore, so we clean up things: ``` import os os.remove("foo.py") ``` ## Classes for Fuzzing Configuration Options Let us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.") The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above. ``` class OptionRunner(ProgramRunner): def __init__(self, program, arguments=None): if isinstance(program, str): self.base_executable = program else: self.base_executable = program[0] self.find_contents() self.find_grammar() if arguments is not None: self.set_arguments(arguments) super().__init__(program) ``` First, we find the contents of the Python executable: ``` class OptionRunner(OptionRunner): def find_contents(self): self._executable = find_executable(self.base_executable) first_line = open(self._executable).readline() assert first_line.find("python") >= 0 self.contents = open(self._executable).read() def invoker(self): exec(self.contents) def executable(self): return self._executable ``` Next, we determine the grammar using the `OptionGrammarMiner` class: ``` class OptionRunner(OptionRunner): def find_grammar(self): miner = OptionGrammarMiner(self.invoker) self._ebnf_grammar = miner.mine_ebnf_grammar() def ebnf_grammar(self): return self._ebnf_grammar def grammar(self): return convert_ebnf_grammar(self._ebnf_grammar) ``` The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively. ``` class OptionRunner(OptionRunner): def set_arguments(self, args): self._ebnf_grammar["<arguments>"] = [" " + args] def set_invocation(self, program): self.program = program ``` We can instantiate the class on `autopep8` and immediately get the grammar: ``` autopep8_runner = OptionRunner("autopep8", "foo.py") print(autopep8_runner.ebnf_grammar()["<option>"]) ``` An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass. 
``` class OptionFuzzer(GrammarCoverageFuzzer): def __init__(self, runner, *args, **kwargs): assert issubclass(type(runner), OptionRunner) self.runner = runner grammar = runner.grammar() super().__init__(grammar, *args, **kwargs) ``` When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the now given (or previously set) runner with the arguments from the grammar. Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying it in another context. ``` class OptionFuzzer(OptionFuzzer): def run(self, runner=None, inp=""): if runner is None: runner = self.runner assert issubclass(type(runner), OptionRunner) invocation = runner.executable() + " " + self.fuzz() runner.set_invocation(invocation.split()) return runner.run(inp) ``` ### Example: Autopep8 Let us apply this on the `autopep8` runner: ``` autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5) for i in range(3): print(autopep8_fuzzer.fuzz()) ``` We can now systematically test `autopep8` with these classes: ``` autopep8_fuzzer.run(autopep8_runner) ``` ### Example: MyPy We can extract options for the `mypy` static type checker for Python: ``` assert find_executable("mypy") is not None mypy_runner = OptionRunner("mypy", "foo.py") print(mypy_runner.ebnf_grammar()["<option>"]) mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5) for i in range(10): print(mypy_fuzzer.fuzz()) ``` ### Example: Notedown Here's the configuration options for the `notedown` Notebook to Markdown converter: ``` assert find_executable("notedown") is not None notedown_runner = OptionRunner("notedown") print(notedown_runner.ebnf_grammar()["<option>"]) notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5) for i in range(10): print(notedown_fuzzer.fuzz()) ``` ## Combinatorial Testing Our `CoverageGrammarFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we also can see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only to cover every option individually, but also _combinations_ of options. The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs. ``` from itertools import combinations option_list = notedown_runner.ebnf_grammar()["<option>"] pairs = list(combinations(option_list, 2)) ``` There's quite a number of pairs: ``` len(pairs) print(pairs[:20]) ``` Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs. We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated. ``` def pairwise(option_list): return [option_1 + option_2 for (option_1, option_2) in combinations(option_list, 2)] ``` Here's the first 20 pairs: ``` print(pairwise(option_list)[:20]) ``` The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list. 
``` from copy import deepcopy notedown_grammar = notedown_runner.grammar() pairwise_notedown_grammar = deepcopy(notedown_grammar) pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"]) assert is_valid_grammar(pairwise_notedown_grammar) ``` Using the "pairwise" grammar to fuzz now covers one pair after another: ``` notedown_fuzzer = GrammarCoverageFuzzer( pairwise_notedown_grammar, max_nonterminals=4) for i in range(10): print(notedown_fuzzer.fuzz()) ``` Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering: ``` for combination_length in range(1, 20): tuples = list(combinations(option_list, combination_length)) print(combination_length, len(tuples)) ``` Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient $$ {n \choose k} = \frac{n!}{k!(n - k)!} $$ which for $k = 2$ (all pairs) gives us $$ {n \choose 2} = \frac{n!}{2(n - 2)!} = n \times (n - 1) $$ For `autopep8` with its 29 options... ``` len(autopep8_runner.ebnf_grammar()["<option>"]) ``` ... we thus need 812 tests to cover all pairs: ``` len(autopep8_runner.ebnf_grammar()["<option>"]) * \ (len(autopep8_runner.ebnf_grammar()["<option>"]) - 1) ``` For `mypy` with its 110 options, though, we already end up with 11,990 tests to be conducted: ``` len(mypy_runner.ebnf_grammar()["<option>"]) len(mypy_runner.ebnf_grammar()["<option>"]) * \ (len(mypy_runner.ebnf_grammar()["<option>"]) - 1) ``` Even if each pair takes a second to run, we'd still be done in three hours of testing, though. If your program has more options that you all want to get covered in combinations, it is advisable that you limit the number of configurations further – for instance by limiting combinatorial testing to those combinations that possibly can interact with each other; and covering all other (presumably orthogonal) options individually. This mechanism of creating configurations by extending grammars can be easily extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](#Exercises), below, have a number of options ready for you. ## Lessons Learned * Besides regular input data, program _configurations_ make an important testing target. * For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar. * To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets. 
## Next Steps If you liked the idea of mining a grammar from a program, do not miss: * [how to mine grammars for input data](GrammarMiner.ipynb) Our next steps in the book focus on: * [how to parse and recombine inputs](Parser.ipynb) * [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb) * [how to simplify inputs that cause a failure](Reducer.ipynb) ## Background Although configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}. More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files. ## Exercises ### Exercise 1: #ifdef Configuration Fuzzing In C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code ```C #ifdef LONG_FOO long foo() { ... } #else int foo() { ... } #endif ``` the compiler will compile the function `foo()` with return type`long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<variable>` or `-D<variable>=<value>`, as in `-DLONG_FOO`. Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library: ```c #if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32) # define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800 #endif #if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \ && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \ && !defined(XML_DEV_URANDOM) \ && !defined(_WIN32) \ && !defined(XML_POOR_ENTROPY) # error #endif #if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__) #define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */ #endif #ifdef XML_UNICODE_WCHAR_T #define XML_T(x) (const wchar_t)x #define XML_L(x) L ## x #else #define XML_T(x) (const unsigned short)x #define XML_L(x) x #endif int fun(int x) { return XML_T(x); } ``` A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments. Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps. 
#### Part 1: Extract Preprocessor Variables Write a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `ifdef_identifiers()` on the sample C input above, such that ```python cpp_identifiers(open("xmlparse.c").readlines()) ``` returns the set ```python {'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM', 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...} ``` **Solution.** Let us start with creating a sample input file, `xmlparse.c`: ``` filename = "xmlparse.c" open(filename, "w").write( """ #if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32) # define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800 #endif #if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \ && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \ && !defined(XML_DEV_URANDOM) \ && !defined(_WIN32) \ && !defined(XML_POOR_ENTROPY) # error #endif #if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__) #define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */ #endif #ifdef XML_UNICODE_WCHAR_T #define XML_T(x) (const wchar_t)x #define XML_L(x) L ## x #else #define XML_T(x) (const unsigned short)x #define XML_L(x) x #endif int fun(int x) { return XML_T(x); } """); ``` To find C preprocessor `#if` directives and preprocessor variables, we use regular expressions matching them. ``` import re re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if") re_cpp_identifier = re.compile(r"[a-zA-Z_$]+") def cpp_identifiers(lines): identifiers = set() for line in lines: if re_cpp_if_directive.match(line): identifiers |= set(re_cpp_identifier.findall(line)) # These are preprocessor keywords identifiers -= { "if", "ifdef", "ifndef", "defined" } return identifiers cpp_ids = cpp_identifiers(open("xmlparse.c").readlines()) cpp_ids ``` #### Part 2: Derive an Option Grammar With the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<variable>` for a preprocessor variable `<variable>`. Using this grammar `cpp_grammar`, a fuzzer ```python g = GrammarCoverageFuzzer(cpp_grammar) ``` would create C compiler invocations such as ```python [g.fuzz() for i in range(10)] ['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c', 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c', 'cc -DXML_POOR_ENTROPY xmlparse.c', 'cc -DRANDOM xmlparse.c', 'cc -D_WIN xmlparse.c', 'cc -DHAVE_ARC xmlparse.c', ...] ``` **Solution.** This is not very difficult: ``` from Grammars import new_symbol cpp_grammar = { "<start>": ["cc -c<options> " + filename], "<options>": ["<option>", "<options><option>"], "<option>": [] } for id in cpp_ids: s = new_symbol(cpp_grammar, "<" + id + ">") cpp_grammar["<option>"].append(s) cpp_grammar[s] = [" -D" + id] cpp_grammar assert is_valid_grammar(cpp_grammar) ``` #### Part 3: C Preprocessor Configuration Fuzzing Using the grammar just produced, use a `GrammarCoverageFuzzer` to 1. Test each processor variable individually 2. Test each pair of processor variables, using `pairwise()`. What happens if you actually run the invocations? **Solution.** We can simply run the coverage fuzzer, as described above. 
``` g = GrammarCoverageFuzzer(cpp_grammar) g.fuzz() from Fuzzer import ProgramRunner for i in range(10): invocation = g.fuzz() print("$", invocation) # subprocess.call(invocation, shell=True) cc_runner = ProgramRunner(invocation.split(' ')) (result, outcome) = cc_runner.run() print(result.stderr, end="") ``` To test all pairs, we can use `pairwise()`: ``` pairwise_cpp_grammar = deepcopy(cpp_grammar) pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"]) pairwise_cpp_grammar["<option>"][:10] for i in range(10): invocation = g.fuzz() print("$", invocation) # subprocess.call(invocation, shell=True) cc_runner = ProgramRunner(invocation.split(' ')) (result, outcome) = cc_runner.run() print(result.stderr, end="") ``` Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when actually, the type is not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above. At the end, don't forget to clean up: ``` os.remove("xmlparse.c") if os.path.exists("xmlparse.o"): os.remove("xmlparse.o") ``` ### Exercise 2: .ini Configuration Fuzzing Besides command-line options, another important source of configurations are _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files. The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html): ``` [DEFAULT] ServerAliveInterval = 45 Compression = yes CompressionLevel = 9 ForwardX11 = yes [bitbucket.org] User = hg [topsecret.server.com] Port = 50022 ForwardX11 = no ``` The above `ConfigParser` file can be created programmatically: ``` import configparser config = configparser.ConfigParser() config['DEFAULT'] = {'ServerAliveInterval': '45', 'Compression': 'yes', 'CompressionLevel': '9'} config['bitbucket.org'] = {} config['bitbucket.org']['User'] = 'hg' config['topsecret.server.com'] = {} topsecret = config['topsecret.server.com'] topsecret['Port'] = '50022' # mutates the parser topsecret['ForwardX11'] = 'no' # same here config['DEFAULT']['ForwardX11'] = 'yes' with open('example.ini', 'w') as configfile: config.write(configfile) with open('example.ini') as configfile: print(configfile.read(), end="") ``` and be read in again: ``` config = configparser.ConfigParser() config.read('example.ini') topsecret = config['topsecret.server.com'] topsecret['Port'] ``` #### Part 1: Read Configuration Using `configparser`, create a program reading in the above configuration file and accessing the individual elements. #### Part 2: Create a Configuration Grammar Design a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it. #### Part 3: Mine a Configuration Grammar By dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. 
To this end, create a subclass of `ConfigParser` with a special method `__getitem__`: ``` class TrackingConfigParser(configparser.ConfigParser): def __getitem__(self, key): print("Accessing", repr(key)) return super().__getitem__(key) ``` For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed: ``` tracking_config_parser = TrackingConfigParser() tracking_config_parser.read('example.ini') section = tracking_config_parser['topsecret.server.com'] ``` Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing. At the end, don't forget to clean up: ``` import os os.remove("example.ini") ``` **Solution.** Left to the reader. Enjoy! ### Exercise 3: Extracting and Fuzzing C Command-Line Options In C programs, the `getopt()` function are frequently used to process configuration options. A call ``` getopt(argc, argv, "bf:") ``` indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon). #### Part 1: Getopt Fuzzing Write a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this: 1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.) 2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run. 3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result. Apply this on `grep` and `ls`; report the resulting grammars and results. **Solution.** Left to the reader. Enjoy hacking! #### Part 2: Fuzzing Long Options in C Same as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separately defined structure. **Solution.** Left to the reader. Enjoy hacking! ### Exercise 4: Expansions in Context In our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol: ``` <option> ::= ... | --line-range <line> <line> | ... <line> ::= <int> <int> ::= (-)?<digit>+ <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 ``` ``` autopep8_runner.ebnf_grammar()["<line>"] autopep8_runner.ebnf_grammar()["<int>"] autopep8_runner.ebnf_grammar()["<digit>"] ``` Once the `GrammarCoverageFuzzer` has covered all variations of `<int>` (especially by covering all digits) for _one_ option, though, it will no longer strive to achieve such coverage for the next option. Yet, it could be desirable to achieve such coverage for each option separately. 
One way to achieve this with our existing `GrammarCoverageFuzzer` is again to change the grammar accordingly. The idea is to _duplicate_ expansions – that is, to replace an expansion of a symbol $s$ with a new symbol $s'$ whose definition is duplicated from $s$. This way, $s'$ and $s$ are separate symbols from a coverage point of view and would be independently covered. As an example, consider again the above `--line-range` option. If we want our tests to independently cover all elements of the two `<line>` parameters, we can duplicate the second `<line>` expansion into a new symbol `<line'>` with subsequent duplicated expansions: ``` <option> ::= ... | --line-range <line> <line'> | ... <line> ::= <int> <line'> ::= <int'> <int> ::= (-)?<digit>+ <int'> ::= (-)?<digit'>+ <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 <digit'> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 ``` Design a function `inline(grammar, symbol)` that returns a duplicate of `grammar` in which every occurrence of `<symbol>` and its expansions become separate copies. The above grammar could be a result of `inline(autopep8_runner.ebnf_grammar(), "<line>")`. When copying, expansions in the copy should also refer to symbols in the copy. Hence, when expanding `<int>` in ```<int> ::= <int><digit>``` make that ```<int> ::= <int><digit> <int'> ::= <int'><digit'> ``` (and not `<int'> ::= <int><digit'>` or `<int'> ::= <int><digit>`). Be sure to add precisely one new set of symbols for each occurrence in the original grammar, and not to expand further in the presence of recursion. **Solution.** Again, left to the reader. Enjoy!
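As a rough starting point (a simplified sketch, not the intended full solution), the helper below only creates primed copies of a symbol and of every nonterminal reachable from it; the helper name `duplicate_symbol()` and the `-1` suffix are choices made here, not part of the original chapter:

```
from copy import deepcopy
import re

RE_NONTERMINAL = re.compile(r"<[^<> ]+>")

def duplicate_symbol(grammar, symbol, suffix="-1"):
    # collect `symbol` and every nonterminal reachable from it
    reachable = set()

    def collect(sym):
        if sym in reachable or sym not in grammar:
            return
        reachable.add(sym)
        for expansion in grammar[sym]:
            for used in RE_NONTERMINAL.findall(expansion):
                collect(used)

    collect(symbol)

    def primed(sym):
        return sym[:-1] + suffix + ">" if sym in reachable else sym

    # copy the definitions; references inside the copies point at the copies,
    # so recursion stays within the primed symbols
    new_grammar = deepcopy(grammar)
    for sym in reachable:
        new_grammar[primed(sym)] = [
            RE_NONTERMINAL.sub(lambda m: primed(m.group(0)), expansion)
            for expansion in grammar[sym]
        ]
    return new_grammar
```

A complete `inline()` would additionally have to rewrite individual occurrences in the remaining expansions (for instance, the second `<line>` of `--line-range`) so that they refer to the primed copies.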
github_jupyter
# The Discrete Fourier Transform *This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Comunications Engineering, Universität Rostock. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* ## Fast Convolution The linear convolution of signals is a basic building block in many practical applications. The straightforward convolution of two finite-length signals $x[k]$ and $h[k]$ has considerable numerical complexity. This has led to the development of various algorithms that realize the convolution with lower complexity. The basic concept of the *fast convolution* is to exploit the [convolution theorem](theorems.ipynb#Convolution-Theorem) of the discrete Fourier transform (DFT). This theorem states that the periodic convolution of two signals is equal to a scalar multiplication of their spectra. The scalar multiplication has considerably less numerical operations that the convolution. The transformation of the signals can be performed efficiently by the [fast Fourier transform](fast_fourier_transform.ipynb) (FFT). Since the scalar multiplication of the spectra realizes a periodic convolution, special care has to be taken to realize a linear convolution in the spectral domain. The equivalence between linear and periodic convolution is discussed in the following. ### Equivalence of Linear and Periodic Convolution The [linear convolution](../discrete_systems/linear_convolution.ipynb#Finite-Length-Signals) of a causal signal $x_L[k]$ of length $L$ with a causal signal $h_N[k]$ of length $N$ reads \begin{equation} y[k] = x_L[k] * h_N[k] = \sum_{\kappa = 0}^{L-1} x_L[\kappa] \; h_N[k - \kappa] = \sum_{\kappa = 0}^{N-1} h_N[\kappa] \; x_L[k - \kappa] \end{equation} The resulting signal $y[k]$ is of finite length $M = N+L-1$. Without loss of generality it is assumed in the following that $N \leq L$. The computation of $y[k]$ for $k=0,1, \dots, M-1$ requires $M \cdot N$ multiplications and $M \cdot (N-1)$ additions. The computational complexity of the convolution is consequently [on the order of](https://en.wikipedia.org/wiki/Big_O_notation) $\mathcal{O}(M \cdot N)$. The periodic convolution of the two signals $x_L[k]$ and $h_N[k]$ is defined as \begin{equation} x_L[k] \circledast_P h_N[k] = \sum_{\kappa = 0}^{N-1} h_N[\kappa] \cdot \tilde{x}[k-\kappa] \end{equation} where $\tilde{x}[k]$ denotes the periodic summation of $x_L[k]$ with period $P$ \begin{equation} \tilde{x}[k] = \sum_{\nu = -\infty}^{\infty} x_L[k - \nu P] \end{equation} The result of the circular convolution is periodic with period $P$. To compute the linear convolution by a periodic convolution, one has to take care that the result of the linear convolution fits into one period of the periodic convolution. Hence, the periodicity has to be chosen as $P \geq M$ where $M = N+L-1$. 
This can be achieved by zero-padding of $x_L[k]$ to the total length $M$ resulting in the signal $x_M[k]$ of length $M$ which is defined as \begin{equation} x_M[k] = \begin{cases} x_L[k] & \text{for } 0 \leq k < L \\ 0 & \text{for } L \leq k < M \end{cases} \end{equation} and similar for $h_N[k]$ resulting in the zero-padded signal $h_M[k]$ which is defined as \begin{equation} h_M[k] = \begin{cases} h_N[k] & \text{for } 0 \leq k < N \\ 0 & \text{for } N \leq k < M \end{cases} \end{equation} Using these signals, the linear and periodic convolution are equivalent for the first $M$ samples $k = 0,1,\dots, M-1$ \begin{equation} x_L[k] * h_N[k] = x_M[k] \circledast_M h_M[k] \end{equation} #### Example The following example computes the linear, periodic and linear by periodic convolution of two signals $x[k] = \text{rect}_L[k]$ and $h[k] = \text{rect}_N[k]$. ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt from tools import cconv L = 8 # length of signal x[k] N = 10 # length of signal h[k] P = 14 # periodicity of periodic convolution # generate signals x = np.ones(L) h = np.ones(N) # linear convolution y1 = np.convolve(x, h, 'full') # periodic convolution y2 = cconv(x, h, P) # linear convolution via periodic convolution xp = np.append(x, np.zeros(N-1)) hp = np.append(h, np.zeros(L-1)) y3 = cconv(xp, hp, L+N-1) # plot results def plot_signal(x): plt.stem(x) plt.xlabel('$k$') plt.ylabel('$y[k]$') plt.xlim([0, N+L]) plt.gca().margins(y=0.1) plt.figure(figsize = (10, 8)) plt.subplot(3,1,1) plot_signal(y1) plt.title('Linear convolution') plt.subplot(3,1,2) plot_signal(y2) plt.title('Periodic convolution with period $P=%d$'%P) plt.subplot(3,1,3) plot_signal(y3) plt.title('Linear convolution as periodic convolution') plt.tight_layout() ``` **Exercise** * Change the lengths `L`, `N` and `P` and check how the results for the different convolutions change. ### The Fast Convolution Algorithm Using the above derived equality of the linear and periodic convolution one can express the linear convolution $y[k] = x_L[k] * h_N[k]$ by the DFT as $$ y[k] = \text{IDFT}_M \{ \; \text{DFT}_M\{ x_M[k] \} \cdot \text{DFT}_M\{ h_M[k] \} \; \} $$ The resulting algorithm is composed of the following steps 1. Zero-padding of the two input signals $x_L[k]$ and $h_N[k]$ to at least a total length of $M \geq N+L-1$ 2. Computation of the DFTs $X[\mu]$ and $H[\mu]$ using a FFT of length $M$ 3. Multiplication of the spectra $Y[\mu] = X[\mu] \cdot H[\mu]$ 4. Inverse DFT of $Y[\mu]$ using an inverse FFT of length $M$ The algorithm requires two DFTs of length $M$, $M$ complex multiplications and one IDFT of length $M$. On first sight this does not seem to be an improvement, since one DFT/IDFT requires $M^2$ complex multiplications and $M \cdot (M-1)$ complex additions. The overall numerical complexity is hence in the order of $\mathcal{O}(M^2)$. The DFT can be realized efficiently by the [fast Fourier transformation](fast_fourier_transform.ipynb) (FFT), which lowers the number of numerical operations for each DFT/IDFT significantly. The actual gain depends on the particular implementation of the FFT. Many FFTs are most efficient for lengths which are a power of two. It therefore can make sense, in terms of the number of numerical operations, to choose $M$ as a power of two instead of the shortest possible length $N+L-1$. In this case, the numerical complexity of the radix-2 algorithm is on the order of $\mathcal{O}(M \log_2 M)$. 
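If one wants to experiment with this choice of length, a small sketch (not part of the original notebook) can wrap the four steps into a function, round $M$ up to the next power of two, and verify the result against `numpy.convolve`:

```
import numpy as np

def fast_convolution(x, h):
    M = len(x) + len(h) - 1                # minimum length N + L - 1
    M2 = 2**int(np.ceil(np.log2(M)))       # next power of two
    X = np.fft.rfft(x, M2)                 # steps 1 and 2: zero-pad and transform
    H = np.fft.rfft(h, M2)
    Y = X * H                              # step 3: multiply the spectra
    y = np.fft.irfft(Y, M2)                # step 4: inverse transform
    return y[:M]                           # discard the padded tail

x = np.random.randn(8)
h = np.random.randn(10)
assert np.allclose(fast_convolution(x, h), np.convolve(x, h, 'full'))
```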
The introduced algorithm is known as *fast convolution* due to its computational efficiency when realized by the FFT. For real valued signals $x[k] \in \mathbb{R}$ and $h[k] \in \mathbb{R}$ the number of numerical operations can be reduced further by using a real valued FFT. #### Example The implementation of the fast convolution algorithm is straightforward. In the following example the fast convolution of two real-valued signals $x[k] = \text{rect}_L[k]$ and $h[k] = \text{rect}_N[k]$ is shown. The real valued FFT/IFFT is consequently used. Most implementations of the FFT include the zero-padding to a given length $M$, e.g as in `numpy` by `numpy.fft.rfft(x, M)`. ``` L = 8 # length of signal x[k] N = 10 # length of signal h[k] # generate signals x = np.ones(L) h = np.ones(N) # fast convolution M = N+L-1 y = np.fft.irfft(np.fft.rfft(x, M)*np.fft.rfft(h, M)) # show result plt.figure(figsize=(10, 3)) plt.stem(y) plt.xlabel('k') plt.ylabel('y[k]'); ``` ### Benchmark It was already argued that the numerical complexity of the fast convolution is considerably lower due to the usage of the FFT. As measure, the gain in terms of execution time with respect to the linear convolution is evaluated in the following. Both algorithms are executed for the convolution of two real-valued signals $x_L[k]$ and $h_N[k]$ of length $L=N=2^n$ for $n \in \mathbb{N}$. The length of the FFTs/IFFT was chosen as $M=2^{n+1}$. The results depend heavily on the implementation of the FFT and the hardware used. Note that the execution of the following script may take some time. ``` import timeit n = np.arange(17) # lengths = 2**n to evaluate reps = 20 # number of repetitions for timeit gain = np.zeros(len(n)) for N in n: length = 2**N # setup environment for timeit tsetup = 'import numpy as np; from numpy.fft import rfft, irfft; \ x=np.random.randn(%d); h=np.random.randn(%d)' % (length, length) # direct convolution tc = timeit.timeit('np.convolve(x, x, "full")', setup=tsetup, number=reps) # fast convolution tf = timeit.timeit('irfft(rfft(x, %d) * rfft(h, %d))' % (2*length, 2*length), setup=tsetup, number=reps) # speedup by using the fast convolution gain[N] = tc/tf # show the results plt.figure(figsize = (15, 10)) plt.barh(n-.5, gain, log=True) plt.plot([1, 1], [-1, n[-1]+1], 'r-') plt.yticks(n, 2**n) plt.xlabel('Gain of fast convolution') plt.ylabel('Length of signals') plt.title('Comparison of execution times between direct and fast convolution') plt.grid() ``` **Exercise** * For which lengths is the fast convolution faster than the linear convolution? * Why is it slower below a given signal length? * Is the trend of the gain as expected from above considerations? **Copyright** The notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by Sascha Spors.
github_jupyter
# Concise Implementation of Linear Regression :label:`sec_linear_concise` Broad and intense interest in deep learning for the past several years has inspired companies, academics, and hobbyists to develop a variety of mature open source frameworks for automating the repetitive work of implementing gradient-based learning algorithms. In :numref:`sec_linear_scratch`, we relied only on (i) tensors for data storage and linear algebra; and (ii) auto differentiation for calculating gradients. In practice, because data iterators, loss functions, optimizers, and neural network layers are so common, modern libraries implement these components for us as well. In this section, (**we will show you how to implement the linear regression model**) from :numref:`sec_linear_scratch` (**concisely by using high-level APIs**) of deep learning frameworks. ## Generating the Dataset To start, we will generate the same dataset as in :numref:`sec_linear_scratch`. ``` import numpy as np import torch from torch.utils import data from d2l import torch as d2l true_w = torch.tensor([2, -3.4]) true_b = 4.2 features, labels = d2l.synthetic_data(true_w, true_b, 1000) ``` ## Reading the Dataset Rather than rolling our own iterator, we can [**call upon the existing API in a framework to read data.**] We pass in `features` and `labels` as arguments and specify `batch_size` when instantiating a data iterator object. Besides, the boolean value `is_train` indicates whether or not we want the data iterator object to shuffle the data on each epoch (pass through the dataset). ``` def load_array(data_arrays, batch_size, is_train=True): #@save """Construct a PyTorch data iterator.""" dataset = data.TensorDataset(*data_arrays) return data.DataLoader(dataset, batch_size, shuffle=is_train) batch_size = 10 data_iter = load_array((features, labels), batch_size) ``` Now we can use `data_iter` in much the same way as we called the `data_iter` function in :numref:`sec_linear_scratch`. To verify that it is working, we can read and print the first minibatch of examples. Comparing with :numref:`sec_linear_scratch`, here we use `iter` to construct a Python iterator and use `next` to obtain the first item from the iterator. ``` next(iter(data_iter)) ``` ## Defining the Model When we implemented linear regression from scratch in :numref:`sec_linear_scratch`, we defined our model parameters explicitly and coded up the calculations to produce output using basic linear algebra operations. You *should* know how to do this. But once your models get more complex, and once you have to do this nearly every day, you will be glad for the assistance. The situation is similar to coding up your own blog from scratch. Doing it once or twice is rewarding and instructive, but you would be a lousy web developer if every time you needed a blog you spent a month reinventing the wheel. For standard operations, we can [**use a framework's predefined layers,**] which allow us to focus especially on the layers used to construct the model rather than having to focus on the implementation. We will first define a model variable `net`, which will refer to an instance of the `Sequential` class. The `Sequential` class defines a container for several layers that will be chained together. Given input data, a `Sequential` instance passes it through the first layer, in turn passing the output as the second layer's input and so forth. In the following example, our model consists of only one layer, so we do not really need `Sequential`. 
But since nearly all of our future models will involve multiple layers, we will use it anyway just to familiarize you with the most standard workflow. Recall the architecture of a single-layer network as shown in :numref:`fig_single_neuron`. The layer is said to be *fully-connected* because each of its inputs is connected to each of its outputs by means of a matrix-vector multiplication. In PyTorch, the fully-connected layer is defined in the `Linear` class. Note that we passed two arguments into `nn.Linear`. The first one specifies the input feature dimension, which is 2, and the second one is the output feature dimension, which is a single scalar and therefore 1. ``` # `nn` is an abbreviation for neural networks from torch import nn net = nn.Sequential(nn.Linear(2, 1)) ``` ## Initializing Model Parameters Before using `net`, we need to (**initialize the model parameters,**) such as the weights and bias in the linear regression model. Deep learning frameworks often have a predefined way to initialize the parameters. Here we specify that each weight parameter should be randomly sampled from a normal distribution with mean 0 and standard deviation 0.01. The bias parameter will be initialized to zero. As we have specified the input and output dimensions when constructing `nn.Linear`, now we can access the parameters directly to specify their initial values. We first locate the layer by `net[0]`, which is the first layer in the network, and then use the `weight.data` and `bias.data` methods to access the parameters. Next we use the replace methods `normal_` and `fill_` to overwrite parameter values. ``` net[0].weight.data.normal_(0, 0.01) net[0].bias.data.fill_(0) ``` ## Defining the Loss Function [**The `MSELoss` class computes the mean squared error (without the $1/2$ factor in :eqref:`eq_mse`).**] By default it returns the average loss over examples. ``` loss = nn.MSELoss() ``` ## Defining the Optimization Algorithm Minibatch stochastic gradient descent is a standard tool for optimizing neural networks and thus PyTorch supports it alongside a number of variations on this algorithm in the `optim` module. When we (**instantiate an `SGD` instance,**) we will specify the parameters to optimize over (obtainable from our net via `net.parameters()`), with a dictionary of hyperparameters required by our optimization algorithm. Minibatch stochastic gradient descent just requires that we set the value `lr`, which is set to 0.03 here. ``` trainer = torch.optim.SGD(net.parameters(), lr=0.03) ``` ## Training You might have noticed that expressing our model through high-level APIs of a deep learning framework requires comparatively few lines of code. We did not have to individually allocate parameters, define our loss function, or implement minibatch stochastic gradient descent. Once we start working with much more complex models, advantages of high-level APIs will grow considerably. However, once we have all the basic pieces in place, [**the training loop itself is strikingly similar to what we did when implementing everything from scratch.**] To refresh your memory: for some number of epochs, we will make a complete pass over the dataset (`train_data`), iteratively grabbing one minibatch of inputs and the corresponding ground-truth labels. For each minibatch, we go through the following ritual: * Generate predictions by calling `net(X)` and calculate the loss `l` (the forward propagation). * Calculate gradients by running the backpropagation. * Update the model parameters by invoking our optimizer. 
For good measure, we compute the loss after each epoch and print it to monitor progress. ``` num_epochs = 3 for epoch in range(num_epochs): for X, y in data_iter: l = loss(net(X) ,y) trainer.zero_grad() l.backward() trainer.step() l = loss(net(features), labels) print(f'epoch {epoch + 1}, loss {l:f}') ``` Below, we [**compare the model parameters learned by training on finite data and the actual parameters**] that generated our dataset. To access parameters, we first access the layer that we need from `net` and then access that layer's weights and bias. As in our from-scratch implementation, note that our estimated parameters are close to their ground-truth counterparts. ``` w = net[0].weight.data print('error in estimating w:', true_w - w.reshape(true_w.shape)) b = net[0].bias.data print('error in estimating b:', true_b - b) ``` ## Summary * Using PyTorch's high-level APIs, we can implement models much more concisely. * In PyTorch, the `data` module provides tools for data processing, the `nn` module defines a large number of neural network layers and common loss functions. * We can initialize the parameters by replacing their values with methods ending with `_`. ## Exercises 1. If we replace `nn.MSELoss(reduction='sum')` with `nn.MSELoss()`, how can we change the learning rate for the code to behave identically. Why? 1. Review the PyTorch documentation to see what loss functions and initialization methods are provided. Replace the loss by Huber's loss. 1. How do you access the gradient of `net[0].weight`? [Discussions](https://discuss.d2l.ai/t/45)
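For the Huber-loss and gradient-access exercises, a possible starting point (not from the original chapter; it reuses the `net`, `features`, `labels`, and `trainer` objects defined above):

```
# Huber-style loss: nn.SmoothL1Loss (newer PyTorch versions also provide nn.HuberLoss)
huber = nn.SmoothL1Loss()

trainer.zero_grad()
l = huber(net(features), labels)
l.backward()

# the gradient of the first layer's weight is available after backward()
net[0].weight.grad
```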
github_jupyter
``` import numpy as np import matplotlib.pyplot as plt import pandas as pd pd.set_option('display.float_format', lambda x: '%.4f' % x) import seaborn as sns sns.set_context("paper", font_scale=1.3) sns.set_style('white') import warnings warnings.filterwarnings('ignore') from time import time import matplotlib.ticker as tkr from scipy import stats from statsmodels.tsa.stattools import adfuller from sklearn import preprocessing from statsmodels.tsa.stattools import pacf %matplotlib inline import math import keras from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from keras.layers import Dropout from keras.layers import * from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error from keras.callbacks import EarlyStopping df=pd.read_csv('3contrat.csv') print('Number of rows and columns:', df.shape) df.head(5) df.dtypes df.head(5) df['Time'] = pd.to_datetime(df['Time']) df['year'] = df['Time'].apply(lambda x: x.year) df['quarter'] = df['Time'].apply(lambda x: x.quarter) df['month'] = df['Time'].apply(lambda x: x.month) df['day'] = df['Time'].apply(lambda x: x.day) df=df.loc[:,['Time','Close', 'year','quarter','month','day']] df.sort_values('Time', inplace=True, ascending=True) df = df.reset_index(drop=True) df["weekday"]=df.apply(lambda row: row["Time"].weekday(),axis=1) df["weekday"] = (df["weekday"] < 5).astype(int) print('The time series starts from: ', df.Time.min()) print('The time series ends on: ', df.Time.max()) stat, p = stats.normaltest(df.Close) print('Statistics=%.3f, p=%.3f' % (stat, p)) alpha = 0.05 if p > alpha: print('Data looks Gaussian (fail to reject H0)') else: print('Data does not look Gaussian (reject H0)') sns.distplot(df.Close); print( 'Kurtosis of normal distribution: {}'.format(stats.kurtosis(df.Close))) print( 'Skewness of normal distribution: {}'.format(stats.skew(df.Close))) ``` Kurtosis: describes heaviness of the tails of a distribution. If the kurtosis is less than zero, then the distribution is light tails. Skewness: measures asymmetry of the distribution. If the skewness is between -0.5 and 0.5, the data are fairly symmetrical. If the skewness is between -1 and — 0.5 or between 0.5 and 1, the data are moderately skewed. If the skewness is less than -1 or greater than 1, the data are highly skewed. 
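A small helper (not in the original notebook) makes the skewness rule of thumb above explicit in code:

```
def interpret_skewness(s):
    # thresholds as described above
    if abs(s) < 0.5:
        return 'fairly symmetrical'
    elif abs(s) <= 1:
        return 'moderately skewed'
    else:
        return 'highly skewed'

interpret_skewness(stats.skew(df.Close))
```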
``` df1=df.loc[:,['Time','Close']] df1.set_index('Time',inplace=True) df1.plot(figsize=(12,5)) plt.ylabel('Close') plt.legend().set_visible(False) plt.tight_layout() plt.title('Close Price Time Series') sns.despine(top=True) plt.show(); plt.figure(figsize=(14,5)) plt.subplot(1,2,1) plt.subplots_adjust(wspace=0.2) sns.boxplot(x="year", y="Close", data=df) plt.xlabel('year') plt.title('Box plot of Yearly Close Price') sns.despine(left=True) plt.tight_layout() plt.subplot(1,2,2) sns.boxplot(x="quarter", y="Close", data=df) plt.xlabel('quarter') plt.title('Box plot of Quarterly Close Price') sns.despine(left=True) plt.tight_layout(); plt.figure(figsize=(14,6)) plt.subplot(1,2,1) df['Close'].hist(bins=50) plt.title('Close Price Distribution') plt.subplot(1,2,2) stats.probplot(df['Close'], plot=plt); df1.describe().T df.index = df.Time fig = plt.figure(figsize=(18,16)) fig.subplots_adjust(hspace=.4) ax1 = fig.add_subplot(5,1,1) ax1.plot(df['Close'].resample('D').mean(),linewidth=1) ax1.set_title('Mean Close Price resampled over day') ax1.tick_params(axis='both', which='major') ax2 = fig.add_subplot(5,1,2, sharex=ax1) ax2.plot(df['Close'].resample('W').mean(),linewidth=1) ax2.set_title('Mean Close Price resampled over week') ax2.tick_params(axis='both', which='major') ax3 = fig.add_subplot(5,1,3, sharex=ax1) ax3.plot(df['Close'].resample('M').mean(),linewidth=1) ax3.set_title('Mean Close Price resampled over month') ax3.tick_params(axis='both', which='major') ax4 = fig.add_subplot(5,1,4, sharex=ax1) ax4.plot(df['Close'].resample('Q').mean(),linewidth=1) ax4.set_title('Mean Close Price resampled over quarter') ax4.tick_params(axis='both', which='major') ax5 = fig.add_subplot(5,1,5, sharex=ax1) ax5.plot(df['Close'].resample('A').mean(),linewidth=1) ax5.set_title('Mean Close Price resampled over year') ax5.tick_params(axis='both', which='major'); plt.figure(figsize=(14,8)) plt.subplot(2,2,1) df.groupby('year').Close.agg('mean').plot() plt.xlabel('') plt.title('Mean Close Price by Year') plt.subplot(2,2,2) df.groupby('quarter').Close.agg('mean').plot() plt.xlabel('') plt.title('Mean Close Price by Quarter') plt.subplot(2,2,3) df.groupby('month').Close.agg('mean').plot() plt.xlabel('') plt.title('Mean Close Price by Month') plt.subplot(2,2,4) df.groupby('day').Close.agg('mean').plot() plt.xlabel('') plt.title('Mean Close Price by Day'); pd.pivot_table(df.loc[df['year'] != 2017], values = "Close", columns = "year", index = "month").plot(subplots = True, figsize=(12, 12), layout=(3, 5), sharey=True); dic={0:'Weekend',1:'Weekday'} df['Day'] = df.weekday.map(dic) a=plt.figure(figsize=(9,4)) plt1=sns.boxplot('year','Close',hue='Day',width=0.6,fliersize=3, data=df) a.legend(loc='upper center', bbox_to_anchor=(0.5, 1.00), shadow=True, ncol=2) sns.despine(left=True, bottom=True) plt.xlabel('') plt.tight_layout() plt.legend().set_visible(False); plt1=sns.factorplot('year','Close',hue='Day', data=df, size=4, aspect=1.5, legend=False) plt.title('Factor Plot of Close Price by Weekday') plt.tight_layout() sns.despine(left=True, bottom=True) plt.legend(loc='upper right'); df2=df1.resample('D', how=np.mean) def test_stationarity(timeseries): rolmean = timeseries.rolling(window=30).mean() rolstd = timeseries.rolling(window=30).std() plt.figure(figsize=(14,5)) sns.despine(left=True) orig = plt.plot(timeseries, color='blue',label='Original') mean = plt.plot(rolmean, color='red', label='Rolling Mean') std = plt.plot(rolstd, color='black', label = 'Rolling Std') plt.legend(loc='best'); plt.title('Rolling Mean & Standard 
Deviation') plt.show() print ('<Results of Dickey-Fuller Test>') dftest = adfuller(timeseries, autolag='AIC') dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used']) for key,value in dftest[4].items(): dfoutput['Critical Value (%s)'%key] = value print(dfoutput) test_stationarity(df2.Close.dropna()) ``` ### Dickey-Fuller test Null Hypothesis (H0): It suggests the time series has a unit root, meaning it is non-stationary. It has some time dependent structure. Alternate Hypothesis (H1): It suggests the time series does not have a unit root, meaning it is stationary. It does not have time-dependent structure. p-value > 0.05: Accept the null hypothesis (H0), the data has a unit root and is non-stationary. ``` dataset = df.Close.values #numpy.ndarray dataset = dataset.astype('float32') #arrary of close price dataset = np.reshape(dataset, (-1, 1)) #make each close price a list [839,],[900,] scaler = MinMaxScaler(feature_range=(0, 1)) dataset = scaler.fit_transform(dataset) # 80% 20% split test set and training set train_size = int(len(dataset) * 0.80) # 396 test_size = len(dataset) - train_size # 99 train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:] def create_dataset(dataset, look_back=1): X, Y = [], [] for i in range(len(dataset)-look_back-1): a = dataset[i:(i+look_back), 0] X.append(a) Y.append(dataset[i + look_back, 0]) return np.array(X), np.array(Y) look_back = 7 X_train, Y_train = create_dataset(train, look_back) # training X_test, Y_test = create_dataset(test, look_back) # testing create_dataset(train, look_back) # reshape input to be [samples, time steps, features] X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1])) X_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1])) X_train.shape X_train create_dataset(train, look_back) X_train.shape model = Sequential() model.add(LSTM(100, input_shape=(X_train.shape[1], X_train.shape[2]))) model.add(Dropout(0.2)) model.add(Dense(1)) model.compile(loss='mean_absolute_error', optimizer='adam') history = model.fit(X_train, Y_train, epochs=120, batch_size=15, validation_data=(X_test, Y_test), callbacks=[EarlyStopping(monitor='val_loss', patience=10)], verbose=1, shuffle=False) model.summary() # train data #make prediction train_predict = model.predict(X_train) test_predict = model.predict(X_test) # invert predictions train_predict = scaler.inverse_transform(train_predict) Y_train = scaler.inverse_transform([Y_train]) test_predict = scaler.inverse_transform(test_predict) Y_test = scaler.inverse_transform([Y_test]) print('Train Mean Absolute Error:', mean_absolute_error(Y_train[0], train_predict[:,0])) print('Train Root Mean Squared Error:',np.sqrt(mean_squared_error(Y_train[0], train_predict[:,0]))) print('Test Mean Absolute Error:', mean_absolute_error(Y_test[0], test_predict[:,0])) print('Test Root Mean Squared Error:',np.sqrt(mean_squared_error(Y_test[0], test_predict[:,0]))) Y_test plt.figure(figsize=(8,4)) plt.plot(history.history['loss'], label='Train Loss') plt.plot(history.history['val_loss'], label='Test Loss') plt.title('model loss') plt.ylabel('loss') plt.xlabel('epochs') plt.legend(loc='upper right') plt.show(); aa=[x for x in range(48)] plt.figure(figsize=(8,4)) plt.plot(aa, Y_test[0][:48], marker='.', label="actual") plt.plot(aa, test_predict[:,0][:48], 'r', label="prediction") # plt.tick_params(left=False, labelleft=True) #remove ticks plt.tight_layout() sns.despine(top=True) plt.subplots_adjust(left=0.07) plt.ylabel('Close', 
size=15) plt.xlabel('Time step', size=15) plt.legend(fontsize=15) plt.show(); Y_test ```
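The cells above stop at scoring the test split. As an optional follow-up (not in the original notebook), here is a minimal sketch of how the fitted model could produce a one-step-ahead forecast from the most recent `look_back` window; it simply reuses the `dataset`, `scaler`, `model`, and `look_back` objects defined above and assumes the earlier imports are still in scope.

```
# one-step-ahead forecast from the last `look_back` scaled closes (sketch only)
last_window = dataset[-look_back:, 0]                     # most recent scaled values
last_window = np.reshape(last_window, (1, 1, look_back))  # [samples, time steps, features]

next_scaled = model.predict(last_window)                  # prediction in scaled units
next_close = scaler.inverse_transform(next_scaled)[0, 0]  # back to price units
print('forecast for the next step:', next_close)
```

A rolling multi-step forecast would repeat this while appending each prediction to the window, but note that errors compound quickly with that approach.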
github_jupyter
``` import os os.environ['CUDA_VISIBLE_DEVICES'] = '' os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/home/husein/t5/prepare/mesolitica-tpu.json' import malaya_speech.train.model.conformer as conformer import malaya_speech.train.model.transducer as transducer import malaya_speech import tensorflow as tf import numpy as np import json from glob import glob import pandas as pd subwords = malaya_speech.subword.load('transducer-singlish.subword') featurizer = malaya_speech.tf_featurization.STTFeaturizer( normalize_per_feature = True ) n_mels = 80 sr = 16000 maxlen = 18 minlen_text = 1 def mp3_to_wav(file, sr = sr): audio = AudioSegment.from_file(file) audio = audio.set_frame_rate(sr).set_channels(1) sample = np.array(audio.get_array_of_samples()) return malaya_speech.astype.int_to_float(sample), sr def generate(file): print(file) with open(file) as fopen: audios = json.load(fopen) for i in range(len(audios)): try: audio = audios[i][0] wav_data, _ = malaya_speech.load(audio, sr = sr) if (len(wav_data) / sr) > maxlen: # print(f'skipped audio too long {audios[i]}') continue if len(audios[i][1]) < minlen_text: # print(f'skipped text too short {audios[i]}') continue t = malaya_speech.subword.encode( subwords, audios[i][1], add_blank = False ) back = np.zeros(shape=(2000,)) front = np.zeros(shape=(200,)) wav_data = np.concatenate([front, wav_data, back], axis=-1) yield { 'waveforms': wav_data, 'targets': t, 'targets_length': [len(t)], } except Exception as e: print(e) def preprocess_inputs(example): s = featurizer.vectorize(example['waveforms']) mel_fbanks = tf.reshape(s, (-1, n_mels)) length = tf.cast(tf.shape(mel_fbanks)[0], tf.int32) length = tf.expand_dims(length, 0) example['inputs'] = mel_fbanks example['inputs_length'] = length example.pop('waveforms', None) example['targets'] = tf.cast(example['targets'], tf.int32) example['targets_length'] = tf.cast(example['targets_length'], tf.int32) return example def get_dataset( file, batch_size = 3, shuffle_size = 20, thread_count = 24, maxlen_feature = 1800, ): def get(): dataset = tf.data.Dataset.from_generator( generate, { 'waveforms': tf.float32, 'targets': tf.int32, 'targets_length': tf.int32, }, output_shapes = { 'waveforms': tf.TensorShape([None]), 'targets': tf.TensorShape([None]), 'targets_length': tf.TensorShape([None]), }, args = (file,), ) dataset = dataset.prefetch(tf.contrib.data.AUTOTUNE) dataset = dataset.map( preprocess_inputs, num_parallel_calls = thread_count ) dataset = dataset.padded_batch( batch_size, padded_shapes = { 'inputs': tf.TensorShape([None, n_mels]), 'inputs_length': tf.TensorShape([None]), 'targets': tf.TensorShape([None]), 'targets_length': tf.TensorShape([None]), }, padding_values = { 'inputs': tf.constant(0, dtype = tf.float32), 'inputs_length': tf.constant(0, dtype = tf.int32), 'targets': tf.constant(0, dtype = tf.int32), 'targets_length': tf.constant(0, dtype = tf.int32), }, ) return dataset return get dev_dataset = get_dataset('test-set-imda.json', batch_size = 3)() features = dev_dataset.make_one_shot_iterator().get_next() features training = True config = malaya_speech.config.conformer_base_encoder_config config['dropout'] = 0.0 conformer_model = conformer.Model( kernel_regularizer = None, bias_regularizer = None, **config ) decoder_config = malaya_speech.config.conformer_base_decoder_config decoder_config['embed_dropout'] = 0.0 transducer_model = transducer.rnn.Model( conformer_model, vocabulary_size = subwords.vocab_size, **decoder_config ) targets_length = features['targets_length'][:, 0] v = 
tf.expand_dims(features['inputs'], -1) z = tf.zeros((tf.shape(features['targets'])[0], 1), dtype = tf.int32) c = tf.concat([z, features['targets']], axis = 1) logits = transducer_model([v, c, targets_length + 1], training = training) decoded = transducer_model.greedy_decoder(v, features['inputs_length'][:, 0], training = training) decoded sess = tf.Session() sess.run(tf.global_variables_initializer()) var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES) saver = tf.train.Saver(var_list = var_list) saver.restore(sess, 'asr-base-conformer-transducer-singlish/model.ckpt-800000') wer, cer = [], [] index = 0 while True: try: r = sess.run([decoded, features['targets']]) for no, row in enumerate(r[0]): d = malaya_speech.subword.decode(subwords, row[row > 0]) t = malaya_speech.subword.decode(subwords, r[1][no]) wer.append(malaya_speech.metrics.calculate_wer(t, d)) cer.append(malaya_speech.metrics.calculate_cer(t, d)) index += 1 except Exception as e: break np.mean(wer), np.mean(cer) for no, row in enumerate(r[0]): d = malaya_speech.subword.decode(subwords, row[row > 0]) t = malaya_speech.subword.decode(subwords, r[1][no]) print(no, d) print(t) print() ```
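The evaluation loop above averages `malaya_speech.metrics.calculate_wer` and `calculate_cer` over the decoded test set. For readers unfamiliar with those metrics, here is a small reference sketch of word error rate as a normalized edit distance; the actual malaya-speech implementation may differ in details, so treat this as an illustration rather than the library's code.

```
def edit_distance(ref_tokens, hyp_tokens):
    # classic dynamic-programming Levenshtein distance over tokens
    d = [[0] * (len(hyp_tokens) + 1) for _ in range(len(ref_tokens) + 1)]
    for i in range(len(ref_tokens) + 1):
        d[i][0] = i
    for j in range(len(hyp_tokens) + 1):
        d[0][j] = j
    for i in range(1, len(ref_tokens) + 1):
        for j in range(1, len(hyp_tokens) + 1):
            cost = 0 if ref_tokens[i - 1] == hyp_tokens[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1]

def simple_wer(reference, hypothesis):
    # word error rate = word-level edit distance / number of reference words
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / max(len(ref_words), 1)

print(simple_wer('this is a test', 'this is test'))  # 0.25: one deletion out of four reference words
```

Character error rate follows the same idea with characters instead of words.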
github_jupyter
``` import datafaucet as dfc # start the engine project = dfc.project.load() spark = dfc.context() df = spark.range(100) df.data.grid() (df .cols.get('name').obscure(alias='enc') .cols.get('enc').unravel(alias='dec') ).data.grid() df.data.grid().groupby(['id', 'name'])\ .agg({'fight':[max, 'min'], 'trade': 'count'}).stack(0) from pyspark.sql import functions as F df.cols.get('groupby('id', 'name')\ .agg({'fight':[F.max, 'min'], 'trade': 'min'}).data.grid() df.cols.groupby('id', 'name')\ .agg({'fight':[F.max, 'min'], 'trade': 'count'}, stack=True).data.grid() from pyspark.sql import functions as F df.groupby('id', 'name').agg( F.lit('fight').alias('colname'), F.min('fight').alias('min'), F.max('fight').alias('max'), F.lit(None).alias('count')).union( df.groupby('id', 'name').agg( F.lit('trade').alias('colname'), F.lit(None).alias('min'), F.lit(None).alias('max'), F.count('trade').alias('count')) ).data.grid() def string2func(func): if isinstance(func, str): f = A.all.get(func) if f: return (func,f) else: raise ValueError(f'function {func} not found') elif isinstance(func, (type(lambda x: x), type(max))): return (func.__name__, func) else: raise ValueError('Invalid aggregation function') def parse_single_func(func): if isinstance(func, (str, type(lambda x: x), type(max))): return string2func(func) elif isinstance(func, (tuple)): if len(func)==2: return (func[0], string2func(func[1])[1]) else: raise ValueError('Invalid list/tuple') else: raise ValueError(f'Invalid aggregation item {func}') def parse_list_func(func): func = [func] if type(func)!=list else func return [parse_single_func(x) for x in func] def parse_dict_func(func): func = {0: func} if not isinstance(func, dict) else func return {x[0]:parse_list_func(x[1]) for x in func.items()} lst = [ F.max, 'max', ('maxx', F.max), ('maxx', 'max'), ['max', F.max, ('maxx', F.max)], {'a': F.max}, {'a': 'max'}, {'a': ('maxx', F.max)}, {'a': ('maxx', 'max')}, {'a': ['max', F.max, ('maxx', F.max)]}, {'a': F.max, 'b': F.max}, {'a': 'max', 'b': 'max'}, {'a': ('maxx', F.max), 'b': ('maxx', F.max)}, {'a': ('maxx', 'max'), 'b': ('maxx', 'max')}, {'a': ['max', F.max, ('maxx', F.max)], 'b': ['min', F.min, ('minn', F.min)]} ] for i in lst: print('=====') print(i) funcs = parse_dict_func(i) all_cols = set() for k, v in funcs.items(): all_cols = all_cols.union(( x[0] for x in v )) print('all_cols:', all_cols) for c in ['a', 'b']: print('-----', c, '-----') agg_funcs = funcs.get(0, funcs.get(c)) if agg_funcs is None: continue agg_cols = set([x[0] for x in agg_funcs]) null_cols = all_cols - agg_cols print('column',c) print('all ',all_cols) print('agg ',agg_cols) print('null ', null_cols) for n,f in agg_funcs: print(c, n,f) df.cols.groupby('id', 'name').agg({ 'fight':['sum', 'min', 'max'], 'trade':['max', 'count']}).data.grid() pdf = df.data.grid() help(pdf.agg) # hash / rand columns which you wish to protect during ingest df = (df .cols.find('greedy').rand() .cols.get('name').hashstr(salt='foobar') .rows.sample(3) ) df.data.grid() from pyspark.sql import functions as F df.cols.agg({'type':'type', 'sample':'first'}).data.grid() df.save('races', 'minio') dfc.list('minio', 'races').data.grid() ```
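The `parse_*` helpers above normalize several aggregation-spec shapes into a `{column: [(name, function), ...]}` mapping; note that the string branch relies on an `A.all` registry that is not defined in this notebook. As a rough usage sketch, the parsed spec could be turned into ordinary PySpark aggregation expressions as below, assuming `df` still exposes the `id`, `name`, `fight`, and `trade` columns used in the cells above (this is illustrative, not part of the datafaucet API).

```
# sketch: drive a plain PySpark agg() from the parsed spec
# (passes pyspark functions directly to avoid the undefined `A.all` string registry)
parsed = parse_dict_func({'fight': [F.min, F.max], 'trade': F.count})

exprs = [fn(col).alias(f'{col}_{name}')
         for col, pairs in parsed.items()
         for name, fn in pairs]

df.groupby('id', 'name').agg(*exprs).data.grid()
```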
github_jupyter
# Sonar - Decentralized Model Training Simulation (local) DISCLAIMER: This is a proof-of-concept implementation. It does not represent a remotely product ready implementation or follow proper conventions for security, convenience, or scalability. It is part of a broader proof-of-concept demonstrating the vision of the OpenMined project, its major moving parts, and how they might work together. # Getting Started: Installation ##### Step 1: install IPFS - https://ipfs.io/docs/install/ ##### Step 2: Turn on IPFS Daemon Execute on command line: > ipfs daemon ##### Step 3: Install Ethereum testrpc - https://github.com/ethereumjs/testrpc ##### Step 4: Turn on testrpc with 1000 initialized accounts (each with some money) Execute on command line: > testrpc -a 1000 ##### Step 5: install openmined/sonar and all dependencies (truffle) ##### Step 6: Locally Deploy Smart Contracts in openmined/sonar From the OpenMined/Sonar repository root run > truffle compile > truffle migrate you should see something like this when you run migrate: ``` Using network 'development'. Running migration: 1_initial_migration.js Deploying Migrations... Migrations: 0xf06039885460a42dcc8db5b285bb925c55fbaeae Saving successful migration to network... Saving artifacts... Running migration: 2_deploy_contracts.js Deploying ConvertLib... ConvertLib: 0x6cc86f0a80180a491f66687243376fde45459436 Deploying ModelRepository... ModelRepository: 0xe26d32efe1c573c9f81d68aa823dcf5ff3356946 Linking ConvertLib to MetaCoin Deploying MetaCoin... MetaCoin: 0x6d3692bb28afa0eb37d364c4a5278807801a95c5 ``` The address after 'ModelRepository' is something you'll need to copy paste into the code below when you initialize the "ModelRepository" object. In this case the address to be copy pasted is `0xe26d32efe1c573c9f81d68aa823dcf5ff3356946`. ##### Step 7: execute the following code # The Simulation: Diabetes Prediction In this example, a diabetes research center (Cure Diabetes Inc) wants to train a model to try to predict the progression of diabetes based on several indicators. They have collected a small sample (42 patients) of data but it's not enough to train a model. So, they intend to offer up a bounty of $5,000 to the OpenMined commmunity to train a high quality model. As it turns out, there are 400 diabetics in the network who are candidates for the model (are collecting the relevant fields). In this simulation, we're going to faciliate the training of Cure Diabetes Inc incentivizing these 400 anonymous contributors to train the model using the Ethereum blockchain. Note, in this simulation we're only going to use the sonar and syft packages (and everything is going to be deployed locally on a test blockchain). Future simulations will incorporate mine and capsule for greater anonymity and automation. 
### Imports and Convenience Functions ``` import warnings import numpy as np import phe as paillier from sonar.contracts import ModelRepository,Model from syft.he.paillier.keys import KeyPair from syft.nn.linear import LinearClassifier from sklearn.datasets import load_diabetes def get_balance(account): return repo.web3.fromWei(repo.web3.eth.getBalance(account),'ether') warnings.filterwarnings('ignore') ``` ### Setting up the Experiment ``` # for the purpose of the simulation, we're going to split our dataset up amongst # the relevant simulated users diabetes = load_diabetes() y = diabetes.target X = diabetes.data validation = (X[0:5],y[0:5]) anonymous_diabetes_users = (X[6:],y[6:]) # we're also going to initialize the model trainer smart contract, which in the # real world would already be on the blockchain (managing other contracts) before # the simulation begins # ATTENTION: copy paste the correct address (NOT THE DEFAULT SEEN HERE) from truffle migrate output. repo = ModelRepository('0x6c7a23081b37e64adc5500c12ee851894d9fd500', ipfs_host='localhost', web3_host='localhost') # blockchain hosted model repository # we're going to set aside 10 accounts for our 42 patients # Let's go ahead and pair each data point with each patient's # address so that we know we don't get them confused patient_addresses = repo.web3.eth.accounts[1:10] anonymous_diabetics = list(zip(patient_addresses, anonymous_diabetes_users[0], anonymous_diabetes_users[1])) # we're going to set aside 1 account for Cure Diabetes Inc cure_diabetes_inc = repo.web3.eth.accounts[1] ``` ## Step 1: Cure Diabetes Inc Initializes a Model and Provides a Bounty ``` pubkey,prikey = KeyPair().generate(n_length=1024) diabetes_classifier = LinearClassifier(desc="DiabetesClassifier",n_inputs=10,n_labels=1) initial_error = diabetes_classifier.evaluate(validation[0],validation[1]) diabetes_classifier.encrypt(pubkey) diabetes_model = Model(owner=cure_diabetes_inc, syft_obj = diabetes_classifier, bounty = 1, initial_error = initial_error, target_error = 10000 ) model_id = repo.submit_model(diabetes_model) ``` ## Step 2: An Anonymous Patient Downloads the Model and Improves It ``` model_id model = repo[model_id] diabetic_address,input_data,target_data = anonymous_diabetics[0] repo[model_id].submit_gradient(diabetic_address,input_data,target_data) ``` ## Step 3: Cure Diabetes Inc. Evaluates the Gradient ``` repo[model_id] old_balance = get_balance(diabetic_address) print(old_balance) new_error = repo[model_id].evaluate_gradient(cure_diabetes_inc,repo[model_id][0],prikey,pubkey,validation[0],validation[1]) new_error new_balance = get_balance(diabetic_address) incentive = new_balance - old_balance print(incentive) ``` ## Step 4: Rinse and Repeat ``` model for i,(addr, input, target) in enumerate(anonymous_diabetics): try: model = repo[model_id] # patient is doing this model.submit_gradient(addr,input,target) # Cure Diabetes Inc does this old_balance = get_balance(addr) new_error = model.evaluate_gradient(cure_diabetes_inc,model[i+1],prikey,pubkey,validation[0],validation[1],alpha=2) print("new error = "+str(new_error)) incentive = round(get_balance(addr) - old_balance,5) print("incentive = "+str(incentive)) except: "Connection Reset" ```
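As a small optional addition (not part of the original simulation), the payouts from Step 4 can be summarized by snapshotting balances before the loop and diffing them afterwards. The sketch below assumes the snapshot cell is executed before the training loop runs.

```
# hypothetical bookkeeping sketch: run this *before* the Step 4 loop ...
starting_balances = {addr: get_balance(addr) for addr, _, _ in anonymous_diabetics}

# ... and this *after* the loop to see who was rewarded and by how much
payouts = {addr: get_balance(addr) - bal for addr, bal in starting_balances.items()}
rewarded = {addr: round(amt, 5) for addr, amt in payouts.items() if amt > 0}
print("patients rewarded:", len(rewarded))
print("total incentives paid (ETH):", round(sum(payouts.values()), 5))
```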
github_jupyter
# NSCI 801 - Quantitative Neuroscience ## Reproducibility, reliability, validity Gunnar Blohm ### Outline * statistical considerations * multiple comparisons * exploratory analyses vs hypothesis testing * Open Science * general steps toward transparency * pre-registration / registered report * Open science vs. patents ### Multiple comparisons In [2009, Bennett et al.](https://teenspecies.github.io/pdfs/NeuralCorrelates.pdf) studied the brain of a salmon using fMRI and found significant activation despite the salmon being dead... (IgNobel Prize 2012) Why did they find this? They imaged 140 volumes (samples) of the brain and ran a standard preprocessing pipeline, including spatial realignment, co-registration of functional and anatomical volumes, and 8mm full-width at half maximum (FWHM) Gaussian smoothing. They computed voxel-wise statistics. <img style="float: center; width:750px;" src="stuff/salmon.png"> This is a prime example of what's known as the **multiple comparison problem**! “the problem that occurs when one considers a set of statistical inferences simultaneously or infers a subset of parameters selected based on the observed values” (Wikipedia) * the problem that arises when running a large number of statistical tests in the same experiment * the more tests we do, the higher the probability of obtaining at least one test with statistical significance ### Probability(false positive) = f(number comparisons) If you repeat a statistical test over and over again, the false positive ($FP$) rate ($P$) evolves as follows: $$P(FP)=1-(1-\alpha)^N$$ * $\alpha$ is the confidence level for each individual test (e.g. 0.05) * $N$ is the number of comparisons Let's see how this works... ``` import numpy as np import matplotlib.pyplot as plt import seaborn as sns from scipy import stats plt.style.use('dark_background') ``` Let's create some random data... ``` rvs = stats.norm.rvs(loc=0, scale=10, size=1000) sns.displot(rvs) ``` Now let's run a t-test to see if it's different from 0 ``` statistic, pvalue = stats.ttest_1samp(rvs, 0) print(pvalue) ``` Now let's do this many times for different samples, e.g. different voxels of our salmon... ``` def t_test_function(alp, N): """computes t-test statistics on N random samples and returns number of significant tests""" counter = 0 for i in range(N): rvs = stats.norm.rvs(loc=0, scale=10, size=1000) statistic, pvalue = stats.ttest_1samp(rvs, 0) if pvalue <= alp: counter = counter + 1 print(counter) return counter N = 100 counter = t_test_function(0.05, N) print("The false positive rate was", counter/N*100, "%") ``` Well, we wanted an $\alpha=0.05$, so what's the problem? The problem is that we have hugely increased the likelihood of finding something significant by chance! (**p-hacking**) Take the above example: * running 100 independent tests with $\alpha=0.05$ resulted in a few positives * well, that's good, right? Now we can see if there is a story here we can publish... * dead salmon! * remember, our data was just noise!!! There was NO signal! This is why we have corrections for multiple comparisons that adjust the p-value so that the **overall chance** to find a false positive stays at $\alpha$! Why does this matter? ### Exploratory analyses vs hypothesis testing Why do we distinguish between them? <img style="float: center; width:750px;" src="stuff/ExploreConfirm1.png"> But in science, confirmatory analyses that are hypothesis-driven are often much more valued. There is a temptation to frame *exploratory* analyses as *confirmatory*...
**This leads to disaster!!!** * science is not solid * replication crisis (psychology, social science, medicine, marketing, economics, sports science, etc, etc...) * shaken trust in science <img style="float: center; width:750px;" src="stuff/crisis.jpeg"> ([Baker 2016](https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970)) ### Quick excursion: survivorship bias "Survivorship bias or survival bias is the logical error of concentrating on the people or things that made it past some selection process and overlooking those that did not, typically because of their lack of visibility." (Wikipedia) <img style="float: center; width:750px;" src="stuff/SurvivorshipBias.png"> **How does survivorship bias affect neuroscience?** Think about it... E.g. * people select neurons to analyze * profs say it's absolutely achievable to become a prof Just keep it in mind... ### Open science - transparency Open science can hugely help increase transparency in many different ways so that findings and data can be evaluated for what they are: * publish data acquisition protocol and code: increases data reproducibility & credibility * publish data: data get second, third, etc... lives * publish data processing / analyses: increases reproducibility of results * publish figure code and stats: increases reproducibility and credibility of conclusions * pre-register hypotheses and analyses: ensures *confirmatory* analyses are not *exploratory* (HARKing) For more info, see NSCI800 lectures about Open Science: [OS1](http://www.compneurosci.com/NSCI800/OpenScienceI.pdf), [OS2](http://www.compneurosci.com/NSCI800/OpenScienceII.pdf) ### Pre-registration / registered reports <img style="float:right; width:500px;" src="stuff/RR.png"> * IPA (in-principle acceptance) guarantees publication * If original methods are followed * Main conclusions need to come from originally proposed analyses * Does not prevent exploratory analyses * Need to be labeled as such [https://Cos.io/rr](https://Cos.io/rr) Please follow **Stage 1** instructions of [the registered report instructions from eNeuro](https://www.eneuro.org/sites/default/files/additional_assets/pdf/eNeuro%20Registered%20Reports%20Author%20Guidelines.pdf) for the course evaluation... Questions??? ### Open science vs. patents The goal of Open Science is to share all aspects of research with the public! * because knowledge should be freely available * because the public paid for the science to happen in the first place However, this prevents patenting scientific results! * this is good for science, because patents obstruct research * prevents full privatization of research: research driven by companies is biased by private interest Turns out open science is good for business! * more people contribute * wider adoption * e.g. Github = Microsoft, Android = Google, etc * better for society * e.g. nonprofit pharma **Why are patents still a thing?** Well, some people think it's an outdated and morally corrupt concept. * goal: maximum profit * enabler: capitalism * victims: general public Think about it and decide for yourself what to do with your research!!! ### THANK YOU!!! <img style="float:center; width:750px;" src="stuff/empower.jpg">
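Addendum to the multiple-comparisons section above (an added illustration, not part of the original slides): a minimal sketch of how a Bonferroni correction keeps the family-wise false positive rate near $\alpha$, reusing the `t_test_function` defined earlier.

```
# Bonferroni correction: test each of the N comparisons at alpha / N,
# so the probability of *any* false positive across the whole family stays near alpha
N = 100
alpha = 0.05
counter = t_test_function(alpha / N, N)
print("significant tests after Bonferroni correction:", counter, "out of", N)
```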
github_jupyter
<a href="https://colab.research.google.com/github/krmiddlebrook/intro_to_deep_learning/blob/master/machine_learning/mini_lessons/image_data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Processing Image Data Computer vision is a field of machine learning that trains computers to interpret and understand the visual world. It is one of the most popular fields in deep learning (neural networks). In computer vision, it is common to use digital images from cameras and videos to train models to accurately identify and classify objects. Before we can solve computer vision tasks, it is important to understand how to handle image data. To this end, we will demonstrate how to process (prepare) image data for machine learning models. We will use the MNIST digits dataset, which is provided by Kera Datasets--a collection of ready-to-use datasets for machine learning. All datasets are available through the `tf.keras.datasets` API endpoint. Here is the lesson roadmap: - Load the dataset - Visualize the data - Transform the data - Normalize the data ``` # TensorFlow and tf.keras and TensorFlow datasets import tensorflow as tf from tensorflow import keras # Commonly used modules import numpy as np # Images, plots, display, and visualization import matplotlib.pyplot as plt ``` # Load the dataset When we want to solve a problem with machine learning methods, the first step is almost always to find a good dataset. As we mentioned above, we will retrieve the MNIST dataset using the `tf.keras.datasets` module. The MNIST dataset contains 70k grayscale images of handwritten digits (i.e., numbers between 0 and 9). Let's load the dataset into our notebook. ``` # the data, split between train and test sets (train_features, train_labels), (test_features, test_labels) = keras.datasets.mnist.load_data() print(f"training set shape: {train_features.shape}") print(f"test set shape: {test_features.shape}") print(f'dtypes of training and test set tensors: {train_features.dtype}, {test_features.dtype}') ``` We see that TensorFlow Datasets takes care of most of the processing we need to do. The `training_features` object tells us that there are 60k training images, and the `test_features` indicates there are 10k test images, so 70k total. We also see that the images are tensors of shape ($28 \times 28$) with integers of type uint8. ## Visualize the dataset Now that we have the dataset, let's visualize some samples. We will use the matplotlib plotting framework to display the images. Here are the first 5 images in the training dataset. ``` plt.figure(figsize=(10, 10)) for i in range(5): ax = plt.subplot(3, 3, i + 1) plt.imshow(train_features[i], cmap=plt.cm.binary) plt.title(int(train_labels[i])) plt.axis("off") ``` The above images give us a sense of the data, including samples belonging to different classes. # Transforming the data Before we start transforming data, let's discuss *tensors*--a key part of the machine learning (ML) process, particularly for deep learning methods. As we learned in previous lessons, data, whether it be categorical or numerical in nature, is converted to a numerical representation. This process makes the data useful for machine learning models. In deep learning (neural networks), the numerical data is often stored in objects called *tensors*. A tensor is a container that can house data in $N$ dimensions. ML researchers sometimes use the term "tensor" and "matrix" interchangeably because a matrix is a 2-dimensional tensor. 
But, tensors are generalizations of matrices to $N$-dimensional space. <figure> <img src='https://www.kdnuggets.com/wp-content/uploads/scalar-vector-matrix-tensor.jpg' width='75%'> <figcaption>A scalar, vector ($2 \times 1$), matrix ($2 \times 1$), and tensor ($2 \times 2 \times 2$) .</figcaption> </figure> ``` # a (2 x 2 x 2) tensor my_tensor = np.array([ [[1, 2], [3, 2]], [[1, 7],[5, 4]] ]) print('my_tensor shape:', my_tensor.shape) ``` Now let's discuss how images are stored in tensors. Computer screens are composed of pixels. Each pixel generates three colors of light (red, green, and blue) and the different colors we see are due to different combinations and intensities of these three primary colors. <figure> <img src='https://www.chem.purdue.edu/gchelp/cchem/RGBColors/BlackWhiteGray.gif' width='75%'> <figcaption>The colors black, white, and gray with a sketch of a pixel from each.</figcaption> </figure> We use tensors to store the pixel intensities for a given image. Colorized pictures have 3 different *channels*. Each channel contains a matrix that represents the intensity values that correspond to the pixels of a particular color (red, green, and blue; RGB for short). For instance, consider a small colorized $28 \times 28$ pixel image of a dog. Because the dog image is colorize, it has 3 channels, so its tensor shape is ($28 \times 28 \times 3$). Let's have a look at the shape of the images in the MNIST dataset. ``` train_features[0, :, :].shape ``` Using the `train_features.shape` method, we can extract the image shape and see that images are in the tensor shape $28 \times 28$. The returned shape has no 3rd dimension, this indicates that we are working with grayscale images. By grayscale, we mean the pixels don't have intensities for red, green, and blue channels but rather for one grayscale channel, which describes an image using combinations of various shades of gray. Pixel intensities range between $0$ and $255$, and in our case, they correspond to black $0$ to white $255$. Now let's reshape the images into $784 \times 1$ dimensional tensors. We call converting an image into an $n \times 1$ tensor "flattening" the tensor. ``` # get a subset of 5 images from the dataset original_shape = train_features.shape # Flatten the images. input_shape = (-1, 28*28) train_features = train_features.reshape(input_shape) test_features = test_features.reshape(input_shape) print(f'original shape: {original_shape}, flattened shape: {train_features.shape}') ``` We flattened all the images by using the NumPy `reshape` method. Since one shape dimension can be -1, and we may not always know the number of samples in the dataset we used $(-1,784)$ as the parameters to `reshape`. In our example, this means that each $28 \times 28$ image gets flattened into a $28 \cdot 28 = 784$ feature array. Then the images are stacked (because of the -1) to produce a final large tensor with shape $(\text{num samples}, 784$). # Normalize the data Another important transformation technique is *normalization*. We normalize data before training the model with it to encourage the model to learn generalizable features, which should lead to better results on unseen data. At a high level, normalization makes the data more, well...normal. There are various ways to normalize data. Perhaps the most common normalization approach for image data is to subtract the mean pixel value and divide by the standard deviation (this method is applied to every pixel). 
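For intuition, here is a tiny self-contained illustration of that standardization step on a made-up array of pixel values (the next cells apply the same idea to the full MNIST tensors):

```
# toy example: subtract the mean and divide by the standard deviation
toy_pixels = np.array([0., 51., 102., 204., 255.])
standardized = (toy_pixels - toy_pixels.mean()) / toy_pixels.std()
print(standardized)
print(round(standardized.mean(), 4), round(standardized.std(), 4))  # approximately 0.0 and 1.0
```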
Before we can do any normalization, we have to cast the "uint8" tensors to the "float32" numeric type. ``` # convert to float32 type train_features = train_features.astype('float32') test_features = test_features.astype('float32') ``` Now we can normalize the data. We should mention that you always use the training set data to calculate normalization statistics such as the mean and standard deviation. Consequently, the test set is always normalized with the training set statistics. ``` # normalize the reshaped images mean = train_features.mean() std = train_features.std() train_features -= mean train_features /= std test_features -= mean test_features /= std print(f'pre-normalization mean and std: {round(mean, 4)}, {round(std, 4)}') print(f'normalized images mean and std: {round(train_features.mean(), 4)}, {round(train_features.std(), 4)}') ``` As the output above indicates, the normalized pixel values are now centered around 0 (i.e., mean = 0) and have a standard deviation of 1. # Summary In this lesson we learned: - Keras offers ready-to-use datasets. - Images are represented by *tensors*. - Tensors can be transformed (reshaped) and normalized easily using NumPy (or any other framework that supports tensor operations).
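As an optional closing check (a sketch that assumes the cells above have been run), you can undo the normalization and flattening for a single sample and display it, confirming the preprocessing did not corrupt the image:

```
# invert the preprocessing for the first training sample and display it
restored = train_features[0] * std + mean   # undo the standardization
restored = restored.reshape(28, 28)         # undo the flattening
plt.imshow(restored, cmap=plt.cm.binary)
plt.title(int(train_labels[0]))
plt.axis("off")
plt.show()
```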
github_jupyter