# Isolation Forest (IF) outlier detector deployment Wrap a scikit-learn Isolation Forest python model for use as a prediction microservice in seldon-core and deploy on seldon-core running on minikube or a Kubernetes cluster using GCP. ## Dependencies - [helm](https://github.com/helm/helm) - [minikube](https://github.com/kubernetes/minikube) - [s2i](https://github.com/openshift/source-to-image) >= 1.1.13 python packages: - scikit-learn: pip install scikit-learn --> 0.20.1 ## Task The outlier detector needs to detect computer network intrusions using TCP dump data for a local-area network (LAN) simulating a typical U.S. Air Force LAN. A connection is a sequence of TCP packets starting and ending at some well defined times, between which data flows to and from a source IP address to a target IP address under some well defined protocol. Each connection is labeled as either normal, or as an attack. There are 4 types of attacks in the dataset: - DOS: denial-of-service, e.g. syn flood; - R2L: unauthorized access from a remote machine, e.g. guessing password; - U2R: unauthorized access to local superuser (root) privileges; - probing: surveillance and other probing, e.g., port scanning. The dataset contains about 5 million connection records. There are 3 types of features: - basic features of individual connections, e.g. duration of connection - content features within a connection, e.g. number of failed log in attempts - traffic features within a 2 second window, e.g. number of connections to the same host as the current connection The outlier detector is only using 40 out of 41 features. ## Train locally Train on small dataset where you roughly know the fraction of outliers, defined by the "contamination" parameter. ``` # define columns to keep cols=['duration','protocol_type','flag','src_bytes','dst_bytes','land', 'wrong_fragment','urgent','hot','num_failed_logins','logged_in', 'num_compromised','root_shell','su_attempted','num_root','num_file_creations', 'num_shells','num_access_files','num_outbound_cmds','is_host_login', 'is_guest_login','count','srv_count','serror_rate','srv_serror_rate', 'rerror_rate','srv_rerror_rate','same_srv_rate','diff_srv_rate', 'srv_diff_host_rate','dst_host_count','dst_host_srv_count','dst_host_same_srv_rate', 'dst_host_diff_srv_rate','dst_host_same_src_port_rate','dst_host_srv_diff_host_rate', 'dst_host_serror_rate','dst_host_srv_serror_rate','dst_host_rerror_rate', 'dst_host_srv_rerror_rate','target'] cols_str = str(cols) !python train.py \ --dataset 'kddcup99' \ --samples 50000 \ --keep_cols "$cols_str" \ --contamination .1 \ --n_estimators 100 \ --max_samples .8 \ --max_features 1. \ --save_path './models/' ``` ## Test using Kubernetes cluster on GCP or Minikube Run the outlier detector as a model or a transformer. If you want to run the anomaly detector as a transformer, change the SERVICE_TYPE variable from MODEL to TRANSFORMER [here](./.s2i/environment), set MODEL = False and change ```OutlierIsolationForest.py``` to: ```python from CoreIsolationForest import CoreIsolationForest class OutlierIsolationForest(CoreIsolationForest): """ Outlier detection using Isolation Forests. Parameters ---------- threshold (float) : anomaly score threshold; scores below threshold are outliers """ def __init__(self,threshold=0.,load_path='./models/'): super().__init__(threshold=threshold, load_path=load_path) ``` ``` MODEL = True ``` Pick Kubernetes cluster on GCP or Minikube. 
``` MINIKUBE = True if MINIKUBE: !minikube start --memory 4096 else: !gcloud container clusters get-credentials standard-cluster-1 --zone europe-west1-b --project seldon-demos ``` Create a cluster-wide cluster-admin role assigned to a service account named “default” in the namespace “kube-system”. ``` !kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin \ --serviceaccount=kube-system:default !kubectl create namespace seldon ``` Add current context details to the configuration file in the seldon namespace. ``` !kubectl config set-context $(kubectl config current-context) --namespace=seldon ``` Create tiller service account and give it a cluster-wide cluster-admin role. ``` !kubectl -n kube-system create sa tiller !kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller !helm init --service-account tiller ``` Check deployment rollout status and deploy seldon/spartakus helm charts. ``` !kubectl rollout status deploy/tiller-deploy -n kube-system !helm install ../../../helm-charts/seldon-core-operator --name seldon-core --set usage_metrics.enabled=true --namespace seldon-system ``` Check deployment rollout status for seldon core. ``` !kubectl rollout status deploy/seldon-controller-manager -n seldon-system ``` Install Ambassador API gateway ``` !helm install stable/ambassador --name ambassador --set crds.keep=false !kubectl rollout status deployment.apps/ambassador ``` If Minikube used: create docker image for outlier detector inside Minikube using s2i. Besides the transformer image and the demo specific model image, the general model image for the Isolation Forest outlier detector is also available from Docker Hub as ***seldonio/outlier-if-model:0.1***. ``` if MINIKUBE & MODEL: !eval $(minikube docker-env) && \ s2i build . seldonio/seldon-core-s2i-python3:0.4 seldonio/outlier-if-model-demo:0.1 elif MINIKUBE: !eval $(minikube docker-env) && \ s2i build . seldonio/seldon-core-s2i-python3:0.4 seldonio/outlier-if-transformer:0.1 ``` Install outlier detector helm charts either as a model or transformer and set *threshold* hyperparameter value. ``` if MODEL: !helm install ../../../helm-charts/seldon-od-model \ --name outlier-detector \ --namespace=seldon \ --set model.type=isolationforest \ --set model.isolationforest.image.name=seldonio/outlier-if-model-demo:0.1 \ --set model.isolationforest.threshold=0 \ --set oauth.key=oauth-key \ --set oauth.secret=oauth-secret \ --set replicas=1 else: !helm install ../../../helm-charts/seldon-od-transformer \ --name outlier-detector \ --namespace=seldon \ --set outlierDetection.enabled=true \ --set outlierDetection.name=outlier-if \ --set outlierDetection.type=isolationforest \ --set outlierDetection.isolationforest.image.name=seldonio/outlier-if-transformer:0.1 \ --set outlierDetection.isolationforest.threshold=0 \ --set oauth.key=oauth-key \ --set oauth.secret=oauth-secret \ --set model.image.name=seldonio/mock_classifier:1.0 ``` ## Port forward Ambassador Run command in terminal: ``` kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080 ``` ## Import rest requests, load data and test requests ``` from utils import get_payload, rest_request_ambassador, send_feedback_rest, get_kdd_data, generate_batch data = get_kdd_data(keep_cols=cols,percent10=True) # load dataset print(data.shape) ``` Generate a random batch from the data ``` import numpy as np samples = 1 fraction_outlier = 0. 
X, labels = generate_batch(data,samples,fraction_outlier) print(X.shape) print(labels.shape) ``` Test the rest requests with the generated data. It is important that the order of requests is respected. First we make predictions, then we get the "true" labels back using the feedback request. If we do not respect the order and eg keep making predictions without getting the feedback for each prediction, there will be a mismatch between the predicted and "true" labels. This will result in errors in the produced metrics. ``` request = get_payload(X) response = rest_request_ambassador("outlier-detector","seldon",request,endpoint="localhost:8003") ``` If the outlier detector is used as a transformer, the output of the anomaly detection is added as part of the metadata. If it is used as a model, we send model feedback to retrieve custom performance metrics. ``` if MODEL: send_feedback_rest("outlier-detector","seldon",request,response,0,labels,endpoint="localhost:8003") ``` ## Analytics Install the helm charts for prometheus and the grafana dashboard ``` !helm install ../../../helm-charts/seldon-core-analytics --name seldon-core-analytics \ --set grafana_prom_admin_password=password \ --set persistence.enabled=false \ --namespace seldon ``` ## Port forward Grafana dashboard Run command in terminal: ``` kubectl port-forward $(kubectl get pods -n seldon -l app=grafana-prom-server -o jsonpath='{.items[0].metadata.name}') -n seldon 3000:3000 ``` You can then view an analytics dashboard inside the cluster at http://localhost:3000/dashboard/db/prediction-analytics?refresh=5s&orgId=1. Your IP address may be different. get it via minikube ip. Login with: Username : admin password : password (as set when starting seldon-core-analytics above) Import the outlier-detector-if dashboard from ../../../helm-charts/seldon-core-analytics/files/grafana/configs. ## Run simulation - Sample random network intrusion data with a certain outlier probability. - Get payload for the observation. - Make a prediction. - Send the "true" label with the feedback if the detector is run as a model. It is important that the prediction-feedback order is maintained. Otherwise there will be a mismatch between the predicted and "true" labels. View the progress on the grafana "Outlier Detection" dashboard. Most metrics need the outlier detector to be run as a model since they need model feedback. ``` import time n_requests = 100 samples = 1 for i in range(n_requests): fraction_outlier = .1 X, labels = generate_batch(data,samples,fraction_outlier) request = get_payload(X) response = rest_request_ambassador("outlier-detector","seldon",request,endpoint="localhost:8003") if MODEL: send_feedback_rest("outlier-detector","seldon",request,response,0,labels,endpoint="localhost:8003") time.sleep(1) if MINIKUBE: !minikube delete ```
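For reference, the core scikit-learn fit that the `train.py` call above presumably wraps can be sketched as follows. This is an illustrative assumption, not the actual contents of `train.py`; `X_train` stands for the preprocessed KDD Cup '99 feature matrix and the save path/filename is hypothetical.

```python
import joblib
import numpy as np
from sklearn.ensemble import IsolationForest

# hyperparameters mirror the train.py flags above
clf = IsolationForest(
    contamination=0.1,   # --contamination .1
    n_estimators=100,    # --n_estimators 100
    max_samples=0.8,     # --max_samples .8
    max_features=1.0,    # --max_features 1.
)
clf.fit(X_train)  # X_train: assumed preprocessed feature matrix (40 columns)

# scores below the threshold (0 in the wrapper above) are flagged as outliers
scores = clf.decision_function(X_train)
print('fraction flagged as outliers:', np.mean(scores < 0))

joblib.dump(clf, './models/model.joblib')  # hypothetical filename under --save_path
```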
``` from IPython.display import Latex # Latex(r"""\begin{eqnarray} \large # Z_{n+1} = Z_{n}^(-e^(Z_{n}^p)^(e^(Z_{n}^p)^(-e^(Z_{n}^p)^(e^(Z_{n}^p)^(-e^(Z_{n}^p)))))) # \end{eqnarray}""") ``` # Parameterized machine learning algo: ## tanh(Z) = (a exp(Z) - b exp(-Z)) / (c exp(Z) + d exp(-Z)) ### with parameters a,b,c,d s.t. ad - bc = 1 Sequential iteration of difference equation: Z = ``` import warnings warnings.filterwarnings('ignore') import os import sys import numpy as np import time from IPython.display import display sys.path.insert(1, '../src'); import z_plane as zp import graphic_utility as gu; import itergataters as ig import numcolorpy as ncp def rnd_lambda(s=1): """ random parameters s.t. a*d - b*c = 1 """ b = np.random.random() c = np.random.random() ad = b*c + 1 a = np.random.random() d = ad / a lamb0 = {'a': a, 'b': b, 'c': c, 'd': d} lamb0 = np.array([a, b, c, d]) * s return lamb0 def tanh_lmbd(Z, p, Z0=None, ET=None): """ Z = starfish_ish(Z, p) Args: Z: a real or complex number p: a real of complex number Returns: Z: the result (complex) """ Zp = np.exp(Z) Zm = np.exp(-Z) return (p[0] * Zp - p[1] * Zm) / (p[2] * Zp + p[3] * Zm) def plane_gradient(X): """ DX, DY = plane_gradient(X) Args: X: matrix Returns: DX: gradient in X direction DY: gradient in Y direction """ n_rows = X.shape[0] n_cols = X.shape[1] DX = np.zeros(X.shape) DY = np.zeros(X.shape) for r in range(0, n_rows): xr = X[r, :] for c in range(0, n_cols - 1): DX[r,c] = xr[c+1] - xr[c] for c in range(0, n_cols): xc = X[:, c] for r in range(0, n_rows -1): DY[r, c] = xc[r+1] - xc[r] return DX, DY def grad_Im(X): """ Args: X: matrix Returns: Gradient_Image: positive matrix representation of the X-Y gradient of X """ DX, DY = plane_gradient(X) return gu.graphic_norm(DX + DY * 1j) def grad_pct(X): """ percentage of X s.t gradient > 0 """ I = grad_Im(X) return (I > 0).sum() / (X.shape[0] * X.shape[1]) def get_half_n_half(X): """ box counting, fractal dimension submatrix shortcut """ x_rows = X.shape[0] x_cols = X.shape[1] x_numel = x_rows * x_cols y_rows = np.int(np.ceil(x_rows / 2)) y_cols = np.int(np.ceil(x_cols / 2)) y_numel = y_rows * y_cols Y = np.zeros([y_rows, y_cols]) for r in range(0, y_rows): for c in range(0, y_cols): Y[r,c] = X[2*r, 2*c] return Y, y_numel, x_numel def get_fractal_dim(X): """ estimate fractal dimension by box counting """ Y, y_numel, x_numel = get_half_n_half(X) X_pct = grad_pct(X) + 1 Y_pct = grad_pct(Y) + 1 return X_pct / Y_pct X = np.random.random([5,5]) X[X < 0.5] = 0 Y, y_numel, x_numel = get_half_n_half(X) X_pct = grad_pct(X) Y_pct = grad_pct(Y) print(X_pct, Y_pct) print('y_numel', y_numel, '\nx_numel', x_numel) print(X_pct / Y_pct) # print(Y) # print(X) print(get_fractal_dim(X)) # -- machine with 8 cores -- P0 = [ 1.68458678, 1.72346312, 0.53931956, 2.92623535] P1 = [ 1.99808082, 0.68298986, 0.80686446, 2.27772581] P2 = [ 1.97243201, 1.32849475, 0.24972699, 2.19615225] P3 = [ 1.36537498, 1.02648965, 0.60966423, 3.38794403] p_scale = 2 P = rnd_lambda(p_scale) # P = np.array(P3) N = 200 par_set = {'n_rows': N, 'n_cols': N} par_set['center_point'] = 0.0 + 0.0j par_set['theta'] = np.pi / 2 par_set['zoom'] = 1/2 par_set['it_max'] = 16 par_set['max_d'] = 12 / par_set['zoom'] par_set['dir_path'] = os.getcwd() list_tuple = [(tanh_lmbd, (P))] t0 = time.time() ET, Z, Z0 = ig.get_primitives(list_tuple, par_set) tt = time.time() - t0 print(P, '\n', tt, '\t total time') Zd, Zr, ETn = ncp.etg_norm(Z0, Z, ET) print('Fractal Dimensionn = ', get_fractal_dim(ETn) - 1) ZrN = ncp.range_norm(Zr, lo=0.25, 
hi=1.0) display(ncp.gray_mat(ZrN)) ZrN = ncp.range_norm(gu.grad_Im(ETn), lo=0.25, hi=1.0) R = ncp.gray_mat(ZrN) display(R) # -- machine with 4 cores -- p_scale = 2 # P = rnd_lambda(p_scale) P = np.array([1.97243201, 1.32849475, 0.24972699, 2.19615225]) N = 800 par_set = {'n_rows': N, 'n_cols': N} par_set['center_point'] = 0.0 + 0.0j par_set['theta'] = np.pi / 2 par_set['zoom'] = 1/2 par_set['it_max'] = 16 par_set['max_d'] = 12 / par_set['zoom'] par_set['dir_path'] = os.getcwd() list_tuple = [(tanh_lmbd, (P))] t0 = time.time() ET, Z, Z0 = ig.get_primitives(list_tuple, par_set) tt = time.time() - t0 print(P, '\n', tt, '\t total time') t0 = time.time() Zd, Zr, ETn = ncp.etg_norm(Z0, Z, ET) print('converstion time =\t', time.time() - t0) t0 = time.time() # ZrN = ncp.range_norm(Zr, lo=0.25, hi=1.0) # R = ncp.gray_mat(ZrN) ZrN = ncp.range_norm(gu.grad_Im(ETn), lo=0.25, hi=1.0) R = ncp.gray_mat(ZrN) print('coloring time =\t',time.time() - t0) display(R) # def grad_pct(X): # """ percentage of X s.t gradient > 0 """ # I = gu.grad_Im(X) # nz = (I == 0).sum() # if nz > 0: # grad_pct = (I > 0).sum() / nz # else: # grad_pct = 1 # return grad_pct I = gu.grad_Im(ETn) nz = (I == 0).sum() nb = (I > 0).sum() print(nz, nb, ETn.shape[0] * ETn.shape[1], nz + nb) P0 = [ 1.68458678, 1.72346312, 0.53931956, 2.92623535] P1 = [ 1.99808082, 0.68298986, 0.80686446, 2.27772581] P2 = [ 1.97243201, 1.32849475, 0.24972699, 2.19615225] P3 = [ 1.36537498, 1.02648965, 0.60966423, 3.38794403] H = ncp.range_norm(1 - Zd, lo=0.5, hi=1.0) S = ncp.range_norm(1 - ETn, lo=0.0, hi=0.15) V = ncp.range_norm(Zr, lo=0.2, hi=1.0) t0 = time.time() Ihsv = ncp.rgb_2_hsv_mat(H, S, V) print('coloring time:\t',time.time() - t0) display(Ihsv) H = ncp.range_norm(Zd, lo=0.05, hi=0.55) S = ncp.range_norm(1 - ETn, lo=0.0, hi=0.35) V = ncp.range_norm(Zr, lo=0.0, hi=0.7) t0 = time.time() Ihsv = ncp.rgb_2_hsv_mat(H, S, V) print('coloring time:\t',time.time() - t0) display(Ihsv) # smaller for analysis par_set = {'n_rows': 200, 'n_cols': 200} par_set['center_point'] = 0.0 + 0.0j par_set['theta'] = 0.0 par_set['zoom'] = 5/8 par_set['it_max'] = 16 par_set['max_d'] = 10 / par_set['zoom'] par_set['dir_path'] = os.getcwd() # list_tuple = [(starfish_ish, (-0.040431211565+0.388620268274j))] list_tuple = [(tanh_lmbd, (P))] t0 = time.time() ET_sm, Z_sm, Z0_zm = ig.get_primitives(list_tuple, par_set) tt = time.time() - t0 print(tt, '\t total time') # view smaller - individual escape time starting points for t in range(1,7): print('ET =\t',t) I = np.ones(ET_sm.shape) I[ET_sm == t] = 0 display(ncp.mat_to_gray(I)) I = np.ones(ET_sm.shape) I[ET_sm > 7] = 0 display(ncp.mat_to_gray(I)) # view smaller - individual escape time frequency for k in range(0,int(ET_sm.max())): print(k, (ET_sm == k).sum()) print('\nHow many never escaped:\n>',(ET_sm > k).sum()) # get the list of unescaped starting points and look for orbit points Z_overs = Z0_zm[ET_sm == ET_sm.max()] v1 = Z_overs[0] d = '%0.2f'%(np.abs(v1)) theta = '%0.1f'%(180*np.arctan2(np.imag(v1), np.real(v1))/np.pi) print('One Unescaped Vector:\n\tV = ', d, theta, 'degrees\n') print('%9d'%Z_overs.size, 'total unescaped points\n') print('%9s'%('points'), 'near V', ' (plane units)') for denom0 in range(1,12): neighbor_distance = np.abs(v1) * 1/denom0 v1_list = Z_overs[np.abs(Z_overs-v1) < neighbor_distance] print('%9d'%len(v1_list), 'within V/%2d (%0.3f)'%(denom0, neighbor_distance)) ```
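As a side note, the nested loops in `plane_gradient` above can be replaced by a vectorized forward difference. This is an optional sketch added here for clarity; it is not part of the original `../src` helpers.

```
def plane_gradient_vectorized(X):
    """ vectorized forward differences; matches plane_gradient above
        (last column of DX and last row of DY stay zero) """
    DX = np.zeros(X.shape)
    DY = np.zeros(X.shape)
    DX[:, :-1] = np.diff(X, axis=1)
    DY[:-1, :] = np.diff(X, axis=0)
    return DX, DY

# quick consistency check against the loop version
X_check = np.random.random([5, 5])
DX1, DY1 = plane_gradient(X_check)
DX2, DY2 = plane_gradient_vectorized(X_check)
print(np.allclose(DX1, DX2), np.allclose(DY1, DY2))
```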
# 08 - Common problems & bad data situations <a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons Licence" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" title='This work is licensed under a Creative Commons Attribution 4.0 International License.' align="right"/></a> In this notebook, we will revise common problems that might come up when dealing with real-world data. Maintainers: [@thempel](https://github.com/thempel), [@cwehmeyer](https://github.com/cwehmeyer), [@marscher](https://github.com/marscher), [@psolsson](https://github.com/psolsson) **Remember**: - to run the currently highlighted cell, hold <kbd>&#x21E7; Shift</kbd> and press <kbd>&#x23ce; Enter</kbd>; - to get help for a specific function, place the cursor within the function's brackets, hold <kbd>&#x21E7; Shift</kbd>, and press <kbd>&#x21E5; Tab</kbd>; - you can find the full documentation at [PyEMMA.org](http://www.pyemma.org). --- Most problems in Markov modeling of MD data arise from bad sampling combined with a poor discretization. For estimating a Markov model, it is required to have a connected data set, i.e., we must have observed each process we want to describe in both directions. PyEMMA checks if this requirement is fulfilled but, however, in certain situations this might be less obvious. ``` %matplotlib inline import matplotlib.pyplot as plt import numpy as np import mdshare import pyemma ``` ## Case 1: preprocessed, two-dimensional data (toy model) ### well-sampled double-well potential Let's again have a look at the double-well potential. Since we are only interested in the problematic situations here, we will simplify our data a bit and work with a 1D projection. ``` file = mdshare.fetch('hmm-doublewell-2d-100k.npz', working_directory='data') with np.load(file) as fh: data = [fh['trajectory'][:, 1]] ``` Since this particular example is simple enough, we can define a plotting function that combines histograms with trajectory data: ``` def plot_1D_histogram_trajectories(data, cluster=None, max_traj_length=200, ax=None): if ax is None: fig, ax = plt.subplots() for n, _traj in enumerate(data): ax.hist(_traj, bins=30, alpha=.33, density=True, color='C{}'.format(n)); ylims = ax.get_ylim() xlims = ax.get_xlim() for n, _traj in enumerate(data): ax.plot( _traj[:min(len(_traj), max_traj_length)], np.linspace(*ylims, min(len(_traj), max_traj_length)), alpha=0.6, color='C{}'.format(n), label='traj {}'.format(n)) if cluster is not None: ax.plot( cluster.clustercenters[cluster.dtrajs[n][:min(len(_traj), max_traj_length)], 0], np.linspace(*ylims, min(len(_traj), max_traj_length)), '.-', alpha=.6, label='dtraj {}'.format(n), linewidth=.3) ax.annotate( '', xy=(0.8500001 * xlims[1], 0.7 * ylims[1]), xytext=(0.85 * xlims[1], 0.3 * ylims[1]), arrowprops=dict(fc='C0', ec='None', alpha=0.6, width=2)) ax.text(0.86 * xlims[1], 0.5 * ylims[1], '$x(time)$', ha='left', va='center', rotation=90) ax.set_xlabel('TICA coordinate') ax.set_ylabel('histogram counts & trajectory time') ax.legend(loc=2) ``` As a reference, we visualize the histogram of this well-sampled trajectory along with the first $200$ steps (left panel) and the MSM implied timescales (right panel): ``` fig, axes = plt.subplots(1, 2, figsize=(10, 4)) cluster = pyemma.coordinates.cluster_regspace(data, dmin=0.05) plot_1D_histogram_trajectories(data, cluster=cluster, ax=axes[0]) lags = [i + 1 for i in range(10)] its = pyemma.msm.its(cluster.dtrajs, lags=lags) pyemma.plots.plot_implied_timescales(its, 
marker='o', ax=axes[1], nits=4) fig.tight_layout() ``` We see a nice, reversibly connected trajectory. That means we have sampled transitions between the basins in both directions that are correctly resolved by the discretization. As we see from the almost perfect overlay of discrete and continuous trajectory, nearly no discretization error is made. ### irreversibly connected double-well trajectories In MD simulations, we often face the problem that a process is sampled only in one direction. For example, consider protein-protein binding. The unbinding might take on the order of seconds to minutes and is thus difficult to sample. We will have a look at what happens with the MSM in this case. Our example consists of two trajectories sampled from a double-well potential, each started in a different basin. They will be color coded. ``` file = mdshare.fetch('doublewell_oneway.npy', working_directory='data') data = [trj for trj in np.load(file)] plot_1D_histogram_trajectories(data, max_traj_length=data[0].shape[0]) ``` We note that the orange trajectory does not leave its potential well while the blue trajectory does overcome the barrier exactly once. ⚠️ Even though we have sampled one direction of the process, we do not sample the way out of one of the potential wells, thus effectively finding a sink state in our data. Let's have a look at the MSM. Since we often face the problem of poor discretization in higher dimensions, we will simulate this situation by using too few cluster centers. ``` cluster_fine = pyemma.coordinates.cluster_regspace(data, dmin=0.1) cluster_poor = pyemma.coordinates.cluster_regspace(data, dmin=0.7) print(cluster_fine.n_clusters, cluster_poor.n_clusters) fig, axes = plt.subplots(2, 2, figsize=(10, 8), sharey='col') for cluster, ax in zip([cluster_poor, cluster_fine], axes): plot_1D_histogram_trajectories(data, cluster=cluster, max_traj_length=data[0].shape[0], ax=ax[0]) its = pyemma.msm.its(cluster.dtrajs, lags=[1, 10, 100, 200, 300, 500, 800, 1000]) pyemma.plots.plot_implied_timescales(its, marker='o', ax=ax[1], nits=4) axes[0, 0].set_title('poor discretization') axes[1, 0].set_title('fine discretization') fig.tight_layout() ``` #### What do we see? 1) We observe implied timescales that even look converged in the fine discretization case. 2) With poor clustering, the process cannot be resolved any more, i.e., the ITS does not converge before the lag time exceeds the implied timescale. The obvious question is, what is the process that can be observed in the fine discretization case? PyEMMA checks for disconnectivity and thus should not find the process between the two wells. We follow this question by taking a look at the first eigenvector, which corresponds to that process. ``` msm = pyemma.msm.estimate_markov_model(cluster_fine.dtrajs, 200) fig, ax = plt.subplots() ax.plot( cluster_fine.clustercenters[msm.active_set, 0], msm.eigenvectors_right()[:, 1], 'o:', label='first eigvec') tx = ax.twinx() tx.hist(np.concatenate(data), bins=30, alpha=0.33) tx.set_yticklabels([]) tx.set_yticks([]) fig.legend() fig.tight_layout() ``` We observe a process which takes place entirely in the left potential well. How come? PyEMMA estimates MSMs only on the largest connected set because they are only defined on connected sets. In this particular example, the largest connected set consists of the microstates in the left potential well. That means that we find a transition between the right and the left side of this well. This is not wrong, it might just be non-informative or even irrelevant. 
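To make the notion of the largest connected set concrete, here is a small illustrative sketch that builds a transition count matrix directly from the discrete trajectories and extracts the largest strongly connected set with `scipy` (an extra import not used elsewhere in this notebook); its result should coincide with the `active_set` discussed next.

```
from scipy.sparse.csgraph import connected_components

def largest_connected_set(dtrajs, lag=1):
    """Indices of states in the largest strongly connected set of the count matrix."""
    n_states = max(np.max(d) for d in dtrajs) + 1
    counts = np.zeros((n_states, n_states))
    for d in dtrajs:
        for i, j in zip(d[:-lag], d[lag:]):  # transition counts at the given lag
            counts[i, j] += 1
    _, labels = connected_components(counts > 0, directed=True, connection='strong')
    return np.where(labels == np.argmax(np.bincount(labels)))[0]

lcs = largest_connected_set(cluster_fine.dtrajs, lag=200)
print(len(lcs), 'of', cluster_fine.n_clusters, 'states are in the largest connected set')
```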
The set of microstates which is used for the MSM estimation is stored in the MSM object `msm` and can be retrieved via `.active_set`. ``` print('Active set: {}'.format(msm.active_set)) print('Active state fraction: {:.2}'.format(msm.active_state_fraction)) ``` In this example we clearly see that some states are missing. ### disconnected double-well trajectories with cross-overs This example covers the worst-case scenario. We have two trajectories that live in two separated wells and never transition to the other one. Due to a very bad clustering, we believe that the data is connected. This can happen if we cluster a large dataset in very high dimensions where it is especially difficult to debug. ``` file = mdshare.fetch('doublewell_disconnected.npy', working_directory='data') data = [trj for trj in np.load(file)] plot_1D_histogram_trajectories(data, max_traj_length=data[0].shape[0]) ``` We, again, compare a reasonable to a deliberately poor discretization: ``` cluster_fine = pyemma.coordinates.cluster_regspace(data, dmin=0.1) cluster_poor = pyemma.coordinates.cluster_regspace(data, dmin=0.7) print(cluster_fine.n_clusters, cluster_poor.n_clusters) fig, axes = plt.subplots(2, 2, figsize=(10, 8), sharey='col') for cluster, ax in zip([cluster_poor, cluster_fine], axes): plot_1D_histogram_trajectories(data, cluster=cluster, max_traj_length=data[0].shape[0], ax=ax[0]) its = pyemma.msm.its(cluster.dtrajs, lags=[1, 10, 100, 200, 300, 500, 800, 1000]) pyemma.plots.plot_implied_timescales(its, marker='o', ax=ax[1], nits=4) axes[0, 0].set_title('poor discretization') axes[1, 0].set_title('fine discretization') fig.tight_layout() ``` #### What do we see? 1) With the fine discretization, we observe some timescales that are converged. These are most probably processes within one of the wells, similar to the ones we saw before. 2) The poor discretization induces a large error and describes artificial short visits to the other basin. 3) The timescales in the poor discretization are much higher but not converged. The reason for the high timescales in 3) are in fact the artificial cross-over events created by the poor discretization. This process was not actually sampled and is an artifact of bad clustering. Let's look at it in more detail and see what happens if we estimate an MSM and even compute metastable states with PCCA++. ``` msm = pyemma.msm.estimate_markov_model(cluster_poor.dtrajs, 200) nstates = 2 msm.pcca(nstates) index_order = np.argsort(cluster_poor.clustercenters[:, 0]) fig, axes = plt.subplots(1, 3, figsize=(12, 3)) axes[0].plot( cluster_poor.clustercenters[index_order, 0], msm.eigenvectors_right()[index_order, 1], 'o:', label='1st eigvec') axes[0].set_title('first eigenvector') for n, metastable_distribution in enumerate(msm.metastable_distributions): axes[1].step( cluster_poor.clustercenters[index_order, 0], metastable_distribution[index_order], ':', label='md state {}'.format(n + 1), where='mid') axes[1].set_title('metastable distributions (md)') axes[2].step( cluster_poor.clustercenters[index_order, 0], msm.pi[index_order], 'k--', label='$\pi$', where='mid') axes[2].set_title('stationary distribution $\pi$') for ax in axes: tx = ax.twinx() tx.hist(np.concatenate(data), bins=30, alpha=0.33) tx.set_yticklabels([]) tx.set_yticks([]) fig.legend(loc=7) fig.tight_layout() ``` We observe that the first eigenvector represents a process that does not exist, i.e., is an artifact. Nevertheless, the PCCA++ algorithm can separate metastable states in a way we would expect. 
It finds the two disconnected states. However, the stationary distribution yields arbitrary results. #### How to detect disconnectivity? Generally, hidden Markov models (HMMs) are much more reliable because they come with an additional layer of hidden states. Cross-over events are thus unlikely to be counted as "real" transitions. Thus, it is a good idea to estimate an HMM. What happens if we try to estimate a two state HMM on the same, poorly discretized data? ⚠️ It is important to note that the HMM estimation is initialized from the PCCA++ metastable states that we already analyzed. ``` hmm = pyemma.msm.estimate_hidden_markov_model(cluster_poor.dtrajs, nstates, msm.lag) ``` We are getting an error message which already explains what is going wrong, i.e., that the (macro-) states are not connected and thus no unique stationary distribution can be estimated. This is equivalent to having two eigenvalues of magnitude 1 or an implied timescale of infinity which is what we observe in the implied timescales plot. ``` its = pyemma.msm.timescales_hmsm(cluster_poor.dtrajs, nstates, lags=[1, 3, 4, 10, 100]) pyemma.plots.plot_implied_timescales(its, marker='o', ylog=True); ``` As we see, the requested timescales above $4$ steps could not be computed because the underlying HMM is disconnected, i.e., the corresponding timescales are infinity. The implied timescales that could be computed are most likely the same process that we observed from the fine clustering before, i.e., jumps within one basin. In general, it is a non-trivial problem to show that processes were not sampled reversibly. In our experience, HMMs are a good choice here, even though situations can occur where they might not detect the problem as easily as in this example. <a id="poorly_sampled_dw"></a> ### poorly sampled double-well trajectories Let's now assume that everything worked out fine but our sampling is somewhat poor. This is a realistic scenario when dealing with large systems that were well-sampled but still contain only few events of interest. We expect that our trajectories are just long enough to sample a certain process but are too short to capture them with a large lag time. To rule out discretization issues and to make the example clear, we use the full data set for discretization. ``` file = mdshare.fetch('hmm-doublewell-2d-100k.npz', working_directory='data') with np.load(file) as fh: data = [fh['trajectory'][:, 1]] cluster = pyemma.coordinates.cluster_regspace(data, dmin=0.05) ``` We want to simulate a process that happens on a timescale that is on the order of magnitude of the trajectory length. To do so, we choose `n_trajs` chunks from the full data set that contain `traj_length` steps by splitting the original trajectory: ``` traj_length = 10 n_trajs = 50 data_short_trajs = list(data[0].reshape((data[0].shape[0] // traj_length, traj_length)))[:n_trajs] dtrajs_short = list(cluster.dtrajs[0].reshape((data[0].shape[0] // traj_length, traj_length)))[:n_trajs] ``` Now, let's plot the trajectories (left panel) and estimate implied timescales (right panel) as above. Since we know the true ITS of this process, we visualize it as a dotted line. 
``` fig, axes = plt.subplots(1, 2, figsize=(10, 4)) for n, _traj in enumerate(data_short_trajs): axes[0].plot(_traj, np.linspace(0, 1, _traj.shape[0]) + n) lags = [i + 1 for i in range(9)] its = pyemma.msm.its(dtrajs_short, lags=lags) pyemma.plots.plot_implied_timescales(its, marker='o', ax=axes[1], nits=1) its_reference = pyemma.msm.its(cluster.dtrajs, lags=lags) pyemma.plots.plot_implied_timescales(its_reference, linestyle=':', ax=axes[1], nits=1) fig.tight_layout() ``` We note that the slowest process is clearly contained in the data chunks and is reversibly sampled (left panel, short trajectory pieces color coded and stacked). Due to very short trajectories, we find that this process can only be captured at a very short MSM lag time (right panel). Above that interval, the slowest timescale diverges. Luckily, here we know that it is already converged at $\tau = 1$, so we estimate an MSM: ``` msm_short_trajectories = pyemma.msm.estimate_markov_model(dtrajs_short, 1) ``` Let's now have a look at the CK-test: ``` pyemma.plots.plot_cktest(msm_short_trajectories.cktest(2), marker='.'); ``` As already discussed, we cannot expect new estimates above a certain lag time to agree with the model prediction due to too short trajectories. Indeed, we find that new estimates and model predictions diverge at very high lag times. This does not necessarily mean that the model at $\tau=1$ is wrong and in this particular case, we can even explain the divergence and find that it fits to the implied timescales divergence. This example mirrors another incarnation of the sampling problem: Working with large systems, we often have comparably short trajectories with few rare events. Thus, implied timescales convergence can often be achieved only in a certain interval and CK-tests will not converge up to arbitrary multiples of the lag time. It is the responsibility of the modeler to interpret these results and to ensure that a valid model can be obtained from the data. Please note that this is only a special case of a failed CK test. More general information about CK tests and what it means if it fails are explained in [Notebook 03 ➜ 📓](03-msm-estimation-and-validation.ipynb). ## Case 2: low-dimensional molecular dynamics data (alanine dipeptide) In this example, we will show how an ill-conducted TICA analysis can yield results that look metastable in the 2D histogram, but in fact are not describing the slow dynamics. Please note that this was deliberately broken with a nonsensical TICA-lagtime of almost trajectory length, which is 250 ns. We start off with adding all atom coordinates. That is a non-optimal choice because it artificially blows up the dimensionality, but might still be a reasonable choice depending on the problem. A well-conducted TICA projection can extract the slow coordinates, as we will see at the end of this example. ``` pdb = mdshare.fetch('alanine-dipeptide-nowater.pdb', working_directory='data') files = mdshare.fetch('alanine-dipeptide-*-250ns-nowater.xtc', working_directory='data') feat = pyemma.coordinates.featurizer(pdb) feat.add_all() data = pyemma.coordinates.load(files, features=feat) ``` TICA analysis is conducted with an extremely high lag time of almost $249.9$ ns. We map down to two dimensions. 
``` tica = pyemma.coordinates.tica(data, lag=data[0].shape[0] - 100, dim=2) tica_output = tica.get_output() pyemma.plots.plot_free_energy(*np.concatenate(tica_output).T, legacy=False); ``` In the free energy plot, we recognize two defined basins that are nicely separated by the first TICA component. We thus continue with a discretization of this space and estimate MSM implied timescales. ``` cluster = pyemma.coordinates.cluster_kmeans(tica_output, k=200, max_iter=30, stride=100) its = pyemma.msm.its(cluster.dtrajs, lags=[1, 5, 10, 20, 30, 50]) pyemma.plots.plot_implied_timescales(its, marker='o', units='ps', nits=3); ``` Indeed, we observe a converged implied timescale. In this example we already know that it is way lower than expected, but in the general case we are unaware of the real dynamics of the system. Thus, we estimate an MSM at lag time $20$ ps. Coarse graining and validation will be done with $2$ metastable states since we found $2$ basins in the free energy landscape and have one slow process in the ITS plot. ``` msm = pyemma.msm.estimate_markov_model(cluster.dtrajs, 20) nstates = 2 msm.pcca(nstates); stride = 10 metastable_trajs_strided = [msm.metastable_assignments[dtrj[::stride]] for dtrj in cluster.dtrajs] tica_output_strided = [i[::stride] for i in tica_output] _, _, misc = pyemma.plots.plot_state_map(*np.concatenate(tica_output_strided).T, np.concatenate(metastable_trajs_strided)); misc['cbar'].set_ticklabels(range(1, nstates + 1)) # set state numbers 1 ... nstates ``` As we see, the PCCA++ algorithm is perfectly able to separate the two basins. Let's go on with a Chapman-Kolmogorov validation. ``` pyemma.plots.plot_cktest(msm.cktest(nstates), units='ps'); ``` Congratulations, we have estimated a well-validated MSM. The only question remaining is: What does it actually describe? For this, we usually extract representative structures as described in [Notebook 00 ➜ 📓](00-pentapeptide-showcase.ipynb). We will not do this here but look at the metastable trajectories instead. #### What could be wrong with it? Let's have a look at the trajectories as assigned to PCCA++ metastable states. We have already computed them before but not looked at their time dependence. ``` fig, ax = plt.subplots(1, 1, figsize=(15, 6), sharey=True, sharex=True) ax_yticks_labels = [] for n, pcca_traj in enumerate(metastable_trajs_strided): ax.plot(range(len(pcca_traj)), msm.n_metastable * n + pcca_traj, color='k', linewidth=0.3) ax.scatter(range(len(pcca_traj)), msm.n_metastable * n + pcca_traj, c=pcca_traj, s=0.1) ax_yticks_labels.append(((msm.n_metastable * (2 * n + 1) - 1) / 2, n + 1)) ax.set_yticks([l[0] for l in ax_yticks_labels]) ax.set_yticklabels([str(l[1]) for l in ax_yticks_labels]) ax.set_ylabel('Trajectory #') ax.set_xlabel('time / {} ps'.format(stride)) fig.tight_layout() ``` #### What do we see? The above figure shows the metastable states visited by the trajectory over time. Each metastable state is color-coded; the trajectory is shown by the black line. This is clearly not a metastable trajectory as we would have expected. What did we do wrong? Let's have a look at the TICA trajectories, not only the histogram! 
``` fig, axes = plt.subplots(2, 3, figsize=(12, 6), sharex=True, sharey='row') for n, trj in enumerate(tica_output): for dim, traj1d in enumerate(trj.T): axes[dim, n].plot(traj1d[::stride], linewidth=.5) for ax in axes[1]: ax.set_xlabel('time / {} ps'.format(stride)) for dim, ax in enumerate(axes[:, 0]): ax.set_ylabel('IC {}'.format(dim + 1)) for n, ax in enumerate(axes[0]): ax.set_title('Trajectory # {}'.format(n + 1)) fig.tight_layout() ``` This is essentially noise, so it is not surprising that the metastable trajectories do not show significant metastability. The MSM nevertheless found a process in the above TICA components which, however, does not seem to describe any of the slow dynamics. Thus, the model is not wrong, it is just not informative. As we see in this example, it can be instructive to keep the trajectories in mind and not to rely on the histograms alone. ⚠️ Histograms are no proof of metastability; they can only give us a hint towards defined states in a multi-dimensional state space which can be metastable. #### How to fix it? In this particular example, we already know the issue: the TICA lag time was deliberately chosen way too high. That's easy to fix. Let's now have a look at how the metastable trajectories should look for a decent model such as the one estimated in [Notebook 05 ➜ 📓](05-pcca-tpt.ipynb). We will take the same input data, do a TICA transform with a realistic lag time of $10$ ps, and coarse grain into $2$ metastable states in order to compare with the example above. ``` tica = pyemma.coordinates.tica(data, lag=10, dim=2) tica_output = tica.get_output() cluster = pyemma.coordinates.cluster_kmeans(tica_output, k=200, max_iter=30, stride=100) pyemma.plots.plot_free_energy(*np.concatenate(tica_output).T, legacy=False); ``` As we see, TICA yields a very nice state separation. We will see that these states are in fact metastable. ``` msm = pyemma.msm.estimate_markov_model(cluster.dtrajs, lag=20) msm.pcca(nstates); metastable_trajs_strided = [msm.metastable_assignments[dtrj[::stride]] for dtrj in cluster.dtrajs] stride = 10 tica_output_strided = [i[::stride] for i in tica_output] _, _, misc = pyemma.plots.plot_state_map(*np.concatenate(tica_output_strided).T, np.concatenate(metastable_trajs_strided)); misc['cbar'].set_ticklabels(range(1, nstates + 1)) # set state numbers 1 ... nstates ``` We note that PCCA++ separates the two basins of the free energy plot. Let's have a look at the metastable trajectories: ``` fig, ax = plt.subplots(1, 1, figsize=(12, 6), sharey=True, sharex=True) ax_yticks_labels = [] for n, pcca_traj in enumerate(metastable_trajs_strided): ax.plot(range(len(pcca_traj)), msm.n_metastable * n + pcca_traj, color='k', linewidth=0.3) ax.scatter(range(len(pcca_traj)), msm.n_metastable * n + pcca_traj, c=pcca_traj, s=0.1) ax_yticks_labels.append(((msm.n_metastable * (2 * n + 1) - 1) / 2, n + 1)) ax.set_yticks([l[0] for l in ax_yticks_labels]) ax.set_yticklabels([str(l[1]) for l in ax_yticks_labels]) ax.set_ylabel('Trajectory #') ax.set_xlabel('time / {} ps'.format(stride)) fig.tight_layout() ``` These trajectories show the expected behavior of a metastable trajectory, i.e., they do not quickly jump back and forth between the states. ## Wrapping up In this notebook, we have learned about some problems that can arise when estimating MSMs with "real world" data, using simple examples. 
In detail, we have seen - irreversibly connected dynamics and what it means for MSM estimation, - fully disconnected trajectories and how to identify them, - connected but poorly sampled trajectories and how convergence looks in this case, - ill-conducted TICA analysis and what it yields. The most important lesson from this tutorial is that histograms, which are usually calculated in a projected space, are not a sufficient means of identifying metastability or connectedness. It is crucial to remember that the underlying trajectories play the role of ground truth for the model. Ultimately, histograms only help us to understand this ground truth but cannot provide a complete picture.
# Study Group Setup

<img src="./img/f_mail.png" style="width: 700px;"/>

## Contents

- Why Jupyter notebooks?
- Bash
- What is a *kernel*?
- Installation
- Homework

## Python and the Jupyter project

<img src="./img/py.jpg" style="width: 500px;"/> <img src="./img/jp.png" style="width: 100px;"/>

- We need to keep a record of each member's progress.
- Python is a high-level, interpreted programming language.
- Jupyter notebooks are easy to use.
- `We need everyone to have a Python installation with JupyterLab.`

## How does Jupyter work?

- It grew out of the `iPython` project, which offers an interactive interface for programmers.
- Notebooks use the `.ipynb` format.
- Programming languages other than Python can also be used.
- It lets the user control how code is presented through `Markdown`.
- Now, a demonstration:

<img src="./img/jupex.png" style="width: 500px;"/>

```
import matplotlib.pyplot as plt
import numpy as np
import math

# constants
pi = math.pi; h = 6.626e-34; kB = 1.380e-23; c = 3.0e+8;

Temps = [9940.00, 8500.00, 7500.00, 6627.00, 5810.93, 4231.15, 3000.00, 2973.15, 288.15]
labels = ['Sirius', 'White star', 'Yellow-white star', 'Polaris', 'Sol', 'HfC', 'Bombilla', 'TaN', 'Atmósfera ']
colors = ['r','g','#FF9633','c','m','#eeefff','y','b','k']

# array of frequencies
freq = np.arange(0.25e14,3e15,0.25e14)

# spectral energy density (SED) function
def SED(f, T):
    energyDensity = ( 8*pi*h*(np.power(f, 3.0))/(c**3) ) / (np.exp((h/kB)*f/T) - 1)
    return energyDensity

# compute the SED for each temperature
for i in range(len(Temps)):
    r = SED(freq,Temps[i])
    plt.plot(freq*1e-12,r,color=colors[i],label=labels[i])

plt.legend(); plt.xlabel('frequency ( THz )'); plt.ylabel('SED_frequency ( J $m^{-3}$ $Hz^{-1}$ )')
plt.xlim(0.25e2,2.5e3);
plt.show()
```

### It allows writing complex mathematical expressions

Code can be written in $\LaTeX$ if necessary:

\begin{align}
\frac{\partial u(\lambda, T)}{\partial \lambda} &= \frac{\partial}{\partial \lambda} \left( \frac{C_{1}}{\lambda^{5}}\left(\frac{1}{e^{C_{2}/T\lambda} -1}\right) \right) \\
0 &= \left(\frac{-5}{e^{C_{2}/T\lambda} -1}\frac{1}{\lambda^{6}}\right) + \left( \frac{C_{2}e^{C_{2}/T\lambda}}{T\lambda^{7}} \right)\left(\frac{1}{e^{C_{2}/T\lambda} -1}\right)^{2} \\
0 &= \frac{-5\lambda T}{C_{2}} + \frac{e^{C_{2}/T\lambda}}{e^{C_{2}/T\lambda} -1} \\
0 &= -5 + \left(\frac{C_{2}}{\lambda T}\right) \left(\frac{e^{C_{2}/T\lambda}}{e^{C_{2}/T\lambda} -1}\right)
\end{align}

(A numerical check of this last condition is sketched at the end of this notebook.)

## How can it use a language other than Python?

- A kernel is a kind of `computational engine` that executes the code inside an `.ipynb` file.
- Kernels exist for several programming languages, such as R, Bash, C++ and Julia.

<img src="./img/ker.png" style="width: 250px;"/>

## Why Bash?

- Bash is a scripting language that talks to the shell and has historically helped scientists get along with bioinformatics.

## Where do we find the instructions to install Python?

- It can be done in several ways: `Anaconda` or the `official interpreter` from https://www.python.org/downloads/
- We will use the `Anaconda` interpreter: installation is easier if you are not used to working on the command line.
- If you are already familiar with Python and do not want to install the `Anaconda` interpreter, you can use `pip` as described at https://pypi.org/project/bash_kernel/

<img src="./img/qrgit.png" style="width: 250px;"/>

## Homework

- We created a folder on `Google Drive` where you will upload the `.ipynb` files and an HTML export, or another file type depending on the session.
- There will be a quiz every week, which we will send through the study group's Discord server.
- Homework for next week:
  1. Install Ubuntu, if you do not have it yet, using any of the alternatives presented.
  2. Install Anaconda, JupyterLab and the Bash kernel. Send a Word or PDF document with screenshots proving this.

If you run into any problems, please use the `Discord` forums and we will help each other out.

<img src="./img/deberes.png" style="width: 500px;"/>
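As referenced above, here is a quick numerical check of the maximum condition $0 = -5 + x\,e^{x}/(e^{x}-1)$ with $x = C_{2}/(\lambda T)$. This is a sketch added for illustration; it assumes `scipy` is available (it is not imported in the original cells) and reuses the constants from the plotting example.

```
import math
from scipy.optimize import brentq

h, kB, c = 6.626e-34, 1.380e-23, 3.0e+8   # same constants as in the SED example
C2 = h * c / kB

# root of -5 + x * e^x / (e^x - 1) = 0, where x = C2 / (lambda * T)
f = lambda x: -5.0 + x * math.exp(x) / (math.exp(x) - 1.0)
x_star = brentq(f, 1.0, 10.0)

print('x* =', x_star)                          # ~4.965
print('lambda_max * T =', C2 / x_star, 'm K')  # ~2.9e-3 m K (Wien's displacement constant)
```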
## Moodle Database: Educational Data Log Analysis The Moodle LMS is a free and open-source learning management system written in PHP and distributed under the GNU General Public License. It is used for blended learning, distance education, flipped classroom and other e-learning projects in schools, universities, workplaces and other sectors. With customizable management features, it is used to create private websites with online courses for educators and trainers to achieve learning goals. Moodle allows for extending and tailoring learning environments using community-sourced plugins. In this notebook we are going to explore the 10 Academy Moodle logs stored in the database together with many other relevant tables. # Table of Contents 1. Installing the required libraries 2. Importing the required libraries 3. Moodle database understanding 4. Data Extraction, Transformation and Loading (ETL) ### Installing the necessary libraries ``` #!pip install ipython-sql #!pip install sqlalchemy #!pip install psycopg2 ``` ### Importing the necessary libraries ``` import pandas as pd import numpy as np from sqlalchemy import create_engine import psycopg2 import logging from IPython.display import display #allowing connection to the database %load_ext sql #ipython-sql %sql postgresql://bessy:Streetdance53@localhost/moodle #sqlalchemy engine = create_engine('postgresql://bessy:Streetdance53@localhost/moodle') ``` ### Moodle database understanding Now, let's take a glance at what some of the tables look like. We will consider the following tables: `mdl_logstore_standard_log`, `mdl_context`, `mdl_user`, `mdl_course`, `mdl_modules`, `mdl_course_modules`, `mdl_course_modules_completion`, `mdl_grade_items`, `mdl_grade_grades`, `mdl_grade_categories`, `mdl_grade_items_history`, `mdl_grade_grades_history`, `mdl_grade_categories_history`, `mdl_forum`, `mdl_forum_discussions`, `mdl_forum_posts`. 
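For convenience, the same previews shown below can also be pulled straight into pandas through the SQLAlchemy engine created above. This is a small illustrative sketch; the columns returned depend on the local Moodle database.

```
# pandas-based alternative to the %%sql preview cells below
def preview_table(table, n=3):
    """Return the first n rows of a table as a DataFrame."""
    return pd.read_sql(f"SELECT * FROM {table} LIMIT {n}", engine)

preview_table('mdl_logstore_standard_log')
```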
`Table:mdl_logstore_standard_log` ``` %%sql SELECT *FROM mdl_logstore_standard_log LIMIT 3; ``` `Table: mdl_context` ``` %%sql SELECT * FROM mdl_context LIMIT 3; ``` `mdl_course` ``` %%sql SELECT * FROM mdl_course LIMIT 3; ``` `mdl_user` ``` %%sql SELECT * FROM mdl_user LIMIT 3; ``` `mdl_modules` ``` %%sql SELECT * FROM mdl_modules LIMIT 3; ``` `mdl_course_modules` ``` %%sql SELECT * FROM mdl_course_modules LIMIT 3; ``` `mdl_course_modules_completion` ``` %%sql SELECT * FROM mdl_course_modules_completion LIMIT 3 ``` `mdl_grade_grades` ``` %%sql SELECT * FROM mdl_grade_grades LIMIT 3 ``` ### Number of tables in the database; ``` %%sql SELECT COUNT(*) FROM information_schema.tables ``` ### Number of records in the following tables; ``` mit = ['mdl_logstore_standard_log', 'mdl_context', 'mdl_user', 'mdl_course', 'mdl_modules' , 'mdl_course_modules', 'mdl_course_modules_completion', 'mdl_grade_items', 'mdl_grade_grades', 'mdl_grade_categories', 'mdl_grade_items_history', 'mdl_grade_grades_history', 'mdl_grade_categories_history', 'mdl_forum', 'mdl_forum_discussions', 'mdl_forum_posts'] # fetches and returns number of records of a given table in a moodle database def table_count(table): count = %sql SELECT COUNT(*) as {table}_count from {table} return count for table in mit: display(table_count(table)) ``` ### Number of quiz submission by time ``` %%sql select date_part('hour', timestamp with time zone 'epoch' + timefinish * interval '1 second') as hour, count(1) from mdl_quiz_attempts qa where qa.preview = 0 and qa.timefinish <> 0 group by date_part('hour', timestamp with time zone 'epoch' + timefinish * interval '1 second') order by hour %%sql SELECT COUNT(id), EXTRACT(HOUR FROM to_timestamp(timecreated)) FROM mdl_logstore_standard_log WHERE action ='submitted' AND component='mod_quiz' group by EXTRACT(HOUR FROM to_timestamp(timecreated)); ``` ## Monthly usage time of learners who have confirmed and are not deleted ``` %%sql select extract(month from to_timestamp(mdl_stats_user_monthly.timeend)) as calendar_month, count(distinct mdl_stats_user_monthly.userid) as total_users from mdl_stats_user_monthly inner join mdl_role_assignments on mdl_stats_user_monthly.userid = mdl_role_assignments.userid inner join mdl_context on mdl_role_assignments.contextid = mdl_context.id where mdl_stats_user_monthly.stattype = 'activity' and mdl_stats_user_monthly.courseid <>1 group by extract(month from to_timestamp(mdl_stats_user_monthly.timeend)) order by extract(month from to_timestamp(mdl_stats_user_monthly.timeend)) %%sql SELECT COUNT(lastaccess - firstaccess) AS usagetime, EXTRACT (MONTH FROM to_timestamp(firstaccess)) AS month FROM mdl_user WHERE confirmed = 1 AND deleted = 0 GROUP BY EXTRACT (MONTH FROM to_timestamp(firstaccess)) ``` ## Count of log events per user ``` actions = ['loggedin', 'viewed', 'started', 'submitted', 'uploaded', 'updated', 'searched', 'answered', 'attempted', 'abandoned'] # fetch and return count of log events of an action per user def event_count(action): count = %sql SELECT userid, COUNT(action) AS {action}_count FROM mdl_logstore_standard_log WHERE action='{action}' GROUP BY userid limit 5 return count for action in actions: display(event_count(action)) ``` ### python class to pull * Overall grade of learners * Number of forum posts ``` class PullGrade(): def __init__(self): pass def open_db(self, **kwargs): # extract args, if they are not provided assign a default value user = kwargs.get('user', 'briodev') password = kwargs.get('password', '14ConnectPsq') db = 
kwargs.get('db', 'moodle') # make a connection to PostgreSQL # use exception to show error message if failed to connect try: params = dict(user=user, password=password, host="127.0.0.1", port = "5432", database = db) proot = 'postgresql://{user}@{host}:5432/{database}'.format(**params) logging.info('Connecting to the PostgreSQL database... using sqlalchemy engine') engine = create_engine(proot) except (Exception, psycopg2.Error) as error: logging.error(r"Error while connecting to PostgreSQL {error}") return engine # fetch and return number of forum posts def forum_posts(self): count = %sql SELECT COUNT(*) from mdl_forum_posts return count # fetch and return overall grade of learners def overall_grade(self): overall = %sql SELECT userid, round(SUM(finalgrade)/count(*), 2) as overall_grade from mdl_grade_grades WHERE finalgrade is not null group by userid LIMIT 10 return overall db = PullGrade() db.open_db() #Forum_posts db.forum_posts() #Overall grade. db.overall_grade() ``` ### Data Extraction Transformation and Loading (ETL) ``` #reading the mdl_logstore_standard_log log_df = pd.read_sql("select * from mdl_logstore_standard_log", engine) def top_x(df, percent): total_len = df.shape[0] top = int((total_len * percent)/100) return df.iloc[:top,] ``` ### Login count ``` log_df_logged_in = log_df[log_df.action == 'loggedin'][['userid', 'action']] login_by_user = log_df_logged_in.groupby('userid').count().sort_values('action', ascending=False) login_by_user.columns = ["login_count"] top_x(login_by_user, 1) ``` ### Activity count ``` activity_log = log_df[['userid', 'action']] activity_log_by_user = activity_log.groupby('userid').count().sort_values('action', ascending=False) activity_log_by_user.columns = ['activity_count'] top_x(activity_log_by_user, 1) log_in_out = log_df[(log_df.action == "loggedin") | (log_df.action == "loggedout")] user_id = log_df.userid.unique() d_times = {} for user in user_id: log_user = log_df[log_df.userid == user].sort_values('timecreated') d_time = 0 isLoggedIn = 0 loggedIn_timecreated = 0 for i in range(len(log_user)): row = log_user.iloc[i,] row_next = log_user.iloc[i+1,] if i+1 < len(log_user) else row if(row.action == "loggedin"): isLoggedIn = 1 loggedIn_timecreated = row.timecreated if( (i+1 == len(log_user)) | ( (row_next.action == "loggedin") & (isLoggedIn == 1) ) ): d_time += row.timecreated - loggedIn_timecreated isLoggedIn = 0 d_times[user] = d_time dedication_time_df = pd.DataFrame({'userid':list(d_times.keys()), 'dedication_time':list(d_times.values())}) dedication_time_df top_x(dedication_time_df.sort_values('dedication_time', ascending=False), 35) ``` ### References * https://docs.moodle.org/39/en/Custom_SQL_queries_report * https://docs.moodle.org/39/en/ad-hoc_contributed_reports * https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.331.667&rep=rep1&type=pdf * http://informatics.ue-varna.bg/conference19/Conf.proceedings_Informatics-50.years%20177-187.pdf
# Format DataFrame ``` import pandas as pd from sklearn.datasets import make_regression data = make_regression(n_samples=600, n_features=50, noise=0.1, random_state=42) train_df = pd.DataFrame(data[0], columns=["x_{}".format(_) for _ in range(data[0].shape[1])]) train_df["target"] = data[1] print(train_df.shape) train_df.head() ``` # Set Up Environment ``` from hyperparameter_hunter import Environment, CVExperiment from sklearn.metrics import explained_variance_score env = Environment( train_dataset=train_df, results_path="HyperparameterHunterAssets", metrics=dict(evs=explained_variance_score), cv_type="KFold", cv_params=dict(n_splits=3, shuffle=True, random_state=1337), runs=2, ) ``` Now that HyperparameterHunter has an active `Environment`, we can do two things: # 1. Perform Experiments *Note: If this is your first HyperparameterHunter example, the CatBoost classification example may be a better starting point.* In this Experiment, we're also going to use `model_extra_params` to provide arguments to `CatBoostRegressor`'s `fit` method, just like we would if we weren't using HyperparameterHunter. We'll be using the `verbose` argument to print evaluations of our `CatBoostRegressor` every 50 iterations, and we'll also be using the dataset sentinels offered by `Environment`. You can read more about the exciting thing you can do with the `Environment` sentinels in the documentation and in the example dedicated to them. For now, though, we'll be using them to provide each fold's `env.validation_input`, and `env.validation_target` to `CatBoostRegressor.fit` via its `eval_set` argument. You could also easily add `CatBoostRegressor.fit`'s `early_stopping_rounds` argument to `model_extra_params["fit"]` to use early stopping, but doing so here with only `iterations=100` doesn't make much sense. ``` from catboost import CatBoostRegressor experiment = CVExperiment( model_initializer=CatBoostRegressor, model_init_params=dict( iterations=100, learning_rate=0.05, depth=5, bootstrap_type="Bayesian", save_snapshot=False, allow_writing_files=False, ), model_extra_params=dict( fit=dict( verbose=50, eval_set=[(env.validation_input, env.validation_target)], ), ), ) ``` Notice above that CatBoost printed scores for our `eval_set` every 50 iterations just like we said in `model_extra_params["fit"]`; although, it made our results rather difficult to read, so we'll switch back to `verbose=False` during optimization. # 2. Hyperparameter Optimization Notice below that `optimizer` still recognizes the results of `experiment` as valid learning material even though their `verbose` values differ. This is because it knows that `verbose` has no effect on actual results. ``` from hyperparameter_hunter import DummyOptPro, Real, Integer, Categorical optimizer = DummyOptPro(iterations=10, random_state=777) optimizer.forge_experiment( model_initializer=CatBoostRegressor, model_init_params=dict( iterations=100, learning_rate=Real(0.001, 0.2), depth=Integer(3, 7), bootstrap_type=Categorical(["Bayesian", "Bernoulli"]), save_snapshot=False, allow_writing_files=False, ), model_extra_params=dict( fit=dict( verbose=False, eval_set=[(env.validation_input, env.validation_target)], ), ), ) optimizer.go() ```
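For readers who want to see what `CVExperiment` is orchestrating, here is a rough hand-rolled equivalent of the cross-validation loop above. It is a simplified sketch only: it matches `cv_params` and the model parameters, but omits HyperparameterHunter's result recording, the `runs=2` repetition, and the dataset sentinels.

```
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import explained_variance_score
from catboost import CatBoostRegressor

X = train_df.drop(columns=["target"]).values
y = train_df["target"].values

scores = []
for train_idx, val_idx in KFold(n_splits=3, shuffle=True, random_state=1337).split(X):
    model = CatBoostRegressor(
        iterations=100, learning_rate=0.05, depth=5, bootstrap_type="Bayesian",
        save_snapshot=False, allow_writing_files=False)
    model.fit(X[train_idx], y[train_idx],
              eval_set=[(X[val_idx], y[val_idx])], verbose=False)
    scores.append(explained_variance_score(y[val_idx], model.predict(X[val_idx])))

print("mean EVS across folds:", np.mean(scores))
```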
# Classification models using python and scikit-learn There are many users of online trading platforms and these companies would like to run analytics on and predict churn based on user activity on the platform. Keeping customers happy so they do not move their investments elsewhere is key to maintaining profitability. In this notebook, we'll use scikit-learn to predict classes. scikit-learn provides implementations of many classification algorithms. In here, we have chosen the random forest classification algorithm to walk through all the different steps. <a id="top"></a> ## Table of Contents 1. [Load libraries](#load_libraries) 2. [Data exploration](#explore_data) 3. [Prepare data for building classification model](#prepare_data) 4. [Split data into train and test sets](#split_data) 5. [Helper methods for graph generation](#helper_methods) 6. [Prepare Random Forest classification model](#prepare_model) 7. [Train Random Forest classification model](#train_model) 8. [Test Random Forest classification model](#test_model) 9. [Evaluate Random Forest classification model](#evaluate_model) 10.[Build K-Nearest classification model](#model_knn) 11. [Comparative study of both classification algorithms](#compare_classification) ### Quick set of instructions to work through the notebook If you are new to Notebooks, here's a quick overview of how to work in this environment. 1. The notebook has 2 types of cells - markdown (text) such as this and code such as the one below. 2. Each cell with code can be executed independently or together (see options under the Cell menu). When working in this notebook, we will be running one cell at a time. 3. To run the cell, position cursor in the code cell and click the Run (arrow) icon. The cell is running when you see the * next to it. Some cells have printable output. 4. Work through this notebook by reading the instructions and executing code cell by cell. Some cells will require modifications before you run them. <a id="load_libraries"></a> ## 1. Load libraries [Top](#top) Install python modules NOTE! Some pip installs require a kernel restart. The shell command pip install is used to install Python modules. Some installs require a kernel restart to complete. To avoid confusing errors, run the following cell once and then use the Kernel menu to restart the kernel before proceeding. ``` !pip install pandas==0.24.2 !pip install --user pandas_ml==0.6.1 #downgrade matplotlib to bypass issue with confusion matrix being chopped out !pip install matplotlib==3.1.0 !pip install --user scikit-learn==0.21.3 !pip install -q scikit-plot from sklearn.preprocessing import LabelEncoder, OneHotEncoder from sklearn.impute import SimpleImputer from sklearn.preprocessing import MinMaxScaler from sklearn.preprocessing import StandardScaler from sklearn.compose import ColumnTransformer, make_column_transformer from sklearn.pipeline import Pipeline from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score, classification_report import pandas as pd, numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import matplotlib.colors as mcolors import matplotlib.patches as mpatches import scikitplot as skplt ``` <a id="explore_data"></a> ## 2. Data exploration [Top](#top) In this tutorial, we use a data set that contains information about customers of an online trading platform to classify whether a given customer’s probability of churn will be high, medium, or low. 
This provides a good example to learn how a classification model is built from start to end. ``` df_churn_pd = pd.read_csv("https://raw.githubusercontent.com/IBM/ml-learning-path-assets/master/data/mergedcustomers_missing_values_GENDER.csv") df_churn_pd.head() ``` We use numpy and matplotlib to get some statistics and visualize data. print("The dataset contains columns of the following data types : \n" +str(df_churn_pd.dtypes)) Notice below that Gender has three missing values. This will be handled in one of the preprocessing steps that is to follow. ``` print("The dataset contains following number of records for each of the columns : \n" +str(df_churn_pd.count())) ``` If we are not satisfied with the representational data, now is the time to get more data to be used for training and testing. ``` print( "Each category within the churnrisk column has the following count : ") print(df_churn_pd.groupby(['CHURNRISK']).size()) #bar chart to show split of data index = ['High','Medium','Low'] churn_plot = df_churn_pd['CHURNRISK'].value_counts(sort=True, ascending=False).plot(kind='bar', figsize=(4,4),title="Total number for occurences of churn risk " + str(df_churn_pd['CHURNRISK'].count()), color=['#BB6B5A','#8CCB9B','#E5E88B']) churn_plot.set_xlabel("Churn Risk") churn_plot.set_ylabel("Frequency") ``` <a id="prepare_data"></a> ## 3. Data preparation [Top](#top) Data preparation is a very important step in machine learning model building. This is because the model can perform well only when the data it is trained on is good and well prepared. Hence, this step consumes the bulk of a data scientist's time spent building models. During this process, we identify categorical columns in the dataset. Categories need to be indexed, which means the string labels are converted to label indices. These label indices are encoded using One-hot encoding to a binary vector with at most a single value indicating the presence of a specific feature value from among the set of all feature values. This encoding allows algorithms which expect continuous features to use categorical features. ``` #remove columns that are not required df_churn_pd = df_churn_pd.drop(['ID'], axis=1) df_churn_pd.head() ``` ### [Preprocessing Data](https://scikit-learn.org/stable/modules/preprocessing.html) Scikit-learn provides a method to fill empty values with something that would be applicable in its context. We used the <i><b> SimpleImputer <b></i> class that is provided by Sklearn and filled the missing values with the most frequent value in the column. ### [One Hot Encoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) ``` # Defining the categorical columns categoricalColumns = ['GENDER', 'STATUS', 'HOMEOWNER'] print("Categorical columns : " ) print(categoricalColumns) impute_categorical = SimpleImputer(strategy="most_frequent") onehot_categorical = OneHotEncoder(handle_unknown='ignore') categorical_transformer = Pipeline(steps=[('impute',impute_categorical),('onehot',onehot_categorical)]) ``` The numerical columns from the data set are identified, and StandardScaler is applied to each of the columns. 
This way, each value is subtracted with the mean of its column and divided by its standard deviation.<br> ### [Standard Scaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) ``` # Defining the numerical columns numericalColumns = df_churn_pd.select_dtypes(include=[np.float,np.int]).columns print("Numerical columns : " ) print(numericalColumns) scaler_numerical = StandardScaler() numerical_transformer = Pipeline(steps=[('scale',scaler_numerical)]) ``` The preprocessing techniques that are applied must be customized for each of the columns. Sklearn provides a library called the [ColumnTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html?highlight=columntransformer#sklearn.compose.ColumnTransformer), which allows a sequence of these techniques to be applied to selective columns using a pipeline. Only the specified columns in transformers are transformed and combined in the output, and the non-specified columns are dropped. By specifying remainder='passthrough', all remaining columns that were not specified in transformers will be automatically passed through ``` preprocessorForCategoricalColumns = ColumnTransformer(transformers=[('cat', categorical_transformer, categoricalColumns)], remainder="passthrough") preprocessorForAllColumns = ColumnTransformer(transformers=[('cat', categorical_transformer, categoricalColumns), ('num',numerical_transformer,numericalColumns)], remainder="passthrough") ``` Machine learning algorithms cannot use simple text. We must convert the data from text to a number. Therefore, for each string that is a class we assign a label that is a number. For example, in the customer churn data set, the CHURNRISK output label is classified as high, medium, or low and is assigned labels 0, 1, or 2. We use the [LabelEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html?highlight=labelencoder#sklearn.preprocessing.LabelEncoder) class provided by Sklearn for this. ``` # prepare data frame for splitting data into train and test datasets features = [] features = df_churn_pd.drop(['CHURNRISK'], axis=1) label_churn = pd.DataFrame(df_churn_pd, columns = ['CHURNRISK']) label_encoder = LabelEncoder() label = df_churn_pd['CHURNRISK'] label = label_encoder.fit_transform(label) print("Encoded value of Churnrisk after applying label encoder : " + str(label)) ``` ### These are some of the popular preprocessing steps that are applied on the data sets. 
Look at [Data Processing in detail](https://developer.ibm.com/articles/data-preprocessing-in-detail/) for more information ``` area = 75 x = df_churn_pd['ESTINCOME'] y = df_churn_pd['DAYSSINCELASTTRADE'] z = df_churn_pd['TOTALDOLLARVALUETRADED'] pop_a = mpatches.Patch(color='#BB6B5A', label='High') pop_b = mpatches.Patch(color='#E5E88B', label='Medium') pop_c = mpatches.Patch(color='#8CCB9B', label='Low') def colormap(risk_list): cols=[] for l in risk_list: if l==0: cols.append('#BB6B5A') elif l==2: cols.append('#E5E88B') elif l==1: cols.append('#8CCB9B') return cols fig = plt.figure(figsize=(12,6)) fig.suptitle('2D and 3D view of churnrisk data') # First subplot ax = fig.add_subplot(1, 2,1) ax.scatter(x, y, alpha=0.8, c=colormap(label), s= area) ax.set_ylabel('DAYS SINCE LAST TRADE') ax.set_xlabel('ESTIMATED INCOME') plt.legend(handles=[pop_a,pop_b,pop_c]) # Second subplot ax = fig.add_subplot(1,2,2, projection='3d') ax.scatter(z, x, y, c=colormap(label), marker='o') ax.set_xlabel('TOTAL DOLLAR VALUE TRADED') ax.set_ylabel('ESTIMATED INCOME') ax.set_zlabel('DAYS SINCE LAST TRADE') plt.legend(handles=[pop_a,pop_b,pop_c]) plt.show() ``` <a id="split_data"></a> ## 4. Split data into test and train [Top](#top) Scikit-learn provides in built API to split the original dataset into train and test datasets. random_state is set to a number to be able to reproduce the same data split combination through multiple runs. [Split arrays or matrices into random train and test subsets](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) ``` X_train, X_test, y_train, y_test = train_test_split(features,label , random_state=0) print("Dimensions of datasets that will be used for training : Input features"+str(X_train.shape)+ " Output label" + str(y_train.shape)) print("Dimensions of datasets that will be used for testing : Input features"+str(X_test.shape)+ " Output label" + str(y_test.shape)) ``` <a id="helper_methods"></a> ## 5. Helper methods for graph generation [Top](#top) ``` def colormap(risk_list): cols=[] for l in risk_list: if l==0: cols.append('#BB6B5A') elif l==2: cols.append('#E5E88B') elif l==1: cols.append('#8CCB9B') return cols def two_d_compare(y_test,y_pred,model_name): #y_pred = label_encoder.fit_transform(y_pred) #y_test = label_encoder.fit_transform(y_test) area = (12 * np.random.rand(40))**2 plt.subplots(ncols=2, figsize=(10,4)) plt.suptitle('Actual vs Predicted data : ' +model_name + '. Accuracy : %.2f' % accuracy_score(y_test, y_pred)) plt.subplot(121) plt.scatter(X_test['ESTINCOME'], X_test['DAYSSINCELASTTRADE'], alpha=0.8, c=colormap(y_test)) plt.title('Actual') plt.legend(handles=[pop_a,pop_b,pop_c]) plt.subplot(122) plt.scatter(X_test['ESTINCOME'], X_test['DAYSSINCELASTTRADE'],alpha=0.8, c=colormap(y_pred)) plt.title('Predicted') plt.legend(handles=[pop_a,pop_b,pop_c]) plt.show() x = X_test['TOTALDOLLARVALUETRADED'] y = X_test['ESTINCOME'] z = X_test['DAYSSINCELASTTRADE'] pop_a = mpatches.Patch(color='#BB6B5A', label='High') pop_b = mpatches.Patch(color='#E5E88B', label='Medium') pop_c = mpatches.Patch(color='#8CCB9B', label='Low') def three_d_compare(y_test,y_pred,model_name): fig = plt.figure(figsize=(12,10)) fig.suptitle('Actual vs Predicted (3D) data : ' +model_name + '. 
Accuracy : %.2f' % accuracy_score(y_test, y_pred)) ax = fig.add_subplot(121, projection='3d') ax.scatter(x, y, z, c=colormap(y_test), marker='o') ax.set_xlabel('TOTAL DOLLAR VALUE TRADED') ax.set_ylabel('ESTIMATED INCOME') ax.set_zlabel('DAYS SINCE LAST TRADE') plt.legend(handles=[pop_a,pop_b,pop_c]) plt.title('Actual') ax = fig.add_subplot(122, projection='3d') ax.scatter(x, y, z, c=colormap(y_pred), marker='o') ax.set_xlabel('TOTAL DOLLAR VALUE TRADED') ax.set_ylabel('ESTIMATED INCOME') ax.set_zlabel('DAYS SINCE LAST TRADE') plt.legend(handles=[pop_a,pop_b,pop_c]) plt.title('Predicted') plt.show() def model_metrics(y_test,y_pred): print("Decoded values of Churnrisk after applying inverse of label encoder : " + str(np.unique(y_pred))) skplt.metrics.plot_confusion_matrix(y_test,y_pred,text_fontsize="small",cmap='Greens',figsize=(6,4)) plt.show() print("The classification report for the model : \n\n"+ classification_report(y_test, y_pred)) ``` <a id="prepare_model"></a> ## 6. Prepare Random Forest classification model [Top](#top) We instantiate a decision-tree based classification algorithm, namely, RandomForestClassifier. Next we define a pipeline to chain together the various transformers and estimators defined during the data preparation step before. Scikit-learn provides APIs that make it easier to combine multiple algorithms into a single pipeline. We fit the pipeline to training data and apply the trained model to transform test data and generate churn risk class prediction. [Understanding Random Forest Classifier](https://towardsdatascience.com/understanding-random-forest-58381e0602d2) ``` from sklearn.ensemble import RandomForestClassifier model_name = "Random Forest Classifier" randomForestClassifier = RandomForestClassifier(n_estimators=100, max_depth=2,random_state=0) ``` Pipelines are a convenient way of designing your data processing in a machine learning flow. The following code example shows how pipelines are set up using sklearn. Read more [Here](https://scikit-learn.org/stable/modules/classes.html?highlight=pipeline#module-sklearn.pipeline) ``` rfc_model = Pipeline(steps=[('preprocessorAll',preprocessorForAllColumns),('classifier', randomForestClassifier)]) ``` <a id="train_model"></a> ## 7. Train Random Forest classification model [Top](#top) ``` # Build models rfc_model.fit(X_train,y_train) ``` <a id="test_model"></a> ## 8. Test Random Forest classification model [Top](#top) ``` y_pred_rfc = rfc_model.predict(X_test) ``` <a id="evaluate_model"></a> ## 9. Evaluate Random Forest classification model [Top](#top) ### Model results In a supervised classification problem such as churn risk classification, we have a true output and a model-generated predicted output for each data point. For this reason, the results for each data point can be assigned to one of four categories: 1. True Positive (TP) - label is positive and prediction is also positive 2. True Negative (TN) - label is negative and prediction is also negative 3. False Positive (FP) - label is negative but prediction is positive 4. False Negative (FN) - label is positive but prediction is negative These four numbers are the building blocks for most classifier evaluation metrics. A fundamental point when considering classifier evaluation is that pure accuracy (i.e. was the prediction correct or incorrect) is not generally a good metric. The reason for this is because a dataset may be highly unbalanced. 
For example, if a model is designed to predict fraud from a dataset where 95% of the data points are not fraud and 5% of the data points are fraud, then a naive classifier that predicts not fraud, regardless of input, will be 95% accurate. For this reason, metrics like precision and recall are typically used because they take into account the type of error. In most applications there is some desired balance between precision and recall, which can be captured by combining the two into a single metric, called the F-measure.

```
two_d_compare(y_test,y_pred_rfc,model_name)

#three_d_compare(y_test,y_pred_rfc,model_name)
```

### Confusion matrix

In the graph below we have printed a confusion matrix and a self-explanatory classification report. The confusion matrix shows that 42 mediums were wrongly predicted as high, 2 mediums were wrongly predicted as low, and 52 mediums were accurately predicted as mediums.

```
y_test = label_encoder.inverse_transform(y_test)
y_pred_rfc = label_encoder.inverse_transform(y_pred_rfc)

model_metrics(y_test,y_pred_rfc)
```

[Precision Recall Fscore support](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html)

[Understanding the Confusion Matrix](https://towardsdatascience.com/confusion-matrix-for-your-multi-class-machine-learning-model-ff9aa3bf7826)

### Comparative study

In the bar chart below, we have compared the random forest classification algorithm output classes against the actual values.

```
uniqueValues, occurCount = np.unique(y_test, return_counts=True)
frequency_actual = (occurCount[0],occurCount[2],occurCount[1])

uniqueValues, occurCount = np.unique(y_pred_rfc, return_counts=True)
frequency_predicted_rfc = (occurCount[0],occurCount[2],occurCount[1])

n_groups = 3
fig, ax = plt.subplots(figsize=(10,5))
index = np.arange(n_groups)
bar_width = 0.1
opacity = 0.8

rects1 = plt.bar(index, frequency_actual, bar_width, alpha=opacity, color='g', label='Actual')
rects6 = plt.bar(index + bar_width, frequency_predicted_rfc, bar_width, alpha=opacity, color='purple', label='Random Forest - Predicted')

plt.xlabel('Churn Risk')
plt.ylabel('Frequency')
plt.title('Actual vs Predicted frequency.')
plt.xticks(index + bar_width, ('High', 'Medium', 'Low'))
plt.legend()

plt.tight_layout()
plt.show()
```

<a id="model_knn"></a>
## 10. Build K-Nearest classification model
[Top](#top)

The K nearest points around the data point to be predicted are taken into consideration. These K points already belong to known classes. The data point under consideration is assigned to the class to which the majority of these K points belong.

```
from sklearn.neighbors import KNeighborsClassifier

model_name = "K-Nearest Neighbor Classifier"

knnClassifier = KNeighborsClassifier(n_neighbors = 5, metric='minkowski', p=2)

knn_model = Pipeline(steps=[('preprocessorAll',preprocessorForAllColumns),('classifier', knnClassifier)])

knn_model.fit(X_train,y_train)

y_pred_knn = knn_model.predict(X_test)

y_test = label_encoder.transform(y_test)
two_d_compare(y_test,y_pred_knn,model_name)

y_test = label_encoder.inverse_transform(y_test)
y_pred_knn = label_encoder.inverse_transform(y_pred_knn)
model_metrics(y_test,y_pred_knn)
```

<a id="compare_classification"></a>
## 11. Comparative study of both classification algorithms
[Top](#top)

```
uniqueValues, occurCount = np.unique(y_test, return_counts=True)
frequency_actual = (occurCount[0],occurCount[2],occurCount[1])

uniqueValues, occurCount = np.unique(y_pred_rfc, return_counts=True)
frequency_predicted_rfc = (occurCount[0],occurCount[2],occurCount[1])

uniqueValues, occurCount = np.unique(y_pred_knn, return_counts=True)
frequency_predicted_knn = (occurCount[0],occurCount[2],occurCount[1])

n_groups = 3
fig, ax = plt.subplots(figsize=(10,5))
index = np.arange(n_groups)
bar_width = 0.1
opacity = 0.8

rects1 = plt.bar(index, frequency_actual, bar_width, alpha=opacity, color='g', label='Actual')
rects6 = plt.bar(index + bar_width*2, frequency_predicted_rfc, bar_width, alpha=opacity, color='purple', label='Random Forest - Predicted')
rects4 = plt.bar(index + bar_width*4, frequency_predicted_knn, bar_width, alpha=opacity, color='b', label='K-Nearest Neighbor - Predicted')

plt.xlabel('Churn Risk')
plt.ylabel('Frequency')
plt.title('Actual vs Predicted frequency.')
plt.xticks(index + bar_width, ('High', 'Medium', 'Low'))
plt.legend()

plt.tight_layout()
plt.show()
```

Until the evaluation metrics are satisfactory, you would iterate over the steps from data preprocessing through evaluation, tuning what are called the hyperparameters.

[Choosing the right estimator](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html)

### For a comparative study of some of the current most popular algorithms
Please refer to this [tutorial](https://developer.ibm.com/tutorials/learn-classification-algorithms-using-python-and-scikit-learn/)

<p><font size=-1 color=gray>
&copy; Copyright 2019 IBM Corp. All Rights Reserved.
<p>
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
</font></p>
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns data = pd.read_csv('dataset-of-10s.csv') data.head() ``` # checking basic integrity ``` data.shape data.info() ``` # no. of rows = non null values for each column -> no null value ``` data.head() ``` # checking unique records using uri ``` # extracting exact id def extract(x): splited_list = x.split(':') # spliting text at colons return splited_list[2] # returning third element data['uri'] = data['uri'].apply(extract) data.head() #successfully extracted the id ``` # checking for duplicate rows ``` data['uri'].nunique(), data['uri'].value_counts() data['uri'].value_counts().unique() dupe_mask = data['uri'].value_counts()==2 dupe_ids = dupe_mask[dupe_mask] dupe_ids.value_counts, dupe_ids.shape #converting duplicate ids into a list dupe_ids = dupe_ids.index dupe_ids = dupe_ids.tolist() dupe_ids duplicate_index = data.loc[data['uri'].isin(dupe_ids),:].index # all the duplicted records duplicate_index = duplicate_index.tolist() ``` # We will be removing all the duplication as they are few compared to data ``` data.drop(duplicate_index,axis=0,inplace=True) data.shape data.info() print("shape of data",data.shape ) print("no. of unique rows",data['uri'].nunique()) # no duplicates data.head() ``` # now we will be dropping all the unnecessary columns which contain string which cant be eficiently converted into numerics ``` data.drop(['track','artist','uri'],axis=1,inplace=True) data.head() ``` # Univariate analysis ``` #analysing class imbalance sns.countplot(data=data,x='target') data.columns # checking appropriate data type data[['danceability', 'energy', 'key', 'loudness']].info() # every feature have appropriate datatype # checking range of first 4 features data[['danceability', 'energy', 'key', 'loudness']].describe() plt.figure(figsize=(10,10)) plt.subplot(2,2,1) data['danceability'].plot() plt.subplot(2,2,2) plt.plot(data['energy'],color='red') plt.subplot(2,2,3) plt.plot(data[['key','loudness']]) ``` # danceabilty is well inside the range(0,1) # energy is well inside the range(0,1) # there's no -1 for keys-> every track has been assigned respective keys # loudness values are out of range(0,-60)db ``` loudness_error_idnex = data[data['loudness']>0].index loudness_error_idnex # removing rows with out of range values in loudness column data.drop(loudness_error_idnex,axis=0, inplace=True) data.shape # record is removed # checking appropriate datatype for next 5 columns data[['mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness',]].info() # datatypes are in acoordance with provided info data[['mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness',]].describe() # every feautre is within range sns.countplot(x=data['mode']) # have only two possible values 0 and 1, no noise in the feature data[['valence', 'tempo', 'duration_ms', 'time_signature', 'chorus_hit', 'sections']].info() # data type is in accordance with provided info data[['valence', 'tempo', 'duration_ms', 'time_signature', 'chorus_hit', 'sections']].describe() # all the data are in specified range ``` # Performing F-test to know the relation between every feature and target ``` data.head() x = data.iloc[:,:-1].values y = data.iloc[:,-1].values x.shape,y.shape from sklearn.feature_selection import f_classif f_stat,p_value = f_classif(x,y) feat_list = data.iloc[:,:-1].columns.tolist() # making a dataframe dict = {'Features':feat_list,'f_statistics':f_stat,'p_value':p_value} relation = pd.DataFrame(dict) 
relation.sort_values(by='p_value')
```
# Multivariate analysis
```
correlation = data.corr()
plt.figure(figsize=(15,12))
sns.heatmap(correlation, annot=True)
plt.tight_layout()
```
# strong features (in accordance with the F-test) --> danceability, loudness, acousticness, instrumentalness, valence
# less important features (in accordance with the F-test) --> duration_ms, sections, mode, time_signature, chorus_hit
# least important --> energy, key, speechiness, liveness, tempo
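The F-test ranking and the correlation heatmap above suggest a natural follow-up: keep only the strongest features and fit a quick baseline classifier. The sketch below is not part of the original analysis; it reuses the `x`/`y` arrays built earlier, and the choice of `k=10` features and logistic regression as the baseline model are illustrative assumptions.

```
# Minimal baseline sketch (assumptions: keep k=10 features, use logistic regression).
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# hold out 20% of the tracks for evaluation
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42, stratify=y)

baseline = make_pipeline(
    StandardScaler(),                          # scale features such as loudness and tempo
    SelectKBest(score_func=f_classif, k=10),   # keep the 10 strongest features per the F-test
    LogisticRegression(max_iter=1000),
)
baseline.fit(x_train, y_train)
print("hold-out accuracy:", baseline.score(x_test, y_test))
```

Comparing this score against a model trained on all features is a quick way to check whether the "least important" features add anything in practice.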
# PART 3 - Metadata Knowledge Graph creation in Amazon Neptune. Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Neptune is a purpose-built, high-performance graph database engine. This engine is optimized for storing billions of relationships and querying the graph with milliseconds latency. Neptune supports the popular graph query languages Apache TinkerPop Gremlin and W3C’s SPARQL, enabling you to build queries that efficiently navigate highly connected datasets. https://docs.aws.amazon.com/neptune/latest/userguide/feature-overview.html In that section we're going to use TinkerPop Gremlin as the language to create and query our graph. ### Important We need to downgrade the tornado library for the gremlin libraries to work in our notebook. Without doing this, you'll most likely run into the following error when executing some gremlin queries: "RuntimeError: Cannot run the event loop while another loop is running" ``` !pip install --upgrade tornado==4.5.3 ``` ### Restart your kernel Because the notebook itself has some dependencies with the tornado library, we need to restart the kernel before proceeding. To do so, go to the top menu > Kernel > Restart Kernel.. > Restart Then proceed and execute the following cells. ``` !pip install pandas !pip install jsonlines !pip install gremlinpython !pip install networkx !pip install matplotlib import os import jsonlines import networkx as nx import matplotlib.pyplot as plt import pandas as pd #load stored variable from previous notebooks %store -r ``` Loading the Gremlin libraries and connecting to our Neptune instance ``` from gremlin_python import statics from gremlin_python.process.anonymous_traversal import traversal from gremlin_python.process.graph_traversal import __ from gremlin_python.process.strategies import * from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection from gremlin_python.process.traversal import T from gremlin_python.process.traversal import Order from gremlin_python.process.traversal import Cardinality from gremlin_python.process.traversal import Column from gremlin_python.process.traversal import Direction from gremlin_python.process.traversal import Operator from gremlin_python.process.traversal import P from gremlin_python.process.traversal import Pop from gremlin_python.process.traversal import Scope from gremlin_python.process.traversal import Barrier from gremlin_python.process.traversal import Bindings from gremlin_python.process.traversal import WithOptions from gremlin_python.structure.graph import Graph graph = Graph() def start_remote_connection_neptune(): remoteConn = DriverRemoteConnection(your_neptune_endpoint_url,'g') g = graph.traversal().withRemote(remoteConn) return g # g is the traversal source to use to query the graph g = start_remote_connection_neptune() ``` <b>IMPORTANT:</b> - Note that the remote connection will time out after few minutes if unused so if you're encountering exceptions after having paused the notebook execution for a while, please re-run the above cell. - <b>Make sure your Neptune DB is created for the sole purpose of this labs as we'll be cleaning it before starting.</b> ``` #CAREFUL - the below line of code empties your graph. 
Again, make sure you're using a dedicated instance for this workshop g.V().drop().iterate() ``` ## A note on Gremlin Gremlin is a functional, data-flow language that enables users to succinctly express complex traversals on (or queries of) their application's property graph. Every Gremlin traversal is composed of a sequence of (potentially nested) steps. A step performs an atomic operation on the data stream. Every step is either a map-step (transforming the objects in the stream), a filter-step (removing objects from the stream), or a sideEffect-step (computing statistics about the stream). More info here: https://tinkerpop.apache.org/gremlin.html The image below is an extract from: https://tinkerpop.apache.org/docs/3.5.1/tutorials/getting-started/#_the_next_fifteen_minutes I highly recommend you to be familiar with the concepts of Vertex and Edges at the very minimum before proceeding with the notebook. ![Gremlin vertex edge](../static/gremlin-vertex-edge.png "Gremlin vertex edge") ## Vertices and Edges names See below the variables containing the labels for our vertices and edges that we'll create across the notebook. ``` #Vertex representing a Video V_VIDEO = "video" #Vertex representing a "scene" e.g. SHOT, TECHNICAL_CUE V_VIDEO_SCENE = "video_scene" #Vertex representing a Video segment. we arbitrary split our video into 1min segments and attach metadata to the segments itselves V_VIDEO_SEGMENT = 'video_segment' #Edge between VIDEO and SEGMENT E_HAS_SEGMENT = 'contains_segment' #Edge between VIDEO and SCENE E_HAS_SCENE = 'contains_scene' #Edge between Scene and Segment E_BELONG_TO_SEGMENT = 'belong_to_segment' #Vertex representing a label extracted by Rekognition from the video V_LABEL = 'label' #Edge between SEGMENT and LABEL E_HAS_LABEL = 'has_label' #Edge between parent LABEL and child LABEL e.g. construction -> bulldozer E_HAS_CHILD_LABEL = 'has_child_label' #Vertex representing the NER V_ENTITY = 'entities' #Vertex representing the type of NER V_ENTITY_TYPE = 'entity_type' #Edge between ENTITY and ENTITY_TYPE E_IS_OF_ENTITY_TYPE = 'is_of_entity_type' #Edge between SEGMENT and ENTITY E_HAS_ENTITY = 'has_entity' #Vertex representing a TOPIC V_TOPIC = 'topic' #Vertex representing a TOPIC_TERM V_TOPIC_TERM = 'topic_term' #Edge between a VIDEO_SEGMENT and a TOPIC E_HAS_TOPIC = 'has_topic' #Edge between a TOPIC and a TOPIC_TERM E_HAS_TERM = 'has_term' #Vertex representing a TERM V_TERM = 'term' ``` ## We start by adding our video to the Graph Note how I start with g, our traversal graph, then call the addV (V for Vertex) method and then attach properties to the new vertex. I end the line with ".next()" which will return the newly created node (similar to how an iterator would work). all method are "chained" together in one expression. ``` sample_video_vertex = g.addV(V_VIDEO).property("name", video_name).property("filename", video_file) .property('description', 'description of the video').next() ``` [QUERY] We're listing all the vertices in the graph with their metadata. At this stage, we only have one. Explanation: g.V() gets us all vertices in the graph, the .hasLabel() filters the vertices based on the vertex label(=type), the .valueMap() returns all properties for all vertices and the .toList() returns the full list. Note that you can use .next() instead of toList() to just return the next element in the list. ``` g.V().hasLabel(V_VIDEO).valueMap().toList() ``` [QUERY] Below is a different way to precisely return a vertex based on its name. 
Explanation: g.V() gives us all the vertices, .has() allows us to filter based on the name of the vertex and .next() returns the first (and only) item from the iterator. note that we haven't used .valueMap() so what is returned is the ID of the vertex. ``` g.V().has('name', video_name).next() ``` ## Creating 1min segments vertices in Neptune As mentioned in the previous notebook, we are creating metadata segments that we'll use to store labels and other information related to those 1min video segments. This will give us a more fine grained view of the video's topics and metadata. ``` print(segment_size_ms) #get the video duration by looking at the end of the last segment. def get_video_duration_in_ms(segment_detection_output): return segment_detection_output['Segments'][-1]['EndTimestampMillis'] #create a new segment vertex and connect it to the video def add_segment_vertex(video_name, start, end, g): #retrieving the video vertex video_vertex = g.V().has(V_VIDEO, 'name', video_name).next() #generating a segment ID segment_id = video_name + '-' + str(start) + '-' + str(end) #creating a new vertex for the segment new_segment_vert = g.addV(V_VIDEO_SEGMENT).property("name", segment_id).property('StartTimestampMillis', start).property('EndTimestampMillis', end).next() #connecting the video vertex to the segment vertex g.V(video_vertex).addE(E_HAS_SEGMENT).to(new_segment_vert).iterate() #generate segment vertices of a specific duration (default 60s) for a specific video def generate_segment_vertices(video_name, g, duration_in_millisecs, segment_size_in_millisecs=60000): #retrieve the mod modulo = duration_in_millisecs % segment_size_in_millisecs #counter that we'll increment by segment_size_in_millisecs steps counter = 0 while ((counter + segment_size_in_millisecs) < duration_in_millisecs) : start = counter end = counter + segment_size_in_millisecs add_segment_vertex(video_name, start, end, g) counter += segment_size_in_millisecs #adding the segment vertex to the video vertex add_segment_vertex(video_name, duration_in_millisecs - modulo, duration_in_millisecs, g) #add a vertex if it doesn't already exist def add_vertex(vertex_label, vertex_name, g): g.V().has(vertex_label,'name', vertex_name).fold().coalesce(__.unfold(), __.addV(vertex_label).property('name',vertex_name)).iterate() #add an edge between 2 vertices def add_edge(vertex_label_from, vertex_label_to, vertex_name_from, vertex_name_to, edge_name, g, weight=None): if weight == None: g.V().has(vertex_label_to, 'name', vertex_name_to).as_('v1').V().has(vertex_label_from, 'name', vertex_name_from).coalesce(__.outE(edge_name).where(__.inV().as_('v1')), __.addE(edge_name).to('v1')).iterate() else: g.V().has(vertex_label_to, 'name', vertex_name_to).as_('v1').V().has(vertex_label_from, 'name', vertex_name_from).coalesce(__.outE(edge_name).where(__.inV().as_('v1')), __.addE(edge_name).property('weight', weight).to('v1')).iterate() ``` Note: remember, the SegmentDetectionOutput object contains the output of the Amazon Rekognition segment (=scene) detection job ``` duration = get_video_duration_in_ms(SegmentDetectionOutput) generate_segment_vertices(video_name, g, duration, segment_size_ms) ``` [QUERY] Let's retrieve the segments that are connected to the video vertex via an edge, ordered by StartTimestampMillis. In that case we limit the result set to 5 items. 
Explanation: g.V() get us all vertices, .has(V_VIDEO, 'name', video_name) filters on the video vertices with name=video_name, .out() gives us all vertices connected to this vertex by an outgoing edge, .hasLabel(V_VIDEO_SEGMENT) filters the vertices to video segments only, .order().by() orders the vertices by StartTimestampMillis, .valueMap() gives us all properties for those vertices, .limit(5) reduces the results to 5 items, .toList() gives us the list of items. ``` list_of_segments = g.V().has(V_VIDEO, 'name', video_name).out().hasLabel(V_VIDEO_SEGMENT) \ .order().by('StartTimestampMillis', Order.asc).valueMap().limit(5).toList() list_of_segments ``` ## Graph Visualisation The networkx library alongside with matplotlib allows us to draw visually the graph. Let's draw our vertex video and the 1min segments we just created. ``` #Function printing the graph from a start vertex and a list of edges that will be traversed/displayed. def print_graph(start_vertex_label, start_vertex_name, list_edges, displayLabels=True, node_size=2000, node_limit=200): #getting the paths between vertices paths = g.V().has(start_vertex_label, 'name', start_vertex_name) #adding the edges that we want to traverse for edge in list_edges: paths = paths.out(edge) paths = paths.path().toList() #creating graph object G=nx.DiGraph() #counters to limit the number of nodes being displayed. limit_nodes_counter = 0 #creating the graph by iterating over the paths for p in paths: #depth of the graph depth = len(p) #we build our graph for i in range(0, depth -1): label1 = g.V(p[i]).valueMap().next()['name'][0] label2 = g.V(p[i+1]).valueMap().next()['name'][0] if limit_nodes_counter < node_limit: G.add_edge(label1, label2) limit_nodes_counter += 1 plt.figure(figsize=(12,7)) nx.draw(G, node_size=node_size, with_labels=displayLabels) plt.show() #please note that we limit the number of nodes being displayed print_graph(V_VIDEO, video_name, [E_HAS_SEGMENT], node_limit=15) ``` # Add the scenes into our graph In the below steps we're connecting the scenes to the video itself and not the segments as we want to be able to search and list the different types of scenes at the video level. However, note that we're not going to attach any specific metadata at the scene level, only at the segment level. 
``` def store_video_segment(original_video_name, json_segment_detection_output, orig_video_vertex): shot_counter = 0 tech_cue_counter = 0 for technicalCue in json_segment_detection_output['Segments']: #start frameStartValue = technicalCue['StartTimestampMillis'] / 1000 #end frameEndValue = technicalCue['EndTimestampMillis'] / 1000 #SHOT or TECHNICAL_CUE segment_type = technicalCue['Type'] counter = -1 if (segment_type == 'SHOT'): shot_counter += 1 counter = shot_counter elif (segment_type == 'TECHNICAL_CUE'): tech_cue_counter += 1 counter = tech_cue_counter segment_id = original_video_name + '-' + segment_type + '-' + str(counter) #creating the vertex for the video segment with all the metadata extracted from the segment generation job new_vert = g.addV(V_VIDEO_SCENE).property("name", segment_id).property("type", segment_type) \ .property('StartTimestampMillis', technicalCue['StartTimestampMillis']).property('EndTimestampMillis', technicalCue['EndTimestampMillis']) \ .property('StartFrameNumber', technicalCue['StartFrameNumber']).property('EndFrameNumber', technicalCue['EndFrameNumber']) \ .property('DurationFrames', technicalCue['DurationFrames']).next() #creating the edge between the original video vertex and the segment vertex with the type as a property of the relationship g.V(orig_video_vertex).addE(E_HAS_SCENE).to(new_vert).properties("type", segment_type).iterate() store_video_segment(video_name, SegmentDetectionOutput, sample_video_vertex) ``` [QUERY] We're retrieving the list of edges/branches created between the video and the scenes. Explanation: g.V() returns all vertices, .has(V_VIDEO, 'name', video_name) returns the V_VIDEO vertex with name=video_name, .out(E_HAS_SCENE) returns the list of vertices that are connected to the V_VIDEO vertex by a E_HAS_SCENE edge, toList() returns the list of items. ``` list_of_edges = g.V().has(V_VIDEO, 'name', video_name).out(E_HAS_SCENE).toList() print(f"the sample video vertex has now {len(list_of_edges)} edges connecting to the scenes vertices") ``` [QUERY] Let's search for the technical cues (black and fix screens) at the end of the video. Explanation: g.V() returns all vertices, .has(V_VIDEO, 'name', video_name) returns the V_VIDEO vertex with name=video_name, .out(E_HAS_SCENE) returns the list of vertices that are connected to the V_VIDEO vertex by a E_HAS_SCENE edge, .has('type', 'TECHNICAL_CUE') filters the list on type=TECHNICAL_CUE, the rest was seen above already. 
``` g.V().has(V_VIDEO, 'name', video_name).out(E_HAS_SCENE) \ .has('type', 'TECHNICAL_CUE') \ .order().by('EndTimestampMillis', Order.desc) \ .limit(5).valueMap().toList() ``` </br> Let's print the graph for those newly created SCENE vertices ``` #please note that we limit the number of nodes being displayed print_graph(V_VIDEO, video_name, [E_HAS_SCENE], node_limit=15) ``` ## Create the labels vertices and link them to the segments We're now going to create vertices to represent the labels in our graph and connect them to the 1min segments ``` def create_label_vertices(LabelDetectionOutput, video_name, g, confidence_threshold=80): labels = LabelDetectionOutput['Labels'] for instance in labels: #keeping only the labels with high confidence label_details_obj = instance['Label'] confidence = label_details_obj['Confidence'] if confidence > confidence_threshold: #adding then main label name to the list label_name = str(label_details_obj['Name']).lower() #adding the label vertex add_vertex(V_LABEL, label_name, g) #adding the link between video and label add_edge(V_VIDEO, V_LABEL, video_name, label_name, E_HAS_LABEL, g, weight=None) #adding parent labels too parents = label_details_obj['Parents'] if len(parents) > 0: for parent in parents: #create parent vertex if it doesn't exist parent_label_name = str(parent['Name']).lower() add_vertex(V_LABEL, parent_label_name, g) #create the relationship between parent and children if it doesn't already exist add_edge(V_LABEL, V_LABEL, parent_label_name, label_name, E_HAS_CHILD_LABEL, g, weight=None) create_label_vertices(LabelDetectionOutput, video_name, g, 80) ``` [QUERY] Let's list the labels vertices to see what was created above. Explanation: g.V() returns all vertices, .hasLabel(V_LABEL) returns only the vertices of label/type V_LABEL, .valueMap().limit(20).toList() gives us the list with properties for the first 20 items. ``` #retrieving a list of the first 20 labels label_list = g.V().hasLabel(V_LABEL).valueMap().limit(20).toList() label_list ``` Let's display a graph with our video's labels and the child labels relationships in between labels. ``` print_graph(V_VIDEO, video_name, [E_HAS_LABEL, E_HAS_CHILD_LABEL], node_limit=15) ``` [QUERY] A typical query would be to search for videos who have a specific label. Explanation: g.V().has(V_LABEL, 'name', ..) returns the first label vertex from the previous computed list, .in_(E_HAS_LABEL) returns all vertices who have an incoming edge (inE) pointing to this label vertex, .valueMap().toList() returns the list with properties. note that in_(E_HAS_LABEL) is equivalent to .inE(E_HAS_LABEL).outV() where .inE(E_HAS_LABEL) returns all incoming edges with the specified label and .outV() will traverse to the vertices attached to that edge. Obviously we only have the one result as we've only processed one video so far. 
``` g.V().has(V_LABEL, 'name', label_list[0]['name'][0]).in_(E_HAS_LABEL).valueMap().toList() ``` ## Create the topics and associated topic terms vertices We are going to re-arrange a bit the raw results from the topic modeling job to make it more readable ``` comprehend_topics_df.head() ``` We extract the segment id/number from the docname column in a separate column, cast it to numeric values, drop the docname column and sort by segment_id ``` comprehend_topics_df['segment_id'] = comprehend_topics_df['docname'].apply(lambda x: x.split(':')[-1]) comprehend_topics_df['segment_id'] = pd.to_numeric(comprehend_topics_df['segment_id'], errors='coerce') comprehend_topics_df = comprehend_topics_df.drop('docname', axis=1) comprehend_topics_df = comprehend_topics_df.sort_values(by='segment_id') comprehend_topics_df.head(5) ``` Looks better! Note that: - a segment_id can belong to several topics - proportion = the proportion of the document that is concerned with the topic Let's now create our topic vertices ``` def create_topic_vertices(topics_df, terms_df, video_name, g): #retrieve all segments for the video segments_vertex_list = g.V().has(V_VIDEO, 'name', video_name).out(E_HAS_SEGMENT).order().by('StartTimestampMillis', Order.asc).valueMap().toList() for index, row in topics_df.iterrows(): topic = row['topic'] segment_id = int(row['segment_id']) #string formating to use as name for our vertices topic_str = str(int(row['topic'])) #adding terms vertices that are associated with that topic and create the topic -> term edge list_of_terms = terms_df[comprehend_terms_df['topic'] == topic] #getting the segment name segment_name = segments_vertex_list[segment_id]['name'][0] #adding the topic vertex add_vertex(V_TOPIC, topic_str, g) #adding the link between entity and entity_type add_edge(V_VIDEO_SEGMENT, V_TOPIC, segment_name, topic_str, E_HAS_TOPIC, g, weight=None) #looping across all for index2, row2 in list_of_terms.iterrows(): term = row2['term'] weight = row2['weight'] add_vertex(V_TERM, term, g) add_edge(V_TOPIC, V_TERM, topic_str, term, E_HAS_TERM, g, weight=weight) create_topic_vertices(comprehend_topics_df, comprehend_terms_df, video_name, g) ``` Let's display our video, few segments and their associated topics ``` #please note that we limit the number of nodes being displayed print_graph(V_VIDEO, video_name, [E_HAS_SEGMENT, E_HAS_TOPIC], node_limit=10) ``` Let's display a partial graph showing relationships between the video -> segment -> topic -> term ``` print_graph(V_VIDEO, video_name, [E_HAS_SEGMENT, E_HAS_TOPIC, E_HAS_TERM], node_limit=20) ``` [QUERY] We're now listing all the segments that are in topic 2 (try different topic numbers if you want) Explanation: g.V().has(V_TOPIC, 'name', '2') returns the topic vertex with name=2, .in_(E_HAS_TOPIC) returns all vertices that have a edge pointing into that topic vertex, .valueMap().toList() returns the list of items with their properties ``` g.V().has(V_TOPIC, 'name', '2').in_(E_HAS_TOPIC).valueMap().toList() ``` ## Create the NER vertices and link them to the segments ``` #create the entity and entity_type vertices including the related edges def create_ner_vertices(ner_job_data, video_name, g, score_threshold=0.8): #retrieve all segments for the video segments_vertex_list = g.V().has(V_VIDEO, 'name', video_name).out(E_HAS_SEGMENT).order().by('StartTimestampMillis', Order.asc).valueMap().toList() counter_vertex = 0 for doc in ner_job_data: #each jsonline from the ner job is already segmented by 1min chunks, so we're just matching them to our 
ordered segments list. segment_vertex_name = segments_vertex_list[counter_vertex]['name'][0] for entity in doc: text = entity['Text'] type_ = entity['Type'] score = entity['Score'] if score > score_threshold: #adding the entity type vertex entity_type_vertex = g.V().has(V_ENTITY_TYPE,'name', type_).fold().coalesce(__.unfold(), __.addV(V_ENTITY_TYPE).property('name',type_)).iterate() #adding the entity type vertex entity_vertex = g.V().has(V_ENTITY,'name', text).fold().coalesce(__.unfold(), __.addV(V_ENTITY).property('name',text)).iterate() #adding the link between entity and entity_type entity_entity_type_edge = g.V().has(V_ENTITY_TYPE, 'name', type_).as_('v1').V().has(V_ENTITY, 'name', text).coalesce(__.outE(E_IS_OF_ENTITY_TYPE).where(__.inV().as_('v1')), __.addE(E_IS_OF_ENTITY_TYPE).to('v1')).iterate() #adding the edge between entity and segment segment_entity_edge = g.V().has(V_ENTITY,'name', text).as_('v1').V().has(V_VIDEO_SEGMENT, 'name', segment_vertex_name).coalesce(__.outE(E_HAS_ENTITY).where(__.inV().as_('v1')), __.addE(E_HAS_ENTITY).to('v1')).iterate() #print(f"attaching entity: {text} to segment: {segment_vertex_name}") counter_vertex += 1 create_ner_vertices(ner_job_data, video_name, g, 0.8) ``` [QUERY] Let's get a list of the first 20 entities Explanation: g.V().hasLabel(V_ENTITY) returns all vertices of label/type V_ENTITY, .valueMap().limit(20).toList() returns the list of the first 20 items with their properties (just name in that case). ``` entities_list = g.V().hasLabel(V_ENTITY).valueMap().limit(20).toList() entities_list ``` [QUERY] Let's now look up the first entity of the previous entities_list and check its type Explanation: g.V().has(V_ENTITY, 'name', ...) return the first V_ENTITY vertex of the entities_list list, .out(E_IS_OF_ENTITY_TYPE) returns vertices connected to this V_ENTITY vertex by a E_IS_OF_ENTITY_TYPE edge. ``` g.V().has(V_ENTITY, 'name', entities_list[0]['name'][0]).out(E_IS_OF_ENTITY_TYPE).valueMap().toList() ``` [QUERY] Let's see now which video segments contains that entity Explanation: g.V().has(V_ENTITY, 'name', ...) return the first V_ENTITY vertex of the entities_list list, .in_(E_HAS_ENTITY) returns all vertices that have an incoming edge into that V_ENTITY vertex and .valueMap().toList() returns the list with properties. ``` g.V().has(V_ENTITY, 'name', entities_list[0]['name'][0]).in_(E_HAS_ENTITY).valueMap().toList() ``` [QUERY] Similar query but this time we traverse further the graph and only return the list of videos which have this specific entity. Explanation: g.V().has(V_ENTITY, 'name', ...) return the first V_ENTITY vertex of the entities_list list, .in_(E_HAS_ENTITY) returns the V_VIDEO_SEGMENT vertices that have an incoming edge into that V_ENTITY vertex, .in_(E_HAS_SEGMENT) returns the V_VIDEO vertices that have an incoming edge into those V_VIDEO_SEGMENT vertices and .valueMap().toList() returns the list with properties. Note how by chaining the .in_() methods we are able to traverse the graph from one type of vertex to the other. 
``` g.V().has(V_ENTITY, 'name', entities_list[0]['name'][0]).in_(E_HAS_ENTITY).in_(E_HAS_SEGMENT).dedup().valueMap().toList() ``` </br> Let's now display a graph showing the relationship between Video -> Segment -> Entity ``` print_graph(V_VIDEO, video_name, [E_HAS_SEGMENT, E_HAS_ENTITY], node_size=800, node_limit=30) ``` # Summary This notebook only touched the surface of what you can do with Graph databases but it should give you an idea of how powerful they are at modeling highly dimensional relationships between entities. This specific architecture allows them to be especially scalable and performing even with billions of vertices and edges. Gremlin is the most widely used query language for graph DB and provides quite an intuitive way to traverse/query those graphs by chaining those instructions but if you want a more traditional SQL language, you can also look into SPARQL as an alternative. https://graphdb.ontotext.com/documentation/free/devhub/sparql.html#using-sparql-in-graphdb
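As a closing illustration of how these traversals compose, the sketch below combines two kinds of metadata in a single query: it looks for videos that carry a given Rekognition label and whose 1-minute segments mention a given Comprehend entity. This is not part of the workshop steps, and the values `'person'` and `'Amazon'` are placeholders — substitute names that actually exist in your graph (for example from `label_list` and `entities_list` above).

```
# Sketch: videos that have a given label AND a segment mentioning a given entity.
# 'person' and 'Amazon' are placeholder values.
candidate_videos = g.V().has(V_LABEL, 'name', 'person') \
    .in_(E_HAS_LABEL).hasLabel(V_VIDEO) \
    .where(__.out(E_HAS_SEGMENT).out(E_HAS_ENTITY).has('name', 'Amazon')) \
    .dedup().valueMap().toList()

candidate_videos
```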
# SmallPebble [![](https://github.com/sradc/smallpebble/workflows/Python%20package/badge.svg)](https://github.com/sradc/smallpebble/commits/) **Project status: unstable.** <br><p align="center"><img src="https://raw.githubusercontent.com/sradc/SmallPebble/master/pebbles.jpg"/></p><br> SmallPebble is a minimal automatic differentiation and deep learning library written from scratch in [Python](https://www.python.org/), using [NumPy](https://numpy.org/)/[CuPy](https://cupy.dev/). The implementation is relatively small, and mainly in the file: [smallpebble.py](https://github.com/sradc/SmallPebble/blob/master/smallpebble/smallpebble.py). To help understand it, check out [this](https://sidsite.com/posts/autodiff/) introduction to autodiff, which presents an autodiff framework that works in the same way as SmallPebble (except using scalars instead of NumPy arrays). SmallPebble's *raison d'etre* is to be a simplified deep learning implementation, for those who want to learn what’s under the hood of deep learning frameworks. However, because it is written in terms of vectorised NumPy/CuPy operations, it performs well enough for non-trivial models to be trained using it. **Highlights** - Relatively simple implementation. - Can run on GPU, using CuPy. - Various operations, such as matmul, conv2d, maxpool2d. - Array broadcasting support. - Eager or lazy execution. - Powerful API for creating models. - It's easy to add new SmallPebble functions. **Notes** Graphs are built implicitly via Python objects referencing Python objects. When `get_gradients` is called, autodiff is carried out on the whole sub-graph. The default array library is NumPy. --- **Read on to see:** - Example models created and trained using SmallPebble. - A brief guide to using SmallPebble. ``` import matplotlib.pyplot as plt import numpy as np from tqdm.notebook import tqdm import smallpebble as sp from smallpebble.misc import load_data ``` ## Training a neural network to classify handwritten digits (MNIST) ``` "Load the dataset, and create a validation set." X_train, y_train, _, _ = load_data('mnist') # load / download from openml.org X_train = X_train/255 # normalize # Seperate out data for validation. X = X_train[:50_000, ...] y = y_train[:50_000] X_eval = X_train[50_000:60_000, ...] y_eval = y_train[50_000:60_000] "Plot, to check we have the right data." plt.figure(figsize=(5,5)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(X_train[i,:].reshape(28,28), cmap='gray', vmin=0, vmax=1) plt.show() "Create a model, with two fully connected hidden layers." X_in = sp.Placeholder() y_true = sp.Placeholder() h = sp.linearlayer(28*28, 100)(X_in) h = sp.Lazy(sp.leaky_relu)(h) h = sp.linearlayer(100, 100)(h) h = sp.Lazy(sp.leaky_relu)(h) h = sp.linearlayer(100, 10)(h) y_pred = sp.Lazy(sp.softmax)(h) loss = sp.Lazy(sp.cross_entropy)(y_pred, y_true) learnables = sp.get_learnables(y_pred) loss_vals = [] validation_acc = [] "Train model, while measuring performance on the validation dataset." 
NUM_ITERS = 300 BATCH_SIZE = 200 eval_batch = sp.batch(X_eval, y_eval, BATCH_SIZE) adam = sp.Adam() # Adam optimization for i, (xbatch, ybatch) in tqdm(enumerate(sp.batch(X, y, BATCH_SIZE)), total=NUM_ITERS): if i >= NUM_ITERS: break X_in.assign_value(sp.Variable(xbatch)) y_true.assign_value(ybatch) loss_val = loss.run() # run the graph if np.isnan(loss_val.array): print("loss is nan, aborting.") break loss_vals.append(loss_val.array) # Compute gradients, and use to carry out learning step: gradients = sp.get_gradients(loss_val) adam.training_step(learnables, gradients) # Compute validation accuracy: x_eval_batch, y_eval_batch = next(eval_batch) X_in.assign_value(sp.Variable(x_eval_batch)) predictions = y_pred.run() predictions = np.argmax(predictions.array, axis=1) accuracy = (y_eval_batch == predictions).mean() validation_acc.append(accuracy) # Plot results: print(f'Final validation accuracy: {np.mean(validation_acc[-10:])}') plt.figure(figsize=(14, 4)) plt.subplot(1, 2, 1) plt.ylabel('Loss') plt.xlabel('Iteration') plt.plot(loss_vals) plt.subplot(1, 2, 2) plt.ylabel('Validation accuracy') plt.xlabel('Iteration') plt.suptitle('Neural network trained on MNIST, using SmallPebble.') plt.ylim([0, 1]) plt.plot(validation_acc) plt.show() ``` ## Training a convolutional neural network on CIFAR-10, using CuPy This was run on [Google Colab](https://colab.research.google.com/), with a GPU. ``` "Load the CIFAR dataset." X_train, y_train, _, _ = load_data('cifar') # load/download from openml.org X_train = X_train/255 # normalize """Plot, to check it's the right data. (This cell's code is from: https://www.tensorflow.org/tutorials/images/cnn#verify_the_data) """ class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] plt.figure(figsize=(8,8)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(X_train[i,:].reshape(32,32,3)) plt.xlabel(class_names[y_train[i]]) plt.show() "Switch array library to CuPy, so can use GPU." import cupy sp.use(cupy) print(sp.array_library.library.__name__) # should be 'cupy' "Convert data to CuPy arrays" X_train = cupy.array(X_train) y_train = cupy.array(y_train) # Seperate out data for validation as before. X = X_train[:45_000, ...] y = y_train[:45_000] X_eval = X_train[45_000:50_000, ...] y_eval = y_train[45_000:50_000] """Define a model.""" X_in = sp.Placeholder() y_true = sp.Placeholder() h = sp.convlayer(height=3, width=3, depth=3, n_kernels=32)(X_in) h = sp.Lazy(sp.leaky_relu)(h) h = sp.Lazy(lambda a: sp.maxpool2d(a, 2, 2, strides=[2, 2]))(h) h = sp.convlayer(3, 3, 32, 128, padding='VALID')(h) h = sp.Lazy(sp.leaky_relu)(h) h = sp.Lazy(lambda a: sp.maxpool2d(a, 2, 2, strides=[2, 2]))(h) h = sp.convlayer(3, 3, 128, 128, padding='VALID')(h) h = sp.Lazy(sp.leaky_relu)(h) h = sp.Lazy(lambda a: sp.maxpool2d(a, 2, 2, strides=[2, 2]))(h) h = sp.Lazy(lambda x: sp.reshape(x, [-1, 3*3*128]))(h) h = sp.linearlayer(3*3*128, 10)(h) h = sp.Lazy(sp.softmax)(h) y_pred = h loss = sp.Lazy(sp.cross_entropy)(y_pred, y_true) learnables = sp.get_learnables(y_pred) loss_vals = [] validation_acc = [] # Check we get the expected dimensions X_in.assign_value(sp.Variable(X[0:3, :].reshape([-1, 32, 32, 3]))) h.run().shape ``` Train the model. 
``` NUM_ITERS = 3000 BATCH_SIZE = 128 eval_batch = sp.batch(X_eval, y_eval, BATCH_SIZE) adam = sp.Adam() for i, (xbatch, ybatch) in tqdm(enumerate(sp.batch(X, y, BATCH_SIZE)), total=NUM_ITERS): if i >= NUM_ITERS: break xbatch_images = xbatch.reshape([-1, 32, 32, 3]) X_in.assign_value(sp.Variable(xbatch_images)) y_true.assign_value(ybatch) loss_val = loss.run() if np.isnan(loss_val.array): print("Aborting, loss is nan.") break loss_vals.append(loss_val.array) # Compute gradients, and carry out learning step. gradients = sp.get_gradients(loss_val) adam.training_step(learnables, gradients) # Compute validation accuracy: x_eval_batch, y_eval_batch = next(eval_batch) X_in.assign_value(sp.Variable(x_eval_batch.reshape([-1, 32, 32, 3]))) predictions = y_pred.run() predictions = np.argmax(predictions.array, axis=1) accuracy = (y_eval_batch == predictions).mean() validation_acc.append(accuracy) print(f'Final validation accuracy: {np.mean(validation_acc[-10:])}') plt.figure(figsize=(14, 4)) plt.subplot(1, 2, 1) plt.ylabel('Loss') plt.xlabel('Iteration') plt.plot(loss_vals) plt.subplot(1, 2, 2) plt.ylabel('Validation accuracy') plt.xlabel('Iteration') plt.suptitle('CNN trained on CIFAR-10, using SmallPebble.') plt.ylim([0, 1]) plt.plot(validation_acc) plt.show() ``` It looks like we could improve our results by training for longer (and we could improve our model architecture). --- # Brief guide to using SmallPebble SmallPebble provides the following building blocks to make models with: - `sp.Variable` - Operations, such as `sp.add`, `sp.mul`, etc. - `sp.get_gradients` - `sp.Lazy` - `sp.Placeholder` (this is really just `sp.Lazy` on the identity function) - `sp.learnable` - `sp.get_learnables` The following examples show how these are used. ## Switching between NumPy and CuPy We can dynamically switch between NumPy and CuPy. (Assuming you have a CuPy compatible GPU and CuPy set up. Note, CuPy is available on Google Colab, if you change the runtime to GPU.) ``` import cupy import numpy import smallpebble as sp # Switch to CuPy sp.use(cupy) print(sp.array_library.library.__name__) # should be 'cupy' # Switch back to NumPy: sp.use(numpy) print(sp.array_library.library.__name__) # should be 'numpy' ``` ## sp.Variable & sp.get_gradients With SmallPebble, you can: - Wrap NumPy arrays in `sp.Variable` - Apply SmallPebble operations (e.g. `sp.matmul`, `sp.add`, etc.) - Compute gradients with `sp.get_gradients` ``` a = sp.Variable(np.random.random([2, 2])) b = sp.Variable(np.random.random([2, 2])) c = sp.Variable(np.random.random([2])) y = sp.mul(a, b) + c print('y.array:\n', y.array) gradients = sp.get_gradients(y) grad_a = gradients[a] grad_b = gradients[b] grad_c = gradients[c] print('grad_a:\n', grad_a) print('grad_b:\n', grad_b) print('grad_c:\n', grad_c) ``` Note that `y` is computed straight away, i.e. the (forward) computation happens immediately. Also note that `y` is a sp.Variable and we could continue to carry out SmallPebble operations on it. ## sp.Lazy & sp.Placeholder Lazy graphs are constructed using `sp.Lazy` and `sp.Placeholder`. ``` lazy_node = sp.Lazy(lambda a, b: a + b)(1, 2) print(lazy_node) print(lazy_node.run()) a = sp.Lazy(lambda a: a)(2) y = sp.Lazy(lambda a, b, c: a * b + c)(a, 3, 4) print(y) print(y.run()) ``` Forward computation does not happen immediately - only when .run() is called. 
``` a = sp.Placeholder() b = sp.Variable(np.random.random([2, 2])) y = sp.Lazy(sp.matmul)(a, b) a.assign_value(sp.Variable(np.array([[1,2], [3,4]]))) result = y.run() print('result.array:\n', result.array) ``` You can use .run() as many times as you like. Let's change the placeholder value and re-run the graph: ``` a.assign_value(sp.Variable(np.array([[10,20], [30,40]]))) result = y.run() print('result.array:\n', result.array) ``` Finally, let's compute gradients: ``` gradients = sp.get_gradients(result) ``` Note that `sp.get_gradients` is called on `result`, which is a `sp.Variable`, not on `y`, which is a `sp.Lazy` instance. ## sp.learnable & sp.get_learnables Use `sp.learnable` to flag parameters as learnable, allowing them to be extracted from a lazy graph with `sp.get_learnables`. This enables a workflow of: building a model, while flagging parameters as learnable, and then extracting all the parameters in one go at the end. ``` a = sp.Placeholder() b = sp.learnable(sp.Variable(np.random.random([2, 1]))) y = sp.Lazy(sp.matmul)(a, b) y = sp.Lazy(sp.add)(y, sp.learnable(sp.Variable(np.array([5])))) learnables = sp.get_learnables(y) for learnable in learnables: print(learnable) ```
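Putting the pieces together, here is a minimal sketch of a single training step, assembled from the MNIST example earlier in this document (the layer sizes and the random batch data are arbitrary placeholders):

```
import numpy as np
import smallpebble as sp

# Tiny lazy model: one linear layer followed by softmax.
X_in = sp.Placeholder()
y_true = sp.Placeholder()
h = sp.linearlayer(4, 3)(X_in)
y_pred = sp.Lazy(sp.softmax)(h)
loss = sp.Lazy(sp.cross_entropy)(y_pred, y_true)

learnables = sp.get_learnables(y_pred)
adam = sp.Adam()

# One training step on a random batch (8 examples, 4 features, 3 classes).
xbatch = np.random.random([8, 4])
ybatch = np.random.randint(0, 3, size=8)

X_in.assign_value(sp.Variable(xbatch))
y_true.assign_value(ybatch)
loss_val = loss.run()                      # forward pass through the lazy graph
gradients = sp.get_gradients(loss_val)     # reverse-mode autodiff
adam.training_step(learnables, gradients)  # update parameters in place
```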
# ChainerRL Quickstart Guide This is a quickstart guide for users who just want to try ChainerRL for the first time. If you have not yet installed ChainerRL, run the command below to install it: ``` %%bash pip install chainerrl ``` If you have already installed ChainerRL, let's begin! First, you need to import necessary modules. The module name of ChainerRL is `chainerrl`. Let's import `gym` and `numpy` as well since they are used later. ``` import chainer import chainer.functions as F import chainer.links as L import chainerrl import gym import numpy as np ``` ChainerRL can be used for any problems if they are modeled as "environments". [OpenAI Gym](https://github.com/openai/gym) provides various kinds of benchmark environments and defines the common interface among them. ChainerRL uses a subset of the interface. Specifically, an environment must define its observation space and action space and have at least two methods: `reset` and `step`. - `env.reset` will reset the environment to the initial state and return the initial observation. - `env.step` will execute a given action, move to the next state and return four values: - a next observation - a scalar reward - a boolean value indicating whether the current state is terminal or not - additional information - `env.render` will render the current state. Let's try 'CartPole-v0', which is a classic control problem. You can see below that its observation space consists of four real numbers while its action space consists of two discrete actions. ``` env = gym.make('CartPole-v0') print('observation space:', env.observation_space) print('action space:', env.action_space) obs = env.reset() env.render(close=True) print('initial observation:', obs) action = env.action_space.sample() obs, r, done, info = env.step(action) print('next observation:', obs) print('reward:', r) print('done:', done) print('info:', info) ``` Now you have defined your environment. Next, you need to define an agent, which will learn through interactions with the environment. ChainerRL provides various agents, each of which implements a deep reinforcement learning algorithm. To use [DQN (Deep Q-Network)](http://dx.doi.org/10.1038/nature14236), you need to define a Q-function that receives an observation and returns an expected future return for each action the agent can take. In ChainerRL, you can define your Q-function as `chainer.Link` as below. Note that the outputs are wrapped by `chainerrl.action_value.DiscreteActionValue`, which implements `chainerrl.action_value.ActionValue`. By wrapping the outputs of Q-functions, ChainerRL can treat discrete-action Q-functions like this and [NAFs (Normalized Advantage Functions)](https://arxiv.org/abs/1603.00748) in the same way. ``` class QFunction(chainer.Chain): def __init__(self, obs_size, n_actions, n_hidden_channels=50): super().__init__( l0=L.Linear(obs_size, n_hidden_channels), l1=L.Linear(n_hidden_channels, n_hidden_channels), l2=L.Linear(n_hidden_channels, n_actions)) def __call__(self, x, test=False): """ Args: x (ndarray or chainer.Variable): An observation test (bool): a flag indicating whether it is in test mode """ h = F.tanh(self.l0(x)) h = F.tanh(self.l1(h)) return chainerrl.action_value.DiscreteActionValue(self.l2(h)) obs_size = env.observation_space.shape[0] n_actions = env.action_space.n q_func = QFunction(obs_size, n_actions) ``` If you want to use CUDA for computation, as usual as in Chainer, call `to_gpu`. ``` # Uncomment to use CUDA # q_func.to_gpu(0) ``` You can also use ChainerRL's predefined Q-functions. 
``` _q_func = chainerrl.q_functions.FCStateQFunctionWithDiscreteAction( obs_size, n_actions, n_hidden_layers=2, n_hidden_channels=50) ``` As in Chainer, `chainer.Optimizer` is used to update models. ``` # Use Adam to optimize q_func. eps=1e-2 is for stability. optimizer = chainer.optimizers.Adam(eps=1e-2) optimizer.setup(q_func) ``` A Q-function and its optimizer are used by a DQN agent. To create a DQN agent, you need to specify a bit more parameters and configurations. ``` # Set the discount factor that discounts future rewards. gamma = 0.95 # Use epsilon-greedy for exploration explorer = chainerrl.explorers.ConstantEpsilonGreedy( epsilon=0.3, random_action_func=env.action_space.sample) # DQN uses Experience Replay. # Specify a replay buffer and its capacity. replay_buffer = chainerrl.replay_buffer.ReplayBuffer(capacity=10 ** 6) # Since observations from CartPole-v0 is numpy.float64 while # Chainer only accepts numpy.float32 by default, specify # a converter as a feature extractor function phi. phi = lambda x: x.astype(np.float32, copy=False) # Now create an agent that will interact with the environment. agent = chainerrl.agents.DoubleDQN( q_func, optimizer, replay_buffer, gamma, explorer, replay_start_size=500, update_interval=1, target_update_interval=100, phi=phi) ``` Now you have an agent and an environment. It's time to start reinforcement learning! In training, use `agent.act_and_train` to select exploratory actions. `agent.stop_episode_and_train` must be called after finishing an episode. You can get training statistics of the agent via `agent.get_statistics`. ``` n_episodes = 200 max_episode_len = 200 for i in range(1, n_episodes + 1): obs = env.reset() reward = 0 done = False R = 0 # return (sum of rewards) t = 0 # time step while not done and t < max_episode_len: # Uncomment to watch the behaviour # env.render() action = agent.act_and_train(obs, reward) obs, reward, done, _ = env.step(action) R += reward t += 1 if i % 10 == 0: print('episode:', i, 'R:', R, 'statistics:', agent.get_statistics()) agent.stop_episode_and_train(obs, reward, done) print('Finished.') ``` Now you finished training the agent. How good is the agent now? You can test it by using `agent.act` and `agent.stop_episode` instead. Exploration such as epsilon-greedy is not used anymore. ``` for i in range(10): obs = env.reset() done = False R = 0 t = 0 while not done and t < 200: env.render(close=True) action = agent.act(obs) obs, r, done, _ = env.step(action) R += r t += 1 print('test episode:', i, 'R:', R) agent.stop_episode() ``` If test scores are good enough, the only remaining task is to save the agent so that you can reuse it. What you need to do is to simply call `agent.save` to save the agent, then `agent.load` to load the saved agent. ``` # Save an agent to the 'agent' directory agent.save('agent') # Uncomment to load an agent from the 'agent' directory # agent.load('agent') ``` RL completed! But writing code like this every time you use RL might be boring. So, ChainerRL has utility functions that do these things. ``` # Set up the logger to print info messages for understandability. 
import logging
import sys
gym.undo_logger_setup()  # Turn off gym's default logger settings
logging.basicConfig(level=logging.INFO, stream=sys.stdout, format='')

chainerrl.experiments.train_agent_with_evaluation(
    agent, env,
    steps=2000,           # Train the agent for 2000 steps
    eval_n_runs=10,       # 10 episodes are sampled for each evaluation
    max_episode_len=200,  # Maximum length of each episode
    eval_interval=1000,   # Evaluate the agent after every 1000 steps
    outdir='result')      # Save everything to the 'result' directory
```

That's all of the ChainerRL quickstart guide. To know more about ChainerRL, please look into the `examples` directory and read and run the examples. Thank you!
# DEAP DEAP is a novel evolutionary computation framework for rapid prototyping and testing of ideas. It seeks to make algorithms explicit and data structures transparent. It works in perfect harmony with parallelisation mechanism such as multiprocessing and SCOOP. The following documentation presents the key concepts and many features to build your own evolutions. Library documentation: <a>http://deap.readthedocs.org/en/master/</a> ## One Max Problem (GA) This problem is very simple, we search for a 1 filled list individual. This problem is widely used in the evolutionary computation community since it is very simple and it illustrates well the potential of evolutionary algorithms. ``` import random from deap import base from deap import creator from deap import tools # creator is a class factory that can build new classes at run-time creator.create("FitnessMax", base.Fitness, weights=(1.0,)) creator.create("Individual", list, fitness=creator.FitnessMax) # a toolbox stores functions and their arguments toolbox = base.Toolbox() # attribute generator toolbox.register("attr_bool", random.randint, 0, 1) # structure initializers toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_bool, 100) toolbox.register("population", tools.initRepeat, list, toolbox.individual) # evaluation function def evalOneMax(individual): return sum(individual), # register the required genetic operators toolbox.register("evaluate", evalOneMax) toolbox.register("mate", tools.cxTwoPoint) toolbox.register("mutate", tools.mutFlipBit, indpb=0.05) toolbox.register("select", tools.selTournament, tournsize=3) random.seed(64) # instantiate a population pop = toolbox.population(n=300) CXPB, MUTPB, NGEN = 0.5, 0.2, 40 # evaluate the entire population fitnesses = list(map(toolbox.evaluate, pop)) for ind, fit in zip(pop, fitnesses): ind.fitness.values = fit print(" Evaluated %i individuals" % len(pop)) # begin the evolution for g in range(NGEN): print("-- Generation %i --" % g) # select the next generation individuals offspring = toolbox.select(pop, len(pop)) # clone the selected individuals offspring = list(map(toolbox.clone, offspring)) # apply crossover and mutation on the offspring for child1, child2 in zip(offspring[::2], offspring[1::2]): if random.random() < CXPB: toolbox.mate(child1, child2) del child1.fitness.values del child2.fitness.values for mutant in offspring: if random.random() < MUTPB: toolbox.mutate(mutant) del mutant.fitness.values # evaluate the individuals with an invalid fitness invalid_ind = [ind for ind in offspring if not ind.fitness.valid] fitnesses = map(toolbox.evaluate, invalid_ind) for ind, fit in zip(invalid_ind, fitnesses): ind.fitness.values = fit print(" Evaluated %i individuals" % len(invalid_ind)) # the population is entirely replaced by the offspring pop[:] = offspring # gather all the fitnesses in one list and print the stats fits = [ind.fitness.values[0] for ind in pop] length = len(pop) mean = sum(fits) / length sum2 = sum(x*x for x in fits) std = abs(sum2 / length - mean**2)**0.5 print(" Min %s" % min(fits)) print(" Max %s" % max(fits)) print(" Avg %s" % mean) print(" Std %s" % std) best_ind = tools.selBest(pop, 1)[0] print("Best individual is %s, %s" % (best_ind, best_ind.fitness.values)) ``` ## Symbolic Regression (GP) Symbolic regression is one of the best known problems in GP. It is commonly used as a tuning problem for new algorithms, but is also widely used with real-life distributions, where other regression methods may not work. 
All symbolic regression problems use an arbitrary data distribution, and try to fit the most accurately the data with a symbolic formula. Usually, a measure like the RMSE (Root Mean Square Error) is used to measure an individual’s fitness. In this example, we use a classical distribution, the quartic polynomial (x^4 + x^3 + x^2 + x), a one-dimension distribution. 20 equidistant points are generated in the range [-1, 1], and are used to evaluate the fitness. ``` import operator import math import random import numpy from deap import algorithms from deap import base from deap import creator from deap import tools from deap import gp # define a new function for divison that guards against divide by 0 def protectedDiv(left, right): try: return left / right except ZeroDivisionError: return 1 # add aritmetic primitives pset = gp.PrimitiveSet("MAIN", 1) pset.addPrimitive(operator.add, 2) pset.addPrimitive(operator.sub, 2) pset.addPrimitive(operator.mul, 2) pset.addPrimitive(protectedDiv, 2) pset.addPrimitive(operator.neg, 1) pset.addPrimitive(math.cos, 1) pset.addPrimitive(math.sin, 1) # constant terminal pset.addEphemeralConstant("rand101", lambda: random.randint(-1,1)) # define number of inputs pset.renameArguments(ARG0='x') # create fitness and individual objects creator.create("FitnessMin", base.Fitness, weights=(-1.0,)) creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMin) # register evolution process parameters through the toolbox toolbox = base.Toolbox() toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=2) toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.expr) toolbox.register("population", tools.initRepeat, list, toolbox.individual) toolbox.register("compile", gp.compile, pset=pset) # evaluation function def evalSymbReg(individual, points): # transform the tree expression in a callable function func = toolbox.compile(expr=individual) # evaluate the mean squared error between the expression # and the real function : x**4 + x**3 + x**2 + x sqerrors = ((func(x) - x**4 - x**3 - x**2 - x)**2 for x in points) return math.fsum(sqerrors) / len(points), toolbox.register("evaluate", evalSymbReg, points=[x/10. for x in range(-10,10)]) toolbox.register("select", tools.selTournament, tournsize=3) toolbox.register("mate", gp.cxOnePoint) toolbox.register("expr_mut", gp.genFull, min_=0, max_=2) toolbox.register("mutate", gp.mutUniform, expr=toolbox.expr_mut, pset=pset) # prevent functions from getting too deep/complex toolbox.decorate("mate", gp.staticLimit(key=operator.attrgetter("height"), max_value=17)) toolbox.decorate("mutate", gp.staticLimit(key=operator.attrgetter("height"), max_value=17)) # compute some statistics about the population stats_fit = tools.Statistics(lambda ind: ind.fitness.values) stats_size = tools.Statistics(len) mstats = tools.MultiStatistics(fitness=stats_fit, size=stats_size) mstats.register("avg", numpy.mean) mstats.register("std", numpy.std) mstats.register("min", numpy.min) mstats.register("max", numpy.max) random.seed(318) pop = toolbox.population(n=300) hof = tools.HallOfFame(1) # run the algorithm pop, log = algorithms.eaSimple(pop, toolbox, 0.5, 0.1, 40, stats=mstats, halloffame=hof, verbose=True) ```
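After `eaSimple` returns, the best individual found during the run sits in the hall of fame. A short follow-up sketch (reusing the `hof` and `toolbox` objects defined above) to inspect and evaluate it:

```
# Inspect the best individual found during the evolution.
best = hof[0]
print("Best expression:", str(best))
print("Fitness (MSE):", best.fitness.values[0])

# Compile the tree into a callable function and evaluate it at a sample point.
func = toolbox.compile(expr=best)
print("f(0.5) =", func(0.5))
```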
<a href="https://colab.research.google.com/github/csy99/dna-nn-theory/blob/master/supervised_UCI_adam256_save_embedding.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import numpy as np import pandas as pd import matplotlib import matplotlib.pyplot as plt from itertools import product import re import time from sklearn.model_selection import train_test_split from sklearn.manifold import TSNE import tensorflow as tf from tensorflow import keras ``` # Read Data ``` !pip install PyDrive from google.colab import drive drive.mount('/content/gdrive') def convert_label(row): if row["Classes"] == 'EI': return 0 if row["Classes"] == 'IE': return 1 if row["Classes"] == 'N': return 2 data_path = '/content/gdrive/My Drive/Colab Notebooks/UCI/' splice_df = pd.read_csv(data_path + 'splice.data', header=None) splice_df.columns = ['Classes', 'Name', 'Seq'] splice_df["Seq"] = splice_df["Seq"].str.replace(' ', '').str.replace('N', 'A').str.replace('D', 'T').str.replace('S', 'C').str.replace('R', 'G') splice_df["Label"] = splice_df.apply(lambda row: convert_label(row), axis=1) print('The shape of the datasize is', splice_df.shape) splice_df.head() seq_num = 0 for seq in splice_df["Seq"]: char_num = 0 for char in seq: if char != 'A' and char != 'C' and char != 'T' and char != 'G': print("seq", seq_num, 'char', char_num, 'is', char) char_num += 1 seq_num += 1 # check if the length of the sequence is the same seq_len = len(splice_df.Seq[0]) print("The length of the sequence is", seq_len) for seq in splice_df.Seq[:200]: assert len(seq) == seq_len xtrain_full, xtest, ytrain_full, ytest = train_test_split(splice_df, splice_df.Label, test_size=0.2, random_state=100, stratify=splice_df.Label) xtrain, xval, ytrain, yval = train_test_split(xtrain_full, ytrain_full, test_size=0.2, random_state=100, stratify=ytrain_full) print("shape of training, validation, test set\n", xtrain.shape, xval.shape, xtest.shape, ytrain.shape, yval.shape, ytest.shape) word_size = 1 vocab = [''.join(p) for p in product('ACGT', repeat=word_size)] word_to_idx = {word: i for i, word in enumerate(vocab)} vocab_size = len(word_to_idx) print('vocab_size:', vocab_size) create1gram = keras.layers.experimental.preprocessing.TextVectorization( standardize=lambda x: tf.strings.regex_replace(x, '(.)', '\\1 '), ngrams=1 ) create1gram.adapt(vocab) def ds_preprocess(x, y): x_index = tf.subtract(create1gram(x), 2) return x_index, y # not sure the correct way to get mapping from word to its index create1gram('A C G T') - 2 BATCH_SIZE = 256 xtrain_ds = tf.data.Dataset.from_tensor_slices((xtrain['Seq'], ytrain)).map(ds_preprocess).batch(BATCH_SIZE) xval_ds = tf.data.Dataset.from_tensor_slices((xval['Seq'], yval)).map(ds_preprocess).batch(BATCH_SIZE) xtest_ds = tf.data.Dataset.from_tensor_slices((xtest['Seq'], ytest)).map(ds_preprocess).batch(BATCH_SIZE) latent_size = 30 model = keras.Sequential([ keras.Input(shape=(seq_len,)), keras.layers.Embedding(seq_len, latent_size), keras.layers.LSTM(latent_size, return_sequences=False), keras.layers.Dense(128, activation="relu", input_shape=[latent_size]), keras.layers.Dropout(0.2), keras.layers.Dense(64, activation="relu"), keras.layers.Dropout(0.2), keras.layers.Dense(32, activation="relu"), keras.layers.Dropout(0.2), keras.layers.Dense(16, activation="relu"), keras.layers.Dropout(0.2), keras.layers.Dense(3, activation="softmax") ]) model.summary() es_cb = keras.callbacks.EarlyStopping(patience=100, restore_best_weights=True) 
model.compile(keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) hist = model.fit(xtrain_ds, validation_data=xval_ds, epochs=4000, callbacks=[es_cb]) def save_hist(): filename = data_path + "baseline_uci_adam256_history.csv" hist_df = pd.DataFrame(hist.history) with open(filename, mode='w') as f: hist_df.to_csv(f) save_hist() fig, axes = plt.subplots(1, 2, figsize=(10, 5)) for i in range(1): ax1 = axes[0] ax2 = axes[1] ax1.plot(hist.history['loss'], label='training') ax1.plot(hist.history['val_loss'], label='validation') ax1.set_ylim((0.2, 1.2)) ax1.set_title('lstm autoencoder loss') ax1.set_xlabel('epoch') ax1.set_ylabel('loss') ax1.legend(['train', 'validation'], loc='upper left') ax2.plot(hist.history['accuracy'], label='training') ax2.plot(hist.history['val_accuracy'], label='validation') ax2.set_ylim((0.5, 1.0)) ax2.set_title('lstm autoencoder accuracy') ax2.set_xlabel('epoch') ax2.set_ylabel('accuracy') ax2.legend(['train', 'validation'], loc='upper left') fig.tight_layout() def eval_model(model, ds, ds_name="Training"): loss, acc = model.evaluate(ds, verbose=0) print("{} Dataset: loss = {} and acccuracy = {}%".format(ds_name, np.round(loss, 3), np.round(acc*100, 2))) eval_model(model, xtrain_ds, "Training") eval_model(model, xval_ds, "Validation") eval_model(model, xtest_ds, "Test") ```
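Since the notebook aims to save the learned embedding (per its title), one way to pull the latent representation out of the trained model is to build a sub-model that stops at the LSTM layer. This is only a sketch: the layer index and the output file name are assumptions, not part of the original notebook.

```
# Hypothetical sketch: extract the latent vectors produced by the LSTM layer.
# In the Sequential model above, layers[0] is the Embedding and layers[1] is the LSTM
# (assumption: the keras.Input spec does not count as a layer).
latent_model = keras.Model(inputs=model.inputs, outputs=model.layers[1].output)

# Compute latent vectors for the test set; the targets in the dataset are ignored.
test_embeddings = latent_model.predict(xtest_ds)
print(test_embeddings.shape)  # expected: (number of test sequences, latent_size)

# Save them next to the other outputs (file name is an example).
np.save(data_path + "uci_test_embeddings.npy", test_embeddings)
```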
# Neural Transfer ## Input images ``` %matplotlib inline import numpy as np import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from PIL import Image import matplotlib.pyplot as plt import torchvision.transforms as transforms import torchvision.models as models import copy np.random.seed(37) torch.manual_seed(37) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False def get_device(): return torch.device('cuda' if torch.cuda.is_available() else 'cpu') def get_image_size(): imsize = 512 if torch.cuda.is_available() else 128 return imsize def get_loader(): image_size = get_image_size() loader = transforms.Compose([ transforms.Resize((image_size, image_size)), transforms.ToTensor()]) return loader def get_unloader(): unloader = transforms.ToPILImage() return unloader def image_loader(image_name): device = get_device() image = Image.open(image_name) # fake batch dimension required to fit network's input dimensions loader = get_loader() image = loader(image).unsqueeze(0) return image.to(device, torch.float) def imshow(tensor, title=None): image = tensor.cpu().clone() # we clone the tensor to not do changes on it image = image.squeeze(0) # remove the fake batch dimension unloader = get_unloader() image = unloader(image) plt.imshow(image) if title is not None: plt.title(title) plt.pause(0.001) style_img = image_loader("./styles/picasso-01.jpg") content_img = image_loader("./styles/dancing.jpg") input_img = content_img.clone() assert style_img.size() == content_img.size(), \ f'size mismatch, style {style_img.size()}, content {content_img.size()}' plt.ion() plt.figure() imshow(input_img, title='Input Image') plt.figure() imshow(style_img, title='Style Image') plt.figure() imshow(content_img, title='Content Image') ``` ## Loss functions ### Content loss ``` class ContentLoss(nn.Module): def __init__(self, target,): super(ContentLoss, self).__init__() # we 'detach' the target content from the tree used # to dynamically compute the gradient: this is a stated value, # not a variable. Otherwise the forward method of the criterion # will throw an error. self.target = target.detach() def forward(self, input): self.loss = F.mse_loss(input, self.target) return input ``` ### Style loss ``` def gram_matrix(input): a, b, c, d = input.size() # a=batch size(=1) # b=number of feature maps # (c,d)=dimensions of a f. map (N=c*d) features = input.view(a * b, c * d) # resise F_XL into \hat F_XL G = torch.mm(features, features.t()) # compute the gram product # we 'normalize' the values of the gram matrix # by dividing by the number of element in each feature maps. return G.div(a * b * c * d) class StyleLoss(nn.Module): def __init__(self, target_feature): super(StyleLoss, self).__init__() self.target = gram_matrix(target_feature).detach() def forward(self, input): G = gram_matrix(input) self.loss = F.mse_loss(G, self.target) return input ``` ## Model ``` device = get_device() cnn = models.vgg19(pretrained=True).features.to(device).eval() ``` ## Normalization ``` class Normalization(nn.Module): def __init__(self, mean, std): super(Normalization, self).__init__() # .view the mean and std to make them [C x 1 x 1] so that they can # directly work with image Tensor of shape [B x C x H x W]. # B is batch size. C is number of channels. H is height and W is width. 
self.mean = torch.tensor(mean).view(-1, 1, 1) self.std = torch.tensor(std).view(-1, 1, 1) def forward(self, img): # normalize img return (img - self.mean) / self.std cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).to(device) cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).to(device) ``` ## Loss ``` content_layers_default = ['conv_4'] style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5'] def get_style_model_and_losses(cnn, normalization_mean, normalization_std, style_img, content_img, content_layers=content_layers_default, style_layers=style_layers_default): cnn = copy.deepcopy(cnn) # normalization module normalization = Normalization(normalization_mean, normalization_std).to(device) # just in order to have an iterable access to or list of content/syle # losses content_losses = [] style_losses = [] # assuming that cnn is a nn.Sequential, so we make a new nn.Sequential # to put in modules that are supposed to be activated sequentially model = nn.Sequential(normalization) i = 0 # increment every time we see a conv for layer in cnn.children(): if isinstance(layer, nn.Conv2d): i += 1 name = 'conv_{}'.format(i) elif isinstance(layer, nn.ReLU): name = 'relu_{}'.format(i) # The in-place version doesn't play very nicely with the ContentLoss # and StyleLoss we insert below. So we replace with out-of-place # ones here. layer = nn.ReLU(inplace=False) elif isinstance(layer, nn.MaxPool2d): name = 'pool_{}'.format(i) elif isinstance(layer, nn.BatchNorm2d): name = 'bn_{}'.format(i) else: raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__)) model.add_module(name, layer) if name in content_layers: # add content loss: target = model(content_img).detach() content_loss = ContentLoss(target) model.add_module("content_loss_{}".format(i), content_loss) content_losses.append(content_loss) if name in style_layers: # add style loss: target_feature = model(style_img).detach() style_loss = StyleLoss(target_feature) model.add_module("style_loss_{}".format(i), style_loss) style_losses.append(style_loss) # now we trim off the layers after the last content and style losses for i in range(len(model) - 1, -1, -1): if isinstance(model[i], ContentLoss) or isinstance(model[i], StyleLoss): break model = model[:(i + 1)] return model, style_losses, content_losses ``` ## Optimizer ``` def get_input_optimizer(input_img): # this line to show that input is a parameter that requires a gradient optimizer = optim.LBFGS([input_img.requires_grad_()]) return optimizer ``` ## Transfer ``` import warnings from collections import namedtuple RESULTS = namedtuple('RESULTS', 'run style content') results = [] def run_style_transfer(cnn, normalization_mean, normalization_std, content_img, style_img, input_img, num_steps=600, style_weight=1000000, content_weight=1): model, style_losses, content_losses = get_style_model_and_losses(cnn, normalization_mean, normalization_std, style_img, content_img) optimizer = get_input_optimizer(input_img) run = [0] while run[0] <= num_steps: def closure(): # correct the values of updated input image input_img.data.clamp_(0, 1) optimizer.zero_grad() model(input_img) style_score = 0 content_score = 0 for sl in style_losses: style_score += sl.loss for cl in content_losses: content_score += cl.loss style_score *= style_weight content_score *= content_weight loss = style_score + content_score loss.backward() run[0] += 1 results.append(RESULTS(run[0], style_score.item(), content_score.item())) if run[0] % 10 == 0: s_score = style_score.item() c_score = 
content_score.item() print(f'[{run[0]}/{num_steps}] Style Loss {s_score:.4f}, Content Loss {c_score}') return style_score + content_score optimizer.step(closure) # a last correction... input_img.data.clamp_(0, 1) return input_img with warnings.catch_warnings(): warnings.simplefilter('ignore') output = run_style_transfer(cnn, cnn_normalization_mean, cnn_normalization_std, content_img, style_img, input_img) ``` ## Results ``` x = [r.run for r in results] y1 = [r.style for r in results] y2 = [r.content for r in results] fig, ax1 = plt.subplots(figsize=(10, 5)) color = 'tab:red' ax1.plot(x, y1, color=color) ax1.set_ylabel('Style Loss', color=color) ax1.tick_params(axis='y', labelcolor=color) color = 'tab:blue' ax2 = ax1.twinx() ax2.plot(x, y2, color=color) ax2.set_ylabel('Content Loss', color=color) ax2.tick_params(axis='y', labelcolor=color) ``` ## Visualize ``` plt.figure() imshow(output, title='Output Image') # sphinx_gallery_thumbnail_number = 4 plt.ioff() plt.show() ```
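To keep the stylised result, the output tensor can be converted back to a PIL image with the same `ToPILImage` unloader used in `imshow` and written to disk. A small sketch; the output path is just an example:

```
# Convert the output tensor back to a PIL image and save it.
image = output.detach().cpu().clone().squeeze(0)  # drop the fake batch dimension
pil_image = get_unloader()(image)                 # transforms.ToPILImage()
pil_image.save('./styles/output.jpg')             # example path, adjust as needed
```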
# NumPy Tutorial: Data analysis with Python [Source](https://www.dataquest.io/blog/numpy-tutorial-python/) NumPy is a commonly used Python data analysis package. By using NumPy, you can speed up your workflow, and interface with other packages in the Python ecosystem, like scikit-learn, that use NumPy under the hood. NumPy was originally developed in the mid 2000s, and arose from an even older package called Numeric. This longevity means that almost every data analysis or machine learning package for Python leverages NumPy in some way. In this tutorial, we'll walk through using NumPy to analyze data on wine quality. The data contains information on various attributes of wines, such as pH and fixed acidity, along with a quality score between 0 and 10 for each wine. The quality score is the average of at least 3 human taste testers. As we learn how to work with NumPy, we'll try to figure out more about the perceived quality of wine. The wines we'll be analyzing are from the Minho region of Portugal. The data was downloaded from the UCI Machine Learning Repository, and is available [here](https://archive.ics.uci.edu/ml/datasets/Wine+Quality). Here are the first few rows of the winequality-red.csv file, which we'll be using throughout this tutorial: ``` text "fixed acidity";"volatile acidity";"citric acid";"residual sugar";"chlorides";"free sulfur dioxide";"total sulfur dioxide";"density";"pH";"sulphates";"alcohol";"quality" 7.4;0.7;0;1.9;0.076;11;34;0.9978;3.51;0.56;9.4;5 7.8;0.88;0;2.6;0.098;25;67;0.9968;3.2;0.68;9.8;5 ``` The data is in what I'm going to call ssv (semicolon separated values) format -- each record is separated by a semicolon (;), and rows are separated by a new line. There are 1600 rows in the file, including a header row, and 12 columns. Before we get started, a quick version note -- we'll be using Python 3.5. Our code examples will be done using Jupyter notebook. If you want to jump right into a specific area, here are the topics: * Creating an Array * Reading Text Files * Array Indexing * N-Dimensional Arrays * Data Types * Array Math * Array Methods * Array Comparison and Filtering * Reshaping and Combining Arrays Lists Of Lists for CSV Data Before using NumPy, we'll first try to work with the data using Python and the csv package. We can read in the file using the csv.reader object, which will allow us to read in and split up all the content from the ssv file. In the below code, we: * Import the csv library. * Open the winequality-red.csv file. * With the file open, create a new csv.reader object. * Pass in the keyword argument delimiter=";" to make sure that the records are split up on the semicolon character instead of the default comma character. * Call the list type to get all the rows from the file. * Assign the result to wines. ``` import csv with open("winequality-red.csv", 'r') as f: wines = list(csv.reader(f, delimiter=";")) # print(wines[:3]) headers = wines[0] wines_only = wines[1:] # print the headers print(headers) # print the 1st row of data print(wines_only[0]) # print the 1st three rows of data print(wines_only[:3]) ``` The data has been read into a list of lists. Each inner list is a row from the ssv file. As you may have noticed, each item in the entire list of lists is represented as a string, which will make it harder to do computations. As you can see from the table above, we've read in three rows, the first of which contains column headers. Each row after the header row represents a wine. 
The first element of each row is the fixed acidity, the second is the volatile acidity, and so on. ## Calculate Average Wine Quality We can find the average quality of the wines. The below code will: * Extract the last element from each row after the header row. * Convert each extracted element to a float. * Assign all the extracted elements to the list qualities. * Divide the sum of all the elements in qualities by the total number of elements in qualities to the get the mean. ``` # calculate average wine quality with a loop qualities = [] for row in wines[1:]: qualities.append(float(row[-1])) sum(qualities) / len(wines[1:]) # calculate average wine quality with a list comprehension qualities = [float(row[-1]) for row in wines[1:]] sum(qualities) / len(wines[1:]) ``` Although we were able to do the calculation we wanted, the code is fairly complex, and it won't be fun to have to do something similar every time we want to compute a quantity. Luckily, we can use NumPy to make it easier to work with our data. # Numpy 2-Dimensional Arrays With NumPy, we work with multidimensional arrays. We'll dive into all of the possible types of multidimensional arrays later on, but for now, we'll focus on 2-dimensional arrays. A 2-dimensional array is also known as a matrix, and is something you should be familiar with. In fact, it's just a different way of thinking about a list of lists. A matrix has rows and columns. By specifying a row number and a column number, we're able to extract an element from a matrix. If we picked the element at the first row and the second column, we'd get volatile acidity. If we picked the element in the third row and the second column, we'd get 0.88. In a NumPy array, the number of dimensions is called the **rank**, and each dimension is called an **axis**. So * the rows are the first axis * the columns are the second axis Now that you understand the basics of matrices, let's see how we can get from our list of lists to a NumPy array. ## Creating A NumPy Array We can create a NumPy array using the numpy.array function. If we pass in a list of lists, it will automatically create a NumPy array with the same number of rows and columns. Because we want all of the elements in the array to be float elements for easy computation, we'll leave off the header row, which contains strings. One of the limitations of NumPy is that all the elements in an array have to be of the same type, so if we include the header row, all the elements in the array will be read in as strings. Because we want to be able to do computations like find the average quality of the wines, we need the elements to all be floats. In the below code, we: * Import the ```numpy``` package. * Pass the ```list``` of lists wines into the array function, which converts it into a NumPy array. * Exclude the header row with list slicing. * Specify the keyword argument ```dtype``` to make sure each element is converted to a ```float```. We'll dive more into what the ```dtype``` is later on. ``` import numpy as np np.set_printoptions(precision=2) # set the output print precision for readability # create the numpy array skipping the headers wines = np.array(wines[1:], dtype=np.float) # If we display wines, we'll now get a NumPy array: print(type(wines), wines) # We can check the number of rows and columns in our data using the shape property of NumPy arrays: wines.shape ``` ## Alternative NumPy Array Creation Methods There are a variety of methods that you can use to create NumPy arrays. 
It's useful to create an array with all zero elements in cases when you need an array of fixed size, but don't have any values for it yet. To start with, you can create an array where every element is zero. The below code will create an array with 3 rows and 4 columns, where every element is 0, using ```numpy.zeros```: ``` empty_array = np.zeros((3, 4)) empty_array ``` Creating arrays full of random numbers can be useful when you want to quickly test your code with sample arrays. You can also create an array where each element is a random number using ```numpy.random.rand```. ``` np.random.rand(2, 3) ``` ### Using NumPy To Read In Files It's possible to use NumPy to directly read ```csv``` or other files into arrays. We can do this using the ```numpy.genfromtxt``` function. We can use it to read in our initial data on red wines. In the below code, we: * Use the ``` genfromtxt ``` function to read in the ``` winequality-red.csv ``` file. * Specify the keyword argument ``` delimiter=";" ``` so that the fields are parsed properly. * Specify the keyword argument ``` skip_header=1 ``` so that the header row is skipped. ``` wines = np.genfromtxt("winequality-red.csv", delimiter=";", skip_header=1) wines ``` Wines will end up looking the same as if we read it into a list then converted it to an array of ```floats```. NumPy will automatically pick a data type for the elements in an array based on their format. ## Indexing NumPy Arrays We now know how to create arrays, but unless we can retrieve results from them, there isn't a lot we can do with NumPy. We can use array indexing to select individual elements, groups of elements, or entire rows and columns. One important thing to keep in mind is that just like Python lists, NumPy is **zero-indexed**, meaning that: * The index of the first row is 0 * The index of the first column is 0 * If we want to work with the fourth row, we'd use index 3 * If we want to work with the second row, we'd use index 1, and so on. We'll again work with the wines array: ||||||||||||| |-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:| |7.4 |0.70 |0.00 |1.9 |0.076 |11 |34 |0.9978 |3.51 |0.56 |9.4 |5| |7.8 |0.88 |0.00 |2.6 |0.098 |25 |67 |0.9968 |3.20 |0.68 |9.8 |5| |7.8 |0.76 |0.04 |2.3 |0.092 |15 |54 |0.9970 |3.26 |0.65 |9.8 |5| |11.2|0.28 |0.56 |1.9 |0.075 |17 |60 |0.9980 |3.16 |0.58 |9.8 |6| |7.4 |0.70 |0.00 |1.9 |0.076 |11 |34 |0.9978 |3.51 |0.56 |9.4 |5| Let's select the element at **row 3** and **column 4**. We pass: * 2 as the row index * 3 as the column index. This retrieves the value from the **third row** and **fourth column** ``` wines[2, 3] wines[2][3] ``` Since we're working with a 2-dimensional array in NumPy we specify 2 indexes to retrieve an element. * The first index is the row, or **axis 1**, index * The second index is the column, or **axis 2**, index Any element in wines can be retrieved using 2 indexes. ``` # rows 1, 2, 3 and column 4 wines[0:3, 3] # all rows and column 3 wines[:, 2] ``` Just like with ```list``` slicing, it's possible to omit the 0 to just retrieve all the elements from the beginning up to element 3: ``` # rows 1, 2, 3 and column 4 wines[:3, 3] ``` We can select an entire column by specifying that we want all the elements, from the first to the last. We specify this by just using the colon ```:```, with no starting or ending indices. 
The below code will select the entire fourth column: ``` # all rows and column 4 wines[:, 3] ``` We selected an entire column above, but we can also extract an entire row: ``` # row 4 and all columns wines[3, :] ``` If we take our indexing to the extreme, we can select the entire array using two colons to select all the rows and columns in wines. This is a great party trick, but doesn't have a lot of good applications: ``` wines[:, :] ``` ## Assigning Values To NumPy Arrays We can also use indexing to assign values to certain elements in arrays. We can do this by assigning directly to the indexed value: ``` # assign the value of 10 to the 2nd row and 6th column print('Before', wines[1, 4:7]) wines[1, 5] = 10 print('After', wines[1, 4:7]) ``` We can do the same for slices. To overwrite an entire column, we can do this: ``` # Overwrites all the values in the eleventh column with 50. print('Before', wines[:, 9:12]) wines[:, 10] = 50 print('After', wines[:, 9:12]) ``` ## 1-Dimensional NumPy Arrays So far, we've worked with 2-dimensional arrays, such as wines. However, NumPy is a package for working with multidimensional arrays. One of the most common types of multidimensional arrays is the **1-dimensional array**, or **vector**. As you may have noticed above, when we sliced wines, we retrieved a 1-dimensional array. * A 1-dimensional array only needs a single index to retrieve an element. * Each row and column in a 2-dimensional array is a 1-dimensional array. Just like a list of lists is analogous to a 2-dimensional array, a single list is analogous to a 1-dimensional array. If we slice wines and only retrieve the third row, we get a 1-dimensional array: ``` third_wine = wines[3,:] third_wine ``` We can retrieve individual elements from ```third_wine``` using a single index. ``` # display the second item in third_wine third_wine[1] ``` Most NumPy functions that we've worked with, such as ```numpy.random.rand```, can be used with multidimensional arrays. Here's how we'd use ```numpy.random.rand``` to generate a random vector: ``` np.random.rand(3) ``` Previously, when we called ```np.random.rand```, we passed in a shape for a 2-dimensional array, so the result was a 2-dimensional array. This time, we passed in a shape for a single dimensional array. The shape specifies the number of dimensions, and the size of the array in each dimension. A shape of ```(10,10)``` will be a 2-dimensional array with **10 rows** and **10 columns**. A shape of ```(10,)``` will be a **1-dimensional** array with **10 elements**. Where NumPy gets more complex is when we start to deal with arrays that have more than 2 dimensions. ## N-Dimensional NumPy Arrays This doesn't happen extremely often, but there are cases when you'll want to deal with arrays that have greater than 3 dimensions. One way to think of this is as a list of lists of lists. Let's say we want to store the monthly earnings of a store, but we want to be able to quickly lookup the results for a quarter, and for a year. The earnings for one year might look like this: ``` python [500, 505, 490, 810, 450, 678, 234, 897, 430, 560, 1023, 640] ``` The store earned \$500 in January, \$505 in February, and so on. We can split up these earnings by quarter into a list of lists: ``` year_one = [ [500,505,490], # 1st quarter [810,450,678], # 2nd quarter [234,897,430], # 3rd quarter [560,1023,640] # 4th quarter ] ``` We can retrieve the earnings from January by calling ``` year_one[0][0] ```. 
If we want the results for a whole quarter, we can call ``` year_one[0] ``` or ``` year_one[1] ```. We now have a 2-dimensional array, or matrix. But what if we now want to add the results from another year? We have to add a third dimension: ``` earnings = [ [ # year 1 [500,505,490], # year 1, 1st quarter [810,450,678], # year 1, 2nd quarter [234,897,430], # year 1, 3rd quarter [560,1023,640] # year 1, 4th quarter ], [ # year =2 [600,605,490], # year 2, 1st quarter [345,900,1000],# year 2, 2nd quarter [780,730,710], # year 2, 3rd quarter [670,540,324] # year 2, 4th quarter ] ] ``` We can retrieve the earnings from January of the first year by calling ``` earnings[0][0][0] ```. We now need three indexes to retrieve a single element. A three-dimensional array in NumPy is much the same. In fact, we can convert earnings to an array and then get the earnings for January of the first year: ``` earnings = np.array(earnings) # year 1, 1st quarter, 1st month (January) earnings[0,0,0] # year 2, 3rd quarter, 1st month (July) earnings[1,2,0] # we can also find the shape of the array earnings.shape ``` Indexing and slicing work the exact same way with a 3-dimensional array, but now we have an extra axis to pass in. If we wanted to get the earnings for **January of all years**, we could do this: ``` # all years, 1st quarter, 1st month (January) earnings[:,0,0] ``` If we wanted to get first quarter earnings from both years, we could do this: ``` # all years, 1st quarter, all months (January, February, March) earnings[:,0,:] ``` Adding more dimensions can make it much easier to query your data if it's organized in a certain way. As we go from 3-dimensional arrays to 4-dimensional and larger arrays, the same properties apply, and they can be indexed and sliced in the same ways. ## NumPy Data Types As we mentioned earlier, each NumPy array can store elements of a single data type. For example, wines contains only float values. NumPy stores values using its own data types, **which are distinct from Python types** like ```float``` and ```str```. This is because the core of NumPy is written in a programming language called ```C```, **which stores data differently than the Python data types**. NumPy data types map between Python and C, allowing us to use NumPy arrays without any conversion hitches. You can find the data type of a NumPy array by accessing the dtype property: ``` wines.dtype ``` NumPy has several different data types, which mostly map to Python data types, like ```float```, and ```str```. You can find a full listing of NumPy data types [here](https://www.dataquest.io/blog/numpy-tutorial-python/), but here are a few important ones: * ```float``` -- numeric floating point data. * ```int``` -- integer data. * ```string``` -- character data. * ```object``` -- Python objects. Data types additionally end with a suffix that indicates how many bits of memory they take up. So ```int32``` is a **32 bit integer data type**, and ```float64``` is a **64 bit float data type**. ### Converting Data Types You can use the numpy.ndarray.astype method to convert an array to a different type. The method will actually **copy the array**, and **return a new array with the specified data type**. For instance, we can convert wines to the ```int``` data type: ``` # convert wines to the int data type wines.astype(int) ``` As you can see above, all of the items in the resulting array are integers. Note that we used the Python ```int``` type instead of a NumPy data type when converting wines. 
This is because several Python data types, including ```float```, ```int```, and ```string```, can be used with NumPy, and are automatically converted to NumPy data types. We can check the name property of the ```dtype``` of the resulting array to see what data type NumPy mapped the resulting array to: ``` # convert to int int_wines = wines.astype(int) # check the data type int_wines.dtype.name ``` The array has been converted to a **64-bit integer** data type. This allows for very long integer values, **but takes up more space in memory** than storing the values as 32-bit integers. If you want more control over how the array is stored in memory, you can directly create NumPy dtype objects like ```numpy.int32``` ``` np.int32 ``` You can use these directly to convert between types: ``` # convert to a 64-bit integer wines.astype(np.int64) # convert to a 32-bit integer wines.astype(np.int32) # convert to a 16-bit integer wines.astype(np.int16) # convert to a 8-bit integer wines.astype(np.int8) ``` ## NumPy Array Operations NumPy makes it simple to perform mathematical operations on arrays. This is one of the primary advantages of NumPy, and makes it quite easy to do computations. ### Single Array Math If you do any of the basic mathematical operations ```/```, ```*```, ```-```, ```+```, ```^``` with an array and a value, it will apply the operation to each of the elements in the array. Let's say we want to add 10 points to each quality score because we're feeling generous. Here's how we'd do that: ``` # add 10 points to the quality score wines[:,-1] + 10 ``` *Note: that the above operation won't change the wines array -- it will return a new 1-dimensional array where 10 has been added to each element in the quality column of wines.* If we instead did ```+=```, we'd modify the array in place: ``` print('Before', wines[:,11]) # modify the data in place wines[:,11] += 10 print('After', wines[:,11]) ``` All the other operations work the same way. For example, if we want to multiply each of the quality score by 2, we could do it like this: ``` # multiply the quality score by 2 wines[:,11] * 2 ``` ### Multiple Array Math It's also possible to do mathematical operations between arrays. This will apply the operation to pairs of elements. For example, if we add the quality column to itself, here's what we get: ``` # add the quality column to itself wines[:,11] + wines[:,11] ``` Note that this is equivalent to ```wines[:,11] * 2``` -- this is because NumPy adds each pair of elements. The first element in the first array is added to the first element in the second array, the second to the second, and so on. ``` # add the quality column to itself wines[:,11] * 2 ``` We can also use this to multiply arrays. Let's say we want to pick a wine that maximizes alcohol content and quality. We'd multiply alcohol by quality, and select the wine with the highest score: ``` # multiply alcohol content by quality alcohol_by_quality = wines[:,10] * wines[:,11] print(alcohol_by_quality) alcohol_by_quality.sort() print(alcohol_by_quality, alcohol_by_quality[-1]) ``` All of the common operations ```/```, ```*```, ```-```, ```+```, ```^``` will work between arrays. ## NumPy Array Methods In addition to the common mathematical operations, NumPy also has several methods that you can use for more complex calculations on arrays. An example of this is the ```numpy.ndarray.sum``` method. 
This finds the sum of all the elements in an array by default: ``` # find the sum of all rows and the quality column total = 0 for row in wines: total += row[11] print(total) # find the sum of all rows and the quality column wines[:,11].sum(axis=0) # find the sum of the rows 1, 2, and 3 across all columns totals = [] for i in range(3): total = 0 for col in wines[i,:]: total += col totals.append(total) print(totals) # find the sum of the rows 1, 2, and 3 across all columns wines[0:3,:].sum(axis=1) ``` We can pass the ```axis``` keyword argument into the sum method to find sums over an axis. If we call sum across the wines matrix, and pass in ```axis=0```, we'll find the sums over the first axis of the array. This will give us the **sum of all the values in every column**. This may seem backwards that the sums over the first axis would give us the sum of each column, but one way to think about this is that **the specified axis is the one "going away"**. So if we specify ```axis=0```, we want the **rows to go away**, and we want to find **the sums for each of the remaining axes across each row**: ``` # sum each column for all rows totals = [0] * len(wines[0]) for i, total in enumerate(totals): for row_val in wines[:,i]: total += row_val totals[i] = total print(totals) # sum each column for all rows wines.sum(axis=0) ``` We can verify that we did the sum correctly by checking the shape. The shape should be 12, corresponding to the number of columns: ``` wines.sum(axis=0).shape ``` If we pass in axis=1, we'll find the sums over the second axis of the array. This will give us the sum of each row: ``` # sum each row for all columns totals = [0] * len(wines) for i, total in enumerate(totals): for col_val in wines[i,:]: total += col_val totals[i] = total print(totals[0:3], '...', totals[-3:]) # sum each row for all columns wines.sum(axis=1) wines.sum(axis=1).shape ``` There are several other methods that behave like the sum method, including: * ```numpy.ndarray.mean``` — finds the mean of an array. * ```numpy.ndarray.std``` — finds the standard deviation of an array. * ```numpy.ndarray.min``` — finds the minimum value in an array. * ```numpy.ndarray.max``` — finds the maximum value in an array. You can find a full list of array methods [here](http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html). ## NumPy Array Comparisons NumPy makes it possible to test to see if rows match certain values using mathematical comparison operations like ```<```, ```>```, ```>=```, ```<=```, and ```==```. For example, if we want to see which wines have a quality rating higher than 5, we can do this: ``` # return True for all rows in the Quality column that are greater than 5 wines[:,11] > 5 ``` We get a Boolean array that tells us which of the wines have a quality rating greater than 5. We can do something similar with the other operators. For instance, we can see if any wines have a quality rating equal to 10: ``` # return True for all rows that have a Quality rating of 10 wines[:,11] == 10 ``` ### Subsetting One of the powerful things we can do with a Boolean array and a NumPy array is select only certain rows or columns in the NumPy array. 
For example, the below code will only select rows in wines where the quality is over 7: ``` # create a boolean array for wines with quality greater than 15 high_quality = wines[:,11] > 15 print(len(high_quality), high_quality) # use boolean indexing to find high quality wines high_quality_wines = wines[high_quality,:] print(len(high_quality_wines), high_quality_wines) ``` We select only the rows where ```high_quality``` contains a ```True``` value, and all of the columns. This subsetting makes it simple to filter arrays for certain criteria. For example, we can look for wines with a lot of alcohol and high quality. In order to specify multiple conditions, we have to place each condition in **parentheses** ```(...)```, and separate conditions with an **ampersand** ```&```: ``` # create a boolean array for high alcohol content and high quality high_alcohol_and_quality = (wines[:,11] > 7) & (wines[:,10] > 10) print(high_alcohol_and_quality) # use boolean indexing to select out the wines wines[high_alcohol_and_quality,:] ``` We can combine subsetting and assignment to overwrite certain values in an array: ``` high_alcohol_and_quality = (wines[:,10] > 10) & (wines[:,11] > 7) wines[high_alcohol_and_quality,10:] = 20 ``` ## Reshaping NumPy Arrays We can change the shape of arrays while still preserving all of their elements. This often can make it easier to access array elements. The simplest reshaping is to flip the axes, so rows become columns, and vice versa. We can accomplish this with the ```numpy.transpose``` function: ``` np.transpose(wines).shape ``` We can use the ```numpy.ravel``` function to turn an array into a one-dimensional representation. It will essentially flatten an array into a long sequence of values: ``` wines.ravel() ``` Here's an example where we can see the ordering of ```numpy.ravel```: ``` array_one = np.array( [ [1, 2, 3, 4], [5, 6, 7, 8] ] ) array_one.ravel() ``` Finally, we can use the numpy.reshape function to reshape an array to a certain shape we specify. The below code will turn the second row of wines into a 2-dimensional array with 2 rows and 6 columns: ``` # print the current shape of the 2nd row and all columns wines[1,:].shape # reshape the 2nd row to a 2 by 6 matrix wines[1,:].reshape((2,6)) ``` ## Combining NumPy Arrays With NumPy, it's very common to combine multiple arrays into a single unified array. We can use ```numpy.vstack``` to vertically stack multiple arrays. Think of it like the second arrays's items being added as new rows to the first array. We can read in the ```winequality-white.csv``` dataset that contains information on the quality of white wines, then combine it with our existing dataset, wines, which contains information on red wines. In the below code, we: * Read in ```winequality-white.csv```. * Display the shape of white_wines. ``` white_wines = np.genfromtxt("winequality-white.csv", delimiter=";", skip_header=1) white_wines.shape ``` As you can see, we have attributes for 4898 wines. Now that we have the white wines data, we can combine all the wine data. In the below code, we: * Use the ```vstack``` function to combine wines and white_wines. * Display the shape of the result. ``` all_wines = np.vstack((wines, white_wines)) all_wines.shape ``` As you can see, the result has 6497 rows, which is the sum of the number of rows in wines and the number of rows in red_wines. If we want to combine arrays horizontally, where the number of rows stay constant, but the columns are joined, then we can use the ```numpy.hstack``` function. 
The arrays we combine need to have the same number of rows for this to work. Finally, we can use ```numpy.concatenate``` as a general purpose version of ```hstack``` and ```vstack```. If we want to concatenate two arrays, we pass them into concatenate, then specify the axis keyword argument that we want to concatenate along. * Concatenating along the first axis is similar to ```vstack``` * Concatenating along the second axis is similar to ```hstack```: ``` x = np.concatenate((wines, white_wines), axis=0) print(x.shape, x) ``` ## Broadcasting Unless the arrays that you're operating on are the exact same size, it's not possible to do elementwise operations. In cases like this, NumPy performs broadcasting to try to match up elements. Essentially, broadcasting involves a few steps: * The last dimension of each array is compared. * If the dimension lengths are equal, or one of the dimensions is of length 1, then we keep going. * If the dimension lengths aren't equal, and none of the dimensions have length 1, then there's an error. * Continue checking dimensions until the shortest array is out of dimensions. For example, the following two shapes are compatible: ``` python A: (50,3) B (3,) ``` This is because the length of the trailing dimension of array A is 3, and the length of the trailing dimension of array B is 3. They're equal, so that dimension is okay. Array B is then out of elements, so we're okay, and the arrays are compatible for mathematical operations. The following two shapes are also compatible: ``` python A: (1,2) B (50,2) ``` The last dimension matches, and A is of length 1 in the first dimension. These two arrays don't match: ``` python A: (50,50) B: (49,49) ``` The lengths of the dimensions aren't equal, and neither array has either dimension length equal to 1. There's a detailed explanation of broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html), but we'll go through a few examples to illustrate the principle: ``` wines * np.array([1,2]) ``` The above example didn't work because the two arrays don't have a matching trailing dimension. Here's an example where the last dimension does match: ``` array_one = np.array( [ [1,2], [3,4] ] ) array_two = np.array([4,5]) array_one + array_two ``` As you can see, array_two has been broadcasted across each row of array_one. Here's an example with our wines data: ``` rand_array = np.random.rand(12) wines + rand_array ```
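To make the broadcasting rules a little more concrete, here is one more short sketch. It assumes the `wines` array loaded earlier in this notebook (shape `(1599, 12)`): subtracting the vector of column means broadcasts that `(12,)` vector across every row and centers each column:

```
# sketch: center every column of wines using broadcasting
col_means = wines.mean(axis=0)   # shape (12,) -- one mean per column
centered = wines - col_means     # (1599, 12) minus (12,) broadcasts across the rows
print(col_means.shape, centered.shape)
print(centered.mean(axis=0))     # each centered column now averages to roughly zero
```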
``` %matplotlib inline import numpy as np import yt ``` This notebook shows how to use yt to make plots and examine FITS X-ray images and events files. ## Sloshing, Shocks, and Bubbles in Abell 2052 This example uses data provided by [Scott Randall](http://hea-www.cfa.harvard.edu/~srandall/), presented originally in [Blanton, E.L., Randall, S.W., Clarke, T.E., et al. 2011, ApJ, 737, 99](https://ui.adsabs.harvard.edu/abs/2011ApJ...737...99B). They consist of two files, a "flux map" in counts/s/pixel between 0.3 and 2 keV, and a spectroscopic temperature map in keV. ``` ds = yt.load( "xray_fits/A2052_merged_0.3-2_match-core_tmap_bgecorr.fits", auxiliary_files=["xray_fits/A2052_core_tmap_b1_m2000_.fits"], ) ``` Since the flux and projected temperature images are in two different files, we had to use one of them (in this case the "flux" file) as a master file, and pass in the "temperature" file with the `auxiliary_files` keyword to `load`. Next, let's derive some new fields for the number of counts, the "pseudo-pressure", and the "pseudo-entropy": ``` def _counts(field, data): exposure_time = data.get_field_parameter("exposure_time") return data["fits", "flux"] * data["fits", "pixel"] * exposure_time ds.add_field( ("gas", "counts"), function=_counts, sampling_type="cell", units="counts", take_log=False, ) def _pp(field, data): return np.sqrt(data["gas", "counts"]) * data["fits", "projected_temperature"] ds.add_field( ("gas", "pseudo_pressure"), function=_pp, sampling_type="cell", units="sqrt(counts)*keV", take_log=False, ) def _pe(field, data): return data["fits", "projected_temperature"] * data["gas", "counts"] ** (-1.0 / 3.0) ds.add_field( ("gas", "pseudo_entropy"), function=_pe, sampling_type="cell", units="keV*(counts)**(-1/3)", take_log=False, ) ``` Here, we're deriving a "counts" field from the "flux" field by passing it a `field_parameter` for the exposure time of the time and multiplying by the pixel scale. Second, we use the fact that the surface brightness is strongly dependent on density ($S_X \propto \rho^2$) to use the counts in each pixel as a "stand-in". Next, we'll grab the exposure time from the primary FITS header of the flux file and create a `YTQuantity` from it, to be used as a `field_parameter`: ``` exposure_time = ds.quan(ds.primary_header["exposure"], "s") ``` Now, we can make the `SlicePlot` object of the fields we want, passing in the `exposure_time` as a `field_parameter`. We'll also set the width of the image to 250 pixels. ``` slc = yt.SlicePlot( ds, "z", [ ("fits", "flux"), ("fits", "projected_temperature"), ("gas", "pseudo_pressure"), ("gas", "pseudo_entropy"), ], origin="native", field_parameters={"exposure_time": exposure_time}, ) slc.set_log(("fits", "flux"), True) slc.set_log(("gas", "pseudo_pressure"), False) slc.set_log(("gas", "pseudo_entropy"), False) slc.set_width(250.0) slc.show() ``` To add the celestial coordinates to the image, we can use `PlotWindowWCS`, if you have a recent version of AstroPy (>= 1.3) installed: ``` from yt.frontends.fits.misc import PlotWindowWCS wcs_slc = PlotWindowWCS(slc) wcs_slc.show() ``` We can make use of yt's facilities for profile plotting as well. 
``` v, c = ds.find_max(("fits", "flux")) # Find the maximum flux and its center my_sphere = ds.sphere(c, (100.0, "code_length")) # Radius of 150 pixels my_sphere.set_field_parameter("exposure_time", exposure_time) ``` Such as a radial profile plot: ``` radial_profile = yt.ProfilePlot( my_sphere, "radius", ["counts", "pseudo_pressure", "pseudo_entropy"], n_bins=30, weight_field="ones", ) radial_profile.set_log("counts", True) radial_profile.set_log("pseudo_pressure", True) radial_profile.set_log("pseudo_entropy", True) radial_profile.set_xlim(3, 100.0) radial_profile.show() ``` Or a phase plot: ``` phase_plot = yt.PhasePlot( my_sphere, "pseudo_pressure", "pseudo_entropy", ["counts"], weight_field=None ) phase_plot.show() ``` Finally, we can also take an existing [ds9](http://ds9.si.edu/site/Home.html) region and use it to create a "cut region", using `ds9_region` (the [pyregion](https://pyregion.readthedocs.io) package needs to be installed for this): ``` from yt.frontends.fits.misc import ds9_region reg_file = [ "# Region file format: DS9 version 4.1\n", "global color=green dashlist=8 3 width=3 include=1 source=1 fk5\n", 'circle(15:16:44.817,+7:01:19.62,34.6256")', ] f = open("circle.reg", "w") f.writelines(reg_file) f.close() circle_reg = ds9_region( ds, "circle.reg", field_parameters={"exposure_time": exposure_time} ) ``` This region may now be used to compute derived quantities: ``` print( circle_reg.quantities.weighted_average_quantity("projected_temperature", "counts") ) ``` Or used in projections: ``` prj = yt.ProjectionPlot( ds, "z", [ ("fits", "flux"), ("fits", "projected_temperature"), ("gas", "pseudo_pressure"), ("gas", "pseudo_entropy"), ], origin="native", field_parameters={"exposure_time": exposure_time}, data_source=circle_reg, method="sum", ) prj.set_log(("fits", "flux"), True) prj.set_log(("gas", "pseudo_pressure"), False) prj.set_log(("gas", "pseudo_entropy"), False) prj.set_width(250.0) prj.show() ``` ## The Bullet Cluster This example uses an events table file from a ~100 ks exposure of the "Bullet Cluster" from the [Chandra Data Archive](http://cxc.harvard.edu/cda/). In this case, the individual photon events are treated as particle fields in yt. However, you can make images of the object in different energy bands using the `setup_counts_fields` function. ``` from yt.frontends.fits.api import setup_counts_fields ``` `load` will handle the events file as FITS image files, and will set up a grid using the WCS information in the file. Optionally, the events may be reblocked to a new resolution. by setting the `"reblock"` parameter in the `parameters` dictionary in `load`. `"reblock"` must be a power of 2. ``` ds2 = yt.load("xray_fits/acisf05356N003_evt2.fits.gz", parameters={"reblock": 2}) ``` `setup_counts_fields` will take a list of energy bounds (emin, emax) in keV and create a new field from each where the photons in that energy range will be deposited onto the image grid. ``` ebounds = [(0.1, 2.0), (2.0, 5.0)] setup_counts_fields(ds2, ebounds) ``` The "x", "y", "energy", and "time" fields in the events table are loaded as particle fields. 
Each one has a name given by "event\_" plus the name of the field: ``` dd = ds2.all_data() print(dd["io", "event_x"]) print(dd["io", "event_y"]) ``` Now, we'll make a plot of the two counts fields we made, and pan and zoom to the bullet: ``` slc = yt.SlicePlot( ds2, "z", [("gas", "counts_0.1-2.0"), ("gas", "counts_2.0-5.0")], origin="native" ) slc.pan((100.0, 100.0)) slc.set_width(500.0) slc.show() ``` The counts fields can take the field parameter `"sigma"` and use [AstroPy's convolution routines](https://astropy.readthedocs.io/en/latest/convolution/) to smooth the data with a Gaussian: ``` slc = yt.SlicePlot( ds2, "z", [("gas", "counts_0.1-2.0"), ("gas", "counts_2.0-5.0")], origin="native", field_parameters={"sigma": 2.0}, ) # This value is in pixel scale slc.pan((100.0, 100.0)) slc.set_width(500.0) slc.set_zlim(("gas", "counts_0.1-2.0"), 0.01, 100.0) slc.set_zlim(("gas", "counts_2.0-5.0"), 0.01, 50.0) slc.show() ```
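As a quick follow-up, the sketch below totals the counts in each energy band over the whole image using yt's `total_quantity` derived quantity. It assumes the counts fields created by `setup_counts_fields` above and the `counts_<emin>-<emax>` field names used earlier in this notebook:

```
# sketch: total counts per energy band, using the fields set up above
ad = ds2.all_data()
for emin, emax in ebounds:
    band_field = ("gas", "counts_%s-%s" % (emin, emax))
    print(band_field, ad.quantities.total_quantity(band_field))
```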
# Imports

```
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Dropout, Flatten, Input, Concatenate
from tensorflow.keras.optimizers import Adam, RMSprop
import numpy as np
import matplotlib.pyplot as plt
import copy
```

# Global Variables

```
epochs = 500
batch_size = 16
number_of_particles = epochs * 2 * batch_size
dt = 0.1
```

# Classes

```
class Particle:

    def __str__(self):
        return "Position: %s, Velocity: %s, Acceleration: %s" % (self.position, self.velocity, self.acceleration)

    def __repr__(self):
        return "Position: %s, Velocity: %s, Acceleration: %s" % (self.position, self.velocity, self.acceleration)

    def __init__(self):
        self.position = np.array([np.random.sample()*2-1, np.random.sample()*2-1])     # Position X, Y
        self.velocity = np.array([np.random.sample()*2-1, np.random.sample()*2-1])     # Velocity X, Y
        self.acceleration = np.array([np.random.sample()*2-1, np.random.sample()*2-1]) # Acceleration X, Y

    def apply_physics(self, dt):
        nextParticle = copy.deepcopy(self)  # Copy to retain initial values
        nextParticle.position += self.velocity * dt
        nextParticle.velocity += self.acceleration * dt
        return nextParticle

    def get_list(self):
        return [self.position[0], self.position[1], self.velocity[0], self.velocity[1], self.acceleration[0], self.acceleration[1]]

    def get_list_physics(self, dt):
        n = self.apply_physics(dt)
        return [self.position[0], self.position[1], self.velocity[0], self.velocity[1], self.acceleration[0], self.acceleration[1], n.position[0], n.position[1], n.velocity[0], n.velocity[1]]


class GAN:

    def __init__(self, input_size, output_size, dropout=0.4):
        self.input_size = input_size
        self.output_size = output_size
        self.dropout = dropout
        self.generator = self.generator_network()
        self.discriminator = self.discriminator_network()
        self.adverserial = self.adverserial_network()

    def discriminator_trainable(self, val):
        self.discriminator.trainable = val
        for l in self.discriminator.layers:
            l.trainable = val

    def generator_network(self):
        # Generator : Object(6) - Dense - Object(4)
        self.g_input = Input(shape=(self.input_size,), name="Generator_Input")
        g = Dense(128, activation='relu')(self.g_input)
        g = Dropout(self.dropout)(g)
        g = Dense(256, activation='relu')(g)
        g = Dropout(self.dropout)(g)
        g = Dense(512, activation='relu')(g)
        g = Dropout(self.dropout)(g)
        g = Dense(256, activation='relu')(g)
        g = Dropout(self.dropout)(g)
        g = Dense(128, activation='relu')(g)  # connect to the previous layer, not the raw input
        g = Dropout(self.dropout)(g)
        self.g_output = Dense(self.output_size, activation='tanh', name="Generator_Output")(g)
        m = Model(self.g_input, self.g_output, name="Generator")
        return m

    def discriminator_network(self):
        # Discriminator : Object(10) - Dense - Probability
        d_opt = RMSprop(lr=0.000125, decay=6e-8)
        d_input = Input(shape=(self.input_size+self.output_size,), name="Discriminator_Input")
        d = Dense(128, activation='relu')(d_input)
        d = Dense(256, activation='relu')(d)
        d = Dense(512, activation='relu')(d)
        d = Dense(256, activation='relu')(d)
        d = Dense(128, activation='relu')(d)
        d_output = Dense(1, activation='sigmoid', name="Discriminator_Output")(d)
        m = Model(d_input, d_output, name="Discriminator")
        m.compile(loss='binary_crossentropy', optimizer=d_opt)
        return m

    def adverserial_network(self):
        # Adverserial : Object(6) - Generator - Discriminator - Probability
        a_opt = RMSprop(lr=0.0001, decay=3e-8)
        d_input = Concatenate(name="Generator_Input_Output")([self.g_input, self.g_output])
        m = Model(self.g_input, self.discriminator(d_input))
        m.compile(loss='binary_crossentropy', optimizer=a_opt)
        return m

    def train_discriminator(self, val):
        self.discriminator.trainable = val
        for l in self.discriminator.layers:
            l.trainable = val

    def train(self, adverserial_set, discriminator_set, epochs, batch_size):
        losses = {"d": [], "g": []}
        for i in range(epochs):
            batch = discriminator_set[int(i/2*batch_size/2):int((i/2+1)*batch_size/2)]  # Gets a batch of real data
            for j in adverserial_set[int(i/2*batch_size/2):int((i/2+1)*batch_size/2)]:  # Gets a batch of generated data
                n = copy.deepcopy(j)
                p = self.predict(j)
                for e in p:
                    n.append(e)
                batch.append(n)
            #self.train_discriminator(True) # Turns on discriminator weights
            output = np.zeros(batch_size)  # Sets output weight 0 for real and 1 for fakes
            output[int(batch_size/2):] = 1
            losses["d"].append(self.discriminator.train_on_batch(np.array(batch), np.array(output)))  # Train discriminator
            batch = adverserial_set[(i*batch_size):((i+1)*batch_size)]  # Gets real data to train generator
            output = np.zeros(batch_size)
            #self.train_discriminator(False) # Turns off discriminator weights
            losses["g"].append(self.adverserial.train_on_batch(np.array(batch), np.array(output)))  # Train generator
            print('Epoch %s - Adverserial Loss : %s, Discriminator Loss : %s' % (i+1, losses["g"][-1], losses["d"][-1]))
        self.generator.save("Generator.h5")
        self.discriminator.save("Discriminator.h5")
        return losses

    def predict(self, pred):
        return self.generator.predict(np.array(pred).reshape(-1, 6))[0]
```

# Training Data

```
training_set = []
actual_set = []

for i in range(number_of_particles):
    p = Particle()
    if(i%2==0):
        training_set.append(p.get_list())
    else:
        actual_set.append(p.get_list_physics(dt))
```

# Training

```
network = GAN(input_size=6, output_size=4, dropout=0)
loss = network.train(adverserial_set=training_set, discriminator_set=actual_set, epochs=epochs, batch_size=batch_size)

fig = plt.figure(figsize=(13,7))
plt.title("Loss Function over Epochs")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.plot(loss["g"], label="Adversarial Loss")
plt.plot(loss["d"], label="Discriminative Loss")
plt.legend()
plt.show()

network.predict([0.1,0.2,0.1,0.1,0.1,0.1])

network.generator.summary()
network.discriminator.summary()
network.adverserial.summary()
```
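As a rough sanity check (a sketch that only uses the classes defined above), we can compare the generator's output for a single random particle with the exact update computed by `apply_physics`:

```
# sketch: compare the GAN's predicted next state with the exact physics update
p = Particle()
exact = p.apply_physics(dt)                # ground-truth next position and velocity
predicted = network.predict(p.get_list())  # [x, y, vx, vy] produced by the generator
print("exact    :", exact.position, exact.velocity)
print("predicted:", predicted)
```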
# Field operations There are several convenience methods that can be used to analyse the field. Let us first define the mesh we are going to work with. ``` import discretisedfield as df p1 = (-50, -50, -50) p2 = (50, 50, 50) n = (2, 2, 2) mesh = df.Mesh(p1=p1, p2=p2, n=n) ``` We are going to initialise the vector field (`dim=3`), with $$\mathbf{f}(x, y, z) = (xy, 2xy, xyz)$$ For that, we are going to use the following Python function. ``` def value_function(pos): x, y, z = pos return x*y, 2*x*y, x*y*z ``` Finally, our field is ``` field = df.Field(mesh, dim=3, value=value_function) ``` ## 1. Sampling the field As we have shown previously, a field can be sampled by calling it. The argument must be a 3-length iterable and it contains the coordinates of the point. ``` point = (0, 0, 0) field(point) ``` However if the point is outside the mesh, an exception is raised. ``` point = (100, 100, 100) try: field(point) except ValueError: print('Exception raised.') ``` ## 2. Extracting the component of a vector field A three-dimensional vector field can be understood as three separate scalar fields, where each scalar field is a component of a vector field value. A scalar field of a component can be extracted by accessing `x`, `y`, or `z` attribute of the field. ``` x_component = field.x x_component((0, 0, 0)) ``` Default names `x`, `y`, and (for dim 3) `z` are only available for fields with dimensionality 2 or 3. ``` field.components ``` It is possible to change the component names: ``` field.components = ['mx', 'my', 'mz'] field.mx((0, 0, 0)) ``` This overrides the component labels and the old `x`, `y` and `z` cannot be used anymore: ``` try: field.x except AttributeError as e: print(e) ``` We change the component labels back to `x`, `y`, and `z` for the rest of this notebook. ``` field.components = ['x', 'y', 'z'] ``` Custom component names can optionally also be specified during field creation. If not specified, the default values are used for fields with dimensions 2 or 3. Higher-dimensional fields have no defaults and custom labes have to be specified in order to access individual field components: ``` field_4d = df.Field(mesh, dim=4, value=[1, 1, 1, 1], components=['c1', 'c2', 'c3', 'c4']) field_4d field_4d.c1((0, 0, 0)) ``` ## 3. Computing the average The average of the field can be obtained by calling `discretisedfield.Field.average` property. ``` field.average ``` Average always return a tuple, independent of the dimension of the field's value. ``` field.x.average ``` ## 4. Iterating through the field The field object itself is an iterable. That means that it can be iterated through. As a result it returns a tuple, where the first element is the coordinate of the mesh point, whereas the second one is its value. ``` for coordinate, value in field: print(coordinate, value) ``` ## 5. Sampling the field along the line To sample the points of the field which are on a certain line, `discretisedfield.Field.line` method is used. It takes two points `p1` and `p2` that define the line and an integer `n` which defines how many mesh coordinates on that line are required. The default value of `n` is 100. ``` line = field.line(p1=(-10, 0, 0), p2=(10, 0, 0), n=5) ``` ## 6. Intersecting the field with a plane If we intersect the field with a plane, `discretisedfield.Field.plane` will return a new field object which contains only discretisation cells that belong to that plane. The planes allowed are the planes perpendicular to the axes of the Cartesian coordinate system. 
For instance, a plane parallel to the $yz$-plane (perpendicular to the $x$-axis) which intersects the $x$-axis at 1, can be written as

$$x = 1$$

```
field.plane(x=1)
```

If we want to cut through the middle of the mesh, we do not need to provide a particular value for a coordinate.

```
field.plane('x')
```

## 7. Cascading the operations

Let us say we want to compute the average of the $x$ component of the field on the plane $y=10$. In order to do that, we can cascade several operations in a single line.

```
field.plane(y=10).x.average
```

This gives the same result as, for instance,

```
field.x.plane(y=10).average
```

## 8. Complex fields

`discretisedfield` supports complex-valued fields.

```
cfield = df.Field(mesh, dim=3, value=(1+1.5j, 2, 3j))
```

We can extract the `real` and `imaginary` parts.

```
cfield.real((0, 0, 0))
cfield.imag((0, 0, 0))
```

Similarly, we get the `real` and `imaginary` parts of individual components.

```
cfield.x.real((0, 0, 0))
cfield.x.imag((0, 0, 0))
```

Complex conjugate.

```
cfield.conjugate((0, 0, 0))
```

Phase in the complex plane.

```
cfield.phase((0, 0, 0))
```

## 9. Applying `numpy` universal functions

All numpy universal functions can be applied to `discretisedfield.Field` objects. Below we show a few different examples. For the available functions please refer to the `numpy` [documentation](https://numpy.org/doc/stable/reference/ufuncs.html#available-ufuncs).

```
import numpy as np

f1 = df.Field(mesh, dim=1, value=1)
f2 = df.Field(mesh, dim=1, value=np.pi)
f3 = df.Field(mesh, dim=1, value=2)

np.sin(f1)
np.sin(f2)((0, 0, 0))
np.sum((f1, f2, f3))((0, 0, 0))
np.exp(f1)((0, 0, 0))
np.power(f3, 2)((0, 0, 0))
```

## Other

Full description of all existing functionality can be found in the [API Reference](https://discretisedfield.readthedocs.io/en/latest/_autosummary/discretisedfield.Field.html).
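As a final example, the operations above can be combined freely. The sketch below assumes that binary NumPy ufuncs such as `np.add` dispatch on two `Field` objects defined on the same mesh in the same way as the unary ufuncs shown in section 9:

```
# sketch: combine ufuncs on the scalar fields defined above
combined = np.add(np.power(f1, 2), np.exp(f3))  # 1**2 + exp(2) in every cell
combined((0, 0, 0))
```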
# [NTDS'19] tutorial 5: machine learning with scikit-learn [ntds'19]: https://github.com/mdeff/ntds_2019 [Nicolas Aspert](https://people.epfl.ch/nicolas.aspert), [EPFL LTS2](https://lts2.epfl.ch). * Dataset: [digits](https://archive.ics.uci.edu/ml/datasets/Pen-Based+Recognition+of+Handwritten+Digits) * Tools: [scikit-learn](https://scikit-learn.org/stable/), [numpy](http://www.numpy.org), [scipy](https://www.scipy.org), [matplotlib](https://matplotlib.org) *scikit-learn* is a machine learning python library. Most commonly used algorithms for classification, clustering and regression are implemented as part of the library, e.g. * [Logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) * [k-means clustering](https://en.wikipedia.org/wiki/K-means_clustering) * [Support vector machines](https://en.wikipedia.org/wiki/Support-vector_machine) * ... The aim of this tutorial is to show basic usage of some simple machine learning techniques. Check the official [documentation](https://scikit-learn.org/stable/documentation.html) for more information, especially the [tutorials](https://scikit-learn.org/stable/tutorial/index.html) section. ``` %matplotlib inline import numpy as np from matplotlib import pyplot as plt import sklearn ``` ## Data loading We will use a dataset named *digits*. It is made of 1797 handwritten digits images (of size 8x8 pixels each) acquired from 44 different writers. Each image is labelled according to the digit present in the image. You can find more information about this dataset [here](https://archive.ics.uci.edu/ml/datasets/Pen-Based+Recognition+of+Handwritten+Digits). ![digits](https://scikit-learn.org/stable/_images/sphx_glr_plot_lle_digits_001.png) Load the dataset. ``` from sklearn.datasets import load_digits digits = load_digits() ``` The `digits` variable contains several fields. In `images` you have all samples as 2-dimensional arrays. ``` print(digits.images.shape) print(digits.images[0]) plt.imshow(digits.images[0], cmap=plt.cm.gray); ``` In `data`, the same samples are represented as 1-d vectors of length 64. ``` print(digits.data.shape) print(digits.data[0]) ``` In `target` you have the label corresponding to each image. ``` print(digits.target.shape) print(digits.target) ``` Let us visualize the 20 first entries of the dataset (image display kept small on purpose) ``` fig = plt.figure(figsize=(15, 0.5)) for index, (image, label) in enumerate(zip(digits.images[0:20], digits.target[0:20])): ax = fig.add_subplot(1, 20, index+1) ax.imshow(image, cmap=plt.cm.gray) ax.set_title(label) ax.axis('off') ``` ### Training/Test set Before training our model, the [`train_test_split`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function will separate our dataset into a training set and a test set. The samples from the test set are never used during the training phase. This allows for a fair evaluation of the model's performance. ``` from sklearn.model_selection import train_test_split train_img, test_img, train_lbl, test_lbl = train_test_split( digits.data, digits.target, test_size=1/6) # keep ~300 images as test set ``` We can check that all classes are well balanced in the training and test sets. ``` np.histogram(train_lbl, bins=10) np.histogram(test_lbl, bins=10) ``` ## Supervised learning: logistic regression ### Linear regression reminder Linear regression is used to predict an dependent value $y$ from an n-dimensional vector $x$. 
The assumption made here is that the output depends linearly on the input components, i.e. $y = mx + b$. Given a set of input and output values, the goal is to compute $m$ and $b$ minimizing the [mean squared error (MSE)](https://en.wikipedia.org/wiki/Mean_squared_error) between the predicted and actual outputs. In scikit-learn this method is available through [`LinearRegression`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html). ### Logistic regression Logistic regression is used to predict categorical data (e.g. yes/no, member/non-member, ham/spam, benign/malignant, ...). It uses the output of a linear predictor, and maps it to a probability using a [sigmoid function](https://en.wikipedia.org/wiki/Sigmoid_function), such as the logistic function $s(z) = \frac{1}{1+e^{-z}}$. The output is a probability score between 0 and 1, and using a simple thresholding the class output will be positive if the probability is greater than 0.5, negative if not. A [log-loss cost function](http://wiki.fast.ai/index.php/Logistic_Regression#Cost_Function) (not just the MSE as for linear regression) is used to train logistic regression (using gradient descent for instance). [Multinomial logistic regression](https://en.wikipedia.org/wiki/Multinomial_logistic_regression) is an extension of the binary classification problem to a $n$-classes problem. We can now create a logistic regression object and fit the parameters using the training data. NB: as the dataset is quite simple, default parameters will give good results. Check the [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) for fine-tuning possibilities. ``` from sklearn.linear_model import LogisticRegression # All unspecified parameters are left to their default values. logisticRegr = LogisticRegression(verbose=1, solver='liblinear', multi_class='auto') # set solver and multi_class to silence warnings logisticRegr.fit(train_img, train_lbl) ``` ## Model performance evaluation For a binary classification problem, let us denote by $TP$, $TN$, $FP$, and $FN$ the number of true positives, true negatives, false positives and false negatives. ### Accuracy The *accuracy* is defined by $a = \frac{TP}{TP + TN + FP + FN}$ NB: in scikit-learn, models may have different definitions of the `score` method. For multi-class logistic regression, the value is the mean accuracy for each class. ``` score = logisticRegr.score(test_img, test_lbl) print(f'accuracy = {score:.4f}') ``` ### F1 score Accuracy only provides partial information about the performance of a model. Many other [metrics](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-metrics) are part of scikit-learn. A metric that provides a more complete overview of the classification performance is the [F1 score](https://en.wikipedia.org/wiki/F1_score). It takes into account not only the valid predictions but also the incorrect ones, by combining precision and recall. *Precision* is the number of positive predictions divided by the total number of positive class values predicted, i.e. $p=\frac{TP}{TP+FP}$. A low precision indicates a high number of false positives. *Recall* is the number of positive predictions divided by the number of positive class values in the test data, i.e. $r=\frac{TP}{TP+FN}$. A low recall indicates a high number of false negatives. Finally the F1 score is the harmonic mean between precision and recall, i.e. 
$F1=2\frac{p.r}{p+r}$ Let us compute the predicted labels in the test set: ``` pred_lbl = logisticRegr.predict(test_img) from sklearn.metrics import f1_score, classification_report from sklearn.utils.multiclass import unique_labels ``` The [`f1_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score) function computes the F1 score. The `average` parameter controls whether the result is computed globally over all classes (`average='micro'`) or if the F1 score is computed for each class then averaged (`average='macro'`). ``` f1_score(test_lbl, pred_lbl, average='micro') f1_score(test_lbl, pred_lbl, average='macro') ``` `classification_report` provides a synthetic overview of all results for each class, as well as globally. ``` print(classification_report(test_lbl, pred_lbl)) ``` ### Confusion matrix In the case of a multi-class problem, the *confusion matrix* is often used to present the results. ``` from sklearn.metrics import confusion_matrix def plot_confusion_matrix(y_true, y_pred, classes, normalize=False, title=None, cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ if not title: if normalize: title = 'Normalized confusion matrix' else: title = 'Confusion matrix, without normalization' # Compute confusion matrix cm = confusion_matrix(y_true, y_pred) # Only use the labels that appear in the data classes = classes[unique_labels(y_true, y_pred)] if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) fig, ax = plt.subplots() im = ax.imshow(cm, interpolation='nearest', cmap=cmap) ax.figure.colorbar(im, ax=ax) # We want to show all ticks... ax.set(xticks=np.arange(cm.shape[1]), yticks=np.arange(cm.shape[0]), # ... and label them with the respective list entries xticklabels=classes, yticklabels=classes, title=title, ylabel='True label', xlabel='Predicted label') # Rotate the tick labels and set their alignment. plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor") # Loop over data dimensions and create text annotations. fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i in range(cm.shape[0]): for j in range(cm.shape[1]): ax.text(j, i, format(cm[i, j], fmt), ha="center", va="center", color="white" if cm[i, j] > thresh else "black") fig.tight_layout() return ax plot_confusion_matrix(test_lbl, pred_lbl, np.array(list(map(lambda x: str(x), range(10)))), normalize=False) ``` ## Supervised learning: support-vector machines [Support-vector machines (SVM)](https://en.wikipedia.org/wiki/Support-vector_machine) are also used for classification tasks. For a binary classification task of $n$-dimensional feature vectors, a linear SVM try to return the ($n-1$)-dimensional hyperplane that separate the two classes with the largest possible margin. Nonlinear SVMs fit the maximum-margin hyperplane in a transformed feature space. Although the classifier is a hyperplane in the transformed feature space, it may be nonlinear in the original input space. The goal here is to show that a method (e.g. the previously used logistic regression) can be substituted transparently for another one. ``` from sklearn import svm ``` Default parameters perform well on this dataset. It might be needed to adjust $C$ and $\gamma$ (e.g. via a grid search) for optimal performance (cf. 
[SVC documentation](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC)). ``` clf = svm.SVC(gamma='scale') # default kernel is RBF clf.fit(train_img, train_lbl) ``` The classification accuracy improves with respect to logistic regression (here `score` also computes mean accuracy, as in logistic regression). ``` clf.score(test_img, test_lbl) ``` The F1 score is also improved. ``` pred_lbl_svm = clf.predict(test_img) print(classification_report(test_lbl, pred_lbl_svm)) ``` ## Unsupervised learning: $k$-means [$k$-means](https://en.wikipedia.org/wiki/K-means_clustering) aims at partitioning a samples into $k$ clusters, s.t. each sample belongs to the cluster having the closest mean. Its implementation is iterative, and relies on a prior knowledge of the number of clusters present. One important step in $k$-means clustering is the initialization, i.e. the choice of initial clusters to be refined. This choice can have a significant impact on results. ``` from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=10) kmeans.fit(digits.data) km_labels = kmeans.predict(digits.data) digits.target km_labels ``` Since we have ground truth information of classes, we can check if the $k$-means results make sense. However as you can see, the labels produced by $k$-means and the ground truth ones do not match. An agreement score based on [mutual information](https://scikit-learn.org/stable/modules/clustering.html#clustering-evaluation), insensitive to labels permutation can be used to evaluate the results. ``` from sklearn.metrics import adjusted_mutual_info_score adjusted_mutual_info_score(digits.target, kmeans.labels_) ``` ## Unsupervized learning: dimensionality reduction You can also try to visualize the clusters as in this [scikit-learn demo](https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html). Mapping the input features to lower dimensional embeddings (2D or 3D), e.g. using PCA otr tSNE is required for visualization. [This demo](https://scikit-learn.org/stable/auto_examples/manifold/plot_lle_digits.html) provides an overview of the possibilities. ``` from matplotlib import offsetbox def plot_embedding(X, y, title=None): """Scale and visualize the embedding vectors.""" x_min, x_max = np.min(X, 0), np.max(X, 0) X = (X - x_min) / (x_max - x_min) plt.figure() ax = plt.subplot(111) for i in range(X.shape[0]): plt.text(X[i, 0], X[i, 1], str(y[i]), color=plt.cm.Set1(y[i] / 10.), fontdict={'weight': 'bold', 'size': 9}) if hasattr(offsetbox, 'AnnotationBbox'): # only print thumbnails with matplotlib > 1.0 shown_images = np.array([[1., 1.]]) # just something big for i in range(X.shape[0]): dist = np.sum((X[i] - shown_images) ** 2, 1) if np.min(dist) < 4e-3: # don't show points that are too close continue shown_images = np.r_[shown_images, [X[i]]] imagebox = offsetbox.AnnotationBbox( offsetbox.OffsetImage(digits.images[i], cmap=plt.cm.gray_r), X[i]) ax.add_artist(imagebox) plt.xticks([]), plt.yticks([]) if title is not None: plt.title(title) from sklearn import manifold tsne = manifold.TSNE(n_components=2, init='pca', random_state=0) X_tsne = tsne.fit_transform(digits.data) plot_embedding(X_tsne, digits.target, "t-SNE embedding of the digits (ground truth labels)") plot_embedding(X_tsne, km_labels, "t-SNE embedding of the digits (kmeans labels)") ```
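For comparison, here is a PCA embedding plotted with the same helper (a small sketch; PCA is linear and much faster than t-SNE, but it typically separates the digit classes less cleanly):

```
from sklearn.decomposition import PCA

# sketch: linear PCA embedding of the digits, plotted with the helper above
X_pca = PCA(n_components=2).fit_transform(digits.data)
plot_embedding(X_pca, digits.target, "PCA embedding of the digits (ground truth labels)")
```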
``` import torch import torch.nn as nn import onmt import onmt.inputters import onmt.modules import onmt.utils ``` We begin by loading in the vocabulary for the model of interest. This will let us check vocab size and to get the special ids for padding. ``` vocab = dict(torch.load("../../data/data.vocab.pt")) src_padding = vocab["src"].stoi[onmt.inputters.PAD_WORD] tgt_padding = vocab["tgt"].stoi[onmt.inputters.PAD_WORD] ``` Next we specify the core model itself. Here we will build a small model with an encoder and an attention based input feeding decoder. Both models will be RNNs and the encoder will be bidirectional ``` emb_size = 10 rnn_size = 6 # Specify the core model. encoder_embeddings = onmt.modules.Embeddings(emb_size, len(vocab["src"]), word_padding_idx=src_padding) encoder = onmt.encoders.RNNEncoder(hidden_size=rnn_size, num_layers=1, rnn_type="LSTM", bidirectional=True, embeddings=encoder_embeddings) decoder_embeddings = onmt.modules.Embeddings(emb_size, len(vocab["tgt"]), word_padding_idx=tgt_padding) decoder = onmt.decoders.decoder.InputFeedRNNDecoder(hidden_size=rnn_size, num_layers=1, bidirectional_encoder=True, rnn_type="LSTM", embeddings=decoder_embeddings) model = onmt.models.model.NMTModel(encoder, decoder) # Specify the tgt word generator and loss computation module model.generator = nn.Sequential( nn.Linear(rnn_size, len(vocab["tgt"])), nn.LogSoftmax()) loss = onmt.utils.loss.NMTLossCompute(model.generator, vocab["tgt"]) ``` Now we set up the optimizer. This could be a core torch optim class, or our wrapper which handles learning rate updates and gradient normalization automatically. ``` optim = onmt.utils.optimizers.Optimizer(method="sgd", lr=1, max_grad_norm=2) optim.set_parameters(model.named_parameters()) ``` Now we load the data from disk. Currently will need to call a function to load the fields into the data as well. ``` # Load some data data = torch.load("../../data/data.train.1.pt") valid_data = torch.load("../../data/data.valid.1.pt") data.load_fields(vocab) valid_data.load_fields(vocab) data.examples = data.examples[:100] ``` To iterate through the data itself we use a torchtext iterator class. We specify one for both the training and test data. ``` train_iter = onmt.inputters.OrderedIterator( dataset=data, batch_size=10, device=-1, repeat=False) valid_iter = onmt.inputters.OrderedIterator( dataset=valid_data, batch_size=10, device=-1, train=False) ``` Finally we train. ``` trainer = onmt.Trainer(model, loss, loss, optim) def report_func(*args): stats = args[-1] stats.output(args[0], args[1], 10, 0) return stats for epoch in range(2): trainer.train(epoch, report_func) val_stats = trainer.validate() print("Validation") val_stats.output(epoch, 11, 10, 0) trainer.epoch_step(val_stats.ppl(), epoch) ``` To use the model, we need to load up the translation functions ``` import onmt.translate translator = onmt.translate.Translator(beam_size=10, fields=data.fields, model=model) builder = onmt.translate.TranslationBuilder(data=valid_data, fields=data.fields) valid_data.src_vocabs for batch in valid_iter: trans_batch = translator.translate_batch(batch=batch, data=valid_data) translations = builder.from_batch(trans_batch) for trans in translations: print(trans.log(0)) break ```
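As a small closing check, the sketch below counts the trainable parameters of the model we built above. It relies only on standard PyTorch, not on any OpenNMT-specific API:

```
# sketch: number of trainable parameters in the NMTModel defined above
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print("Trainable parameters: %d" % n_params)
```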
# Decision Trees

- Non-parametric learning algorithm
- Naturally handles multi-class classification
- Can also be used for regression
- Very good interpretability

```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn import datasets

iris = datasets.load_iris()
print(iris.DESCR)

X = iris.data[:, 2:]  # keep only the last two features
y = iris.target

plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
plt.scatter(X[y==2, 0], X[y==2, 1])
```

### 1. Decision trees in scikit-learn

```
from sklearn.tree import DecisionTreeClassifier

# use entropy as the splitting criterion
dt_clf = DecisionTreeClassifier(max_depth=3, criterion="entropy")
dt_clf.fit(X, y)

def plot_decision_boundary(model, axis):
    x0, x1 = np.meshgrid(
        np.linspace(axis[0], axis[1], int((axis[1] - axis[0])*100)).reshape(1, -1),
        np.linspace(axis[2], axis[3], int((axis[3] - axis[2])*100)).reshape(-1, 1)
    )
    X_new = np.c_[x0.ravel(), x1.ravel()]

    y_predic = model.predict(X_new)
    zz = y_predic.reshape(x0.shape)

    from matplotlib.colors import ListedColormap
    custom_cmap = ListedColormap(['#EF9A9A', '#FFF590', '#90CAF9'])

    plt.contourf(x0, x1, zz, linewidth=5, cmap=custom_cmap)

plot_decision_boundary(dt_clf, axis=(0.5, 7.5, 0, 3))
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
plt.scatter(X[y==2, 0], X[y==2, 1])
```

### 2. How to build a decision tree

**Questions**

- On which dimension (feature) should each node split?
- At which value of that dimension should the split be made?
- The splitting criterion is that **the split should reduce the information entropy**

**Information entropy**

- In information theory, entropy measures the uncertainty of a random variable
- The higher the entropy, the more uncertain the data
- The lower the entropy, the less uncertain the data

$$H = -\sum_{i=1}^kp_i\log{(p_i)}$$

- where $p_i$ is the proportion of class $i$ among all classes

![GArxBV.png](https://s1.ax1x.com/2020/03/28/GArxBV.png)

- For binary classification, the entropy formula becomes:

$$H=-x\log(x)-(1-x)\log(1-x)$$

**Entropy function**

```
def entropy(p):
    return -p * np.log(p) - (1-p) * np.log(1-p)

x = np.linspace(0.01, 0.99)
plt.plot(x, entropy(x))
```

- As we can see, the closer x is to 0.5, the higher the entropy

### 3. Simulating entropy-based splitting

```
# split on dimension d at the given value
def split(X, y, d, value):
    index_a = (X[:, d] <= value)
    index_b = (X[:, d] > value)
    return X[index_a], X[index_b], y[index_a], y[index_b]

from collections import Counter
from math import log

# compute the entropy of a set of labels, summed over the classes
def entropy(y):
    counter = Counter(y)
    res = 0.0
    for num in counter.values():
        p = num / len(y)
        res += -p * log(p)
    return res

# search for the split value: find the minimum entropy and the corresponding split
def try_split(X, y):

    best_entropy = float('inf')  # smallest entropy found so far
    best_d, best_v = -1, -1      # split dimension, split value

    # iterate over every dimension
    for d in range(X.shape[1]):
        # candidate values are the midpoints between consecutive samples along d,
        # so first sort all samples by dimension d
        sorted_index = np.argsort(X[:, d])
        for i in range(1, len(X)):
            if X[sorted_index[i-1], d] != X[sorted_index[i], d]:
                v = (X[sorted_index[i-1], d] + X[sorted_index[i], d]) / 2
                x_l, x_r, y_l, y_r = split(X, y, d, v)
                # compute the entropy of the two parts produced by this split
                e = entropy(y_l) + entropy(y_r)
                if e < best_entropy:
                    best_entropy, best_d, best_v = e, d, v

    return best_entropy, best_d, best_v

best_entropy, best_d, best_v = try_split(X, y)
print("best_entropy = ", best_entropy)
print("best_d", best_d)
print("best_v", best_v)
```

**That is, splitting dimension 0 at the value 2.45 gives the lowest entropy, 0.693**

```
X1_l, X1_r, y1_l, y1_r = split(X, y, best_d, best_v)

entropy(y1_r)
entropy(y1_l)
# as the plot above shows, the pink region contains only one class, so its entropy is 0

best_entropy2, best_d2, best_v2 = try_split(X1_r, y1_r)
print("best_entropy = ", best_entropy2)
print("best_d", best_d2)
print("best_v", best_v2)

X2_l, X2_r, y2_l, y2_r = split(X1_r, y1_r, best_d2, best_v2)

entropy(y2_r)
entropy(y2_l)
```
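As a quick cross-check (a sketch; the feature names below are just descriptive labels for the two petal columns we kept), we can print the tree that scikit-learn learned and verify that its first threshold is close to the 2.45 we found by hand:

```
# sketch: inspect the split thresholds chosen by scikit-learn's decision tree
from sklearn.tree import export_text
print(export_text(dt_clf, feature_names=["petal length", "petal width"]))
```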
# ClusterFinder Reference genomes reconstruction This notebook validates the 10 genomes we obtained from NCBI based on the ClusterFinder supplementary table. We check that the gene locations from the supplementary table match locations in the GenBank files. ``` from Bio import SeqIO from Bio.SeqFeature import FeatureLocation import pandas as pd from Bio import Entrez import seaborn as sns def get_features_of_type(sequence, feature_type): return [feature for feature in sequence.features if feature.type == feature_type] def get_reference_gene_location(gene_csv_row): start = gene_csv_row['gene start'] - 1 end = gene_csv_row['gene stop'] strand = 1 if gene_csv_row['gene strand'] == '+' else (-1 if gene_csv_row['gene strand'] == '-' else None) return FeatureLocation(start, end, strand) def feature_locus_matches(feature, reference_locus): return feature.qualifiers.get('locus_tag',[None])[0] == reference_locus ``` # Loading reference cluster gene locations ``` reference_genes = pd.read_csv('../data/clusterfinder/labelled/CF_labelled_genes_orig.csv', sep=';') reference_genes.head() ``` # Genes with no sequence ``` no_sequence_genes = reference_genes[reference_genes['NCBI ID'] == '?'] no_sequence_counts = no_sequence_genes.groupby('Genome ID')['gene locus'].count() print('{} genes don\'t have a sequence!'.format(len(no_sequence_genes))) pd.DataFrame({'missing genes':no_sequence_counts}) reference_ids = reference_genes[reference_genes['NCBI ID'] != '?']['NCBI ID'].unique() reference_ids ``` # Validating that reference genes are found in our sequences ``` def validate_genome(record, record_reference_genes): print('Validating {}'.format(record.id)) record_genes = get_features_of_type(record, 'gene') record_cds = get_features_of_type(record, 'CDS') validation = [] record_length = len(record.seq) min_location = record_length max_location = -1 prev_gene_index = None prev_cluster_start = None for i, reference_gene in record_reference_genes.iterrows(): reference_gene_location = get_reference_gene_location(reference_gene) reference_gene_locus = reference_gene['gene locus'] reference_cluster_start = reference_gene['NPL start'] gene_matches_locus = [f for f in record_genes if feature_locus_matches(f, reference_gene_locus)] cds_matches_locus = [f for f in record_cds if feature_locus_matches(f, reference_gene_locus)] gene_matches_location = [f for f in gene_matches_locus if reference_gene_location == f.location] cds_matches_location = [f for f in cds_matches_locus if reference_gene_location == f.location] validation.append({ 'gene_locus_not_found':not gene_matches_locus, 'cds_locus_not_found':not cds_matches_locus, 'gene_location_correct': bool(gene_matches_location), 'cds_location_correct': bool(cds_matches_location) }) if not cds_matches_locus: print('No CDS found for gene locus {}'.format(reference_gene_locus)) if gene_matches_locus: gene_match = gene_matches_locus[0] if not cds_matches_locus: print(' Gene: ', gene_match.qualifiers) # Use gene index to check if we have a consecutive sequence of genes (except when going from one cluster to another) gene_index = [gi for gi,f in enumerate(record_genes) if feature_locus_matches(f, reference_gene_locus)][0] if reference_cluster_start == prev_cluster_start and gene_index != prev_gene_index + 1: print('Additional unexpected genes found before {} (index {} -> {}) at cluster start {}'.format(reference_gene_locus, prev_gene_index, gene_index, reference_cluster_start)) # Calculate min and max cluster gene location to see how much of the sequence is covered by the 
reference genes min_location = min(gene_match.location.start, min_location) max_location = max(gene_match.location.end, max_location) prev_gene_index = gene_index prev_cluster_start = reference_cluster_start result = pd.DataFrame(validation).sum().to_dict() result['location correct'] = min(result['gene_location_correct'], result['cds_location_correct']) / len(validation) result['ID'] = record.id result['genome'] = record_reference_genes.iloc[0]['Genome ID'] result['sequence length'] = record_length result['total genes'] = len(record_genes) result['reference genes'] = len(record_reference_genes) result['first location'] = min_location / record_length result['last location'] = max_location / record_length result['covered'] = (max_location - min_location) / record_length return result validations = [] reference_gene_groups = reference_genes.groupby('NCBI ID') records = SeqIO.parse('../data/clusterfinder/labelled/CF_labelled_contigs.gbk', 'genbank') for record in records: ncbi_id = record.id print(ncbi_id) record_reference_genes = reference_gene_groups.get_group(ncbi_id) validations.append(validate_genome(record, record_reference_genes)) validations = pd.DataFrame(validations) validations.set_index('ID', inplace=True) validations validations['location correct'].mean() 1 - validations['location correct'].mean() validations[['genome','first location','last location','covered','location correct','reference genes','total genes']] ``` # Cluster genes ``` genes = pd.read_csv('../data/clusterfinder/labelled/CF_labelled_genes.csv', sep=';') genes.head() cluster_counts = genes.groupby('contig_id')['cluster_id'].nunique() cluster_counts.sort_values().plot.barh() gene_counts = genes.groupby('cluster_id')['locus_tag'].count() gene_counts.hist(bins=50) ```
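To summarise the distributions behind the plots above, here is a short sketch using plain pandas:

```
# sketch: summary statistics for clusters per contig and genes per cluster
print(cluster_counts.describe())
print(gene_counts.describe())
```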
# Realization of Recursive Filters *This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* ## Cascaded Structures The realization of recursive filters with a high order may be subject to numerical issues. For instance, when the coefficients span a wide amplitude range, their quantization may require a small quantization step or may impose a large relative error for small coefficients. The basic concept of cascaded structures is to decompose a high order filter into a cascade of lower order filters, typically first and second order recursive filters. ### Decomposition into Second-Order Sections The rational transfer function $H(z)$ of a linear time-invariant (LTI) recursive system can be [expressed by its zeros and poles](introduction.ipynb#Transfer-Function) as \begin{equation} H(z) = \frac{b_M}{a_N} \cdot \frac{\prod_{\mu=1}^{P} (z - z_{0\mu})^{m_\mu}}{\prod_{\nu=1}^{Q} (z - z_{\infty\nu})^{n_\nu}} \end{equation} where $z_{0\mu}$ and $z_{\infty\nu}$ denote the $\mu$-th zero and $\nu$-th pole of degree $m_\mu$ and $n_\nu$ of $H(z)$, respectively. The total number of zeros and poles is denoted by $P$ and $Q$. The poles and zeros of a real-valued filter $h[k] \in \mathbb{R}$ are either single real valued or conjugate complex pairs. This motivates to split the transfer function into * first order filters constructed from a single pole and zero * second order filters constructed from a pair of conjugated complex poles and zeros Decomposing the transfer function into these two types by grouping the poles and zeros into single poles/zeros and conjugate complex pairs of poles/zeros results in \begin{equation} H(z) = K \cdot \prod_{\eta=1}^{S_1} \frac{(z - z_{0\eta})}{(z - z_{\infty\eta})} \cdot \prod_{\eta=1}^{S_2} \frac{(z - z_{0\eta}) (z - z_{0\eta}^*)} {(z - z_{\infty\eta})(z - z_{\infty\eta}^*)} \end{equation} where $K$ denotes a constant and $S_1 + 2 S_2 = N$ with $N$ denoting the order of the system. The cascade of two systems results in a multiplication of their transfer functions. Above decomposition represents a cascade of first- and second-order recursive systems. The former can be treated as a special case of second-order recursive systems. The decomposition is therefore known as decomposition into second-order sections (SOSs) or [biquad filters](https://en.wikipedia.org/wiki/Digital_biquad_filter). Using a cascade of SOSs the transfer function of the recursive system can be rewritten as \begin{equation} H(z) = \prod_{\mu=1}^{S} \frac{b_{0, \mu} + b_{1, \mu} \, z^{-1} + b_{2, \mu} \, z^{-2}}{1 + a_{1, \mu} \, z^{-1} + a_{2, \mu} \, z^{-2}} \end{equation} where $S = \lceil \frac{N}{2} \rceil$ denotes the total number of SOSs. These results state that any real valued system of order $N > 2$ can be decomposed into SOSs. This has a number of benefits * quantization effects can be reduced by sensible grouping of poles/zeros, e.g. 
such that the spanned amplitude range of the filter coefficients is limited * A SOS may be extended by a gain factor to further reduce quantization effects by normalization of the coefficients * efficient and numerically stable SOSs serve as generic building blocks for higher-order recursive filters ### Example - Cascaded second-order section realization of a lowpass The following example illustrates the decomposition of a higher-order recursive Butterworth lowpass filter into a cascade of second-order sections. ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt from matplotlib.markers import MarkerStyle from matplotlib.patches import Circle import scipy.signal as sig N = 9 # order of recursive filter def zplane(z, p, title='Poles and Zeros'): "Plots zero and pole locations in the complex z-plane" ax = plt.gca() ax.plot(np.real(z), np.imag(z), 'bo', fillstyle='none', ms = 10) ax.plot(np.real(p), np.imag(p), 'rx', fillstyle='none', ms = 10) unit_circle = Circle((0,0), radius=1, fill=False, color='black', ls='solid', alpha=0.9) ax.add_patch(unit_circle) ax.axvline(0, color='0.7') ax.axhline(0, color='0.7') plt.title(title) plt.xlabel(r'Re{$z$}') plt.ylabel(r'Im{$z$}') plt.axis('equal') plt.xlim((-2, 2)) plt.ylim((-2, 2)) plt.grid() # design filter b, a = sig.butter(N, 0.2) # decomposition into SOS sos = sig.tf2sos(b, a, pairing='nearest') # print filter coefficients print('Coefficients of the recursive part \n') print(['%1.2f'%ai for ai in a]) print('\n') print('Coefficients of the recursive part of the individual SOS \n') print('Section \t a1 \t\t a2') for n in range(sos.shape[0]): print('%d \t\t %1.5f \t %1.5f'%(n, sos[n, 4], sos[n, 5])) # plot pole and zero locations plt.figure(figsize=(5,5)) zplane(np.roots(b), np.roots(a), 'Poles and Zeros - Overall') plt.figure(figsize=(10, 7)) for n in range(sos.shape[0]): plt.subplot(231+n) zplane(np.roots(sos[n, 0:3]), np.roots(sos[n, 3:6]), title='Poles and Zeros - Section %d'%n) plt.tight_layout() # compute and plot frequency response of sections plt.figure(figsize=(10,5)) for n in range(sos.shape[0]): Om, H = sig.freqz(sos[n, 0:3], sos[n, 3:6]) plt.plot(Om, 20*np.log10(np.abs(H)), label=r'Section %d'%n) plt.xlabel(r'$\Omega$') plt.ylabel(r'$|H_n(e^{j \Omega})|$ in dB') plt.legend() plt.grid() ``` **Exercise** * What amplitude range is spanned by the filter coefficients? * What amplitude range is spanned by the SOS coefficients? * Change the pole/zero grouping strategy from `pairing='nearest'` to `pairing='keep_odd'`. What changes? * Increase the order `N` of the filter. What changes? Solution: Inspecting both the coefficients of the recursive part of the original filter and of the individual SOS reveals that the spanned amplitude range is lower for the latter. The choice of the pole/zero grouping strategy influences the locations of the poles/zeros in the individual SOS, the spanned amplitude range of their coefficients and the transfer functions of the individual sections. The total number of SOS scales with the order of the original filter. **Copyright** This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). 
Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016-2018*.
# Multi-wavelength maps New in version `0.2.1` is the ability for users to instantiate wavelength-dependent maps. Nearly all of the computational overhead in `starry` comes from computing rotation matrices and integrals of the Green's basis functions, which makes it **really** fast to compute light curves at different wavelengths if we simply recycle the results of all of these operations. By "wavelength-dependent map" we mean a map whose spherical harmonic coefficients are a function of wavelength. Specifically, instead of setting the coefficient at $l, m$ to a scalar value, we can set it to a vector, where each element corresponds to the coefficient in a particular wavelength bin. Let's look at some examples. ## Instantiating multi-wavelength maps The key is to pass the `nwav` keyword when instantiating a `starry` object. For simplicity, let's do `nwav=3`, corresponding to three wavelength bins. ``` %matplotlib inline from starry import Map map = Map(lmax=2, nwav=3) ``` Recall that the map coefficients are now *vectors*. Here's what the coefficient *matrix* now looks like: ``` map.y ``` Each row corresponds to a given spherical harmonic, and each column to a given wavelength bin. Let's set the $Y_{1,0}$ coefficient: ``` map[1, 0] = [0.3, 0.4, 0.5] ``` Here's our new map vector: ``` map.y ``` To visualize the map, we can call `map.show()` as usual, but now we actually get an *animation* showing us what the map looks like at each wavelength. ``` map.show() ``` (*Caveat: the* `map.animate()` *routine is disabled for multi-wavelength maps.*) Let's set a few more coefficients: ``` map[1, -1] = [0, 0.1, -0.1] map[2, -1] = [-0.1, -0.2, -0.1] map[2, 2] = [0.3, 0.2, 0.1] map.show() ``` OK, our map now has some interesting wavelength-dependent features. Let's compute some light curves! First, a simple phase curve: ``` import numpy as np theta = np.linspace(0, 360, 1000) map.axis = [0, 1, 0] phase_curve = map.flux(theta=theta) ``` Let's plot it. The blue line is the first wavelength bin, the orange line is the second bin, and the green line is the third: ``` import matplotlib.pyplot as pl %matplotlib inline fig, ax = pl.subplots(1, figsize=(14, 6)) ax.plot(theta, phase_curve); ax.set_xlabel(r'$\theta$ (degrees)', fontsize=16) ax.set_ylabel('Flux', fontsize=16); ``` We can also compute an occultation light curve: ``` xo = np.linspace(-1.5, 1.5, 1000) light_curve = map.flux(xo=xo, yo=0.2, ro=0.1) ``` Let's plot it. This time we normalize the light curve by the baseline for better plotting, since the map has a different total flux at each wavelength: ``` fig, ax = pl.subplots(1, figsize=(14, 6)) ax.plot(theta, light_curve / light_curve[0]); ax.set_xlabel('Occultor position', fontsize=16) ax.set_ylabel('Flux', fontsize=16); ``` As we mentioned above, there's not that much overhead to computing light curves in many different wavelength bins. Check it out: ``` import time np.random.seed(1234) def runtime(nwav, N=10): total_time = 0 xo = np.linspace(-1.5, 1.5, 1000) for n in range(N): map = Map(lmax=2, nwav=nwav) map[:, :] = np.random.randn(9, nwav) tstart = time.time() map.flux(xo=xo, yo=0.2, ro=0.1) total_time += time.time() - tstart return total_time / N nwav = np.arange(1, 50) t = [runtime(n) for n in nwav] fig, ax = pl.subplots(1, figsize=(14, 7)) ax.plot(nwav, t, '.') ax.plot(nwav, t, '-', color='C0', lw=1, alpha=0.3) ax.set_xlabel('nwav', fontsize=16) ax.set_ylabel('time (s)', fontsize=16); ax.set_ylim(0, 0.003); ```
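From the timings above we can also estimate the marginal cost of each additional wavelength bin with a simple linear fit (a sketch using plain NumPy; the exact numbers will depend on your machine):

```
# sketch: linear fit to the runtimes measured above
slope, intercept = np.polyfit(nwav, t, 1)
print("fixed overhead: %.2f ms, extra cost per bin: %.1f us" % (intercept * 1e3, slope * 1e6))
```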
<h1>Model Deployment</h1> Once we have built and trained our models for feature engineering (using Amazon SageMaker Processing and SKLearn) and binary classification (using the XGBoost open-source container for Amazon SageMaker), we can choose to deploy them in a pipeline on Amazon SageMaker Hosting, by creating an Inference Pipeline. https://docs.aws.amazon.com/sagemaker/latest/dg/inference-pipelines.html This notebook demonstrates how to create a pipeline with the SKLearn model for feature engineering and the XGBoost model for binary classification. Let's define the variables first. ``` import sagemaker import sys import IPython # Let's make sure we have the required version of the SM PySDK. required_version = '2.49.2' def versiontuple(v): return tuple(map(int, (v.split(".")))) if versiontuple(sagemaker.__version__) < versiontuple(required_version): !{sys.executable} -m pip install -U sagemaker=={required_version} IPython.Application.instance().kernel.do_shutdown(True) import sagemaker print(sagemaker.__version__) import boto3 role = sagemaker.get_execution_role() region = boto3.Session().region_name sagemaker_session = sagemaker.Session() bucket_name = sagemaker_session.default_bucket() prefix = 'endtoendmlsm' print(region) print(role) print(bucket_name) ``` ## Retrieve model artifacts First, we need to create two Amazon SageMaker **Model** objects, which associate the artifacts of training (serialized model artifacts in Amazon S3) to the Docker container used for inference. In order to do that, we need to get the paths to our serialized models in Amazon S3. <ul> <li>For the SKLearn model, in Step 02 (data exploration and feature engineering) we defined the path where the artifacts are saved</li> <li>For the XGBoost model, we need to find the path based on Amazon SageMaker's naming convention. We are going to use a utility function to get the model artifacts of the last training job matching a specific base job name.</li> </ul> ``` from notebook_utilities import get_latest_training_job_name, get_training_job_s3_model_artifacts # SKLearn model artifacts path. sklearn_model_path = 's3://{0}/{1}/output/sklearn/model.tar.gz'.format(bucket_name, prefix) # XGBoost model artifacts path. training_base_job_name = 'end-to-end-ml-sm-xgb' latest_training_job_name = get_latest_training_job_name(training_base_job_name) xgboost_model_path = get_training_job_s3_model_artifacts(latest_training_job_name) print('SKLearn model path: ' + sklearn_model_path) print('XGBoost model path: ' + xgboost_model_path) ``` ## SKLearn Featurizer Model Let's build the SKLearn model. For hosting this model we also provide a custom inference script, that is used to process the inputs and outputs and execute the transform. The inference script is implemented in the `sklearn_source_dir/inference.py` file. The custom script defines: - a custom `input_fn` for pre-processing inference requests. Our input function accepts only CSV input, loads the input in a Pandas dataframe and assigns feature column names to the dataframe - a custom `predict_fn` for running the transform over the inputs - a custom `output_fn` for returning either JSON or CSV - a custom `model_fn` for deserializing the model ``` !pygmentize sklearn_source_dir/inference.py ``` Now, let's create the `SKLearnModel` object, by providing the custom script and S3 model artifacts as input. 
``` import time from sagemaker.sklearn import SKLearnModel code_location = 's3://{0}/{1}/code'.format(bucket_name, prefix) sklearn_model = SKLearnModel(name='end-to-end-ml-sm-skl-model-{0}'.format(str(int(time.time()))), model_data=sklearn_model_path, entry_point='inference.py', source_dir='sklearn_source_dir/', code_location=code_location, role=role, sagemaker_session=sagemaker_session, framework_version='0.20.0', py_version='py3') ``` ## XGBoost Model Similarly to the previous steps, we can create an `XGBoost` model object. Also here, we have to provide a custom inference script. The inference script is implemented in the `xgboost_source_dir/inference.py` file. The custom script defines: - a custom `input_fn` for pre-processing inference requests. This input function is able to handle JSON requests, plus all content types supported by the default XGBoost container. For additional information please visit: https://github.com/aws/sagemaker-xgboost-container/blob/master/src/sagemaker_xgboost_container/encoder.py. The reason for adding the JSON content type is that the container-to-container default request content type in an inference pipeline is JSON. - a custom `model_fn` for deserializing the model ``` !pygmentize xgboost_source_dir/inference.py ``` Now, let's create the `XGBoostModel` object, by providing the custom script and S3 model artifacts as input. ``` import time from sagemaker.xgboost import XGBoostModel code_location = 's3://{0}/{1}/code'.format(bucket_name, prefix) xgboost_model = XGBoostModel(name='end-to-end-ml-sm-xgb-model-{0}'.format(str(int(time.time()))), model_data=xgboost_model_path, entry_point='inference.py', source_dir='xgboost_source_dir/', code_location=code_location, framework_version='0.90-2', py_version='py3', role=role, sagemaker_session=sagemaker_session) ``` ## Pipeline Model Once we have models ready, we can deploy them in a pipeline, by building a `PipelineModel` object and calling the `deploy()` method. ``` import sagemaker import time from sagemaker.pipeline import PipelineModel pipeline_model_name = 'end-to-end-ml-sm-xgb-skl-pipeline-{0}'.format(str(int(time.time()))) pipeline_model = PipelineModel( name=pipeline_model_name, role=role, models=[ sklearn_model, xgboost_model], sagemaker_session=sagemaker_session) endpoint_name = 'end-to-end-ml-sm-pipeline-endpoint-{0}'.format(str(int(time.time()))) print(endpoint_name) pipeline_model.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge', endpoint_name=endpoint_name) ``` <span style="color: red; font-weight:bold">Please take note of the endpoint name, since it will be used in the next workshop module.</span> ## Getting inferences Finally we can try invoking our pipeline of models and get some inferences: ``` from sagemaker.serializers import CSVSerializer from sagemaker.deserializers import JSONDeserializer from sagemaker.predictor import Predictor predictor = Predictor( endpoint_name=endpoint_name, sagemaker_session=sagemaker_session, serializer=CSVSerializer(), deserializer=JSONDeserializer()) #'Type', 'Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]' payload = "L,298.4,308.2,1582,70.7,216" print(predictor.predict(payload)) payload = "M,298.4,308.2,1582,30.2,214" print(predictor.predict(payload)) payload = "L,298.4,308.2,30,70.7,216" print(predictor.predict(payload)) #predictor.delete_endpoint() ``` Once we have tested the endpoint, we can move to the next workshop module. 
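Before moving on, note that the deployed pipeline endpoint can also be invoked without the SageMaker Python SDK, which is essentially what the Lambda function in the next module will do. A minimal sketch using boto3, assuming the `endpoint_name` variable defined above:

```
import boto3

runtime = boto3.client('sagemaker-runtime')

response = runtime.invoke_endpoint(
    EndpointName=endpoint_name,          # defined when the pipeline model was deployed
    ContentType='text/csv',              # the SKLearn container's input_fn expects CSV
    Accept='application/json',           # the XGBoost container returns JSON
    Body='L,298.4,308.2,1582,70.7,216')

print(response['Body'].read().decode('utf-8'))
```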
Please access the module <a href="https://github.com/aws-samples/amazon-sagemaker-build-train-deploy/tree/master/05_API_Gateway_and_Lambda" target="_blank">05_API_Gateway_and_Lambda</a> on GitHub to continue.
true
code
0.285459
null
null
null
null
Let's design a LNA using Infineon's BFU520 transistor. First we need to import scikit-rf and a bunch of other utilities: ``` import numpy as np import skrf from skrf.media import DistributedCircuit import skrf.frequency as freq import skrf.network as net import skrf.util import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = [10, 10] f = freq.Frequency(0.4, 2, 101) tem = DistributedCircuit(f, z0=50) # import the scattering parameters/noise data for the transistor bjt = net.Network('BFU520_05V0_010mA_NF_SP.s2p').interpolate(f) bjt ``` Let's plot the smith chart for it: ``` bjt.plot_s_smith() ``` Now let's calculate the source and load stablity curves. I'm slightly misusing the `Network` type to plot the curves; normally the curves you pass in to `Network` should be a function of frequency, but it also works to draw these circles as long as you don't try to use any other functions on them ``` sqabs = lambda x: np.square(np.absolute(x)) delta = bjt.s11.s*bjt.s22.s - bjt.s12.s*bjt.s21.s rl = np.absolute((bjt.s12.s * bjt.s21.s)/(sqabs(bjt.s22.s) - sqabs(delta))) cl = np.conj(bjt.s22.s - delta*np.conj(bjt.s11.s))/(sqabs(bjt.s22.s) - sqabs(delta)) rs = np.absolute((bjt.s12.s * bjt.s21.s)/(sqabs(bjt.s11.s) - sqabs(delta))) cs = np.conj(bjt.s11.s - delta*np.conj(bjt.s22.s))/(sqabs(bjt.s11.s) - sqabs(delta)) def calc_circle(c, r): theta = np.linspace(0, 2*np.pi, 1000) return c + r*np.exp(1.0j*theta) for i, f in enumerate(bjt.f): # decimate it a little if i % 100 != 0: continue n = net.Network(name=str(f/1.e+9), s=calc_circle(cs[i][0, 0], rs[i][0, 0])) n.plot_s_smith() for i, f in enumerate(bjt.f): # decimate it a little if i % 100 != 0: continue n = net.Network(name=str(f/1.e+9), s=calc_circle(cl[i][0, 0], rl[i][0, 0])) n.plot_s_smith() ``` So we can see that we need to avoid inductive loads near short circuit in the input matching network and high impedance inductive loads on the output. Let's draw some constant noise circles. First we grab the noise parameters for our target frequency from the network model: ``` idx_915mhz = skrf.util.find_nearest_index(bjt.f, 915.e+6) # we need the normalized equivalent noise and optimum source coefficient to calculate the constant noise circles rn = bjt.rn[idx_915mhz]/50 gamma_opt = bjt.g_opt[idx_915mhz] fmin = bjt.nfmin[idx_915mhz] for nf_added in [0, 0.1, 0.2, 0.5]: nf = 10**(nf_added/10) * fmin N = (nf - fmin)*abs(1+gamma_opt)**2/(4*rn) c_n = gamma_opt/(1+N) r_n = 1/(1-N)*np.sqrt(N**2 + N*(1-abs(gamma_opt)**2)) n = net.Network(name=str(nf_added), s=calc_circle(c_n, r_n)) n.plot_s_smith() print("the optimum source reflection coefficient is ", gamma_opt) ``` So we can see from the chart that just leaving the input at 50 ohms gets us under 0.1 dB of extra noise, which seems pretty good. I'm actually not sure that these actually correspond to the noise figure level increments I have listed up there, but the circles should at least correspond to increasing noise figures So let's leave the input at 50 ohms and figure out how to match the output network to maximize gain and stability. 
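Before picking a load point, it is also worth quantifying stability with the Rollett K factor: if K > 1 and |Δ| < 1 at the design frequency the device is unconditionally stable, otherwise the stability circles above tell us which source and load regions to avoid. A quick sketch reusing the `sqabs`, `delta`, and `idx_915mhz` definitions from above:

```
# Rollett stability factor K = (1 - |S11|^2 - |S22|^2 + |delta|^2) / (2 |S12 S21|)
K = (1 - sqabs(bjt.s11.s) - sqabs(bjt.s22.s) + sqabs(delta)) / \
    (2 * np.absolute(bjt.s12.s * bjt.s21.s))

print("K at 915 MHz:      ", K[idx_915mhz, 0, 0])
print("|delta| at 915 MHz:", np.absolute(delta)[idx_915mhz, 0, 0])
```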
Let's see what matching the load impedance with an unmatched input gives us: ``` gamma_s = 0.0 gamma_l = np.conj(bjt.s22.s - bjt.s21.s*gamma_s*bjt.s12.s/(1-bjt.s11.s*gamma_s)) gamma_l = gamma_l[idx_915mhz, 0, 0] is_gamma_l_stable = np.absolute(gamma_l - cl[idx_915mhz]) > rl[idx_915mhz] gamma_l, is_gamma_l_stable ``` This looks like it may be kind of close to the load instability circles, so it might make sense to pick a load point with less gain for more stability, or to pick a different source impedance with more noise. But for now let's just build a matching network for this and see how it performs: ``` def calc_matching_network_vals(z1, z2): flipped = np.real(z1) < np.real(z2) if flipped: z2, z1 = z1, z2 # cancel out the imaginary parts of both input and output impedances z1_par = 0.0 if abs(np.imag(z1)) > 1e-6: # parallel something to cancel out the imaginary part of # z1's impedance z1_par = 1/(-1j*np.imag(1/z1)) z1 = 1/(1./z1 + 1/z1_par) z2_ser = 0.0 if abs(np.imag(z2)) > 1e-6: z2_ser = -1j*np.imag(z2) z2 = z2 + z2_ser Q = np.sqrt((np.real(z1) - np.real(z2))/np.real(z2)) x1 = -1.j * np.real(z1)/Q x2 = 1.j * np.real(z2)*Q x1_tot = 1/(1/z1_par + 1/x1) x2_tot = z2_ser + x2 if flipped: return x2_tot, x1_tot else: return x1_tot, x2_tot z_l = net.s2z(np.array([[[gamma_l]]]))[0,0,0] # note that we're matching against the conjugate; # this is because we want to see z_l from the BJT side # if we plugged in z the matching network would make # the 50 ohms look like np.conj(z) to match against it, so # we use np.conj(z_l) so that it'll look like z_l from the BJT's side z_par, z_ser = calc_matching_network_vals(np.conj(z_l), 50) z_l, z_par, z_ser ``` Let's calculate what the component values are: ``` c_par = np.real(1/(2j*np.pi*915e+6*z_par)) l_ser = np.real(z_ser/(2j*np.pi*915e+6)) c_par, l_ser ``` The capacitance is kind of low but the inductance seems reasonable. Let's test it out: ``` output_network = tem.shunt_capacitor(c_par) ** tem.inductor(l_ser) amplifier = bjt ** output_network amplifier.plot_s_smith() ``` That looks pretty reasonable; let's take a look at the S21 to see what we got: ``` amplifier.s21.plot_s_db() ``` So about 18 dB gain; let's see what our noise figure is: ``` 10*np.log10(amplifier.nf(50.)[idx_915mhz]) ``` So 0.96 dB NF, which is reasonably close to the BJT tombstone optimal NF of 0.95 dB
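As one last sanity check, it can be useful to look at the input and output match of the finished amplifier as well. A short sketch using the same `amplifier` network as above:

```
# Input and output return loss of the amplifier (BJT plus output matching network).
amplifier.s11.plot_s_db()
amplifier.s22.plot_s_db()
```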
true
code
0.435481
null
null
null
null
- The `binom` class in SciPy's `stats` subpackage implements the binomial distribution. Its parameters are set with the `n` and `p` arguments.

```
N = 10
theta = 0.6
rv = sp.stats.binom(N, theta)
rv
```

- The `pmf` method computes the probability mass function (pmf).

```
%matplotlib inline
xx = np.arange(N + 1)
plt.bar(xx, rv.pmf(xx), align='center')
plt.ylabel('p(x)')
plt.title('binomial pmf')
plt.show()
```

- To simulate, use the `rvs` method.

```
np.random.seed(0)
x = rv.rvs(100)
x

sns.countplot(x)
plt.title("Binomial Distribution's Simulation")
plt.xlabel('Sample')
plt.show()
```

- To show the theoretical distribution and the sample distribution together, use the following code.

```
y = np.bincount(x, minlength=N+1)/float(len(x))
df = pd.DataFrame({'Theory': rv.pmf(xx), 'simulation': y}).stack()
df = df.reset_index()
df.columns = ['values', 'type', 'ratio']
df.pivot('values', 'type', 'ratio')
df

sns.barplot(x='values', y='ratio', hue='type', data=df)
plt.show()
```

#### Exercise 1

- For each of the following binomial distribution parameters, generate samples, compute the expectation and variance, and draw a count plot compared against the probability mass function, as in the example above.
- Do the above calculation for 10 samples and for 1000 samples.
- 1. Theta = 0.5, N = 5
- 2. Theta = 0.9, N = 20

```
# Exercise 1 - 1
N = 5
theta = 0.5
rv = sp.stats.binom(N, theta)

xx10 = np.arange(N + 1)
plt.bar(xx10, rv.pmf(xx10), align='center')
plt.ylabel('P(x)')
plt.title('Binomial Distribution pmf')
plt.show()

# when the number of samples is 10
np.random.seed(0)
x10 = rv.rvs(10)
sns.countplot(x10)
plt.title('binomial distribution Simulation 10')
plt.xlabel('values')
plt.show()

# when the number of samples is 1000
x1000 = rv.rvs(1000)
sns.countplot(x1000)
plt.title('binomial distribution Simulation 1000')
plt.xlabel('values')
plt.show()

y10 = np.bincount(x10, minlength = N + 1)/float(len(x10))
df = pd.DataFrame({'Theory': rv.pmf(xx10), 'Simulation': y10}).stack()
df = df.reset_index()
df.columns = ['values', 'type', 'ratio']
df.pivot('values', 'type', 'ratio')

sns.barplot(x='values', y='ratio', hue='type', data=df)
plt.show()

df
```

#### With 1000 samples, theta = 0.9, N = 20

```
N = 20
theta = 0.9
rv = sp.stats.binom(N, theta)

xx = np.arange(N + 1)
plt.bar(xx, rv.pmf(xx), align = 'center')
plt.ylabel('P(x)')
plt.title('binomial pmf when N=20')
plt.show()

x1000 = rv.rvs(1000)  # generate 1000 samples
sns.countplot(x1000)
plt.title("Binomial Distribution's Simulation")
plt.xlabel('values')
plt.show()

y1000 = np.bincount(x1000, minlength = N + 1)/float(len(x1000))
df = pd.DataFrame({'Theory':rv.pmf(xx), 'Simulation': y1000}).stack()
df = df.reset_index()
df.columns = ['values', 'type', 'ratio']
df.pivot('values', 'type', 'ratio')
df

sns.barplot(x='values', y='ratio', hue='type', data=df)
plt.show()
```
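The exercise also asks for the expectation and variance, which the cells above do not compute. A minimal sketch comparing the theoretical moments with the sample statistics, assuming the same `sp` and `np` imports used above:

```
N, theta = 5, 0.5
rv = sp.stats.binom(N, theta)

np.random.seed(0)
x10 = rv.rvs(10)
x1000 = rv.rvs(1000)

print("theory      : E[X] = {}, Var[X] = {}".format(rv.mean(), rv.var()))
print("10 samples  : mean = {}, var = {}".format(x10.mean(), x10.var()))
print("1000 samples: mean = {}, var = {}".format(x1000.mean(), x1000.var()))
```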
true
code
0.624666
null
null
null
null
<table style="float:left; border:none"> <tr style="border:none; background-color: #ffffff"> <td style="border:none"> <a href="http://bokeh.pydata.org/"> <img src="assets/bokeh-transparent.png" style="width:50px" > </a> </td> <td style="border:none"> <h1>Bokeh Tutorial</h1> </td> </tr> </table> <div style="float:right;"><h2>07. Exporting and Embedding</h2></div> So far we have seen how to generate interactive Bokeh output directly inline in Jupyter notbeooks. It also possible to embed interactive Bokeh plots and layouts in other contexts, such as standalone HTML files, or Jinja templates. Additionally, Bokeh can export plots to static (non-interactive) PNG and SVG formats. We will look at all of these possibilities in this chapter. First we make the usual imports. ``` from bokeh.io import output_notebook, show output_notebook() ``` And also load some data that will be used throughout this chapter ``` import pandas as pd from bokeh.plotting import figure from bokeh.sampledata.stocks import AAPL df = pd.DataFrame(AAPL) df['date'] = pd.to_datetime(df['date']) ``` # Embedding Interactive Content To start we will look differnet ways of embedding live interactive Bokeh output in various situations. ## Displaying in the Notebook The first way to embed Bokeh output is in the Jupyter Notebooks, as we have already, seen. As a reminder, the cell below will generate a plot inline as output, because we executed `output_notebook` above. ``` p = figure(plot_width=800, plot_height=250, x_axis_type="datetime") p.line(df['date'], df['close'], color='navy', alpha=0.5) show(p) ``` ## Saving to an HTML File It is also often useful to generate a standalone HTML script containing Bokeh content. This is accomplished by calling the `output_file(...)` function. It is especially common to do this from standard Python scripts, but here we see that it works in the notebook as well. ``` from bokeh.io import output_file, show output_file("plot.html") show(p) # save(p) will save without opening a new browser tab ``` In addition the inline plot above, you should also have seen a new browser tab open with the contents of the newly saved "plot.html" file. It is important to note that `output_file` initiates a *persistent mode of operation*. That is, all subsequent calls to show will generate output to the specified file. We can "reset" where output will go by calling `reset_output`: ``` from bokeh.io import reset_output reset_output() ``` ## Templating in HTML Documents Another use case is to embed Bokeh content in a Jinja HTML template. We will look at a simple explicit case first, and then see how this technique might be used in a web app framework such as Flask. The simplest way to embed standalone (i.e. not Bokeh server) content is to use the `components` function. This function takes a Bokeh object, and returns a `<script>` tag and `<div>` tag that can be put in any HTML tempate. The script will eecute and load the Bokeh content into the associated div. The cells below show a complete example, including loading BokehJS JS and CSS resources in the temlpate. ``` import jinja2 from bokeh.embed import components # IMPORTANT NOTE!! The version of BokehJS loaded in the template should match # the version of Bokeh installed locally. 
template = jinja2.Template(""" <!DOCTYPE html> <html lang="en-US"> <link href="http://cdn.pydata.org/bokeh/dev/bokeh-0.13.0.min.css" rel="stylesheet" type="text/css" > <script src="http://cdn.pydata.org/bokeh/dev/bokeh-0.13.0.min.js" ></script> <body> <h1>Hello Bokeh!</h1> <p> Below is a simple plot of stock closing prices </p> {{ script }} {{ div }} </body> </html> """) p = figure(plot_width=800, plot_height=250, x_axis_type="datetime") p.line(df['date'], df['close'], color='navy', alpha=0.5) script, div = components(p) from IPython.display import HTML HTML(template.render(script=script, div=div)) ``` Note that it is possible to pass multiple objects to a single call to `components`, in order to template multiple Bokeh objects at once. See the [User's Guide for components](https://bokeh.pydata.org/en/latest/docs/user_guide/embed.html#components) for more information. Once we have the script and div from `components`, it is straighforward to serve a rendered page containing Bokeh content in a web application, e.g. a Flask app as shown below. ``` from flask import Flask app = Flask(__name__) @app.route('/') def hello_bokeh(): return template.render(script=script, div=div) # Uncomment to run the Flask Server. Use Kernel -> Interrupt from Notebook menubar to stop #app.run(port=5050) # EXERCISE: Create your own template (or modify the one above) ``` # Exporting Static Images Sometimes it is desirable to produce static images of plots or other Bokeh output, without any interactive capabilities. Bokeh supports exports to PNG and SVG formats. ## PNG Export Bokeh supports exporting a plot or layout to PNG image format with the `export_png` function. This function is alled with a Bokeh object to export, and a filename to write the PNG output to. Often the Bokeh object passed to `export_png` is a single plot, but it need not be. If a layout is exported, the entire lahyout is saved to one PNG image. ***Important Note:*** *the PNG export capability requires installing some additional optional dependencies. The simplest way to obtain them is via conda:* conda install selenium phantomjs pillow ``` from bokeh.io import export_png p = figure(plot_width=800, plot_height=250, x_axis_type="datetime") p.line(df['date'], df['close'], color='navy', alpha=0.5) export_png(p, filename="plot.png") from IPython.display import Image Image('plot.png') # EXERCISE: Save a layout of plots (e.g. row or column) as SVG and see what happens ``` ## SVG Export Bokeh can also generate SVG output in the browser, instead of rendering to HTML canvas. This is accomplished by setting `output_backend='svg'` on a figure. This can be be used to generate SVGs in `output_file` HTML files, or in content emebdded with `components`. It can also be used with the `export_svgs` function to save `.svg` files. Note that an SVG is created for *each canvas*. It is not possible to capture entire layouts or widgets in SVG output. ***Important Note:*** *There a currently some known issue with SVG output, it may not work for all use-cases* ``` from bokeh.io import export_svgs p = figure(plot_width=800, plot_height=250, x_axis_type="datetime", output_backend='svg') p.line(df['date'], df['close'], color='navy', alpha=0.5) export_svgs(p, filename="plot.svg") from IPython.display import SVG SVG('plot.svg') # EXERCISE: Save a layout of plots (e.g. row or column) as SVG and see what happens ```
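Returning to embedding for a moment: when a full standalone page is wanted rather than a script/div pair, `bokeh.embed.file_html` renders a complete HTML document in one call. A small sketch using CDN resources instead of a hand-written template:

```
from bokeh.embed import file_html
from bokeh.resources import CDN
from bokeh.plotting import figure

p = figure(plot_width=800, plot_height=250, x_axis_type="datetime")
p.line(df['date'], df['close'], color='navy', alpha=0.5)

# Returns a complete HTML document as a string, with BokehJS loaded from the CDN.
html = file_html(p, CDN, "AAPL closing prices")

with open("standalone_plot.html", "w") as f:
    f.write(html)
```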
true
code
0.408247
null
null
null
null
``` %matplotlib inline ``` # Demo Axes Grid Grid of 2x2 images with single or own colorbar. ``` import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import ImageGrid plt.rcParams["mpl_toolkits.legacy_colorbar"] = False def get_demo_image(): import numpy as np from matplotlib.cbook import get_sample_data f = get_sample_data("axes_grid/bivariate_normal.npy", asfileobj=False) z = np.load(f) # z is a numpy array of 15x15 return z, (-3, 4, -4, 3) def demo_simple_grid(fig): """ A grid of 2x2 images with 0.05 inch pad between images and only the lower-left axes is labeled. """ grid = ImageGrid(fig, 141, # similar to subplot(141) nrows_ncols=(2, 2), axes_pad=0.05, label_mode="1", ) Z, extent = get_demo_image() for ax in grid: ax.imshow(Z, extent=extent, interpolation="nearest") # This only affects axes in first column and second row as share_all=False. grid.axes_llc.set_xticks([-2, 0, 2]) grid.axes_llc.set_yticks([-2, 0, 2]) def demo_grid_with_single_cbar(fig): """ A grid of 2x2 images with a single colorbar """ grid = ImageGrid(fig, 142, # similar to subplot(142) nrows_ncols=(2, 2), axes_pad=0.0, share_all=True, label_mode="L", cbar_location="top", cbar_mode="single", ) Z, extent = get_demo_image() for ax in grid: im = ax.imshow(Z, extent=extent, interpolation="nearest") grid.cbar_axes[0].colorbar(im) for cax in grid.cbar_axes: cax.toggle_label(False) # This affects all axes as share_all = True. grid.axes_llc.set_xticks([-2, 0, 2]) grid.axes_llc.set_yticks([-2, 0, 2]) def demo_grid_with_each_cbar(fig): """ A grid of 2x2 images. Each image has its own colorbar. """ grid = ImageGrid(fig, 143, # similar to subplot(143) nrows_ncols=(2, 2), axes_pad=0.1, label_mode="1", share_all=True, cbar_location="top", cbar_mode="each", cbar_size="7%", cbar_pad="2%", ) Z, extent = get_demo_image() for ax, cax in zip(grid, grid.cbar_axes): im = ax.imshow(Z, extent=extent, interpolation="nearest") cax.colorbar(im) cax.toggle_label(False) # This affects all axes because we set share_all = True. grid.axes_llc.set_xticks([-2, 0, 2]) grid.axes_llc.set_yticks([-2, 0, 2]) def demo_grid_with_each_cbar_labelled(fig): """ A grid of 2x2 images. Each image has its own colorbar. """ grid = ImageGrid(fig, 144, # similar to subplot(144) nrows_ncols=(2, 2), axes_pad=(0.45, 0.15), label_mode="1", share_all=True, cbar_location="right", cbar_mode="each", cbar_size="7%", cbar_pad="2%", ) Z, extent = get_demo_image() # Use a different colorbar range every time limits = ((0, 1), (-2, 2), (-1.7, 1.4), (-1.5, 1)) for ax, cax, vlim in zip(grid, grid.cbar_axes, limits): im = ax.imshow(Z, extent=extent, interpolation="nearest", vmin=vlim[0], vmax=vlim[1]) cb = cax.colorbar(im) cb.set_ticks((vlim[0], vlim[1])) # This affects all axes because we set share_all = True. grid.axes_llc.set_xticks([-2, 0, 2]) grid.axes_llc.set_yticks([-2, 0, 2]) fig = plt.figure(figsize=(10.5, 2.5)) fig.subplots_adjust(left=0.05, right=0.95) demo_simple_grid(fig) demo_grid_with_single_cbar(fig) demo_grid_with_each_cbar(fig) demo_grid_with_each_cbar_labelled(fig) plt.show() ```
true
code
0.756166
null
null
null
null
### Generator States Let's look at a simple generator function: ``` def gen(s): for c in s: yield c ``` We create an generator object by calling the generator function: ``` g = gen('abc') ``` At this point the generator object is **created**, but we have not actually started running it. To do so, we call `next()`, which then starts running the function body until the first `yield` is encountered: ``` next(g) ``` Now the generator is **suspended**, waiting for us to call next again: ``` next(g) ``` Every time we call `next`, the generator function runs, or is in a **running** state until the next yield is encountered, or no more results are yielded and the function actually returns: ``` next(g) next(g) ``` Once we exhaust the generator, we get a `StopIteration` exception, and we can think of the generator as being **closed**. As we can see, a generator can be in one of four states: * created * running * suspended * closed We can actually request the state of a generator programmatically by using the `inspect` module's `getgeneratorstate()` function: ``` from inspect import getgeneratorstate g = gen('abc') getgeneratorstate(g) ``` We can start running the generator by calling `next`: ``` next(g) ``` And the state is now: ``` getgeneratorstate(g) ``` Once we exhaust the generator: ``` next(g), next(g), next(g) ``` The generator is now in a closed state: ``` getgeneratorstate(g) ``` Now we haven't seen the running state - to do that we just need to print the state from inside the generator - but to do that we need to have a reference to the generator object itself. This is not that easy to do, so I'm going to cheat and assume that the generator object will be referenced by a global variable `global_gen`: ``` def gen(s): for c in s: print(getgeneratorstate(global_gen)) yield c global_gen = gen('abc') next(global_gen) ``` So a generator can be in these four very distinct states. When the generator is created, it is not in a running or suspended state - it is simply in a **created** state. We have to kick-off, or prime, the generator by calling `next` on it. After the generator has yielded a value, it it is in **suspended** state. Finally, once the generator **returns** (not yields), i.e. the StopIteration is raised, the generator is **closed**. Finally it is really important to understand that when a `yield` is encountered, the generator is suspended **exactly** at that point, but not before it has evaluated the expression to the right of the yield statement so it can produce that value in the return value of the `next()` function. To see this, let's write a simple function and a generator function as follows: ``` def square(i): print(f'squaring {i}') return i ** 2 def squares(n): for i in range(n): yield square(i) print ('right after yield') sq = squares(5) next(sq) ``` As you can see `square(i)` was evaluated, **then** the value was yielded, and the genrator was suspended exactly at the point the `yield` statement was encountered: ``` next(sq) ``` As you can see, only now does the `right after yield` string get printed from our generator.
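One more state transition worth knowing about: a generator does not have to be exhausted to reach the closed state. Calling `close()` raises `GeneratorExit` inside the suspended generator and moves it straight to `GEN_CLOSED`. A small sketch reusing the same `gen` function:

```
from inspect import getgeneratorstate

def gen(s):
    for c in s:
        yield c

g = gen('abc')
next(g)                       # prime the generator: it is now suspended
print(getgeneratorstate(g))   # GEN_SUSPENDED

g.close()                     # GeneratorExit is raised at the yield point
print(getgeneratorstate(g))   # GEN_CLOSED
```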
true
code
0.267887
null
null
null
null
<a href="https://colab.research.google.com/github/mrklees/pgmpy/blob/feature%2Fcausalmodel/examples/Causal_Games.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Causal Games Causal Inference is a new feature for pgmpy, so I wanted to develop a few examples which show off the features that we're developing! This particular notebook walks through the 5 games that used as examples for building intuition about backdoor paths in *The Book of Why* by Judea Peal. I have consistently been using them to test different implementations of backdoor adjustment from different libraries and include them as unit tests in pgmpy, so I wanted to walk through them and a few other related games as a potential resource to both understand the implementation of CausalInference in pgmpy, as well as develope some useful intuitions about backdoor paths. ## Objective of the Games For each game we get a causal graph, and our goal is to identify the set of deconfounders (often denoted $Z$) which will close all backdoor paths from nodes $X$ to $Y$. For the time being, I'll assume that you're familiar with the concept of backdoor paths, though I may expand this portion to explain it. ``` import sys !pip3 install -q daft import matplotlib.pyplot as plt %matplotlib inline import daft from daft import PGM # We can now import the development version of pgmpy from pgmpy.models.BayesianModel import BayesianModel from pgmpy.inference.CausalInference import CausalInference def convert_pgm_to_pgmpy(pgm): """Takes a Daft PGM object and converts it to a pgmpy BayesianModel""" edges = [(edge.node1.name, edge.node2.name) for edge in pgm._edges] model = BayesianModel(edges) return model #@title # Game 1 #@markdown While this is a "trivial" example, many statisticians would consider including either or both A and B in their models "just for good measure". Notice though how controlling for A would close off the path of causal information from X to Y, actually *impeding* your effort to measure that effect. pgm = PGM(shape=[4, 3]) pgm.add_node(daft.Node('X', r"X", 1, 2)) pgm.add_node(daft.Node('Y', r"Y", 3, 2)) pgm.add_node(daft.Node('A', r"A", 2, 2)) pgm.add_node(daft.Node('B', r"B", 2, 1)) pgm.add_edge('X', 'A') pgm.add_edge('A', 'Y') pgm.add_edge('A', 'B') pgm.render() plt.show() #@markdown Notice how there are no nodes with arrows pointing into X. Said another way, X has no parents. Therefore, there can't be any backdoor paths confounding X and Y. pgmpy will confirm this in the following way: game1 = convert_pgm_to_pgmpy(pgm) inference1 = CausalInference(game1) print(f"Are there are active backdoor paths? {not inference1.is_valid_backdoor_adjustment_set('X', 'Y')}") adj_sets = inference1.get_all_backdoor_adjustment_sets("X", "Y") print(f"If so, what's the possible backdoor adjustment sets? {adj_sets}") #@title # Game 2 #@markdown This graph looks harder, but actualy is also trivial to solve. The key is noticing the one backdoor path, which goes from X <- A -> B <- D -> E -> Y, has a collider at B (or a 'V structure'), and therefore the backdoor path is closed. 
pgm = PGM(shape=[4, 4]) pgm.add_node(daft.Node('X', r"X", 1, 1)) pgm.add_node(daft.Node('Y', r"Y", 3, 1)) pgm.add_node(daft.Node('A', r"A", 1, 3)) pgm.add_node(daft.Node('B', r"B", 2, 3)) pgm.add_node(daft.Node('C', r"C", 3, 3)) pgm.add_node(daft.Node('D', r"D", 2, 2)) pgm.add_node(daft.Node('E', r"E", 2, 1)) pgm.add_edge('X', 'E') pgm.add_edge('A', 'X') pgm.add_edge('A', 'B') pgm.add_edge('B', 'C') pgm.add_edge('D', 'B') pgm.add_edge('D', 'E') pgm.add_edge('E', 'Y') pgm.render() plt.show() graph = convert_pgm_to_pgmpy(pgm) inference = CausalInference(graph) print(f"Are there are active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}") adj_sets = inference.get_all_backdoor_adjustment_sets("X", "Y") print(f"If so, what's the possible backdoor adjustment sets? {adj_sets}") #@title # Game 3 #@markdown This game actually requires some action. Notice the backdoor path X <- B -> Y. This is a confounding pattern, is one of the clearest signs that we'll need to control for something, in this case B. pgm = PGM(shape=[4, 4]) pgm.add_node(daft.Node('X', r"X", 1, 1)) pgm.add_node(daft.Node('Y', r"Y", 3, 1)) pgm.add_node(daft.Node('A', r"A", 2, 1.75)) pgm.add_node(daft.Node('B', r"B", 2, 3)) pgm.add_edge('X', 'Y') pgm.add_edge('X', 'A') pgm.add_edge('B', 'A') pgm.add_edge('B', 'X') pgm.add_edge('B', 'Y') pgm.render() plt.show() graph = convert_pgm_to_pgmpy(pgm) inference = CausalInference(graph) print(f"Are there are active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}") adj_sets = inference.get_all_backdoor_adjustment_sets("X", "Y") print(f"If so, what's the possible backdoor adjustment sets? {adj_sets}") #@title # Game 4 #@markdown Pearl named this particular configuration "M Bias", not only because of it's shape, but also because of the common practice of statisticians to want to control for B in many situations. However, notice how in this configuration X and Y start out as *not confounded* and how by controlling for B we would actually introduce confounding by opening the path at the collider, B. pgm = PGM(shape=[4, 4]) pgm.add_node(daft.Node('X', r"X", 1, 1)) pgm.add_node(daft.Node('Y', r"Y", 3, 1)) pgm.add_node(daft.Node('A', r"A", 1, 3)) pgm.add_node(daft.Node('B', r"B", 2, 2)) pgm.add_node(daft.Node('C', r"C", 3, 3)) pgm.add_edge('A', 'X') pgm.add_edge('A', 'B') pgm.add_edge('C', 'B') pgm.add_edge('C', 'Y') pgm.render() plt.show() graph = convert_pgm_to_pgmpy(pgm) inference = CausalInference(graph) print(f"Are there are active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}") adj_sets = inference.get_all_backdoor_adjustment_sets("X", "Y") print(f"If so, what's the possible backdoor adjustment sets? {adj_sets}") #@title # Game 5 #@markdown This is the last game in The Book of Why is the most complex. In this case we have two backdoor paths, one going through A and the other through B, and it's important to notice that if we only control for B that the path: X <- A -> B <- C -> Y (which starts out as closed because B is a collider) actually is opened. Therefore we have to either close both A and B or, as astute observers will notice, we can also just close C and completely close both backdoor paths. pgmpy will nicely confirm these results for us. 
pgm = PGM(shape=[4, 4]) pgm.add_node(daft.Node('X', r"X", 1, 1)) pgm.add_node(daft.Node('Y', r"Y", 3, 1)) pgm.add_node(daft.Node('A', r"A", 1, 3)) pgm.add_node(daft.Node('B', r"B", 2, 2)) pgm.add_node(daft.Node('C', r"C", 3, 3)) pgm.add_edge('A', 'X') pgm.add_edge('A', 'B') pgm.add_edge('C', 'B') pgm.add_edge('C', 'Y') pgm.add_edge("X", "Y") pgm.add_edge("B", "X") pgm.render() plt.show() graph = convert_pgm_to_pgmpy(pgm) inference = CausalInference(graph) print(f"Are there are active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}") adj_sets = inference.get_all_backdoor_adjustment_sets("X", "Y") print(f"If so, what's the possible backdoor adjustment sets? {adj_sets}") #@title # Game 6 #@markdown So these are no longer drawn from The Book of Why, but were either drawn from another source (which I will reference) or a developed to try to induce a specific bug. #@markdown This example is drawn from Causality by Pearl on p. 80. This example is kind of interesting because there are many possible combinations of nodes which will close the two backdoor paths which exist in this graph. In turns out that D plus any other node in {A, B, C, E} will deconfound X and Y. pgm = PGM(shape=[4, 4]) pgm.add_node(daft.Node('X', r"X", 1, 1)) pgm.add_node(daft.Node('Y', r"Y", 3, 1)) pgm.add_node(daft.Node('A', r"A", 1, 3)) pgm.add_node(daft.Node('B', r"B", 3, 3)) pgm.add_node(daft.Node('C', r"C", 1, 2)) pgm.add_node(daft.Node('D', r"D", 2, 2)) pgm.add_node(daft.Node('E', r"E", 3, 2)) pgm.add_node(daft.Node('F', r"F", 2, 1)) pgm.add_edge('X', 'F') pgm.add_edge('F', 'Y') pgm.add_edge('C', 'X') pgm.add_edge('A', 'C') pgm.add_edge('A', 'D') pgm.add_edge('D', 'X') pgm.add_edge('D', 'Y') pgm.add_edge('B', 'D') pgm.add_edge('B', 'E') pgm.add_edge('E', 'Y') pgm.render() plt.show() graph = convert_pgm_to_pgmpy(pgm) inference = CausalInference(graph) print(f"Are there are active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}") bd_adj_sets = inference.get_all_backdoor_adjustment_sets("X", "Y") print(f"If so, what's the possible backdoor adjustment sets? {bd_adj_sets}") fd_adj_sets = inference.get_all_frontdoor_adjustment_sets("X", "Y") print(f"Ehat's the possible front adjustment sets? {fd_adj_sets}") #@title # Game 7 #@markdown This game tests the front door adjustment. B is taken to be unobserved, and therfore we cannot close the backdoor path X <- B -> Y. pgm = PGM(shape=[4, 3]) pgm.add_node(daft.Node('X', r"X", 1, 1)) pgm.add_node(daft.Node('Y', r"Y", 3, 1)) pgm.add_node(daft.Node('A', r"A", 2, 1)) pgm.add_node(daft.Node('B', r"B", 2, 2)) pgm.add_edge('X', 'A') pgm.add_edge('A', 'Y') pgm.add_edge('B', 'X') pgm.add_edge('B', 'Y') pgm.render() plt.show() graph = convert_pgm_to_pgmpy(pgm) inference = CausalInference(graph) print(f"Are there are active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}") bd_adj_sets = inference.get_all_backdoor_adjustment_sets("X", "Y") print(f"If so, what's the possible backdoor adjustment sets? {bd_adj_sets}") fd_adj_sets = inference.get_all_frontdoor_adjustment_sets("X", "Y") print(f"Ehat's the possible front adjustment sets? {fd_adj_sets}") ```
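As a cross-check on the Game 5 discussion above: the claim was that conditioning on C alone closes both backdoor paths, while conditioning on B alone opens one at the collider. A quick sketch, assuming the development-version `is_valid_backdoor_adjustment_set` accepts an explicit adjustment set `Z`:

```
# Rebuild the Game 5 graph directly and test specific adjustment sets.
game5 = BayesianModel([('A', 'X'), ('A', 'B'), ('B', 'X'),
                       ('C', 'B'), ('C', 'Y'), ('X', 'Y')])
inference5 = CausalInference(game5)

print(inference5.is_valid_backdoor_adjustment_set('X', 'Y', Z={'C'}))        # expect True
print(inference5.is_valid_backdoor_adjustment_set('X', 'Y', Z={'B'}))        # expect False
print(inference5.is_valid_backdoor_adjustment_set('X', 'Y', Z={'A', 'B'}))   # expect True
```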
true
code
0.628065
null
null
null
null
# Bounding Box Visualizer ``` try: import cv2 except ImportError: cv2 = None COLORS = [ "#6793be", "#990000", "#00ff00", "#ffbcc9", "#ffb9c7", "#fdc6d1", "#fdc9d3", "#6793be", "#73a4d4", "#9abde0", "#9abde0", "#8fff8f", "#ffcfd8", "#808080", "#808080", "#ffba00", "#6699ff", "#009933", "#1c1c1c", "#08375f", "#116ebf", "#e61d35", "#106bff", "#8f8fff", "#8fff8f", "#dbdbff", "#dbffdb", "#dbffff", "#ffdbdb", "#ffc2c2", "#ffa8a8", "#ff8f8f", "#e85e68", "#123456", "#5cd38c", "#1d1f5f", "#4e4b04", "#495a5b", "#489d73", "#9d4872", "#d49ea6", "#ff0080", "#6793be", "#990000", "#fececf", "#ffbcc9", "#ffb9c7", "#fdc6d1", "#fdc9d3", "#6793be", "#73a4d4", "#9abde0", "#9abde0", "#8fff8f", "#ffcfd8", "#808080", "#808080", "#ffba00", "#6699ff", "#009933", "#1c1c1c", "#08375f", "#116ebf", "#e61d35", "#106bff", "#8f8fff", "#8fff8f", "#dbdbff", "#dbffdb", "#dbffff", "#ffdbdb", "#ffc2c2", "#ffa8a8", "#ff8f8f", "#e85e68", "#123456", "#5cd38c", "#1d1f5f", "#4e4b04", "#495a5b", "#489d73", "#9d4872", "#d49ea6", "#ff0080" ] def hex_to_rgb(color_hex): color_hex = color_hex.lstrip('#') color_rgb = tuple(int(color_hex[i:i+2], 16) for i in (0, 2, 4)) return color_rgb def annotate_image(image, detection): """ Annotate images with object detection results # Arguments: image: numpy array representing the image used for detection detection: `DetectionResult` result from SKIL on the same image # Return value: annotated image as numpy array """ if cv2 is None: raise Exception("OpenCV is not installed.") objects = detection.get('objects') if objects: for detect in objects: confs = detect.get('confidences') max_conf = max(confs) max_index = confs.index(max_conf) classes = detect.get('predictedClasses') max_class = classes[max_index] class_number = detect.get('predictedClassNumbers')[max_index] h = detect.get('height') w = detect.get('width') center_x = detect.get('centerX') center_y = detect.get('centerY') color_hex = COLORS[class_number] b,g,r = hex_to_rgb(color_hex) color_rgb = (r,g,b) # bounding box xmin, ymin = int(center_x - w/2), int(center_y - h/2) xmax, ymax = int(center_x + w/2), int(center_y + h/2) upper = (xmin, ymin) lower = (xmax, ymax) cv2.rectangle(image, lower, upper, color_rgb, thickness=3) # bounding box label: class_name: confidence text = max_class + ": " + str(int(100*max(confs)))+"%" font = cv2.FONT_HERSHEY_SIMPLEX fontScale = 0.7 # get text size size = cv2.getTextSize(text, font, fontScale+0.1, thickness=2) text_width = size[0][0] text_height = size[0][1] # text-box background cv2.rectangle(image, (xmin-2, ymin), (xmin+text_width, ymin-35), color_rgb, thickness=-1) cv2.putText(image, text, (xmin, ymin-10), font, fontScale, color=0, thickness=2) return image import json import matplotlib.pyplot as plt %matplotlib inline with open('detections/img-5.json') as FILE: detections = json.load(FILE) print(json.dumps(detections['objects'][0], indent=4)) image = annotate_image(cv2.imread("images/img-5.jpg"), detections) cv2.imwrite('images/annotated.jpg', image) plt.figure(figsize=(8,8)) plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) plt.show() image.shape for k, detection in enumerate(detections['objects']): predicted = detection['predictedClasses'][0] confidence = detection['confidences'][0] print('{}: [{}, {:.5}]'.format(k+1, predicted, confidence)) len(COLORS) ```
true
code
0.555676
null
null
null
null
# Analyze Data Quality with SageMaker Processing Jobs and Spark Typically a machine learning (ML) process consists of few steps. First, gathering data with various ETL jobs, then pre-processing the data, featurizing the dataset by incorporating standard techniques or prior knowledge, and finally training an ML model using an algorithm. Often, distributed data processing frameworks such as Spark are used to process and analyze data sets in order to detect data quality issues and prepare them for model training. In this notebook we'll use Amazon SageMaker Processing with a library called [**Deequ**](https://github.com/awslabs/deequ), and leverage the power of Spark with a managed SageMaker Processing Job to run our data processing workloads. Here are some great resources on Deequ: * Blog Post: https://aws.amazon.com/blogs/big-data/test-data-quality-at-scale-with-deequ/ * Research Paper: https://assets.amazon.science/4a/75/57047bd343fabc46ec14b34cdb3b/towards-automated-data-quality-management-for-machine-learning.pdf ![Deequ](img/deequ.png) ![](img/processing.jpg) # Amazon Customer Reviews Dataset https://s3.amazonaws.com/amazon-reviews-pds/readme.html ### Dataset Columns: - `marketplace`: 2-letter country code (in this case all "US"). - `customer_id`: Random identifier that can be used to aggregate reviews written by a single author. - `review_id`: A unique ID for the review. - `product_id`: The Amazon Standard Identification Number (ASIN). `http://www.amazon.com/dp/<ASIN>` links to the product's detail page. - `product_parent`: The parent of that ASIN. Multiple ASINs (color or format variations of the same product) can roll up into a single parent. - `product_title`: Title description of the product. - `product_category`: Broad product category that can be used to group reviews (in this case digital videos). - `star_rating`: The review's rating (1 to 5 stars). - `helpful_votes`: Number of helpful votes for the review. - `total_votes`: Number of total votes the review received. - `vine`: Was the review written as part of the [Vine](https://www.amazon.com/gp/vine/help) program? - `verified_purchase`: Was the review from a verified purchase? - `review_headline`: The title of the review itself. - `review_body`: The text of the review. - `review_date`: The date the review was written. 
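The checks themselves live in the Spark script submitted below (`preprocess-deequ.py` / `preprocess-deequ.scala`), which is not reproduced here. Purely as an illustration of the kind of constraints Deequ verifies on this dataset, here is a rough sketch using the PyDeequ bindings. This is not the container-based job run in this notebook; it assumes a local `SparkSession` named `spark` with the Deequ jar on its classpath and the reviews TSV loaded into `df`.

```
from pydeequ.checks import Check, CheckLevel
from pydeequ.verification import VerificationSuite, VerificationResult

check = Check(spark, CheckLevel.Error, "Amazon Customer Reviews check")

result = (VerificationSuite(spark)
          .onData(df)
          .addCheck(check
                    .hasSize(lambda rows: rows >= 1000)          # enough records to work with
                    .isComplete("review_id")                     # no missing review IDs
                    .isUnique("review_id")                       # review IDs are unique
                    .isComplete("marketplace")
                    .isContainedIn("marketplace",
                                   ["US", "UK", "DE", "JP", "FR"])
                    .isNonNegative("helpful_votes")
                    .isNonNegative("total_votes"))
          .run())

VerificationResult.checkResultsAsDataFrame(spark, result).show(truncate=False)
```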
``` ingest_create_athena_table_tsv = False %store -r ingest_create_athena_table_tsv if not ingest_create_athena_table_tsv: print('+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++') print('[ERROR] YOU HAVE TO RUN THE NOTEBOOKS IN THE INGEST FOLDER FIRST.') print('+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++') else: print('[OK]') import sagemaker sagemaker_session = sagemaker.Session() role = sagemaker.get_execution_role() bucket = sagemaker_session.default_bucket() ``` # Pull the Spark-Deequ Docker Image ``` public_image_uri='docker.io/datascienceonaws/spark-deequ:1.0.0' !docker pull $public_image_uri ``` # Push the Image to a Private Docker Repo ``` private_docker_repo = 'spark-deequ' private_docker_tag = '1.0.0' import boto3 account_id = boto3.client('sts').get_caller_identity().get('Account') region = boto3.session.Session().region_name private_image_uri = '{}.dkr.ecr.{}.amazonaws.com/{}:{}'.format(account_id, region, private_docker_repo, private_docker_tag) print(private_image_uri) !docker tag $public_image_uri $private_image_uri !$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email) ``` # Ignore `spark-deequ does not exist` error below ``` !aws ecr describe-repositories --repository-names $private_docker_repo || aws ecr create-repository --repository-name $private_docker_repo ``` # Ignore ^^ `spark-deequ does not exist` ^^ error above ``` !docker push $private_image_uri ``` # Run the Analysis Job using a SageMaker Processing Job Next, use the Amazon SageMaker Python SDK to submit a processing job. Use the Spark container that was just built with our Spark script. # Review the Spark preprocessing script. ``` !pygmentize preprocess-deequ.py !pygmentize preprocess-deequ.scala from sagemaker.processing import ScriptProcessor processor = ScriptProcessor(base_job_name='spark-amazon-reviews-analyzer', image_uri=private_image_uri, command=['/opt/program/submit'], role=role, instance_count=2, # instance_count needs to be > 1 or you will see the following error: "INFO yarn.Client: Application report for application_ (state: ACCEPTED)" instance_type='ml.r5.2xlarge', env={ 'mode': 'jar', 'main_class': 'Main' }) s3_input_data = 's3://{}/amazon-reviews-pds/tsv/'.format(bucket) print(s3_input_data) !aws s3 ls $s3_input_data ``` ## Setup Output Data ``` from time import gmtime, strftime timestamp_prefix = strftime("%Y-%m-%d-%H-%M-%S", gmtime()) output_prefix = 'amazon-reviews-spark-analyzer-{}'.format(timestamp_prefix) processing_job_name = 'amazon-reviews-spark-analyzer-{}'.format(timestamp_prefix) print('Processing job name: {}'.format(processing_job_name)) s3_output_analyze_data = 's3://{}/{}/output'.format(bucket, output_prefix) print(s3_output_analyze_data) ``` ## Start the Spark Processing Job _Notes on Invoking from Lambda:_ * However, if we use the boto3 SDK (ie. with a Lambda), we need to copy the `preprocess.py` file to S3 and specify the everything include --py-files, etc. * We would need to do the following before invoking the Lambda: !aws s3 cp preprocess.py s3://<location>/sagemaker/spark-preprocess-reviews-demo/code/preprocess.py !aws s3 cp preprocess.py s3://<location>/sagemaker/spark-preprocess-reviews-demo/py_files/preprocess.py * Then reference the s3://<location> above in the --py-files, etc. * See Lambda example code in this same project for more details. 
_Notes on not using ProcessingInput and Output:_ * Since Spark natively reads/writes from/to S3 using s3a://, we can avoid the copy required by ProcessingInput and ProcessingOutput (FullyReplicated or ShardedByS3Key) and just specify the S3 input and output buckets/prefixes._" * See https://github.com/awslabs/amazon-sagemaker-examples/issues/994 for issues related to using /opt/ml/processing/input/ and output/ * If we use ProcessingInput, the data will be copied to each node (which we don't want in this case since Spark already handles this) ``` from sagemaker.processing import ProcessingOutput processor.run(code='preprocess-deequ.py', arguments=['s3_input_data', s3_input_data, 's3_output_analyze_data', s3_output_analyze_data, ], # See https://github.com/aws/sagemaker-python-sdk/issues/1341 # for why we need to specify a null-output outputs=[ ProcessingOutput(s3_upload_mode='EndOfJob', output_name='null-output', source='/opt/ml/processing/output') ], logs=True, wait=False ) from IPython.core.display import display, HTML processing_job_name = processor.jobs[-1].describe()['ProcessingJobName'] display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/processing-jobs/{}">Processing Job</a></b>'.format(region, processing_job_name))) from IPython.core.display import display, HTML processing_job_name = processor.jobs[-1].describe()['ProcessingJobName'] display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/ProcessingJobs;prefix={};streamFilter=typeLogStreamPrefix">CloudWatch Logs</a> After a Few Minutes</b>'.format(region, processing_job_name))) from IPython.core.display import display, HTML s3_job_output_prefix = output_prefix display(HTML('<b>Review <a target="blank" href="https://s3.console.aws.amazon.com/s3/buckets/{}/{}/?region={}&tab=overview">S3 Output Data</a> After The Spark Job Has Completed</b>'.format(bucket, s3_job_output_prefix, region))) ``` # Monitor the Processing Job ``` running_processor = sagemaker.processing.ProcessingJob.from_processing_name(processing_job_name=processing_job_name, sagemaker_session=sagemaker_session) processing_job_description = running_processor.describe() print(processing_job_description) running_processor.wait() ``` # _Please Wait Until the ^^ Processing Job ^^ Completes Above._ # Inspect the Processed Output ## These are the quality checks on our dataset. 
## _The next cells will not work properly until the job completes above._ ``` !aws s3 ls --recursive $s3_output_analyze_data/ ``` ## Copy the Output from S3 to Local * dataset-metrics/ * constraint-checks/ * success-metrics/ * constraint-suggestions/ ``` !aws s3 cp --recursive $s3_output_analyze_data ./amazon-reviews-spark-analyzer/ --exclude="*" --include="*.csv" ``` ## Analyze Constraint Checks ``` import glob import pandas as pd import os def load_dataset(path, sep, header): data = pd.concat([pd.read_csv(f, sep=sep, header=header) for f in glob.glob('{}/*.csv'.format(path))], ignore_index = True) return data df_constraint_checks = load_dataset(path='./amazon-reviews-spark-analyzer/constraint-checks/', sep='\t', header=0) df_constraint_checks[['check', 'constraint', 'constraint_status', 'constraint_message']] ``` ## Analyze Dataset Metrics ``` df_dataset_metrics = load_dataset(path='./amazon-reviews-spark-analyzer/dataset-metrics/', sep='\t', header=0) df_dataset_metrics ``` ## Analyze Success Metrics ``` df_success_metrics = load_dataset(path='./amazon-reviews-spark-analyzer/success-metrics/', sep='\t', header=0) df_success_metrics ``` ## Analyze Constraint Suggestions ``` df_constraint_suggestions = load_dataset(path='./amazon-reviews-spark-analyzer/constraint-suggestions/', sep='\t', header=0) df_constraint_suggestions.columns=['column_name', 'description', 'code'] df_constraint_suggestions ``` # Save for the Next Notebook(s) ``` %store df_dataset_metrics %%javascript Jupyter.notebook.save_checkpoint(); Jupyter.notebook.session.delete(); ```
true
code
0.258888
null
null
null
null
### DemIntro02: # Rational Expectations Agricultural Market Model #### Preliminary task: Load required modules ``` from compecon.quad import qnwlogn from compecon.tools import discmoments import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set_style('dark') %matplotlib notebook ``` Generate yield distribution ``` sigma2 = 0.2 ** 2 y, w = qnwlogn(25, -0.5 * sigma2, sigma2) ``` ## Compute rational expectations equilibrium using function iteration, iterating on acreage planted ``` A = lambda aa, pp: 0.5 + 0.5 * np.dot(w, np.maximum(1.5 - 0.5 * aa * y, pp)) ptarg = 1 a = 1 print('{:^6} {:^10} {:^10}\n{}'.format('iter', 'a', "|a' - a|",'-' * 27)) for it in range(50): aold = a a = A(a, ptarg) print('{:^6} {:^10.4f} {:^10.1e}'.format(it, a, np.linalg.norm(a - aold))) if np.linalg.norm(a - aold) < 1.e-8: break ``` Intermediate outputs ``` q = a * y # quantity produced in each state p = 1.5 - 0.5 * a * y # market price in each state f = np.maximum(p, ptarg) # farm price in each state r = f * q # farm revenue in each state g = (f - p) * q #government expenditures ``` Print results ``` varnames = ['Market Price', 'Farm Price', 'Farm Revenue', 'Government Expenditures'] xavg, xstd = discmoments(w, np.vstack((p, f, r, g))) print('\n{:^24} {:^8} {:^8}\n{}'.format('Variable', 'Expect', 'Std Dev','-'*42)) for varname, av, sd in zip(varnames, xavg, xstd): print('{:24} {:8.4f} {:8.4f}'.format(varname, av, sd)) ``` ## Generate fixed-point mapping ``` aeq = a a = np.linspace(0, 2, 100) g = np.array([A(k, ptarg) for k in a]) ``` Graph rational expectations equilibrium ``` fig1 = plt.figure(figsize=[6, 6]) ax = fig1.add_subplot(111, title='Rational expectations equilibrium', aspect=1, xlabel='Acreage Planted', xticks=[0, aeq, 2], xticklabels=['0', '$a^{*}$', '2'], ylabel='Rational Acreage Planted', yticks=[0, aeq, 2],yticklabels=['0', '$a^{*}$', '2']) ax.plot(a, g, 'b', linewidth=4) ax.plot(a, a, ':', color='grey', linewidth=2) ax.plot([0, aeq, aeq], [aeq, aeq, 0], 'r--', linewidth=3) ax.plot([aeq], [aeq], 'ro', markersize=12) ax.text(0.05, 0, '45${}^o$', color='grey') ax.text(1.85, aeq - 0.15,'$g(a)$', color='blue') fig1.show() ``` ## Compute rational expectations equilibrium as a function of the target price ``` nplot = 50 ptarg = np.linspace(0, 2, nplot) a = 1 Ep, Ef, Er, Eg, Sp, Sf, Sr, Sg = (np.empty(nplot) for k in range(8)) for ip in range(nplot): for it in range(50): aold = a a = A(a, ptarg[ip]) if np.linalg.norm((a - aold) < 1.e-10): break q = a * y # quantity produced p = 1.5 - 0.5 * a * y # market price f = np.maximum(p, ptarg[ip]) # farm price r = f * q # farm revenue g = (f - p) * q # government expenditures xavg, xstd = discmoments(w, np.vstack((p, f, r, g))) Ep[ip], Ef[ip], Er[ip], Eg[ip] = tuple(xavg) Sp[ip], Sf[ip], Sr[ip], Sg[ip] = tuple(xstd) zeroline = lambda y: plt.axhline(y[0], linestyle=':', color='gray', hold=True) ``` Graph expected prices vs target price ``` fig2 = plt.figure(figsize=[8, 6]) ax1 = fig2.add_subplot(121, title='Expected price', xlabel='Target price', xticks=[0, 1, 2], ylabel='Expectation', yticks=[0.5, 1, 1.5, 2], ylim=[0.5, 2.0]) zeroline(Ep) ax1.plot(ptarg, Ep, linewidth=4, label='Market Price') ax1.plot(ptarg, Ef, linewidth=4, label='Farm Price') ax1.legend(loc='upper left') ``` Graph expected prices vs target price ``` ax2 = fig2.add_subplot(122, title='Price variabilities', xlabel='Target price', xticks=[0, 1, 2], ylabel='Standard deviation', yticks=[0, 0.1, 0.2]) #plt.ylim(0.5, 2.0) zeroline(Sf) ax2.plot(ptarg, Sp, linewidth=4, 
label='Market Price') ax2.plot(ptarg, Sf, linewidth=4, label='Farm Price') ax2.legend(loc='upper left') fig2.show() ``` Graph expected farm revenue vs target price ``` fig3 = plt.figure(figsize=[12, 6]) ax1 = fig3.add_subplot(131, title='Expected revenue', xlabel='Target price', xticks=[0, 1, 2], ylabel='Expectation', yticks=[1, 2, 3], ylim=[0.8, 3.0]) zeroline(Er) ax1.plot(ptarg, Er, linewidth=4) ``` Graph standard deviation of farm revenue vs target price ``` ax2 = fig3.add_subplot(132, title='Farm Revenue Variability', xlabel='Target price', xticks=[0, 1, 2], ylabel='Standard deviation', yticks=[0, 0.2, 0.4]) zeroline(Sr) ax2.plot(ptarg, Sr, linewidth=4) ``` Graph expected government expenditures vs target price ``` ax3 = fig3.add_subplot(133, title='Expected Government Expenditures', xlabel='Target price', xticks=[0, 1, 2], ylabel='Expectation', yticks=[0, 1, 2], ylim=[-0.05, 2.0]) zeroline(Eg) ax3.plot(ptarg, Eg, linewidth=4) plt.show() ```
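As a cross-check on the function iteration used at the top of this demo, the equilibrium acreage can also be obtained with a generic fixed-point solver. A small sketch reusing the mapping `A`, with the target price fixed at 1 as in the first computation:

```
from scipy.optimize import fixed_point

# Solve a = A(a, 1.0) starting from the same initial guess a = 1.
a_star = fixed_point(lambda a: A(a, 1.0), 1.0)
print('equilibrium acreage:', a_star)
```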
true
code
0.634996
null
null
null
null
# Baseline model classification The purpose of this notebook is to make predictions for all six categories on the given dataset using some set of rules. <br>Let's assume that human labellers have labelled these comments based on the certain kind of words present in the comments. So it is worth exploring the comments to check the kind of words used under every category and how many times that word occurred in that category. So in this notebook, six datasets are created from the main dataset, to make the analysis easy for each category. After this, counting and storing the most frequently used words under each category is done. For each category, then we are checking the presence of `top n` words from the frequently used word dictionary, in the comments, to make the prediction. ### 1. Import libraries and load data For preparation lets import the required libraries and the data ``` import os dir_path = os.path.dirname(os.getcwd()) import numpy as np import pandas as pd import nltk from nltk.tokenize import word_tokenize from nltk.corpus import stopwords import re import string import operator import pickle import sys sys.path.append(os.path.join(dir_path, "src")) from clean_comments import clean train_path = os.path.join(dir_path, 'data', 'raw', 'train.csv') ## Load dataset df = pd.read_csv(train_path) ``` ### <br>2. Datasets for each category Dataset with toxic comments ``` #extract dataset with toxic label df_toxic = df[df['toxic'] == 1] #Reseting the index df_toxic.set_index(['id'], inplace = True) df_toxic.reset_index(level =['id'], inplace = True) ``` Dataset of severe toxic comments ``` #extract dataset with Severe toxic label df_severe_toxic = df[df['severe_toxic'] == 1] #Reseting the index df_severe_toxic.set_index(['id'], inplace = True) df_severe_toxic.reset_index(level =['id'], inplace = True) ``` Dataset with obscene comment ``` #extract dataset with obscens label df_obscene = df[df['obscene'] == 1] #Reseting the index df_obscene.set_index(['id'], inplace = True) df_obscene.reset_index(level =['id'], inplace = True) #df_obscene =df_obscene.drop('comment_text', axis=1) ``` Dataset with comments labeled as "identity_hate" ``` df_identity_hate = df[df['identity_hate'] == 1] #Reseting the index df_identity_hate.set_index(['id'], inplace = True) df_identity_hate.reset_index(level =['id'], inplace = True) ``` Dataset with all the threat comments ``` df_threat = df[df['threat'] == 1] #Reseting the index df_threat.set_index(['id'], inplace = True) df_threat.reset_index(level =['id'], inplace = True) ``` Dataset of comments with "Insult" label ``` df_insult = df[df['insult'] == 1] #Reseting the index df_insult.set_index(['id'], inplace = True) df_insult.reset_index(level =['id'], inplace = True) ``` Dataset with comments which have all six labels ``` df_6 = df[(df['toxic']==1) & (df['severe_toxic']==1) & (df['obscene']==1) & (df['threat']==1)& (df['insult']==1)& (df['identity_hate']==1)] df_6.set_index(['id'], inplace = True) df_6.reset_index(level =['id'], inplace = True) # df6 = df_6.drop('comment_text', axis=1) ``` ### <br> 3. Preperation of vocab ``` ### frequent_words function take dataset as an input and returns two arguments - ### all_words and counts. ### all_words gives all the words occuring in the provided dataset ### counts gives dictionary with keys as a words those exists in the entire dataset and values ### as a count of existance of these words in the dataset. 
def frequent_words(data): all_word = [] counts = dict() for i in range (0,len(data)): ### Load input input_str = data.comment_text[i] ### Clean input data processed_text = clean(input_str) ### perform tokenization tokened_text = word_tokenize(processed_text) ### remove stop words comment_word = [] for word in tokened_text: if word not in stopwords.words('english'): comment_word.append(word) #print(len(comment_word)) all_word.extend(comment_word) for word in all_word: if word in counts: counts[word] += 1 else: counts[word] = 1 return all_word, counts ## descend_order_dict funtion takes dataframe as an input and outputs sorted vocab dictionary ## with the values sorted in descending order (keys are words and values are word count) def descend_order_dict(data): all_words, word_count = frequent_words(data) sorted_dict = dict( sorted(word_count.items(), key=operator.itemgetter(1),reverse=True)) return sorted_dict label_sequence = df.columns.drop("id") label_sequence = label_sequence.drop("comment_text").tolist() label_sequence ``` #### <br>Getting the vocab used in each category in descending order its count For **`toxic`** category ``` descend_order_toxic_dict = descend_order_dict(df_toxic) ``` These are the words most frequently used in toxic comments <br>For **`severe_toxic`** category ``` descend_order_severe_toxic_dict =descend_order_dict(df_severe_toxic) ``` These are the words most frequently used in severe toxic comments <br>For **`obscene`** category ``` descend_order_obscene_dict = descend_order_dict(df_obscene) ``` These are the words most frequently used in obscene comments <br>For **`threat`** category ``` descend_order_threat_dict = descend_order_dict(df_threat) ``` These are the words most frequently used in severe threat comments <br>For **`insult`** category ``` descend_order_insult_dict = descend_order_dict(df_insult) ``` These are the words most frequently used in comments labeled as an insult <br>For **`identity_hate`** category ``` descend_order_id_hate_dict = descend_order_dict(df_identity_hate) ``` These are the most frequently used words in the comments labeled as identity_hate <br> For comments when all categories are 1 ``` descend_order_all_label_dict = descend_order_dict(df_6) ``` These are the most frequently used words in the comments labeled as identity_hate #### <br> Picking up the top n words from the descend vocab dictionary In this code, top 3 words are considered to make the prediction. ``` # list(descend_order_all_label_dict.keys())[3] ## combining descend vocab dictionaries of all the categories in one dictionary ## with categories as their keys all_label_descend_vocab = {'toxic':descend_order_toxic_dict, 'severe_toxic':descend_order_severe_toxic_dict, 'obscene':descend_order_obscene_dict, 'threat':descend_order_threat_dict, 'insult':descend_order_insult_dict, 'id_hate':descend_order_id_hate_dict } ## this function takes two arguments - all_label_freq_word and top n picks ## and outputs a dictionary with categories as keys and list of top 3 words as their values. def dict_top_n_words(all_label_descend_vocab, n): count = dict() for label, words in all_label_descend_vocab.items(): word_list = [] for i in range (0,n): word_list.append(list(words.keys())[i]) count[label] = word_list return count ### top 3 words from all the vocabs dict_top_n_words(all_label_descend_vocab,3) ``` ### <br>4. 
Performance check of the baseline model
```
## Check whether any of the top n words from the six categories exist in the comment
def word_intersection(input_str, n, all_words=all_label_descend_vocab):
    toxic_pred = []
    severe_toxic_pred = []
    obscene_pred = []
    threat_pred = []
    insult_pred = []
    id_hate_pred = []
    rule_based_pred = [toxic_pred, severe_toxic_pred, obscene_pred,
                       threat_pred, insult_pred, id_hate_pred]

    ## for every category, record which of its top n words appear in the comment
    for count, ele in enumerate(list(dict_top_n_words(all_words, n).values())):
        for word in ele:
            if (word in input_str):
                rule_based_pred[count].append(word)
    #print(rule_based_pred)

    ## turn the matched-word lists into binary predictions per category
    for i in range(0, len(rule_based_pred)):
        if len(rule_based_pred[i]) == 0:
            rule_based_pred[i] = 0
        else:
            rule_based_pred[i] = 1

    return rule_based_pred

### Test
word_intersection(df['comment_text'][55], 3)
```
<br>Uncomment the cell below to generate predictions for the whole dataset; they are already saved in `rule_base_pred.pkl` (in list form) to save time
```
## store the values of predictions by running the word_intersection function on
## all the comments
# rule_base_pred = df['comment_text'].apply(lambda x: word_intersection(x, 3))
```
After running the above cell, we get predictions for each category on the entire dataset in `rule_base_pred`; its original type is `pandas.core.series.Series`. This pandas series was converted into a list and saved for future use. The `.pkl` file can be loaded by running the cell below.
```
### save rule_base_pred
# file_name = "rule_base_pred.pkl"
# open_file = open(file_name, "wb")
# pickle.dump(rule_base_pred, open_file)
# open_file.close()

# open_file = open("rule_base_pred.pkl", "rb")
# pred_rule = pickle.load(open_file)
# open_file.close()

### Open the saved rule_base_pred.pkl
pkl_file = os.path.join(dir_path, 'model', 'rule_base_pred.pkl')
open_file = open(pkl_file, "rb")
pred_rule = pickle.load(open_file)
open_file.close()

## true labels
y_true = df.drop(['id', 'comment_text'], axis=1)

## check the types
type(y_true), type(pred_rule)
```
<br>Uncomment the `pred_rule` line in the cell below to convert the predictions from a pandas Series to a list, if you are not using the saved `rule_base_pred.pkl`
```
### Change the type to list
pred_true = y_true.values.tolist()
# pred_rule = rule_base_pred.values.tolist()
```
#### Compute accuracy of the baseline model
```
## Accuracy check for decent vs. not-decent comment classification
count = 0
for i in range(0, len(df)):
    if pred_true[i] == pred_rule[i]:
        count = count + 1
print("Overall accuracy of rule based classifier : {}".format((count/len(df))*100))
```
Based on the rule implemented here, the baseline classifier separates decent from not-decent comments with an **accuracy of 76.6%**. Now we have to see whether AI-based models give better performance than this.
```
## Category wise accuracy check
mean = []
for j in range(0, len(pred_true[0])):
    count = 0
    for i in range(0, len(df)):
        if pred_true[i][j] == pred_rule[i][j]:
            count = count + 1
    mean.append(count/len(df)*100)
    print("Accuracy of rule based classifier in predicting {} comments : {}".format(label_sequence[j], (count/len(df))*100))

print("Mean accuracy : {}".format(np.array(mean).mean()))
```
The mean accuracy of our *rule-based model* is 92.7%.<br>The minimum per-category accuracy of the baseline model across `toxic`, `severe_toxic`, `obscene`, `threat`, `insult`, and `identity_hate` is roughly 88% or higher.
<br>Accuracies for:
<ol>
<li>`toxic`: 89.4%</li>
<li>`severe_toxic`: 88.2%</li>
<li>`obscene`: 96.3%</li>
<li>`threat`: 87.8%</li>
<li>`insult`: 95.8%</li>
<li>`identity_hate`: 98.3%</li>
</ol>
<br>In my opinion this model is doing quite well. The dataset has far more samples for toxic comments than for the other categories, yet the model still manages 89.4% accuracy on `toxic` by considering only the top 3 words from its very large vocabulary. It might do better if we considered more than 3 words from the vocabulary, because the top 3 words are not necessarily a true representation of this category.
<br>On the other hand, `obscene`, `insult`, and `identity_hate` have very good accuracy rates; it seems the human labellers looked for exactly these top words when labelling comments under these categories.
<br>For the `threat` category the model should perform well, since it has only 478 samples and therefore a smaller vocabulary than the other classes, but it seems the human labellers looked at more than these top 3 words of its vocabulary. This could be checked by tweaking the number of top n words.
```
## convert the prediction lists to numpy arrays for sklearn metrics
yp = np.array([np.array(xi) for xi in pred_rule])
type(yp)
yp.shape

yt = np.array([np.array(xi) for xi in pred_true])
type(yt)
yt.shape

from sklearn.metrics import jaccard_score
print("Jaccard score is : {}".format(jaccard_score(yt, yp, average='weighted')))
```
Judging by the Jaccard similarity, our `rule based model` is actually quite poor, even though its raw accuracies looked high.
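The gap between high per-label accuracy and a low Jaccard score is typical for sparse multilabel data. The toy example below (made-up labels, not the real predictions) is a minimal sketch of why a classifier that mostly predicts zeros still scores well on accuracy, while the Jaccard score, which only rewards correctly predicted positive labels, stays low.
```
import numpy as np
from sklearn.metrics import jaccard_score

# Hypothetical 6-label ground truth and predictions (most entries are 0,
# as in the toxic-comment data)
y_true_toy = np.array([[1, 0, 0, 0, 0, 0],
                       [1, 1, 1, 0, 1, 0],
                       [0, 0, 0, 1, 0, 1],
                       [0, 0, 0, 0, 0, 0]])
y_pred_toy = np.array([[0, 0, 0, 0, 0, 0],
                       [1, 0, 1, 0, 1, 0],
                       [0, 0, 0, 0, 0, 0],
                       [0, 0, 0, 0, 0, 0]])

# Per-label accuracy looks good because the many zeros are easy to match
print((y_true_toy == y_pred_toy).mean(axis=0))
# Jaccard only counts the positive labels we actually got right
print(jaccard_score(y_true_toy, y_pred_toy, average='weighted'))
```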
<a href="https://colab.research.google.com/github/hansong0219/Advanced-DeepLearning-Study/blob/master/UNET/UNET_Build.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import numpy as np import os import sys from tensorflow.keras.layers import Input, Dropout, Concatenate from tensorflow.keras.layers import Conv2DTranspose, Conv2D from tensorflow.keras.models import Sequential, Model from tensorflow.keras.layers import LeakyReLU, Activation from tensorflow.keras.layers import BatchNormalization from tensorflow.keras.optimizers import Adam from tensorflow.keras.utils import plot_model from tensorflow.keras.losses import BinaryCrossentropy import matplotlib.pyplot as plt import tensorflow as tf def down_sample(layer_inputs,filters, size, apply_batchnorm=True): initializer = tf.random_normal_initializer(0.,0.02) d = Conv2D(filters, size, strides=2,padding='same', kernel_initializer=initializer, use_bias=False)(layer_inputs) if apply_batchnorm: d = BatchNormalization()(d) d = LeakyReLU(alpha=0.2)(d) return d def up_sample(layer_inputs, skip_input,filters, size, dropout_rate=0): initializer = tf.random_normal_initializer(0.,0.02) u = Conv2DTranspose(filters, size, strides=2,padding='same', kernel_initializer=initializer,use_bias=False)(layer_inputs) if dropout_rate: u = Dropout(dropout_rate)(u) u = tf.keras.layers.ReLU()(u) u = Concatenate()([u, skip_input]) return u def Build_UNET(): input_shape = (256,256,3) output_channel = 3 inputs = Input(shape=input_shape,name="inputs") d1 = down_sample(inputs, 64, 4, apply_batchnorm=False) #(128,128,3) d2 = down_sample(d1, 128, 4) #(64,64,128) d3 = down_sample(d2, 256, 4) d4 = down_sample(d3, 512, 4) d5 = down_sample(d4, 512, 4) d6 = down_sample(d5, 512, 4) d7 = down_sample(d6, 512, 4) d8 = down_sample(d7, 512, 4) u7 = up_sample(d8, d7, 512, 4, dropout_rate = 0.5) u6 = up_sample(u7, d6, 512, 4, dropout_rate = 0.5) u5 = up_sample(u6, d5, 512, 4, dropout_rate = 0.5) u4 = up_sample(u5, d4, 512, 4) u3 = up_sample(u4, d3, 256, 4) u2 = up_sample(u3, d2, 128, 4) u1 = up_sample(u2, d1, 64, 4) initializer = tf.random_normal_initializer(0.,0.02) outputs = Conv2DTranspose(output_channel, kernel_size=4, strides=2, padding='same', kernel_initializer=initializer, activation='tanh')(u1) return Model(inputs, outputs) unet = Build_UNET() optimizer = Adam(1e-4, beta_1=0.5) unet.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy']) unet.summary() plot_model(unet, show_shapes=True, dpi=64) loss=BinaryCrossentropy(from_logits=True) optimizer = Adam(1e-4, beta_1=0.5) unet.compile(optimizer=optimizer, loss='mse', metrics=['accuracy']) ```
# AutoRec: Rating Prediction with Autoencoders Although the matrix factorization model achieves decent performance on the rating prediction task, it is essentially a linear model. Thus, such models are not capable of capturing complex nonlinear and intricate relationships that may be predictive of users' preferences. In this section, we introduce a nonlinear neural network collaborative filtering model, AutoRec :cite:`Sedhain.Menon.Sanner.ea.2015`. It identifies collaborative filtering (CF) with an autoencoder architecture and aims to integrate nonlinear transformations into CF on the basis of explicit feedback. Neural networks have been proven to be capable of approximating any continuous function, making it suitable to address the limitation of matrix factorization and enrich the expressiveness of matrix factorization. On one hand, AutoRec has the same structure as an autoencoder which consists of an input layer, a hidden layer, and a reconstruction (output) layer. An autoencoder is a neural network that learns to copy its input to its output in order to code the inputs into the hidden (and usually low-dimensional) representations. In AutoRec, instead of explicitly embedding users/items into low-dimensional space, it uses the column/row of the interaction matrix as the input, then reconstructs the interaction matrix in the output layer. On the other hand, AutoRec differs from a traditional autoencoder: rather than learning the hidden representations, AutoRec focuses on learning/reconstructing the output layer. It uses a partially observed interaction matrix as the input, aiming to reconstruct a completed rating matrix. In the meantime, the missing entries of the input are filled in the output layer via reconstruction for the purpose of recommendation. There are two variants of AutoRec: user-based and item-based. For brevity, here we only introduce the item-based AutoRec. User-based AutoRec can be derived accordingly. ## Model Let $\mathbf{R}_{*i}$ denote the $i^\mathrm{th}$ column of the rating matrix, where unknown ratings are set to zeros by default. The neural architecture is defined as: $$ h(\mathbf{R}_{*i}) = f(\mathbf{W} \cdot g(\mathbf{V} \mathbf{R}_{*i} + \mu) + b) $$ where $f(\cdot)$ and $g(\cdot)$ represent activation functions, $\mathbf{W}$ and $\mathbf{V}$ are weight matrices, $\mu$ and $b$ are biases. Let $h( \cdot )$ denote the whole network of AutoRec. The output $h(\mathbf{R}_{*i})$ is the reconstruction of the $i^\mathrm{th}$ column of the rating matrix. The following objective function aims to minimize the reconstruction error: $$ \underset{\mathbf{W},\mathbf{V},\mu, b}{\mathrm{argmin}} \sum_{i=1}^M{\parallel \mathbf{R}_{*i} - h(\mathbf{R}_{*i})\parallel_{\mathcal{O}}^2} +\lambda(\| \mathbf{W} \|_F^2 + \| \mathbf{V}\|_F^2) $$ where $\| \cdot \|_{\mathcal{O}}$ means only the contribution of observed ratings are considered, that is, only weights that are associated with observed inputs are updated during back-propagation. ``` import mxnet as mx from mxnet import autograd, gluon, np, npx from mxnet.gluon import nn from d2l import mxnet as d2l npx.set_np() ``` ## Implementing the Model A typical autoencoder consists of an encoder and a decoder. The encoder projects the input to hidden representations and the decoder maps the hidden layer to the reconstruction layer. We follow this practice and create the encoder and decoder with dense layers. The activation of encoder is set to `sigmoid` by default and no activation is applied for decoder. 
Dropout is included after the encoding transformation to reduce over-fitting. The gradients of unobserved inputs are masked out to ensure that only observed ratings contribute to the model learning process. ``` class AutoRec(nn.Block): def __init__(self, num_hidden, num_users, dropout=0.05): super(AutoRec, self).__init__() self.encoder = nn.Dense(num_hidden, activation='sigmoid', use_bias=True) self.decoder = nn.Dense(num_users, use_bias=True) self.dropout = nn.Dropout(dropout) def forward(self, input): hidden = self.dropout(self.encoder(input)) pred = self.decoder(hidden) if autograd.is_training(): # Mask the gradient during training return pred * np.sign(input) else: return pred ``` ## Reimplementing the Evaluator Since the input and output have been changed, we need to reimplement the evaluation function, while we still use RMSE as the accuracy measure. ``` def evaluator(network, inter_matrix, test_data, devices): scores = [] for values in inter_matrix: feat = gluon.utils.split_and_load(values, devices, even_split=False) scores.extend([network(i).asnumpy() for i in feat]) recons = np.array([item for sublist in scores for item in sublist]) # Calculate the test RMSE rmse = np.sqrt(np.sum(np.square(test_data - np.sign(test_data) * recons)) / np.sum(np.sign(test_data))) return float(rmse) ``` ## Training and Evaluating the Model Now, let us train and evaluate AutoRec on the MovieLens dataset. We can clearly see that the test RMSE is lower than the matrix factorization model, confirming the effectiveness of neural networks in the rating prediction task. ``` devices = d2l.try_all_gpus() # Load the MovieLens 100K dataset df, num_users, num_items = d2l.read_data_ml100k() train_data, test_data = d2l.split_data_ml100k(df, num_users, num_items) _, _, _, train_inter_mat = d2l.load_data_ml100k(train_data, num_users, num_items) _, _, _, test_inter_mat = d2l.load_data_ml100k(test_data, num_users, num_items) train_iter = gluon.data.DataLoader(train_inter_mat, shuffle=True, last_batch="rollover", batch_size=256, num_workers=d2l.get_dataloader_workers()) test_iter = gluon.data.DataLoader(np.array(train_inter_mat), shuffle=False, last_batch="keep", batch_size=1024, num_workers=d2l.get_dataloader_workers()) # Model initialization, training, and evaluation net = AutoRec(500, num_users) net.initialize(ctx=devices, force_reinit=True, init=mx.init.Normal(0.01)) lr, num_epochs, wd, optimizer = 0.002, 25, 1e-5, 'adam' loss = gluon.loss.L2Loss() trainer = gluon.Trainer(net.collect_params(), optimizer, {"learning_rate": lr, 'wd': wd}) d2l.train_recsys_rating(net, train_iter, test_iter, loss, trainer, num_epochs, devices, evaluator, inter_mat=test_inter_mat) ``` ## Summary * We can frame the matrix factorization algorithm with autoencoders, while integrating non-linear layers and dropout regularization. * Experiments on the MovieLens 100K dataset show that AutoRec achieves superior performance than matrix factorization. ## Exercises * Vary the hidden dimension of AutoRec to see its impact on the model performance. * Try to add more hidden layers. Is it helpful to improve the model performance? * Can you find a better combination of decoder and encoder activation functions? [Discussions](https://discuss.d2l.ai/t/401)
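To make the role of the observed-rating mask more concrete, here is a small NumPy-only sketch (toy numbers, not part of the chapter's code) of how multiplying by `np.sign` of the input removes unobserved entries from the reconstruction error, mirroring both the masking trick in `forward` and the RMSE computed by the evaluator.
```
import numpy as np

r = np.array([5.0, 0.0, 3.0, 0.0, 1.0])  # one column of the rating matrix; 0 = unobserved
h = np.array([4.5, 2.0, 2.5, 1.0, 1.5])  # hypothetical reconstruction from the autoencoder

mask = np.sign(r)                         # 1 for observed ratings, 0 for missing ones
per_entry_err = (r - h * mask) ** 2       # missing entries contribute (0 - 0)^2 = 0
rmse_observed = np.sqrt(per_entry_err.sum() / mask.sum())
print(rmse_observed)
```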
``` import lifelines import pymc as pm import pyBMA import matplotlib.pyplot as plt import numpy as np from math import log from datetime import datetime import pandas as pd %matplotlib inline ``` The first step in any data analysis is acquiring and munging the data An example data set can be found at: https://jakecoltman.gitlab.io/website/post/pydata/ Download the file output.txt and transform it into a format like below where the event column should be 0 if there's only one entry for an id, and 1 if there are two entries: End date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165) id,time_to_convert,age,male,event,search,brand ``` running_id = 0 output = [[0]] with open("E:/output.txt") as file_open: for row in file_open.read().split("\n"): cols = row.split(",") if cols[0] == output[-1][0]: output[-1].append(cols[1]) output[-1].append(True) else: output.append(cols) output = output[1:] for row in output: if len(row) == 6: row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False] output = output[1:-1] def convert_to_days(dt): day_diff = dt / np.timedelta64(1, 'D') if day_diff == 0: return 23.0 else: return day_diff df = pd.DataFrame(output, columns=["id", "advert_time", "male","age","search","brand","conversion_time","event"]) df["lifetime"] = pd.to_datetime(df["conversion_time"]) - pd.to_datetime(df["advert_time"]) df["lifetime"] = df["lifetime"].apply(convert_to_days) df["male"] = df["male"].astype(int) df["search"] = df["search"].astype(int) df["brand"] = df["brand"].astype(int) df["age"] = df["age"].astype(int) df["event"] = df["event"].astype(int) df = df.drop('advert_time', 1) df = df.drop('conversion_time', 1) df = df.set_index("id") df = df.dropna(thresh=2) df.median() df ###Parametric Bayes #Shout out to Cam Davidson-Pilon ## Example fully worked model using toy data ## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html ## Note that we've made some corrections censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist()) alpha = pm.Uniform("alpha", 0,50) beta = pm.Uniform("beta", 0,50) @pm.observed def survival(value=df["lifetime"], alpha = alpha, beta = beta ): return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha)) mcmc = pm.MCMC([alpha, beta, survival ] ) mcmc.sample(10000) pm.Matplot.plot(mcmc) mcmc.trace("alpha")[:] ``` Problems: 2 - Try to fit your data from section 1 3 - Use the results to plot the distribution of the median -------- 4 - Try adjusting the number of samples, the burn parameter and the amount of thinning to correct get good answers 5 - Try adjusting the prior and see how it affects the estimate -------- 6 - Try to fit a different distribution to the data 7 - Compare answers Bonus - test the hypothesis that the true median is greater than a certain amount For question 2, note that the median of a Weibull is: $$β(log 2)^{1/α}$$ ``` #Solution to question 4: def weibull_median(alpha, beta): return beta * ((log(2)) ** ( 1 / alpha)) plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))]) #Solution to question 4: ### Increasing the burn parameter allows us to discard results before convergence ### Thinning the results removes autocorrelation mcmc = pm.MCMC([alpha, beta, survival ] ) mcmc.sample(10000, burn = 3000, thin = 20) pm.Matplot.plot(mcmc) #Solution to Q5 ## Adjusting the priors impacts the overall result ## If we give a looser, less informative prior then we end up with a broader, shorter distribution ## If we give much more informative 
priors, then we get a tighter, taller distribution censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist()) ## Note the narrowing of the prior alpha = pm.Normal("alpha", 1.7, 10000) beta = pm.Normal("beta", 18.5, 10000) ####Uncomment this to see the result of looser priors ## Note this ends up pretty much the same as we're already very loose #alpha = pm.Uniform("alpha", 0, 30) #beta = pm.Uniform("beta", 0, 30) @pm.observed def survival(value=df["lifetime"], alpha = alpha, beta = beta ): return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha)) mcmc = pm.MCMC([alpha, beta, survival ] ) mcmc.sample(10000, burn = 5000, thin = 20) pm.Matplot.plot(mcmc) #plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))]) ## Solution to bonus ## Super easy to do in the Bayesian framework, all we need to do is look at what % of samples ## meet our criteria medians = [weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))] testing_value = 15.6 number_of_greater_samples = sum([x >= testing_value for x in medians]) 100 * (number_of_greater_samples / len(medians)) #Cox model ``` If we want to look at covariates, we need a new approach. We'll use Cox proprtional hazards. More information here. ``` #Fitting solution cf = lifelines.CoxPHFitter() cf.fit(df, 'lifetime', event_col = 'event') cf.summary ``` Once we've fit the data, we need to do something useful with it. Try to do the following things: 1 - Plot the baseline survival function 2 - Predict the functions for a particular set of features 3 - Plot the survival function for two different set of features 4 - For your results in part 3 caculate how much more likely a death event is for one than the other for a given period of time ``` #Solution to 1 fig, axis = plt.subplots(nrows=1, ncols=1) cf.baseline_survival_.plot(ax = axis, title = "Baseline Survival") # Solution to prediction regressors = np.array([[1,45,0,0]]) survival = cf.predict_survival_function(regressors) survival #Solution to plotting multiple regressors fig, axis = plt.subplots(nrows=1, ncols=1, sharex=True) regressor1 = np.array([[1,45,0,1]]) regressor2 = np.array([[1,23,1,1]]) survival_1 = cf.predict_survival_function(regressor1) survival_2 = cf.predict_survival_function(regressor2) plt.plot(survival_1,label = "32 year old male") plt.plot(survival_2,label = "46 year old female") plt.legend(loc = "lower left") #Difference in survival odds = survival_1 / survival_2 plt.plot(odds, c = "red") ``` Model selection Difficult to do with classic tools (here) Problem: 1 - Calculate the BMA coefficient values 2 - Compare these results to past the lifelines results 3 - Try running with different priors ``` ##Solution to 1 from pyBMA import CoxPHFitter bmaCox = pyBMA.CoxPHFitter.CoxPHFitter() bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.5]*4) print(bmaCox.summary) #Low probability for everything favours parsimonious models bmaCox = pyBMA.CoxPHFitter.CoxPHFitter() bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.1]*4) print(bmaCox.summary) #Low probability for everything favours parsimonious models bmaCox = pyBMA.CoxPHFitter.CoxPHFitter() bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.9]*4) print(bmaCox.summary) #Low probability for everything favours parsimonious models bmaCox = pyBMA.CoxPHFitter.CoxPHFitter() bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.3, 0.9, 0.001, 0.3]) print(bmaCox.summary) ```
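One more thing worth doing with the fitted Cox model above is turning its coefficients into hazard ratios. This is a small sketch under the assumption that `cf.summary` (printed earlier) exposes a `coef` column, as lifelines' summary table does; exponentiating a coefficient gives the multiplicative change in hazard for a one-unit increase in that covariate.
```
import numpy as np

# Hazard ratio per covariate: exp(coef) > 1 means higher risk of the event,
# exp(coef) < 1 means lower risk, per one-unit increase in the covariate
hazard_ratios = np.exp(cf.summary['coef'])
print(hazard_ratios)
```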
# Modeling and Simulation in Python Chapter 3 Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0) ``` # Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import functions from the modsim library from modsim import * # set the random number generator np.random.seed(7) ``` ## More than one State object Here's the code from the previous chapter, with two changes: 1. I've added DocStrings that explain what each function does, and what parameters it takes. 2. I've added a parameter named `state` to the functions so they work with whatever `State` object we give them, instead of always using `bikeshare`. That makes it possible to work with more than one `State` object. ``` def step(state, p1, p2): """Simulate one minute of time. state: bikeshare State object p1: probability of an Olin->Wellesley customer arrival p2: probability of a Wellesley->Olin customer arrival """ if flip(p1): bike_to_wellesley(state) if flip(p2): bike_to_olin(state) def bike_to_wellesley(state): """Move one bike from Olin to Wellesley. state: bikeshare State object """ state.olin -= 1 state.wellesley += 1 def bike_to_olin(state): """Move one bike from Wellesley to Olin. state: bikeshare State object """ state.wellesley -= 1 state.olin += 1 def decorate_bikeshare(): """Add a title and label the axes.""" decorate(title='Olin-Wellesley Bikeshare', xlabel='Time step (min)', ylabel='Number of bikes') ``` And here's `run_simulation`, which is a solution to the exercise at the end of the previous notebook. ``` def run_simulation(state, p1, p2, num_steps): """Simulate the given number of time steps. state: State object p1: probability of an Olin->Wellesley customer arrival p2: probability of a Wellesley->Olin customer arrival num_steps: number of time steps """ results = TimeSeries() for i in range(num_steps): step(state, p1, p2) results[i] = state.olin plot(results, label='Olin') ``` Now we can create more than one `State` object: ``` bikeshare1 = State(olin=10, wellesley=2) bikeshare2 = State(olin=2, wellesley=10) ``` Whenever we call a function, we indicate which `State` object to work with: ``` bike_to_olin(bikeshare1) bike_to_wellesley(bikeshare2) ``` And you can confirm that the different objects are getting updated independently: ``` bikeshare1 bikeshare2 ``` ## Negative bikes In the code we have so far, the number of bikes at one of the locations can go negative, and the number of bikes at the other location can exceed the actual number of bikes in the system. If you run this simulation a few times, it happens often. ``` bikeshare = State(olin=10, wellesley=2) run_simulation(bikeshare, 0.4, 0.2, 60) decorate_bikeshare() ``` We can fix this problem using the `return` statement to exit the function early if an update would cause negative bikes. ``` def bike_to_wellesley(state): """Move one bike from Olin to Wellesley. state: bikeshare State object """ if state.olin == 0: return state.olin -= 1 state.wellesley += 1 def bike_to_olin(state): """Move one bike from Wellesley to Olin. state: bikeshare State object """ if state.wellesley == 0: return state.wellesley -= 1 state.olin += 1 ``` Now if you run the simulation again, it should behave. 
``` bikeshare = State(olin=10, wellesley=2) run_simulation(bikeshare, 0.4, 0.2, 60) decorate_bikeshare() ``` ## Comparison operators The `if` statements in the previous section used the comparison operator `==`. The other comparison operators are listed in the book. It is easy to confuse the comparison operator `==` with the assignment operator `=`. Remember that `=` creates a variable or gives an existing variable a new value. ``` x = 5 ``` Whereas `==` compares two values and returns `True` if they are equal. ``` x == 5 ``` You can use `==` in an `if` statement. ``` if x == 5: print('yes, x is 5') ``` But if you use `=` in an `if` statement, you get an error. ``` # If you remove the # from the if statement and run it, you'll get # SyntaxError: invalid syntax #if x = 5: # print('yes, x is 5') ``` **Exercise:** Add an `else` clause to the `if` statement above, and print an appropriate message. Replace the `==` operator with one or two of the other comparison operators, and confirm they do what you expect. ## Metrics Now that we have a working simulation, we'll use it to evaluate alternative designs and see how good or bad they are. The metric we'll use is the number of customers who arrive and find no bikes available, which might indicate a design problem. First we'll make a new `State` object that creates and initializes additional state variables to keep track of the metrics. ``` bikeshare = State(olin=10, wellesley=2, olin_empty=0, wellesley_empty=0) ``` Next we need versions of `bike_to_wellesley` and `bike_to_olin` that update the metrics. ``` def bike_to_wellesley(state): """Move one bike from Olin to Wellesley. state: bikeshare State object """ if state.olin == 0: state.olin_empty += 1 return state.olin -= 1 state.wellesley += 1 def bike_to_olin(state): """Move one bike from Wellesley to Olin. state: bikeshare State object """ if state.wellesley == 0: state.wellesley_empty += 1 return state.wellesley -= 1 state.olin += 1 ``` Now when we run a simulation, it keeps track of unhappy customers. ``` run_simulation(bikeshare, 0.4, 0.2, 60) decorate_bikeshare() ``` After the simulation, we can print the number of unhappy customers at each location. ``` bikeshare.olin_empty bikeshare.wellesley_empty ``` ## Exercises **Exercise:** As another metric, we might be interested in the time until the first customer arrives and doesn't find a bike. To make that work, we have to add a "clock" to keep track of how many time steps have elapsed: 1. Create a new `State` object with an additional state variable, `clock`, initialized to 0. 2. Write a modified version of `step` that adds one to the clock each time it is invoked. Test your code by running the simulation and check the value of `clock` at the end. ``` bikeshare = State(olin=10, wellesley=2, olin_empty=0, wellesley_empty=0, clock=0) # Solution def step(state, p1, p2): """Simulate one minute of time. state: bikeshare State object p1: probability of an Olin->Wellesley customer arrival p2: probability of a Wellesley->Olin customer arrival """ state.clock += 1 if flip(p1): bike_to_wellesley(state) if flip(p2): bike_to_olin(state) # Solution run_simulation(bikeshare, 0.4, 0.2, 60) decorate_bikeshare() # Solution bikeshare ``` **Exercise:** Continuing the previous exercise, let's record the time when the first customer arrives and doesn't find a bike. 1. Create a new `State` object with an additional state variable, `t_first_empty`, initialized to -1 as a special value to indicate that it has not been set. 2. 
Write a modified version of `step` that checks whether`olin_empty` and `wellesley_empty` are 0. If not, it should set `t_first_empty` to `clock` (but only if `t_first_empty` has not already been set). Test your code by running the simulation and printing the values of `olin_empty`, `wellesley_empty`, and `t_first_empty` at the end. ``` # Solution bikeshare = State(olin=10, wellesley=2, olin_empty=0, wellesley_empty=0, clock=0, t_first_empty=-1) # Solution def step(state, p1, p2): """Simulate one minute of time. state: bikeshare State object p1: probability of an Olin->Wellesley customer arrival p2: probability of a Wellesley->Olin customer arrival """ state.clock += 1 if flip(p1): bike_to_wellesley(state) if flip(p2): bike_to_olin(state) if state.t_first_empty != -1: return if state.olin_empty + state.wellesley_empty > 0: state.t_first_empty = state.clock # Solution run_simulation(bikeshare, 0.4, 0.2, 60) decorate_bikeshare() # Solution bikeshare ```
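Beyond a single run, it is natural to repeat the simulation many times and average the metrics. The sketch below is not from the book; it assumes the final versions of `step`, `bike_to_wellesley`, and `bike_to_olin` defined above and simply skips the plotting.
```
# Run the simulation many times and average the number of unhappy customers
def run_once(p1=0.4, p2=0.2, num_steps=60):
    state = State(olin=10, wellesley=2,
                  olin_empty=0, wellesley_empty=0,
                  clock=0, t_first_empty=-1)
    for _ in range(num_steps):
        step(state, p1, p2)
    return state.olin_empty + state.wellesley_empty

trials = [run_once() for _ in range(100)]
print('Average unhappy customers per hour:', np.mean(trials))
```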
# CHAPTER 14 - Probabilistic Reasoning over Time ### George Tzanetakis, University of Victoria ## WORKPLAN The section number is based on the 4th edition of the AIMA textbook and is the suggested reading for this week. Each list entry provides just the additional sections. For example the Expected reading include the sections listed under Basic as well as the sections listed under Expected. Some additional readings are suggested for Advanced. 1. Basic: Sections **14.1**, **14.3, and **Summary** 2. Expected: Same as Basic plus 14.2 3. Advanced: All the chapter including bibligraphical and historical notes ## Time and Uncertainty Agents operate over time. They need to maintain a **belief state** (a set of variables (or random variables) indexed by time) that represents which states of the world are currently possible. From the **belief** state and a transition model, the agent can predict how the world might evolve in the next time step. From the percepts observed and a **sensor** model, the agent can update the **belief state**. * CSP: belief states are variables with domains * Logic: logical formulaes which belief states are possible * Probablities: probabilities which belief states are likely * **Transition model:** describe the probability distribution of the variables at time $t$ given the state of the world at past time * **Sensor model:** the probability of each percept at time $t$, given the current state of the world * Dynamic Bayesian Networks * Hidden Markov Models * Kalman Filters ### States and Observations **Discret-time** models, the world is views as a series of **time slices** Each time slide contains a set of **random variables**, some observable and some not. *Example scenario:* you are the security guard stationed at a secret underground installation. You want to know whether it is raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without an umbrella. For each day $t$, the evidence set $E_t$ contains a single evidence variable $Umbrella_{t}$ or $U_t$. The state set $S_t$ contains a single state variable $Rain_{t}$ or $R_t$. <img src="images/rain_umbrella_hmm.png" width="75%"/> ### Transition and Sensor Models **TRANSITION MODEL** * General form: $P(X_t | X_{0:t-1})$ **Markov Assumption**: Andrei Markov (1856-1922) the current state only depends on a fixed number of previous states * First-order markov process: $P(X_t | X_{0:t-1}) = P(X_t | X_{t-1})$ Time homegeneous process: the conditional transition probabilities is the same for all time steps $t$. A Markov chain is a sequence of random variables $X_1, X_2, X_3, . . .$ with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states: * $P(X_{n+1} = x|X_{1} = x_1,X_2 = x_2,...,X_n = x_n) = P(X_{n+1} = x|X_n = x_n)$ <img src="images/markov.png" width="30%"/> The possible values of $X_i$ form a countable set $S$ called the state space of the chain. A **Markov Chain** can be specified by a transition matrix with the probabilities of going from a particular state to another state at every time step. ## Sensor model/observations There are many application areas, for example speech recognition, in which we are interesting in modeling probability distributions over sequences of observations. We will denote the observation at time $t$ by the variable $Y_t$. 
The variable can be a symbol from a discrete alphabet or a continuous variable and we assume that the observations are sampled at discrete equally-spaced time intervals so $t$ can be an integer-valued time index. ## Inference in Temporal Models * **Filtering:** we want to compute the posterior distribution over the current state, given all evidence to date. $P(X_t|e_{1:t})$. An almost identical calculation provides the likelihood of the evidence sequence $P(e_{1:T})$. * **Prediction:** we want to computer the posterior distribution over the future state, given all evidence to date. $P(Xt+k|e_{1:t})$ for some $k > 0$. * **Smoothing or hindsight:** computing the posterior distribution over a past state, given all evidence up to the present: $P(X_{t-k}|e_{1:t})$ for some $k < t$. It provides a better estimate of the state than what was available at the time, because it incorporates more evidence. * **Most likely explanation:** Given a sequence of observations, we might wish to find the sequence of states that is most likely to have generated these observations. That is we wish to compute: $argmax_{x_{1:t}} P(x_{1:t}|e_{1:t})$. This is the typical inference task in Speech Recognition using Hidden Markov Models. ### Sidenote: Speech Recognition In phonology and linguistics, a phoneme is a unit of sound that can distinguish one word from another in a particular language. For example the english words **book** and **took** differ in one phoneme (the b vs t sound) and contain the same two remaining phonemes the **oo** sound and **k** sound. There is a clear correspondence between the written alphabet symbols of a word and the corresponding phonemes but in English there is a lot of confusing variation. For example the writtern symbols **oo** correspond to a different phoneme in the word **door**. In languages like Spanish or Greek there is a stronger direct correspondance between the written symbols and phonemes making it possible to "read" a Greek text without making phoneme errors even if you don't know the underlying words something much harder to do in English. The task of speech recognition is to take as input an audio recording a human talking and convert that recording to written words. It is possible to convert written words to sequences of phonemes and vice versa using a phonetic dictionary. For example check: http://www.speech.cs.cmu.edu/cgi-bin/cmudict There are different symbolic representations for phonemes. For example the international phonetic alphabet is an alphabetic system of phonetic notation based primarily on the Latin script that tries to cover the sounds of all languages around the world. Interesting sidenote: all babies are born with the ability to recongize and also reproduce all phonemes but as they age in a particular linguistic environment their ability gets restricted/pruned to the phonemes of the particular languages they are exposed to. So once we have the phonetic dictionary our task becomes to convert an audio recording of a human voice to a sequence of phonemes that can then be converted to written words using a phonetic dictionary. Without going into details we form different phonemes by appropriately shaping our mouths and tongue and using our vocal folds to produce pitched and unpitched phonemes/sounds (vowels and consonants). It is possible to compute features such as **Mel-Frequency Cepstral Coefficients (MFCC)** using Digital Signal Processing techniques that characterizes these configurations over short intervals of time (typically 20-40 milliseconds). 
So now, the task of automatic speech recognition becomes given a time sequence of feature vectors (computed from the audio recording) find the most likely sequence of phonemes that produced that sequence of feature vectors. Phonemes and especially vowels can have different durations so a particular word can be represented as a sequence of states corresponding to phonemes with repetitions. For example for the word **book** we might have the following sequence: $b,b,oo,oo,oo,oo,oo,oo,oo,oo,oo,oo,oo,k,k$ with informal state notation corresponding to the phonemes. Further complicating our task is the fact that depending on speakers and inflection there are many possible ways to render a particular phoneme. So we can also think of each phoneme as a distribution of feature vectors. So let's look at some possible approaches to solve this problem in order of increasing complexity but also improved accuracy: 1. We can train a classifiers that given a feature vector predicts the corresponding phoneme. However this approach does not take into account that different phonemes have different probabilities (for example the phoneme correpsonding to the written symbol $z$ is less likely than the phoneme corresponding to the vowel $a$ as in the word apple), different phonemes have different typical durations (for example vowels tend to be longer than consonants), and certain transitions between phonemes for example $z$ followed by $b$ are very unlikely if not impossible whereas other ones are are much more common for example $r$ followed by $a$ as in the word apple). 2. We can model the probabilities of diffeerent phonemes and their transitions as a first order Markove chain where the state is the phoneme and then the observation output of each state can be modelled as a continuous probability distribution over the **MFCCs** feature space. That way duration and transition information is taken into account when performing automatic speech recognition. Automatic Speech Recognition Systems based on Hidden Markov Models (HMMs) dominated the field for about 20 years until they were superseded by deep learning models in the last decade or so. They are still widely used especially in situations with restricted computational resources where deep learning systems are not practical. ## Hidden Markov Models Properties: * The observation at time $t$ is generated by some random process whose state $S_t$ is hidden from the observer. * The hidden states form a **Markov Chain** i.e given the value of $S_{t−1}$, the current state $S_t$ is independent of all states prior to $t − 1$. The outputs also satisfy a Markov property which is that given state $S_t$, the observation $Y_t$ is independent of all previous states and observations. * The hidden state variable $S_t$ is discrete We can write the joint distribution of a sequence of states and observations by using the Markov assumptions to factorize: * $ P(S_{1:T},Y_{1:T}) = P(S_1)P(Y_1|S_1) \prod_{t=2}^{T}P(St|S_{t−1})P(Yt|St)$ where the notation $X_{1:T}$ indicates thesequence $X_1,X_2,...,X_T$. We can view the Hiddean Markov Model graphically as a Bayesian network by unrolling over time - think of the HMM as a template for generating a Bayesian Network and the corresponding CPTs over time. In fact, it is possible to perform the temporal inference tasks using exact or approximate inference of the corresponding Bayesian network but for **HMMs** there are significantly more efficient algorithms. 
<img src="images/hmm2bayesnet.png" width="50%"/> ### Specifying an HMM So all we need to do to specify an HMM are the following components: * A probability distribution over the intial state $P(S_1)$ * The $K$ by $K$ state transition matrix $P(St|St−1)$, where $K$ is the number of states * The $K$ by $L$ emission matrix $P(Yt|St)$ if $Y_t$ is discrete and has $L$ values, or the parameters $θ_t$ of some form of continuous probability density function if $Yt$ is continuous. ### Learning the transition and sensor models In addition to these tasks, we need methods for learning the transition and sensor models from observations. The basic idea is that inference provides an estimate of what transitions actually occurred and what states generated the observations. These estimates can then be used to update the models and the process can be repeated. This is an instance of the expectation-maximization (EM) algorithm. We will talk about learning probabilistic models in Chapter 20 Learning Probabilistic Models. ### Sketch of filtering and prediction (Forward) We perform recursive estimation. First the current state distribution is projected forward from $t$ to $t + 1$. Then it is updated using the new evidence $e_{t+1}$. We will not cover the details but it can be done by recursive application of Bayes rule and the Markov property of evidence and the sum/product rules. We can think of the filtered estimate $P(X_t|e_{1:t})$ as a “message” that is propagated forward along the sequence, modified by each transition, and updated by each new observation. ### Sketch of smoothing (Backward) There are two parts to computing the distribution over past states given evidence up to the present. The first is the evidence up to $k$, and then the evidence from $k + 1$ to $t$. The forward message can be computed as by filtering from $1$ to $k$. Using conditional independence and the sum and product rules we can form a backward message that runs backwards from $t$. It is possible to combine both steps in one pass to smooth the entire sequence. This is, not surprisingly, called the **Foward-Backward** algorithm. ### Finding the most likely sequence View each sequence of states as a path through a graph whose nodes are the possible states at each time step. The task is to find the most likely path through this graph, where the likelihood of any path is the product of the transition probabilities along the path and the probabilities of the given observations at each state. Because of the **Markov** property there is a recursive relationshtip between the most likely paths to each state $x_{t+1}$ and most likely paths to each state $x_t$. By running forward along the sequence, and computing m messages at each time step we will have the probaiblity for the most likely sequence reaching each of the final states. Then we simply select the most likely one. This is called the **Vitterbi** algorithm. ### Markov Chains and Hidden Markov Models Example We start with random variables and a simple independent, identically distributed model for weather. Then we look into how to form a Markov Chain to transition between states and finally we sample a Hidden Markov Model to show how the samples are generated based on the Markov Chain of the hidden states. The results are visualized as strips of colored rectangles. Experiments with the transition probabilities and the emission probabilities can lead to better understanding of how Hidden Markov Models work in terms of generating data. 
``` %matplotlib inline import matplotlib.pyplot as plt from scipy import stats import numpy as np from hmmlearn import hmm class Random_Variable: def __init__(self, name, values, probability_distribution): self.name = name self.values = values self.probability_distribution = probability_distribution if all(type(item) is np.int64 for item in values): self.type = 'numeric' self.rv = stats.rv_discrete(name = name, values = (values, probability_distribution)) elif all(type(item) is str for item in values): self.type = 'symbolic' self.rv = stats.rv_discrete(name = name, values = (np.arange(len(values)), probability_distribution)) self.symbolic_values = values else: self.type = 'undefined' def sample(self,size): if (self.type =='numeric'): return self.rv.rvs(size=size) elif (self.type == 'symbolic'): numeric_samples = self.rv.rvs(size=size) mapped_samples = [self.values[x] for x in numeric_samples] return mapped_samples def probs(self): return self.probability_distribution def vals(self): print(self.type) return self.values ``` ### Generating random weather samples with a IID model with no time dependencies Let's first create some random samples of a symbolic random variable corresponding to the weather with two values Sunny (S) and cloudy (C) and generate random weather for 365 days. The assumption in this model is that the weather of each day is indepedent of the previous days and drawn from the same probability distribution. ``` values = ['S', 'C'] probabilities = [0.9, 0.1] weather = Random_Variable('weather', values, probabilities) samples = weather.sample(365) print(",".join(samples)) ``` Now let lets visualize these samples using yellow for sunny and grey for cloudy ``` state2color = {} state2color['S'] = 'yellow' state2color['C'] = 'grey' def plot_weather_samples(samples, state2color, title): colors = [state2color[x] for x in samples] x = np.arange(0, len(colors)) y = np.ones(len(colors)) plt.figure(figsize=(10,1)) plt.bar(x, y, color=colors, width=1) plt.title(title) plot_weather_samples(samples, state2color, 'iid') ``` ### Markov Chain Now instead of independently sampling the weather random variable lets form a markov chain. The Markov chain will start at a particular state and then will either stay in the same state or transition to a different state based on a transition probability matrix. To accomplish that we basically create a random variable for each row of the transition matrix that basically corresponds to the probabilities of the transitions emanating fromt the state corresponding to that row. Then we can use the markov chain to generate sequences of samples and contrast these sequence with the iid weather model. By adjusting the transition probabilities you can in a probabilistic way control the different lengths of "stretches" of the same state. 
``` def markov_chain(transmat, state, state_names, samples): (rows, cols) = transmat.shape rvs = [] values = list(np.arange(0,rows)) # create random variables for each row of transition matrix for r in range(rows): rv = Random_Variable("row" + str(r), values, transmat[r]) rvs.append(rv) # start from initial state and then sample the appropriate # random variable based on the state following the transitions states = [] for n in range(samples): state = rvs[state].sample(1)[0] states.append(state_names[state]) return states # transition matrices for the Markov Chain transmat1 = np.array([[0.7, 0.3], [0.2, 0.8]]) transmat2 = np.array([[0.9, 0.1], [0.1, 0.9]]) transmat3 = np.array([[0.5, 0.5], [0.5, 0.5]]) state2color = {} state2color['S'] = 'yellow' state2color['C'] = 'grey' # plot the iid model too samples = weather.sample(365) plot_weather_samples(samples, state2color, 'iid') samples1 = markov_chain(transmat1,0,['S','C'], 365) plot_weather_samples(samples1, state2color, 'markov chain 1') samples2 = markov_chain(transmat2,0,['S','C'],365) plot_weather_samples(samples2, state2color, 'marov_chain 2') samples3 = markov_chain(transmat3,0,['S','C'], 365) plot_weather_samples(samples3, state2color, 'markov_chain 3') ``` ### Generating samples using a Hidden Markov Model Lets now look at how a Hidden Markov Model would work by having a Markov Chain to generate a sequence of states and for each state having a different emission probability. When sunny we will output red or yellow with higher probabilities and when cloudy black or blue. First we will write the code directly and then we will use the hmmlearn package. ``` state2color = {} state2color['S'] = 'yellow' state2color['C'] = 'grey' # generate random samples for a year samples = weather.sample(365) states = markov_chain(transmat1,0,['S','C'], 365) plot_weather_samples(states, state2color, "markov chain 1") # create two random variables one of the sunny state and one for the cloudy sunny_colors = Random_Variable('sunny_colors', ['y', 'r', 'b', 'g'], [0.6, 0.3, 0.1, 0.0]) cloudy_colors = Random_Variable('cloudy_colors', ['y', 'r', 'b', 'g'], [0.0, 0.1, 0.4, 0.5]) def emit_obs(state, sunny_colors, cloudy_colors): if (state == 'S'): obs = sunny_colors.sample(1)[0] else: obs = cloudy_colors.sample(1)[0] return obs # iterate over the sequence of states and emit color based on the emission probabilities obs = [emit_obs(s, sunny_colors, cloudy_colors) for s in states] obs2color = {} obs2color['y'] = 'yellow' obs2color['r'] = 'red' obs2color['b'] = 'blue' obs2color['g'] = 'grey' plot_weather_samples(obs, obs2color, "Observed sky color") # let's zoom in a month plot_weather_samples(states[0:30], state2color, 'states for a month') plot_weather_samples(obs[0:30], obs2color, 'observations for a month') ``` ### Multinomial HMM Lets do the same generation process using the multinomail HMM model supported by the *hmmlearn* python package. 
``` transmat = np.array([[0.7, 0.3], [0.2, 0.8]]) start_prob = np.array([1.0, 0.0]) # yellow and red have high probs for sunny # blue and grey have high probs for cloudy emission_probs = np.array([[0.6, 0.3, 0.1, 0.0], [0.0, 0.1, 0.4, 0.5]]) model = hmm.MultinomialHMM(n_components=2) model.startprob_ = start_prob model.transmat_ = transmat model.emissionprob_ = emission_probs # sample the model - X is the observed values # and Z is the "hidden" states X, Z = model.sample(365) # we have to re-define state2color and obj2color as the hmm-learn # package just outputs numbers for the states state2color = {} state2color[0] = 'yellow' state2color[1] = 'grey' plot_weather_samples(Z, state2color, 'states') samples = [item for sublist in X for item in sublist] obj2color = {} obj2color[0] = 'yellow' obj2color[1] = 'red' obj2color[2] = 'blue' obj2color[3] = 'grey' plot_weather_samples(samples, obj2color, 'observations') ``` ### Estimating the parameters of an HMM Let's sample the generative HMM and get a sequence of 1000 observations. Now we can learn in an unsupervised way the paraemters of a two component multinomial HMM just using these observations. Then we can compare the learned parameters with the original parameters of the model used to generate the observations. Notice that the order of the components is different between the original and estimated models. Notice that hmmlearn does NOT directly support supervised training where you have both the labels and observations. It is possible to initialize a HMM model with some of the parameters and learn the others. For example you can initialize the transition matrix and learn the emission probabilities. That way you could implement supervised learning for a multinomial HMM. In many practical applications the hidden labels are not available and that's the hard case that is actually implemented in hmmlearn. The following two cells take a few minutes to compute on a typical laptop. ``` # generate the samples X, Z = model.sample(10000) # learn a new model estimated_model = hmm.MultinomialHMM(n_components=2, n_iter=10000).fit(X) ``` Let's compare the estimated model parameters with the original model. ``` print("Transition matrix") print("Estimated model:") print(estimated_model.transmat_) print("Original model:") print(model.transmat_) print("Emission probabilities") print("Estimated model") print(estimated_model.emissionprob_) print("Original model") print(model.emissionprob_) ``` ### Predicting a sequence of states given a sequence of observations We can also use the trained HMM model to predict a sequence of hidden states given a sequence of observations. This is the task of maximum likelihood sequence estimation. For example in Speech Recognition it would correspond to estimating a sequence of phonemes (hidden states) from a sequence of observations (acoustic vectors). This cell also takes a few minutes to compute. Note that whether the predicted or flipped predicted states correspond to the original depends on which state is selected as state0 and state1. So sometimes when you run the notebook the predicted states will be the right color some times the flipped states will be the right ones. ``` Z2 = estimated_model.predict(X) state2color = {} state2color[0] = 'yellow' state2color[1] = 'grey' plot_weather_samples(Z, state2color, 'Original states') plot_weather_samples(Z2, state2color, 'Predicted states') # note the reversal of colors for the states as the order of components is not the same. 
# we can easily fix this by changing state2color
state2color = {}
state2color[1] = 'yellow'
state2color[0] = 'grey'
plot_weather_samples(Z2, state2color, 'Flipped Predicted states')
```
The estimated model can be sampled just like the original model
```
X, Z = estimated_model.sample(365)

state2color = {}
state2color[0] = 'yellow'
state2color[1] = 'grey'
plot_weather_samples(Z, state2color, 'states generated by estimated model')

samples = [item for sublist in X for item in sublist]
obs2color = {}
obs2color[0] = 'yellow'
obs2color[1] = 'red'
obs2color[2] = 'blue'
obs2color[3] = 'grey'
plot_weather_samples(samples, obs2color, 'observations generated by estimated model')
```
### An example of filtering
<img src="images/rain_umbrella_hmm.png" width="75%"/>

* Day 0: no observations, $P(R_0) = \langle 0.5, 0.5 \rangle$
* Day 1: the umbrella appears, $U_{1} = true$.
    * The prediction step from $t=0$ to $t=1$ is $P(R_1) = \sum_{r_0} P(R_1 | r_0) P(r_0) = \langle 0.7, 0.3 \rangle \times 0.5 + \langle 0.3, 0.7 \rangle \times 0.5 = \langle 0.5, 0.5 \rangle$
    * The update step simply multiplies in the probability of the evidence for $t=1$ and normalizes: $P(R_1|u_1) = \alpha P(u_{1} | R_{1}) P(R_1) = \alpha \langle 0.9, 0.2 \rangle \times \langle 0.5, 0.5 \rangle = \alpha \langle 0.45, 0.1 \rangle \approx \langle 0.818, 0.182 \rangle$
* Day 2: the umbrella appears again, $U_{2} = true$.
    * The prediction step from $t=1$ to $t=2$ is $P(R_2 | u_1) = \sum_{r_1} P(R_2 | r_1) P(r_1 | u_1) = \langle 0.7, 0.3 \rangle \times 0.818 + \langle 0.3, 0.7 \rangle \times 0.182 \approx \langle 0.627, 0.373 \rangle$
    * Updating with the evidence for $t=2$ gives: $P(R_2 | u_1, u_2) = \alpha P(u_2 | R_2) P(R_2 | u_1) = \alpha \langle 0.9, 0.2 \rangle \times \langle 0.627, 0.373 \rangle = \alpha \langle 0.565, 0.075 \rangle \approx \langle 0.883, 0.117 \rangle$

Intuitively, the probability of rain increases from day 1 to day 2 because the rain persists.
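The same two filtering steps can be written as a short recursion in code. This is a minimal sketch of the forward update for the umbrella world, using the transition and sensor probabilities from the figure; it reproduces the hand-computed posteriors above.
```
import numpy as np

# State order: [rain, no rain]
T = np.array([[0.7, 0.3],                # row = previous state, column = next state
              [0.3, 0.7]])
sensor = {True:  np.array([0.9, 0.2]),   # P(umbrella = true  | state)
          False: np.array([0.1, 0.8])}   # P(umbrella = false | state)

belief = np.array([0.5, 0.5])            # P(R_0)
for t, umbrella in enumerate([True, True], start=1):
    predicted = belief @ T                 # prediction step
    belief = sensor[umbrella] * predicted  # weight by the evidence
    belief = belief / belief.sum()         # normalize
    print("P(R_{} | evidence) =".format(t), belief.round(3))
# prints approximately [0.818 0.182] and then [0.883 0.117]
```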
``` import numpy as np import matplotlib.pyplot as plt import seaborn as sns # Plot style sns.set() %pylab inline pylab.rcParams['figure.figsize'] = (4, 4) # Avoid inaccurate floating values (for inverse matrices in dot product for instance) # See https://stackoverflow.com/questions/24537791/numpy-matrix-inversion-rounding-errors np.set_printoptions(suppress=True) def plotVectors(vecs, cols, alpha=1): """ Plot set of vectors. Parameters ---------- vecs : array-like Coordinates of the vectors to plot. Each vectors is in an array. For instance: [[1, 3], [2, 2]] can be used to plot 2 vectors. cols : array-like Colors of the vectors. For instance: ['red', 'blue'] will display the first vector in red and the second in blue. alpha : float Opacity of vectors Returns: fig : instance of matplotlib.figure.Figure The figure of the vectors """ plt.axvline(x=0, color='#A9A9A9', zorder=0) plt.axhline(y=0, color='#A9A9A9', zorder=0) for i in range(len(vecs)): if (isinstance(alpha, list)): alpha_i = alpha[i] else: alpha_i = alpha x = np.concatenate([[0,0],vecs[i]]) plt.quiver([x[0]], [x[1]], [x[2]], [x[3]], angles='xy', scale_units='xy', scale=1, color=cols[i], alpha=alpha_i) ``` $$ \newcommand\bs[1]{\boldsymbol{#1}} \newcommand\norm[1]{\left\lVert#1\right\rVert} $$ # Introduction We will see some major concepts of linear algebra in this chapter. It is also quite heavy so hang on! We will start with getting some ideas on eigenvectors and eigenvalues. We will develop on the idea that a matrix can be seen as a linear transformation and that applying a matrix on its eigenvectors gives new vectors with the same direction. Then we will see how to express quadratic equations into the matrix form. We will see that the eigendecomposition of the matrix corresponding to a quadratic equation can be used to find the minimum and maximum of this function. As a bonus, we will also see how to visualize linear transformations in Python! # 2.7 Eigendecomposition The eigendecomposition is one form of matrix decomposition. Decomposing a matrix means that we want to find a product of matrices that is equal to the initial matrix. In the case of the eigendecomposition, we decompose the initial matrix into the product of its eigenvectors and eigenvalues. Before all, let's see what are eigenvectors and eigenvalues. # Matrices as linear transformations As we have seen in [2.3](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.3-Identity-and-Inverse-Matrices/) with the example of the identity matrix, you can think of matrices as linear transformations. Some matrices will rotate your space, others will rescale it etc. So when we apply a matrix to a vector, we end up with a transformed version of the vector. When we say that we 'apply' the matrix to the vector it means that we calculate the dot product of the matrix with the vector. We will start with a basic example of this kind of transformation. ### Example 1. ``` A = np.array([[-1, 3], [2, -2]]) A v = np.array([[2], [1]]) v ``` Let's plot this vector: ``` plotVectors([v.flatten()], cols=['#1190FF']) plt.ylim(-1, 4) plt.xlim(-1, 4) ``` Now, we will apply the matrix $\bs{A}$ to this vector and plot the old vector (light blue) and the new one (orange): ``` Av = A.dot(v) print(Av) plotVectors([v.flatten(), Av.flatten()], cols=['#1190FF', '#FF9A13']) plt.ylim(-1, 4) plt.xlim(-1, 4) ``` We can see that applying the matrix $\bs{A}$ has the effect of modifying the vector. 
Now that you can think of matrices as linear transformation recipes, let's see the case of a very special type of vector: the eigenvector. # Eigenvectors and eigenvalues We have seen an example of a vector transformed by a matrix. Now imagine that the transformation of the initial vector gives us a new vector that has the exact same direction. The scale can be different but the direction is the same. Applying the matrix didn't change the direction of the vector. This special vector is called an eigenvector of the matrix. We will see that finding the eigenvectors of a matrix can be very useful. <span class='pquote'> Imagine that the transformation of the initial vector by the matrix gives a new vector with the exact same direction. This vector is called an eigenvector of $\bs{A}$. </span> This means that $\bs{v}$ is a eigenvector of $\bs{A}$ if $\bs{v}$ and $\bs{Av}$ are in the same direction or to rephrase it if the vectors $\bs{Av}$ and $\bs{v}$ are parallel. The output vector is just a scaled version of the input vector. This scalling factor is $\lambda$ which is called the **eigenvalue** of $\bs{A}$. $$ \bs{Av} = \lambda\bs{v} $$ ### Example 2. Let's $\bs{A}$ be the following matrix: $$ \bs{A}= \begin{bmatrix} 5 & 1\\\\ 3 & 3 \end{bmatrix} $$ We know that one eigenvector of A is: $$ \bs{v}= \begin{bmatrix} 1\\\\ 1 \end{bmatrix} $$ We can check that $\bs{Av} = \lambda\bs{v}$: $$ \begin{bmatrix} 5 & 1\\\\ 3 & 3 \end{bmatrix} \begin{bmatrix} 1\\\\ 1 \end{bmatrix}=\begin{bmatrix} 6\\\\ 6 \end{bmatrix} $$ We can see that: $$ 6\times \begin{bmatrix} 1\\\\ 1 \end{bmatrix} = \begin{bmatrix} 6\\\\ 6 \end{bmatrix} $$ which means that $\bs{v}$ is well an eigenvector of $\bs{A}$. Also, the corresponding eigenvalue is $\lambda=6$. We can represent $\bs{v}$ and $\bs{Av}$ to check if their directions are the same: ``` A = np.array([[5, 1], [3, 3]]) A v = np.array([[1], [1]]) v Av = A.dot(v) orange = '#FF9A13' blue = '#1190FF' plotVectors([Av.flatten(), v.flatten()], cols=[blue, orange]) plt.ylim(-1, 7) plt.xlim(-1, 7) ``` We can see that their directions are the same! Another eigenvector of $\bs{A}$ is $$ \bs{v}= \begin{bmatrix} 1\\\\ -3 \end{bmatrix} $$ because $$ \begin{bmatrix} 5 & 1\\\\ 3 & 3 \end{bmatrix}\begin{bmatrix} 1\\\\ -3 \end{bmatrix} = \begin{bmatrix} 2\\\\ -6 \end{bmatrix} $$ and $$ 2 \times \begin{bmatrix} 1\\\\ -3 \end{bmatrix} = \begin{bmatrix} 2\\\\ -6 \end{bmatrix} $$ So the corresponding eigenvalue is $\lambda=2$. ``` v = np.array([[1], [-3]]) v Av = A.dot(v) plotVectors([Av.flatten(), v.flatten()], cols=[blue, orange]) plt.ylim(-7, 1) plt.xlim(-1, 3) ``` This example shows that the eigenvectors $\bs{v}$ are vectors that change only in scale when we apply the matrix $\bs{A}$ to them. Here the scales were 6 for the first eigenvector and 2 to the second but $\lambda$ can take any real or even complex value. ## Find eigenvalues and eigenvectors in Python Numpy provides a function returning eigenvectors and eigenvalues (the first array corresponds to the eigenvalues and the second to the eigenvectors concatenated in columns): ```python (array([ 6., 2.]), array([[ 0.70710678, -0.31622777], [ 0.70710678, 0.9486833 ]])) ``` Here a demonstration with the preceding example. ``` A = np.array([[5, 1], [3, 3]]) A np.linalg.eig(A) ``` We can see that the eigenvalues are the same than the ones we used before: 6 and 2 (first array). The eigenvectors correspond to the columns of the second array. 
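As a quick sanity check (an addition, not in the original text), we can verify numerically that each column returned by `np.linalg.eig` satisfies $\bs{Av} = \lambda\bs{v}$:

```
eigVals, eigVecs = np.linalg.eig(A)

for i in range(len(eigVals)):
    v_i = eigVecs[:, i]
    # A times an eigenvector equals the eigenvalue times that eigenvector
    print(np.allclose(A.dot(v_i), eigVals[i] * v_i))
```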
This means that the eigenvector corresponding to $\lambda=6$ is: $$ \begin{bmatrix} 0.70710678\\\\ 0.70710678 \end{bmatrix} $$ The eigenvector corresponding to $\lambda=2$ is: $$ \begin{bmatrix} -0.31622777\\\\ 0.9486833 \end{bmatrix} $$ The eigenvectors look different because they have not necessarly the same scaling than the ones we gave in the example. We can easily see that the first corresponds to a scaled version of our $\begin{bmatrix} 1\\\\ 1 \end{bmatrix}$. But the same property stands. We have still $\bs{Av} = \lambda\bs{v}$: $$ \begin{bmatrix} 5 & 1\\\\ 3 & 3 \end{bmatrix} \begin{bmatrix} 0.70710678\\\\ 0.70710678 \end{bmatrix}= \begin{bmatrix} 4.24264069\\\\ 4.24264069 \end{bmatrix} $$ With $0.70710678 \times 6 = 4.24264069$. So there are an infinite number of eigenvectors corresponding to the eigenvalue $6$. They are equivalent because we are interested by their directions. For the second eigenvector we can check that it corresponds to a scaled version of $\begin{bmatrix} 1\\\\ -3 \end{bmatrix}$. We can draw these vectors and see if they are parallel. ``` v = np.array([[1], [-3]]) Av = A.dot(v) v_np = [-0.31622777, 0.9486833] plotVectors([Av.flatten(), v.flatten(), v_np], cols=[blue, orange, 'blue']) plt.ylim(-7, 1) plt.xlim(-1, 3) ``` We can see that the vector found with Numpy (in dark blue) is a scaled version of our preceding $\begin{bmatrix} 1\\\\ -3 \end{bmatrix}$. ## Rescaled vectors As we saw it with numpy, if $\bs{v}$ is an eigenvector of $\bs{A}$, then any rescaled vector $s\bs{v}$ is also an eigenvector of $\bs{A}$. The eigenvalue of the rescaled vector is the same. Let's try to rescale $$ \bs{v}= \begin{bmatrix} 1\\\\ -3 \end{bmatrix} $$ from our preceding example. For instance, $$ \bs{3v}= \begin{bmatrix} 3\\\\ -9 \end{bmatrix} $$ $$ \begin{bmatrix} 5 & 1\\\\ 3 & 3 \end{bmatrix} \begin{bmatrix} 3\\\\ -9 \end{bmatrix} = \begin{bmatrix} 6\\\\ 18 \end{bmatrix} = 2 \times \begin{bmatrix} 3\\\\ -9 \end{bmatrix} $$ We have well $\bs{A}\times 3\bs{v} = \lambda\bs{v}$ and the eigenvalue is still $\lambda=2$. ## Concatenating eigenvalues and eigenvectors Now that we have an idea of what eigenvectors and eigenvalues are we can see how it can be used to decompose a matrix. All eigenvectors of a matrix $\bs{A}$ can be concatenated in a matrix with each column corresponding to each eigenvector (like in the second array return by `np.linalg.eig(A)`): $$ \bs{V}= \begin{bmatrix} 1 & 1\\\\ 1 & -3 \end{bmatrix} $$ The first column $ \begin{bmatrix} 1\\\\ 1 \end{bmatrix} $ corresponds to $\lambda=6$ and the second $ \begin{bmatrix} 1\\\\ -3 \end{bmatrix} $ to $\lambda=2$. The vector $\bs{\lambda}$ can be created from all eigenvalues: $$ \bs{\lambda}= \begin{bmatrix} 6\\\\ 2 \end{bmatrix} $$ Then the eigendecomposition is given by $$ \bs{A}=\bs{V}\cdot diag(\bs{\lambda}) \cdot \bs{V}^{-1} $$ <span class='pquote'> We can decompose the matrix $\bs{A}$ with eigenvectors and eigenvalues. It is done with: $\bs{A}=\bs{V}\cdot diag(\bs{\lambda}) \cdot \bs{V}^{-1}$ </span> $diag(\bs{v})$ is a diagonal matrix (see [2.6](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.6-Special-Kinds-of-Matrices-and-Vectors/)) containing all the eigenvalues. Continuing with our example we have $$ \bs{V}=\begin{bmatrix} 1 & 1\\\\ 1 & -3 \end{bmatrix} $$ The diagonal matrix is all zeros except the diagonal that is our vector $\bs{\lambda}$. 
$$ diag(\bs{v})= \begin{bmatrix} 6 & 0\\\\ 0 & 2 \end{bmatrix} $$ The inverse matrix of $\bs{V}$ can be calculated with numpy: ``` V = np.array([[1, 1], [1, -3]]) V V_inv = np.linalg.inv(V) V_inv ``` So let's plug $$ \bs{V}^{-1}=\begin{bmatrix} 0.75 & 0.25\\\\ 0.25 & -0.25 \end{bmatrix} $$ into our equation: $$ \begin{align*} &\bs{V}\cdot diag(\bs{\lambda}) \cdot \bs{V}^{-1}\\\\ &= \begin{bmatrix} 1 & 1\\\\ 1 & -3 \end{bmatrix} \begin{bmatrix} 6 & 0\\\\ 0 & 2 \end{bmatrix} \begin{bmatrix} 0.75 & 0.25\\\\ 0.25 & -0.25 \end{bmatrix} \end{align*} $$ If we do the dot product of the first two matrices we have: $$ \begin{bmatrix} 1 & 1\\\\ 1 & -3 \end{bmatrix} \begin{bmatrix} 6 & 0\\\\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 6 & 2\\\\ 6 & -6 \end{bmatrix} $$ So with replacing into the equation: $$ \begin{align*} &\begin{bmatrix} 6 & 2\\\\ 6 & -6 \end{bmatrix} \begin{bmatrix} 0.75 & 0.25\\\\ 0.25 & -0.25 \end{bmatrix}\\\\ &= \begin{bmatrix} 6\times0.75 + (2\times0.25) & 6\times0.25 + (2\times-0.25)\\\\ 6\times0.75 + (-6\times0.25) & 6\times0.25 + (-6\times-0.25) \end{bmatrix}\\\\ &= \begin{bmatrix} 5 & 1\\\\ 3 & 3 \end{bmatrix}= \bs{A} \end{align*} $$ Let's check our result with Python: ``` lambdas = np.diag([6,2]) lambdas V.dot(lambdas).dot(V_inv) ``` That confirms our previous calculation. ## Real symmetric matrix In the case of real symmetric matrices (more details about symmetric matrices in [2.6](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.6-Special-Kinds-of-Matrices-and-Vectors/)), the eigendecomposition can be expressed as $$ \bs{A} = \bs{Q}\Lambda \bs{Q}^\text{T} $$ where $\bs{Q}$ is the matrix with eigenvectors as columns and $\Lambda$ is $diag(\lambda)$. ### Example 3. $$ \bs{A}=\begin{bmatrix} 6 & 2\\\\ 2 & 3 \end{bmatrix} $$ This matrix is symmetric because $\bs{A}=\bs{A}^\text{T}$. Its eigenvectors are: $$ \bs{Q}= \begin{bmatrix} 0.89442719 & -0.4472136\\\\ 0.4472136 & 0.89442719 \end{bmatrix} $$ and its eigenvalues put in a diagonal matrix gives: $$ \bs{\Lambda}= \begin{bmatrix} 7 & 0\\\\ 0 & 2 \end{bmatrix} $$ So let's begin to calculate $\bs{Q\Lambda}$: $$ \begin{align*} \bs{Q\Lambda}&= \begin{bmatrix} 0.89442719 & -0.4472136\\\\ 0.4472136 & 0.89442719 \end{bmatrix} \begin{bmatrix} 7 & 0\\\\ 0 & 2 \end{bmatrix}\\\\ &= \begin{bmatrix} 0.89442719 \times 7 & -0.4472136\times 2\\\\ 0.4472136 \times 7 & 0.89442719\times 2 \end{bmatrix}\\\\ &= \begin{bmatrix} 6.26099033 & -0.8944272\\\\ 3.1304952 & 1.78885438 \end{bmatrix} \end{align*} $$ with: $$ \bs{Q}^\text{T}= \begin{bmatrix} 0.89442719 & 0.4472136\\\\ -0.4472136 & 0.89442719 \end{bmatrix} $$ So we have: $$ \begin{align*} \bs{Q\Lambda} \bs{Q}^\text{T}&= \begin{bmatrix} 6.26099033 & -0.8944272\\\\ 3.1304952 & 1.78885438 \end{bmatrix} \begin{bmatrix} 0.89442719 & 0.4472136\\\\ -0.4472136 & 0.89442719 \end{bmatrix}\\\\ &= \begin{bmatrix} 6 & 2\\\\ 2 & 3 \end{bmatrix} \end{align*} $$ It works! For that reason, it can useful to use symmetric matrices! Let's do the same things easily with `linalg` from numpy: ``` A = np.array([[6, 2], [2, 3]]) A eigVals, eigVecs = np.linalg.eig(A) eigVecs eigVals = np.diag(eigVals) eigVals eigVecs.dot(eigVals).dot(eigVecs.T) ``` We can see that the result corresponds to our initial matrix. # Quadratic form to matrix form Eigendecomposition can be used to optimize quadratic functions. We will see that when $\bs{x}$ takes the values of an eigenvector, $f(\bs{x})$ takes the value of its corresponding eigenvalue. 
<span class='pquote'> When $\bs{x}$ takes the values of an eigenvector, $f(\bs{x})$ takes the value of its corresponding eigenvalue. </span> We will see in the following points how we can show that with different methods. Let's have the following quadratic equation: $$ f(\bs{x}) = ax_1^2 +(b+c)x_1x_2 + dx_2^2 $$ These quadratic forms can be generated by matrices: $$ f(\bs{x})= \begin{bmatrix} x_1 & x_2 \end{bmatrix}\begin{bmatrix} a & b\\\\ c & d \end{bmatrix}\begin{bmatrix} x_1\\\\ x_2 \end{bmatrix} = \bs{x^\text{T}Ax} $$ with: $$ \bs{x} = \begin{bmatrix} x_1\\\\ x_2 \end{bmatrix} $$ and $$ \bs{A}=\begin{bmatrix} a & b\\\\ c & d \end{bmatrix} $$ We call them matrix forms. This form is useful to do various things on the quadratic equation like constrained optimization (see bellow). <span class='pquote'> Quadratic equations can be expressed under the matrix form </span> If you look at the relation between these forms you can see that $a$ gives you the number of $x_1^2$, $(b + c)$ the number of $x_1x_2$ and $d$ the number of $x_2^2$. This means that the same quadratic form can be obtained from infinite number of matrices $\bs{A}$ by changing $b$ and $c$ while preserving their sum. ### Example 4. $$ \bs{x} = \begin{bmatrix} x_1\\\\ x_2 \end{bmatrix} $$ and $$ \bs{A}=\begin{bmatrix} 2 & 4\\\\ 2 & 5 \end{bmatrix} $$ gives the following quadratic form: $$ 2x_1^2 + (4+2)x_1x_2 + 5x_2^2\\\\=2x_1^2 + 6x_1x_2 + 5x_2^2 $$ but if: $$ \bs{A}=\begin{bmatrix} 2 & -3\\\\ 9 & 5 \end{bmatrix} $$ we still have the quadratic same form: $$ 2x_1^2 + (-3+9)x_1x_2 + 5x_2^2\\\\=2x_1^2 + 6x_1x_2 + 5x_2^2 $$ ### Example 5 For this example, we will go from the matrix form to the quadratic form using a symmetric matrix $\bs{A}$. Let's use the matrix of the example 3. $$ \bs{x} = \begin{bmatrix} x_1\\\\ x_2 \end{bmatrix} $$ and $$\bs{A}=\begin{bmatrix} 6 & 2\\\\ 2 & 3 \end{bmatrix} $$ $$ \begin{align*} \bs{x^\text{T}Ax}&= \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 6 & 2\\\\ 2 & 3 \end{bmatrix} \begin{bmatrix} x_1\\\\ x_2 \end{bmatrix}\\\\ &= \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 6 x_1 + 2 x_2\\\\ 2 x_1 + 3 x_2 \end{bmatrix}\\\\ &= x_1(6 x_1 + 2 x_2) + x_2(2 x_1 + 3 x_2)\\\\ &= 6 x_1^2 + 4 x_1x_2 + 3 x_2^2 \end{align*} $$ Our quadratic equation is thus $6 x_1^2 + 4 x_1x_2 + 3 x_2^2$. ### Note If $\bs{A}$ is a diagonal matrix (all 0 except the diagonal), the quadratic form of $\bs{x^\text{T}Ax}$ will have no cross term. Take the following matrix form: $$ \bs{A}=\begin{bmatrix} a & b\\\\ c & d \end{bmatrix} $$ If $\bs{A}$ is diagonal, then $b$ and $c$ are 0 and since $f(\bs{x}) = ax_1^2 +(b+c)x_1x_2 + dx_2^2$ there is no cross term. A quadratic form without cross term is called diagonal form since it comes from a diagonal matrix. # Change of variable A change of variable (or linear substitution) simply means that we replace a variable by another one. We will see that it can be used to remove the cross terms in our quadratic equation. Without the cross term, it will then be easier to characterize the function and eventually optimize it (i.e finding its maximum or minimum). ## With the quadratic form ### Example 6. Let's take again our previous quadratic form: $$ \bs{x^\text{T}Ax} = 6 x_1^2 + 4 x_1x_2 + 3 x_2^2 $$ The change of variable will concern $x_1$ and $x_2$. We can replace $x_1$ with any combination of $y_1$ and $y_2$ and $x_2$ with any combination $y_1$ and $y_2$. We will of course end up with a new equation. 
The nice thing is that we can find a specific substitution that simplifies the equation. Specifically, it can be used to get rid of the cross term (in our example: $4 x_1x_2$). We will see later why this is interesting.

The right substitution is given by the eigenvectors of the matrix used to generate the quadratic form. Let's recall that the matrix form of our equation is:

$$
\bs{x} = \begin{bmatrix}
    x_1\\\\
    x_2
\end{bmatrix}
$$

and

$$\bs{A}=\begin{bmatrix}
    6 & 2\\\\
    2 & 3
\end{bmatrix}
$$

and that the eigenvectors of $\bs{A}$ are:

$$
\begin{bmatrix}
    0.89442719 & -0.4472136\\\\
    0.4472136 & 0.89442719
\end{bmatrix}
$$

To keep the expressions exact, we can write these values as:

$$
\begin{bmatrix}
    \frac{2}{\sqrt{5}} & -\frac{1}{\sqrt{5}}\\\\
    \frac{1}{\sqrt{5}} & \frac{2}{\sqrt{5}}
\end{bmatrix} =
\frac{1}{\sqrt{5}}
\begin{bmatrix}
    2 & -1\\\\
    1 & 2
\end{bmatrix}
$$

So our first eigenvector is:

$$
\frac{1}{\sqrt{5}}
\begin{bmatrix}
    2\\\\
    1
\end{bmatrix}
$$

and our second eigenvector is:

$$
\frac{1}{\sqrt{5}}
\begin{bmatrix}
    -1\\\\
    2
\end{bmatrix}
$$

The change of variable will lead to:

$$
\begin{bmatrix}
    x_1\\\\
    x_2
\end{bmatrix} =
\frac{1}{\sqrt{5}}
\begin{bmatrix}
    2 & -1\\\\
    1 & 2
\end{bmatrix}
\begin{bmatrix}
    y_1\\\\
    y_2
\end{bmatrix} =
\frac{1}{\sqrt{5}}
\begin{bmatrix}
    2y_1 - y_2\\\\
    y_1 + 2y_2
\end{bmatrix}
$$

so we have

$$
\begin{cases}
x_1 = \frac{1}{\sqrt{5}}(2y_1 - y_2)\\\\
x_2 = \frac{1}{\sqrt{5}}(y_1 + 2y_2)
\end{cases}
$$

So far so good! Let's substitute that into our example:

$$
\begin{align*}
\bs{x^\text{T}Ax}
&= 6 x_1^2 + 4 x_1x_2 + 3 x_2^2\\\\
&= 6 [\frac{1}{\sqrt{5}}(2y_1 - y_2)]^2 + 4 [\frac{1}{\sqrt{5}}(2y_1 - y_2)\frac{1}{\sqrt{5}}(y_1 + 2y_2)] + 3 [\frac{1}{\sqrt{5}}(y_1 + 2y_2)]^2\\\\
&= \frac{1}{5}[6 (2y_1 - y_2)^2 + 4 (2y_1 - y_2)(y_1 + 2y_2) + 3 (y_1 + 2y_2)^2]\\\\
&= \frac{1}{5}[6 (4y_1^2 - 4y_1y_2 + y_2^2) + 4 (2y_1^2 + 4y_1y_2 - y_1y_2 - 2y_2^2) + 3 (y_1^2 + 4y_1y_2 + 4y_2^2)]\\\\
&= \frac{1}{5}(24y_1^2 - 24y_1y_2 + 6y_2^2 + 8y_1^2 + 16y_1y_2 - 4y_1y_2 - 8y_2^2 + 3y_1^2 + 12y_1y_2 + 12y_2^2)\\\\
&= \frac{1}{5}(35y_1^2 + 10y_2^2)\\\\
&= 7y_1^2 + 2y_2^2
\end{align*}
$$

That's great! Our new equation doesn't have any cross terms!

## With the Principal Axes Theorem

Actually, there is a simpler way to do the change of variable: we can stay in the matrix form. Recall that we start with the form:

<div>
$$
f(\bs{x})=\bs{x^\text{T}Ax}
$$
</div>

The linear substitution can be written in matrix terms. We want to replace the variables $\bs{x}$ by $\bs{y}$, related by:

<div>
$$
\bs{x}=P\bs{y}
$$
</div>

We want to find $P$ such that our new equation (after the change of variable) doesn't contain the cross terms. The first step is to substitute this into the first equation:

<div>
$$
\begin{align*}
\bs{x^\text{T}Ax}
&= (\bs{Py})^\text{T}\bs{A}(\bs{Py})\\\\
&= \bs{y}^\text{T}(\bs{P}^\text{T}\bs{AP})\bs{y}
\end{align*}
$$
</div>

Can you see how to transform the left-hand side ($\bs{x}$) into the right-hand side ($\bs{y}$)? The substitution is done by replacing $\bs{A}$ with $\bs{P^\text{T}AP}$. We also know that $\bs{A}$ is symmetric, so if we take $\bs{P}$ to be the matrix of its eigenvectors, there is a diagonal matrix $\bs{D}$ containing the eigenvalues of $\bs{A}$ such that $\bs{D}=\bs{P}^\text{T}\bs{AP}$.
We thus end up with: <div> $$ \bs{x^\text{T}Ax}=\bs{y^\text{T}\bs{D} y} $$ </div> <span class='pquote'> We can use $\bs{D}$ to simplify our quadratic equation and remove the cross terms </span> All of this implies that we can use $\bs{D}$ to simplify our quadratic equation and remove the cross terms. If you remember from example 2 we know that the eigenvalues of $\bs{A}$ are: <div> $$ \bs{D}= \begin{bmatrix} 7 & 0\\\\ 0 & 2 \end{bmatrix} $$ </div> <div> $$ \begin{align*} \bs{x^\text{T}Ax} &= \bs{y^\text{T}\bs{D} y}\\\\ &= \bs{y}^\text{T} \begin{bmatrix} 7 & 0\\\\ 0 & 2 \end{bmatrix} \bs{y}\\\\ &= \begin{bmatrix} y_1 & y_2 \end{bmatrix} \begin{bmatrix} 7 & 0\\\\ 0 & 2 \end{bmatrix} \begin{bmatrix} y_1\\\\ y_2 \end{bmatrix}\\\\ &= \begin{bmatrix} 7y_1 +0y_2 & 0y_1 + 2y_2 \end{bmatrix} \begin{bmatrix} y_1\\\\ y_2 \end{bmatrix}\\\\ &= 7y_1^2 + 2y_2^2 \end{align*} $$ </div> That's nice! If you look back to the change of variable that we have done in the quadratic form, you will see that we have found the same values! This form (without cross-term) is called the **principal axes form**. ### Summary To summarise, the principal axes form can be found with $$ \bs{x^\text{T}Ax} = \lambda_1y_1^2 + \lambda_2y_2^2 $$ where $\lambda_1$ is the eigenvalue corresponding to the first eigenvector and $\lambda_2$ the eigenvalue corresponding to the second eigenvector (second column of $\bs{x}$). # Finding f(x) with eigendecomposition We will see that there is a way to find $f(\bs{x})$ with eigenvectors and eigenvalues when $\bs{x}$ is a unit vector. Let's start from: $$ f(\bs{x}) =\bs{x^\text{T}Ax} $$ We know that if $\bs{x}$ is an eigenvector of $\bs{A}$ and $\lambda$ the corresponding eigenvalue, then $ \bs{Ax}=\lambda \bs{x} $. By replacing the term in the last equation we have: $$ f(\bs{x}) =\bs{x^\text{T}\lambda x} = \bs{x^\text{T}x}\lambda $$ Since $\bs{x}$ is a unit vector, $\norm{\bs{x}}_2=1$ and $\bs{x^\text{T}x}=1$ (cf. [2.5](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.5-Norms/) Norms). We end up with $$ f(\bs{x}) = \lambda $$ This is a usefull property. If $\bs{x}$ is an eigenvector of $\bs{A}$, $ f(\bs{x}) =\bs{x^\text{T}Ax}$ will take the value of the corresponding eigenvalue. We can see that this is working only if the euclidean norm of $\bs{x}$ is 1 (i.e $\bs{x}$ is a unit vector). ### Example 7 This example will show that $f(\bs{x}) = \lambda$. Let's take again the last example, the eigenvectors of $\bs{A}$ were $$ \bs{Q}= \begin{bmatrix} 0.89442719 & -0.4472136\\\\ 0.4472136 & 0.89442719 \end{bmatrix} $$ and the eigenvalues $$ \bs{\Lambda}= \begin{bmatrix} 7 & 0\\\\ 0 & 2 \end{bmatrix} $$ So if: $$ \bs{x}=\begin{bmatrix} 0.89442719 & 0.4472136 \end{bmatrix} $$ $f(\bs{x})$ should be equal to 7. Let's check that's true. $$ \begin{align*} f(\bs{x}) &= 6 x_1^2 + 4 x_1x_2 + 3 x_2^2\\\\ &= 6\times 0.89442719^2 + 4\times 0.89442719\times 0.4472136 + 3 \times 0.4472136^2\\\\ &= 7 \end{align*} $$ In the same way, if $\bs{x}=\begin{bmatrix} -0.4472136 & 0.89442719 \end{bmatrix}$, $f(\bs{x})$ should be equal to 2. $$ \begin{align*} f(\bs{x}) &= 6 x_1^2 + 4 x_1x_2 + 3 x_2^2\\\\ &= 6\times -0.4472136^2 + 4\times -0.4472136\times 0.89442719 + 3 \times 0.89442719^2\\\\ &= 2 \end{align*} $$ # Quadratic form optimization Depending to the context, optimizing a function means finding its maximum or its minimum. It is for instance widely used to minimize the error of cost functions in machine learning. 
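The property from Example 7 is exactly what the constrained optimization below exploits. Here is a quick numerical confirmation of it (an addition, not part of the original notebook), relying on the fact that `np.linalg.eig` returns unit-norm eigenvectors:

```
A = np.array([[6, 2],
              [2, 3]])
eigVals, eigVecs = np.linalg.eig(A)

# for a unit eigenvector x, x.T A x equals the corresponding eigenvalue
for i in range(len(eigVals)):
    x = eigVecs[:, i]
    print(x.dot(A).dot(x), eigVals[i])
```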
Here we will see how eigendecomposition can be used to optimize quadratic functions, and why this is easy once the cross terms are removed. The difficulty is that we want a constrained optimization: we look for the minimum or the maximum of the function when $\bs{x}$ is a unit vector.

### Example 8.

We want to optimize:

$$
f(\bs{x}) =\bs{x^\text{T}Ax} \textrm{ subject to }||\bs{x}||_2= 1
$$

In our last example we ended up with:

$$
f(\bs{x}) = 7y_1^2 + 2y_2^2
$$

And the constraint of $\bs{x}$ being a unit vector implies:

$$
||\bs{x}||_2 = 1 \Leftrightarrow x_1^2 + x_2^2 = 1
$$

We can also show that $\bs{y}$ has to be a unit vector if that is the case for $\bs{x}$. Recall that $\bs{x}=\bs{Py}$ and that $\bs{P}$ is orthogonal, so $\bs{P}^\text{T}\bs{P}=\bs{I}$:

$$
\begin{align*}
||\bs{x}||^2 &= \bs{x^\text{T}x}\\\\
&= (\bs{Py})^\text{T}(\bs{Py})\\\\
&= \bs{y}^\text{T}\bs{P}^\text{T}\bs{Py}\\\\
&= \bs{y}^\text{T}\bs{y} = ||\bs{y}||^2
\end{align*}
$$

So $\norm{\bs{x}}^2 = \norm{\bs{y}}^2 = 1$ and thus $y_1^2 + y_2^2 = 1$.

Since $y_1^2$ and $y_2^2$ cannot be negative (they are squared values), we can be sure that $2y_2^2\leq7y_2^2$. Hence:

$$
\begin{align*}
f(\bs{x}) &= 7y_1^2 + 2y_2^2\\\\
&\leq 7y_1^2 + 7y_2^2\\\\
&= 7(y_1^2+y_2^2)\\\\
&= 7
\end{align*}
$$

This means that the maximum value of $f(\bs{x})$ is 7.

The same reasoning gives the minimum of $f(\bs{x})$: since $7y_1^2\geq2y_1^2$, we have

$$
\begin{align*}
f(\bs{x}) &= 7y_1^2 + 2y_2^2\\\\
&\geq 2y_1^2 + 2y_2^2\\\\
&= 2(y_1^2+y_2^2)\\\\
&= 2
\end{align*}
$$

And the minimum of $f(\bs{x})$ is 2.

### Summary

Subject to the unit-norm constraint, the maximum of $f(\bs{x})$ is the largest eigenvalue of the corresponding matrix $\bs{A}$, and the minimum is the smallest eigenvalue. Each value is reached when $\bs{x}$ takes the value of the corresponding unit eigenvector (check back the preceding paragraph). In our example, $f(\bs{x})=7$ when $\bs{x}=\begin{bmatrix}0.89442719 & 0.4472136\end{bmatrix}$ and $f(\bs{x})=2$ when $\bs{x}=\begin{bmatrix}-0.4472136 & 0.89442719\end{bmatrix}$. This shows how useful the eigenvalues and eigenvectors are in this kind of constrained optimization.

## Graphical views

We saw that the quadratic functions $f(\bs{x}) = ax_1^2 +2bx_1x_2 + cx_2^2$ can be represented by the symmetric matrix $\bs{A}$:

$$
\bs{A}=\begin{bmatrix}
    a & b\\\\
    b & c
\end{bmatrix}
$$

Graphically, these functions can take one of three general shapes (click on the links to go to the Surface Plotter and move the shapes):

1.[Positive-definite form](https://academo.org/demos/3d-surface-plotter/?expression=x*x%2By*y&xRange=-50%2C+50&yRange=-50%2C+50&resolution=49) | 2.[Negative-definite form](https://academo.org/demos/3d-surface-plotter/?expression=-x*x-y*y&xRange=-50%2C+50&yRange=-50%2C+50&resolution=25) | 3.[Indefinite form](https://academo.org/demos/3d-surface-plotter/?expression=x*x-y*y&xRange=-50%2C+50&yRange=-50%2C+50&resolution=49)
:-------------------------:|:-------------------------:|:-------:
<img src="images/quadratic-functions-positive-definite-form.png" alt="Quadratic function with a positive definite form" title="Quadratic function with a positive definite form"> | <img src="images/quadratic-functions-negative-definite-form.png" alt="Quadratic function with a negative definite form" title="Quadratic function with a negative definite form"> | <img src="images/quadratic-functions-indefinite-form.png" alt="Quadratic function with a indefinite form" title="Quadratic function with a indefinite form">

With the constraint that $\bs{x}$ is a unit vector, the minimum of the function $f(\bs{x})$ corresponds to the smallest eigenvalue and is obtained with its corresponding eigenvector.
The maximum corresponds to the biggest eigenvalue and is obtained with its corresponding eigenvector. # Conclusion We have seen a lot of things in this chapter. We saw that linear algebra can be used to solve a variety of mathematical problems and more specifically that eigendecomposition is a powerful tool! However, it cannot be used for non square matrices. In the next chapter, we will see the Singular Value Decomposition (SVD) which is another way of decomposing matrices. The advantage of the SVD is that you can use it also with non-square matrices. # BONUS: visualizing linear transformations We can see the effect of eigenvectors and eigenvalues in linear transformation. We will see first how linear transformation works. Linear transformation is a mapping between an input vector and an output vector. Different operations like projection or rotation are linear transformations. Every linear transformations can be though as applying a matrix on the input vector. We will see the meaning of this graphically. For that purpose, let's start by drawing the set of unit vectors (they are all vectors with a norm of 1). ``` t = np.linspace(0, 2*np.pi, 100) x = np.cos(t) y = np.sin(t) plt.figure() plt.plot(x, y) plt.xlim(-1.5, 1.5) plt.ylim(-1.5, 1.5) plt.show() ``` Then, we will transform each of these points by applying a matrix $\bs{A}$. This is the goal of the function bellow that takes a matrix as input and will draw - the origin set of unit vectors - the transformed set of unit vectors - the eigenvectors - the eigenvectors scalled by their eigenvalues ``` def linearTransformation(transformMatrix): orange = '#FF9A13' blue = '#1190FF' # Create original set of unit vectors t = np.linspace(0, 2*np.pi, 100) x = np.cos(t) y = np.sin(t) # Calculate eigenvectors and eigenvalues eigVecs = np.linalg.eig(transformMatrix)[1] eigVals = np.diag(np.linalg.eig(transformMatrix)[0]) # Create vectors of 0 to store new transformed values newX = np.zeros(len(x)) newY = np.zeros(len(x)) for i in range(len(x)): unitVector_i = np.array([x[i], y[i]]) # Apply the matrix to the vector newXY = transformMatrix.dot(unitVector_i) newX[i] = newXY[0] newY[i] = newXY[1] plotVectors([eigVecs[:,0], eigVecs[:,1]], cols=[blue, blue]) plt.plot(x, y) plotVectors([eigVals[0,0]*eigVecs[:,0], eigVals[1,1]*eigVecs[:,1]], cols=[orange, orange]) plt.plot(newX, newY) plt.xlim(-5, 5) plt.ylim(-5, 5) plt.show() A = np.array([[1,-1], [-1, 4]]) linearTransformation(A) ``` We can see the unit circle in dark blue, the non scaled eigenvectors in light blue, the transformed unit circle in green and the scaled eigenvectors in yellow. It is worth noting that the eigenvectors are orthogonal here because the matrix is symmetric. Let's try with a non-symmetric matrix: ``` A = np.array([[1,1], [-1, 4]]) linearTransformation(A) ``` In this case, the eigenvectors are not orthogonal! 
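To back up that last observation numerically, here is a quick illustrative check (not part of the original notebook) of the dot product between the two eigenvectors in each case: it is essentially zero for the symmetric matrix and clearly non-zero for the non-symmetric one.

```
A_sym = np.array([[1, -1],
                  [-1, 4]])
A_nonsym = np.array([[1, 1],
                     [-1, 4]])

for name, M in [('symmetric', A_sym), ('non-symmetric', A_nonsym)]:
    eigVecs = np.linalg.eig(M)[1]
    # dot product between the two unit eigenvectors: ~0 means orthogonal
    print(name, eigVecs[:, 0].dot(eigVecs[:, 1]))
```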
# References ## Videos of Gilbert Strang - [Gilbert Strang, Lec21 MIT - Eigenvalues and eigenvectors](https://www.youtube.com/watch?v=lXNXrLcoerU) - [Gilbert Strang, Lec 21 MIT, Spring 2005](https://www.youtube.com/watch?v=lXNXrLcoerU) ## Quadratic forms - [David Lay, University of Colorado, Denver](http://math.ucdenver.edu/~esulliva/LinearAlgebra/SlideShows/07_02.pdf) - [math.stackexchange QA](https://math.stackexchange.com/questions/2207111/eigendecomposition-optimization-of-quadratic-expressions) ## Eigenvectors - [Victor Powell and Lewis Lehe - Interactive representation of eigenvectors](http://setosa.io/ev/eigenvectors-and-eigenvalues/) ## Linear transformations - [Gilbert Strang - Linear transformation](http://ia802205.us.archive.org/18/items/MIT18.06S05_MP4/30.mp4) - [Linear transformation - demo video](https://www.youtube.com/watch?v=wXCRcnbCsJA)
<a href="https://colab.research.google.com/github/constantinpape/dl-teaching-resources/blob/main/exercises/classification/5_data_augmentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Data Augmentation on CIFAR10 In this exercise we will use data augmentation to increase the available training data and thus improve the network training performance. We will use the same network architecture as in the previous exercise. ## Preparation ``` # load tensorboard extension %load_ext tensorboard # import torch and other libraries import os import numpy as np import sklearn.metrics as metrics import matplotlib.pyplot as plt import torch import torch.nn as nn from torch.utils.data import DataLoader from torch.optim import Adam !pip install cifar2png # check if we have gpu support # colab offers free gpus, however they are not activated by default. # to activate the gpu, go to 'Runtime->Change runtime type'. # Then select 'GPU' in 'Hardware accelerator' and click 'Save' have_gpu = torch.cuda.is_available() # we need to define the device for torch, yadda yadda if have_gpu: print("GPU is available") device = torch.device('cuda') else: print("GPU is not available, training will run on the CPU") device = torch.device('cpu') # run this in google colab to get the utils.py file !wget https://raw.githubusercontent.com/constantinpape/training-deep-learning-models-for-vison/master/day1/utils.py # we will reuse the training function, validation function and # data preparation from the previous notebook import utils cifar_dir = './cifar10' !cifar2png cifar10 cifar10 categories = os.listdir('./cifar10/train') categories.sort() images, labels = utils.load_cifar(os.path.join(cifar_dir, 'train')) (train_images, train_labels, val_images, val_labels) = utils.make_cifar_train_val_split(images, labels) ``` ## Data Augmentation The goal of data augmentation is to increase the amount of training data by transforming the input images in a way that they still resemble realistic images. Popular transformations used in data augmentation include rotations, image flips, color jitter or additive noise. Here, we will start with two transformations: - random flips along the vertical centerline - random color jitters ``` # define random augmentations import skimage.color as color def random_flip(image, target, probability=.5): """ Randomly mirror the image across the vertical axis. """ if np.random.rand() < probability: image = np.array([np.fliplr(im) for im in image]) return image, target def random_color_jitter(image, target, probability=.5): """ Randomly jitter the saturation, hue and brightness of the image. 
""" if np.random.rand() > probability: # skimage expects WHC instead of CHW image = image.transpose((1, 2, 0)) # transform image to hsv color space to apply jitter image = color.rgb2hsv(image) # compute jitter factors in range 0.66 - 1.5 jitter_factors = 1.5 * np.random.rand(3) jitter_factors = np.clip(jitter_factors, 0.66, 1.5) # apply the jitter factors, making sure we stay in correct value range image *= jitter_factors image = np.clip(image, 0, 1) # transform back to rgb and CHW image = color.hsv2rgb(image) image = image.transpose((2, 0, 1)) return image, target # create training dataset with augmentations from functools import partial train_trafos = [ utils.to_channel_first, utils.normalize, random_color_jitter, random_flip, utils.to_tensor ] train_trafos = partial(utils.compose, transforms=train_trafos) train_dataset = utils.DatasetWithTransform(train_images, train_labels, transform=train_trafos) # we don't use data augmentations for the validation set val_dataset = utils.DatasetWithTransform(val_images, val_labels, transform=utils.get_default_cifar_transform()) # sample augmentations def show_image(ax, image): # need to go back to numpy array and WHC axis order image = image.numpy().transpose((1, 2, 0)) ax.imshow(image) n_samples = 8 image_id = 0 fig, ax = plt.subplots(1, n_samples, figsize=(18, 4)) for sample in range(n_samples): image, _ = train_dataset[0] show_image(ax[sample], image) # we reuse the model from the previous exercise # if you want you can also use a different CNN architecture that # you have designed in the tasks part of that exercise model = utils.SimpleCNN(10) model = model.to(device) # instantiate loaders and optimizer and start tensorboard train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True) val_loader = DataLoader(val_dataset, batch_size=25) optimizer = Adam(model.parameters(), lr=1.e-3) %tensorboard --logdir runs # we have moved all the boilerplate for the full training procedure to utils now n_epochs = 10 utils.run_cifar_training(model, optimizer, train_loader, val_loader, device=device, name='da1', n_epochs=n_epochs) # evaluate the model on test data test_dataset = utils.make_cifar_test_dataset(cifar_dir) test_loader = DataLoader(test_dataset, batch_size=25) predictions, labels = utils.validate(model, test_loader, nn.NLLLoss(), device, step=0, tb_logger=None) print("Test accuracy:") accuracy = metrics.accuracy_score(labels, predictions) print(accuracy) fig, ax = plt.subplots(1, figsize=(8, 8)) utils.make_confusion_matrix(labels, predictions, categories, ax) ``` ## Normalization layers In addition to convolutional layers and pooling layers, another important part of neural networks are normalization layers. These layers keep their input normalized using a learned normalization. The first type of normalization introduced has been [BatchNorm](https://arxiv.org/abs/1502.03167), which we will now add to the CNN architecture from the previous exercise. 
``` import torch.nn.functional as F class CNNBatchNorm(nn.Module): def __init__(self, n_classes): super().__init__() self.n_classes = n_classes # the convolutions self.conv1 = nn.Conv2d(in_channels=3, out_channels=12, kernel_size=5) self.conv2 = nn.Conv2d(in_channels=12, out_channels=24, kernel_size=3) # the pooling layer self.pool = nn.MaxPool2d(2, 2) # the normalization layers self.bn1 = nn.BatchNorm2d(12) self.bn2 = nn.BatchNorm2d(24) # the fully connected part of the network # after applying the convolutions and poolings, the tensor # has the shape 24 x 6 x 6, see below self.fc = nn.Sequential( nn.Linear(24 * 6 * 6, 120), nn.ReLU(), nn.Linear(120, 60), nn.ReLU(), nn.Linear(60, self.n_classes) ) self.activation = nn.LogSoftmax(dim=1) def apply_convs(self, x): # input image has shape 3 x 32 x 32 x = self.pool(F.relu(self.bn1(self.conv1(x)))) # shape after conv: 12 x 28 x 28 # shape after pooling: 12 x 14 X 14 x = self.pool(F.relu(self.bn2(self.conv2(x)))) # shape after conv: 24 x 12 x 12 # shape after pooling: 24 x 6 x 6 return x def forward(self, x): x = self.apply_convs(x) x = x.view(-1, 24 * 6 * 6) x = self.fc(x) x = self.activation(x) return x # instantiate model and optimizer model = CNNBatchNorm(10) model = model.to(device) optimizer = Adam(model.parameters(), lr=1.e-3) n_epochs = 10 utils.run_cifar_training(model, optimizer, train_loader, val_loader, device=device, name='batch-norm', n_epochs=n_epochs) model = utils.load_checkpoin("best_checkpoint_batch-norm.tar", model, optimizer)[0] predictions, labels = utils.validate(model, test_loader, nn.NLLLoss(), device, step=0, tb_logger=None) print("Test accuracy:") accuracy = metrics.accuracy_score(labels, predictions) print(accuracy) fig, ax = plt.subplots(1, figsize=(8, 8)) utils.make_confusion_matrix(labels, predictions, categories, ax) ``` ## Tasks and Questions Tasks: - Implement one or two additional augmentations and train the model again using these. You can use [the torchvision transformations](https://pytorch.org/docs/stable/torchvision/transforms.html) for inspiration. Questions: - Compare the model results in this exercise. - Can you think of any transformations that make use of symmetries/invariances not present here but present in other kinds of images (e.g. biomedical images)? Advanced: - Check out the other [normalization layers available in pytorch](https://pytorch.org/docs/stable/nn.html#normalization-layers). Which layers could be beneficial to BatchNorm here? Try training with them and see if this improves performance further.
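As a possible starting point for the first task above, here is a hedged sketch of one additional augmentation: a random crop with zero padding, written in the same style as `random_flip`. It assumes images arrive as channel-first NumPy arrays (as they do after `utils.to_channel_first`); the function name and the padding size are illustrative choices, not part of the provided utilities.

```
def random_crop(image, target, pad=4, probability=.5):
    """
    Randomly shift the image by zero-padding it and cropping
    back to the original size (a common CIFAR10 augmentation).
    """
    if np.random.rand() < probability:
        c, h, w = image.shape
        padded = np.pad(image, ((0, 0), (pad, pad), (pad, pad)), mode='constant')
        top = np.random.randint(0, 2 * pad + 1)
        left = np.random.randint(0, 2 * pad + 1)
        image = padded[:, top:top + h, left:left + w]
    return image, target

# it could be added to the transform list like the other augmentations:
# train_trafos = [utils.to_channel_first, utils.normalize,
#                 random_color_jitter, random_flip, random_crop, utils.to_tensor]
```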
``` %autosave 0 ``` # 4. Evaluation Metrics for Classification In the previous session we trained a model for predicting churn. How do we know if it's good? ## 4.1 Evaluation metrics: session overview * Dataset: https://www.kaggle.com/blastchar/telco-customer-churn * https://raw.githubusercontent.com/alexeygrigorev/mlbookcamp-code/master/chapter-03-churn-prediction/WA_Fn-UseC_-Telco-Customer-Churn.csv *Metric* - function that compares the predictions with the actual values and outputs a single number that tells how good the predictions are ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.feature_extraction import DictVectorizer from sklearn.linear_model import LogisticRegression df = pd.read_csv('data-week-3.csv') df.columns = df.columns.str.lower().str.replace(' ', '_') categorical_columns = list(df.dtypes[df.dtypes == 'object'].index) for c in categorical_columns: df[c] = df[c].str.lower().str.replace(' ', '_') df.totalcharges = pd.to_numeric(df.totalcharges, errors='coerce') df.totalcharges = df.totalcharges.fillna(0) df.churn = (df.churn == 'yes').astype(int) df_full_train, df_test = train_test_split(df, test_size=0.2, random_state=1) df_train, df_val = train_test_split(df_full_train, test_size=0.25, random_state=1) df_train = df_train.reset_index(drop=True) df_val = df_val.reset_index(drop=True) df_test = df_test.reset_index(drop=True) y_train = df_train.churn.values y_val = df_val.churn.values y_test = df_test.churn.values del df_train['churn'] del df_val['churn'] del df_test['churn'] numerical = ['tenure', 'monthlycharges', 'totalcharges'] categorical = [ 'gender', 'seniorcitizen', 'partner', 'dependents', 'phoneservice', 'multiplelines', 'internetservice', 'onlinesecurity', 'onlinebackup', 'deviceprotection', 'techsupport', 'streamingtv', 'streamingmovies', 'contract', 'paperlessbilling', 'paymentmethod', ] dv = DictVectorizer(sparse=False) train_dict = df_train[categorical + numerical].to_dict(orient='records') X_train = dv.fit_transform(train_dict) model = LogisticRegression() model.fit(X_train, y_train) val_dict = df_val[categorical + numerical].to_dict(orient='records') X_val = dv.transform(val_dict) y_pred = model.predict_proba(X_val)[:, 1] churn_decision = (y_pred >= 0.5) (y_val == churn_decision).mean() ``` ## 4.2 Accuracy and dummy model * Evaluate the model on different thresholds * Check the accuracy of dummy baselines ``` len(y_val) (y_val == churn_decision).mean() 1132/ 1409 from sklearn.metrics import accuracy_score accuracy_score(y_val, y_pred >= 0.5) thresholds = np.linspace(0, 1, 21) scores = [] for t in thresholds: score = accuracy_score(y_val, y_pred >= t) print('%.2f %.3f' % (t, score)) scores.append(score) plt.plot(thresholds, scores) from collections import Counter Counter(y_pred >= 1.0) 1 - y_val.mean() ``` ## 4.3 Confusion table * Different types of errors and correct decisions * Arranging them in a table ``` actual_positive = (y_val == 1) actual_negative = (y_val == 0) t = 0.5 predict_positive = (y_pred >= t) predict_negative = (y_pred < t) tp = (predict_positive & actual_positive).sum() tn = (predict_negative & actual_negative).sum() fp = (predict_positive & actual_negative).sum() fn = (predict_negative & actual_positive).sum() confusion_matrix = np.array([ [tn, fp], [fn, tp] ]) confusion_matrix (confusion_matrix / confusion_matrix.sum()).round(2) ``` ## 4.4 Precision and Recall ``` p = tp / (tp + fp) p r = tp / (tp + fn) r ``` ## 4.5 ROC Curves ### TPR and FRP ``` 
tpr = tp / (tp + fn) tpr fpr = fp / (fp + tn) fpr scores = [] thresholds = np.linspace(0, 1, 101) for t in thresholds: actual_positive = (y_val == 1) actual_negative = (y_val == 0) predict_positive = (y_pred >= t) predict_negative = (y_pred < t) tp = (predict_positive & actual_positive).sum() tn = (predict_negative & actual_negative).sum() fp = (predict_positive & actual_negative).sum() fn = (predict_negative & actual_positive).sum() scores.append((t, tp, fp, fn, tn)) columns = ['threshold', 'tp', 'fp', 'fn', 'tn'] df_scores = pd.DataFrame(scores, columns=columns) df_scores['tpr'] = df_scores.tp / (df_scores.tp + df_scores.fn) df_scores['fpr'] = df_scores.fp / (df_scores.fp + df_scores.tn) plt.plot(df_scores.threshold, df_scores['tpr'], label='TPR') plt.plot(df_scores.threshold, df_scores['fpr'], label='FPR') plt.legend() ``` ### Random model ``` np.random.seed(1) y_rand = np.random.uniform(0, 1, size=len(y_val)) ((y_rand >= 0.5) == y_val).mean() def tpr_fpr_dataframe(y_val, y_pred): scores = [] thresholds = np.linspace(0, 1, 101) for t in thresholds: actual_positive = (y_val == 1) actual_negative = (y_val == 0) predict_positive = (y_pred >= t) predict_negative = (y_pred < t) tp = (predict_positive & actual_positive).sum() tn = (predict_negative & actual_negative).sum() fp = (predict_positive & actual_negative).sum() fn = (predict_negative & actual_positive).sum() scores.append((t, tp, fp, fn, tn)) columns = ['threshold', 'tp', 'fp', 'fn', 'tn'] df_scores = pd.DataFrame(scores, columns=columns) df_scores['tpr'] = df_scores.tp / (df_scores.tp + df_scores.fn) df_scores['fpr'] = df_scores.fp / (df_scores.fp + df_scores.tn) return df_scores df_rand = tpr_fpr_dataframe(y_val, y_rand) plt.plot(df_rand.threshold, df_rand['tpr'], label='TPR') plt.plot(df_rand.threshold, df_rand['fpr'], label='FPR') plt.legend() ``` ### Ideal model ``` num_neg = (y_val == 0).sum() num_pos = (y_val == 1).sum() num_neg, num_pos y_ideal = np.repeat([0, 1], [num_neg, num_pos]) y_ideal y_ideal_pred = np.linspace(0, 1, len(y_val)) 1 - y_val.mean() accuracy_score(y_ideal, y_ideal_pred >= 0.726) df_ideal = tpr_fpr_dataframe(y_ideal, y_ideal_pred) df_ideal[::10] plt.plot(df_ideal.threshold, df_ideal['tpr'], label='TPR') plt.plot(df_ideal.threshold, df_ideal['fpr'], label='FPR') plt.legend() ``` ### Putting everything together ``` plt.plot(df_scores.threshold, df_scores['tpr'], label='TPR', color='black') plt.plot(df_scores.threshold, df_scores['fpr'], label='FPR', color='blue') plt.plot(df_ideal.threshold, df_ideal['tpr'], label='TPR ideal') plt.plot(df_ideal.threshold, df_ideal['fpr'], label='FPR ideal') # plt.plot(df_rand.threshold, df_rand['tpr'], label='TPR random', color='grey') # plt.plot(df_rand.threshold, df_rand['fpr'], label='FPR random', color='grey') plt.legend() plt.figure(figsize=(5, 5)) plt.plot(df_scores.fpr, df_scores.tpr, label='Model') plt.plot([0, 1], [0, 1], label='Random', linestyle='--') plt.xlabel('FPR') plt.ylabel('TPR') plt.legend() from sklearn.metrics import roc_curve fpr, tpr, thresholds = roc_curve(y_val, y_pred) plt.figure(figsize=(5, 5)) plt.plot(fpr, tpr, label='Model') plt.plot([0, 1], [0, 1], label='Random', linestyle='--') plt.xlabel('FPR') plt.ylabel('TPR') plt.legend() ``` ## 4.6 ROC AUC * Area under the ROC curve - useful metric * Interpretation of AUC ``` from sklearn.metrics import auc auc(fpr, tpr) auc(df_scores.fpr, df_scores.tpr) auc(df_ideal.fpr, df_ideal.tpr) fpr, tpr, thresholds = roc_curve(y_val, y_pred) auc(fpr, tpr) from sklearn.metrics import roc_auc_score 
roc_auc_score(y_val, y_pred) neg = y_pred[y_val == 0] pos = y_pred[y_val == 1] import random n = 100000 success = 0 for i in range(n): pos_ind = random.randint(0, len(pos) - 1) neg_ind = random.randint(0, len(neg) - 1) if pos[pos_ind] > neg[neg_ind]: success = success + 1 success / n n = 50000 np.random.seed(1) pos_ind = np.random.randint(0, len(pos), size=n) neg_ind = np.random.randint(0, len(neg), size=n) (pos[pos_ind] > neg[neg_ind]).mean() ``` ## 4.7 Cross-Validation * Evaluating the same model on different subsets of data * Getting the average prediction and the spread within predictions ``` def train(df_train, y_train, C=1.0): dicts = df_train[categorical + numerical].to_dict(orient='records') dv = DictVectorizer(sparse=False) X_train = dv.fit_transform(dicts) model = LogisticRegression(C=C, max_iter=1000) model.fit(X_train, y_train) return dv, model dv, model = train(df_train, y_train, C=0.001) def predict(df, dv, model): dicts = df[categorical + numerical].to_dict(orient='records') X = dv.transform(dicts) y_pred = model.predict_proba(X)[:, 1] return y_pred y_pred = predict(df_val, dv, model) from sklearn.model_selection import KFold !pip install tqdm from tqdm.auto import tqdm n_splits = 5 # C = regularization parameter for the model # tqdm() is a function that prints progress bars for C in tqdm([0.001, 0.01, 0.1, 0.5, 1, 5, 10]): kfold = KFold(n_splits=n_splits, shuffle=True, random_state=1) scores = [] for train_idx, val_idx in kfold.split(df_full_train): df_train = df_full_train.iloc[train_idx] df_val = df_full_train.iloc[val_idx] y_train = df_train.churn.values y_val = df_val.churn.values dv, model = train(df_train, y_train, C=C) y_pred = predict(df_val, dv, model) auc = roc_auc_score(y_val, y_pred) scores.append(auc) print('C=%s %.3f +- %.3f' % (C, np.mean(scores), np.std(scores))) scores dv, model = train(df_full_train, df_full_train.churn.values, C=1.0) y_pred = predict(df_test, dv, model) auc = roc_auc_score(y_test, y_pred) auc ``` ## 4.8 Summary * Metric - a single number that describes the performance of a model * Accuracy - fraction of correct answers; sometimes misleading * Precision and recall are less misleading when we have class inbalance * ROC Curve - a way to evaluate the performance at all thresholds; okay to use with imbalance * K-Fold CV - more reliable estimate for performance (mean + std) ## 4.9 Explore more * Check the precision and recall of the dummy classifier that always predict "FALSE" * F1 score = 2 * P * R / (P + R) * Evaluate precision and recall at different thresholds, plot P vs R - this way you'll get the precision/recall curve (similar to ROC curve) * Area under the PR curve is also a useful metric Other projects: * Calculate the metrics for datasets from the previous week
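As a possible starting point for the "Explore more" items above, here is a short sketch of the precision/recall curve, the F1 score and the area under the PR curve using scikit-learn. It assumes `y_val` and `y_pred` are the validation labels and predicted probabilities computed earlier in this notebook.

```
from sklearn.metrics import precision_recall_curve, auc

precision, recall, thresholds = precision_recall_curve(y_val, y_pred)

# F1 for each threshold (drop the final point appended by sklearn)
f1 = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-9)
best = np.argmax(f1)
print('best F1 = %.3f at threshold %.2f' % (f1[best], thresholds[best]))

plt.plot(recall, precision)
plt.xlabel('Recall')
plt.ylabel('Precision')

# area under the precision-recall curve
auc(recall, precision)
```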
# 神经网络的训练 作者:杨岱川 时间:2019年12月 github:https://github.com/DrDavidS/basic_Machine_Learning 开源协议:[MIT](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/LICENSE) 参考文献: - 《深度学习入门》,作者:斋藤康毅; - 《深度学习》,作者:Ian Goodfellow 、Yoshua Bengio、Aaron Courville。 - [Keras overview](https://tensorflow.google.cn/guide/keras/overview) ## 本节目的 在[3.01 神经网络与前向传播](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/03深度学习基础/3.01%20神经网络与前向传播.ipynb)中我们学习了基于多层感知机的神经网络前向传播的原理,并且动手实现了一个很简单的神经网络模型。 但是,目前为止我们搭建的神经网络的权重矩阵 $W$ 是随机初始化的,我们只能说把输入 $X$ “喂”了进去, 然后“跑通”了这个网络。但是它的输出并没有任何实际的意义,因为我们并没有对它进行训练。 在 3.02 教程中,我们的主题就是**神经网络的学习**,也就是我们的神经网络是如何从训练数据中自动获取最优权重参数的过程,这个过程的主要思想和之前在传统机器学习中描述的训练本质相同。 我们为了让神经网络能够进行学习,将导入**损失函数(loss function)**这一指标,相信大家对其并不陌生。 神经网络学习的目的就是以损失函数为基准,找出能够使它的值达到最小的权重参数。而为了找出尽可能小的损失函数的值,我们将采用**梯度法**。 > 这些名词是不是听起来都很熟悉? > >“梯度法”在[2.11 XGBoost原理与应用](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/02机器学习基础/2.11%20XGBoost原理与应用.ipynb)中以**梯度提升**的形式出现,而“损失函数”更是贯穿了整个传统机器学习过程。 ## 从数据中学习 同其他机器学习算法一样,神经网络的特征仍然是可以从数据中学习。什么叫“从数据中学习”,就是说我们的权重参数可以由数据来自动决定。 既然是机器学习,我们当然不能人工地决定参数,这样怎么忙得过来呢? >一些大型神经网络参数数量,当然参数更多不代表效果一定更好: > >- ALBERT:1200万,by 谷歌; >- BERT-large:3.34亿,by 谷歌; >- BERT-xlarge:12.7亿,by 谷歌; >- Megatron:80亿,by Nvidia; >- T5,110亿,by 谷歌。 接下来我们会介绍神经网络地学习,也就是如何利用数据决定参数值。 ## 损失函数 损失函数地概念大家都熟悉,我们在之前学过非常多的损失函数,比如 0-1 损失函数,均方误差损失函数等。这里我们会再介绍一种新的损失函数。 ### 交叉熵误差 **交叉熵误差(cross entropy error)**是一种非常常用的损失函数,其公式如下: $$\large E=-\sum_k t_k\log y_k$$ 其中,$\log$ 是以 $\rm e$ 为底数的自然对数 $\log_e$。$k$ 表示共有 $k$ 个类别。$y_k$ 是神经网络的输出,$t_k$ 是真实的、正确的标签。$t_k$ 中只有正确解的标签索引为1,其他均为0,注意这里用的是 one-hot 表示,所以接受多分类问题。 实际上这个公式只计算了正确解标签输出的自然对数。 比如,一个三分类问题,有 A, B ,C 三种类别,而真实值为C,即 $t_k=[0,\quad0,\quad1]$, 而神经网络经过 softmax 后的输出 $y_k=[0.1,\quad0.3,\quad0.6]$。所以其交叉熵误差为 $-\log0.6\approx0.51$。 我们用代码来实现交叉熵: ``` import numpy as np def cross_entropy_error(y, t): """定义交叉熵损失函数""" delta = 1e-7 return -np.sum(t * np.log(y + delta)) ``` 这里的 $y$ 和 $t$ 都是 NumPy 数组。我们在计算 `np.log` 的时候加上了一个很小的值 delta,是为了防止出现 `np.log(0)` 的情况,也就是返回值为负无穷。这样一来会导致后续计算无法进行。 接下来我们试试使用代码进行简单的计算: ``` # 设置第三类为正确解 t = np.array([0, 0, 1]) t # 设置三类概率情况,y1 y1 = np.array([0.1, 0.3, 0.6]) y1 # 设置三类概率情况,y2 y2 = np.array([0.3, 0.4, 0.3]) y2 # 计算y1交叉熵 cross_entropy_error(y1, t) # 计算y2交叉熵 cross_entropy_error(y2, t) ``` 可以看出第一个输出 y1 与监督数据(训练数据)更为切合,所以交叉熵误差更小。 ### mini-batch 学习 机器学习使用训练数据进行学习,我们对训练数据计算损失函数的值。找出让这个值尽可能小的参数。也就是说,计算损失函数的时候必须将所有的训练数据作为对象,有 100 个数据,就应当把这 100 个损失函数的总和作为学习的目标。 要计算所有训练数据的损失函数的综合,以交叉熵误差为例: $$\large E=-\frac{1}{N}\sum_n \sum_k t_{nk}\log y_{nk}$$ 虽然看起来复杂,其实只是把单个数据的损失函数扩展到了 $n$ 个数据而已,最后再除以 $N$,求得单个数据的“平均损失函数”。这样平均化以后,可以获得和训练数据的数量无关的统一指标。 问题在于,很多数据集的数据量可不少,以 MNIST 为例,其训练数据有 60000 个,如果以全部数据为对象求损失函数的和,则时间花费较长。如果更大的数据集,比如 [ImageNet](http://www.image-net.org/about-stats) 数据集,甚至有1419万张图片(2019年12月),这种情况下以全部数据为对象计算损失函数是不现实的。 因此,我们从全部数据中选出一部分,作为全部数据的“近似”。神经网络的学习也是从训练数据中选出一批数据(mini-batch,小批量),然后对每个mini-batch进行学习。 比如在 MNIST 数据集中,每次选择 100 张图片学习。这种学习方式称为 **mini-batch学习**。或者说,整个训练过程的 batch-size 为 100。 ### 为何要设定损失函数 为什么我们训练过程是损失函数最小?我们的最终目的是提高神经网络的识别精度,为什么不把识别精度作为指标? 
这涉及到导数在神经网络学习中的作用。以后会详细解释,在神经网络的学习中,寻找最优参数(权重和偏置)时,要寻找使得损失函数的值尽可能小的的参数。而为了找到让损失函数值尽可能小的地方,需要计算参数的导数(准确说是**梯度**),然后以这个导数为指引,逐步更新参数的值。 假设有一个神经网络,我们关注这个网络中某一个权重参数。现在,对这个权重参数的损失函数求导,表示的是“如果稍微改变这个权重参数的值,损失函数会怎么变化”。如果导数的值为负,通过使该权重参数向正方向改变,可以减小损失函数的值;反过来,如果导数的值为正,则通过使该权重参数向负方向改变,可以减小损失函数的值。 >如果导数的值为 0 时,无论权重参数向哪个方向变化,损失函数的值都不变。 如果我们用识别精度(准确率)作为指标,那么绝大多数地方的导数都会变成 0 ,导致参数无法更新。 >假设某个神经网络识别出了 100 个训练数据中的 32 个,这时候准确率为 32%。如果我们以准确率为指标,即使稍微改变权重参数的值,识别的准确率也将继续保持在 32%,不会有变化。也就是说,仅仅微调参数,是无法改善识别精度的。即使有所改善,也不会变成 32.011% 这样连续变化,而是变成 33%,34% 这样离散的值。 > >而如果我们采用**损失函数**作为指标,则当前损失函数的值可以表示为 0.92543...之类的值,而稍微微调一下参数,对应损失函数也会如 0.93431... 这样发生连续的变化。 所以,识别精度对微小的参数变化基本没啥反应,即使有反应,它的值也是不连续地、突然地变化。 回忆之前学习的 **阶跃函数** 和 **sigmoid 函数**: ``` import matplotlib print(matplotlib.__version__) import matplotlib.pyplot as plt import numpy as np %matplotlib inline %config InlineBackend.figure_format = 'svg' # 生成矢量图 def sigmoid(x): """定义sigmoid函数""" return 1.0/(1.0 + np.exp(-x)) def step_function(x): """定义阶跃函数""" return np.array(x > 0, dtype=np.int) # 阶跃函数 plt.figure(figsize=(8,4)) plt.subplot(1, 2, 1) x = np.arange(-6.0, 6.0, 0.1) plt.plot(x, step_function(x)) plt.axhline(y=0.0,ls='dotted',color='k') plt.axhline(y=1.0,ls='dotted',color='k') plt.axhline(y=0.5,ls='dotted',color='k') plt.yticks([0.0,0.5,1.0]) plt.ylim(-0.1,1.1) plt.xlabel('x') plt.ylabel('$step(x)$') plt.title('Step Function') # plt.savefig("pic001.png", dpi=600) # 保存图片 # sigmoid 函数 plt.subplot(1, 2, 2) plt.plot(x, sigmoid(x)) plt.axhline(y=0.0,ls='dotted',color='k') plt.axhline(y=1.0,ls='dotted',color='k') plt.axhline(y=0.5,ls='dotted',color='k') plt.yticks([0.0,0.5,1.0]) plt.ylim(-0.1,1.1) plt.xlabel('x') plt.ylabel('$sigmoid(x)$') plt.title('Sigmoid Function') # plt.savefig("pic001.png", dpi=600) # 保存图片 plt.tight_layout(3) # 间隔 plt.show() ``` 如果我们使用**阶跃函数**作为激活函数,神经网络的学习无法进行。如图,阶跃函数的导数在绝大多数的地方都是 0 ,也就是说,如果我们采用阶跃函数,那么即使将损失函数作为指标,参数的微小变化也会被阶跃函数抹杀,导致损失函数的值没有任何变化。 而 **sigmoid 函数**,如图,不仅函数的输出是连续变化的,曲线的斜率也是连续变化的。也就是说,sigmoid 函数的导数在任何地方都不为 0。得益于这个性质,神经网络的学习得以正确进行。 ## 数值微分 我们使用梯度信息决定前进方向。现在我们会介绍什么是梯度,它有什么性质。 ### 导数 相信大家对导数都不陌生。导数就是表示某个瞬间的变化量,定义为: $$\large \frac{{\rm d}f(x)}{{\rm d}x} = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$$ 那么现在我们参考上式实现函数求导: ``` def numerical_diff(f, x): """不太好的导数实现""" h = 1e-50 return (f(x + h) - f(x)) / h ``` `numerical_diff` 的命名来源于 **数值微分(numerical differentiation)**。 实际上,我们对 $h$ 赋予了一个很小的值,反倒产生了**舍入误差**: ``` np.float32(1e-50) ``` 如果采用 `float32` 类型来表示 $10^{-50}$,就会变成 $0.0$,无法正确表示。这是第一个问题,我们应当将微小值 $h$ 改为 $10^{-4}$,就可以得到正确的结果了。 第二个问题和函数 $f$ 的差分有关。我们虽然实现了计算函数 $f$ 在 $x+h$ 和 $x$ 之间的差分,但是是有误差的。我们实际上计算的是点 $x+h$ 和 $x$ 之间连线的斜率,而真正的导数则是函数在 $x$ 处切线的斜率。出现这个差异的原因是因为 $h$ 不能真的无限接近于 0。 为了减小误差,我们计算函数 $f$ 在 $(x+h)$ 和 $(x-h)$ 之间的差分。因为这种计算方法以 $x$ 为中心,计算左右两边的差分,所以叫**中心差分**,而 $(x+h)$ 和 $x$ 之间的差分叫**前向差分**。 现在改进如下: ``` def numerical_diff(f, x): """改进后的导数实现""" h = 1e-4 return (f(x + h) - f(x - h)) / (2 * h) ``` ### 数值微分的例子 使用上面的数值微分函数对简单函数求导: $$\large y=0.01x^2+0.1x$$ 首先我们绘制这个函数的图像。 ``` def function_1(x): """定义函数""" return 0.01 * x**2 + 0.1*x x = np.arange(0.0, 20.0, 0.1) y = function_1(x) plt.xlabel('x') plt.ylabel('$f(x)$') plt.plot(x, y) plt.show() ``` 计算函数在 $x=5$ 时候的导数,画切线: ``` def tangent_line(f, x): """切线""" d = numerical_diff(f, x) print(d) y = f(x) - d*x return lambda t: d*t + y x = np.arange(0.0, 20.0, 0.1) y = function_1(x) plt.xlabel("x") plt.ylabel("f(x)") tf = tangent_line(function_1, 5) y2 = tf(x) plt.plot(x, y) plt.plot(x, y2) plt.axvline(x=5,ls='dotted',color='k') plt.axhline(y=0.75,ls='dotted',color='k') plt.yticks([0, 0.75, 1, 2, 3, 4]) plt.show() ``` 
众所周知,$f(x)=0.01x^2+0.1x$ 求导的解析解是 $\cfrac{{\rm d}f(x)}{{\rm d}x}=0.02x+0.1$,因此在 $x=5$ 的时候,“真的导数”为 0.2。和上面的结果比起来,严格来说不一致,但是误差很小。 ### 偏导数 接下来我们看一个新函数,这个函数有两个变量: $$\large f(x_0, x_1)=x_0^2+x_1^2$$ 其图像的绘制,用代码实现就是如下: ``` from mpl_toolkits.mplot3d import Axes3D from matplotlib import pyplot as plt import numpy as np def function_2_old(x_0, x_1): """二元函数""" return x_0**2 + x_1**2 fig = plt.figure() ax = Axes3D(fig) x_0 = np.arange(-2, 2.5, 0.2) # x0 x_1 = np.arange(-2, 2.5, 0.2) # x1 X_0, X_1 = np.meshgrid(x_0, x_1) # 二维数组生成 Y = function_2_old(X_0, X_1) ax.set_xlabel('$x_0$') ax.set_ylabel('$x_1$') ax.set_zlabel('$f(x)$') ax.plot_surface(X_0, X_1, Y, rstride=1, cstride=1, cmap='rainbow') # ax.view_init(30, 60) # 调整视角 plt.show() ``` 很漂亮的一幅图。 如果我们要对这个二元函数求导,就有必要区分是对 $x_0$ 还是 $x_1$ 求导。 这里讨论的有多个变量函数的导数就是**偏导数**,表示为 $\cfrac{\partial f}{\partial x_0}$、$\cfrac{\partial f}{\partial x_1}$。 当 $x_0=3$,$x_1=4$ 的时候,求关于 $x_0$ 的偏导数$\cfrac{\partial f}{\partial x_0}$: ``` def function_tmp1(x0): return x0 * x0 + 4.0**2.0 numerical_diff(function_tmp1, 3.0) ``` 当 $x_0=3$,$x_1=4$ 的时候,求关于 $x_1$ 的偏导数$\cfrac{\partial f}{\partial x_1}$: ``` def function_tmp2(x1): return 3.0**2.0 + x1 * x1 numerical_diff(function_tmp2, 4.0) ``` 实际上动笔计算,这两个计算值和解析解的导数基本一致。 所以偏导数和单变量的导数一样,都是求某个地方的**斜率**,不过偏导数需要将多个变量中的某一个变量定为目标变量,然后将其他变量固定为某个值。 ## 梯度 铺垫了这么多,终于到了关键的环节。 我们刚刚计算了 $x_0$ 和 $x_1$ 的偏导数,现在我们要一起计算 $x_0$ 和 $x_1$ 的偏导数。 比如我们考虑求 $x_0=3$,$x_1=4$ 时 $(x_0,x_1)$ 的偏导数 $\left( \cfrac{\partial f}{\partial x_0},\cfrac{\partial f}{\partial x_1} \right)$。 >像 $\left( \cfrac{\partial f}{\partial x_0},\cfrac{\partial f}{\partial x_1} \right)$ 这样由全部变量的偏导数汇总而成的向量就叫做**梯度**。 我们采用以下代码来计算: ``` def _numerical_gradient_no_batch(f, x): """ 计算梯度 输入: f:函数 x:数组,多元变量。 """ h = 1e-4 # 0.0001 grad = np.zeros_like(x) # 生成一个和x形状一样的全为0的数组 for idx in range(x.size): tmp_val = x[idx] x[idx] = float(tmp_val) + h fxh1 = f(x) # f(x+h) x[idx] = tmp_val - h fxh2 = f(x) # f(x-h) grad[idx] = (fxh1 - fxh2) / (2*h) x[idx] = tmp_val # 还原值 return grad def function_2(x): """ 二元函数 重新定义一下,此时输入为一个np.array数组 """ return x[0]**2 + x[1]**2 ``` 这个代码看起来稍微长一点,但是和求单变量的数值微分本质一样。 现在我们用这个函数实际计算一下梯度: ``` _numerical_gradient_no_batch(function_2, np.array([3.0, 4.0])) _numerical_gradient_no_batch(function_2, np.array([0.0, 2.0])) _numerical_gradient_no_batch(function_2, np.array([3.0, 0.0])) ``` 像这样我们就能计算 $(x_0,x_1)$ 在各个点的梯度了。现在我们要把 $f(x_0,x_1)=x_0^2+x_1^2$ 的梯度画在图上,不过我们画的是**负梯度**的向量。 代码参考:[deep-learning-from-scratch](https://github.com/oreilly-japan/deep-learning-from-scratch/blob/master/ch04/gradient_2d.py)。 ``` def numerical_gradient(f, X): """计算梯度矢量""" if X.ndim == 1: return _numerical_gradient_no_batch(f, X) else: grad = np.zeros_like(X) for idx, x in enumerate(X): grad[idx] = _numerical_gradient_no_batch(f, x) return grad x0 = np.arange(-2, 2.5, 0.25) x1 = np.arange(-2, 2.5, 0.25) X, Y = np.meshgrid(x0, x1) X = X.flatten() Y = Y.flatten() grad = numerical_gradient(function_2, np.array([X, Y]).T).T plt.figure() plt.quiver(X, Y, -grad[0], -grad[1], angles="xy",color="#666666") plt.xlim([-2, 2]) plt.ylim([-2, 2]) plt.xlabel('x0') plt.ylabel('x1') plt.grid() plt.draw() plt.show() ``` 如图所示,$f(x_0,x_1)=x_0^2+x_1^2$ 的梯度呈现为有向箭头,而且: - 所有的箭头都指向 $f(x_0,x_1)$ 的“最低处”; - 离“最低处”越远,箭头越大。 > 实际上,梯度并非任何时候都指向最低处。 > > 更严格讲,**梯度指示的方向是各点处的函数值减小最多的方向**。 > > 也就是说,我们有可能在某些优化过程中只收敛到了局部最小值。 ### 梯度法 机器学习的主要任务是在训练(学习)过程中寻找最优的参数。这里“最优参数”就是让损失函数取到最小值时的参数。 但是损失函数一般都很复杂(回忆一下 `XGBoost` 的损失函数推导),参数空间很庞大,我们一般不知道它在何处能取得最小值。而使用梯度来寻找函数最小值(或者尽可能小的值)的方法就是梯度法。 >再次提醒:**梯度** 
表示的是各点出函数的值减小最多的方向,因此没法保证梯度所指的方向就是函数的最小值或是真正应该前进的方向。实际上在复杂的函数中,梯度指示的方向基本上都 **不是** 函数值的最小位置。 我们沿着梯度方向能够最大限度减小函数(比如损失函数)的值,因此在寻找函数的最小值的位置上还是以梯度信息为线索,决定前进的方向。 这个时候**梯度法**就起作用了。在梯度法中,函数的取值从当前位置沿着梯度方向前进一小步(配合上面的图),然后在新的地方重新求梯度,再沿着梯度方向前进,如此循环往复。 像这样,通过不断地沿着梯度方向前进,逐渐减小函数的值的过程就是**梯度法(gradient method)**,它是解决机器学习中最优化问题的常用方法。 >严格地说,寻找最小值的梯度法叫**梯度下降法**(gradient descent method),而寻找最大值的梯度法称为**梯度上升法**(gradient ascent method),注意和 **提升方法**(Boosting)相区别。 用数学式来表达梯度法,就是: $$x_0=x_0 - \eta \frac{\partial f}{\partial x_0}$$ $$x_1=x_1 - \eta \frac{\partial f}{\partial x_1}$$ 其中,$\eta$,读作 **eta**,表示更新量。回忆一下,在之前的 SKLearn 的机器学习示例中,大多都用 `eta` 作为**学习率(learning rate)**的参数,在神经网络中也是如此。学习率决定在一次学习中,应该学习多少,以及在多大程度上更新参数,就像我们走在下山路上,$\eta$ 决定了我们每一步迈多远。 上面的公式只更新了一次,我们需要反复执行,逐渐减小函数值。 $\eta$ 的具体取值不能太大或者太小,否则都没法抵达一个“合适的位置”。在神经网络中,一般会一边改变学习率的值,一般确认训练是否正常进行。 代码参考[gradient_method.py](https://github.com/oreilly-japan/deep-learning-from-scratch/blob/master/ch04/gradient_method.py),用代码实现梯度下降法: ``` def gradient_descent(f, init_x, lr=0.01, step_num=100): """ 梯度下降法 f:要进行最优化的参数 init_x:初始值 lr:学习率,默认为0.01 step_sum:梯度下降法重复的次数 """ x = init_x x_history = [] # 保存每一步的信息 for i in range(step_num): x_history.append( x.copy() ) grad = numerical_gradient(f, x) # 计算梯度矢量 x -= lr * grad return x, np.array(x_history) ``` 使用这个函数就能求得函数的极小值,如果顺利,还能求得最小值。 现在我们来求 $f(x_0,x_1)=x_0^2+x_1^2$ 的最小值: ``` init_x = np.array([-3.0, 4.0]) # 初始位置 resutl = gradient_descent(function_2, init_x=init_x, lr=0.1, step_num=100) # 执行梯度下降算法 print(resutl[0]) ``` 最终结果是 $(-6.11110793\times10^{-10}, 8.14814391\times10^{-10})$,非常接近我们已知的正确值 $(0, 0)$。所以说通过梯度下降法我们基本得到了正确的结果。 如果我们把梯度更新的图片画出,如下: ``` init_x = np.array([-3.0, 4.0]) # 初始位置 lr = 0.1 step_num = 20 x, x_history = gradient_descent(function_2, init_x, lr=lr, step_num=step_num) step = 0.01 x_0 = np.arange(-5,5,step) x_1 = np.arange(-5,5,step) X, Y = np.meshgrid(x_0, x_1) # 建立网格 Z = function_2_old(X, Y) plt.contour(X, Y, Z, levels=10, linewidths=0.5, linestyles='dashdot') # 绘制等高线 plt.plot(x_history[:,0], x_history[:,1], '.') # 绘制梯度下降过程 plt.xlim(-4.5, 4.5) plt.ylim(-4.5, 4.5) plt.xlabel("$x_0$") plt.ylabel("$x_1$") plt.show() ``` 前面说过,**学习率**过大或者过小都无法得到好结果。 可以做实验验证一下: ``` # 学习率过大 init_x = np.array([-3.0, 4.0]) # 初始位置 lr = 10.0 # 学习率 x, x_history = gradient_descent(function_2, init_x=init_x, lr=lr, step_num=step_num) print(x) # 学习率过小 init_x = np.array([-3.0, 4.0]) # 初始位置 lr = 1e-10 # 学习率 x, x_history = gradient_descent(function_2, init_x=init_x, lr=lr, step_num=step_num) print(x) ``` 由此可见: - 学习率过大,会发散成一个很大的值; - 学习率过小,基本上还没更新就结束了。 因此我们需要设置适当的学习率。记住,学习率是一个**超参数**,通常是人工设定的。 ### 神经网络的梯度 神经网络的训练也是要求梯度的。这里的梯度指的是**损失函数**关于权重参数的梯度。比如,在[3.01 神经网络与前向传播](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/深度学习基础/3.01%20神经网络与前向传播.ipynb)中,我们搭建了一个三层神经网络。其中第一层(layer1)的权重 $W$ 的形状为 $2\times3$,损失函数用 $L$ 表示。 此时梯度用 $\cfrac{\partial L}{\partial W}$ 表示。用具体的数学表达式(注意下标为了方便说明,和以前不一样)来说,就是: $$ \large W= \begin{pmatrix} w_{11} & w_{12} & w_{13} \\ w_{21} & w_{22} & w_{23}\\ \end{pmatrix} $$ $$ \large \frac{\partial L}{\partial W}= \begin{pmatrix} \cfrac{\partial L}{\partial w_{11}} & \cfrac{\partial L}{\partial w_{12}} & \cfrac{\partial L}{\partial w_{13}} \\ \cfrac{\partial L}{\partial w_{21}} & \cfrac{\partial L}{\partial w_{22}} & \cfrac{\partial L}{\partial w_{23}}\\ \end{pmatrix} $$ $\cfrac{\partial L}{\partial W}$ 的元素由各个元素关于 $W$ 的偏导数构成。比如,第1行第1列的元素 $\cfrac{\partial L}{\partial w_{11}}$ 表示当 $w_{11}$ 稍微变化的时候,损失函数 $L$ 会发生多大变化。 我们以一个简单的神经网络为例子,来实现求梯度的代码: ``` import os import sys import numpy as np def softmax(a): """定义 softmax 函数""" exp_a = np.exp(a) 
    sum_exp_a = np.sum(exp_a)
    y = exp_a / sum_exp_a
    return y


def cross_entropy_error(y, t):
    """Cross-entropy loss."""
    delta = 1e-7
    return -np.sum(t * np.log(y + delta))


def numerical_gradient(f, X):
    """Compute the gradient vector."""
    if X.ndim == 1:
        return _numerical_gradient_no_batch(f, X)
    else:
        grad = np.zeros_like(X)
        for idx, x in enumerate(X):
            grad[idx] = _numerical_gradient_no_batch(f, x)
        return grad


class simpleNet:
    def __init__(self):
        """Initialization."""
        # self.W = np.random.randn(2, 3)  # Gaussian initialization
        self.W = np.array([[ 0.68851943, 2.06916921, -0.88125086],
                           [-1.30951576, 0.72350587, -1.88984482]])
        self.q = 1

    def predict(self, x):
        """Prediction (forward pass)."""
        return np.dot(x, self.W)

    def loss(self, x, t):
        """Loss function."""
        z = self.predict(x)
        y = softmax(z)
        loss = cross_entropy_error(y, t)
        return loss
```

We have built a simple neural network called `simpleNet`, where `softmax` and `cross_entropy_error` are the same as before. The `simpleNet` class essentially holds a single instance variable: the weight parameter matrix of shape $2\times 3$.

The network has two methods: `predict`, the forward pass used for prediction, and `loss`, which computes the loss function. The argument `x` receives the input data and `t` receives the correct label.

Let us run it and look at the result:

```
net = simpleNet()
print(net.W)  # weight parameters

x = np.array([0.6, 0.9])
p = net.predict(x)  # prediction
print(p)

np.argmax(p)  # index of the correct answer (the maximum value)

# Correct label. With random initialization, note that this may differ on every run!!!
t = np.array([0, 1, 0])

# loss
loss1 = net.loss(x, t)
print(loss1)
```

Now let us compute the **gradient** using `numerical_gradient(f, x)`.

Because the argument `f` of `numerical_gradient(f, x)` has to be a function, we first define a function `f(W)` so that the interfaces match:

```
def f(W):
    return net.loss(x, t)


dW = numerical_gradient(f, net.W)
print(dW)
```

The result of `numerical_gradient(f, net.W)` is $dW$, a matrix of shape $2\times 3$.

Looking at this matrix, in $\cfrac{\partial L}{\partial W}$:

the value of $\cfrac{\partial L}{\partial W_{11}}$ is about 0.039, which means that increasing $w_{11}$ by $h$ increases the loss by about $0.039h$;

the value of $\cfrac{\partial L}{\partial W_{22}}$ is about $-0.071$, which means that increasing $w_{22}$ by $h$ decreases the loss by about $0.071h$.

So, with the goal of reducing the loss function, $w_{22}$ should be updated in the positive direction and $w_{11}$ in the negative direction.

Once we have the gradient of the network at the input $x=[0.6, \quad 0.9]$, we only need to update the weight parameters according to the gradient method. Let us try one update by hand:

```
# learning rate lr
lr = 1e-4
print(lr)


class simpleNet_step2:
    def __init__(self):
        """Initialization: the parameters after one manual update."""
        self.W = np.array([[ 0.68851943 - 0.0001, 2.06916921 + 0.0001, -0.88125086 - 0.0001],
                           [-1.30951576 - 0.0001, 0.72350587 + 0.0001, -1.88984482 - 0.0001]])
        self.q = 1

    def predict(self, x):
        """Prediction (forward pass)."""
        return np.dot(x, self.W)

    def loss(self, x, t):
        """Loss function."""
        z = self.predict(x)
        y = softmax(z)
        loss = cross_entropy_error(y, t)
        return loss


net = simpleNet_step2()
net.W

x = np.array([0.6, 0.9])
p = net.predict(x)  # prediction
print(p)

# the maximum value is the correct answer
t = np.array([0, 1, 0])

# loss
loss2 = net.loss(x, t)
print(loss2)

if loss2 < loss1:
    print("loss2 is smaller than loss1 by:", loss1 - loss2)
```

As we can see, after updating the weight parameters according to the gradient method (with the learning rate as the step size), the value of the loss function has dropped.

## Summary of the learning algorithm

Up to this point we have covered the concepts of the loss function, mini-batches, gradients, and gradient descent. Let us now review the steps of neural-network learning:

1. **Mini-batch**: randomly select a portion of the training data; this subset is called a mini-batch. Our goal is to reduce the value of the loss function on the mini-batch.

   > In PyTorch this is provided by `torch.utils.data`; see [TORCH.UTILS.DATA](https://pytorch.org/docs/stable/data.html#multi-process-data-loading).
   >
   > In TensorFlow this is provided by `tf.data`; see [tf.data: Build TensorFlow input pipelines](https://tensorflow.google.cn/guide/data).

2. **Compute the gradient**: to reduce the loss on the mini-batch, compute the gradient with respect to each weight parameter. The gradient gives the direction in which the loss decreases the most.

3. **Update the parameters**: update the weight parameters $W$ by a small step along the gradient direction.

4. **Repeat**: repeat steps 1, 2 and 3.

Neural-network learning proceeds roughly according to these four steps. The parameters are updated with gradient descent, and because the data used at each step is a **randomly** chosen mini-batch, the method is called **stochastic gradient descent (SGD)** — that is where the name comes from. In most deep learning frameworks, stochastic gradient descent is implemented by a function named **SGD**:

- TensorFlow: `tf.keras.optimizers.SGD`
- PyTorch: `torch.optim.SGD`

So far we have obtained the gradients by numerical differentiation, whose drawback is that it is computationally very slow; later we will study the **error backpropagation** method, which solves this problem.
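Note that `numerical_gradient` above delegates to a helper `_numerical_gradient_no_batch`, which is defined elsewhere in the book's source and does not appear in this excerpt. A minimal central-difference sketch of what such a helper is assumed to look like (the step size `h=1e-4` is a conventional choice, not taken from the excerpt):

```
import numpy as np

def _numerical_gradient_no_batch(f, x, h=1e-4):
    """Central-difference gradient of f at a 1-D point x."""
    grad = np.zeros_like(x)
    for idx in range(x.size):
        tmp = x[idx]
        x[idx] = tmp + h
        fxh1 = f(x)                      # f(x + h) in coordinate idx
        x[idx] = tmp - h
        fxh2 = f(x)                      # f(x - h) in coordinate idx
        grad[idx] = (fxh1 - fxh2) / (2 * h)
        x[idx] = tmp                     # restore the original value
    return grad
```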
# AEJxLPS (Auroral electrojets SECS) > Abstract: Access to the AEBS products, SECS type. This notebook uses code from the previous notebook to build a routine that is flexible to plot either the LC or SECS products - this demonstrates a prototype quicklook routine. ``` %load_ext watermark %watermark -i -v -p viresclient,pandas,xarray,matplotlib from viresclient import SwarmRequest import datetime as dt import numpy as np import pandas as pd import xarray as xr import matplotlib.pyplot as plt import matplotlib as mpl request = SwarmRequest() ``` ## AEBS product information See previous notebook, "Demo AEBS products (LC)", for an introduction to these products. ### Function to request data from VirES and reshape it ``` def fetch_data(start_time=None, end_time=None, spacecraft=None, AEBS_type="L"): """DUPLICATED FROM PREVIOUS NOTEBOOK. TO BE REFACTORED""" # Fetch data from VirES auxiliaries = ['OrbitNumber', 'QDLat', 'QDOrbitDirection', 'OrbitDirection', 'MLT'] if AEBS_type == "L": measurement_vars = ["J_NE"] elif AEBS_type == "S": measurement_vars = ["J_CF_NE", "J_DF_NE"] # Fetch LPL/LPS request.set_collection(f'SW_OPER_AEJ{spacecraft}LP{AEBS_type}_2F') request.set_products( measurements=measurement_vars, auxiliaries=auxiliaries, ) data = request.get_between(start_time, end_time, asynchronous=False, show_progress=False) ds_lp = data.as_xarray() # Fetch LPL/LPS Quality request.set_collection(f'SW_OPER_AEJ{spacecraft}LP{AEBS_type}_2F:Quality') request.set_products( measurements=['RMS_misfit', 'Confidence'], ) data = request.get_between(start_time, end_time, asynchronous=False, show_progress=False) ds_lpq = data.as_xarray() # Fetch PBL request.set_collection(f'SW_OPER_AEJ{spacecraft}PB{AEBS_type}_2F') request.set_products( measurements=['PointType', 'Flags'], auxiliaries=auxiliaries ) data = request.get_between(start_time, end_time, asynchronous=False, show_progress=False) ds_pb = data.as_xarray() # Meaning of PointType PointType_meanings = { "WEJ_peak": 0, # minimum "EEJ_peak": 1, # maximum "WEJ_eq_bound_s": 2, # equatorward (pair start) "EEJ_eq_bound_s": 3, "WEJ_po_bound_s": 6, # poleward "EEJ_po_bound_s": 7, "WEJ_eq_bound_e": 10, # equatorward (pair end) "EEJ_eq_bound_e": 11, "WEJ_po_bound_e": 14, # poleward "EEJ_po_bound_e": 15, } # Add new data variables (boolean Type) according to the dictionary above ds_pb = ds_pb.assign( {name: ds_pb["PointType"] == PointType_meanings[name] for name in PointType_meanings.keys()} ) # Merge datasets together def drop_duplicate_times(_ds): _, index = np.unique(_ds['Timestamp'], return_index=True) return _ds.isel(Timestamp=index) def merge_attrs(_ds1, _ds2): attrs = {"Sources":[], "MagneticModels":[], "RangeFilters":[]} for item in ["Sources", "MagneticModels", "RangeFilters"]: attrs[item] = list(set(_ds1.attrs[item] + _ds2.attrs[item])) return attrs # Create new dataset from just the newly created PointType arrays # This is created on a non-repeating Timestamp coordinate ds = xr.Dataset( {name: ds_pb[name].where(ds_pb[name], drop=True) for name in PointType_meanings.keys()} ) # Merge in the positional and auxiliary data data_vars = list(set(ds_pb.data_vars).difference(set(PointType_meanings.keys()))) data_vars.remove("PointType") ds = ds.merge( (ds_pb[data_vars] .pipe(drop_duplicate_times)) ) # Merge together with the LPL data # Note that the Timestamp coordinates aren't equal # Separately merge data with matching and missing time sample points in ds_lpl idx_present = list(set(ds["Timestamp"].values).intersection(set(ds_lp["Timestamp"].values))) 
idx_missing = list(set(ds["Timestamp"].values).difference(set(ds_lp["Timestamp"].values))) # Override prioritises the first dataset (ds_lpl) where there are conflicts ds2 = ds_lp.merge(ds.sel(Timestamp=idx_present), join="outer", compat="override") ds2 = ds2.merge(ds.sel(Timestamp=idx_missing), join="outer") # Update the metadata ds2.attrs = merge_attrs(ds_lp, ds_pb) # Switch the point type arrays to uint8 or bool for performance? # But the .where operations later cast them back to float64 since gaps are filled with nan for name in PointType_meanings.keys(): ds2[name] = ds2[name].astype("uint8").fillna(False) # ds2[name] = ds2[name].fillna(False).astype(bool) ds = ds2 # Append the PBL Flags information into the LPL:Quality dataset to use as a lookup table ds_lpq = ds_lpq.assign( Flags_PBL= ds_pb["Flags"] .pipe(drop_duplicate_times) .reindex_like(ds_lpq, method="nearest"), ) return ds, ds_lpq ``` ### Plotting function ``` # Bit numbers which indicate non-nominal state # Check SW-DS-DTU-GS-003_AEBS_PDD for details BITS_PBL_FLAGS_EEJ_MINOR = (2, 3, 6) BITS_PBL_FLAGS_WEJ_MINOR = (4, 5, 6) BITS_PBL_FLAGS_EEJ_BAD = (0, 7, 8, 11) BITS_PBL_FLAGS_WEJ_BAD = (1, 9, 10, 12) def check_PBL_Flags(flags=0b0, EJ_type="WEJ"): """Return "good", "poor", or "bad" depending on status""" def _check_bits(bitno_set): return any(flags & (1 << bitno) for bitno in bitno_set) if EJ_type == "WEJ": if _check_bits(BITS_PBL_FLAGS_WEJ_BAD): return "bad" elif _check_bits(BITS_PBL_FLAGS_WEJ_MINOR): return "poor" else: return "good" elif EJ_type == "EEJ": if _check_bits(BITS_PBL_FLAGS_EEJ_BAD): return "bad" elif _check_bits(BITS_PBL_FLAGS_EEJ_MINOR): return "poor" else: return "good" glyphs = { "WEJ_peak": {"marker": 'v', "color":'tab:red'}, # minimum "EEJ_peak": {"marker": '^', "color":'tab:purple'}, # maximum "WEJ_eq_bound_s": {"marker": '>', "color":'black'}, # equatorward (pair start) "EEJ_eq_bound_s": {"marker": '>', "color":'black'}, "WEJ_po_bound_s": {"marker": '>', "color":'black'}, # poleward "EEJ_po_bound_s": {"marker": '>', "color":'black'}, "WEJ_eq_bound_e": {"marker": '<', "color":'black'}, # equatorward (pair end) "EEJ_eq_bound_e": {"marker": '<', "color":'black'}, "WEJ_po_bound_e": {"marker": '<', "color":'black'}, # poleward "EEJ_po_bound_e": {"marker": '<', "color":'black'}, } def plot_stack(ds, ds_lpq, hemisphere="North", x_axis="Latitude", AEBS_type="L"): # Identify which variable to plot from dataset # If accessing the SECS (LPS) data, sum the DF & CF parts if "J_CF_NE" in ds.data_vars: ds["J_NE"] = ds["J_DF_NE"] + ds["J_CF_NE"] plotvar = "J_NE" orbdir = "OrbitDirection" if x_axis=="Latitude" else "QDOrbitDirection" markersize = 1 if AEBS_type=="S" else 5 # Select hemisphere if hemisphere == "North": ds = ds.where(ds["Latitude"]>0, drop=True) elif hemisphere == "South": ds = ds.where(ds["Latitude"]<0, drop=True) # Generate plot with split by columns: ascending/descending to/from pole # by rows: successive orbits fig, axes = plt.subplots( nrows=len(ds.groupby("OrbitNumber")), ncols=2, sharex="col", sharey="all", figsize=(10, 20) ) max_ylim = np.max(np.abs(ds[plotvar].sel({"NE": "E"}))) # Loop through each orbit for i, (_, ds_orbit) in enumerate(ds.groupby("OrbitNumber")): if hemisphere == "North": ds_orb_asc = ds_orbit.where(ds_orbit[orbdir] == 1, drop=True) ds_orb_desc = ds_orbit.where(ds_orbit[orbdir] == -1, drop=True) if hemisphere == "South": ds_orb_asc = ds_orbit.where(ds_orbit[orbdir] == -1, drop=True) ds_orb_desc = ds_orbit.where(ds_orbit[orbdir] == 1, drop=True) # Loop through ascending and 
descending sections for j, _ds in enumerate((ds_orb_asc, ds_orb_desc)): if len(_ds.Timestamp) == 0: continue # Line plot of current strength axes[i, j].plot( _ds[x_axis], _ds[plotvar].sel({"NE": "E"}), color="tab:blue", marker=".", markersize=markersize, linestyle="" ) axes[i, j].plot( _ds[x_axis], _ds[plotvar].sel({"NE": "N"}), color="tab:grey", marker=".", markersize=markersize, linestyle="" ) # Plot glyphs at the peaks and boundaries locations for name in glyphs.keys(): __ds = _ds.where(_ds[name], drop=True) try: for lat in __ds[x_axis]: axes[i, j].plot( lat, 0, marker=glyphs[name]["marker"], color=glyphs[name]["color"] ) except Exception: pass # Identify Quality and Flags info # Use either the start time of the section or the end, depending on asc or desc index = 0 if j == 0 else -1 t = _ds["Timestamp"].isel(Timestamp=index).values _ds_qualflags = ds_lpq.sel(Timestamp=t, method="nearest") pbl_flags = int(_ds_qualflags["Flags_PBL"].values) lpl_rms_misfit = float(_ds_qualflags["RMS_misfit"].values) lpl_confidence = float(_ds_qualflags["Confidence"].values) # Shade WEJ and EEJ regions, only if well-defined # def _shade_EJ_region(_ds=None, EJ="WEJ", color="tab:red", alpha=0.3): wej_status = check_PBL_Flags(pbl_flags, "WEJ") eej_status = check_PBL_Flags(pbl_flags, "EEJ") if wej_status in ["good", "poor"]: alpha = 0.3 if wej_status == "good" else 0.1 try: WEJ_left = _ds.where( (_ds["WEJ_eq_bound_s"] == 1) | (_ds["WEJ_po_bound_s"] == 1), drop=True) WEJ_right = _ds.where( (_ds["WEJ_eq_bound_e"] == 1) | (_ds["WEJ_po_bound_e"] == 1), drop=True) x1 = WEJ_left[x_axis][0] x2 = WEJ_right[x_axis][0] axes[i, j].fill_betweenx( [-max_ylim, max_ylim], [x1, x1], [x2, x2], color="tab:red", alpha=alpha) except Exception: pass if eej_status in ["good", "poor"]: alpha = 0.3 if eej_status == "good" else 0.15 try: EEJ_left = _ds.where( (_ds["EEJ_eq_bound_s"] == 1) | (_ds["EEJ_po_bound_s"] == 1), drop=True) EEJ_right = _ds.where( (_ds["EEJ_eq_bound_e"] == 1) | (_ds["EEJ_po_bound_e"] == 1), drop=True) x1 = EEJ_left[x_axis][0] x2 = EEJ_right[x_axis][0] axes[i, j].fill_betweenx( [-max_ylim, max_ylim], [x1, x1], [x2, x2], color="tab:purple", alpha=alpha) except Exception: pass # Write the LPL:Quality and PBL Flags info ha = "right" if j == 0 else "left" textx = 0.98 if j == 0 else 0.02 axes[i, j].text( textx, 0.95, f"RMS Misfit {np.round(lpl_rms_misfit, 2)}; Confidence {np.round(lpl_confidence, 2)}", transform=axes[i, j].transAxes, verticalalignment="top", horizontalalignment=ha ) axes[i, j].text( textx, 0.05, f"PBL Flags {pbl_flags:013b}", transform=axes[i, j].transAxes, verticalalignment="bottom", horizontalalignment=ha ) # Write the start/end time and MLT of the section, and the orbit number def _format_utc(t): return f"UTC {t.strftime('%H:%M')}" def _format_mlt(mlt): hour, fraction = divmod(mlt, 1) t = dt.time(int(hour), minute=int(60*fraction)) return f"MLT {t.strftime('%H:%M')}" try: # Left part (section starting UTC, MLT, OrbitNumber) time_s = pd.to_datetime(ds_orb_asc["Timestamp"].isel(Timestamp=0).data) mlt_s = ds_orb_asc["MLT"].dropna(dim="Timestamp").isel(Timestamp=0).data orbit_number = int(ds_orb_asc["OrbitNumber"].isel(Timestamp=0).data) axes[i, 0].text( 0.01, 0.95, f"{_format_utc(time_s)}\n{_format_mlt(mlt_s)}", transform=axes[i, 0].transAxes, verticalalignment="top" ) axes[i, 0].text( 0.01, 0.05, f"Orbit {orbit_number}", transform=axes[i, 0].transAxes, verticalalignment="bottom" ) except Exception: pass try: # Right part (section ending UTC, MLT) time_e = 
pd.to_datetime(ds_orb_desc["Timestamp"].isel(Timestamp=-1).data) mlt_e = ds_orb_desc["MLT"].dropna(dim="Timestamp").isel(Timestamp=-1).data axes[i, 1].text( 0.99, 0.95, f"{_format_utc(time_e)}\n{_format_mlt(mlt_e)}", transform=axes[i, 1].transAxes, verticalalignment="top", horizontalalignment="right" ) except Exception: pass # Extra config of axes and figure text axes[0, 0].set_ylim(-max_ylim, max_ylim) if hemisphere == "North": axes[0, 0].set_xlim(50, 90) axes[0, 1].set_xlim(90, 50) elif hemisphere == "South": axes[0, 0].set_xlim(-50, -90) axes[0, 1].set_xlim(-90, -50) for ax in axes.flatten(): ax.grid() axes[-1, 0].set_xlabel(x_axis) axes[-1, 0].set_ylabel("Horizontal currents\n[ A.km$^{-1}$ ]") time = pd.to_datetime(ds["Timestamp"].isel(Timestamp=0).data) spacecraft = ds["Spacecraft"].dropna(dim="Timestamp").isel(Timestamp=0).data AEBS_type_name = "LC" if AEBS_type == "L" else "SECS" fig.text( 0.5, 0.9, f"{time.strftime('%Y-%m-%d')}\nSwarm {spacecraft}\n{hemisphere}\nAEBS: {AEBS_type_name}", transform=fig.transFigure, horizontalalignment="center", ) fig.subplots_adjust(wspace=0, hspace=0) return fig, axes ``` ### Fetching and plotting function ``` def quicklook(day="2015-01-01", hemisphere="North", spacecraft="A", AEBS_type="L", xaxis="Latitude"): start_time = dt.datetime.fromisoformat(day) end_time = start_time + dt.timedelta(days=1) ds, ds_lpq = fetch_data(start_time, end_time, spacecraft, AEBS_type) fig, axes = plot_stack(ds, ds_lpq, hemisphere, xaxis, AEBS_type) return ds, fig, axes ``` Consecutive orbits are shown in consecutive rows, centered over the pole. The starting and ending times (UTC and MLT) of the orbital section are shown at the left and right. Westward (WEJ) and Eastward (EEJ) electrojet extents and peak intensities are indicated: - Blue dots: Estimated current density in Eastward direction, J_NE (E) - Grey dots: Estimated current density in Northward direction, J_NE (N) - Red/Purple shaded region: WEJ/EEJ extent (boundaries marked by black triangles) - Red/Purple triangles: Locations of peak WEJ/EEJ intensity Select AEBS_type as S to get SECS results, L to get LC results SECS = spherical elementary current systems method LC = Line current method Notes: The code is currently quite fragile, so it is broken on some days. Sometimes the electrojet regions are not shaded correctly. Only the horizontal currents are currently shown. ``` quicklook(day="2016-01-01", hemisphere="North", spacecraft="A", AEBS_type="S", xaxis="Latitude"); quicklook(day="2016-01-01", hemisphere="North", spacecraft="A", AEBS_type="L", xaxis="Latitude"); ```
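Since `quicklook` returns the dataset together with the figure and axes handles, its output can be kept for further processing or written to disk. A small usage sketch (the output file name is arbitrary):

```
ds, fig, axes = quicklook(day="2016-01-01", hemisphere="North", spacecraft="A", AEBS_type="S", xaxis="Latitude")
fig.savefig("AEJALPS_2016-01-01_North.png", dpi=150, bbox_inches="tight")
```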
``` # default_exp models.OmniScaleCNN ``` # OmniScaleCNN > This is an unofficial PyTorch implementation by Ignacio Oguiza - [email protected] based on: * Rußwurm, M., & Körner, M. (2019). Self-attention for raw optical satellite time series classification. arXiv preprint arXiv:1910.10536. * Official implementation: https://github.com/dl4sits/BreizhCrops/blob/master/breizhcrops/models/OmniScaleCNN.py ``` #export from tsai.imports import * from tsai.models.layers import * from tsai.models.utils import * #export #This is an unofficial PyTorch implementation by Ignacio Oguiza - [email protected] based on: # Rußwurm, M., & Körner, M. (2019). Self-attention for raw optical satellite time series classification. arXiv preprint arXiv:1910.10536. # Official implementation: https://github.com/dl4sits/BreizhCrops/blob/master/breizhcrops/models/OmniScaleCNN.py class SampaddingConv1D_BN(Module): def __init__(self, in_channels, out_channels, kernel_size): self.padding = nn.ConstantPad1d((int((kernel_size - 1) / 2), int(kernel_size / 2)), 0) self.conv1d = torch.nn.Conv1d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size) self.bn = nn.BatchNorm1d(num_features=out_channels) def forward(self, x): x = self.padding(x) x = self.conv1d(x) x = self.bn(x) return x class build_layer_with_layer_parameter(Module): """ formerly build_layer_with_layer_parameter """ def __init__(self, layer_parameters): """ layer_parameters format [in_channels, out_channels, kernel_size, in_channels, out_channels, kernel_size, ..., nlayers ] """ self.conv_list = nn.ModuleList() for i in layer_parameters: # in_channels, out_channels, kernel_size conv = SampaddingConv1D_BN(i[0], i[1], i[2]) self.conv_list.append(conv) def forward(self, x): conv_result_list = [] for conv in self.conv_list: conv_result = conv(x) conv_result_list.append(conv_result) result = F.relu(torch.cat(tuple(conv_result_list), 1)) return result class OmniScaleCNN(Module): def __init__(self, c_in, c_out, seq_len, layers=[8 * 128, 5 * 128 * 256 + 2 * 256 * 128], few_shot=False): receptive_field_shape = seq_len//4 layer_parameter_list = generate_layer_parameter_list(1,receptive_field_shape, layers, in_channel=c_in) self.few_shot = few_shot self.layer_parameter_list = layer_parameter_list self.layer_list = [] for i in range(len(layer_parameter_list)): layer = build_layer_with_layer_parameter(layer_parameter_list[i]) self.layer_list.append(layer) self.net = nn.Sequential(*self.layer_list) self.gap = GAP1d(1) out_put_channel_number = 0 for final_layer_parameters in layer_parameter_list[-1]: out_put_channel_number = out_put_channel_number + final_layer_parameters[1] self.hidden = nn.Linear(out_put_channel_number, c_out) def forward(self, x): x = self.net(x) x = self.gap(x) if not self.few_shot: x = self.hidden(x) return x def get_Prime_number_in_a_range(start, end): Prime_list = [] for val in range(start, end + 1): prime_or_not = True for n in range(2, val): if (val % n) == 0: prime_or_not = False break if prime_or_not: Prime_list.append(val) return Prime_list def get_out_channel_number(paramenter_layer, in_channel, prime_list): out_channel_expect = max(1, int(paramenter_layer / (in_channel * sum(prime_list)))) return out_channel_expect def generate_layer_parameter_list(start, end, layers, in_channel=1): prime_list = get_Prime_number_in_a_range(start, end) layer_parameter_list = [] for paramenter_number_of_layer in layers: out_channel = get_out_channel_number(paramenter_number_of_layer, in_channel, prime_list) tuples_in_layer = [] for prime in 
prime_list: tuples_in_layer.append((in_channel, out_channel, prime)) in_channel = len(prime_list) * out_channel layer_parameter_list.append(tuples_in_layer) tuples_in_layer_last = [] first_out_channel = len(prime_list) * get_out_channel_number(layers[0], 1, prime_list) tuples_in_layer_last.append((in_channel, first_out_channel, 1)) tuples_in_layer_last.append((in_channel, first_out_channel, 2)) layer_parameter_list.append(tuples_in_layer_last) return layer_parameter_list bs = 16 c_in = 3 seq_len = 12 c_out = 2 xb = torch.rand(bs, c_in, seq_len) m = create_model(OmniScaleCNN, c_in, c_out, seq_len) test_eq(OmniScaleCNN(c_in, c_out, seq_len)(xb).shape, [bs, c_out]) m #hide from tsai.imports import * from tsai.export import * nb_name = get_nb_name() # nb_name = "109_models.OmniScaleCNN.ipynb" create_scripts(nb_name); ```
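To make the prime-kernel-size scheme used above more concrete, the helper functions can be inspected directly. A small sketch, assuming the `m`, `c_in` and `seq_len` defined in the test cell above:

```
# Kernel sizes chosen for a short series (receptive field = seq_len // 4)
layer_params = generate_layer_parameter_list(1, seq_len // 4,
                                             [8 * 128, 5 * 128 * 256 + 2 * 256 * 128],
                                             in_channel=c_in)
for i, layer in enumerate(layer_params):
    print(f"layer {i}: kernel sizes {[k for _, _, k in layer]}")

# Rough size of the instantiated model
print(f"OmniScaleCNN parameters: {sum(p.numel() for p in m.parameters()):,}")
```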
# Notebook to visualize location data ``` import csv # count the number of Starbucks in DC with open('starbucks.csv') as file: csvinput = csv.reader(file) acc = 0 for record in csvinput: if 'DC' in record[3]: acc += 1 print( acc ) def parse_locations(csv_iterator,state=''): """ strip out long/lat and convert to a list of floating point 2-tuples -- optionally, filter by a specified state """ return [ ( float(row[0]), float(row[1])) for row in csv_iterator if state in row[3]] def get_locations(filename, state=''): """ read a list of longitude/latitude pairs from a csv file, optionally, filter by a specified state """ with open(filename, 'r') as input_file: csvinput = csv.reader(input_file) location_data = parse_locations(csvinput,state) return location_data # get the data from all starbucks locations starbucks_locations = get_locations('starbucks.csv') # get the data from burger locations" burger_locations = get_locations('burgerking.csv') + \ get_locations('mcdonalds.csv') + \ get_locations('wendys.csv') # look at the first few (10) data points of each for n in range(10): print( starbucks_locations[n] ) print() for n in range(10): print( burger_locations[n] ) # a common, powerful plotting library import matplotlib.pyplot as plt # set figure size plt.figure(figsize=(12, 9)) # get the axes of the plot and set them to be equal-aspect and limited (specify bounds) by data ax = plt.axes() ax.set_aspect('equal', 'datalim') # plot the data plt.scatter(*zip(*starbucks_locations), s=1) plt.legend(["Starbucks"]) # jupyter automatically plots this inline. On the console, you need to invoke plt.show() # FYI: In that case, execution halts until you close the window it opens. # set figure size plt.figure(figsize=(12, 9)) # get the axes of the plot and set them to be equal-aspect and limited (specify bounds) by data ax = plt.axes() ax.set_aspect('equal', 'datalim') # plot the data plt.scatter(*zip(*burger_locations), color='green', s=1) plt.legend(["Burgers"]) lat, lon = zip(*get_locations('burgerking.csv')) min_lat = min(lat) max_lat = max(lat) min_lon = min(lon) max_lon = max(lon) lat, lon = zip(*get_locations('mcdonalds.csv')) min_lat = min(min_lat,min(lat)) max_lat = max(max_lat,max(lat)) min_lon = min(min_lon,min(lon)) max_lon = max(max_lon,max(lon)) lat, lon = zip(*get_locations('wendys.csv')) min_lat = min(min_lat,min(lat)) max_lat = max(max_lat,max(lat)) min_lon = min(min_lon,min(lon)) max_lon = max(max_lon,max(lon)) lat, lon = zip(*get_locations('pizzahut.csv')) min_lat = min(min_lat,min(lat)) max_lat = max(max_lat,max(lat)) min_lon = min(min_lon,min(lon)) max_lon = max(max_lon,max(lon)) # set figure size fig = plt.figure(figsize=(12, 9)) #fig = plt.figure() plt.subplot(2,2,1) plt.scatter(*zip(*get_locations('burgerking.csv')), color='black', s=1, alpha=0.2) plt.xlim(min_lat-5,max_lat+5) plt.ylim(min_lon-5,max_lon+5) plt.gca().set_aspect('equal') plt.subplot(2,2,2) plt.scatter(*zip(*get_locations('mcdonalds.csv')), color='black', s=1, alpha=0.2) plt.xlim(min_lat-5,max_lat+5) plt.ylim(min_lon-5,max_lon+5) plt.gca().set_aspect('equal') plt.subplot(2,2,3) plt.scatter(*zip(*get_locations('wendys.csv')), color='black', s=1, alpha=0.2) plt.xlim(min_lat-5,max_lat+5) plt.ylim(min_lon-5,max_lon+5) plt.gca().set_aspect('equal') plt.subplot(2,2,4) plt.scatter(*zip(*get_locations('pizzahut.csv')), color='black', s=1, alpha=0.2) plt.xlim(min_lat-5,max_lat+5) plt.ylim(min_lon-5,max_lon+5) plt.gca().set_aspect('equal') #plt.scatter(*zip(*get_locations('dollar-tree.csv')), color='black', s=1, alpha=0.2) # get the 
starbucks in DC starbucks_dc_locations = get_locations('starbucks.csv', state='DC') burger_dc_locations = get_locations('burgerking.csv', state='DC') + \ get_locations('mcdonalds.csv', state='DC') + \ get_locations('wendys.csv', state='DC') # show the first 10 locations of each: for n in range(10): print( starbucks_dc_locations[n] ) print() for n in range(min(10,len(burger_dc_locations))): print( burger_dc_locations[n] ) # set figure size plt.figure(figsize=(12, 9)) # get the axes of the plot and set them to be equal-aspect and limited by data ax = plt.axes() ax.set_aspect('equal', 'datalim') # plot the data plt.scatter(*zip(*starbucks_dc_locations)) plt.scatter(*zip(*burger_dc_locations), color='green') # We also want to plot the DC boundaries, so we have a better idea where these things are # the data is contained in DC.txt # let's inspect it. Observe the format with open('DC.txt') as file: for line in file: print(line,end='') # lines already end with a newline so don't print another with open('DC.txt') as file: # get the lower left and upper right coords for the bounding box ll_long, ll_lat = map(float, next(file).split()) ur_long, ur_lat = map(float, next(file).split()) # get the number of regions num_records = int(next(file)) # there better just be one assert num_records == 1 # then a blank line next(file) # Title of "county" county_name = next(file).rstrip() # removes newline at end # "State" county resides in state_name = next(file).rstrip() # this is supposed to be DC assert state_name == "DC" # number of points to expect num_pairs = int(next(file)) dc_boundary = [ tuple(map(float,next(file).split())) for n in range(num_pairs)] dc_boundary # add the beginning to the end so that it closes up dc_boundary.append(dc_boundary[0]) # draw it! ax = plt.axes() ax.set_aspect('equal', 'datalim') plt.plot(*zip(*dc_boundary)) # draw both the starbucks location and DC boundary together plt.figure(figsize=(12, 9)) ax = plt.axes() ax.set_aspect('equal', 'datalim') plt.scatter(*zip(*starbucks_dc_locations)) plt.scatter(*zip(*burger_dc_locations), color='green') plt.plot(*zip(*dc_boundary)) # draw both the starbucks location and DC boundary together plt.figure(figsize=(12, 9)) ax = plt.axes() ax.set_aspect('equal', 'datalim') plt.scatter(*zip(*get_locations('burgerking.csv', state='DC')), color='red') plt.scatter(*zip(*get_locations('mcdonalds.csv', state='DC')), color='green') plt.scatter(*zip(*get_locations('wendys.csv', state='DC')), color='blue') plt.scatter(*zip(*get_locations('pizzahut.csv', state='DC')), color='yellow') plt.scatter(*zip(*get_locations('dollar-tree.csv', state='DC')), color='black') plt.plot(*zip(*dc_boundary)) ``` ### But where's AU? 
``` # draw both the starbucks location and DC boundary together plt.figure(figsize=(12, 9)) ax = plt.axes() ax.set_aspect('equal', 'datalim') plt.scatter(*zip(*starbucks_dc_locations)) plt.scatter(*zip(*burger_dc_locations), color='green') plt.plot(*zip(*dc_boundary)) # add a red dot right over Anderson plt.scatter([-77.0897511],[38.9363019],color='red') from ipyleaflet import Map, basemaps, basemap_to_tiles, Marker, CircleMarker m = Map(layers=(basemap_to_tiles(basemaps.OpenStreetMap.HOT), ), center=(38.898082, -77.036696), zoom=11) # marker for AU marker = Marker(location=(38.937831, -77.088852), radius=2, color='green') m.add_layer(marker) for (long,lat) in starbucks_dc_locations: marker = CircleMarker(location=(lat,long), radius=1, color='steelblue') m.add_layer(marker); for (long,lat) in burger_dc_locations: marker = CircleMarker(location=(lat,long), radius=1, color='green') m.add_layer(marker); m ```
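As an aside, the four nearly identical min/max blocks used earlier to build the subplot bounds could be collapsed into a single helper. A sketch reusing the notebook's `get_locations` (the padding of 5 degrees mirrors the `±5` applied when setting the axis limits):

```
def bounding_box(filenames, pad=5):
    """Overall (min_lat, max_lat, min_lon, max_lon) across several location files, padded."""
    lats, lons = [], []
    for name in filenames:
        lat, lon = zip(*get_locations(name))   # same (lat, lon) unpacking as above
        lats.extend(lat)
        lons.extend(lon)
    return min(lats) - pad, max(lats) + pad, min(lons) - pad, max(lons) + pad

min_lat, max_lat, min_lon, max_lon = bounding_box(
    ['burgerking.csv', 'mcdonalds.csv', 'wendys.csv', 'pizzahut.csv'])
```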
<a href="https://colab.research.google.com/github/ashraj98/rbf-sin-approx/blob/main/Lab2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Lab 2 ### Ashwin Rajgopal Start off by importing numpy for matrix math, random for random ordering of samples and pyplot for plotting results. ``` import matplotlib.pyplot as plt import numpy as np import random ``` #### Creating the samples X variables can be generated by using `np.random.rand` to generate a array of random numbers between 0 and 1, which is what is required. The same can be done to generate the noise, but then it needs to be divided by 5 and subtracted by .1 to fit the interval [-0.1, 0.1]. The expected values can then by generated applying the function to the inputs and adding the noise. For plotting the original function that will be approximated by the RBF network, `linspace` can be used to generate equally spaced inputs to make a smooth plot of the function. ``` X = np.random.rand(1, 75).flatten() noise = np.random.rand(1, 75).flatten() / 5 - 0.1 D = 0.5 + 0.4 * np.sin(2 * np.pi * X) + noise func_X = np.linspace(0, 1, 100) func_Y = 0.5 + 0.4 * np.sin(2 * np.pi * func_X) ``` #### K-means algorithm This function finds the centers and variances given uncategorized inputs and number of clusters. It also takes in a flag to determined whether to output an averaged variance for all clusters or use specialized variances for each cluster. The algorithm begins by choosing random points from the inputs as the center of the clusters, so that every cluster will have at least point assigned to it. Then the algorithm repetitively assigns points to each cluster using Euclidean distance and averages the assigned points for each cluster to find the new centers. The new centers are compared with the old centers, and if they are the same, the algorithm is stopped. Then using the last assignment of the points, the variance for each cluster is calculated. If a cluster does not have more than one point assigned to it, it is skipped. If `use_same_width=True`, then an normalized variance is used for all clusters. The maximum distance is used by using an outer subtraction between the centers array and itself, and then it is divided by `sqrt(2 * # of clusters)`. If `use_same_width=False`, then for all clusters that had only one point assigned to it, the average of all the other variances is used as the variance for these clusters. ``` def kmeans(clusters=2, X=X, use_same_width=False): centers = np.random.choice(X, clusters, replace=False) diff = 1 while diff != 0: assigned = [[] for i in range(clusters)] for x in X: assigned_center = np.argmin(np.abs(centers - x)) assigned[assigned_center].append(x.item()) new_centers = np.array([np.average(points) for points in assigned]) diff = np.sum(np.abs(new_centers - centers)) centers = new_centers variances = [] no_var = [] for i in range(clusters): if len(assigned[i]) < 2: no_var.append(i) else: variances.append(np.var(assigned[i])) if use_same_width: d_max = np.max(np.abs(np.subtract.outer(centers, centers))) avg_var = d_max / np.sqrt(2 * clusters) variances = [avg_var for i in range(clusters)] else: if len(no_var) > 0: avg_var = np.average(variances) for i in no_var: variances.insert(i, avg_var) return (centers, np.array(variances)) ``` The function below defines the gaussian function. Given the centers and variances for all clusters, it calculates the output for all gaussians at once for a single input. 
``` def gaussian(centers, variances, x): return np.exp((-1 / (2 * variances)) * ((centers - x) ** 2)) ``` #### Training the RBF Network For each gaussian, a random weight is generated in the interval [-1, 1]. The same happens for a bias term as well. Then, for the number of epochs specified, the algorithm calculates the gaussian outputs for each input, and then takes the weighted sum and adds the bias to get the output of the network. Then the LMS algorithm is applied. Afterwards, the `linspace`d inputs are used to generate the outputs, which allows for plotting the approximating function. Then both the approximated function (red) and the approximating function (blue) are plot, as well as the training data with the noise. ``` def train(centers, variances, lr, epochs=100): num_centers = len(centers) W = np.random.rand(1, num_centers) * 2 - 1 b = np.random.rand(1, 1) * 2 - 1 order = list(range(len(X))) for i in range(epochs): random.shuffle(order) for j in order: x = X[j] d = D[j] G = gaussian(centers, variances, x) y = W.dot(G) + b e = d - y W += lr * e * G.reshape(1, num_centers) b += lr * e est_Y = [] for x in func_X: G = gaussian(centers, variances, x) y = W.dot(G) + b est_Y.append(y.item()) est_Y = np.array(est_Y) fig = plt.figure() ax = plt.axes() ax.scatter(X, D, label='Sampled') ax.plot(func_X, est_Y, '-b', label='Approximate') ax.plot(func_X, func_Y, '-r', label='Original') plt.title(f'Bases = ${num_centers}, Learning Rate = ${lr}') plt.xlabel('x') plt.ylabel('y') plt.legend(loc="upper right") ``` The learning rates and number of bases that needed to be tested are defined, and then K-means is run for each combination of base and learning rate. The output of the K-means is used as the input for the RBF training algorithm, and the results are plotted. ``` bases = [2, 4, 7, 11, 16] learning_rates = [.01, .02] for base in bases: for lr in learning_rates: centers, variances = kmeans(base, X) train(centers=centers, variances=variances, lr=lr) ``` The best function approximates seem to be with 2 bases. As soon as the bases are increased to 4, overfitting starts to occur, with 16 bases having extreme overfitting. Increasing the learning rate seems to decrease the training error but in some cases increases the overfitting of the data. Run the same combinations or number of bases and learning rate again, but this time using the same Gaussian width for all bases. ``` for base in bases: for lr in learning_rates: centers, variances = kmeans(base, X, use_same_width=True) train(centers=centers, variances=variances, lr=lr, epochs=100) ``` Using the same width for each base seems to drastically decrease overfitting. Even with 16 bases, the approximating function is very smooth. However, after 100 epochs, the training error is still very high, and the original function is not well approximated. After running the training with significantly more epochs (10,000 to 100,000), the function becomes well approximated for large number of bases. But for smaller number of bases like 2, the approximating function is still not close to the approximated function, whereas when using different Gaussian widths, 2 bases was the best approximator of the original function. So, using the same widths, the training takes significantly longer and requires many bases to be used to approximate the original function well.
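The comparisons above are qualitative. If `train` were extended to also return the curve it computes (`est_Y`), the quality of the fit could be quantified against the noise-free target; a sketch under that assumption:

```
# Assumes train() is modified to also return est_Y, its output on func_X
def approximation_error(est_Y):
    """Mean squared error between the RBF output and the noise-free sine curve."""
    return np.mean((np.array(est_Y) - func_Y) ** 2)

# e.g. mse = approximation_error(est_Y); print(f"MSE vs original function: {mse:.5f}")
```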
# <center>Introduction to Using Python to access GeoNet's GNSS data

In this notebook we will learn how to get data from one GNSS (Global Navigation Satellite System) station. By the end of this tutorial you will have made a graph like the one below.

<img src="plot.png">

## &nbsp;Table of contents

### 1. Introduction
### 2. Building the base FITS query
### 3. Get GNSS data
### 4. Plot data
### 5. Save data

## &nbsp;1. Introduction

In this tutorial we will be learning how to use Python to access GNSS (commonly referred to as GPS) data from the continuous GNSS sites in the GeoNet and PositioNZ networks. GeoNet has an API (Application Programming Interface) to access its GNSS data. You do not need to know anything about APIs to use this tutorial. If you would like more info see https://fits.geonet.org.nz/api-docs/.

To use this tutorial you will need to install the package pandas (https://pandas.pydata.org/). This tutorial assumes that you have a basic knowledge of Python.

###### About GeoNet GNSS data

GeoNet uses GNSS technology to work out the precise positions of over 190 stations in and around NZ every day. These positions are used to generate a displacement timeseries for each station, so we can observe how much and how quickly each station moves. <br>
This data comes split into 3 components:
<ul>
<li> The displacement in the east-west direction, where east is a positive displacement. This data has a typeID of "e"
<li> The displacement in the north-south direction, where north is a positive displacement. This data has a typeID of "n"
<li> The displacement in the up-down direction, where up is a positive displacement. This data has a typeID of "u"</ul>

For more on data types go to http://fits.geonet.org.nz/type (for best formatting use firefox)

## &nbsp;2. Building the base FITS query

###### Import packages

```
import requests
import pandas as pd
import datetime
import matplotlib.pyplot as plt
pd.plotting.register_matplotlib_converters()
```

###### Set URL and endpoint

```
base_url = "http://fits.geonet.org.nz/"
endpoint = "observation"
```

The base URL should be set as above to access the FITS database webservice containing the GeoNet GNSS data. The endpoint is set to observation to get the data itself in CSV format. There are other endpoints which will return different information, such as plot and site. To learn more go to https://fits.geonet.org.nz/api-docs/

###### Combine URL and endpoint

```
url = base_url + endpoint
```

Combine the base URL and the endpoint to give the information to request the data.

## &nbsp;3. Get GNSS data

In this section we will learn how to get all the GNSS observation data from a site and put it into a pandas dataframe, so we can plot and save the data.

###### Set query parameters

```
parameters = {"typeID": "e", "siteID": "HANM"}
```

Set the parameters to get the east component (`'typeID':'e'`) of the GNSS station in the Hanmer Basin (`'siteID': 'HANM'`).
To find the 4 letter site ID of a station you can use https://www.geonet.org.nz/data/network/sensor/search to find stations in an area of interest ##### Get GNSS data ``` response_e = requests.get(url, params=parameters) ``` We use `requests.get` to get the data using the URL we made earlier and the parameters we set in the last stage ``` parameters["typeID"] = "n" response_n = requests.get(url, params=parameters) parameters["typeID"] = "u" response_u = requests.get(url, params=parameters) ``` Here we've changed the typeID in the parameters dictionary to get the other components for the GNSS station ###### Check that your requests worked ``` print ("The Response status code of the east channel is", response_e.status_code) print ("The Response status code of the north channel is",response_n.status_code) print ("The Response status code of the up channel is",response_u.status_code) ``` The response status code says whether we were successful in getting the data requested and why not if we were unsuccessful: <ul> <li>200 -- everything went okay, and the result has been returned (if any) <li>301 -- the server is redirecting you to a different endpoint. This can happen when a company switches domain names, or an endpoint name is changed. <li>400 -- the server thinks you made a bad request. This can happen when you don't send along the right data, among other things. <li>404 -- the resource you tried to access wasn't found on the server. </ul> Now that we know our request for data was successful we want to transform it into a format that we can deal with in Python. Right now the data is one long string ###### Split the string of data ``` data_e = response_e.content.decode("utf-8").split("\n") ``` The above code decodes the response and then splits the east displacement data on the new line symbol as each line is one point of data. If you are using Python2 remove the code `.decode("utf-8")` ###### Split the points of data ``` for i in range(0, len(data_e)): data_e[i]= data_e[i].split(",") ``` The above code uses a for loop to split each point of data on the "," symbol as each value is separated by a ",", producing a list of lists ###### Reformat data values ``` for i in range(1, (len(data_e)-1)): data_e[i][0] = datetime.datetime.strptime(data_e[i][0], '%Y-%m-%dT%H:%M:%S.%fZ') #make 1st value into a datetime object data_e[i][1] = float(data_e[i][1]) #makes 2nd value into a decimal number data_e[i][2] = float(data_e[i][2]) #makes 3rd value into a decimal number ``` The above code uses a `for` loop to go over each point of data and reformat it, so that the first value in each point is seen as a time, and the second and third values are seen as numbers.<br> Note that we choose to miss the first and last data points in our loop as the first data point has the names of the data values and the last point is empty due to how we split the data. ###### Convert nested list into dataframe object ``` df_e = pd.DataFrame(data_e[1:-1],index = range(1, (len(data_e)-1)), columns=data_e[0]) ``` `data_e[1:-1]` makes the list of data be the data in the data frame, `index = range(1, (len(data_e)-1))` makes rows named 1, 2, ... 
n where n is the number of data points, and `columns=data_e[0]` gives the columns the names that were in the first line of the response string

###### Print the first few lines of the data frame

```
df_e.head()
```

Here we can see on the 4th of June 2014 how much the site HANM had moved east (with formal error) in mm from its reference position, this being the midpoint of the position timeseries.

###### Make everything we have just done into a function

```
def GNSS_dataframe(data):
    """
    This function turns the string of GNSS data received by requests.get
    into a data frame with GNSS data correctly formatted.
    """
    data = data.split("\n")  # splits data on the new line symbol
    for i in range(0, len(data)):
        data[i] = data[i].split(",")  # splits data points on the , symbol
    for i in range(1, (len(data)-1)):
        data[i][0] = datetime.datetime.strptime(data[i][0], '%Y-%m-%dT%H:%M:%S.%fZ')  # make 1st value into a datetime object
        data[i][1] = float(data[i][1])  # makes 2nd value into a decimal number
        data[i][2] = float(data[i][2])  # makes 3rd value into a decimal number
    df = pd.DataFrame(data[1:-1], index=range(1, (len(data)-1)), columns=data[0])  # make the list into a data frame
    return df


df_e.head()
```

This makes code cells 8 to 11 into a function to be called later in the notebook.

###### Run the above function on the North and Up data

```
df_n = GNSS_dataframe(response_n.content.decode("utf-8"))
df_u = GNSS_dataframe(response_u.content.decode("utf-8"))
```

Make sure to run this function on the content string of the requested data. If you are using Python 2, remove the code `.decode("utf-8")`

##### Why make the data into a data frame?

A data frame is a way of formatting data into a table with column and row names, much like a CSV file, and it makes long lists of data a lot easier to use. Data frame data can be called by column or row name, making it easy to get the point(s) of data you want. Data, much like in a table, can be "linked" so that you can do something like plot a data point on a 2D plot. Sadly, data frames are not a built-in data format in Python, so we must use the pandas (https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe) package to be able to make a data frame.

## &nbsp;4. Plot data

###### Plot the east data

```
e_plot = df_e.plot(x='date-time', y= ' e (mm)', marker='o', title = 'Relative east displacement for HANM')
#plt.savefig("e_plot")
```

The above code plots time on the x axis and the displacement in millimetres on the y axis. `marker='o'` makes each point of data a small circle. If you want to save the plot as a png file in the folder you are running this code from, you can uncomment `plt.savefig("e_plot")`

###### Plot the north data

```
n_plot = df_n.plot(x='date-time', y= ' n (mm)', marker='o', title = 'Relative north displacement for HANM')
#plt.savefig("n_plot")
```

###### Plot the up data

```
u_plot = df_u.plot(x='date-time', y= ' u (mm)', marker='o', title='Relative up displacement for HANM')
#plt.savefig("u_plot")
```

## &nbsp;5. Save data

##### Make a copy of the east data frame

```
df = df_e.copy()
```

This makes a copy of the data frame containing the east displacement data, so that if `df` is edited, `df_e` is not affected.

###### Remove the error column from this copy of the data

```
df = df.drop(" error (mm)",axis=1)
```

The above code removes the column called error (mm) and all its data from `df`. `axis=1` says that we are looking for a column. If we put `axis=0` we would be looking for a row.
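The next cell adds the north and up components by direct column assignment, which assumes all three data frames contain exactly the same dates in the same order. A more defensive alternative (a sketch, not needed when the series are complete) merges the three on the shared `date-time` column instead:

```
df_merged = (
    df_e.drop(" error (mm)", axis=1)
        .merge(df_n.drop(" error (mm)", axis=1), on="date-time")
        .merge(df_u.drop(" error (mm)", axis=1), on="date-time")
)
```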
###### Add the up and north data to this data frame (but not the respective errors) ``` df["u (mm)"] = df_u[' u (mm)'] df["n (mm)"] = df_n[' n (mm)'] ``` ###### Print the first few lines of the data frame ``` df.head() ``` Here we can see the layout of the data frame with the columns date, east displacement, up displacement and north displacement ###### Save as CSV file ``` df.to_csv("HANM.csv") ``` This saves the data frame csv file with the same formatting as the data frame. It will have saved in the same place as this notebook is run from and be named HANM ## Useful links <ul> <li>This notebook uses Python https://www.python.org/ <li>This notebook also uses pandas https://pandas.pydata.org/ <li>There is a notebook on this data set in R at https://github.com/GeoNet/data-tutorials/tree/master/GNSS_Data/R/Introduction_to_GNSS_data_using_FITS_in_R.ipynb <li>More tutorials on GNSS data can be found at https://github.com/GeoNet/data-tutorials/tree/master/GNSS_Data/R <li>To learn more about station codes go to https://www.geonet.org.nz/data/supplementary/channels <li>For more on data types in FITS go to http://fits.geonet.org.nz/type (for best formatting use firefox) <li>For more on FITS go to https://fits.geonet.org.nz/api-docs/ </ul>
# Deploy and perform inference on Model Package from AWS Marketplace This notebook provides you instructions on how to deploy and perform inference on model packages from AWS Marketplace object detection model. This notebook is compatible only with those object detection model packages which this notebook is linked to. #### Pre-requisites: 1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio. 1. Ensure that IAM role used has **AmazonSageMakerFullAccess** 1. To deploy this ML model successfully, ensure that: 1. Either your IAM role has these three permissions and you have authority to make AWS Marketplace subscriptions in the AWS account used: 1. **aws-marketplace:ViewSubscriptions** 1. **aws-marketplace:Unsubscribe** 1. **aws-marketplace:Subscribe** 2. or your AWS account has a subscription to this object detection model. If so, skip step: [Subscribe to the model package](#1.-Subscribe-to-the-model-package) #### Contents: 1. [Subscribe to the model package](#1.-Subscribe-to-the-model-package) 2. [Create an endpoint and perform real-time inference](#2.-Create-an-endpoint-and-perform-real-time-inference) 1. [Create an endpoint](#A.-Create-an-endpoint) 2. [Create input payload](#B.-Create-input-payload) 3. [Perform real-time inference](#C.-Perform-real-time-inference) 4. [Visualize output](#D.-Visualize-output) 5. [Delete the endpoint](#E.-Delete-the-endpoint) 3. [Perform batch inference](#3.-Perform-batch-inference) 4. [Clean-up](#4.-Clean-up) 1. [Delete the model](#A.-Delete-the-model) 2. [Unsubscribe to the listing (optional)](#B.-Unsubscribe-to-the-listing-(optional)) #### Usage instructions You can run this notebook one cell at a time (By using Shift+Enter for running a cell). **Note** - This notebook requires you to follow instructions and specify values for parameters, as instructed. ### 1. Subscribe to the model package To subscribe to the model package: 1. Open the model package listing page you opened this notebook for. 1. On the AWS Marketplace listing, click on the **Continue to subscribe** button. 1. On the **Subscribe to this software** page, review and click on **"Accept Offer"** if you and your organization agrees with EULA, pricing, and support terms. 1. Once you click on **Continue to configuration button** and then choose a **region**, you will see a **Product Arn** displayed. This is the model package ARN that you need to specify while creating a deployable model using Boto3. Copy the ARN corresponding to your region and specify the same in the following cell. ``` model_package_arn='<Customer to specify Model package ARN corresponding to their AWS region>' import json from sagemaker import ModelPackage import sagemaker as sage from sagemaker import get_execution_role import matplotlib.patches as patches import numpy as np from matplotlib import pyplot as plt from PIL import Image from PIL import ImageColor role = get_execution_role() sagemaker_session = sage.Session() boto3 = sagemaker_session.boto_session bucket = sagemaker_session.default_bucket() region = sagemaker_session.boto_region_name s3 = boto3.client("s3") runtime= boto3.client('runtime.sagemaker') ``` In next step, you would be deploying the model for real-time inference. For information on how real-time inference with Amazon SageMaker works, see [Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html). ### 2. 
Create an endpoint and perform real-time inference

```
model_name='object-detection-model'

#The object detection model packages this notebook is compatible with support
#application/x-image as the content-type.
content_type='application/x-image'
```

Review and update the compatible instance type for the model package in the following cell.

```
real_time_inference_instance_type='ml.g4dn.xlarge'
batch_transform_inference_instance_type='ml.p2.xlarge'
```

#### A. Create an endpoint

```
#create a deployable model from the model package.
model = ModelPackage(role=role,
                     model_package_arn=model_package_arn,
                     sagemaker_session=sagemaker_session)

#Deploy the model
predictor = model.deploy(1, real_time_inference_instance_type, endpoint_name=model_name)
```

Once the endpoint has been created, you will be able to perform real-time inference.

#### B. Prepare input file for performing real-time inference

In this step, we will download class_id_to_label_mapping from an S3 bucket. The mapping file has been downloaded from [TensorFlow](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt). [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).

```
s3_bucket = f"jumpstart-cache-prod-{region}"
key_prefix = "inference-notebook-assets"

def download_from_s3(key_filenames):
    for key_filename in key_filenames:
        s3.download_file(s3_bucket, f"{key_prefix}/{key_filename}", key_filename)

img_jpg = "Naxos_Taverna.jpg"

#Download image
download_from_s3(key_filenames=[img_jpg])

#Mapping from model predictions to class labels
class_id_to_label = {"1": "person", "2": "bicycle", "3": "car", "4": "motorcycle", "5": "airplane", "6": "bus",
                     "7": "train", "8": "truck", "9": "boat", "10": "traffic light", "11": "fire hydrant",
                     "13": "stop sign", "14": "parking meter", "15": "bench", "16": "bird", "17": "cat",
                     "18": "dog", "19": "horse", "20": "sheep", "21": "cow", "22": "elephant", "23": "bear",
                     "24": "zebra", "25": "giraffe", "27": "backpack", "28": "umbrella", "31": "handbag",
                     "32": "tie", "33": "suitcase", "34": "frisbee", "35": "skis", "36": "snowboard",
                     "37": "sports ball", "38": "kite", "39": "baseball bat", "40": "baseball glove",
                     "41": "skateboard", "42": "surfboard", "43": "tennis racket", "44": "bottle",
                     "46": "wine glass", "47": "cup", "48": "fork", "49": "knife", "50": "spoon", "51": "bowl",
                     "52": "banana", "53": "apple", "54": "sandwich", "55": "orange", "56": "broccoli",
                     "57": "carrot", "58": "hot dog", "59": "pizza", "60": "donut", "61": "cake", "62": "chair",
                     "63": "couch", "64": "potted plant", "65": "bed", "67": "dining table", "70": "toilet",
                     "72": "tv", "73": "laptop", "74": "mouse", "75": "remote", "76": "keyboard",
                     "77": "cell phone", "78": "microwave", "79": "oven", "80": "toaster", "81": "sink",
                     "82": "refrigerator", "84": "book", "85": "clock", "86": "vase", "87": "scissors",
                     "88": "teddy bear", "89": "hair drier", "90": "toothbrush"}
```

#### C. Query the endpoint that you have created with the opened image

```
#perform_inference method performs inference on the endpoint and returns the predictions.
def perform_inference(input_img):
    response = runtime.invoke_endpoint(EndpointName=model_name,
                                       ContentType=content_type,
                                       Body=input_img)
    model_predictions = json.loads(response['Body'].read())
    return model_predictions

with open(img_jpg, 'rb') as file:
    input_img = file.read()

model_predictions = perform_inference(input_img)

result = {key: np.array(value)[np.newaxis, ...]
if isinstance(value, list) else np.array([value]) for key, value in model_predictions['predictions'][0].items()} ``` #### D. Display model predictions as bounding boxes on the input image ``` colors = list(ImageColor.colormap.values()) image_pil = Image.open(img_jpg) image_np = np.array(image_pil) plt.figure(figsize=(20,20)) ax = plt.axes() ax.imshow(image_np) classes = [class_id_to_label[str(int(index))] for index in result["detection_classes"][0]] bboxes, confidences = result["detection_boxes"][0], result["detection_scores"][0] for idx in range(20): if confidences[idx] < 0.3: break ymin, xmin, ymax, xmax = bboxes[idx] im_width, im_height = image_pil.size left, right, top, bottom = xmin * im_width, xmax * im_width, ymin * im_height, ymax * im_height x, y = left, bottom color = colors[hash(classes[idx]) % len(colors)] rect = patches.Rectangle((left, bottom), right-left, top-bottom, linewidth=3, edgecolor=color, facecolor='none') ax.add_patch(rect) ax.text(left, top, "{} {:.0f}%".format(classes[idx], confidences[idx]*100), bbox=dict(facecolor='white', alpha=0.5)) ``` #### D. Delete the endpoint Now that you have successfully performed a real-time inference, you do not need the endpoint any more. You can terminate the endpoint to avoid being charged. ``` model.sagemaker_session.delete_endpoint(model_name) model.sagemaker_session.delete_endpoint_config(model_name) ``` ### 3. Perform batch inference In this section, you will perform batch inference using multiple input payloads together. If you are not familiar with batch transform, and want to learn more, see [How to run a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html) ``` #upload the batch-transform job input files to S3 transform_input_key_prefix = 'object-detection-model-transform-input' transform_input = sagemaker_session.upload_data(img_jpg, key_prefix=transform_input_key_prefix) print("Transform input uploaded to " + transform_input) #Run the batch-transform job transformer = model.transformer(1, batch_transform_inference_instance_type) transformer.transform(transform_input, content_type=content_type) transformer.wait() # output is available on following path transformer.output_path ``` ### 4. Clean-up #### A. Delete the model ``` model.delete_model() ``` #### B. Unsubscribe to the listing (optional) If you would like to unsubscribe to the model package, follow these steps. Before you cancel the subscription, ensure that you do not have any [deployable model](https://console.aws.amazon.com/sagemaker/home#/models) created from the model package or using the algorithm. Note - You can find this information by looking at the container name associated with the model. **Steps to unsubscribe to product from AWS Marketplace**: 1. Navigate to __Machine Learning__ tab on [__Your Software subscriptions page__](https://aws.amazon.com/marketplace/ai/library?productType=ml&ref_=mlmp_gitdemo_indust) 2. Locate the listing that you want to cancel the subscription for, and then choose __Cancel Subscription__ to cancel the subscription.
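For completeness, the batch-transform output written to `transformer.output_path` in section 3 can be fetched and parsed much like the real-time response. A sketch: the `<input name>.out` key follows the usual batch-transform naming convention, and the JSON structure is assumed to match the real-time output.

```
import json

# transformer.output_path has the form s3://<bucket>/<prefix>
output_bucket, output_prefix = transformer.output_path.replace("s3://", "").split("/", 1)
obj = s3.get_object(Bucket=output_bucket, Key=f"{output_prefix}/{img_jpg}.out")
batch_predictions = json.loads(obj["Body"].read())
```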
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title"><b>The Knapsack Problem</b></span> by <a xmlns:cc="http://creativecommons.org/ns#" href="http://mate.unipv.it/gualandi" property="cc:attributionName" rel="cc:attributionURL">Stefano Gualandi</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/mathcoding/opt4ds" rel="dct:source">https://github.com/mathcoding/opt4ds</a>. **NOTE:** Run the following script whenever running this script on a Google Colab. ``` import shutil import sys import os.path if not shutil.which("pyomo"): !pip install -q pyomo assert(shutil.which("pyomo")) if not (shutil.which("glpk") or os.path.isfile("glpk")): if "google.colab" in sys.modules: !apt-get install -y -qq glpk-utils else: try: !conda install -c conda-forge glpk except: pass ``` # $n$-Queens Problem The $n$-Queens puzzle is the problem of placing eight chess queens on an $n \times n$ chessboard so that no two queens threaten each other; thus, a solution requires that no two queens share the same row, column, or diagonal (source: [wikipedia](https://en.wikipedia.org/wiki/Eight_queens_puzzle)). A solution exists for all natural numbers n with the exception of $n = 2$ and $n = 3$. **Example:** For $n=8$, we have the following solution: ``` 1 . . . . . Q . . 2 . . . Q . . . . 3 . . . . . . Q . 4 Q . . . . . . . 5 . . . . . . . Q 6 . Q . . . . . . 7 . . . . Q . . . 8 . . Q . . . . . a b c d e f g h ``` ## Integer Linear Programming Model The $n$-Queens problem can be formalized with the following **ILP** model. **Data:** Size of the board $n\times n$. Let $I=:\{1,\dots,n\}$ a set of indices. **Decision Variables:** The variable $x_{ij} \in \{0,1\}$ is equal to 1 if we place a queen in position $(i,j)$ on the chessboard. **Objective function:** Since the problem is a feasibility problem, we can set the objective function equal to any constant value. **Constraints:** We need the following linear constraints, which encode the puzzle rules: 1. Each queen appears once per row: $$ \sum_{j \in I} x_{ij} = 1, \forall i \in I $$ 2. Each queen appears once per column: $$ \sum_{i \in I} x_{ij} = 1, \forall j \in I $$ 3. Each queen appears once per main diagonals: $$ \sum_{(i,j) \in D_k} x_{ij} \leq 1, D_k \mbox{ main diagonals} $$ 4. Each queen appears once per off-diagonals: $$ \sum_{(i,j) \in O_k} x_{ij} \leq 1, O_k \mbox{ off diagonals} $$ ### Main Diagonals $D_k$ Since we need to specify the pairs of indices that define as a function of $n$, we first defined the following nested loop: ``` n = 5 for j in range(-n+2,n-1): for i in range(1, n+1): if 0 < j+i <= n: print(i, j+i, end='\t') else: print(' ', end='\t') print() ``` ### Off Diagonals $_k$ Similarly, we can define the off diagonals as follows: ``` for i in reversed(range(-n+3, n)): for j in range(1, n): if 0 < n - j+i <= n: print(j, n-j+i, end='\t') else: print(' ', end='\t') print() ``` ### Full Model defined in Pyomo If we put all the definitions together, we can solve the $n$-Queens problem with the script below. 
Please note the following Pyomo syntax, used to define variable $x_{ij}$ over the [RangeSet](https://pyomo.readthedocs.io/en/stable/library_reference/aml/index.html#pyomo.environ.RangeSet) sets $I$ and $J$:

```
model.I = RangeSet(1, n)
model.J = RangeSet(1, n)

model.x = Var(model.I, model.J, within=Binary)
```

Notice also the syntax used to define the row and column constraints, which uses a `lambda` function to define the constraint rules:

```
model.row = Constraint(model.I, rule = lambda mod, i: sum(mod.x[i,j] for j in mod.J) == 1)
```

Finally, to define the main and off diagonals, we use the [ConstraintList](https://pyomo.readthedocs.io/en/stable/working_models.html) class:

```
model.mainD = ConstraintList()
#...
model.mainD.add( expr <= 1 )
```

The complete Pyomo script is as follows.

```
# Import the libraries
from pyomo.environ import ConcreteModel, Var, Objective, Constraint, SolverFactory
from pyomo.environ import maximize, Binary, RangeSet, ConstraintList

n = 8

# Create concrete model
model = ConcreteModel()

model.I = RangeSet(1, n)
model.J = RangeSet(1, n)

# Variables
model.x = Var(model.I, model.J, within=Binary)

# Objective Function: constant, since this is a feasibility problem
model.obj = Objective(expr = n, sense = maximize)

# 1. Row constraints
def VincoloRighe(mod, i):
    return sum(mod.x[i,j] for j in mod.J) == 1

model.row = Constraint(model.I, rule = VincoloRighe)

# 2. Column constraints
model.column = Constraint(model.J, rule = lambda mod, j: sum(mod.x[i,j] for i in mod.I) == 1)

# 3. Main Diagonal constraints
model.mainD = ConstraintList()
# Build the list of possible pairs
for j in range(-n+2,n-1):
    expr = 0
    for i in model.I:
        if 0 < j+i <= n:
            expr += model.x[i, j+i]
    model.mainD.add( expr <= 1 )

# 4. Off Diagonal constraints
model.offD = ConstraintList()
# Build the list of possible pairs
for i in range(-n+3,n+1):
    expr = 0
    for j in model.J:
        if 0 < n-j+i <= n:
            expr += model.x[j, n-j+i]
    model.offD.add( expr <= 1 )
```

To solve the model, we use a solver factory, specifying the GLPK solver, and we inspect the solver **status** (infeasible, unbounded, or optimal).

```
# Solve the model
sol = SolverFactory('glpk').solve(model)

# Basic info about the solution process
for info in sol['Solver']:
    print(info)
```

We inspect the optimal decision variables (only the positive ones).

```
# Report solution value
print("Optimal solution value: z =", model.obj())
print("Decision variables:")
for i in model.I:
    for j in model.J:
        if model.x[i,j]() > 0:
            print("x({},{}) = {}".format(i, j, model.x[i,j]()))
```

And finally, we print a solution on a simplified chessboard $n\times n$.

```
print('\nChessboard Solution:')
for i in model.I:
    for j in model.J:
        if model.x[i,j]() > 0:
            print('Q', end=' ')
        else:
            print('.', end=' ')
    print()
```

## Plotting a solution with a Chessboard

```
# CREDIT: Solution originally appeared on Stackoverflow at:
# https://stackoverflow.com/questions/60608055/insert-queen-on-a-chessboard-with-pyplot
def PlotSolution(n, x, size=6):
    import matplotlib.pyplot as plt
    import numpy as np

    chessboard = np.zeros((n, n))
    chessboard[1::2,0::2] = 1
    chessboard[0::2,1::2] = 1

    plt.figure(figsize=(size, size))
    plt.imshow(chessboard, cmap='binary')

    for i, j in x:
        if x[i,j]() > 0:
            plt.text(i-1, j-1, '♕', color='darkorange', fontsize=56*size/n,
                     fontweight='bold', ha='center', va='center')

    plt.xticks([])
    plt.yticks([])
    plt.show()

PlotSolution(n, model.x)
```
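As a quick sanity check on the optimizer output (not part of the original notebook), we can verify a placement directly in plain Python: collect the cells where the solver placed a queen and confirm that no row, column, or diagonal is used twice. A minimal sketch, assuming the model above has been solved:

```
def check_queens(positions, n):
    """Return True if the 1-based (row, col) positions form a valid n-Queens placement."""
    assert len(positions) == n, "expected exactly n queens"
    rows = {i for i, _ in positions}
    cols = {j for _, j in positions}
    diag = {i - j for i, j in positions}   # cells on the same main diagonal share i - j
    anti = {i + j for i, j in positions}   # cells on the same off-diagonal share i + j
    return len(rows) == len(cols) == len(diag) == len(anti) == n

# Collect the cells where the solver placed a queen
queens = [(i, j) for i in model.I for j in model.J if model.x[i, j]() > 0.5]
print("Valid placement:", check_queens(queens, n))
```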
true
code
0.400222
null
null
null
null
# Content: 1. [Definitions](#1.-Definitions) 2. [The root finding problem](#2.-The-root-finding-problem) 3. [Fixed point iteration](#3.-Fixed-point-iteration) >3.1 [The cobweb diagram](#3.1-The-cobweb-diagram) >3.2 [Fixed point iteration theorem](#3.2-Fixed-point-iteration-theorem) >3.3 [The code](#3.3-The-code) 4. [Bisection method](#4.-Bisection-method) # 1. Definitions ![board%20work%20-32.jpg](../boardwork/board%20work%20-32.jpg) [Weierstrass function](https://en.wikipedia.org/wiki/Weierstrass_function) is a peculiar function. It is continuous on the real number line but not differentiable anywhere. ``` import numpy as np import matplotlib.pyplot as plt def weierstrass(a,b,M,x): val = 0.0 for n in range(0,M): val = val + a**n * np.cos(b**n*np.pi*x) return val x = np.linspace(-2,2,1000) # 1000 points between -2 and +2 a=0.5 b=3.0 N=x.size y=np.zeros(N) M=1 for i in range(N): y[i]=weierstrass(a,b,M,x[i]) plt.plot(x, y, 'b-', label='M=1') plt.title('Weierstrass function, M=1') plt.legend() plt.show() M=3 for i in range(N): y[i]=weierstrass(a,b,M,x[i]) plt.plot(x, y, 'b-', label='M=3') plt.title('Weierstrass function, M=3') plt.legend() plt.show() M=10 for i in range(N): y[i]=weierstrass(a,b,M,x[i]) plt.plot(x, y, 'b-', label='M=10') plt.title('Weierstrass function, M=10') plt.legend() plt.show() ``` --- Homework-16: Find examples for polynomial, rational, trigonometric, exponential and logarithmic functions that are in $C^\infty[{\bf R}],~{\rm where}~{\bf R}$ is the real number line. --- ## 2. The root-finding problem ![board%20work%20-33.jpg](../boardwork/board%20work%20-33.jpg) ## 3. Fixed point iteration ![board%20work%20-34.jpg](../boardwork/board%20work%20-34.jpg) ``` import numpy as np def f(x): val=x-np.sqrt(10.0/x) return val def g(x): val=np.sqrt(10.0/x) return val x=1 # initial guess, x0 dx=x i=0 while dx > 1e-3: dx=np.abs(x-g(x)) print('Iteration: ',i,' x:',x,' g(x):',g(x),' f(x): ', f(x)) x=g(x) i=i+1 print('Exact root is x:',np.power(10.0,1.0/3.0)) ``` Here is another elegant way to print the output ``` x=1.0 for i in range(0,15): gx=g(x) fx=f(x) fstring=(f'''Iteration={i:5d} x={x:10.4f} g(x)={gx:10.4f} f(x)={np.abs(fx):10.4f}''') # using f-string print(fstring) x=g(x) out=(f'''Exact root is x={np.power(10.0,1.0/3.0):10.4f}''') print(out) #other way for formatted print #mynumber=3.14 #print('{:10.8f}'.format(mynumber)) ``` ### 3.1 The cobweb diagram ![board%20work%20-35.jpg](../boardwork/board%20work%20-35.jpg) ![board%20work%20-36.jpg](../boardwork/board%20work%20-36.jpg) ![board%20work%20-37.jpg](../boardwork/board%20work%20-37.jpg) ``` def g_fn(x): val=np.sqrt(10.0/x) return val N=15 x=np.zeros(N,float) g=np.zeros(N,float) x0=1.0 # initial guess Ni=10 for i in range(0,Ni): x[i]=x0 g[i]=g_fn(x0) x0=g[i] #print(x,g) import numpy as np import matplotlib.pyplot as plt fig = plt.figure() # comment if square plot is not needed ax = fig.add_subplot(111) # comment if square plot is not needed plt.xlim(0, 4) plt.ylim(0, 4) x_grids = np.linspace(0,4,100) N=x_grids.size g_grids=np.zeros(N) for i in range(N): g_grids[i]=g_fn(x_grids[i]) plt.plot(x_grids,x_grids,'k-',label='x') plt.plot(x_grids,g_grids,'b-',label='g(x)') xval=[x[0],x[0]] gval=[x[0],g[0]] plt.plot(xval,gval) plt.grid() for i in range(0,6): # horizontal line, same y-value xval=[x[i],g[i]] gval=[g[i],g[i]] plt.plot(xval,gval) # vertical line, same x-value xval=[g[i],x[i+1]] gval=[g[i],g[i+1]] plt.plot(xval,gval) ax.set_aspect('equal', adjustable='box') # comment if square plot is not needed plt.title('Cobweb diagram for 
$x=\sqrt{10/x}$') plt.legend() plt.show() ``` ### Let's try another problem: $x - 1/x^2 = 0;~g(x)=1/x^2;~x_0 = 0.1$ ``` import numpy as np def g_fn(x): val=1.0/x**2 return val def f_fn(x): val=x-1.0/x**2 return val x=0.1 for i in range(0,4): gx=g_fn(x) fx=f_fn(x) fstring=(f'''Iteration={i:5d} x={x:10.4f} g(x)={gx:10.4f} f(x)={np.abs(fx):10.4f}''') # using f-string print(fstring) x=g_fn(x) ``` Diverges! ### 3.2 Fixed point iteration theorem ![board%20work%20-38.jpg](../boardwork/board%20work%20-38.jpg) ![board%20work%20-39.jpg](../boardwork/board%20work%20-39.jpg) ![board%20work%20-40.jpg](../boardwork/board%20work%20-40.jpg) ![board%20work%20-41.jpg](../boardwork/board%20work%20-41.jpg) ![board%20work%20-42.jpg](../boardwork/board%20work%20-42.jpg) ![board%20work%20-43.jpg](../boardwork/board%20work%20-43.jpg) ### 3.3 The code ``` import numpy as np # fn is the g(x) in x = g(x) that we want to solve # x is the initial guess, x0 # xthresh is convergence thershold # maxeval - maximum number of evaluation of fn # iprint control printing, iprint = 1 for extra output def fixedpoint(fn, x, xthresh, maxeval, iprint): if iprint == 1: print('#iter x g(x) dx') ieval=0 g=fn(x) ieval=ieval+1 dx=np.abs(x-g) iiter=0 while dx > xthresh: g=fn(x) ieval=ieval+1 dx=np.abs(x-g) if iprint == 1: print('{:5d}{:15.6e}{:15.6e}{:15.6e}'.format(iiter,x, g, dx)) if ieval >= maxeval: print('Exiting fixed-point iteration, maximum function evaluations reached') break x=g iiter=iiter+1 return x print('Exiting fixed-point iteration, convergence reached') def fn_g(x): val=np.sqrt(10.0/x) return val x0 = 1.0 xthresh = 1E-5 maxeval = 100 iprint=1 x = fixedpoint(fn_g, x0, xthresh, maxeval,iprint) print('The solution is: ',x) ``` ### Let's try another problem: $\exp(-x) + x/5 - 1 = 0$ Let's look at the graphical solution by plotting the function $f(x)$ and see where it takes the value zero. ``` import numpy as np import matplotlib.pyplot as plt def f(x): val=np.exp(-x)+x/5.0-1 return val xmin=-5.0 xmax=10.0 plt.xlim(xmin, xmax) plt.ylim(-3, 10) x = np.linspace(xmin,xmax,100) N=x_grids.size y=np.zeros(N) for i in range(N): y[i]=f(x[i]) plt.plot(x,x*0,'k-') plt.plot(x,y,'b-') plt.grid() plt.show() ``` There are two roots for this equation. One at 0.0 and another near 5.0. There are two ways of rearranging the equation to apply the fixed-point iteration $x_{n+1}=g(x_n)$. * Option-1: $g_1(x)=5\left[ 1- \exp(-x) \right]$ * Option-2: $g_2(x)=-\log\left[ 1 - x/5 \right]$ ``` def g1(x): val=5 * ( 1 - np.exp(-x) ) return val x0 = 2 # somewhere in between both the solutions maxeval = 20 xthresh = 0.0001 iprint=1 x = fixedpoint(g1, x0, xthresh, maxeval,iprint) print('The solution is: ',x) def g2(x): val=-np.log(1-x/5.0) return val x0 = 2.0 # somewhere in between both the solutions maxeval = 10 xthresh = 0.0001 iprint=1 x = fixedpoint(g2, x0, xthresh, maxeval,iprint) print('The solution is: ',x) ``` --- Homework-17: $For~the~above~example,~using~the~fixed~point~convergence~relation~explain~why~using~g_1(x)~results~in~the~solution~x^*=4.965~while~g_2(x)~results~in~x^*=0.0.~In~both~cases,~use~x_0=2.0~as~the~initial~guess.$ --- ## 4. 
Bisection method

![board%20work%20-44.jpg](../boardwork/board%20work%20-44.jpg)
![board%20work%20-45.jpg](../boardwork/board%20work%20-45.jpg)
![board%20work%20-46.jpg](../boardwork/board%20work%20-46.jpg)

```
import numpy as np

def bisection(fn, a0, b0, xthresh, maxeval, iprint):

    if iprint == 1:
        print('#iter          a              b              x              dx             f(x)           f(b)')

    ieval=0
    iiter=1

    a=a0
    b=b0
    dx = abs(a-b)

    while dx > xthresh or iiter < 10:

        x = (a+b)/2.0
        dx = abs(a-b)
        fx = fn(x)
        fb = fn(b)

        if abs(fb) < xthresh:  # handle an exception
            print('The upper limit seems to be a root. Stopping program.')
            x=b
            break

        if iprint == 1:
            print('{:5d}{:15.6e}{:15.6e}{:15.6e}{:15.6e}{:15.6e}{:15.6e}'.format(iiter, a, b, x, dx,fx,fb))

        if fx*fb > 0:
            b = x
        else:
            a = x

        ieval=ieval+2

        if ieval >= maxeval:
            print('Exiting bisection, maximum function evaluations reached')
            break

        iiter=iiter+1

    print('Exiting bisection, convergence reached')
    return x

def fn_f(x):
    val=np.exp(-x)+x/5.0-1
    return val

a = -25.0
b = 30.0
maxeval = 100
xthresh = 0.0001
iprint=1

x = bisection(fn_f, a, b, xthresh, maxeval,iprint)

print('The solution is: ',x)

def fn_f(x):
    val=(x-1)**2
    return val

a = -10
b = 1
maxeval = 100
xthresh = 0.0001
iprint=1

x = bisection(fn_f, a, b, xthresh, maxeval,iprint)

print('The solution is: ',x)
```
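As a sanity check on the home-made root finders above (not part of the original notebook), the same roots can be cross-checked against SciPy's bracketing solver, assuming SciPy is available alongside NumPy:

```
import numpy as np
from scipy.optimize import brentq

# f(x) = exp(-x) + x/5 - 1 has a nonzero root near x = 4.965
f = lambda x: np.exp(-x) + x/5.0 - 1
print('brentq root:', brentq(f, 1.0, 10.0))   # bracket chosen to exclude the trivial root at x = 0

# g(x) = x - sqrt(10/x) = 0 has the root 10**(1/3)
g = lambda x: x - np.sqrt(10.0/x)
print('brentq root:', brentq(g, 1.0, 5.0), ' exact:', 10.0**(1.0/3.0))
```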
true
code
0.423935
null
null
null
null
# Text Data Explanation Benchmarking: Emotion Multiclass Classification This notebook demonstrates how to use the benchmark utility to benchmark the performance of an explainer for text data. In this demo, we showcase explanation performance for partition explainer on an Emotion Multiclass Classification model. The metrics used to evaluate are "keep positive" and "keep negative". The masker used is Text Masker. The new benchmark utility uses the new API with MaskedModel as wrapper around user-imported model and evaluates masked values of inputs. ``` import copy import pandas as pd import numpy as np import matplotlib.pyplot as plt from transformers import AutoTokenizer, AutoModelForSequenceClassification import shap.benchmark as benchmark import shap import scipy as sp import nlp import torch pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) pd.set_option('max_colwidth', None) ``` ### Load Data and Model ``` train, test = nlp.load_dataset("emotion", split = ["train", "test"]) data={'text':train['text'], 'emotion':train['label']} data = pd.DataFrame(data) tokenizer = AutoTokenizer.from_pretrained("nateraw/bert-base-uncased-emotion",use_fast=True) model = AutoModelForSequenceClassification.from_pretrained("nateraw/bert-base-uncased-emotion") ``` ### Class Label Mapping ``` # set mapping between label and id id2label = model.config.id2label label2id = model.config.label2id labels = sorted(label2id, key=label2id.get) ``` ### Define Score Function ``` def f(x): tv = torch.tensor([tokenizer.encode(v, padding='max_length', max_length=128,truncation=True) for v in x]) attention_mask = (tv!=0).type(torch.int64) outputs = model(tv,attention_mask=attention_mask)[0].detach().numpy() scores = (np.exp(outputs).T / np.exp(outputs).sum(-1)).T val = sp.special.logit(scores) return val ``` ### Create Explainer Object ``` explainer = shap.Explainer(f,tokenizer,output_names=labels) ``` ### Run SHAP Explanation ``` shap_values = explainer(data['text'][0:20]) ``` ### Define Metrics (Sort Order & Perturbation Method) ``` sort_order = 'positive' perturbation = 'keep' ``` ### Benchmark Explainer ``` sequential_perturbation = benchmark.perturbation.SequentialPerturbation(explainer.model, explainer.masker, sort_order, perturbation) xs, ys, auc = sequential_perturbation.model_score(shap_values, data['text'][0:20]) sequential_perturbation.plot(xs, ys, auc) sort_order = 'negative' perturbation = 'keep' sequential_perturbation = benchmark.perturbation.SequentialPerturbation(explainer.model, explainer.masker, sort_order, perturbation) xs, ys, auc = sequential_perturbation.model_score(shap_values, data['text'][0:20]) sequential_perturbation.plot(xs, ys, auc) ```
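To compare both metrics side by side rather than running the cells one at a time, the same calls can be wrapped in a loop. This is a small convenience sketch, not part of the original notebook, reusing the `explainer`, `shap_values`, and `data` objects defined above:

```
results = {}
for sort_order in ['positive', 'negative']:
    sp = benchmark.perturbation.SequentialPerturbation(
        explainer.model, explainer.masker, sort_order, 'keep')
    xs, ys, auc = sp.model_score(shap_values, data['text'][0:20])
    results[('keep', sort_order)] = auc
    sp.plot(xs, ys, auc)

# Summarize the benchmark AUCs in a small table
print(pd.DataFrame([{'perturbation': p, 'sort_order': s, 'auc': a}
                    for (p, s), a in results.items()]))
```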
true
code
0.592018
null
null
null
null
## QE methods and QE_utils In this tutorial, we will explore various methods needed to handle Quantum Espresso (QE) calculations - to run them, prepare input, and extract output. All that will be done with the help of the **QE_methods** and **QE_utils** modules, which contains the following functions: **QE_methods** * cryst2cart(a1,a2,a3,r) * [Topic 2](#topic-2) read_qe_schema(filename, verbose=0) * [Topic 3](#topic-3) read_qe_index(filename, orb_list, verbose=0) * [Topic 4](#topic-4) read_qe_wfc_info(filename, verbose=0) * [Topic 9](#topic-9) read_qe_wfc_grid(filename, verbose=0) * [Topic 5](#topic-5) read_qe_wfc(filename, orb_list, verbose=0) * read_md_data(filename) * read_md_data_xyz(filename, PT, dt) * read_md_data_xyz2(filename, PT) * read_md_data_cell(filename) * out2inp(out_filename,templ_filename,wd,prefix,t0,tmax,dt) * out2pdb(out_filename,T,dt,pdb_prefix) * out2xyz(out_filename,T,dt,xyz_filename) * xyz2inp(out_filename,templ_filename,wd,prefix,t0,tmax,dt) * get_QE_normal_modes(filename, verbosity=0) * [Topic 1](#topic-1) run_qe(params, t, dirname0, dirname1) * read_info(params) * read_all(params) * read_wfc_grid(params) **QE_utils** * get_value(params,key,default,typ) * split_orbitals_energies(C, E) * [Topic 7](#topic-7) merge_orbitals(Ca, Cb) * post_process(coeff, ene, issoc) * [Topic 6](#topic-6) orthogonalize_orbitals(C) * [Topic 8](#topic-8) orthogonalize_orbitals2(Ca,Cb) ``` import os import sys import math import copy if sys.platform=="cygwin": from cyglibra_core import * elif sys.platform=="linux" or sys.platform=="linux2": from liblibra_core import * #from libra_py import * from libra_py import units from libra_py import QE_methods from libra_py import QE_utils from libra_py import scan from libra_py import hpc_utils from libra_py import data_read from libra_py import data_outs from libra_py import data_conv from libra_py.workflows.nbra import step2 import py3Dmol # molecular visualization import matplotlib.pyplot as plt # plots %matplotlib inline plt.rc('axes', titlesize=24) # fontsize of the axes title plt.rc('axes', labelsize=20) # fontsize of the x and y labels plt.rc('legend', fontsize=20) # legend fontsize plt.rc('xtick', labelsize=16) # fontsize of the tick labels plt.rc('ytick', labelsize=16) # fontsize of the tick labels plt.rc('figure.subplot', left=0.2) plt.rc('figure.subplot', right=0.95) plt.rc('figure.subplot', bottom=0.13) plt.rc('figure.subplot', top=0.88) colors = {} colors.update({"11": "#8b1a0e"}) # red colors.update({"12": "#FF4500"}) # orangered colors.update({"13": "#B22222"}) # firebrick colors.update({"14": "#DC143C"}) # crimson colors.update({"21": "#5e9c36"}) # green colors.update({"22": "#006400"}) # darkgreen colors.update({"23": "#228B22"}) # forestgreen colors.update({"24": "#808000"}) # olive colors.update({"31": "#8A2BE2"}) # blueviolet colors.update({"32": "#00008B"}) # darkblue colors.update({"41": "#2F4F4F"}) # darkslategray clrs_index = ["11", "21", "31", "41", "12", "22", "32", "13","23", "14", "24"] ``` First, lets prepare the working directories and run simple SCF calculations to generate the output files ``` PWSCF = os.environ['PWSCF62'] # Setup the calculations params = {} # I run the calculations on laptop, so no BATCH system params["BATCH_SYSTEM"] = None # The number of processors to use params["NP"] = 1 # The QE executable params["EXE"] = F"{PWSCF}/pw.x" # The executable to generate the wavefunction files params["EXE_EXPORT"] = F"{PWSCF}/pw_export.x" #"/mnt/c/cygwin/home/Alexey-user/Soft/espresso/bin/pw_export.x" # The type of 
the calculations to be performed - in this case only a single SCF with spin-polarization params["nac_method"] = 1 # The prefix of the input file params["prefix0"] = "x0.scf" # Working directory - where all stuff happen params["wd"] = os.getcwd()+"/wd" # Remove the previous results and temporary working directory from the previous runs os.system(F"rm -r {params['wd']}") os.system(F"mkdir {params['wd']}") # Copy the input files into the working directory # also, notice how the SCF input file name has been changed os.system(F"cp x0.scf.in {params['wd']}/x0.scf.0.in") os.system(F"cp x0.exp.in {params['wd']}") os.system(F"cp Li.pbe-sl-kjpaw_psl.1.0.0.UPF {params['wd']}") os.system(F"cp H.pbe-rrkjus_psl.1.0.0.UPF {params['wd']}") ``` <a name="topic-1"></a> ### 1. run_qe(params, t, dirname0, dirname1) Use it to actually run the calculations Comment this out if you have already done the calculations ``` help(QE_methods.run_qe) !pwd os.chdir("wd") QE_methods.run_qe(params, 0, "res", "res2") os.chdir("../") ``` <a name="topic-2"></a> ### 2. read_qe_schema(filename, verbose=0) Can be used to read the information about the completed run ``` pwd info = QE_methods.read_qe_schema("wd/res/x0.save/data-file-schema.xml", verbose=0) print(info) nat = info["nat"] R, F = info["coords"], info["forces"] for at in range(nat): print(F"Atom {at} \t {info['atom_labels'][at]} \t\ x={R.get(3*at+0):.5f}, y={R.get(3*at+1):.5f}, z={R.get(3*at+2):.5f}\ fx={F.get(3*at+0):.5f}, fy={F.get(3*at+1):.5f}, fz={F.get(3*at+2):.5f}") ``` <a name="topic-3"></a> ### 3. read_qe_index(filename, orb_list, verbose=0) Is analogous to **read_qe_schema** in many regards, it just extracts a bit different info, including orbital energies. One would also need to specify which energy levels we want to extract, so one would need that info beforehands. In this example, we have just 4 electrons, so: 1 - HOMO-1 2 - HOMO 3 - LUMO 4 - LUMO+1 Lets try just the 4 orbitals ``` info2, all_e = QE_methods.read_qe_index("wd/res/x0.export/index.xml", [1,2,3,4], verbose=1) print( info2) print(all_e) e_alp = all_e[0] e_bet = all_e[1] for i in range(4): print(F"E_{i}^alpha = {e_alp.get(i,i).real:12.8f} \t E_{i}^beta = {e_bet.get(i,i).real:12.8f}") ``` <a name="topic-4"></a> ### 4. read_qe_wfc_info(filename, verbose=0) Can be used to extract some descriptors of the wavefunctions produced ``` wfc_info1 = QE_methods.read_qe_wfc_info("wd/res/x0.export/wfc.1", verbose=1) wfc_info2 = QE_methods.read_qe_wfc_info("wd/res/x0.export/wfc.2", verbose=1) print(wfc_info1) print(wfc_info2) ``` <a name="topic-5"></a> ### 5. 
read_qe_wfc(filename, orb_list, verbose=0)

Can be used to read in the actual wavefunctions produced.

```
alpha = QE_methods.read_qe_wfc("wd/res/x0.export/wfc.1", [1,2,3,4], verbose=0)
beta = QE_methods.read_qe_wfc("wd/res/x0.export/wfc.2", [1,2,3,4], verbose=0)

print(alpha)
print(alpha.num_of_rows, alpha.num_of_cols)
print(beta)
print(beta.num_of_rows, beta.num_of_cols)
```

Orthogonality and normalization: below we can see that the MO overlaps <alpha(i)|alpha(j)> are almost orthonormal - the diagonal elements are correctly 1.0, but the off-diagonal elements are not quite 0.0. The same is true for <beta(i)|beta(j)>. However, there is no expectation of orthogonality or normalization across the two sets.

```
S_aa = alpha.H() * alpha
S_bb = beta.H() * beta
S_ab = alpha.H() * beta

def print_mat(X):
    nr, nc = X.num_of_rows, X.num_of_cols
    for i in range(nr):
        line = ""
        for j in range(nc):
            line = line + "%8.5f " % (X.get(i,j).real)
        print(line)

print("S_aa")
print_mat(S_aa)
print("S_bb")
print_mat(S_bb)
print("S_ab")
print_mat(S_ab)
```

<a name="topic-6"></a>
### 6. QE_utils.orthogonalize_orbitals(C)

Can be used to orthogonalize orbitals if they are not already orthogonal. So let's transform the alpha and beta orbitals such that they are now orthonormal within each set. The resulting orbitals are still not orthonormal across the two sets.

```
alp = QE_utils.orthogonalize_orbitals(alpha)
bet = QE_utils.orthogonalize_orbitals(beta)

S_aa = alp.H() * alp
S_bb = bet.H() * bet
S_ab = alp.H() * bet

print("S_aa")
print_mat(S_aa)
print("S_bb")
print_mat(S_bb)
print("S_ab")
print_mat(S_ab)
```

<a name="topic-7"></a>
### 7. QE_utils.merge_orbitals(Ca, Cb)

Sometimes (usually in the non-collinear case), we want to have a single set of orbitals (many are nearly doubly degenerate), not just alpha and beta components. We can prepare the single set from the spinor components using this function. In this example, we are just going to mimic non-collinear SOC calculations, pretending that the alpha and beta orbital sets are the spinor components.

```
C = QE_utils.merge_orbitals(alpha, beta)

S = C.H() * C
print_mat(S)
```

<a name="topic-8"></a>
### 8. QE_utils.orthogonalize_orbitals2(Ca, Cb)

This is a special orthogonalization procedure - the one for 2-component spinors. The inputs are assumed to be the components for each orbital. The orthogonalization works such that S_aa + S_bb = I.

```
alpha = QE_methods.read_qe_wfc("wd/res/x0.export/wfc.1", [1,2,3,4], verbose=0)
beta = QE_methods.read_qe_wfc("wd/res/x0.export/wfc.2", [1,2,3,4], verbose=0)

alp, bet = QE_utils.orthogonalize_orbitals2(alpha, beta)

S_aa = alp.H() * alp
S_bb = bet.H() * bet

print("S_aa")
print_mat(S_aa)
print("S_bb")
print_mat(S_bb)

print("S_aa + S_bb")
print_mat(S_aa + S_bb)

S_ab = alp.H() * bet
print("S_ab")
print_mat(S_ab)
```

<a name="topic-9"></a>
### 9. read_qe_wfc_grid(filename, verbose=0)

Can be used to read the grid points for the given PW representation.

```
G1 = QE_methods.read_qe_wfc_grid("wd/res/x0.export/grid.1", verbose=0)

print(len(G1))

for i in range(10):
    print(F"{i} \t {G1[i].x} \t {G1[i].y} \t {G1[i].z}")
```
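To complement the visual inspection with `print_mat`, here is a small helper (not part of the original tutorial) that reports the largest deviation of an overlap matrix from the identity. It only assumes the `num_of_rows`, `num_of_cols`, and `get(i,j)` accessors already used above, and that the returned elements expose `.real` and `.imag`:

```
def max_dev_from_identity(X):
    """Largest absolute deviation of a (complex) overlap matrix from the identity."""
    dev = 0.0
    for i in range(X.num_of_rows):
        for j in range(X.num_of_cols):
            z = X.get(i, j)
            target = 1.0 if i == j else 0.0
            dev = max(dev, ((z.real - target)**2 + z.imag**2) ** 0.5)
    return dev

# Example usage on the overlaps computed above
for name, S in [("S_aa", S_aa), ("S_bb", S_bb)]:
    print(F"{name}: max deviation from identity = {max_dev_from_identity(S):.3e}")
```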
true
code
0.334841
null
null
null
null
# Content-based recommender using Deep Structured Semantic Model An example of how to build a Deep Structured Semantic Model (DSSM) for incorporating complex content-based features into a recommender system. See [Learning Deep Structured Semantic Models for Web Search using Clickthrough Data](https://www.microsoft.com/en-us/research/publication/learning-deep-structured-semantic-models-for-web-search-using-clickthrough-data/). This example does not attempt to provide a datasource or train a model, but merely show how to structure a complex DSSM network. ``` import warnings import mxnet as mx from mxnet import gluon, nd, autograd, sym import numpy as np from sklearn.random_projection import johnson_lindenstrauss_min_dim # Define some constants max_user = int(1e5) title_vocab_size = int(3e4) query_vocab_size = int(3e4) num_samples = int(1e4) hidden_units = 128 epsilon_proj = 0.25 ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu() ``` ## Bag of words random projection A previous version of this example contained a bag of word random projection example, it is kept here for reference but not used in the next example. Random Projection is a dimension reduction technique that guarantees the disruption of the pair-wise distance between your original data point within a certain bound. What is even more interesting is that the dimension to project onto to guarantee that bound does not depend on the original number of dimension but solely on the total number of datapoints. You can see more explanation [in this blog post](http://jasonpunyon.com/blog/2017/12/02/fun-with-random-numbers-random-projection/) ``` proj_dim = johnson_lindenstrauss_min_dim(num_samples, epsilon_proj) print("To keep a distance disruption ~< {}% of our {} samples we need to randomly project to at least {} dimensions".format(epsilon_proj*100, num_samples, proj_dim)) class BagOfWordsRandomProjection(gluon.HybridBlock): def __init__(self, vocab_size, output_dim, random_seed=54321, pad_index=0): """ :param int vocab_size: number of element in the vocabulary :param int output_dim: projection dimension :param int ramdon_seed: seed to use to guarantee same projection :param int pad_index: index of the vocabulary used for padding sentences """ super(BagOfWordsRandomProjection, self).__init__() self._vocab_size = vocab_size self._output_dim = output_dim proj = self._random_unit_vecs(vocab_size=vocab_size, output_dim=output_dim, random_seed=random_seed) # we set the projection of the padding word to 0 proj[pad_index, :] = 0 self.proj = self.params.get_constant('proj', value=proj) def _random_unit_vecs(self, vocab_size, output_dim, random_seed): rs = np.random.RandomState(seed=random_seed) W = rs.normal(size=(vocab_size, output_dim)) Wlen = np.linalg.norm(W, axis=1) W_unit = W / Wlen[:,None] return W_unit def hybrid_forward(self, F, x, proj): """ :param nd or sym F: :param nd.NDArray x: index of tokens returns the sum of the projected embeddings of each token """ embedded = F.Embedding(x, proj, input_dim=self._vocab_size, output_dim=self._output_dim) return embedded.sum(axis=1) bowrp = BagOfWordsRandomProjection(1000, 20) bowrp.initialize() bowrp(mx.nd.array([[10, 50, 100], [5, 10, 0]])) ``` With padding: ``` bowrp(mx.nd.array([[10, 50, 100, 0], [5, 10, 0, 0]])) ``` # Content-based recommender / ranking system using DSSM For example in the search result ranking problem: You have users, that have performed text-based searches. They were presented with results, and selected one of them. Results are composed of a title and an image. 
Your positive examples will be the clicked items in the search results, and the negative examples are sampled from the non-clicked examples. The network will jointly learn embeddings for users and query text making up the "Query", title and image making the "Item" and learn how similar they are. After training, you can index the embeddings for your items and do a knn search with your query embeddings using the cosine similarity to return ranked items ``` proj_dim = 128 class DSSMRecommenderNetwork(gluon.HybridBlock): def __init__(self, query_vocab_size, proj_dim, max_user, title_vocab_size, hidden_units, random_seed=54321, p=0.5): super(DSSMRecommenderNetwork, self).__init__() with self.name_scope(): # User/Query pipeline self.user_embedding = gluon.nn.Embedding(max_user, proj_dim) self.user_mlp = gluon.nn.Dense(hidden_units, activation="relu") # Instead of bag of words, we use learned embeddings + stacked biLSTM average self.query_text_embedding = gluon.nn.Embedding(query_vocab_size, proj_dim) self.query_lstm = gluon.rnn.LSTM(hidden_units, 2, bidirectional=True) self.query_text_mlp = gluon.nn.Dense(hidden_units, activation="relu") self.query_dropout = gluon.nn.Dropout(p) self.query_mlp = gluon.nn.Dense(hidden_units, activation="relu") # Item pipeline # Instead of bag of words, we use learned embeddings + stacked biLSTM average self.title_embedding = gluon.nn.Embedding(title_vocab_size, proj_dim) self.title_lstm = gluon.rnn.LSTM(hidden_units, 2, bidirectional=True) self.title_mlp = gluon.nn.Dense(hidden_units, activation="relu") # You could use vgg here for example self.image_embedding = gluon.model_zoo.vision.resnet18_v2(pretrained=False).features self.image_mlp = gluon.nn.Dense(hidden_units, activation="relu") self.item_dropout = gluon.nn.Dropout(p) self.item_mlp = gluon.nn.Dense(hidden_units, activation="relu") def hybrid_forward(self, F, user, query_text, title, image): # Query user = self.user_embedding(user) user = self.user_mlp(user) query_text = self.query_text_embedding(query_text) query_text = self.query_lstm(query_text.transpose((1,0,2))) # average the states query_text = query_text.mean(axis=0) query_text = self.query_text_mlp(query_text) query = F.concat(user, query_text) query = self.query_dropout(query) query = self.query_mlp(query) # Item title_text = self.title_embedding(title) title_text = self.title_lstm(title_text.transpose((1,0,2))) # average the states title_text = title_text.mean(axis=0) title_text = self.title_mlp(title_text) image = self.image_embedding(image) image = self.image_mlp(image) item = F.concat(title_text, image) item = self.item_dropout(item) item = self.item_mlp(item) # Cosine Similarity query = query.expand_dims(axis=2) item = item.expand_dims(axis=2) sim = F.batch_dot(query, item, transpose_a=True) / (query.norm(axis=1) * item.norm(axis=1) + 1e-9).expand_dims(axis=2) return sim.squeeze(axis=2) network = DSSMRecommenderNetwork( query_vocab_size, proj_dim, max_user, title_vocab_size, hidden_units ) network.initialize(mx.init.Xavier(), ctx) # Load pre-trained vgg16 weights with network.name_scope(): network.image_embedding = gluon.model_zoo.vision.resnet18_v2(pretrained=True, ctx=ctx).features ``` It is quite hard to visualize the network since it is relatively complex but you can see the two-pronged structure, and the resnet18 branch ``` mx.viz.plot_network(network( mx.sym.var('user'), mx.sym.var('query_text'), mx.sym.var('title'), mx.sym.var('image')), shape={'user': (1,1), 'query_text': (1,30), 'title': (1,30), 'image': (1,3,224,224)}, 
node_attrs={"fixedsize":"False"}) ``` We can print the summary of the network using dummy data. We can see it is already training on 32M parameters! ``` user = mx.nd.array([[200], [100]], ctx) query = mx.nd.array([[10, 20, 0, 0, 0], [40, 50, 0, 0, 0]], ctx) # Example of an encoded text title = mx.nd.array([[10, 20, 0, 0, 0], [40, 50, 0, 0, 0]], ctx) # Example of an encoded text image = mx.nd.random.uniform(shape=(2,3, 224,224), ctx=ctx) # Example of an encoded image network.summary(user, query, title, image) network(user, query, title, image) ``` The output is the similarity, if we wanted to train it on real data, we would need to minimize the Cosine loss, 1 - cosine_similarity.
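The last sentence suggests minimizing `1 - cosine_similarity` for clicked (positive) pairs. Below is a minimal training-step sketch under that assumption - it is not part of the original example, it reuses the dummy tensors defined above, and it treats every pair as a positive; a real setup would also include sampled negatives and proper mini-batching:

```
from mxnet import autograd, gluon

trainer = gluon.Trainer(network.collect_params(), 'adam', {'learning_rate': 1e-3})

# One illustrative gradient step on the dummy batch defined above
label = mx.nd.ones((2, 1), ctx=ctx)   # pretend both (query, item) pairs were clicked

with autograd.record():
    sim = network(user, query, title, image)   # cosine similarity in [-1, 1]
    loss = label * (1 - sim)                   # push positive pairs towards sim = 1
loss.backward()
trainer.step(batch_size=2)

print("mean loss:", loss.mean().asscalar())
```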
true
code
0.75822
null
null
null
null
## Precision-Recall Curves in Multiclass For multiclass classification, we have 2 options: - determine a PR curve for each class. - determine the overall PR curve as the micro-average of all classes Let's see how to do both. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.datasets import load_wine from sklearn.ensemble import RandomForestClassifier from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn.multiclass import OneVsRestClassifier # to convert the 1-D target vector in to a matrix from sklearn.preprocessing import label_binarize from sklearn.metrics import precision_recall_curve from yellowbrick.classifier import PrecisionRecallCurve ``` ## Load data (multiclass) ``` # load data data = load_wine() data = pd.concat([ pd.DataFrame(data.data, columns=data.feature_names), pd.DataFrame(data.target, columns=['target']), ], axis=1) data.head() # target distribution: # multiclass and (fairly) balanced data.target.value_counts(normalize=True) # separate dataset into train and test X_train, X_test, y_train, y_test = train_test_split( data.drop(labels=['target'], axis=1), # drop the target data['target'], # just the target test_size=0.3, random_state=0) X_train.shape, X_test.shape # the target is a vector with the 3 classes y_test[0:10] ``` ## Train ML models The dataset we are using is very, extremely simple, so I am creating dumb models intentionally, that is few trees and very shallow for the random forests and few iterations for the logit. This is, so that we can get the most out of the PR curves by inspecting them visually. ### Random Forests The Random Forests in sklearn are not trained as a 1 vs Rest. So in order to produce a 1 vs rest probability vector for each class, we need to wrap this estimator with another one from sklearn: - [OneVsRestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.multiclass.OneVsRestClassifier.html) ``` # set up the model, wrapped by the OneVsRestClassifier rf = OneVsRestClassifier( RandomForestClassifier( n_estimators=10, random_state=39, max_depth=1, n_jobs=4, ) ) # train the model rf.fit(X_train, y_train) # produce the predictions (as probabilities) y_train_rf = rf.predict_proba(X_train) y_test_rf = rf.predict_proba(X_test) # note that the predictions are an array of 3 columns # first column: the probability of an observation of being of class 0 # second column: the probability of an observation of being of class 1 # third column: the probability of an observation of being of class 2 y_test_rf[0:10, :] pd.DataFrame(y_test_rf).sum(axis=1)[0:10] # The final prediction is that of the biggest probabiity rf.predict(X_test)[0:10] ``` ### Logistic Regression The Logistic regression supports 1 vs rest automatically though its multi_class parameter: ``` # set up the model logit = LogisticRegression( random_state=0, multi_class='ovr', max_iter=10, ) # train logit.fit(X_train, y_train) # obtain the probabilities y_train_logit = logit.predict_proba(X_train) y_test_logit = logit.predict_proba(X_test) # note that the predictions are an array of 3 columns # first column: the probability of an observation of being of class 0 # second column: the probability of an observation of being of class 1 # third column: the probability of an observation of being of class 2 y_test_logit[0:10, :] # The final prediction is that of the biggest probabiity logit.predict(X_test)[0:10] ``` ## Precision-Recall Curve ### Per class with Sklearn ``` # with label_binarize 
we transform the target vector # into a multi-label matrix, so that it matches the # outputs of the models # then we have 1 class per column y_test = label_binarize(y_test, classes=[0, 1, 2]) y_test[0:10, :] # now we determine the precision and recall at different thresholds # considering only the probability vector for class 2 and the true # target for class 2 # so we treat the problem as class 2 vs rest p, r, thresholds = precision_recall_curve(y_test[:, 2], y_test_rf[:, 2]) # precision values p # recall values r # threhsolds examined thresholds ``` Go ahead and examine the precision and recall for the other classes see how these values change. ``` # now let's do these for all classes and capture the results in # dictionaries, so we can plot the values afterwards # determine the Precision and recall # at various thresholds of probability # in a 1 vs all fashion, for each class precision_rf = dict() recall_rf = dict() # for each class for i in range(3): # determine precision and recall at various thresholds # in a 1 vs all fashion precision_rf[i], recall_rf[i], _ = precision_recall_curve( y_test[:, i], y_test_rf[:, i]) precision_rf # plot the curves for each class for i in range(3): plt.plot(recall_rf[i], precision_rf[i], label='class {}'.format(i)) plt.xlabel("recall") plt.ylabel("precision") plt.legend(loc="best") plt.title("precision vs. recall curve") plt.show() # and now for the logistic regression precision_lg = dict() recall_lg = dict() # for each class for i in range(3): # determine precision and recall at various thresholds # in a 1 vs all fashion precision_lg[i], recall_lg[i], _ = precision_recall_curve( y_test[:, i], y_test_logit[:, i]) plt.plot(recall_lg[i], precision_lg[i], label='class {}'.format(i)) plt.xlabel("recall") plt.ylabel("precision") plt.legend(loc="best") plt.title("precision vs. recall curve") plt.show() # and now, just because it is a bit difficult to compare # between models, we plot the PR curves class by class, # but the 2 models in the same plot # for each class for i in range(3): plt.plot(recall_lg[i], precision_lg[i], label='logit class {}'.format(i)) plt.plot(recall_rf[i], precision_rf[i], label='rf class {}'.format(i)) plt.xlabel("recall") plt.ylabel("precision") plt.legend(loc="best") plt.title("precision vs. recall curve for class{}".format(i)) plt.show() ``` We see that the Random Forest does a better job for all classes. ### Micro-average with sklearn In order to do this, we concatenate all the probability vectors 1 after the other, and so we do with the real values. ``` # probability vectors for all classes in 1-d vector y_test_rf.ravel() # see that the unravelled prediction vector has 3 times the size # of the origina target len(y_test), len(y_test_rf.ravel()) # A "micro-average": quantifying score on all classes jointly # for random forests precision_rf["micro"], recall_rf["micro"], _ = precision_recall_curve( y_test.ravel(), y_test_rf.ravel(), ) # for logistic regression precision_lg["micro"], recall_lg["micro"], _ = precision_recall_curve( y_test.ravel(), y_test_logit.ravel(), ) # now we plot them next to each other i = "micro" plt.plot(recall_lg[i], precision_lg[i], label='logit micro {}') plt.plot(recall_rf[i], precision_rf[i], label='rf micro {}') plt.xlabel("recall") plt.ylabel("precision") plt.legend(loc="best") plt.title("precision vs. 
recall curve for class{}".format(i))
plt.show()
```

## Yellowbrick

### Per class with Yellowbrick

https://www.scikit-yb.org/en/latest/api/classifier/prcurve.html

**Note:** In the cells below, we are passing to the Yellowbrick classes a model that is already fit. When we fit() the Yellowbrick class, it will check if the model is fit, in which case it will do nothing. If we pass a model that is not fit, and a multiclass target, Yellowbrick will wrap the model automatically with a 1 vs Rest classifier. Check Yellowbrick's documentation for more details.

```
visualizer = PrecisionRecallCurve(
    rf,
    per_class=True,
    cmap="cool",
    micro=False,
)

visualizer.fit(X_train, y_train)        # Fit the training data to the visualizer
visualizer.score(X_test, y_test)        # Evaluate the model on the test data
visualizer.show()                       # Finalize and show the figure

visualizer = PrecisionRecallCurve(
    logit,
    per_class=True,
    cmap="cool",
    micro=False,
)

visualizer.fit(X_train, y_train)        # Fit the training data to the visualizer
visualizer.score(X_test, y_test)        # Evaluate the model on the test data
visualizer.show()                       # Finalize and show the figure
```

### Micro-average with Yellowbrick

```
visualizer = PrecisionRecallCurve(
    rf,
    cmap="cool",
    micro=True,
)

visualizer.fit(X_train, y_train)        # Fit the training data to the visualizer
visualizer.score(X_test, y_test)        # Evaluate the model on the test data
visualizer.show()                       # Finalize and show the figure

visualizer = PrecisionRecallCurve(
    logit,
    cmap="cool",
    micro=True,
)

visualizer.fit(X_train, y_train)        # Fit the training data to the visualizer
visualizer.score(X_test, y_test)        # Evaluate the model on the test data
visualizer.show()                       # Finalize and show the figure
```

That's all for PR curves.
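To attach a single number to each of these curves (not part of the original notebook), we can compute the average precision with scikit-learn, reusing the binarized `y_test` and the probability matrices produced above:

```
from sklearn.metrics import average_precision_score

# Per-class average precision (a step-wise summary of the PR curve)
for i in range(3):
    ap_rf = average_precision_score(y_test[:, i], y_test_rf[:, i])
    ap_lg = average_precision_score(y_test[:, i], y_test_logit[:, i])
    print("class {}: rf AP = {:.3f}, logit AP = {:.3f}".format(i, ap_rf, ap_lg))

# Micro-averaged average precision across all classes
print("micro: rf AP = {:.3f}, logit AP = {:.3f}".format(
    average_precision_score(y_test, y_test_rf, average="micro"),
    average_precision_score(y_test, y_test_logit, average="micro"),
))
```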
true
code
0.740723
null
null
null
null
``` #default_exp dataset_torch ``` # dataset_torch > Module to load the slates dataset into a Pytorch Dataset and Dataloaders with default train/valid test splits. ``` #export import torch import recsys_slates_dataset.data_helper as data_helper from torch.utils.data import Dataset, DataLoader import torch import json import numpy as np import logging logging.basicConfig(format='%(asctime)s %(message)s', level='INFO') class SequentialDataset(Dataset): ''' A Pytorch Dataset for the FINN Recsys Slates Dataset. Attributes: data: [Dict] A dictionary with tensors of the dataset. First dimension in each tensor must be the batch dimension. Requires the keys "click" and "slate". Additional elements can be added. sample_candidate_items: [int] Number of negative item examples sampled from the item universe for each interaction. If positive, the dataset provide an additional dictionary item "allitem". Often also called uniform candidate sampling. See Eide et. al. 2021 for more information. ''' def __init__(self, data, sample_candidate_items=0): self.data = data self.num_items = self.data['slate'].max()+1 self.sample_candidate_items = sample_candidate_items self.mask2ind = {'train' : 1, 'valid' : 2, 'test' : 3} logging.info( "Loading dataset with slate size={} and number of negative samples={}" .format(self.data['slate'].size(), self.sample_candidate_items)) # Performs some checks on the dataset to make sure it is valid: assert "slate" in data.keys(), "Slate tensor is not in dataset. This is required." assert "click" in data.keys(), "Click tensor is not in dataset. This is required." assert all([val.size(0)==data['slate'].size(0) for key, val in data.items()]), "Not all data tensors have the same batch dimension" def __getitem__(self, idx): batch = {key: val[idx] for key, val in self.data.items()} if self.sample_candidate_items: # Sample actions uniformly (3 is the first non-special item) batch['allitem'] = torch.randint( size=(batch['click'].size(0), self.sample_candidate_items), low=3, high=self.num_items, device = batch['click'].device ) return batch def __len__(self): return len(self.data['click']) #export def load_dataloaders(data_dir= "dat", batch_size=1024, num_workers= 0, sample_candidate_items=False, valid_pct= 0.05, test_pct= 0.05, t_testsplit= 5, limit_num_users=None, seed=0): """ Loads pytorch dataloaders to be used in training. If used with standard settings, the train/val/test split is equivalent to Eide et. al. 2021. Attributes: data_dir: [str] where download and store data if not already downloaded. batch_size: [int] Batch size given by dataloaders. num_workers: [int] How many threads should be used to prepare batches of data. sample_candidate_items: [int] Number of negative item examples sampled from the item universe for each interaction. If positive, the dataset provide an additional dictionary item "allitem". Often also called uniform candidate sampling. See Eide et. al. 2021 for more information. valid_pct: [float] Percentage of users allocated to validation dataset. test_pct: [float] Percentage of users allocated to test dataset. t_testsplit: [int] For users allocated to validation and test datasets, how many initial interactions should be part of the training dataset. limit_num_users: [int] For debugging purposes, only return some users. seed: [int] Seed used to sample users/items. 
""" logging.info("Download data if not in data folder..") data_helper.download_data_files(data_dir=data_dir) logging.info('Load data..') with np.load("{}/data.npz".format(data_dir)) as data_np: data = {key: torch.tensor(val) for key, val in data_np.items()} if limit_num_users is not None: logging.info("Limiting dataset to only return the first {} users.".format(limit_num_users)) data = {key : val[:limit_num_users] for key, val in data.items()} with open('{}/ind2val.json'.format(data_dir), 'rb') as handle: # Use string2int object_hook found here: https://stackoverflow.com/a/54112705 ind2val = json.load( handle, object_hook=lambda d: { int(k) if k.lstrip('-').isdigit() else k: v for k, v in d.items() } ) num_users = len(data['click']) num_validusers = int(num_users * valid_pct) num_testusers = int(num_users * test_pct) torch.manual_seed(seed) perm_user = torch.randperm(num_users) valid_user_idx = perm_user[:num_validusers] test_user_idx = perm_user[num_validusers:(num_validusers+num_testusers)] train_user_idx = perm_user[(num_validusers+num_testusers):] # Split dictionary into train/valid/test with a phase mask that shows which interactions are in different sets # (as some users have both train and valid data) data_train = data data_train['phase_mask'] = torch.ones_like(data['click']).bool() data_train['phase_mask'][test_user_idx,t_testsplit:]=False data_train['phase_mask'][valid_user_idx,t_testsplit:]=False data_valid = {key: val[valid_user_idx] for key, val in data.items()} data_valid['phase_mask'] = torch.zeros_like(data_valid['click']).bool() data_valid['phase_mask'][:,t_testsplit:] = True data_test = {key: val[test_user_idx] for key, val in data.items()} data_test['phase_mask'] = torch.zeros_like(data_test['click']).bool() data_test['phase_mask'][:,t_testsplit:] = True data_dicts = { "train" : data_train, "valid" : data_valid, "test" : data_test} datasets = { phase : SequentialDataset(data, sample_candidate_items) for phase, data in data_dicts.items() } # Build dataloaders for each data subset: dataloaders = { phase: DataLoader(ds, batch_size=batch_size, shuffle=(phase=="train"), num_workers=num_workers) for phase, ds in datasets.items() } for key, dl in dataloaders.items(): logging.info( "In {}: num_users: {}, num_batches: {}".format(key, len(dl.dataset), len(dl)) ) # Load item attributes: with np.load('{}/itemattr.npz'.format(data_dir), mmap_mode=None) as itemattr_file: itemattr = {key : val for key, val in itemattr_file.items()} return ind2val, itemattr, dataloaders #slow ind2val, itemattr, dataloaders = load_dataloaders() ```
true
code
0.709824
null
null
null
null
# Simplifying Codebases Param's just a Python library, and so anything you can do with Param you can do "manually". So, why use Param? The most immediate benefit to using Param is that it allows you to greatly simplify your codebases, making them much more clear, readable, and maintainable, while simultaneously providing robust handling against error conditions. Param does this by letting a programmer explicitly declare the types and values of parameters accepted by the code. Param then ensures that only suitable values of those parameters ever make it through to the underlying code, removing the need to handle any of those conditions explicitly. To see how this works, let's create a Python class with some attributes without using Param: ``` class OrdinaryClass(object): def __init__(self, a=2, b=3, title="sum"): self.a = a self.b = b self.title = title def __call__(self): return self.title + ": " + str(self.a + self.b) ``` As this is just standard Python, we can of course instantiate this class, modify its variables, and call it: ``` o1 = OrdinaryClass(b=4, title="Sum") o1.a=4 o1() ``` The same code written using Param would look like: ``` import param class ParamClass(param.Parameterized): a = param.Integer(2, bounds=(0,1000), doc="First addend") b = param.Integer(3, bounds=(0,1000), doc="Second addend") title = param.String(default="sum", doc="Title for the result") def __call__(self): return self.title + ": " + str(self.a + self.b) o2 = ParamClass(b=4, title="Sum") o2() ``` As you can see, the Parameters here are used precisely like normal attributes once they are defined, so the code for `__call__` and for invoking the constructor are the same in both cases. It's thus generally quite straightforward to migrate an existing class into Param. So, why do that? Well, with fewer lines of code than the ordinary class, you've now unlocked a whole wealth of features and better behavior! For instance, what happens if a user tries to supply some inappropriate data? With Param, such errors will be caught immediately: ``` with param.exceptions_summarized(): o3 = ParamClass() o3.b = -5 ``` Of course, you could always add more code to an ordinary Python class to check for errors like that, but it quickly gets unwieldy: ``` class OrdinaryClass2(object): def __init__(self, a=2, b=3, title="sum"): if type(a) is not int: raise ValueError("'a' must be an integer") if type(b) is not int: raise ValueError("'b' must be an integer") if a<0: raise ValueError("'a' must be at least `0`") if b<0: raise ValueError("'b' must be at least `0`") if type(title) is not str: raise ValueError("'title' must be a string") self.a = a self.b = b self.title = title def __call__(self): return self.title + ": " + str(self.a + self.b) with param.exceptions_summarized(): OrdinaryClass2(a="f") ``` Unfortunately, catching errors in the constructor like that won't help if someone modifies the attribute directly, which won't be detected as an error: ``` o4 = OrdinaryClass2() o4.a = "four" ``` Python will happily accept this incorrect value and will continue processing. It may only be much later, in a very different part of your code, that you see a mysterious error message that's then very difficult to relate back to the actual problem you need to fix: ``` with param.exceptions_summarized(): o4() ``` Here there's no problem with the code in the cell above; `o4()` is fully valid Python; the real problem is in the preceding cell, which could have been in a completely different file or library. 
The error message is also obscure and confusing at this level, because the user of `o4` may have no idea why strings and integers are getting concatenated. To get a better error message, you _could_ move those checks into the `__call__` method, which would make sure that errors are always eventually detected: ``` class OrdinaryClass3(object): def __init__(self, a=2, b=3, title="sum"): self.a = a self.b = b self.title = title def __call__(self): if type(self.a) is not int: raise ValueError("'a' must be an integer") if type(self.b) is not int: raise ValueError("'b' must be an integer") if self.a<0: raise ValueError("'a' must be at least `0`") if self.b<0: raise ValueError("'b' must be at least `0`") if type(self.title) is not str: raise ValueError("'title' must be a string") return self.title + ": " + str(self.a + self.b) o5 = OrdinaryClass3() o5.a = "four" with param.exceptions_summarized(): o5() ``` But you'd now have to check for errors in _every_ _single_ _method_ that might use those parameters. Worse, you still only detect the problem very late, far from where it was first introduced. Any distance between the error and the error report makes it much more difficult to address, as the user then has to track down where in the code `a` might have gotten set to a non-integer. With Param you can catch such problems at their start, as soon as an incorrect value is provided, when it is still simple to detect and correct it. To get those same features in hand-written Python code, you would need to provide explicit getters and setters, which is made easier with Python properties and decorators, but is still quite unwieldy: ``` class OrdinaryClass4(object): def __init__(self, a=2, b=3, title="sum"): self.a = a self.b = b self.title = title @property def a(self): return self.__a @a.setter def a(self, a): if type(a) is not int: raise ValueError("'a' must be an integer") if a < 0: raise ValueError("'a' must be at least `0`") self.__a = a @property def b(self): return self.__b @b.setter def b(self, b): if type(b) is not int: raise ValueError("'a' must be an integer") if b < 0: raise ValueError("'a' must be at least `0`") self.__b = b @property def title(self): return self.__title def title(self, b): if type(title) is not string: raise ValueError("'title' must be a string") self.__title = title def __call__(self): return self.title + ": " + str(self.a + self.b) o5=OrdinaryClass4() o5() with param.exceptions_summarized(): o5=OrdinaryClass4() o5.b=-6 ``` Note that this code has an easily overlooked mistake in it, reporting `a` rather than `b` as the problem. This sort of error is extremely common in copy-pasted validation code of this type, because tests rarely exercise all of the error conditions involved. As you can see, even getting close to the automatic validation already provided by Param requires 8 methods and >30 highly repetitive lines of code, even when using relatively esoteric Python features like properties and decorators, and still doesn't yet implement other Param features like automatic documentation, attribute inheritance, or dynamic values. With Param, the corresponding `ParamClass` code only requires 6 lines and no fancy techniques beyond Python classes. 
Most importantly, the Param version lets readers and program authors focus directly on what this code actually does, which is to compute a function from three provided parameters: ``` class ParamClass(param.Parameterized): a = param.Integer(2, bounds=(0,1000), doc="First addend") b = param.Integer(3, bounds=(0,1000), doc="Second addend") title = param.String(default="sum", doc="Title for the result") def __call__(self): return self.title + ": " + str(self.a + self.b) ``` Even a quick skim of this code reveals what parameters are available, what values they will accept, what the default values are, and how those parameters will be used in the method. Plus the actual code of the method stands out immediately, as all the code is either parameters or actual functionality. In contrast, users of OrdinaryClass3 will have to read through dozens of lines of code to discern even basic information about usage, or else authors of the code will need to create and maintain docstrings that may or may not match the actual code over time and will further increase the amount of text to write and maintain. ## Programming contracts If you think about the examples above, you can see how Param makes it simple for programmers to make a contract with their users, being explicit and clear what will be accepted and rejected, while also allowing programmers to make safe assumptions about what inputs the code may ever receive. There is no need for `__call__` _ever_ to check for the type of one of its parameters, whether it's in the range allowed, or any other property that can be enforced by Param. Your custom code can then be much more linear and straightforward, getting right to work with the actual task at hand, without having to have reams of `if` statements and `asserts()` that disrupt the flow of the source file and make the reader get sidetracked in error-handling code. Param lets you once and for all declare what this code accepts, which is both clear documentation to the user and a guarantee that the programmer can forget about any other possible value a user might someday supply. Crucially, these contracts apply not just between the user and a given piece of code, but also between components of the system itself. When validation code is expensive, as in ordinary Python, programmers will typically do it only at the edges of the system, where input from the user is accepted. But expressing types and ranges is so easy in Param, it can be done for any major component in the system. The Parameter list declares very clearly what that component accepts, which lets the code for that component ignore all potential inputs that are disallowed by the Parameter specifications, while correctly advertising to the rest of the codebase what inputs are allowed. Programmers can thus focus on their particular components of interest, knowing precisely what inputs will ever be let through, without having to reason about the flow of configuration and data throughout the whole system. Without Param, you should expect Python code to be full of confusing error checking and handling of different input types, while still only catching a small fraction of the possible incorrect inputs that could be provided. But Param-based code should be dramatically easier to read, easier to maintain, easier to develop, and nearly bulletproof against mistaken or even malicious usage.
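One of the additional features mentioned above is attribute inheritance: Parameters declared on a base class carry over to subclasses, along with their bounds, docs, and validation. A small sketch building on the `ParamClass` defined earlier:

```
class ScaledParamClass(ParamClass):
    """Subclass that only overrides one default; everything else is inherited."""
    a = param.Integer(10, bounds=(0, 1000), doc="First addend (new default)")

s = ScaledParamClass(title="Scaled sum")
print(s())        # uses the new default for a, the inherited default for b

# Inherited validation still applies to the subclass
with param.exceptions_summarized():
    s.b = -1
```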
true
code
0.812347
null
null
null
null
# Changing the input current when solving PyBaMM models

This notebook shows you how to change the input current when solving PyBaMM models. It also explains how to load in current data from a file, and how to add a user-defined current function. For more examples of different drive cycles see [here](https://github.com/pybamm-team/PyBaMM/tree/master/results/drive_cycles).

### Table of Contents
1. [Constant current](#constant)
1. [Loading in current data](#data)
1. [Adding your own current function](#function)

## Constant current <a name="constant"></a>

In this notebook we will use the DFN and SPM as the example models, and change the input current from the default option. If you are not familiar with running a model in PyBaMM, please see [this](./models/SPM.ipynb) notebook for more details.

In PyBaMM, the current function is set using the parameter "Current function [A]". Below we load the DFN with the default parameters, and then change the current function to be an input parameter, so that we can change it easily later.

```
%pip install pybamm -q    # install PyBaMM if it is not installed
import pybamm
import numpy as np
import os
os.chdir(pybamm.__path__[0]+'/..')

# create the model
model = pybamm.lithium_ion.DFN()

# load the default model parameters
param = model.default_parameter_values

# change the current function to be an input parameter
param["Current function [A]"] = "[input]"
```

We can set up a simulation in the usual way, making sure we pass in our updated parameters. We choose to solve with a 1.6 A current. In order to do this we must pass a dictionary of inputs whose keys are the parameter names and whose values are the values we want to use for that call to `solve`.

```
# set up simulation
simulation = pybamm.Simulation(model, parameter_values=param)

# solve the model at the given time points, passing the current as an input
t_eval = np.linspace(0, 600, 300)
simulation.solve(t_eval, inputs={"Current function [A]": 1.6})

# plot
simulation.plot()
```

PyBaMM can also simulate rest behaviour by setting the current function to zero:

```
# solve the model at the given time points
simulation.solve(t_eval, inputs={"Current function [A]": 0})

# plot
simulation.plot()
```

## Loading in current data <a name="data"></a>

To run drive cycles from data we can create an interpolant and pass it as the current function.

```
import pandas as pd    # needed to read the csv data file

model = pybamm.lithium_ion.DFN()

# import drive cycle from file
drive_cycle = pd.read_csv("pybamm/input/drive_cycles/US06.csv", comment="#", header=None).to_numpy()

# load parameter values
param = model.default_parameter_values

# create interpolant - must be a function of *dimensional* time
timescale = param.evaluate(model.timescale)
current_interpolant = pybamm.Interpolant(drive_cycle, timescale * pybamm.t)

# set drive cycle
param["Current function [A]"] = current_interpolant

# set up simulation - for drive cycles we recommend using the CasadiSolver in "fast" mode
solver = pybamm.CasadiSolver(mode="fast")
simulation = pybamm.Simulation(model, parameter_values=param, solver=solver)
```

Note that when simulating drive cycles there is no need to pass a list of times at which to return the solution: the results are automatically returned at the time points in the data. If you would like the solution returned at times different to those in the data then you can pass an array of times `t_eval` to `solve` in the usual way.
``` # simulate US06 drive cycle (duration 600 seconds) simulation.solve() # plot simulation.plot() ``` Note that some solvers try to evaluate the model equations at a very large value of `t` during the first step. This may raise a warning if the time requested by the solver is outside of the range of the data provided. However, this does not affect the solve since this large timestep is rejected by the solver, and a suitable shorter initial step is taken. ## Adding your own current function <a name="function"></a> A user defined current function can be passed to any model by specifying either a function or a set of data points for interpolation. For example, you may want to simulate a sinusoidal current with amplitude A and frequency omega. In order to do so you must first define the method ``` # create user-defined function def my_fun(A, omega): def current(t): return A * pybamm.sin(2 * np.pi * omega * t) return current ``` Note that the function returns a function which takes the input time. Then the model may be loaded and the "Current function" parameter updated to `my_fun` called with a specific value of `A` and `omega` ``` model = pybamm.lithium_ion.SPM() # load default parameter values param = model.default_parameter_values # set user defined current function A = model.param.I_typ omega = 0.1 param["Current function [A]"] = my_fun(A,omega) ``` Note that when `my_fun` is evaluated with `A` and `omega`, this creates a new function `current(t)` which can then be used in the expression tree. The model may then be solved in the usual way ``` # set up simulation simulation = pybamm.Simulation(model, parameter_values=param) # Example: simulate for 30 seconds simulation_time = 30 # end time in seconds npts = int(50 * simulation_time * omega) # need enough timesteps to resolve output t_eval = np.linspace(0, simulation_time, npts) solution = simulation.solve(t_eval) label = ["Frequency: {} Hz".format(omega)] # plot current and voltage output_variables = ["Current [A]", "Terminal voltage [V]"] simulation.plot(output_variables, labels=label) ```
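The same closure pattern works for any expression built from PyBaMM symbols. As a hedged sketch (the function name and the ripple amplitude below are made up for illustration, not part of the original example), a constant current with a small sinusoidal ripple could be defined and passed in exactly the same way:

```
# sketch: constant current with a 10% sinusoidal ripple, reusing the closure pattern above
def my_rippled_fun(A, omega):
    def current(t):
        return A * (1 + 0.1 * pybamm.sin(2 * np.pi * omega * t))
    return current

param["Current function [A]"] = my_rippled_fun(A, omega)
simulation = pybamm.Simulation(model, parameter_values=param)
simulation.solve(t_eval)
simulation.plot(["Current [A]", "Terminal voltage [V]"])
```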
## Discretisation Discretisation is the process of transforming continuous variables into discrete variables by creating a set of contiguous intervals that span the range of the variable's values. Discretisation is also called **binning**, where bin is an alternative name for interval. ### Discretisation helps handle outliers and may improve value spread in skewed variables Discretisation helps handle outliers by placing these values into the lower or higher intervals, together with the remaining inlier values of the distribution. Thus, these outlier observations no longer differ from the rest of the values at the tails of the distribution, as they are now all together in the same interval / bucket. In addition, by creating appropriate bins or intervals, discretisation can help spread the values of a skewed variable across a set of bins with equal number of observations. ### Discretisation approaches There are several approaches to transform continuous variables into discrete ones. Discretisation methods fall into 2 categories: **supervised and unsupervised**. Unsupervised methods do not use any information, other than the variable distribution, to create the contiguous bins in which the values will be placed. Supervised methods typically use target information in order to create the bins or intervals. #### Unsupervised discretisation methods - Equal width discretisation - Equal frequency discretisation - K-means discretisation #### Supervised discretisation methods - Discretisation using decision trees In this lecture, I will describe **equal frequency discretisation**. ## Equal frequency discretisation Equal frequency discretisation divides the scope of possible values of the variable into N bins, where each bin carries the same amount of observations. This is particularly useful for skewed variables as it spreads the observations over the different bins equally. We find the interval boundaries by determining the quantiles. Equal frequency discretisation using quantiles consists of dividing the continuous variable into N quantiles, N to be defined by the user. Equal frequency binning is straightforward to implement and by spreading the values of the observations more evenly it may help boost the algorithm's performance. This arbitrary binning may also disrupt the relationship with the target. ## In this demo We will learn how to perform equal frequency discretisation using the Titanic dataset with - pandas and NumPy - Feature-engine - Scikit-learn ## Titanic dataset ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.preprocessing import KBinsDiscretizer from feature_engine.discretisers import EqualFrequencyDiscretiser # load the numerical variables of the Titanic Dataset data = pd.read_csv('../titanic.csv', usecols=['age', 'fare', 'survived']) data.head() # Let's separate into train and test set X_train, X_test, y_train, y_test = train_test_split( data[['age', 'fare']], data['survived'], test_size=0.3, random_state=0) X_train.shape, X_test.shape ``` The variables Age and Fare contain missing data, that I will fill by extracting a random sample of the variable. 
``` def impute_na(data, variable): # function to fill NA with a random sample df = data.copy() # random sampling df[variable+'_random'] = df[variable] # extract the random sample to fill the na random_sample = X_train[variable].dropna().sample( df[variable].isnull().sum(), random_state=0) # pandas needs to have the same index in order to merge datasets random_sample.index = df[df[variable].isnull()].index df.loc[df[variable].isnull(), variable+'_random'] = random_sample return df[variable+'_random'] # replace NA in both train and test sets X_train['age'] = impute_na(data, 'age') X_test['age'] = impute_na(data, 'age') X_train['fare'] = impute_na(data, 'fare') X_test['fare'] = impute_na(data, 'fare') # let's explore the distribution of age X_train[['age', 'fare']].hist(bins=30, figsize=(8,4)) plt.show() ``` ## Equal frequency discretisation with pandas and NumPy The interval limits are the quantile limits. We can find those out with pandas qcut. ``` # let's use pandas qcut (quantile cut) and I indicate that # we want 10 bins. # retbins = True indicates that I want to capture the limits # of each interval (so I can then use them to cut the test set) Age_disccretised, intervals = pd.qcut( X_train['age'], 10, labels=None, retbins=True, precision=3, duplicates='raise') pd.concat([Age_disccretised, X_train['age']], axis=1).head(10) ``` We can see in the above output how by discretising using quantiles, we placed each Age observation within one interval. For example, age 29 was placed in the 26-30 interval, whereas age 63 was placed into the 49-80 interval. Note how the interval widths are different. We can visualise the interval cut points below: ``` intervals ``` And because we generated the bins using the quantile cut method, we should have roughly the same amount of observations per bin. See below. ``` # roughly the same number of passengers per interval Age_disccretised.value_counts() # we can also add labels instead of having the interval boundaries, to the bins, as follows: labels = ['Q'+str(i) for i in range(1,11)] labels Age_disccretised, intervals = pd.qcut(X_train['age'], 10, labels=labels, retbins=True, precision=3, duplicates='raise') Age_disccretised.head() # to transform the test set: # we use pandas cut method (instead of qcut) and # pass the quantile edges calculated in the training set X_test['Age_disc_label'] = pd.cut(x = X_test['age'], bins=intervals, labels=labels) X_test['Age_disc'] = pd.cut(x = X_test['age'], bins=intervals) X_test.head(10) # let's check that we have equal frequency (equal number of observations per bin) X_test.groupby('Age_disc')['age'].count().plot.bar() ``` We can see that the top intervals have less observations. This may happen with skewed distributions if we try to divide in a high number of intervals. To make the value spread more homogeneous, we should discretise in less intervals. 
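As a quick sanity check (a small sketch, not part of the original recipe), the interval limits returned by `pd.qcut` are simply the decile boundaries of the training data, so computing the quantiles directly should give the same numbers up to rounding:

```
# the qcut limits should coincide (up to rounding) with the training-set deciles
decile_edges = X_train['age'].quantile(np.linspace(0, 1, 11)).values
decile_edges  # compare with the `intervals` array returned by pd.qcut above
```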
## Equal frequency discretisation with Feature-Engine ``` # Let's separate into train and test set X_train, X_test, y_train, y_test = train_test_split( data[['age', 'fare']], data['survived'], test_size=0.3, random_state=0) X_train.shape, X_test.shape # replace NA in both train and test sets X_train['age'] = impute_na(data, 'age') X_test['age'] = impute_na(data, 'age') X_train['fare'] = impute_na(data, 'fare') X_test['fare'] = impute_na(data, 'fare') # with feature engine we can automate the process for many variables # in one line of code disc = EqualFrequencyDiscretiser(q=10, variables = ['age', 'fare']) disc.fit(X_train) # in the binner dict, we can see the limits of the intervals. Note # that the intervals have different widths disc.binner_dict_ # transform train and text train_t = disc.transform(X_train) test_t = disc.transform(X_test) train_t.head() # and now let's explore the number of observations per bucket t1 = train_t.groupby(['age'])['age'].count() / len(train_t) t2 = test_t.groupby(['age'])['age'].count() / len(test_t) tmp = pd.concat([t1, t2], axis=1) tmp.columns = ['train', 'test'] tmp.plot.bar() plt.xticks(rotation=0) plt.ylabel('Number of observations per bin') t1 = train_t.groupby(['fare'])['fare'].count() / len(train_t) t2 = test_t.groupby(['fare'])['fare'].count() / len(test_t) tmp = pd.concat([t1, t2], axis=1) tmp.columns = ['train', 'test'] tmp.plot.bar() plt.xticks(rotation=0) plt.ylabel('Number of observations per bin') ``` Note how equal frequency discretisation obtains a better value spread across the different intervals. ## Equal frequency discretisation with Scikit-learn ``` # Let's separate into train and test set X_train, X_test, y_train, y_test = train_test_split( data[['age', 'fare']], data['survived'], test_size=0.3, random_state=0) X_train.shape, X_test.shape # replace NA in both train and test sets X_train['age'] = impute_na(data, 'age') X_test['age'] = impute_na(data, 'age') X_train['fare'] = impute_na(data, 'fare') X_test['fare'] = impute_na(data, 'fare') disc = KBinsDiscretizer(n_bins=10, encode='ordinal', strategy='quantile') disc.fit(X_train[['age', 'fare']]) disc.bin_edges_ train_t = disc.transform(X_train[['age', 'fare']]) train_t = pd.DataFrame(train_t, columns = ['age', 'fare']) train_t.head() test_t = disc.transform(X_test[['age', 'fare']]) test_t = pd.DataFrame(test_t, columns = ['age', 'fare']) t1 = train_t.groupby(['age'])['age'].count() / len(train_t) t2 = test_t.groupby(['age'])['age'].count() / len(test_t) tmp = pd.concat([t1, t2], axis=1) tmp.columns = ['train', 'test'] tmp.plot.bar() plt.xticks(rotation=0) plt.ylabel('Number of observations per bin') t1 = train_t.groupby(['fare'])['fare'].count() / len(train_t) t2 = test_t.groupby(['fare'])['fare'].count() / len(test_t) tmp = pd.concat([t1, t2], axis=1) tmp.columns = ['train', 'test'] tmp.plot.bar() plt.xticks(rotation=0) plt.ylabel('Number of observations per bin') ```
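The scikit-learn transformer returns ordinal bin numbers rather than interval objects. If you want human-readable intervals, a small sketch like the one below (the column name `age_interval` is just illustrative) maps the codes back to the learned edges with `pd.cut`:

```
# recover readable intervals from the learned bin edges (illustrative only)
age_edges = disc.bin_edges_[0]

train_t['age_interval'] = pd.cut(X_train['age'].reset_index(drop=True),
                                 bins=age_edges, include_lowest=True)

train_t[['age', 'age_interval']].head()
```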
# Exercise 4 - Optimizing Model Training In [the previous exercise](./03%20-%20Compute%20Contexts.ipynb), you created cloud-based compute and used it when running a model training experiment. The benefit of cloud compute is that it offers a cost-effective way to scale out your experiment workflow and try different algorithms and parameters in order to optimize your model's performance; and that's what we'll explore in this exercise. > **Important**: This exercise assumes you have completed the previous exercises in this series - specifically, you must have: > > - Created an Azure ML Workspace. > - Uploaded the diabetes.csv data file to the workspace's default datastore. > - Registered a **Diabetes Dataset** dataset in the workspace. > - Provisioned an Azure ML Compute resource named **cpu-cluster**. > > If you haven't done that, now would be a good time - nobody's going to do it for you! ## Task 1: Connect to Your Workspace The first thing you need to do is to connect to your workspace using the Azure ML SDK. Let's start by ensuring you still have the latest version installed (if you ended and restarted your Azure Notebooks session, the environment may have been reset) ``` !pip install --upgrade azureml-sdk[notebooks,automl,explain] import azureml.core print("Ready to use Azure ML", azureml.core.VERSION) ``` Now you're ready to connect to your workspace. When you created it in the previous exercise, you saved its configuration; so now you can simply load the workspace from its configuration file. > **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate. ``` from azureml.core import Workspace # Load the workspace from the saved config file ws = Workspace.from_config() print('Ready to work with', ws.name) ``` Now let's get the Azure ML compute resource you created previously (or recreate it if you deleted it!) ``` from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your CPU cluster cpu_cluster_name = "cpu-cluster" # Verify that cluster does not exist already try: cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: # Create an AzureMl Compute resource (a container cluster) compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2', vm_priority='lowpriority', max_nodes=4) cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config) cpu_cluster.wait_for_completion(show_output=True) ``` ## Task 2: Use *Hyperdrive* to Determine Optimal Parameter Values The remote compute you created is a four-node cluster, and you can take advantage of this to execute multiple experiment runs in parallel. One key reason to do this is to try training a model with a range of different hyperparameter values. Azure ML includes a feature called *hyperdrive* that enables you to randomly try different values for one or more hyperparameters, and find the best performing trained model based on a metric that you specify - such as *Accuracy* or *Area Under the Curve (AUC)*. > **More Information**: For more information about Hyperdrive, see the [Azure ML documentation](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters). Let's run a Hyperdrive experiment on the remote compute you have provisioned. First, we'll create the experiment and its associated folder. 
``` import os from azureml.core import Experiment # Create an experiment experiment_name = 'diabetes_training' experiment = Experiment(workspace = ws, name = experiment_name) # Create a folder for the experiment files experiment_folder = './' + experiment_name os.makedirs(experiment_folder, exist_ok=True) print("Experiment:", experiment.name) ``` Now we'll create the Python script our experiment will run in order to train a model. ``` %%writefile $experiment_folder/diabetes_training.py # Import libraries import argparse import joblib from azureml.core import Workspace, Dataset, Experiment, Run import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import roc_auc_score from sklearn.metrics import roc_curve # Set regularization parameter parser = argparse.ArgumentParser() parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate') args = parser.parse_args() reg = args.reg_rate # Get the experiment run context run = Run.get_context() # load the diabetes dataset dataset_name = 'Diabetes Dataset' print("Loading data from " + dataset_name) diabetes = Dataset.get_by_name(workspace=run.experiment.workspace, name=dataset_name).to_pandas_dataframe() # Separate features and labels X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values # Split data into training set and test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0) # Train a logistic regression model print('Training a logistic regression model with regularization rate of', reg) run.log('Regularization Rate', np.float(reg)) model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train) # calculate accuracy y_hat = model.predict(X_test) acc = np.average(y_hat == y_test) print('Accuracy:', acc) run.log('Accuracy', np.float(acc)) # calculate AUC y_scores = model.predict_proba(X_test) auc = roc_auc_score(y_test,y_scores[:,1]) print('AUC: ' + str(auc)) run.log('AUC', np.float(auc)) # plot ROC curve fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1]) fig = plt.figure(figsize=(6, 4)) # Plot the diagonal 50% line plt.plot([0, 1], [0, 1], 'k--') # Plot the FPR and TPR achieved by our model plt.plot(fpr, tpr) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC Curve') run.log_image(name = "ROC", plot = fig) plt.show() os.makedirs('outputs', exist_ok=True) # note file saved in the outputs folder is automatically uploaded into experiment record joblib.dump(value=model, filename='outputs/diabetes_model.pkl') run.complete() ``` Now, we'll use the *Hyperdrive* feature of Azure ML to run multiple experiments in parallel, using different values for the **regularization** parameter to find the optimal value for our data. 
``` from azureml.train.hyperdrive import GridParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal from azureml.train.hyperdrive import choice from azureml.widgets import RunDetails from azureml.train.sklearn import SKLearn # Sample a range of parameter values params = GridParameterSampling( { # There's only one parameter, so grid sampling will try each value - with multiple parameters it would try every combination '--regularization': choice(0.001, 0.005, 0.01, 0.05, 0.1, 1.0) } ) # Set evaluation policy to stop poorly performing training runs early policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1) # Create an estimator that uses the remote compute hyper_estimator = SKLearn(source_directory=experiment_folder, compute_target = cpu_cluster, conda_packages=['pandas','ipykernel','matplotlib'], pip_packages=['azureml-sdk','argparse','pyarrow'], entry_script='diabetes_training.py') # Configure hyperdrive settings hyperdrive = HyperDriveConfig(estimator=hyper_estimator, hyperparameter_sampling=params, policy=policy, primary_metric_name='AUC', primary_metric_goal=PrimaryMetricGoal.MAXIMIZE, max_total_runs=6, max_concurrent_runs=4) # Run the experiment run = experiment.submit(config=hyperdrive) # Show the status in the notebook as the experiment runs RunDetails(run).show() ``` When all of the runs have finished, you can find the best one based on the performance metric you specified (in this case, the one with the best AUC). ``` best_run = run.get_best_run_by_primary_metric() best_run_metrics = best_run.get_metrics() parameter_values = best_run.get_details() ['runDefinition']['arguments'] print('Best Run Id: ', best_run.id) print(' -AUC:', best_run_metrics['AUC']) print(' -Accuracy:', best_run_metrics['Accuracy']) print(' -Regularization Rate:',parameter_values) ``` Since we've found the best run, we can register the model it trained. ``` from azureml.core import Model # Register model best_run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model', tags={'Training context':'Hyperdrive'}, properties={'AUC': best_run_metrics['AUC'], 'Accuracy': best_run_metrics['Accuracy']}) # List registered models for model in Model.list(ws): print(model.name, 'version:', model.version) for tag_name in model.tags: tag = model.tags[tag_name] print ('\t',tag_name, ':', tag) for prop_name in model.properties: prop = model.properties[prop_name] print ('\t',prop_name, ':', prop) print('\n') ``` ## Task 3: Use *Auto ML* to Find the Best Model Hyperparameter tuning has helped us find the optimal regularization rate for our logistic regression model, but we might get better results by trying a different algorithm, and by performing some basic feature-engineering, such as scaling numeric feature values. You could just create lots of different training scripts that apply various scikit-learn algorithms, and try them all until you find the best result; but Azure ML provides a feature called *Automated Machine Learning* (or *Auto ML*) that can do this for you. First, let's create a folder for a new experiment. ``` # Create a project folder if it doesn't exist automl_folder = "automl_experiment" if not os.path.exists(automl_folder): os.makedirs(automl_folder) print(automl_folder, 'folder created') ``` You don't need to create a training script (Auto ML will do that for you), but you do need to load the training data; and when using remote compute, this is best achieved by creating a script containing a **get_data** function. 
``` %%writefile $automl_folder/get_data.py #Write the get_data file. from azureml.core import Run, Workspace, Dataset from sklearn.model_selection import train_test_split import pandas as pd import numpy as np def get_data(): # load the diabetes dataset run = Run.get_context() dataset_name = 'Diabetes Dataset' diabetes = Dataset.get_by_name(workspace=run.experiment.workspace, name=dataset_name).to_pandas_dataframe() # Separate features and labels X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values # Split data into training set and test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0) return { "X" : X_train, "y" : y_train, "X_valid" : X_test, "y_valid" : y_test } ``` Now you're ready to confifure the Auto ML experiment. To do this, you'll need a run configuration that includes the required packages for the experiment environment, and a set of configuration settings that tells Auto ML how many options to try, which metric to use when evaluating models, and so on. > **More Information**: For more information about options when using Auto ML, see the [Azure ML documentation](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train). ``` from azureml.core.runconfig import RunConfiguration from azureml.core.conda_dependencies import CondaDependencies from azureml.train.automl import AutoMLConfig import time import logging automl_run_config = RunConfiguration(framework="python") automl_run_config.environment.docker.enabled = True auto_ml_dependencies = CondaDependencies.create( pip_packages=["azureml-sdk", "pyarrow", "pandas", "scikit-learn", "numpy"]) automl_run_config.environment.python.conda_dependencies = auto_ml_dependencies automl_settings = { "name": "Diabetes_AutoML_{0}".format(time.time()), "iteration_timeout_minutes": 10, "iterations": 10, "primary_metric": 'AUC_weighted', "preprocess": False, "max_concurrent_iterations": 4, "verbosity": logging.INFO } automl_config = AutoMLConfig(task='classification', debug_log='automl_errors.log', path=automl_folder, compute_target=cpu_cluster, run_configuration=automl_run_config, data_script=automl_folder + "/get_data.py", model_explainability=True, **automl_settings, ) ``` OK, we're ready to go. Let's start the Auto ML run, which will generate child runs for different algorithms. > **Note**: This will take some time. Progress will be displayed as each child run completes, and then a widget showing the results will be displayed. ``` from azureml.core.experiment import Experiment from azureml.widgets import RunDetails automl_experiment = Experiment(ws, 'diabetes_automl') automl_run = automl_experiment.submit(automl_config, show_output=True) RunDetails(automl_run).show() ``` View the output of the experiment in the widget, and click the run that produced the best result to see its details. Then click the link to view the experiment details in the Azure portal and view the overall experiment details before viewing the details for the individual run that produced the best result. There's lots of information here about the performance of the model generated and how its features were used. Let's get the best run and the model that was generated (you can ignore any warnings about Azure ML package versions that might appear). 
```
best_run, fitted_model = automl_run.get_output()
print(best_run)
print(fitted_model)

best_run_metrics = best_run.get_metrics()
for metric_name in best_run_metrics:
    metric = best_run_metrics[metric_name]
    print(metric_name, metric)
```

One of the options you used was to include model *explainability*. This uses a test dataset to evaluate the importance of each feature. You can view this data in the notebook widget or the portal, and you can also retrieve it from the run.

```
from azureml.train.automl.automlexplainer import retrieve_model_explanation

shap_values, expected_values, overall_summary, overall_imp, per_class_summary, per_class_imp = retrieve_model_explanation(best_run)

# Overall feature importance (the Feature value is the column index in the training data)
print("Feature\tImportance")
for i in range(len(overall_imp)):
    print(overall_imp[i], '\t', overall_summary[i])
```

Finally, having found the best performing model, you can register it.

```
# Register model
best_run.register_model(model_path='outputs/model.pkl', model_name='diabetes_model', tags={'Training context':'Auto ML'}, properties={'AUC': best_run_metrics['AUC_weighted'], 'Accuracy': best_run_metrics['accuracy']})

# List registered models
for model in Model.list(ws):
    print(model.name, 'version:', model.version)
    for tag_name in model.tags:
        tag = model.tags[tag_name]
        print ('\t',tag_name, ':', tag)
    for prop_name in model.properties:
        prop = model.properties[prop_name]
        print ('\t',prop_name, ':', prop)
    print('\n')
```

Now you've seen several ways to leverage the high-scale compute capabilities of the cloud to experiment with model training and find the best performing model for your data. In the next exercise, you'll deploy a registered model into production.
# Implementing a Shallow Neural Network with NumPy

In this hands-on section we will build a neural network with one hidden layer, and the experiment will show how it differs from logistic regression.

We will use a two-layer neural network to classify a "flower"-shaped pattern. As shown in the figure, the data contains red points (y=0) and blue points (y=1) together with their coordinates. The experiment classifies the two kinds of points through the following steps, implemented with NumPy:

- load the input samples;
- build the neural network;
- initialize the parameters;
- train, including forward propagation and backward propagation (i.e. the BP algorithm);
- obtain the trained parameters;
- plot the decision boundary between the two classes using the trained parameters.

<img src="image/data.png" style="width:400px;height:300px;">

This experiment builds the two-layer neural network with plain Python libraries to perform the classification.

## 1 - Imports

First, load the libraries we will need:

- numpy: a basic Python library for scientific computing
- planar_utils: defines a few utility functions
- matplotlib.pyplot: used for plotting, when checking the model accuracy and showing the cost trend
- sklearn: used for data mining and data analysis

```
import numpy as np
import sklearn
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
import matplotlib.pyplot as plt

%matplotlib inline

np.random.seed(1)
```

## 2 - Load the data and check its dimensions

After loading the data, print its dimensions.

```
# load the data
train_x, train_y, test_x, test_y = load_planar_dataset()

# print the dimensions
shape_X = train_x.shape
shape_Y = train_y.shape

print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
```

From the output we can see that each input contains two coordinate values and each label contains one value, with 320 training examples in total (the test set adds another 80 examples on top of the training set, for 400 in total).

## 3 - A simple logistic regression experiment

Apply logistic regression to this data and look at the classification result.

```
# train a logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(train_x.T, train_y.T);

# plot the logistic regression decision boundary
plot_decision_boundary(lambda x: clf.predict(x), train_x, train_y)
plt.title("Logistic Regression")

# print the accuracy
LR_predictions = clf.predict(train_x.T)
print ('Accuracy of logistic regression:%d ' % float((np.dot(train_y,LR_predictions) + np.dot(1-train_y,1-LR_predictions))/float(train_y.size)*100) + '% ' + "(percentage of correctly labelled datapoints)")
```

We can see that logistic regression does not perform well. This is because its network structure only contains an input layer and an output layer, so it cannot fit more complex models. Next we try a neural network model.

## 4 - Neural network model

Now we start building the neural network model. We use a two-layer network: the hidden layer has 4 nodes and uses the tanh activation function; the output layer has one node and uses the Sigmoid activation function, where an output below 0.5 is treated as 0, and otherwise as 1.

**Network structure**

The code below implements the network structure. First we determine the structure of the network, i.e. get the relevant data dimensions and set the number of hidden-layer nodes (4 in this experiment), which will be used to initialize the parameters.

```
# define the layer-size function
def layer_sizes(X, Y):
    """
    Arguments:
    X -- input data
    Y -- output values

    Returns:
    n_x -- size of the input layer
    n_h -- size of the hidden layer
    n_y -- size of the output layer
    """
    n_x = X.shape[0] # size (number of nodes) of the input layer
    n_h = 4
    n_y = Y.shape[0] # size (number of nodes) of the output layer
    return (n_x, n_h, n_y)
```

**Initialize the model parameters**

After getting the relevant dimensions, we initialize the parameters with the following function.

```
# define function: initialize the parameters
def initialize_parameters(n_x, n_h, n_y):
    """
    Arguments:
    n_x -- size of the input layer
    n_h -- size of the hidden layer
    n_y -- size of the output layer

    Returns:
    params -- a python dictionary containing all parameters:
        W1 -- (hidden layer) weights, shape (n_h, n_x)
        b1 -- (hidden layer) bias, shape (n_h, 1)
        W2 -- (output layer) weights, shape (n_y, n_h)
        b2 -- (output layer) bias, shape (n_y, 1)
    """
    np.random.seed(2) # set the random seed

    # randomly initialize the parameters
    W1 = np.random.randn(n_h, n_x) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = np.random.randn(n_y, n_h) * 0.01
    b2 = np.zeros((n_y, 1))

    assert (W1.shape == (n_h, n_x))
    assert (b1.shape == (n_h, 1))
    assert (W2.shape == (n_y, n_h))
    assert (b2.shape == (n_y, 1))

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}

    return parameters
```

**Forward propagation and backward propagation**

With the input data ready and the parameters initialized, we can compute the forward pass.

```
# define function: forward propagation
def forward_propagation(X, parameters):
    """
    Arguments:
    X -- input values
    parameters -- a python dictionary containing all required parameters (the output of initialize_parameters)

    Returns:
    A2 -- the model output
    cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
    """
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    # compute the intermediate quantities and node values
    Z1 = np.dot(W1, X) + b1
    A1 = np.tanh(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = 1/(1+np.exp(-Z2))

    assert(A2.shape == (1, X.shape[1]))

    cache = {"Z1": Z1,
             "A1": A1,
             "Z2": Z2,
             "A2": A2}

    return A2, cache
```

The forward pass produces the model output (A2 in the code), from which we can compute the cost function.

```
# define function: cost function
def compute_cost(A2, Y, parameters):
    """
    Compute the cost using the formula given in Chapter 3

    Arguments:
    A2 -- the model output
    Y -- the true values
    parameters -- a python dictionary containing the parameters W1, b1, W2 and b2

    Returns:
    cost -- the cost
    """
    m = Y.shape[1] # number of samples

    # compute the cost
    logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y)
    cost = -1. / m * np.sum(logprobs)
    cost = np.squeeze(cost) # make sure the dimensions are correct
    assert(isinstance(cost, float))

    return cost
```

Having computed the cost function, we can now compute the backward pass.

```
# define function: backward propagation
def backward_propagation(parameters, cache, X, Y):
    """
    Arguments:
    parameters -- a python dictionary containing all parameters
    cache -- a python dictionary containing "Z1", "A1", "Z2" and "A2"
    X -- input values
    Y -- true values

    Returns:
    grads -- a python dictionary containing the gradients of all parameters
    """
    m = X.shape[1]

    # first get W1, W2 from "parameters"
    W1 = parameters["W1"]
    W2 = parameters["W2"]

    # get A1, A2 from "cache"
    A1 = cache["A1"]
    A2 = cache["A2"]

    # backward propagation: compute dW1, db1, dW2, db2
    dZ2 = A2 - Y
    dW2 = 1. / m * np.dot(dZ2, A1.T)
    db2 = 1. / m * np.sum(dZ2, axis = 1, keepdims = True)
    dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))
    dW1 = 1. / m * np.dot(dZ1, X.T)
    db1 = 1. / m * np.sum(dZ1, axis = 1, keepdims = True)

    grads = {"dW1": dW1,
             "db1": db1,
             "dW2": dW2,
             "db2": db2}

    return grads
```

Once we have the gradients from backpropagation, we can update the parameters using the gradient descent rule.

```
def update_parameters(parameters, grads, learning_rate = 1.2):
    """
    Update the parameters using the gradients

    Arguments:
    parameters -- a python dictionary containing all parameters
    grads -- a python dictionary containing the gradients of all parameters

    Returns:
    parameters -- a python dictionary containing the updated parameters
    """
    # read all parameters from "parameters"
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    # read all gradients from "grads"
    dW1 = grads["dW1"]
    db1 = grads["db1"]
    dW2 = grads["dW2"]
    db2 = grads["db2"]

    # update the parameters
    W1 = W1 - learning_rate * dW1
    b1 = b1 - learning_rate * db1
    W2 = W2 - learning_rate * dW2
    b2 = b2 - learning_rate * db2

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}

    return parameters
```

**Neural network model**

Forward propagation, cost computation and backward propagation together form a complete neural network. We now combine the functions above to build the neural network model.

```
# define function: neural network model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
    """
    Arguments:
    X -- input values
    Y -- true values
    n_h -- size of the hidden layer / number of nodes
    num_iterations -- number of training iterations
    print_cost -- if set to True, print the cost every 1000 iterations

    Returns:
    parameters -- the updated parameters after training
    """
    np.random.seed(3)
    n_x = layer_sizes(X, Y)[0]
    n_y = layer_sizes(X, Y)[2]

    # initialize the parameters from n_x, n_h, n_y and extract W1, b1, W2, b2
    parameters = initialize_parameters(n_x, n_h, n_y)
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    for i in range(0, num_iterations):
        # forward propagation. Input: "X, parameters". Output: "A2, cache".
        A2, cache = forward_propagation(X, parameters)

        # cost computation. Input: "A2, Y, parameters". Output: "cost".
        cost = compute_cost(A2, Y, parameters)

        # backward propagation. Input: "parameters, cache, X, Y". Output: "grads".
        grads = backward_propagation(parameters, cache, X, Y)

        # parameter update. Input: "parameters, grads". Output: "parameters".
        parameters = update_parameters(parameters, grads)

        # print the cost every 1000 iterations
        if print_cost and i % 1000 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))

    return parameters
```

**Prediction**

The model above can be trained to obtain the final parameters. We now need to check its accuracy: we use the trained parameters to predict the outputs, treating values greater than 0.5 as 1 and everything else as 0.

```
# define function: predict
def predict(parameters, X):
    """
    Use the trained parameters to make a prediction for every sample

    Arguments:
    parameters -- a python dictionary containing all the parameters
    X -- input values

    Returns:
    predictions -- vector of model predictions (red: 0 / blue: 1)
    """
    # run forward propagation with the trained parameters and turn the model output
    # into predictions (values above 0.5 are treated as 1, i.e. True)
    A2, cache = forward_propagation(X, parameters)
    predictions = A2 > 0.5

    return predictions
```

Next we train on the data we loaded and print the accuracy.

```
# build the neural network model
parameters = nn_model(train_x, train_y, n_h = 4, num_iterations = 10000, print_cost=True)

# plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), train_x, train_y)
plt.title("Decision Boundary for hidden layer size " + str(4))

predictions = predict(parameters, train_x)

# predict on the training set
print('Train Accuracy: %d' % float((np.dot(train_y, predictions.T) +
np.dot(1 - train_y, 1 - predictions.T)) / float(train_y.size) * 100) + '%')

# predict on the test set
predictions = predict(parameters, test_x)
print('Test Accuracy: %d' % float((np.dot(test_y, predictions.T) +
np.dot(1 - test_y, 1 - predictions.T)) / float(test_y.size) * 100) + '%')
```

Compared with the 47% accuracy of logistic regression and its classification plot, the neural network's results are much better. This is because the hidden layer added by the neural network gives the model more flexibility, allowing it to fit more complex models and classify more complex patterns more accurately.
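As an optional extension (not part of the original experiment), we can reuse `nn_model` and `predict` to see how the hidden layer size affects the decision boundary and the training accuracy; the sizes below are arbitrary choices for illustration:

```
# optional: compare several hidden layer sizes (illustrative values)
plt.figure(figsize=(16, 16))
hidden_layer_sizes = [1, 2, 4, 10, 20]
for i, n_h in enumerate(hidden_layer_sizes):
    plt.subplot(3, 2, i + 1)
    plt.title('Hidden layer of size %d' % n_h)
    parameters = nn_model(train_x, train_y, n_h, num_iterations=5000)
    plot_decision_boundary(lambda x: predict(parameters, x.T), train_x, train_y)
    predictions = predict(parameters, train_x)
    accuracy = float((np.dot(train_y, predictions.T) +
                      np.dot(1 - train_y, 1 - predictions.T)) / float(train_y.size) * 100)
    print("Hidden units: %d, train accuracy: %.1f%%" % (n_h, accuracy))
```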
## Bayesian Optimization with Scikit-Optimize In this notebook, we will perform **Bayesian Optimization** with Gaussian Processes in Parallel, utilizing various CPUs, to speed up the search. This is useful to reduce search times. https://scikit-optimize.github.io/stable/auto_examples/parallel-optimization.html#example ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.datasets import load_breast_cancer from sklearn.ensemble import GradientBoostingClassifier from sklearn.model_selection import cross_val_score, train_test_split from skopt import Optimizer # for the optimization from joblib import Parallel, delayed # for the parallelization from skopt.space import Real, Integer, Categorical from skopt.utils import use_named_args # load dataset breast_cancer_X, breast_cancer_y = load_breast_cancer(return_X_y=True) X = pd.DataFrame(breast_cancer_X) y = pd.Series(breast_cancer_y).map({0:1, 1:0}) X.head() # the target: # percentage of benign (0) and malign tumors (1) y.value_counts() / len(y) # split dataset into a train and test set X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3, random_state=0) X_train.shape, X_test.shape ``` ## Define the Hyperparameter Space Scikit-optimize provides an utility function to create the range of values to examine for each hyperparameters. More details in [skopt.Space](https://scikit-optimize.github.io/stable/modules/generated/skopt.Space.html) ``` # determine the hyperparameter space param_grid = [ Integer(10, 120, name="n_estimators"), Integer(1, 5, name="max_depth"), Real(0.0001, 0.1, prior='log-uniform', name='learning_rate'), Real(0.001, 0.999, prior='log-uniform', name="min_samples_split"), Categorical(['deviance', 'exponential'], name="loss"), ] # Scikit-optimize parameter grid is a list type(param_grid) ``` ## Define the model ``` # set up the gradient boosting classifier gbm = GradientBoostingClassifier(random_state=0) ``` ## Define the objective function This is the hyperparameter response space, the function we want to minimize. ``` # We design a function to maximize the accuracy, of a GBM, # with cross-validation # the decorator allows our objective function to receive the parameters as # keyword arguments. This is a requirement for scikit-optimize. 
@use_named_args(param_grid)
def objective(**params):
    # model with new parameters
    gbm.set_params(**params)

    # optimization function (hyperparam response function)
    value = np.mean(
        cross_val_score(
            gbm, X_train, y_train,
            cv=3,
            n_jobs=-4,
            scoring='accuracy')
    )

    # negate because we need to minimize
    return -value
```

## Optimization with Gaussian Process

```
# We use the Optimizer
optimizer = Optimizer(
    dimensions = param_grid,   # the hyperparameter space
    base_estimator = "GP",     # the surrogate
    n_initial_points=10,       # the number of points to evaluate f(x) to start off
    acq_func='EI',             # the acquisition function
    random_state=0,
    n_jobs=4,
)

# we will use 4 CPUs (n_points)
# if we loop 10 times, evaluating 4 points per loop, we perform 40 searches in total
for i in range(10):
    x = optimizer.ask(n_points=4)  # x is a list of n_points points
    y = Parallel(n_jobs=4)(delayed(objective)(v) for v in x)  # evaluate points in parallel
    optimizer.tell(x, y)

# the evaluated hyperparameters
optimizer.Xi

# the objective values (the negated accuracies)
optimizer.yi

# all together in one dataframe, so we can investigate further
# (the column names follow the order of the dimensions in param_grid)
dim_names = ['n_estimators', 'max_depth', 'learning_rate', 'min_samples_split', 'loss']

tmp = pd.concat([
    pd.DataFrame(optimizer.Xi),
    pd.Series(optimizer.yi),
], axis=1)

tmp.columns = dim_names + ['accuracy']
tmp.head()
```

## Evaluate convergence of the search

```
tmp['accuracy'].sort_values(ascending=False).reset_index(drop=True).plot()
```

The trade-off with parallelization is that we do not update the search after every single evaluation of f(x), but only after, in this case, 4 evaluations of f(x). Thus, we may need to perform more evaluations to find the optimum. But, because we perform them in parallel, we reduce the overall wall time.

```
tmp.sort_values(by='accuracy', ascending=True)
```
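Once the loop has finished we can also pull out the best point evaluated so far. This is a small sketch built on the `Xi` / `yi` attributes used above (remember that `yi` stores the negated accuracies):

```
# retrieve the best evaluated point; yi holds the negated accuracies
best_idx = np.argmin(optimizer.yi)
best_params = dict(zip([d.name for d in param_grid], optimizer.Xi[best_idx]))

print('best cross-validated accuracy: ', -optimizer.yi[best_idx])
print('best hyperparameters: ', best_params)
```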
# COCO Reader Reader operator that reads a COCO dataset (or subset of COCO), which consists of an annotation file and the images directory. `DALI_EXTRA_PATH` environment variable should point to the place where data from [DALI extra repository](https://github.com/NVIDIA/DALI_extra) is downloaded. Please make sure that the proper release tag is checked out. ``` from nvidia.dali.pipeline import Pipeline import nvidia.dali.ops as ops import nvidia.dali.types as types import numpy as np from time import time import os.path test_data_root = os.environ['DALI_EXTRA_PATH'] file_root = os.path.join(test_data_root, 'db', 'coco', 'images') annotations_file = os.path.join(test_data_root, 'db', 'coco', 'instances.json') num_gpus = 1 batch_size = 16 class COCOPipeline(Pipeline): def __init__(self, batch_size, num_threads, device_id): super(COCOPipeline, self).__init__(batch_size, num_threads, device_id, seed = 15) self.input = ops.COCOReader(file_root = file_root, annotations_file = annotations_file, shard_id = device_id, num_shards = num_gpus, ratio=True) self.decode = ops.ImageDecoder(device = "mixed", output_type = types.RGB) def define_graph(self): inputs, bboxes, labels = self.input() images = self.decode(inputs) return (images, bboxes, labels) start = time() pipes = [COCOPipeline(batch_size=batch_size, num_threads=2, device_id = device_id) for device_id in range(num_gpus)] for pipe in pipes: pipe.build() total_time = time() - start print("Computation graph built and dataset loaded in %f seconds." % total_time) pipe_out = [pipe.run() for pipe in pipes] images_cpu = pipe_out[0][0].as_cpu() bboxes_cpu = pipe_out[0][1] labels_cpu = pipe_out[0][2] ``` Bounding boxes returned by the operator are lists of floats containing composed of **\[x, y, width, height]** (`ltrb` is set to `False` by default). ``` bboxes = bboxes_cpu.at(4) bboxes ``` Let's see the ground truth bounding boxes drawn on the image. ``` import matplotlib.pyplot as plt import matplotlib.patches as patches import random img_index = 4 img = images_cpu.at(img_index) H = img.shape[0] W = img.shape[1] fig,ax = plt.subplots(1) ax.imshow(img) bboxes = bboxes_cpu.at(img_index) labels = labels_cpu.at(img_index) categories_set = set() for label in labels: categories_set.add(label[0]) category_id_to_color = dict([ (cat_id , [random.uniform(0, 1) ,random.uniform(0, 1), random.uniform(0, 1)]) for cat_id in categories_set]) for bbox, label in zip(bboxes, labels): rect = patches.Rectangle((bbox[0]*W,bbox[1]*H),bbox[2]*W,bbox[3]*H,linewidth=1,edgecolor=category_id_to_color[label[0]],facecolor='none') ax.add_patch(rect) plt.show() ```
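If you prefer boxes in `ltrb` (left, top, right, bottom) pixel coordinates without re-running the pipeline with `ltrb=True`, a small NumPy sketch like the one below (the helper name is ours, not part of DALI) performs the conversion:

```
# convert relative [x, y, width, height] boxes to absolute [left, top, right, bottom]
def to_absolute_ltrb(relative_xywh, width, height):
    boxes = np.array(relative_xywh, dtype=np.float32)
    boxes[:, 0] *= width                               # x -> left
    boxes[:, 1] *= height                              # y -> top
    boxes[:, 2] = boxes[:, 0] + boxes[:, 2] * width    # width -> right
    boxes[:, 3] = boxes[:, 1] + boxes[:, 3] * height   # height -> bottom
    return boxes

to_absolute_ltrb(bboxes_cpu.at(img_index), W, H)
```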
# Practice Notebook: Methods and Classes

The code below defines an *Elevator* class. The elevator has a current floor; it also has a bottom and a top floor, which are the minimum and maximum floors it can go to. Fill in the blanks to make the elevator go through the floors requested.

```
class Elevator:
    def __init__(self, bottom, top, current):
        """Initializes the Elevator instance."""
        self.bottom = bottom
        self.top = top
        self.current = current

    def __str__(self):
        """Returns information about the current floor."""
        return "Current floor: {}".format(self.current)

    def up(self):
        """Makes the elevator go up one floor, without passing the top floor."""
        if self.current < self.top:
            self.current += 1

    def down(self):
        """Makes the elevator go down one floor, without passing the bottom floor."""
        if self.current > self.bottom:
            self.current -= 1

    def go_to(self, floor):
        """Makes the elevator go to the specified floor, clamped to [bottom, top]."""
        if floor < self.bottom:
            self.current = self.bottom
        elif floor > self.top:
            self.current = self.top
        else:
            self.current = floor

elevator = Elevator(-1, 10, 0)
```

To test whether your *Elevator* class is working correctly, run the code blocks below.

```
elevator.up()
elevator.current #should output 1

elevator.down()
elevator.current #should output 0

elevator.go_to(10)
elevator.current #should output 10
```

If you get a **<font color =red>NameError</font>** message, be sure to run the *Elevator* class definition code block first. If you get an **<font color =red>AttributeError</font>** message, be sure to initialize *self.current* in your *Elevator* class.

Once you've made the above methods output 1, 0 and 10, you've successfully coded the *Elevator* class and its methods. Great work!
<br><br>
For the up and down methods, did you take into account the top and bottom floors? Keep in mind that the elevator shouldn't go above the top floor or below the bottom floor. To check that out, try the code below and verify if it's working as expected. If it's not, then go back and modify the methods so that this code behaves correctly.

```
# Go to the top floor. Try to go up, it should stay. Then go down.
elevator.go_to(10)
elevator.up()
elevator.down()
print(elevator.current) # should be 9

# Go to the bottom floor. Try to go down, it should stay. Then go up.
elevator.go_to(-1)
elevator.down()
elevator.down()
elevator.up()
elevator.up()
print(elevator.current) # should be 1
```

Now add the __str__ method to your *Elevator* class definition above so that when printing the elevator using the **print( )** method, we get the current floor together with a message. For example, on the 5th floor it should say "Current floor: 5".

```
elevator.go_to(5)
print(elevator)
```

Remember, by default Python uses a method that prints the position where the object is stored in the computer’s memory. If your output is something like:
<br>
> <__main__.Elevator object at 0x7ff6a9ff3fd0>

Then you will need to add the special __str__ method, which returns the string that you want to print. Try again until you get the desired output, "Current floor: 5".

Once you have successfully produced the desired output, you are all done with this practice notebook. Awesome!
data-toc-modified-id="Bonus!-11"><span class="toc-item-num">11&nbsp;&nbsp;</span>Bonus!</a></span><ul class="toc-item"><li><span><a href="#Percent-Change" data-toc-modified-id="Percent-Change-11.1"><span class="toc-item-num">11.1&nbsp;&nbsp;</span>Percent Change</a></span></li><li><span><a href="#Interval-Index" data-toc-modified-id="Interval-Index-11.2"><span class="toc-item-num">11.2&nbsp;&nbsp;</span>Interval Index</a></span></li><li><span><a href="#Split-Strings" data-toc-modified-id="Split-Strings-11.3"><span class="toc-item-num">11.3&nbsp;&nbsp;</span>Split Strings</a></span></li><li><span><a href="#Toy-Examples-with-Pandas-Testing" data-toc-modified-id="Toy-Examples-with-Pandas-Testing-11.4"><span class="toc-item-num">11.4&nbsp;&nbsp;</span>Toy Examples with Pandas Testing</a></span></li></ul></li><li><span><a href="#Research-with-Style!" data-toc-modified-id="Research-with-Style!-12"><span class="toc-item-num">12&nbsp;&nbsp;</span>Research with Style!</a></span><ul class="toc-item"><li><span><a href="#Basic" data-toc-modified-id="Basic-12.1"><span class="toc-item-num">12.1&nbsp;&nbsp;</span>Basic</a></span></li><li><span><a href="#Gradient" data-toc-modified-id="Gradient-12.2"><span class="toc-item-num">12.2&nbsp;&nbsp;</span>Gradient</a></span></li><li><span><a href="#Custom" data-toc-modified-id="Custom-12.3"><span class="toc-item-num">12.3&nbsp;&nbsp;</span>Custom</a></span></li><li><span><a href="#Bars" data-toc-modified-id="Bars-12.4"><span class="toc-item-num">12.4&nbsp;&nbsp;</span>Bars</a></span></li></ul></li><li><span><a href="#You-don't-have-to-memorize-this" data-toc-modified-id="You-don't-have-to-memorize-this-13"><span class="toc-item-num">13&nbsp;&nbsp;</span>You don't have to memorize this</a></span></li><li><span><a href="#Resources" data-toc-modified-id="Resources-14"><span class="toc-item-num">14&nbsp;&nbsp;</span>Resources</a></span></li></ul></div> # Stop Reinventing Pandas The following post was presented as a talk for the [IE@DS](https://www.facebook.com/groups/173376299978861/) community, and for the [PyData meetup](https://www.meetup.com/PyData-Tel-Aviv/events/256232456/). All the resources for this post, including a runable notebook, can be found in the [github repo](https://github.com/DeanLa/dont_reinvent_pandas) blog post version here: <span style="font-size:2em"> [DeanLa.com](http://deanla.com/)</span> ![slide1](slides/slide1.jpg) This notebook aims to show some nice ways modern Pandas makes your life easier. It is not about efficiency. I'm Pandas' built-in methods will be more efficient than reinventing pandas, but the main goal is to make the code easier to read, and more imoprtant - easier to write. ![slide3](slides/slide3.jpg) ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline plt.style.use(['classic', 'ggplot', 'seaborn-poster', 'dean.style']) %load_ext autoreload %autoreload 2 import my_utils import warnings warnings.simplefilter("ignore") ``` # First Hacks! Reading the data and a few housekeeping tasks. is the first place we can make our code more readable. ``` df_io = pd.read_csv('./bear_data.csv', index_col=0, parse_dates=['date_']) df_io.head() df = df_io.copy().sort_values('date_').set_index('date_').drop(columns='val_updated') df.head() ``` ## Beautiful pipes! One line method chaining is hard to read and prone to human error, chaining each method in its own line makes it a lot more readable. 
``` df_io\ .copy()\ .sort_values('date_')\ .set_index('date_')\ .drop(columns='val_updated')\ .head() ``` But it has a problem. You can't comment out and even comment in between ``` # This block will result in an error df_io\ .copy()\ # This is an inline comment # This is a regular comment .sort_values('date_')\ # .set_index('date_')\ .drop(columns='val_updated')\ .head() ``` Even an unnoticeable space character may break everything ``` # This block will result in an error df_io\ .copy()\ .sort_values('date_')\ .set_index('date_')\ .drop(columns='val_updated')\ .head() ``` ## The Penny Drops I like those "penny dropping" moments, when you realize you knew everything that is presented, yet it is presented in a new way you never thought of. ``` # We can split these value inside () users = (134856, 195373, 295817, 294003, 262166, 121066, 129678, 307120, 258759, 277922, 220794, 192312, 318486, 314631, 306448, 297059,206892, 169046, 181703, 146200, 199876, 247904, 250884, 282989, 234280, 202520, 138064, 133577, 301053, 242157) # Penny Drop: We can also Split here df = (df_io .copy() # This is an inline comment # This is a regular comment .sort_values('date_') .set_index('date_') .drop(columns='val_updated') ) df.head() ``` ## Map with dict A dict is a callable with $f(key) = value$, there for you can call `.map` with it. In this example I want to make int key codes into letter. ``` df.bear_type.map(lambda x: x+3).head() # A dict is also a callable bears = { 1: 'Grizzly', 2: 'Sun', 3: 'Pizzly', 4: 'Sloth', 5: 'Polar', 6: 'Cave', 7: 'Black', 8: 'Panda' } df['bear_type'] = df.bear_type.map(bears) df.head() ``` # Time Series ## Resample Task: How many events happen each hour? ### The Old Way ``` bad = df.copy() bad['day'] = bad.index.date bad['hour'] = bad.index.hour (bad .groupby(['day','hour']) .count() ) ``` * Many lines of code * unneeded columns * Index is not a time anymore * **missing rows** (Did you notice?) ### A Better Way ``` df.resample('H').count() # H is for Hour ``` But it's even better on non-round intervals ``` rs = df.resample('10T').count() # T is for Minute, and pandas understands 10 T, it will also under stand 11T if you wonder rs.head() ``` [Complete list of Pandas' time abbrevations](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Period.strftime.html) ## Slice Easily Pandas will automatically make string into timestamps, and it will understand what you want it to do. ``` # Take only timestamp in the hour of 21:00. rs.loc['2018-10-09 21',:] # Take all time stamps before 18:31 rs.loc[:'2018-10-09 18:31',:] ``` ## Time Windows: Rolling, Expanding, EWM If your Dataframe is indexed on a time index (Which we have) ``` fig, ax = plt.subplots() rs.rename(columns = {'bear_type':'bears'}).plot(ax=ax,linestyle='--') (rs .rolling('90T') .mean() .rename(columns = {'bear_type':'rolling mean'}) .plot(ax=ax) ) rs.expanding().mean().rename(columns = {'bear_type':'expanding mean'}).plot(ax=ax) rs.ewm(6).mean().rename(columns = {'bear_type':'ewm mean'}).plot(ax=ax) plt.show() ``` ### With Apply Intuitively, windows are like GroupBy, so you can apply anything you want after the grouping, e.g.: geometric mean. ``` fig, ax = plt.subplots() rs.plot(ax=ax,linestyle='--') (rs .rolling(6) .apply(lambda x: np.power(np.product(x),1/len(x)),raw=True) .rename(columns = {'bear_type':'Rolling Geometric Mean'}) .plot(ax=ax) ) plt.show() ``` ## Combine with GroupBy 🤯 Pandas has no problem with groupby and resample together. It's as simple as `groupby[col1,col2]`. 
In our specific case, we want to cound events in an interval per event type. ``` per_bear = (df .groupby('bear_type') .resample('15T') .apply('count') .rename(columns={'bear_type':'amount'}) ) per_bear.groupby('bear_type').head(2) ``` # Sorting ## By Values ``` per_bear.sort_values(by=['amount'], ascending=False).head(10) ``` ## By Index ``` per_bear.sort_index().head(7) per_bear.sort_index(level=1).head(7) ``` ## By Both <span style="color:red">(New in 0.23)</span> `Index` has a name. Modern Pandas knows to address this index by name just like a regular column. ``` per_bear.sort_values(['amount','bear_type'], ascending=(False, True)).head(10) ``` # Stack, Unstack ## Unstack In this case, working with a wide format indexed on intervals, with event types as columns, will make a lot more sense. ### The Old way Pivot table in modern pandas is more robust than it used to be. Still, it requires you to specify everything. ``` pt = pd.pivot_table(per_bear,values = 'amount',columns='bear_type',index='date_') pt.head() ``` ### A better way When you have just one column of values, unstack does the same easily ``` pt = per_bear.unstack('bear_type') pt.columns = pt.columns.droplevel() # Unstack creates a multiindex on columns pt.head() ``` ## Unstack And some extra tricks ``` pt.stack().head() ``` This looks kind of what we had expected but: * It's a series, not a DataFrame * The levels of the index are "reversed" to before * The main sort is on the date, yet it used to be on the event type ### Some More Hacks ``` stack_back = (pt .stack() .to_frame('amount') # Turn Series to DF without calling the DF constructor .swaplevel() # Swaps the levels of the index .sort_index() ) stack_back.head() stack_back.equals(per_bear) ``` # GroupBy ```sql select min(B), avg(B), geometric_mean(B), min(C), max(C) from pt group by A ``` ``` pt ``` ## Old Ways ``` pt.groupby('Grizzly')['Polar'].agg(['min','mean']).head() ``` ### List Aggregates ``` pt.groupby('Grizzly')[['Polar','Black']].agg(['min','mean',lambda x: x.prod()/len(x),'max']).head() ``` * Not what we wanted * MultiIndex * Names are not unique * How do you access `<lambda_0>` ### Dict aggregate ``` pt.groupby('Grizzly').agg({'Polar':['min','mean',lambda x: x.prod()/len(x)],'Black':['min','max']}) ``` ### With Rename ``` pt.groupby('Grizzly').Polar.agg({'min_Polar':'min'}) warnings.simplefilter("ignore") pt.groupby('Grizzly').agg({ 'Polar':{'min_Polar':'min','avg_Polar':'mean','geo_Polar':lambda x: x.prod()/len(x)}, 'Black':{'min_Black':'min','max_Black':'max'} }) warnings.simplefilter("default") ``` Still a MultiIndex ## Named Aggregations <span style="color:red">(New in 0.25)</span> This is also the way to go from `1.0.0` as others will be depracated ``` def geo(x): return x.prod()/len(x) pt.groupby('Grizzly').agg( min_Polar = pd.NamedAgg(column='Polar', aggfunc='min'), avg_Polar = pd.NamedAgg(column='Polar', aggfunc='mean'), geo_Polar = pd.NamedAgg('Polar', geo), # But actually NamedAgg is optional min_Black = ('Black','min'), max_Black = ('Black','max') ) ``` # Clip Let's say, we know from domain knowledge the that an bear walks around a minimum of 3 and maximum of 12 times at each timestamp. We would like to fix that. In a real world example, we many time want to turn negative numbers to zeroes or some truly big numbers to sum known max. ## The Old Way Iterate over columns and change values that meet condition. 
``` cl = pt.copy() lb = 3 ub = 12 # Needs a loop of 3 lines for col in ['Grizzly','Polar','Black']: cl['clipped_{}'.format(col)] = cl[col] cl.loc[cl[col] < lb,'clipped_{}'.format(col)] = lb cl.loc[cl[col] > ub,'clipped_{}'.format(col)] = ub my_utils.plot_clipped(cl) # my_utils can be found in the github repo ``` ## A better way `.clip(lb,ub)` ``` cl = pt.copy() cl['Grizzly'] = cl.Grizzly.clip(3,12) cl = pt.copy() # Beautiful one-liner cl[['clipped_Grizzly','clipped_Polar','clipped_Black']] = cl.clip(3,12) my_utils.plot_clipped(cl) # my_utils can be found in the github repo ``` # Reindex Now we have 3 types of bears from 17:00 to 23:00. But we were at the park from 16:00 to 00:00. We've also been told that this park has Panda bears and Cave bears. In the old way we would handle the new columns with assignments in a loop, and for the new rows we would maybe create a helper column and do some join. A lot of work. ``` etypes = ['Grizzly','Polar','Black','Panda','Cave'] # New columns # Define a date range - Pandas will automatically make this into an index idx = pd.date_range(start='2018-10-09 16:00:00',end='2018-10-09 23:59:00',freq=pt.index.freq,tz='UTC') type(idx) pt.reindex(index=idx, columns=etypes, fill_value=0).head(8) ``` ### Let's put this in a function - This will help us later. ``` def get_all_types_and_timestamps(df, min_date='2018-10-09 16:00:00', max_date='2018-10-09 23:59:00', etypes=['Grizzly','Polar','Black','Panda','Cave']): ret = df.copy() time_idx = pd.date_range(start=min_date,end=max_date,freq='15T',tz='UTC') # Indices work like sets. This is a good practice so we don't override our intended index idx = ret.index.union(time_idx) etypes = df.columns.union(set(etypes)) ret = ret.reindex(idx, columns=etypes, fill_value=0) return ret ``` # Method Chaining ## Assign Assign is for creating new columns on the dataframe. This is instead of `df[new_col] = function(df[old_col])`. They are both one-liners, but `.assign` doesn't break the flow. ``` pt.assign(mean_all = pt.mean(axis=1)).head() ``` ### With a callable This is good when we have a filtering phase before. ``` pt.assign(mean_all = lambda x: x.mean(axis=1)).head() ``` ## Pipe Think R's `%>%`: `.pipe` is a method that accepts a function. `pipe`, by default, assumes the first argument of this function is a dataframe and passes the current dataframe down the pipeline. The function should also return a dataframe if you want to continue with the chaining. Yet, it can also return any other value if you put it in the last step. This is incredibly valuable because it takes you one step away from "sql", where you do things "in reverse". $f(g(h($ `df` $)))$ = `df.pipe(h).pipe(g).pipe(f)` ``` def add_to_col(df, col='Grizzly', n = 200): ret = df.copy() # A dataframe is mutable; if you don't copy it first, this is prone to many errors. # I always copy when I enter a function, even if I'm sure it shouldn't change anything. ret[col] = ret[col] + n return ret add_to_col(add_to_col(add_to_col(pt), 'Polar', 100), 'Black',500).head() (pt .pipe(add_to_col) .pipe(add_to_col, col='Polar', n=100) .pipe(add_to_col, col='Black', n=500) .head(5)) ``` You can always do this with multiple lines of `df = do_something(df)` but I think this method is more elegant. # Beautiful Code Tells a Story Your code is not just about making the computer do things. It's about telling a story of what you wish to happen. Sometimes other people will want to read your code. Most times, it is you 3 months in the future who will want to read it. Some say good code documents itself.
I'm not that extreme, yet storytelling with code may save you from many lines of unnecessary comments. The next and final block tells the story in one block. It's elegant, it tells a story. If you build utility functions and `pipe` them while following meaningful naming, they help tell a story. if you `assign` columns with meaningful names, they tell a story. you `drop`, you `apply`, you `read`, you `groupby` and you `resample` - they all tell a story. (Well... Maybe they could have gone with better naming for `resample`) ``` df = (pd .read_csv ('./bear_data.csv', index_col=0, parse_dates=['date_']) .assign (bear_type=lambda df: df.bear_type.map(bears)) .sort_values ('date_') .set_index ('date_') .drop (columns='val_updated') .groupby ('bear_type') .resample ('15T') .apply ('count') .rename (columns={'bear_type': 'amount'}) .unstack ('bear_type') .pipe (my_utils.remove_multi_index) .pipe (get_all_types_and_timestamps) # Remember this from before? .assign (mean_bears=lambda x: x.mean(axis=1)) .loc [:, ['mean_bears']] .pipe (my_utils.make_sliding_time_windows, steps_back=6) .dropna () ) df.head() ``` # Bonus! Cool methods I've found but did not fit in the talk's flow. <span style="font-size:2em"> [No Time?](#You-don't-have-to-memorize-this)</span> ``` src = df.copy().loc[:,['mean_bears']] ``` ## Percent Change ``` src.assign(pct = src.pct_change()).head(11) ``` ## Interval Index Helps creating a "common language" when talking about time series aggregations. ``` src = df.copy() ir = pd.interval_range(start=df.index.min(), end=df.index.max() + df.index.freq, freq=df.index.freq) type(ir) ir try: df.loc['2018-10-09 18:37',:] # Datetime Index except Exception as e: print (type(e), e) # Will result error src.index = ir # Interval Index src.loc['2018-10-09 18:37',:] src.loc['2018-10-09 18:37':'2018-10-09 19:03',:] ``` ## Split Strings The entire concept of strings is different in `1.0.0` ``` txt = pd.DataFrame({'text':['hello','dean langsam','diving into pandas is better than reinventing it']}) txt txt.text.str.split() txt.text.str.split(expand = True) # Expand to make it a dataframe ``` ## Toy Examples with Pandas Testing ``` import pandas.util.testing as tm tm.N, tm.K = 15, 10 st = pd.util.testing.makeTimeDataFrame() * 100 st ``` # Research with Style! ![so fetch](https://media.giphy.com/media/G6ojXggFcXWCs/giphy.gif) ``` stnan = st.copy() stnan[np.random.rand(*stnan.shape) < 0.05] = np.nan # Put some nans in it ``` ## Basic ``` (stnan .style .highlight_null('red') .highlight_max(color='steelblue', axis = 0) # Max each row .highlight_min(color ='gold', axis = 1) # Min each columns ) ``` ## Gradient ``` st.clip(0,100).style.background_gradient( cmap='Purples') ``` ## Custom ``` def custom_style(val): if val < -100: return 'background-color:red' elif val > 100: return 'background-color:green' elif abs(val) <20: return 'background-color:yellow' else: return '' st.style.applymap(custom_style) ``` ## Bars ``` (st.style .bar(subset=['A','D'],color='steelblue') .bar(subset=['J'],color=['indianred','limegreen'], align='mid') ) ``` # You don't have to memorize this Just put this in the back of your mind and remember that modern Pandas has so many superpowers. Just remember they exist, and google them when you actually need them. 
Whenever I feel insecure about Pandas, I go back to [Greg Reda](https://twitter.com/gjreda)'s [tweet](https://twitter.com/gjreda/status/1049694953687924737): ![greg](./slides/tweet.jpg) # Resources * [Modern Pandas](https://tomaugspurger.github.io/modern-1-intro.html) by Tom Augspurger * [Basic Time Series Manipulation with Pandas](https://towardsdatascience.com/basic-time-series-manipulation-with-pandas-4432afee64ea) by Laura Fedoruk * [Pandas Docs](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.clip.html). You don't have to thoroughly go over everything, just randomly open a page in the docs and you're sure to learn a new thing.
# Time series analysis (Pandas) Nikolay Koldunov [email protected] ================ Here I am going to show just some basic [pandas](http://pandas.pydata.org/) stuff for time series analysis, as I think for the Earth Scientists it's the most interesting topic. If you find this small tutorial useful, I encourage you to watch [this video](http://pyvideo.org/video/1198/time-series-data-analysis-with-pandas), where Wes McKinney gives an extensive introduction to time series data analysis with pandas. On the official website you can find an explanation of what problems pandas solves in general, but I can tell you what problem pandas solves for me. It makes analysis and visualisation of 1D data, especially time series, MUCH faster. Before pandas, working with time series in python was a pain for me; now it's fun. Ease of use stimulates in-depth exploration of the data: why wouldn't you make some additional analysis if it's just one line of code? I hope you will also find this great tool helpful and useful. So, let's begin. As an example we are going to use time series of [Arctic Oscillation (AO)](http://en.wikipedia.org/wiki/Arctic_oscillation) and [North Atlantic Oscillation (NAO)](http://en.wikipedia.org/wiki/North_Atlantic_oscillation) data sets. ## Module import First we have to import necessary modules: ``` import pandas as pd import numpy as np %matplotlib inline pd.set_option('max_rows',15) # this limits the maximum number of rows np.set_printoptions(precision=3 , suppress= True) # this is just to make the output look better pd.__version__ ``` ## Loading data Now, when we are done with preparations, let's get some data. Pandas has very good IO capabilities, but we are not going to use them in this tutorial in order to keep things simple. For now we open the file simply with numpy loadtxt: ``` temp = np.loadtxt('../Week03/Ham_3column.txt') ``` Every line in the file consists of three elements: year, month, value: ``` temp[-1] ``` And here is the shape of our array (note that the shape of the file might differ in your case, since the data is updated monthly): ``` temp.shape ``` ## Time Series We would like to convert this data into a time series that can be manipulated naturally and easily. The first step is to create the range of dates for our time series. From the file it is clear that the record starts in January 1891 and ends in August 2014 (at the time I am writing this, of course). Frequency of the data is one day (freq='D'). ``` dates = pd.date_range('1891-01-01', '2014-08-31', freq='D') ``` As you see, the syntax is quite simple, and this is one of the reasons why I love Pandas so much :) You can check if the range of dates is properly generated: ``` dates ``` Now we are ready to create our first time series. Dates from the *dates* variable will be our index, and `temp` values will be our, hm... values: ``` ham = pd.Series(temp[:,3]/10., index=dates) ham ``` Now we can plot the complete time series: ``` ham.plot() ``` or part of it: ``` ham['1980':'1990'].plot() ``` or an even smaller part: ``` ham['1980-05-02':'1981-03-17'].plot() ``` Referencing time periods is done in a very natural way. You, of course, can also get individual values. By number: ``` ham[120] ``` or by index (date in our case): ``` ham['1960-01'] ``` And what if we choose only one year? ``` ham['1960'] ``` Isn't that great? :) ## Exercise What was the temperature in Hamburg on your birthday?
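A minimal sketch of one way to answer this exercise, using the `ham` series defined above; the date below is only a placeholder assumption, replace it with your own birthday:

```
# placeholder date; swap '1990-07-28' for your own birthday (within 1891-2014)
my_birthday = '1990-07-28'
ham[my_birthday]   # temperature on that single day
ham['1990-07']     # or the whole month, if you prefer
```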
## One bonus example :) ``` ham[ham > 0]['1990':'2000'].plot(style='r*') ham[ham < 0]['1990':'2000'].plot(style='b*') ``` ## Exercise - plot all positive temperatures (red stars) and negative temperatures (blue stars) - limit this plot to the 1990-2000 period ## Data Frame Now let's make life a bit more interesting and get more data. This will be TMIN time series. We use the pandas function `read_csv` to parse dates and create a Data Frame ``` hamm = pd.read_csv('Ham_tmin.txt', parse_dates=True, index_col=0, names=['Time','tmin']) hamm type(hamm) ``` The time period is the same: ``` hamm.index ``` Now we create a Data Frame that will contain both TMAX and TMIN data. It is sort of an Excel table, where the first row contains the headers for the columns and the first column is an index: ``` tmp = pd.DataFrame({'TMAX':ham, 'TMIN':hamm.tmin/10}) tmp ``` One can plot the data straight away: ``` tmp.plot() ``` Or have a look at the first several rows: ``` tmp.head() ``` We can reference each column by its name: ``` tmp['TMIN'] ``` or as an attribute of the Data Frame variable (if the column name is a valid python name): ``` tmp.TMIN ``` We can simply add a column to the Data Frame: ``` tmp['Diff'] = tmp['TMAX'] - tmp['TMIN'] tmp.head() ``` ## Exercise Find and plot all differences that are larger than 20 ``` tmp['Diff'][tmp['Diff']>20].plot(style='r*') ``` And delete it: ``` del tmp['Diff'] tmp.tail() ``` Slicing will also work: ``` tmp['1981-03'].plot() ``` ## Statistics Back to simple stuff. We can obtain statistical information over elements of the Data Frame. Default is column-wise: ``` tmp.mean() tmp.max() tmp.min() ``` You can also do it row-wise: ``` tmp.mean(1) ``` Or get everything at once: ``` tmp.describe() ``` By the way, getting correlation coefficients for members of the Data Frame is as simple as: ``` tmp.corr() ``` ## Exercise Find the mean of all temperatures larger than 5 ``` tmp[tmp>5].mean() ``` ## Resampling Pandas provides an easy way to resample data to a different time frequency. The two main parameters for resampling are the time period you resample to and the method that you use. By default the method is mean. The following example calculates monthly ('M') means: ``` tmp_mm = tmp.resample("M") tmp_mm['2000':].plot() ``` median: ``` tmp_mm = tmp.resample("M", how='median') tmp_mm['2000':].plot() ``` You can use your own functions for resampling, for example np.max (in this case we change the resampling frequency to 3 months): ``` tmp_mm = tmp.resample("3M", how=np.max) tmp_mm['2000':].plot() ``` You can specify several functions at once as a list: ``` tmp_mm = tmp.resample("M", how=['mean', np.min, np.max]) tmp_mm['1900':'2020'].plot(subplots=True, figsize=(10,10)) tmp_mm['2000':].plot(figsize=(10,10)) ``` ## Exercise Define a function that will find the difference between the maximum and minimum values of the time series, and resample our `tmp` variable with this function. ``` def satardays(x): xmin = x.min() xmax = x.max() diff = xmax - xmin return diff tmp_mm = tmp.resample("A", how=satardays) tmp_mm['2000':].plot() tmp_mm ``` That's it. I hope you at least get a rough impression of what pandas can do for you. Comments are very welcome (below). If you have interesting examples of pandas usage in Earth Science, we would be happy to put them on [EarthPy](http://earthpy.org). ## Links [Time Series Data Analysis with pandas (Video)](http://www.youtube.com/watch?v=0unf-C-pBYE) [Data analysis in Python with pandas (Video)](http://www.youtube.com/watch?v=w26x-z-BdWQ) [Python for Data Analysis](http://shop.oreilly.com/product/0636920023784.do)
## 1. Google Play Store apps and reviews <p>Mobile apps are everywhere. They are easy to create and can be lucrative. Because of these two factors, more and more apps are being developed. In this notebook, we will do a comprehensive analysis of the Android app market by comparing over ten thousand apps in Google Play across different categories. We'll look for insights in the data to devise strategies to drive growth and retention.</p> <p><img src="https://assets.datacamp.com/production/project_619/img/google_play_store.png" alt="Google Play logo"></p> <p>Let's take a look at the data, which consists of two files:</p> <ul> <li><code>apps.csv</code>: contains all the details of the applications on Google Play. There are 13 features that describe a given app.</li> <li><code>user_reviews.csv</code>: contains 100 reviews for each app, <a href="https://www.androidpolice.com/2019/01/21/google-play-stores-redesigned-ratings-and-reviews-section-lets-you-easily-filter-by-star-rating/">most helpful first</a>. The text in each review has been pre-processed and attributed with three new features: Sentiment (Positive, Negative or Neutral), Sentiment Polarity and Sentiment Subjectivity.</li> </ul> ``` # Read in dataset import pandas as pd apps_with_duplicates = pd.read_csv('datasets/apps.csv') # Drop duplicates apps = apps_with_duplicates.drop_duplicates() # Print the total number of apps print('Total number of apps in the dataset = ', len(apps['App'])) # Have a look at a random sample of 5 rows n = 5 apps.sample(n) ``` ## 2. Data cleaning <p>The four features that we will be working with most frequently henceforth are <code>Installs</code>, <code>Size</code>, <code>Rating</code> and <code>Price</code>. The <code>info()</code> function (from the previous task) told us that <code>Installs</code> and <code>Price</code> columns are of type <code>object</code> and not <code>int64</code> or <code>float64</code> as we would expect. This is because the column contains some characters more than just [0,9] digits. Ideally, we would want these columns to be numeric as their name suggests. <br> Hence, we now proceed to data cleaning and prepare our data to be consumed in our analyis later. Specifically, the presence of special characters (<code>, $ +</code>) in the <code>Installs</code> and <code>Price</code> columns make their conversion to a numerical data type difficult.</p> ``` # List of characters to remove chars_to_remove = ['+' , ',' , '$'] # List of column names to clean cols_to_clean = ['Installs' , 'Price'] # Loop for each column for col in cols_to_clean: # Replace each character with an empty string for char in chars_to_remove: apps[col] = apps[col].str.replace(char, '') # Convert col to numeric apps[col] = pd.to_numeric(apps[col]) ``` ## 3. Exploring app categories <p>With more than 1 billion active users in 190 countries around the world, Google Play continues to be an important distribution platform to build a global audience. For businesses to get their apps in front of users, it's important to make them more quickly and easily discoverable on Google Play. To improve the overall search experience, Google has introduced the concept of grouping apps into categories.</p> <p>This brings us to the following questions:</p> <ul> <li>Which category has the highest share of (active) apps in the market? 
</li> <li>Is any specific category dominating the market?</li> <li>Which categories have the fewest number of apps?</li> </ul> <p>We will see that there are <code>33</code> unique app categories present in our dataset. <em>Family</em> and <em>Game</em> apps have the highest market prevalence. Interestingly, <em>Tools</em>, <em>Business</em> and <em>Medical</em> apps are also at the top.</p> ``` import plotly plotly.offline.init_notebook_mode(connected=True) import plotly.graph_objs as go # Print the total number of unique categories num_categories = len(apps["Category"].unique()) print('Number of categories = ', num_categories) # Count the number of apps in each 'Category' and sort them in descending order num_apps_in_category = apps["Category"].value_counts().sort_values(ascending = False) data = [go.Bar( x = num_apps_in_category.index, # index = category name y = num_apps_in_category.values, # value = count )] plotly.offline.iplot(data) ``` ## 4. Distribution of app ratings <p>After having witnessed the market share for each category of apps, let's see how all these apps perform on an average. App ratings (on a scale of 1 to 5) impact the discoverability, conversion of apps as well as the company's overall brand image. Ratings are a key performance indicator of an app.</p> <p>From our research, we found that the average volume of ratings across all app categories is <code>4.17</code>. The histogram plot is skewed to the right indicating that the majority of the apps are highly rated with only a few exceptions in the low-rated apps.</p> ``` # Average rating of apps avg_app_rating = apps['Rating'].mean() print('Average app rating = ', avg_app_rating) # Distribution of apps according to their ratings data = [go.Histogram( x = apps['Rating'] )] # Vertical dashed line to indicate the average app rating layout = {'shapes': [{ 'type' :'line', 'x0': avg_app_rating, 'y0': 0, 'x1': avg_app_rating, 'y1': 1000, 'line': { 'dash': 'dashdot'} }] } plotly.offline.iplot({'data': data, 'layout': layout}) ``` ## 5. Size and price of an app <p>Let's now examine app size and app price. For size, if the mobile app is too large, it may be difficult and/or expensive for users to download. Lengthy download times could turn users off before they even experience your mobile app. Plus, each user's device has a finite amount of disk space. For price, some users expect their apps to be free or inexpensive. These problems compound if the developing world is part of your target market; especially due to internet speeds, earning power and exchange rates.</p> <p>How can we effectively come up with strategies to size and price our app?</p> <ul> <li>Does the size of an app affect its rating? </li> <li>Do users really care about system-heavy apps or do they prefer light-weighted apps? </li> <li>Does the price of an app affect its rating? </li> <li>Do users always prefer free apps over paid apps?</li> </ul> <p>We find that the majority of top rated apps (rating over 4) range from 2 MB to 20 MB. We also find that the vast majority of apps price themselves under \$10.</p> ``` %matplotlib inline import seaborn as sns sns.set_style("darkgrid") apps_with_size_and_rating_present = apps[(apps['Rating'].notnull()) & (apps["Size"].notnull())] # Subset for categories with at least 250 apps large_categories = apps_with_size_and_rating_present.groupby('Category').filter(lambda x: len(x) >= 250).reset_index() # Plot size vs. 
rating plt1 = sns.jointplot(x = large_categories['Size'] , y = large_categories['Rating'] , kind = 'hex') # Subset out apps whose type is 'Paid' paid_apps = apps_with_size_and_rating_present[apps_with_size_and_rating_present['Type'] == 'Paid'] # Plot price vs. rating plt2 = sns.jointplot(x = paid_apps['Price'] , y = paid_apps['Rating'] ) ``` ## 6. Relation between app category and app price <p>So now comes the hard part. How are companies and developers supposed to make ends meet? What monetization strategies can companies use to maximize profit? The costs of apps are largely based on features, complexity, and platform.</p> <p>There are many factors to consider when selecting the right pricing strategy for your mobile app. It is important to consider the willingness of your customer to pay for your app. A wrong price could break the deal before the download even happens. Potential customers could be turned off by what they perceive to be a shocking cost, or they might delete an app they’ve downloaded after receiving too many ads or simply not getting their money's worth.</p> <p>Different categories demand different price ranges. Some apps that are simple and used daily, like the calculator app, should probably be kept free. However, it would make sense to charge for a highly-specialized medical app that diagnoses diabetic patients. Below, we see that <em>Medical and Family</em> apps are the most expensive. Some medical apps extend even up to \$80! All game apps are reasonably priced below \$20.</p> ``` import matplotlib.pyplot as plt fig, ax = plt.subplots() fig.set_size_inches(15, 8) # Select a few popular app categories popular_app_cats = apps[apps.Category.isin(['GAME', 'FAMILY', 'PHOTOGRAPHY', 'MEDICAL', 'TOOLS', 'FINANCE', 'LIFESTYLE','BUSINESS'])] # Examine the price trend by plotting Price vs Category ax = sns.stripplot(x = popular_app_cats['Price'], y = popular_app_cats['Category'], jitter=True, linewidth=1) ax.set_title('App pricing trend across categories') # Apps whose Price is greater than 200 apps_above_200 = popular_app_cats[['Category', 'App', 'Price']][popular_app_cats['Price'] > 200] apps_above_200 ``` ## 7. Filter out "junk" apps <p>It looks like a bunch of the really expensive apps are "junk" apps. That is, apps that don't really have a purpose. Some app developer may create an app called <em>I Am Rich Premium</em> or <em>most expensive app (H)</em> just for a joke or to test their app development skills. Some developers even do this with malicious intent and try to make money by hoping people accidentally click purchase on their app in the store.</p> <p>Let's filter out these junk apps and re-do our visualization. The distribution of apps under \$20 becomes clearer.</p> ``` # Select apps priced below $100 apps_under_100 = popular_app_cats[popular_app_cats['Price'] < 100] fig, ax = plt.subplots() fig.set_size_inches(15, 8) # Examine price vs category with the authentic apps ax = sns.stripplot(x=apps_under_100['Price'], y=apps_under_100['Category'], data=apps_under_100, jitter=True, linewidth=1) ax.set_title('App pricing trend across categories after filtering for junk apps') ``` ## 8. Popularity of paid apps vs free apps <p>For apps in the Play Store today, there are five types of pricing strategies: free, freemium, paid, paymium, and subscription. Let's focus on free and paid apps only. 
Some characteristics of free apps are:</p> <ul> <li>Free to download.</li> <li>Main source of income often comes from advertisements.</li> <li>Often created by companies that have other products and the app serves as an extension of those products.</li> <li>Can serve as a tool for customer retention, communication, and customer service.</li> </ul> <p>Some characteristics of paid apps are:</p> <ul> <li>Users are asked to pay once for the app to download and use it.</li> <li>The user can't really get a feel for the app before buying it.</li> </ul> <p>Are paid apps installed as much as free apps? It turns out that paid apps have a relatively lower number of installs than free apps, though the difference is not as stark as I would have expected!</p> ``` trace0 = go.Box( # Data for paid apps y=apps[apps['Type'] == 'Paid']['Installs'], name = 'Paid' ) trace1 = go.Box( # Data for free apps y=apps[apps['Type'] == 'Free']['Installs'], name = 'Free' ) layout = go.Layout( title = "Number of downloads of paid apps vs. free apps", yaxis = dict( type = 'log', autorange = True ) ) # Add trace0 and trace1 to a list for plotting data = [trace0 , trace1] plotly.offline.iplot({'data': data, 'layout': layout}) ``` ## 9. Sentiment analysis of user reviews <p>Mining user review data to determine how people feel about your product, brand, or service can be done using a technique called sentiment analysis. User reviews for apps can be analyzed to identify if the mood is positive, negative or neutral about that app. For example, positive words in an app review might include words such as 'amazing', 'friendly', 'good', 'great', and 'love'. Negative words might be words like 'malware', 'hate', 'problem', 'refund', and 'incompetent'.</p> <p>By plotting sentiment polarity scores of user reviews for paid and free apps, we observe that free apps receive a lot of harsh comments, as indicated by the outliers on the negative y-axis. Reviews for paid apps appear never to be extremely negative. This may indicate something about app quality, i.e., paid apps being of higher quality than free apps on average. The median polarity score for paid apps is a little higher than free apps, thereby syncing with our previous observation.</p> <p>In this notebook, we analyzed over ten thousand apps from the Google Play Store. We can use our findings to inform our decisions should we ever wish to create an app ourselves.</p> ``` # Load user_reviews.csv reviews_df = pd.read_csv('datasets/user_reviews.csv') # Join and merge the two dataframe merged_df = pd.merge(apps, reviews_df, on = 'App', how = "inner") # Drop NA values from Sentiment and Translated_Review columns merged_df = merged_df.dropna(subset=['Sentiment', 'Translated_Review']) sns.set_style('ticks') fig, ax = plt.subplots() fig.set_size_inches(11, 8) # User review sentiment polarity for paid vs. free apps ax = sns.boxplot(x = merged_df['Type'], y = merged_df['Sentiment_Polarity'], data = merged_df) ax.set_title('Sentiment Polarity Distribution') ```
<a href="https://colab.research.google.com/github/ksetdekov/HSE_DS/blob/master/07%20NLP/kaggle%20hw/solution.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` # !pip3 install kaggle from google.colab import files files.upload() !mkdir ~/.kaggle !cp kaggle.json ~/.kaggle/ !chmod 600 ~/.kaggle/kaggle.json !kaggle competitions download -c toxic-comments-classification-apdl-2021 !ls import pandas as pd import numpy as np from sklearn.metrics import * from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline train = pd.read_csv('train_data.csv.zip', compression='zip') test = pd.read_csv('test_data.csv.zip', compression='zip') train.toxic.describe() train.sample(5) test.sample(5) x_train, x_test, y_train, y_test = train_test_split(train.comment, train.toxic, random_state=0, stratify=train.toxic) y_train.describe() y_test.describe() ``` ## Bag of words ``` from sklearn.linear_model import LogisticRegression from sklearn.feature_extraction.text import CountVectorizer from nltk import ngrams vec = CountVectorizer(ngram_range=(1, 2)) # строим BoW для слов bow = vec.fit_transform(x_train) vec2 = CountVectorizer(ngram_range=(1, 2)) # строим BoW для слов bow2 = vec2.fit_transform(train.comment) list(vec2.vocabulary_.items())[:10] bow.mean() clf = LogisticRegression(random_state=0, max_iter=500, class_weight='balanced') clf.fit(bow, y_train) clf2 = LogisticRegression(random_state=0, max_iter=500, class_weight='balanced') clf2.fit(bow2, train.toxic) pred = clf.predict(vec.transform(x_test)) print(classification_report(pred, y_test)) test bow_test_pred = test.copy() bow_test_pred['toxic'] = clf.predict(vec.transform(test.comment)) bow_test_pred['toxic'] = bow_test_pred['toxic'].astype(int) bow_test_pred.drop('comment', axis=1, inplace=True) bow_test_pred bow_test_pred2 = test.copy() bow_test_pred2['toxic'] = clf2.predict(vec2.transform(test.comment)) bow_test_pred2['toxic'] = bow_test_pred2['toxic'].astype(int) bow_test_pred2.drop('comment', axis=1, inplace=True) bow_test_pred2 bow_test_pred.to_csv('bow_v1.csv', index=False) bow_test_pred2.to_csv('bow_v2.csv', index=False) confusion_matrix(bow_test_pred.toxic, bow_test_pred2.toxic) # !kaggle competitions submit -c toxic-comments-classification-apdl-2021 -f bow_v2.csv -m "kirill_setdekov first bow v2 submission all data" ``` ## TF-IDF ``` from sklearn.feature_extraction.text import TfidfVectorizer vec = TfidfVectorizer(ngram_range=(1, 1)) bow = vec.fit_transform(x_train) clf2 = LogisticRegression(random_state=1, max_iter = 500) clf2.fit(bow, y_train) pred = clf2.predict(vec.transform(x_test)) print(classification_report(pred, y_test)) tf_idf = test.copy() tf_idf['toxic'] = clf2.predict(vec.transform(test.comment)) tf_idf['toxic'] = tf_idf['toxic'].astype(int) tf_idf.drop('comment', axis=1, inplace=True) tf_idf tf_idf.to_csv('tf_idf_v1.csv', index=False) # !kaggle competitions submit -c toxic-comments-classification-apdl-2021 -f tf_idf_v1.csv -m "kirill_setdekov tfidf v1 submission" ``` ## Symbol n-Grams ``` vec = CountVectorizer(analyzer='char', ngram_range=(1, 5)) bowsimb = vec.fit_transform(x_train) from sklearn.preprocessing import MaxAbsScaler scaler = MaxAbsScaler() scaler.fit(bowsimb) bowsimb = scaler.transform(bowsimb) clf3 = LogisticRegression(random_state=0, max_iter=1000) clf3.fit(bowsimb, y_train) pred = clf3.predict(scaler.transform(vec.transform(x_test))) print(classification_report(pred, y_test)) importances = list(zip(vec.vocabulary_, 
clf.coef_[0])) importances[0] sorted_importances = sorted(importances, key = lambda x: -abs(x[1])) sorted_importances[:20] symbol_ngrams = test.copy() symbol_ngrams['toxic'] = clf3.predict(scaler. transform(vec.transform(test.comment))) symbol_ngrams['toxic'] = tf_idf['toxic'].astype(int) symbol_ngrams.drop('comment', axis=1, inplace=True) symbol_ngrams symbol_ngrams.to_csv('symbol_ngrams_v1.csv', index=False) from sklearn.metrics import confusion_matrix confusion_matrix(symbol_ngrams.toxic, tf_idf.toxic) # !kaggle competitions submit -c toxic-comments-classification-apdl-2021 -f symbol_ngrams_v1.csv -m "kirill_setdekov symbol_ngrams_v1 v1 submission" ``` #FastText ``` !pip3 install fasttext import fasttext with open('ft_train_data.txt', 'w') as f: for pair in list(zip(x_train, y_train)): text, label = pair f.write(f'__label__{int(label)} {text.lower()}\n') with open('ft_test_data.txt', 'w') as f: for pair in list(zip(x_test, y_test)): text, label = pair f.write(f'__label__{int(label)} {text.lower()}\n') with open('ft_all.txt', 'w') as f: for pair in list(zip(train.comment, train.toxic)): text, label = pair f.write(f'__label__{int(label)} {text.lower()}\n') classifier = fasttext.train_supervised('ft_train_data.txt')#, 'model') result = classifier.test('ft_test_data.txt') print('P@1:', result[1])#.precision) print('R@1:', result[2])#.recall) print('Number of examples:', result[0])#.nexamples) classifier2 = fasttext.train_supervised('ft_all.txt')#, 'model') k = 0 for item in [i.lower() for i in test.comment]: item = item.replace("\n"," ") k +=1 k prediction = [] for item in [i.lower() for i in test.comment]: item = item.replace("\n"," ") prediction.append(classifier.predict(item)) prediction2 = [] for item in [i.lower() for i in test.comment]: item = item.replace("\n"," ") prediction2.append(classifier2.predict(item)) pred = [int(label[0][0].split('__')[2][0]) for label in prediction] pred2 = [int(label[0][0].split('__')[2][0]) for label in prediction2] fasttext_pred = test.copy() fasttext_pred['toxic'] = pred fasttext_pred.drop('comment', axis=1, inplace=True) fasttext_pred fasttext_pred2 = test.copy() fasttext_pred2['toxic'] = pred2 fasttext_pred2.drop('comment', axis=1, inplace=True) fasttext_pred2 confusion_matrix(symbol_ngrams.toxic, fasttext_pred.toxic) confusion_matrix(fasttext_pred2.toxic, fasttext_pred.toxic) fasttext_pred.to_csv('fasttext_pred_v1.csv', index=False) fasttext_pred2.to_csv('fasttext_pred_v2.csv', index=False) !kaggle competitions submit -c toxic-comments-classification-apdl-2021 -f fasttext_pred_v2.csv -m "kirill_setdekov fasttext_pred v2 submission" ``` ## CNN ``` from torchtext.legacy import data pd.read_csv('train_data.csv.zip', compression='zip') !unzip train_data.csv.zip !unzip test_data.csv.zip # классы Field и LabelField отвечают за то, как данные будут храниться и обрабатываться при считывании TEXT = data.Field(tokenize='spacy') # spacy -- значит, токенизацию будет делать модуль LABEL = data.LabelField() ds = data.TabularDataset( path='train_data.csv', format='csv', skip_header=True, fields=[('comment', TEXT), ('toxic', LABEL)] ) pd.read_csv('test_data.csv') test = data.TabularDataset( path='test_data.csv', format='csv', skip_header=True, fields=[('id', TEXT), ('comment', TEXT)] ) next(ds.comment) next(ds.toxic) TEXT.build_vocab(ds, max_size=25000, vectors="glove.6B.100d") LABEL.build_vocab(ds) TEXT.vocab.itos[:20] len(TEXT.vocab.itos) train, val = ds.split(split_ratio=0.9, stratified=True, strata_field='toxic') # дефолтное соотношение 0.7 print(len(train)) 
print(len(val)) print(len(test)) BATCH_SIZE = 64 train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits( (train, val, test), batch_size=BATCH_SIZE, sort=True, sort_key=lambda x: len(x.comment), # сорируем тексты по длине, чтобы рядом оказывались предложения с одинаковой длиной и добавлялось меньше паддинга repeat=False) for i, batch in enumerate(valid_iterator): print(batch.batch_size) # pass batch.fields batch.batch_size batch.comment batch.toxic len(batch.toxic) import torch.nn as nn class CNN(nn.Module): def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim, dropout_proba): super().__init__() self.embedding = nn.Embedding(vocab_size, embedding_dim) self.conv_0 = nn.Conv2d(in_channels=1, out_channels=n_filters, kernel_size=(filter_sizes[0], embedding_dim)) self.conv_1 = nn.Conv2d(in_channels=1, out_channels=n_filters, kernel_size=(filter_sizes[1], embedding_dim)) self.conv_2 = nn.Conv2d(in_channels=1, out_channels=n_filters, kernel_size=(filter_sizes[2], embedding_dim)) self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim) self.dropout = nn.Dropout(dropout_proba) def forward(self, x): #x = [sent len, batch size] # print(x.shape) x = x.permute(1, 0) #x = [batch size, sent len] embedded = self.embedding(x) #print(embedded.shape) #embedded = [batch size, sent len, emb dim] embedded = embedded.unsqueeze(1) #embedded = [batch size, 1, sent len, emb dim] conv_0 = self.conv_0(embedded) #print(conv_0.shape) conv_0 = conv_0.squeeze(3) #print(conv_0.shape) conved_0 = F.relu(conv_0) conved_1 = F.relu(self.conv_1(embedded).squeeze(3)) conved_2 = F.relu(self.conv_2(embedded).squeeze(3)) #conv_n = [batch size, n_filters, sent len - filter_sizes[n]] # print(conved_0.shape) pool_0 = F.max_pool1d(conved_0, conved_0.shape[2]) # print(pool_0.shape) pooled_0 = pool_0.squeeze(2) # print(pooled_0.shape) pooled_1 = F.max_pool1d(conved_1, conved_1.shape[2]).squeeze(2) pooled_2 = F.max_pool1d(conved_2, conved_2.shape[2]).squeeze(2) #pooled_n = [batch size, n_filters] cat = self.dropout(torch.cat((pooled_0, pooled_1, pooled_2), dim=1)) #cat = [batch size, n_filters * len(filter_sizes)] return self.fc(cat) import torch.nn.functional as F def binary_accuracy(preds, y): rounded_preds = torch.round(F.sigmoid(preds)) correct = (rounded_preds == y).float() acc = correct.sum() / len(correct) return acc def train_func(model, iterator, optimizer, criterion): epoch_loss = 0 epoch_acc = 0 model.train() for batch in iterator: optimizer.zero_grad() predictions = model(batch.comment.cuda()).squeeze(1) loss = criterion(predictions.float(), batch.toxic.float().cuda()) acc = binary_accuracy(predictions.float(), batch.toxic.float().cuda()) loss.backward() optimizer.step() epoch_loss += loss epoch_acc += acc return epoch_loss / len(iterator), epoch_acc / len(iterator) def evaluate_func(model, iterator, criterion): epoch_loss = 0 epoch_acc = 0 model.eval() with torch.no_grad(): for batch in iterator: predictions = model(batch.comment.cuda()).squeeze(1) loss = criterion(predictions.float(), batch.toxic.float().cuda()) acc = binary_accuracy(predictions.float(), batch.toxic.float().cuda()) epoch_loss += loss epoch_acc += acc return epoch_loss / len(iterator), epoch_acc / len(iterator) INPUT_DIM = len(TEXT.vocab) EMBEDDING_DIM = 100 N_FILTERS = 100 FILTER_SIZES = [2,3,4] OUTPUT_DIM = 1 DROPOUT_PROBA = 0.5 model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT_PROBA) INPUT_DIM model pretrained_embeddings = TEXT.vocab.vectors 
model.embedding.weight.data.copy_(pretrained_embeddings) import torch.optim as optim optimizer = optim.Adam(model.parameters()) # we passed all parameters to the optimizer -- so the embeddings will be fine-tuned too criterion = nn.BCEWithLogitsLoss() # binary cross-entropy with logits model = model.cuda() # we will train on the gpu! =) model.embedding from torchsummary import summary # summary(model, (14)) import torch N_EPOCHS = 8 for epoch in range(N_EPOCHS): train_loss, train_acc = train_func(model, train_iterator, optimizer, criterion) valid_loss, valid_acc = evaluate_func(model, valid_iterator, criterion) print(f'Epoch: {epoch+1:02}, Train Loss: {train_loss:.3f}, Train Acc: {train_acc*100:.2f}%, Val. Loss: {valid_loss:.3f}, Val. Acc: {valid_acc*100:.2f}%') test.examples model.eval() cnn_res = [] with torch.no_grad(): for batch in test_iterator: predictions = model(batch.comment.cuda()) cnn_res.append(predictions) testout = pd.read_csv('test_data.csv.zip', compression='zip') cnnpred = testout.copy() cnnpred['toxic'] = [float(item) for sublist in cnn_res for item in sublist] cnnpred.drop('comment', axis=1, inplace=True) cnnpred cnnpred['toxic'] = (cnnpred['toxic'] > 0).astype(int) cnnpred cnnpred.to_csv('cnnpred_v4.csv', index=False) !kaggle competitions submit -c toxic-comments-classification-apdl-2021 -f cnnpred_v4.csv -m "kirill_setdekov cnn v4 with threshold 0" ``` # word2vec > not done, skip this model ``` ! wget https://nlp.stanford.edu/data/glove.6B.zip with open("alice.txt", 'r', encoding='utf-8') as f: text = f.read() text = re.sub('\n', ' ', text) sents = sent_tokenize(text) punct = '!"#$%&()*+,-./:;<=>?@[\]^_`{|}~„“«»†*—/\-‘’' clean_sents = [] for sent in sents: s = [w.lower().strip(punct) for w in sent.split()] clean_sents.append(s) print(clean_sents[:2]) model_path = "movie_reviews.model" print("Saving model...") model_en.save(model_path) model = word2vec.Word2Vec.load(model_path) model.build_vocab(clean_sents, update=True) model.train(clean_sents, total_examples=model.corpus_count, epochs=5) ``` # bow on random forest
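The notebook only shows a screenshot of the results for this section (see the image placeholder below). As a hedged illustration, and not the author's original code, here is a minimal sketch of what a bag-of-words plus random forest baseline could look like, reusing the `x_train`/`x_test`/`y_train`/`y_test` split created earlier in the notebook; the `RandomForestClassifier` hyperparameters are placeholder assumptions:

```
# Sketch only: reuses x_train, x_test, y_train, y_test from the earlier train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import classification_report

vec_rf = CountVectorizer(ngram_range=(1, 1))   # plain bag of words
bow_rf = vec_rf.fit_transform(x_train)

rf = RandomForestClassifier(
    n_estimators=100,          # placeholder hyperparameters
    class_weight='balanced',
    random_state=0,
    n_jobs=-1,
)
rf.fit(bow_rf, y_train)

pred_rf = rf.predict(vec_rf.transform(x_test))
print(classification_report(y_test, pred_rf))
```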
*(embedded base64 image `photo_2021-10-24_13-09-26.jpg` omitted)*
rapnc8wtH2tnkjx3pLzbDDSnHVnRKEjUk/IVlrhC/2di7pubCWWZrASEFYKwRTzSmXXGljrIWUnzB0qBNegTY0xnTnWHUuI14apOtYkxZd8SusOXFbR5kKDYQgJ0CuTJF15H6aYW0sNLDbiF1jTKcXGS/cbK6ht5w6uR17kE1dcN3yzrKbhbnmfrEao+xQ5GJD8Z5t9h1TbqDqlaDoQay9xMvEtgDkogymCWn/AK1XiGYN1nxD+xkOI/A004ppxDifWQoKHmKgSkTIUWUk9V5lDg+8NeTNaAZWC5roG+O427yZZYwgYbmT27itaI0lCOsAV6LRWJJ0W4365zYiVBh+QpaArcdDyW3S/ZYIRxU5bVt/faoAmsJWi/pvdrmxrVLWhqQ2sqDZ00rMPCkvE9riR4imkvMyAvVyoWSDm4zr2PJlqsKYDtOF33JEJ6Qt5xvYWXFfxZrQIPplQFXrEdoskYSLjKSw13d6leQFJzjwoXggtzAj4y3VwxnY4lgXe25SH4/BAQd61/BV3zMxZcn1LRPXEa+i0x1awtmnfYExpu6yVzISiAvb3uI+YNNuodQhaDqlSQoEd4PJmHhC7wLtd7t0XS3Lkgod208XOTLOw2i+3p+PcUuHm2Q60EL2N6TWeAXrYPg0kcmUriLjhC5W1e8IdcR9jyaeaUy860v1kLKT5pOlWmYYF0gywdCy+hf4Gs5YYk4bhTE7+Ykj8HeTJ1DCsVLLiQViIst0KxxC6Diy9MdwklY8nOvyYbwDgxzDUW9Soj8nWLzziS4eKeIATpRzGwXbN1owi2T3KWEI/wDpdYSzRuV9xNCtz8SMxHeCwAjUnUJrFuY1vwxNEJ6DIeeLYWNCAmp+dNykJU3Gs8VtB7nVF2sdYMsMnDDt+tsNDEhLKH/1W5K0HkyQkqE29Ru5bLS6zRgGHjK4EDqPhDw+1PJg7NIR41lsj9uKyFoYMgu0KusBidb5EN8atvtqbV5KFYiwjerBLeakxHCyFdR9KSULFWnD95vD6GYEF10k6ahOiB5qqdkzKeiWpuJLYbW0ysS3V/TWTULI+AjfNvDz3yaQGxVjsMGxW1FuhhZYSVEBxW0eudTUKx2iBoYlujM6cChtIP8AF+ZWNsUWK99CgyW2WFsIcQQ2FLqbizEs/XpV4lr+XOFI/AVldcTMwXASVaqZW40r7FclwulvtrRemy2WG/FxQTUjNPBLB2f0mpw/UZWatePsJ3V1LUS7N84eCHQWj/ro1jfHmMLfiC52xm4Bhll3RHNto1KTUy+3qeT0u5ynvktxRFYkvF1xFJVcnmXujNpS0jcShsAcmFbQ7f7xFs/Syy0+pSlHiOoknhWM8LrwxeFQeeLzZbS424RoSDyZcXHp+D7S8vetDZZPm2SnkzFtol4MvDfFaWg4PNshfJlnO6HjO1knc6Vsn76azqhc7YYEocWJX5ODkySm7Fyu0Inc6whwebdY5g9BxbemNN3SCseTnX5Ln/47lUXOKlW5Dn3mOTLWb0PGdpJO51S2T99NCs5IJYxO1K03SYyfxRu5MppSZuDRFXv5h51o+S+tU+MYk6XGPFp5aP8AKdKwQp1OLbGW/W6Wis67OtaLZdm0bkasPcgzLnjCf6AMJBPMFjpJX9Dy5MmLFJjR593fQUokBLbHzCazCwGcUNMSIjqG5rAIG3wcTSMp8bKcKDb20D4y+3VsyYvQdadlXSMwUEK6gLtI12RqdTyaCtB/AOtbXz93lQSCSdAKdxxhOO9zLl6iBfgHBWdSYswWO6xHUOtLDrRWggjkyRnbcC8wSfUebdH36xvitrDNnVJCQuS4diOg96qul2uF2lrlT5K3nld6v6AULXcVQVTxDeMRKtkvbJ2AfPkylxpImLNjuDhdUhsrjLPEgcUVnFB5jFKJPdJioP2o6vJguMxeMsDBCBqWpDR/nCiRRBBII3isDTBCxbZX+7pIQfJwFFZ3wdUWWcB3usq5MkpnP2e4Qu9iSFfY4OS4solQ5EZe8OtqQfJQ0p9pTD7rK/WbWUHzSdKtUwwblBlg72X0Of5TWYUMXDBd2CN+jKXk/cIVyZYTeh4ztvg9tsn7yazkgljE7MnTdJio/FHJlatM/AvQ3N4Qt9g+S6kMrjyHmV+s2tSD5pOlWyWqFcYUpJ3svoc/ynWmlpW2laTqFAEeRrNnDL91sTU2O0VvQlFWg4ltfJhfG13wwiW3BDSkP6Ehwa6EVIfdkPuvuq2nHFlaz4lR1NZQ4Ufen/p6S0QwyCI/11mrhbotyhvw5jKXWHU7K0GrtkbJ55arVc0Fv4HxTOSOJD/bzoTY8yqrHk7Y7e6h64SFz1j6Gmw1TTaGm0oQkJQkAJSBoAB7p19tKwKXKbTw30uU4rhuqICSpRND3SSANSak4jsUR1LT91iocKgkI5wFWprFuYlpwysRlIMmYRrzCDps/wA5pnPaZz4L1ja5n6j1WG/2+/25udAc2m1biDuUhQ4g1m3iyc7dHLFHdLcZlKS+B9NagFci7Lf2rIJ6or4ti1ghf0CeAOnJk1O5jE70bXdJir/FFZ4OL6ZY2teoGXjyYIaReMr5dv4lKJLX9VjkwdcDbsT2eVroEyUpV5L6prO6DrEs00D1HHGj9/kySnBdsu0I8Wn0ODycrE8LoGIbtE00DcpwDyJ1FMOqYeaeT6yFpUPNJ1rM5lFzwJ01vfsFiQPJfJknPLF/nxddOfi/m3y47hdBxdemO7pBWPJwBfJh5xF7wdA5w6iRADa/w2DUqM5Ekvx3Ro404pCh80nSrTNMC5wZg/YPoc/yms55FvmRrBIjyWnFnnfVOvUVyZQwno2Ew64NBIkLcRWPYXQcX3pnTcX+cHk4AvkwTOE/Ctnk66lUZAV5o6tEa1iTKWwXWS5Ihurgur47AC2zQyOeK99/QE/KPVmyhw1AWh2Wp2csdznURTLDTLaG20JQhIASlI0AA93YhzSNgvcu2ybSVhojZWlzTUGo+dNhXufgym/wVUbNbBz4CTOW1/O2qo2MMMygOZvEUnwKwDTEuPIGrLyHB9VQPsxNLcQjiaXM+AUpxa+J5Y6Nhse34jxDBw7bzPmodU1thGjY1Oqqm53xxug2Va/m86E1MzixW/uYTFj+SNqpGYeNJHr3x8fyBLf/AEAVhvNe/QJLaLo8ZsQ7lbQAcRUafHmRmZEZYW06gLQocCDWtX/MTC9kWph+YXX08WWBtmpOd8IE9Hsjy/53girNnLYZj6Gp0R2H9cnnEUh1p9tLjTgUhaQUqSdQQe8GsQz7ybnPiTbjJe5p9bZC3FEdU004pp1txPrIUFDzFXay4lfhrxJOiL6NKc2y+VJ3lfJkrdSzd59tUrqSGecSPrt1mxFLGM5i9Nz7TLn+gJ5LAhF6ylVF2dVCG839rSiRyYGndBxbZX9d3SAg+TnUrPCETGsc3wW80eTJGbtsXuCTwU06Kv8AC6Be7nE/dSXEjy1pClIWlSToUkEeYrMFKbxl2ZqBqQhiSOTJibzOJJUXXdIin8W6zag9FxhIdA3SWWneSyaX7K1DPFZgOM/ea5Mup3QcZ
WdwnqrdLSvJwFNDkzngljE7MnY0EmKn8UcmUM7pOEkMd8aQ4j8evWaeCJbU96+QGC5He3yEo4tr5EpUogJBJPcKwnlleby809PZXDg8VKXuWv5JFRIjMSMzGjthDLSAhCRwAFX/AC3sd9u5uU5b+2ptCS2hQSOrULLzB0HQtWVlSvF3V3/rqNHZjMpaZaQ22n1UIASB5Ae886rYG7hbrigbnmy2vzRWCssod9souMyYtJd1DaUVfLM5a71LtiVc6pp3YBT9KpESTFWESGVtqI10WkprI1jVF5cPi2PZXH20cTTkpavV3CiSd5PoR2dtWp4Ch7fmLB6dg68NgalDQdHm2oK5IERU6bFiJWlCnnUthSuAKjpWLcEXTCpjGU4060/qEON+I5MnbquXht2GtWphPlI/kXvrNfGUm1tNWeA4UPvo23nRxQitSTvq84Qv1khxZk+JzbL/AKhCgr7DpyZMXx2TAnWp5wnopStn+RdZmQuh4zugA3OlDw++nkgIF7yi5visQVj7zCuTL6d0HGFmd10Cni0fJwFNZ3QgmZZpoHrtONH7nJktLD9lusFX7J8K+x0VdoZg3OdEP7F9xH+U0y6pl5p1HrIWFDzB1rMxpF0wGZre/YLEgeSuTJ+d0fFgY7pMZaPtT16zVg9FxlOWBukIbd5MKD9N5YtxjvUYb7H2oJAojSsAzug4vsz2u4v82fJwFFZ3wuvZZ3ydZVyZMTefw9Ohn9hJ/JwVeoZgXe4xP3MlxH2A1BkqiTY0lHrMuocH3TrTDiXWm3EnVKkhQ8jyZv4euN6atL1uhuPvNOuIIR4LqFlNjGToXYzMYeLro/ojWsv8GzcKxpzUmY29z60KAQCAgprSpGE8NyVlx+zxFrPeWk1GstngkGHAjs6fSQ2lJrT3vmpa+n4VfdSNVxVh0VaMYYhssZyLAnKbZV9HQHQmsPX0QMSRLtOC39h0rc+I6isxcYW3E78AwWHEBhKgVLFZLtFvD0p3vckexa0t5LY3mnJS17huHpNNlxQApCAlIA9wToyJcOVGWNUutLQfvDSnmlNOuNL9ZCik+YOlMuqZdbdR6yFhQ8wdazTDdxwKxPT9Fxh4eTnJke4sS7419EtMms30LGMXSeCorOnJist3fKxMskaiLHeHmkgHkynn9ExjEaPqSW3GqzshbF1tUzueYWg+bZ5Mm5SJWGrjAX+ykfk6KnxVQ50qKriy8ts/dOlRJCosqPIR6zTiVjzSdazbYROwhGnI4NPNOfY7yZLzuZv82J3SIv5t1mbC6HjO6fC8UPD76eTDw/T+WLcfitcB1j7zdEEHQ1g+eLdiezyidAiSkKPyX1TWd8MJm2WWOLjLrZ+5yZRtuowawV8FyHimsRwugX66RNNA1KcA8tajPqjyGX0es24lY80nWs1WE3HBKJyN/NOMvDyXSG3HFBKEKUTwAGprJyDeIE25iTAksx32UELcQUDVBrFOVt5vWJ7hOjPxmor5QoFZqFkbGTvnXpa/ky0E1boSIMKPGClKSy2lCSreSEjTf7/uERM2DKjLGqXWlIP2ipsF+JLkR1tqCmnFIO7wrZV8Jq72xVtVDac3OuRkPLHhzm8CsqWQzg+ErTetbivYNaWtKRqo05LJ3I3CiSTqfSSkrUAKaaDadBWnuHG8HoGLLyxpuEkrHk51+S4YvvtxtMW0vyR0NhKUhtKQNdjhtGozKHnkNuSEMpJ3uLCikeeyCayxs1gttufNvujM194gyHUVjTLqNii6RpZuBjhpnm1BKAsrq6YIy7wrGD93kSX1/QaLmi3PIIrEGMJV0itWuEx0K0s7moqFFX+cmiCDoaw/NMC92yXrpzUltR8taznhc/h2HLH7CV+TnJknN5u73SGTuejpWPNusxoXQsZXdAGiVuJdHk4kHkb/APHcqfFRtv8Aqjnky/niBi+zvKVogulpXk4Cms7IyEXq1vji7FIP3DyZLzuesE6ITvjyvycrMPDD1hvz6g2ehyll1hf9U0DpVxvN1unM9PnPSOaTst84rXZFYYwpdcSTkMRGiGgRzr5HUbFWm2xrZbosCMNGWGwhNYoyrmX7Eky4tz2WGH9g8CtWoTULJSxtaGZcpT/kA0KbsVv/AEY3bXWediobS2G3OvqlHDWo1ugQwREiMsJ8G0BP9K0/gNVotrmpdhsrUd5JQKfsdk1A/RkYn5tprMxwLxlckjg3sI/BNYFZDGE7MjTf0cHtjRUANSadlgbkUpSlHUn0wCTpUdnYGp4mh7izjg8xihuT3SYqD9qN3JgrAQxZDnvIuYjux1gbBa29dqpcZ6JKfjPJ0cacUhY+aTpWGb7JsV5iTmHCkJcAdHcpB4g1ia8/obD0+5tpCy0yFIB4Eq3Crlcpt0luzJr6nX3DqpSqylskS54hcflJCxDa51CPFdYvg9AxPeI2mgRKWR5L6w5L9/49lk69xUu3tvfa3orky0nCHjO1E+q6pbJ++ms6oXNXy3TNN0iMR9rR5MoZaJWFpENe8MSVoI+q7V8tb1ou02A8khTLpT5p7jSSQQQSCDT0C/XK1v3uSt52Oy4hnnXVFRJV3J1qFhrEE/QxLTLcB4ENnSsp8M4jw/KuK7lD5lh9pGnXSTqirnbIF0jrizYrbzCuKFjWnMm8JOuEoRKb+QdqDlXg2IoKMBT5H75wqqLEjxGkMx2W2mkjchCQkD7B7MTpQUDw95msaumRi68L8ZRTVkZDFpt7XwR2x+XamnJSU7k7zS3VrPWPYxmNBtqG/uoe487oO3Cs00D1HXGj9/kyUm83ebnD7n4wX9rZrM2CIeM7poOo8UPD76eSSo3zKxS+Kl2wH7zPJlJc24WK0MurCUymFtVm00yjGMhbakkuMNKXyZZPJumBGoa9+xz8dVSGVMPvMr9ZtakHzSdKtkswrjClDiy+hz/KdazjgKm2O1y2UFZbkfk6KhYJxXO05iyyvNaebH4rrLfCt9w4i6fpNLTaJKGyhAcC1BSKv+FrFiJKTcYx55A0S+0dldQsqMKxng487MlD4FqCR/opgNRmG48dhttlsAIbSkBKQPCm5SwetvFIWlY1SaB9pWNQRSEbPsDjgbTqaB10I9xLOiFHwFS19OxQ+ob+enH8102kISlKeAGg7PWnJCG+/U04+44eOg7KMztnaPAUB7kzUgGVguc5pvYW26OTLab0LGdpUTolxa2T99NZ2Qdi6Wqb3PMLQfNs8mUslE7Bqobm8MvOtHyXvq/2eTZbvMt76SFNOEJJ+knuNDUHUUjB15ew+1eERX3C/JDbbaUFSlJ0Oq6g5bYzm6bFpW2PF5SW6y4wrd8MwJse4OsnnnUuIDRJ0qTlBYpVxlz5cqSefeW5zSSEAa1By3wdC0KLO0tXi8S5Skc2zstgJCU6JA7gKLjiuKj6SFqQdUmmZKV7lbjWvtOnbk1Ic21/IVFXtN6eHuK4uczAlPnghpR/AVhRvpWKrSk79uWk0Ox1px5COJpySte4bh2bLZcVp3d9JSEgADcPcuIYAm2O4w+Kno7iB+FEaGrbKVDuEOUniy+hz/KdazVssu92
K3rgRlvvNyAQlA1JS4KhZVYzl6FUBEdPi84BWXmDrjhWPPblymXRIUhQDevVKavOEsP31KRcoCHikaJXvSseShULKzBMN4PC2FxQOo51xSxSW0ISEpACQNABuArT0FUtOytQ8D2DMop3L3jxpKgoag+5jUlzYQfE8kRWjmniPbyRXOI4bQoVi59EbDN3d1BIjLrLNjnsZ2rwQVr/AAQaHYKWlI1J0pyWTuRu+dEknUnswkqIAploNpA9zGmMq8HtvLddhOPqUsq0cdVULDVhgadEtMRr5pbGtDs5SdHNfHsW3VtncaafQ4Pn4UPYy6kObB4kUD7Aakr23D4DkaOy4g/Oh7VrS3m0cVUuZ8KaVIdV9LSipR4k1Fa2lbZ4DkzIkBjBt1+uhCPxUKydYDuKHHP3UVZ9MnSnZQG5G80palnVR7WKzsjbPE0PeZqWnVGvh2QJB3UzLI3L/GgrXePYpZ/WjyqPI2uqrjQPbvr2EKV6CDqkH5e0a05JQjdrqaXJcX8h6DTZcVoPtpCAAAOA5M4JAawsG+92QgVkixrPvD/gy2PR1p2QhHzNOPLcO87u2js7atTwFAe9Vo1QQe+iNCR2bT62z4im3UuDUGtfYJR/WnyrXQ6io0gK0CuPbzHNSE+hHVq0n2dbiUDVRp2Ste4bh6KUlZAFNNBtIA5c7JAFutLHi8tVZIsaQLw/4voTyk0txKBqo07KUrUJ3Dt20FxQSKQgISAPexqUjZc18e0StSDqDTMhK9x3KodvJOryuQEggimHw4ND63aqIAJpatpRPoQzqgjwND2QmnpIRuTvNKUpR1J19HjTDIbGp9Y+hna+DMtDHg0tVZPx+bwpzve7Jc5DTsoJ3I3mlLUo6qPsDDXNp38Tx97mpKNps+I7XhTMrglf40Dr2zx1dX58oJSQQaYeDifmO0lr0Rs+Pow1aLI8R7IogbyaeklXVTuHpxWdeufsoehnG+HMTNNfuoyay1Y5nBlq+uFr/FVOOobGpNOyFufIewxWdTtn7PfKkjSnEbCyO2ZkKb3HeKQ4lY1BrXszSzqtR+foIWUKChTTocTqKB7KUvadI8PRjq0dTQ9iW4lA1Jp59Th8B6bTZcWBSUgDQejme8HsZXH6gQisLrEXDNnZTxEVFKUVHUn2FlsuL07u+kpAAA99TEbwvt0LUg6pNMyEr3HcqgeyUdxo8T6LThbVqKQtKgCOxUdEk0okkn0UnRQNJ3+wE08+lv5mlrUs6k9hGa2EanieXWtadfQ336msXP8AS8W3Rz45VQmgzDjtDghtKfwHsIBUQBTLQbTp77fRttqHsLUkp3K3ikLSoag9iv1T5emw9zZ0J6poEEdhKVstH5+mwrabQfl25NPSQnVKONEknUnsGG9tweA5TS3UIG807KWrcncKWrRKlE8ATSNJmJx3h2d/VdAaAD2GKzoNtXE8KHvs06nZcUPYULUg6pNMyUr3K3HsF+ofLsIz+miFcO6h6c1Xqp9OGdUaeB7YqAGpNPSSrVKeHZRUbKNe88ilpSNSaclk7kfjRJO8nkuDoZgynDwS0s/lWDmukYptSVb9X9fYY7O2rU8BQHvwipidFhXiPY2pKkblbxSHAoapNa+i56qvLsYz+vUUd/caB9FVSVaun04StFKHardS2NVGnX1OHwHZJTtKAoaJHyFOywNyN5pS1KOqj6GK3uYw5dnPCMsVlqwHcVRD8CFq9gQkrUEim0BCQke/padW9fD2RtxTZ1Bpp5Lg1HH0XfVV5dlHf2uqo76HoHhSzqtR+fpxjo6OzJp59LfzNLWpZ1UezbWEK2iNacdW5xPpZiv8zhK5eK9hP4rFZSsBd9lOfBH9gjM7Cdo8TWnv5xO0lQrv9kSopIIO+mXw4NDuVWvK56h8uyBIOtR39saH1h6D6tG1H5dgg6LSfnQ7EnSnpXEI/GuJ9jzYe2MONN/HJTWTzILt2e+Tae3jNbSto8BQHv8ANPo2XVeygkEEUw+FjQ+tQ5F+qry7MEpII40w8HE/PkNSlaNEePYtnVCT8vTNOOobGqjTr6nN3Aey5wvaQrUz4uuGsoWAm1z3vjfA7ZtBWoJFISEJAH8AGpiNwX7MCQdRTEgLGyrjyOeqry7RC1IUCKadDidRyTFb0p7GMdWU+jrRVpv1p2UBuR+NKUVHUnU+zZwPaz7Yz4MrNZXMBvC7S+9x5Z7aO1sJ1PE/wE6jaQpNHcfZt4NMSNrqq40v1FeXatuKbVqKaeS4N3HwqQradPYwz1VD0NackoRuG8048tzifaM1HtvEoR8DCKwEyGcKWv6zZV+J7WM1tq2jwFafwEako2HD4H2hEnVspVx07YEg6g1xOvYw1ddQ+XIaceQ2N5305IWv5D2rMJ3ncW3H6pSmsOshix2xrwjI7RCCtQSKQgISAP4Elo1QFeHukrSOKhQWg8FD0ox0eTRIHGnpXEI/GiSTqfajWIlmTimeeO1LIqO0GWGWhwQgJ/AdpFa2U7R4n+BVp2kkUoFKiPcSlJSNSQBU3Eljg69JuLCPltampuaOG4+oZLr5+ompecCt4iWwebi6l5oYnfP6pbTPyQijfcaXIkJlzXPkgGkWfHEkaiPcVeZVS7RjeN1jHuKfIqoX3GVtICpc1v5L1q35qX+MQJSGpCfmNDVozSscwhEtK4q/nvRUaXGltB2O8hxB4KQdRyIVsrCvCnX1OfIeHtjp2W1nwSTUFJm4ojg7+dnD81dpHa5xe/gKH8C6VLRova8fb3n2WEFbzqUJHEqOgq7Zk4bt+qW3jJc8GquebN3f1TBitR0+J66qevOKb0soVJlvlW7YRrp+VQMusWzyCYBaBPrvEJqDkxKVoZ10QjxS0nWoOUuGY4SXy/IV9ZWlRMG4ZhjRm0x/Mp2qbYZaGjbSUD6oA5XY7DwKXWkLB7lAGrngDC1yB27chpfxtdQ1fcoJ8cKdtMoSB+6XuXTE3EOGZhQhb8V1J3oVwNYZzPhTiiNdUiO9wDv0FUlaVpCkkEHeCO2n3S321ouzJTbKPrGrpmzao6iiDFckfXPUTUnNm/uH9RHjtDyKqRmpicK1UY6h/JVuzf3gT7b95o1ZsU2S9JBhy0lfe2rcsdldneats1z4WFn8qwO10jFtqB/elf4JJ7MAk6CmWthAFAfwNJRtNnxHttyu9utTJemyUNI+Z3mr5mz67Vojf/K7TkvEuJZOwVyZSyfUTqQKs+Ud5laLuD6IqPD111assMMQNC6wZK/F2osCFDRsRozbSfBCQns71h+1XuOWZ0VLngrgpPkaxfl3cLAVSo2siF8feisI4+nWRaI8oqfheB4oqBPiXCK1KiupcaWNQodmSACSdBWLczGoa3Ido0ddG5T3FIorvWIZ4Tq9LkrO4caw/k2+6EPXmTzf/stVDy7wjERoLW2v5uaqp/AWEnkFKrOwPLdV6ybtb4Uu2Slx19yF9ZFXnDt/wvLSZLS2iD1HkeqfI1gzMguqbgXlfWO5Ej/6VQIIBB1B7DGDxYw1dV+DBFZXMc7ilpf7plxXZxWtTtn7KH8DmnEbCyPanXW2m1OOKCU
JGpUdwFYnzQYjFcWzgOucC+fVFPy7nepyOkSFPPuq0BWrQVhrKWKENSbvJDveGWj1agWu325kMworbKB3IHbrQhaVIWkKSRoQd4NZg5edC5y62lrWPxeZH0KwFip2yXNDDyyYb6glY+E+NAggEcD2JNZg46LqnLTbHuoNz7qf6CrDYp1+uTUGIjVaj1ldyB4msLYStmG4aWYzYU8QOdeI6yz6Nwt0K5RXIsxhDrKxoUqrHeBZGGpPPsbTkBw9RfwfI1lri9UtAs81zV5A/ULP0h2GZDxawpN+uUJ/FVZQs7V2uD3wMAdk2grUEikJCQAPdyhqCKd55pWm0dO6kynR360iYPpJpDqF8FVr7NMRwV7U+wzIaWy82lbaxopKuBFXrKy0TStyC4YrnhxRV4wHiK07S1RS80P2jPWqzYwxDY1gR5a9gcWnN6aw/m1a5pSzdGjEd+Pi3UeSxJaQ8w6hxtQ1SpB1B7daErSpKgCkjQg1mHh1Fivqujp0jSBzjdYFuxuuG4Tq1autgtL80djmDjlMZDlqtrurx3POp+hQClK04kmsusLIsNlbcdR/tkkBbp9O62yJdIMiFKbC2XUlJFXSDOwriFxnUpdjOhSFeKeINWW5tXW1xJrZ3Otgn5H082HiiwMN/HIFZPMaNXZ/5to7KK1sp2jxPvBxsLSUmnWlNq0PDk1NIkuJ47xSJSFcdxoKHsjyNtCh7YRV5wbYbwFGRDSl0/tW+qqr9lddIO07bl9Ka+HgurPiO/YblER3nG9k9dlfqnzFYUzGtd92I8giNM+Anqr8j2+bVqEuwImJHXiOfkuspbshldxgPOAIIDqKlYrw7EOj10jg/JWtO5k4UbJ0lrX/ACop3NfDqPUbkq+7S83LOPUhPml5wRfoWtw+a6czhc+hah9q6fzcvawQ1Djt1Ox/iiaCFTy2k9zYCaUpSiVKJJJ1JNYAtAu2KIDK06tIUXXPJFJGnoGpmN8Mwp4gP3JtL+uhHcDTa0qSCk6gjUEcmdFnGkC6oR4suVlHcC9apsIn+wdCk+Tnp5wvkRrSz4rcVWUzARYZDvxyD2MdvnF/IUB7xcbStJCqdaU2dDw9BLi0eqqkTPjH2ikOIXwNa+wmpCNhw+B9uxDg+0X5o8+0G3/ovI3KFYiwheMOvba0FbGvUfRWDM0HYnNwb0ouM8EyOKk1FlR5bCH47qXGljVKknUHsFuttjVa0pHiTpU7GOGoGvP3VgEHQhKts/lU7NzDMcLDCX31fJOgqZnRJJIiWlAHi4upuaeLJPqSG2P+Wip2Ir5PQtEu5PuoVxSpW6mUvrXsspWVHdonXU1AwDi246Fq1upT8TvUqHkxfXd8qbGZ/FdMZIxf295d+43TeS+Hh68yUqkZPYUTxMo+blIynwgnjHePm5SMr8Gp424nzWqsx8E2C04bXLt0AMuoeRqoE8mScdC7rdHzxbYQB6LgKkkA6bqvuEsQxr3IjKgvvLcdJQtKSoLBrC0KbBsFtiTV6vtMgL5M02EPYOnkje2pCxWT5PT7oO4tI9PN58m525n4WCay2YDOFIf11LV2MdsNoA7+/wB5rQlYIUKeZU2fl6IJHA03KWncreKbebXwNA9ualI2kbXePb32GZDS2nm0rbUNFJUNQaxZliU7cyyj5qj1YMV3zDEkoaWrYB0cjucKwzj6y39KGw4GJXeys8pNXXFmH7SD0u4NJWPoJO0qrpnHBbJTboK3frudUVcc0cUzdQ08iOk9zSaCcWX1e4TpRUfrEVAyqxZNKS8yiOk97q6gZJsbjOuiz8mk1FynwmwBtx3Hj9ddN5fYRSN1mYp/LrCDydDaGk/NJIq2YbstqQEQoDLfzA1VWnp5ksc/gy8DvCEK/BY5Mk5CEXS6sHithBHokUUgkHQa8uakhDODpoJ3uLbQKydY3Xh/5tIHp5pPlzFC2+5tlArBrHMYYtSP/YB7CK3tK2jwFD3opAUNCN1PMFveN6fTbkrRuO8U2+hfA7617ZSdQQaWkpUR7gxTge2X9BcADEzudSP+qrzYLtYJXNS2lIP0HE+qryNYbzQvVqCGJv8AtkcfH64q5ZwWdqKgwYzrz6h6q+oE1ecfYnvSigylNNHg0x1as+X+Kb0Q6mKpps8XX+rVpyYtjQC7lOceV8DfUFW/BmGLeB0e0xwR3qG2fzptltsaIQlI8ANK07XFEYy8P3ZgDUrirAo7jWAryLPiaDIWrRpai255LpKgoAg7j2GdF7BXBtDauGrztZTRw3h553vdkn08du8/i+5fJ1KKs7IYtcFr4GED8vTAJIFNNhCAn3spII0NPxyjrJ9XsG5S07lbxTbyFjce1NS0aKCvH3A662y2tx1YQhI1Uo7gBWPMcIvJMCCgdEQre4RvXTLDz7iW2WlLWo6BKQSTWHcpb1cdh64kQ2PxcqxYEw7ZEgx4aXHhxdd6yqCdB7C6kOJWg8FAg1freq23ifDKdOaeUkeVAkEEVltitu+WdEd5Y6ZFASv6w7lenfbzDsltkT5S9ENp3DvUe4CrzdpN4uUmdIOq3Vk+Q7hWVkuM7htMdCwXWXV7af5j6WtXNXTcVSvrziP9WlNI2G0I+EAenEb1UVngKHvcin4+zqpHDsASDqKalkbl76Q4lY1B7R9G22R7e880w0t11YQhI1UonQAVjnHT15dXCgrKIKT9rtYMy4n4g2JUkmPB+L6S6sWE7FYmwmDDQF97qt6zWnsmbuF30TUXqMyVNOJ0f+RHJZbzOstwZnQ3NlxB+xQ8DWEMc23EcUBpwNSwOuyaB5TV3vVvs8RcudIS02n8T8gKxtjSZiiaANUQ2yQy1VpyuvM+xv3Ff6p3Z1jsHiusM32Vhu8peIUEg7D7fiKiSmJkdqQwsKbcSFJI8D6L69hh1XghR/AVYEdNxRB1/aSwr89fTA1IFNICEBPvkipEf6SB5jsUqUk6g03L7l/jQUCNQaHYmpLew4fA+17Se8iucR8afxpbzTaFLW4kJSNSSdwFY8xwu7urgQHCISDvUOLtZdYIViCZ0uWgiAwd/wBc0yy2y2hppAQhI0CQNAB2pNSJ0SKgrkSWmkjvWoJqfmThCESFXIOqHc0CupOdFgRuYhSnaXnc19CzH7XKRnej6dmP2OU1nbbf2lqfHkoVAzbwpKUEuuPRz9dFQ58Ocyl6JJbebPBSFA08y1IaW062laFjRQUNQRWN8q3o5cn2JBW1xXH70/yUttbaihaSlQOhBGhFR5MiK8h5h1bbiTqlSToRWH84bnDCGbrHEpv4xucqBmjhCYBrNLCvB1FSMx8HsJJN2Qr5IBNXvOeIhBRaIS3F/G9uFTLhiLFlxSHFPSnlnqISOqmsE5XR7YWp93CXpfFLXFCKCdABpWa+C+N+gNfKUgfkussMVcy7+hpbnUXvYJ9G+vcxZri78Edw/lWXrHP4st31Stf4J9OKjaXtHgPfZFSI+uqkDzHZIcWg6pNNSkq3K3Ggexkt7be7iPacTZhWuylcdj/aZQ4pSeqmrhmLieevRuTzCTwQ0KYhY7uQK227k4D3kqFSLDjeMgrdiX
AD7xp6ddU7TL0iQO5SFqUPxBq3tRnp0ZuU9zTCnAHHNNdlNWWHb4VsisW/Y6MlA2CneD8+0efaYQpx11KEDipRAAq+Zq4dtpW3HWZbw7m/Vq65q4ouRUiHpGR4NDVdTLhcJbilS5Tzq+/nFE1aLJdL1I6Pb4q3l9+nAV/3TYw2Aejs+XOVFymxa88G3WGmU96yuo+SsIQ3A/cnDJI6pSNECkZR4pXMcZIZS0k7nivcabySXzXXu4DnybpUm/YDxA9HaknaaO8fQdSawliuFiW3iQydl5O55rvQeTE2Xtiv+rqm+Ylfvm6vWVmJbaVqYZEtnxaqRAnRlFL8V5sj40FNBtwnQIUT8hUDDV+uKwmLbZC/nsECrFk3cHily7yQwj4G966seGrPYmOagRUo8V8Vq8zyyI7TzLjLqQpC0lKkngQaxphyRhe+qQ0SGFHnI7lYJxKi/wBpQtah0pnqPD0MdP8AMYVuq/FoJ/zECsqmOdxIXP3TCz6bDew2B3+/NKkR9dVpHmOzbfW38xTT6F9+hoH0zTyNhwj2fH+PHG1uWm1u6Ebnnh/QVhbBt2xPJPMgoYSf1r66w9gHD9hQktRg9I73nd5pKQBoBoK017qvOFrFemyibAaUe5YGixWMss59iC5cEqkwv9aKy9zAdsbyLfcFlcBZ3HvZpp1t1CHG1hSFAFKhvBB7GZOjRmVOPvIaaTxUsgCsR5v26JtsWhrpLv71W5ur3iu+31wqnTXFp7mxuQKwlg25YknBtCC3GTvdeI3AVZMK2SyRUsQ4bfzWoArV5mscZbQ70hyZb0JYngeSHaygtT0CHdzJZLcjpIbWk/U9E1nIEDE7PiYqawxiKZh66MzY6js66Ot9y01abpEu1vjzoq9pp1Oo5XIzDv8AaMtr/mSDQt0BJ1TCYT5IFJQlPqpAHy9LHuGEYhsbzSEjpTOrjBrCN9ew9fG1r1DSlc2+im3EOoQ4ggpUAQR3g8uaD5aws8j9482msoGNZ9ze+FlA9KOjbcHgKA9/SWPppHmO0bkrRx3ikSG19+hoelLb1Rtd49mx3iE2OyrU0f8AaX9W2qwhhmVii8JY1IaHXfd8BVstsK2Q2YcNkNstjQAeittK0lKkgpI0INZmYIFlkC5wW9ITyuukcGl1lHi1bwNilu6lA1jE+m882yhTjjiUISNSpR0ArE2bVrt/OR7Wnpb/AMfBsVe8T3u/Pbc6WtY13NjcgeQrDGVt5vCESZh6JGPxDrqqz5b4WtYSRCD7g+m916ZjsMICGWkoSO5IAFDQCsUY4s2Ho6+cfS7K+gwg6msCZij9NXH9NPhCZqwtK/ooIqVmDhCN693aV8kaqqTm/hNr+yMh3ybqRnZbR/YWt9fmoCn87ZR/sLQ2PNyl50X8+pCiisR4im4in9NmBAc2AjRG4aDkyoxYbdcf0TKc/wBmknqfUXQPZGs1sNfou8iewjSNM/Jyssb/APpC0GC8vV+J+aDy5uvlNogNfG+ayfY0i3R/xcQj0oreygHvPv8ANSGdg7SeB7VD7iOB1FIlpPrDSkrSoblVryqGtOIKFlPsuaVzVLxAIgVq3FbCR5qrLWwIs+HY61o0kSRzrnp3m1x7tbZcCQnVt5spq3OyMO4nYK9zkSVsrppaXEIWngQCPQKwBqaxLmNYbEFNh0SZPc01WJsc3zEbhS+8Wo3cw3uFNYDxW6w28i1PFC0hQrLnLpUdZud6jaOoOjLK6A2a1rG2OYWHIim21B2esdRvw+aqm43xTNCg9d5Gye5KtkUtxbiipaipR4knU9i2tba0rQohSTqCKwFiVOILGy6tespnqPjs8aWBF+sMuIQOdA22j4LTWFLw7YL+w8rUICy08n5GkKStCVpOqVDUHkzhf69qY+Ti6ynY2MPOufHIV6LSNtwCk/wApIIINPNFtXy7u2BI4GkSnE8d9IltnjuoKB4GjUxHBfshq6k3DGEgL/azgn89KZbS0222kbkpCR5CsVXWRZ7DPuEdKFOsoBSF8KwHmJecR3swZUeMhsMrXqgGh6OaEVEXGU8o4OBDlYafMixWt48VxWzybVYkx/YLAChx/n5Pcy1vNYkzKv8AeytpDnRYx/ZtVZcD4lvqUPRYZ5pfB5w7Kaw3k+iHLYl3WYh7YOvMoG6gkJAAGgFTZ8K3sc/MkIZa1A2lnQampuYWEoaCpd1aX8m+uaxHnG88hbFkjlr/AN9ypMqRLfW/IdU46s6qUo6k9plziU2G/sh1ekWSQ27SVagEbweyIrNGwi04jW+0jRiWOdRWXt3Nzw7HC1ausatL5M2nyu+xmvgjist2eawlB+uVr9GIjRJX40P4BdaC06GlJKVEH2BKlJ4HSkS1jcoa0XmnUFOuh9kPCrhrBxe+V7uan7R/zUytLiELSdUqSCPI1cbdGuUR6HKb22HRotNWjBmHrLK6VAhBp7ZKddSdx9LNCSiTjKcEfswhusNMmPYbU0ob0RWxV6vtss0RUmdIS0gcB3q8hV3x3ifFUkwMPRXmmDu1R658zVgye1WJF9lbZO8st1mJgA2J3p9vQTAXxT3tGstXw9g61fILT+C+SfcoMBhT8yQ2y2OKlnSsysdQr+hiBbissNOFa1ncFn2DLfEP6bw6wHVayI36p3s81LAbjhh2QEfroh51FZS3Ms3SVAJ3Pt7SfNHJmU+XcVyx8CEJrB7PMYYtDfgwPQSkqUAKQnZSAP4Dks7adocR7nzNtqoeJXHwNESUBwVlrfUXjDsULc1fjjmnfTu1yj2u3SpshWjbLZUagNyMSYpaCxq5LlbS6vd6gYdtS5cpWiG0gIR3rPgKnXmdi3EcYzHFbD0hCENg7kJJq0WW22iIiNBjIabA7hvPmeS/R4si0T2pQBZUwva1rL3H8bD3P26eFGGtwqbWN5RWIM5IrYWzZoxdX3PO7k1eL/d70+XrhLW6e4E9UeQ9hyqvptmI0RVr0ZmDYNA9lOZTJiyI6hqlxtSCPMVYX12bFcYndzMotq8tdnkxi8ZOKbmrxf2atTXM2yC18Edsfl6ERGqirwoe7dfZjUlnZO0OB9zY/wAOm92ZRZTrJjarb/8AsVgvFL+GLuHSCY6+o+3UCfFnxWpUV5LjLg1SpPoqUEglRAA3k1mfjdF2f/RMB3WIyrrrHBxVZXWRi2RJWJ7no0yhBDJVWNMXy8TXErJKYjZIYarLmyS7niWC601qzGcDjq6FYgxfY7A0VTZQ5zuZRvWaxbmTdr+FxmNY0I/QHFfn7JHfdjvtPNK0WhQUk+BFWC5IulogzkHc8yFdkaxxGMDGF0A7pG2Kt0oSbXEk/GwlX5VJUqZiF097kw/mqkJCUpSOAAHoR0bLYoe6DWulLkNI4q1pUz4U0qS6e/SmeddXvUdB7OtIUCDTrZbWQfculY8wA4tx26Wpr5vMD+qawnja7YWkFA1cik9eOusP44sF9bSWJaW3u9lw7KqSoHvrWrtiG0WdlTs6a22B3E9Y1jTM+ZeA5CtgVHh8FK4LcrAmA5N/lIlS0Fu3NnVSj9OsxcXN3B5NmtmiLdE6vU4LIqyWabe7i
xBiNkuOH7EjxNYYw5Bw9bWoUZG/i453rVRFZjR1sYwuoUTvWlY+1Ps2TFzEmySITit8V78l9kazeicxiovAbn2EGsJzCrBER4ne3FWP8tYcQZWJLaNNSuWkn8dfQaTtuJFD3NrRNLkNp+lS5hPqppTri+KjyttqcVoKbQG0hI9nIp9oOJ+Y4UQQSD7mxJgG0XzbdSno8r96ir3grEFjWVqZLjQ4PM7xUHGuKbeNhi7PhPwqO1UjMDGMwFBuj3k3UHCmLsQPBwQ5DmvF17cPxVWGsn4URSJF5eEhwcGUepWOZqLHhCeYoDR5sNNBO4ArqNGflyG2GG1LdcVolI4kmsB4NYw1bgXAFTngC8ulrSkakgAViLMnDtlC0B7pUn90zWKMQu4iuztwdYQ0VAAJT4D2bJ+69ExIuGT1JbJH2oodlnbC6tomfNxs1gNt6dgNyM0oBw8+2kmsIWabBxvBiTGChxtaz+CD6ENG9SvcpUBS5DSe+lzFH1RpSnHF8VH0WmVOH5eNNtpQNAKA9oNSmNeukb/c5AUCCAQazUsUKH0GbFjIa5xS0u7FZOxLbJs8tbkRlb7UjQrKQTQSANByZxL0wshPjKRWTcCxuuyZTjqV3FG5CFfQTyZx3ycxKg2+PJW22Wit0INa+z4XnGBiG1StdA3IRr5HcaQQUgjv7LN2Jz+FFu6b2H0KrAeN7RZLSYU3nQvnioEJ1FWq52C/vJmwyl16Pu29khSNv0GE7LaR7iJApchpPfS5hPqilOuL4qPpcaZik6KX+FJAA0A9rIqRH06yeHePc+ZkLpOGJCwN7C0OVkpcNifdIJP9q0lxPLmpabhc8OgQmucLLocWkVAnzbZLblRHltPtncoVhfNy3zENx7yOjv8ADnR6hrMu6sXPFElyO8HGUIQhCh7QlRSoKHEHWrLJ6Va4Mj94whX4jssyEA4Ou38if68mT4/2K6HxdRytp2lpHzoenrRWkcTSpDQ+lRmIHAE0Zh7k0Zbp8K6S78Vc+78ZptT7itAs0kaADXXtieRbzaOKqXMP0RSnFr4qPYIbUs6AU1HS2PE+3EU/H4qR+Hua+RBNtE+MRrzjChWCbmbRiq3PqOiOd5tzyXuocOQprH2WSJwduVmbCZHFxgcF0606y4tt1CkLSdFJI0IPtWAJJkYRs69d4ZKfwUR2Wbl0ETDCowPXluBA8k8mUjWllmOfFI5YidXNfAVpWta1rSn0J4qFKmIHAE0qYs8ABSnnFcVmiSe/0mm1OK0FNNpQAB2utFQA3mnJaBw3mlyHF9+g7JmOpe87hSG0pGiRWnt+lPxdeskaGiCDoR7kUAQQaxBFMC/XBgDQtyFaVhC6/pbDttllWqlNAL/mTuPLpWN8u4WIG1SooSxPA9fuc+SqudqnWqW5FmsKadQd4PtOUUnncKIb13tPrT2JrNq+dPxD0NC9Woadj755MqkaYa18ZC+WGNEqVRfbRxUKVMR9EE0qW4eGgpTjiuKj2TTKnD8vGkNpQnRI7TWluJA3nSnJY4IFLcWs9ZXZAEnQCmYoHWX+FAe4jTzCXB86W2pB0PuTMyLzGKpKgNzqELrJe6l2DPtq1b2VhxHkv0cUYStOI4ZZlo0eA6jyR1kVijCN0w3LLUpvVkn9W8n1V+0ZJy9YV2i+DqF9jerg1bLVNnOHcyypVS5LsqS9IdJK3Flaj8zyZZI2cKRvm45y6nTTU9qywXDqfVpKAkADtFuJQN50pyWTuQKUpSjqok9mhtSzokU0wlseJ8fbXFqQNQnWjN+pXTT8FdNPwV00/BXTT8Arpp+AV01Xw10xXwinJG2NCge5M342k+2SAPXZUmsqLgYmLGGdd0lC2/SuVshXOK5FmMIdZWN6VVjfLibYSuZCCn4H5t+z5MSSi/TGNdzsfsc355jYW5gHfJfQiocDpMGY8PXb2dB/WjWW3904X8znbsMFzer1aSABoBu7NbyEcVU5LUdyRpRJUdSde0Zjqc3ncmkISgaAe3EU9GCusncaIIOh915oWpU7D/SUJ1XEXt/dNYUlohYjtMhawhKJCdVHuB3UlYUAQQQfScbQ4hSFpCkkaEHeCKx5leUc9c7G3u4uxh/VFKSpJIUCCOIPsuVDpRjCKPjacFDsM7nNIdlb8XXjWHWkCAtXepZBq4RjFlutdwO7yNZXu7eFmR8Drg7ZhgrO0fVpIA0AHY60uQhHFWppyUtW4bhR1PabzTMXgpf4UEj3BpTzCXB86WhSFaEe6pLDclh1h1IUhxBSofI1iKzu2W7yoS+CFaoPik1lTi83GGbRMc1kx06tE8Vo9Misf5atXEO3K0thEsb1sjg7TrTjTim3EFK0khSTuII9kywOmMrZ9/sDWdsjWbaGPhacXViQE21kjioqJrEsbezIA46oVWUcrbtM2P3tv69qwyXFfIcaSkAaAdgVU5KbTw3mnJDi+/QdshtSzokU1HS3v4mtPcbjaXE6GnG1Nq0PurM3Dn6QtouLCNX4vrfWRVmuki0XOLPjqIcZXr5irRco91t0WcwdW3kBQ7DMbL5F1aXdLa0BNQNXED9rS0KQpSFpIUDoQfYksPKQVpbUUA6FQB0FZaf3ytfmvsc35XPYq5rXczHQKtqEogRgPgBNXWP0iC+jTVQG0PMVlPcAxepEQndIa3eaO0QgrUEim0BCQB6RpSwkbzTktI3JGtLecXxPbtRlK3q3CkISkaAUPcrjYcToacQW1aH3StCXEKQsApUNCDWM7AbHe5DCUkML67J+qaybxCVolWV5fD9ax2OaWBUlLl9trPzktp/6/Ysp7WyjCgdeaQvpD617xUqJhiDiC1IahR0XB5aygtgBQAG8kCh6ZO41jCZ+ksVXN5J3KkFCfIbqSnZSlOgGgA/CiAQQeBFW6Uuy4hjvp3czI/0024lxCHEHVKgCD4g9lvJ0FR2Q2nfxPo60txCOJpyZ8ApS1KOpV26EKWdEimoyUbzvND3PpTzQcT86UkpJB905jYf/AEtZVPtJ1kRdVp+aaw7dnLLeoM9BP6pwFQ8UncRUaQ1JYZfaUFNuICkkd4PYPsoeZcacSFIWkpUD3g1ia1Ks98nwSNzbp2f5TvHsOGLtmBcrWm32NCUxoydnbCQmsGR7yxmHDRdw8JXXJLvl2GIrki2WW4TFnQNsqIq1tmXdGyrf1itWv48uI2NiWh3ucT+YrAF0/SWGoSlK1cZBaX93sorHBauPdQHITWtOPto4mnJS1eruFEk8T7A1GUvercKQ2lA0SPdWlPsBwfMUpJSSCPdCkhSSkjUEaEVjiwmy3x5tKdGHeuzWU2LkS4QsspwB9gfqfrooHsM5YQZxBFkgf28f80ew5YYwsESxotsp9EaQ2tRJXuDlSMR2a7ZgWBqAtLpZS8HHk/MemazkvgYt0W0tq676ttz+RFYai6IdkH6XVTy4hY5yDzg4tq1+w7qyiuexKn29StziQ4jsYzO2do8BQ5NoCnZTaeG80uQ4vv0HsKUqWdEimYqU71bzWnuw08wlwfOloUg6K90ZhYe/TFkW40nWTG1cbqDNlW+WzKjOFt5p
WqVCsFYtj4ltYeGiZTe59vsM7291kc+bw9gwBg84luR58LEJkauqFO5Y4UNvditw9hSxue1JWDWG7FNw7mLb4MocFnYX3LSU+nIfaYYcedUEoQkqUT3AVie9P4kxBJl9y1hDSfBA3CosdMaO2yngkcr7SXmXG1cFJIrCs9dpxJBeO4Je2F+St1A6j02GS4r5DjSQANBSlpSNSQKclp4IGtLdWvifYmo6l7zuFIbSgaJHvB1lLg0PGnG1NnQ+5yAQQax9h42W9OltGkaRq41WEMRP4evUeYhR5onZeR8SDUd5t9pt1tQKFpCgfkfTzu/3Sz/8xzt8K4TuOJJwYjpKWU73XjwQKsVjgWK3NQYbYShA3nvUfE1eL7a7NGMifKQ0gcATvPkKbxixijHthMeJzbTDiwlZ9dYI9PNzEnQLUi1ML0fl+v8AJsVh2FtuqkqG5G5Pn6N6ZMe5LUkaBWixWF7gLlYbfK11KmQFead3pNtlagBQLbKQNaclk+oKUtSjqo6+xJSVHQCmYoTvXvNAe8nG0rToRTrSmz8vH3MSBvJrM2+4flQRAS6Hpja9UFveEcmV95FzwxGQpWrsXVpfp53nRmyj673bWCyyb5dI1vjjrOq3q7kgcTVhsUGxW5qFEbASkdZXes+JrH+NRhiE2lhKVzH9ebBq6Xi43aSuTOkrdcJ7zuHkKy0aWvGFsOydAVek+8hlpx1xQShCSok9wFYpvL2IsQyZQ1IWvYaHgkbhUSMmLHbZT9Ebz4n0cSsassvAeqSk1lLcuetcuCTvYcCh5L9JLqkJ0Tu+dEk8T7G0ytw7uFNNIbG4UB7z0paAoEEU8yWzqN6fcl8xNaLGyVzJAC+5tO9ZrEeYd3vJWxFJjRj9FHrqrDWXd1vBRImaxox71euqswcHR7F0ORBQroyxsL+SxWUd+6BfVQHFaNTB/rTQ9LPDhY//AJ6wdl1AxNYum9NdZfDq0EaApq6ZPX6KhxyJIZkpSPJVLQptakLSQpJ0IPcR2VnsdzvMlMeBGW6s8SBuHmawBgA4aW9LlPJdlOoCdE8EU+80wy466oJQhJUpR4ACsYX9zEF9lTCTzWuwynwQKwblrYTZ4My5RS9Jdb21BZ3CoVpt0BOzEhssj6iAn0s18Q/oyxdBac0kTer5IFYdh7bq5KhuRuT5n0rkxz8F9Hfs6jzFZYXLoeJUMKOiJKC37SzGKusvhQSANAK096mlAEEEU+wWzqPV9wy5sWEyp6S+hptI3qUdKxLmmTtxrKnzkLq22a/YnmqU2lx5ajqt5w7h5msM5eWqzBD8kCTL+JQ6qfIUABWIbQ1ebRLgr01WjqHwUOBr/a7XcARq3Iju/aFJNYZvbV8skOe2Rq4jrjwWNxHpZ3NEsWZzwW6KyjxNbIDEy3TZSWVOOhbW3SFtuJCkLCkngQdRWO8DynsXstW5tIFxClo13ALQNVVKyvxhGGvQA5/y1hVTbVcYCyiZDeZP10keklJUQEgkngBWEcr7neC3JuIVFh/611aLJbbNFTFgRkNIH4q8zyZtYqEK3izRnBz8kfrfqorBlhXfr/EibJLQVtvHwQmmkJQhKEgBKQAB6S1AJJJ0A3k1ju/Kv+JJLqCSy2eZZHyTUCKIsVprvA1PmfSIBBBph1drvTTydxYkBQ+w0w8h9lt5B1QtIUPI+zBJUQANaZjBO9W80B75UkKBBp9ktn6vtxIA1J0FYlzGtVp22IhEqV9U9RNXK9X3EswB1bjyieoygHQeQrDOVq1bEm9K0HEMJqHCiwmEMRWENNpG5KRoPQzTsHRLg3c2Ufq5O5z5LFZR4m6DcV2h9f6mVva+TlD0bla4F0jqjzYyHmj9FY1rEuTx68ixPf8AwOVFxBjHCj5jF+QwU8WnQSmsJ49nYjv9qi3GMwC0XFpeTuI6lG421A681gHwLiRU13D09lTMuRBeQdxStxBq6ZUWWXMW7bb0ywyr9kSF0MlCeF9b+xuhkirvvY//AKqTkijvvP4N0zknACtXrq8R8k1ZMAYasqkuMQw48ODjvXNJ0A0rUVi/HVrw7FWA6h6aRo2yn+qqnTp13uDsl9SnZD6/Mk+ArLrCH/Z61c7ISOmyQFO/UHcn08yL/wDobDj4bXpIk/qm6sMTpEznFDVDXWPn2GI2NialwDc4jX7RWA5/T8MW9ZOq20ltX3PZW21LVoBTTKWx8/fi0BaSk042W1EH2y+4otFjZK5b42+5pO9ZrEuYN3vRWwwTGin6CDvV5msN4Bu97KXXEmPF73FjefIVYcLWixNBMRgc4R1nVb1q9LElmbvVnlQlAbS06tnwWKUJVvmkaqbfYc8ilSawViVrENlYlajn0DYfT4L9IirxYLVeo5YuERDqe4kbx5GsQZQz4q1ybFJK0dzSzourhBukB4tTWH2ljuWCK2leJoPPDg6seRNNXO4snVua+k/JZpnF+J2P7K9TE/LnTTOZGM2eF3cPmAaRmtjJHGY2fNsUjN7FyOKoyvNujnHio/sof+Q1ccyMXT0qQq4qaQe5kbFMx5twkhDTbr76zwAKlE1gDLZFpLdyuqUrmcW2+5qgPTzRxB+lsQqjtL1jxNW0/NVWaJ0aEjUaLX1ldhiRnaiNujihf5KrKGfqzcYJPqlLifZGmlOH5UhtKBoke+daK0jvpUhofSFGW13amjMHck089zv0fapcyLCZU/JfQ02kb1KOgrE2aajtxrKnzkKqBa77iSaeaQ6+6o9d1fAeZrDOW1stYQ/O0lSv9CaSkJAAAAHYZp4bMaYi7x2+o/ue+S6wBiteHbygurPQ3yEPj+iqadbW2laFBQUNQRwIPpmp9qt9xaLUyI08jwWkGr5k1Z5WrsB5cNfw+uipuT2KGCeYXGfHyXs0vLHGSf8A038Fim8rsZr/APT0jzcFMZPYqc9cxWvNdR8kbmre/dWE/wAiSaYyQgo06TdXj/IkCmclsNNDV1+UvzXUXK/B8c6/o/nP51FVQLFabaB0OAwz80IANaH08Z31NhsE2ZqOd2dhoeK1Vb2FzrgnbJOqitZPYz2efhSGvFG7zG8VltcDDxPHQT1Xwpo+xsslw79yaQgJGiRoK096FVFxCeKhRlNDv1ozfBNKlOnhoKLzp4rNak8T7YSACSaxLmLarRtsRSJUrwT6iaul8vmJJYDzjjpJ6jKBuHkKwzla89sSbyooRxDCeNQbfCt8dMeIwhpscEpGnZXa2x7rb5EKQNUOp08qvNqk2i4yIT6dFtq/EdxrKfGHSo4sk13V5oaxye9PZ6VpWnaE1m7iLp13atbLmrMT1/m4qsOxObjrkEb3DoPIdkhxVsvaHE7izIChTDqXmW3UnVK0hQPyPsLDBcOp9WkpAGgGgoe8NaKgOJpUhpP0qVNH0U0qU6eGgouuK4qPuG94ltNjYLkyQArubG9aqxLmHdryVsRiYsX4EHrK8zWHMB3i+KDq0mPG73XKsGE7PYWgIrALv0nl71ntcwcJC8wOmRUf7ZHH2rTUSXKt0xqQwtTb7K9UngQRWDsVRMR2tp9CgJCBo+13pPs5NYnxLBsdslS
HH2+eSg823r1lKpSn7jOUtZKnX3CSfmo000llpDaRuSAB2WI2diaHB9NNYGndOwvbXCdVJbLavNHsDLJcPypKQkAD3brRNFYHEilSm09+tKmnuTSpDqvpaUSo8SfccqXGiMqekvIabTxUo6CsTZp+vGsqfOQqoNtv2JpxLSHZDqj13FcB5msNZa222bD8/SVJ/wBCaSlKQAkAAcAO3zLwiIrpvEJvRpZ/XpHcasV+uNintzITpSoesnuWPA1hHG9txJFAaWGpQH6xg+yE1f8AG9gsKCJUtKnu5lvrLrEGbd6nlTVuSIbP4uUU3O6PFxZdfWTvWsk/matVlXEdD7yklYG4Du17PEjJXFacAPUVp+NZRTy5AuEEn+ycC0+S+3aaU4rQcKQkJAAHuwqA76VIaT9KlTPhTSpLqu/SiSeJ9ykgDU1iXMK02YLZYUJMr4EHcnzNXa/33EsoB5bjmp6jCPVHkKwzla+/sSbyotI7mE+sag26FbmEx4cdDTY7kjT2GZEYmRnoz6AtpxJSoGsS2N6x3eRCWOqDq2r4kGokyVCkNyIry2nUHVK0HQisJZtsuhqJfeovgJIqLLjSmkPR3kONq4KQQQfYCav+N7BYUK6TLC3hwZb6y6xJmrfLrtMwdYcf6nrmo9unz3C4Qo7R3uLqJYIjOhd/Wr+fCkpShISlIAHcO0uTIegyEab9nUeY31ldP6LiQME7pLSkds22pxWgptCUAJFae61cKejuHeFE0QQdD7nvWIrTZGC7NkhJ7mxvWryFYlzGut3KmIhMWL4JPXVWHMB3m+qDq0liL3uuVYMJWexNARmAp76Ty96z7JmhYROtAuLSNXon5oNWNEd552O+2lQWnUeIIqbh11Gq4qtsfCeNWfE1/wAPPf7JKcbAO9pW9BrD+cVtlbDN2YMZf71G9FQLpAuDQdhy2nkHvQoHtFLCQSSAKv2YeG7KFpXLD74/ZM9Y1iLNS/XbaahnobH1PXNR7XcJ6+cXtAKOpcc131DskONoVp51firtyARoQCKtj6rViCM7r/YSRSFJUkKSdQRqO0bbU4rQU22ltOgoe7nGUODeKdYW38x7kkyo8Vlb0h1DbaRqVKOgFYmzSQjbjWVOp4F9VRLdf8TziUJdkuk9ZxXAVhrLS22zYfn6SpH+hNJSlICUgADgB7LKjtyY7zDg1Q4gpUPkakMuWi9usqGhYfKT5UCCARwNSYcaUnR5pKvn31Kw2d5jO6/VVTLl6s73OMOPx1j6SCRVozdxDCARMQ3LR9fcqmM7Yv7e0r+4umM5sPL/ALWLKRTebeEV8Xn0ebdN5mYNX/6mE+aTQzEwef8A1dmhmBhD/wDMsUcwcIf/AJlijmNg4cbu1UnNTCDIJTLW58kIq6Z1cU222/feNXXGmKL4ooemu7B/ZNdUVFsEx8hT2jafnxqJZ4UbQhG2v4lexYgZLVwLg10WAqsJz+n4etsjXUlkBXmnd2bbanFaCm20tp0HvEinovFSPwo7vcOJcwLRZQtppQkyvgQdw/mNXfEV+xLKCXVrXqeow36orDOVr72xJvKi2jiGE8ag26Fb2EsRI6Gmx3JHtGadrMS+olpT1JTf5oqyyekQG9T1kdRXKQCCCNRT1sgva7cdGp7wNKcw9b1+rto8jrS8MxiOo+4PPShhcf8A7v8A0UrC5+jK/FNKwzK7nmzX/ZmZ++a/E1/2Yl/v2fxNDDMrXe83pQwundtSj/lpnD1vb3q21+ZpmOwyNGmkoHyHsmJmdWGHgPVUUn7aymuHP2eTDJ3sO/kvskIUtQApppLadBQ9ya1qK1FaitRWta9hpT0cL3jcqlJKToR7decQWqysF2dJSjwRxWryFYlzIud122IWsWN8j11VhzAl5vq0vuJLEY8XXOKvKrBhKz2JoCMwFO97y96z7VmTZ/0jh515CdXYp5xNYdlc3KUyT1XR+Y9x3VkPQJCdN4TqPsrKy49FxAqMTuktFP2p7Jl5ttPqnWult+Brpbfga6W34Gult/OultfOumNeBrprY7jSbi0B6ppM9oChLaHBVdLY+KulM/FRktHgsV0hr4hXPtfEK51HiK20+NbQrWta15Ne3fK0uKG0a21/Ea21/Ea21/Ea5xz4zQed+M0JTo76ExXekUmYjvBFCQ0fpUFA1r6LrKXBv40tCmzofa5EhiM0t591DbaRqVKOgFYlzSba241lSFq731VEgYgxROJQHZLpPWcVwTWGctbZbNiRP0lSfA+omkpSkAJAAHAD2t1pDza23EgoWkpI+Rq/W16xX2TFOo5p3aQfFJ3io7yX2G3U8FpB9xEagjxq3SVWi/Rnt4LEgE+VIWlaErSdQQCD7VqfGgtQ+kaDrg4LNCQ8PpUJbvyrpjnwiumn4KE0fDQmt94NCU0e+g+0fpigtB4KFa1r2ExGoCx2QUocCaTJdT360mZ8SaTJaV9LSgoHv5XWkrGhp1pTatD7TiXH9osoW02sSZfwIO4eZq8YkvuJZQQ4tago6IYa10rDOVr7+xJvKi0jiGE+sag26Fb2EsRI6Gmx3JHt2bFi52NHu7Sd7XUdrDkzVC4qjvHWR7jxAzzc8rHBxINYKuH6Qw1bXidVhvm1+aN3uXUig44OCjSZLo79aTM+JNIkNq4KrX0VoCkkGlpKFFJ7UKUOBIpMl1PfrSZx0AUKEpo99K5t0aag060ps/L2a84htVkYLs2SlPwoG9avIViXMi6XXbYhaxY3yPXVWHcC3m/LS8pBZjE73nKw/hGz2FsCMyFvd7y96/cFzgM3GBJhvDVDqCk060/Zbu404CFsOlJHiKQtK0JWk6hQBHuLErG3GaeH0FaHyNZRXErhz4Cj6iw4n3Uh1xHBVNzNdAugsKGoNA8spraG2OI9h31tKI0JPskmVHitLekPIbbSNSpR0ArE2aaEbcayo1PAvrqLBv2Jpx2A9JeUestWpA8zWGcs7dbtiRcdJMn4PoJpKEoSEpSABwA9xZsWPmZce6tI6jvUd8xWHpXOxCyT1mj+R9xXBgPwn29N5Tu8xvrLa4dCxMw2ToiQlTR92IWtB1SaZlBW5W41ryEVIZ2FbQ4H3JibMC02ULZaUJMv4EHcnzNXbEF+xLLCXFOL1PUYbB0rDeVsmQUSLwSy33Mp9c1b7ZAtsdLEOOhpsdyR7kxPaE3iyTYZHWUjVHyWneKtDyodyDbm4ElCx7jQ4u13tDiNQWHwoeVR3kvsNOpPVWkKH2+7WpC0bjvHK6hKk7JFOIKFFJ9w3S72+1RlSJshLaB48T5CsUZkXC6FcW27UeMd2o9ddYcy6u94KZEzWNGPev11VZML2extBEOMkL73Vb1n3PmFaf0XiV9badG39HkVCkCTFZdHenf5+4sRM7E1Lg00cR+YrAs7puGLcsnVSEFtXmn3jKa2k7Q4j28kAak1inMW3WgLjQimTL/0IpmLifGlwK+u74rVuabrDWX1pswQ8+kSZXxrHVT5CgAPdGa9q6RaGJ6U9eMvQ+S6w1J2mno5PqnaT5H3FiRnaitufAv+tZQztuBcIZP9k4FjyV7sSy6rgk
0iGr6SvQIp9vYWfA+23i/WuzMF6bJSjwTxUryFYjx/d764YdtQtlhZ0CUb3F1hjLB+QUSr0ShHEMD1jUODEgsIYisIaaSNyUjT3VeoCLla5kNY1DrRFW1aoN1CHN2iy2v+nuK5M89BkI367OoA8RWVk/o2JDHJ3SGin7U+6EtOK4JNJiOHiQKTDQOOppLKE8EitPSkN7aPmPa3n2Y7anXnUoQkalSiABWJs0mGNuNZkB1zvfV6lWywYlxfMMh1bhQT1n3tdkVhvBlosDYLTYdkd7y95925hWz9G4mkqQNG39HkVDfEiKy78SRr5j3CQDqDwNWeQbViSG8dwZkjXy4GgdQD7jShauCTSYrh4kCkw0DiSaS0hPBIrTs5Dew4fA+04lx7aLGFNJWJErubQf6mrhfMS4umBlIcWCeqw1rsCsMZXMMbEm8kOucQwPVpllphtLbTaUISNAlI0A93ZtWrnrZFuKE746yhfkusNyNph1gneg6jyPuK+tc1cVqA0CwFCsPTenWS3SddS4wnXzG4+4EtrXwSTSIij6x0pEZpPdrQSBwrTtpLe2g6cR7Pc7tb7VGVImyENIHid58hWI8xLreHDCs7bjLKt2qd7q6w9ljcp5TIuqyw0d5RxcVVosVrs7AZgxktjvV9JXmfeF/tybnZ50NQ/tGjp5jeKtTxh3JKV7usW1e4sTtaKjOhPEFJNZYTek4ZaaJ3sOKR7alClcATSIiz6x0puM0niNaCQK09hNSG9hfyPsuIcYM21XRIDCps9XBpsbQT/PTeB8TYkkiZf5nMoPBobyBVlwrZbKgCHFSF97it6/eeOLabXiaahA0QtYdR96oT4kRWXfiSNfP3DiJsKgBXehwafbWT8rVu6xfmhftQSpXAE0iI4rjupEVtPEa0EgcBWnshp9vnEHxHsjiA4hSCojXdqDoah22DBBEaMhvXeSBvPmfeub0EB22zQOIW2aw49twlNk70LO7wB9w3VBXbpSQNTsf0OtZTyObv7zWv9owfZwCeAJpEVxXHdSIrY476SlIG4aUPaDUprZVtDgf4JzYaSrD7K+9ElNYXVulj+T3DJSVx3kjiUKA+0Vl05zWLIH1ttPsqI7q+7TzpERI9Y60lCU8AB7a6gLSU0oFKiD/BGan92Ff89usLjqTD80f/AH7iwYooxXav8R7Ehta/VTSIZ+maQy2jgmgPcEtr6Y+3+CM116YcQPGSisMoIYfX4qA/D3CeBrCY0xXax4Sh7AiO4v5Cm4qE7zvNBOlaGtPcJAIINOtltZH8D5vvaW62tfE8s1hwEQDqNxcJ9wmsLf3ttv8Ai+1AJ4Cm4ijvUdKQy2jgPc8hrbT8xw/gfOB8mXa2fBtaqsYItjAI+I/n7hPA1hT+9ls/xXZhKlHQDWm4hO9ZpDaEeqKA90GpTWh2x9v8DZqv7eJA3rubYRVvQUQYySNCGx7hPA1hT+9dr/xXYhKlHQDWm4h4rNIbQnckVp6cuQIzaVlOqdoA0lwLSFJIINa1rTstDbjbWuq1EDTwrU+3qSCCCKeaLavl3fwLjp8S8Wz9DwcS3SElKEJPEJA9wngawn/ey1/4r00pUo6Aa03E71mktpQOqK07G+f7on+cUzMfZQUoVuNfpCX+9NJuUtJ/tNfMVGWpcxpSjqSse4XWw4kg0pKkKIP8BqUEpUT3DWpTonYifcVwclKP5+4jwNYS/vZa/wDFemy9zZ3jdSVpWAQezvn+6Df9MVMYdfZ2Gny0rUHaFP3qNGcLLqHtpPyG+kKXPMeSy6602hR1SRpt1E/3pn+cU66llpxxfqoSVHyFW6+wLmtxEVaipABOqSKViS2JmdEDqlvbWzohBVvqfPjQIy33l7LadNTx4nSrbd4dzbcXFcKghQB1GlT8Q222vpZlLUFFO1oEk1FxTZpToaRJ2VHhtgpFLxHbETTCW4tL22EaFB01NM3+3uXAwQtXPhShslJA1SNajYit0t55pp1RLSVKWdkhICajYmtckPlpxZDTZcWSg7gKi3+3TI8l9lxRQwNXNUkaCrdeoFz5zorpJRpqCNDUrENshTOiyHFhzq7gkn1qkzGIjRdkPIbQO9RpOLrIpYR0kj5lB0pp1t1tLjawpKhqCDqD7AafZ5waj1hWhG7+Ar1JEW0z3ydNhhZqwpU5c0r010ClH3EeBrCX97LX/iuwbdW2dRTTyHBu40OxvmnRB/OKbbW4oJQNTVxw05NLStUJUk7z4po292O0AEDYSNNE9wqJ/vTP84q4f+XTv8O5/SrddHre3LDG5x5sIC/hGtYOtDTccXFwpW65qEd+wKxxMKjEgN7yo7ah+Sawwty2X5+3vH19UH5qTvFY4/8ANWv+QKvzVjaMb9FulWoVzm8kDw41f4UhuPargsEOuMpDh7wtI3E1fH1ImQLvGUErkshfktI2TUEphYbuEoKHPSXAwnxAHGrNA5jC9zlKHXkNL0/lTWGf/IcS/wDKR/RVW6fIt8puSwd6TvHcoHuNXiczPvLUlk9VYZ8we8ViZ1+4X9MHb0QhaG0eAKwCTVzw1h+JFQhyWWHla7DizrtaVYBbmoQjwZXPobPWVrrvPsUqOCSpH8BZiS+jYUn+Lmw3+JrDLQL0hzfuSAPt9xHgawl/ey1/4rsQopOoOhpiSF6JVuNA9hfP90T/ADirYkbDiu/UDlZATOQB3O1cP/L53+Hc/pWG7QxdVzW3SQpLOqCO5RNYfuT1kuLtvm9VtS9DrwQruPkae6VfcROmKsBRWS2okgJS2Nxq7QrpaJ8aVKfDrqiFhwEnUorF76JE6K+j1VxUKFRcJWaO4l3mluEbwFq1FYhtQm2iQyBq5ptI801ZYLl2nx4ji1c02lWv1U8a6JM6WLVqdekabPdtcNaujDcewS2WxohEYpH2CsM/+RYk/wCUj+iqwpb2LixdY7w3FLWh70nfvFSID9vuSIz6dFJdToe5Q13EViy2S49xF0jpJQopUSn6Ck1c8QfpiO027bwp9AVsrSo7teJAFYD/AN3n/wDMR7Eaks6HbSN3f/AOb0zYtlvi97rxX9iKw23swlr19df9PcR4GsJf3stf+K7OPJ4JX+NA+nfP91T/AD1bP7Jf83K3/v6f+bRAIIIBBpEdpokttIST8IApUWMvrOMNqV4lIJpEdhtW02w2g+KUgU6yy6RttJUBw2gDXQ2F6asNnQaDVI3cqI7LZJQ0hJPEgAV0dnb5zmkbfxaDWlIStJSpIII0IO8GkRY6EqShhtKVesAkAHzpqMyzrzTKEa8dlIGtORmHVBTjKFqHAqSCRRTqNCKTFjpKilhtOvHRIpqOyyCGmkIB47IA9jKQRoaeaLa/keH8AZty+dvcSMDuZY/NdWlotW+OkgAlOv4+4jwNYS/vZbP8V2keRpolZ8jQPpTWG5DYbWTx13UxDbYSUoUrQnXfXNfM1zQ8TSbcwl0O7S9ra2vcrrYcSRSklJIPv41jiSZ2LLhpv0cDQ+6NKbQENoQBoEpA09xHhWFP7223/FdrHkaaJXw7jQPu7SpLIUNocR7+dXzbS1/Ckn8KbX02/
KcXqQuQpR9yYW/vfbv8X20eQU6IVwNA+3f/xABIEQACAQIDAgoGBwYFBAIDAAABAgMEBQAGERIhEyAiMDFAQVFScRAUI0JhgTJikZKhscEWUFNyc9EzNENUYyRggqIlRFWQsv/aAAgBAgEBPwD/APbMCD/2bLLHEheR1RR0ljoMV2bqCDVYAZ37xuXFbma61eqiXgk8Me7Fjp3p7ZTI+u2V2m1723/9lzTwwRmSWRUQdJY6YuWcY01Shj2z/EboxV3GsrX2qidn+HYPIejLNjNZMKqdfYRncPGRgdXZtnZ+JA/eTMFBJIAGLpmykpdqOl0ml7/cGK651lfJt1EpbuXsHkPTZLRLc6kLvEK75GxBDFBEkUShUUAADrFbKIqWZyygqpYanTeu/EOY7RLCJDVKm7erbmGFzdaWlCayAE6bZXdiOSOVFeNgykagg6g86zTcMoCjg9N55iWWOGNpJGCoo1JOAQQCDuPWLnmKgoNU2uFl8C/qcXK/V9wJEkmxF/DXcOJQUU1dUpTxDVmPyA7zi3UENvpUgiG4dJ7WPeesTIXikQEgspGo7NcVLz8M6yyMzKxU7RJ6PTar3WW1+Q21ETvjPRi2XejuUW1C+jj6SHpHVM43I6pQRt3PJ+gxlqrNVaYCx1ZNYz/48+SB0nCSRvrsOradOh141wvFDbl1nlG12IN7HF0zRW1uqQngYu5fpHzOCSeIiM7Kqgkk6ADGX7OttpQXA4eQaue74dazNSGmu8+7RZOWPnxIJ5aeRZIpGR1O4g4s2a4p9mGu0SToEnunAIIBB5mlv0ZulVQTkKVlIjbv+HMTypDDJK50VFLH5Yrap6uqmnfpdycZJm1irIe5lfnbjmK3UOql+Ek8CYrM33GckQBYV+G84E9wr5kjM0sju2gBYnFntqW6jSEb3PKdu9uJU1VPSxGWeVUQdpxdc3yybUVCCi/xD04kleVy7sWY9JJ1PGynZtSK+df6QP59bznRcJSw1SjfE2y3k3GsuZKmgKxTEyQd3avlikq6eshWaCQOh5i7uTda1h/HbT5HGW8wiqC0lU/tgNEY+/x831/AUKUynlzHf/KPRk6fg7oY/wCJER9m/m62upqGEy1EgVfxPli7Znq6wtHBrDD8PpNjU4AJIAG84y3YhQRConX/AKhx9weknTF2zVS0m1FTaTTd/ujFbcKuulMlRKWP4DyHHsNpe5VYBBEKb5G/TEcaRoqIoCqAAB3DrdbTJV0s8D9DoRiaJ4ZZInGjIxU+Y41uulVbphJA+73kPQ2LTe6S5xjYOzKByozxjisfbq6h++Vz+OEdkYMpIIOoIxl2+rXxCCcgVCD7442ZKz1u6zkHkR+zX5eiyT8BdaN9f9UA+TbuavF8pbZGQSHmI5Kf3xXXCpr5jLPISewdg8vTlO0ComNbMvs4zyB3t6a6vpqGEzVDhV7O8nuGLvmWqri0cWsUHcOlvM4PHpqeWpnjhiXV3OgGLVbY7dRpAm9ul27267m6j4C5cMo0WZdr5jjxSyQuskblWU6gjFozarbMNfuPQJR+uI3SRA6MGU7wRvHElOkbnuU4c6ux7yfRTzywTJLE5V0OoIxZbtFc6UONBKu6ReJWzinpKiY+5Gxw7F2Zj0k6n0IxV1YHeCDinlE1PDKOh0VvtHMXzM8dNtU9GQ83QX7FxLLJNI0kjlnY6kneT6aWmkqqiKCMcp2CjFHSx0lNFBGNFRdMTTRQoZJXVEHSSdMXLN6AmKgXaPRwjdHyGM5yNwVtjY79lmPM5Ws3qsPrcy+2kHJHhXr2caXhbaswG+GT8G5m13yttrjg3LRdsbdGLXeaO5R6xNpIPpRnpHpn/wACX+RsHpPpt1wnt9Sk8R6PpDsYdxxbrhT3CmWeFtx+kO1T3H05tn4K0OgO+V1XiZcn4az0h7VUof8Ax40kscSNJI4VFGpJ3AYvuaHqNunoyUi6GftbBPEybRq089Y/0YhsqT3nF0zXR0m1HTaTS/8AqMV10ra+QvUTE9y9CjyGKROEqqdPFIo+04zrJrX06eGH8zzGWbMa2p4eVfYRH7zYA069cqf1qgqYPHGQPPBBBIPMwTywSrLE5R1OoIxYsyR1wWCoISo7O5/RP/gS/wAjYPSeJabtPbakSIdUO507xijrIK2nSeFtUYfMfA+jO8++jg/mc8TJU+3Q1EXgl1+TDi3C50lvhMk76eFR0ti8X2qub6E7EIO6MfrxhXVK03qyylYiSSo3ak9/pscfCXehX/mU/Zvxm2TbvMo8KIOPQUU1dVR08Q3senuHfiio4aOmjgiGioPtPf184v1GaS6VMemilttfJuaVmRgykgg6gjGXsxipCUtWwE3Qj+PE/wDgy/yHB6TxbFeZLZUb9TA50df1GIZo54kliYMjDUEYzZUcNd5E7I1VOJkqbZrKmHxx6/d9JIAJJ0GLvmunpg0VHpLL4vdXFVV1FXK0s8hdz2nmsqx7d6p/qh2/DGYZOEvNae6TT7N3GAJxlm0ChpeGlX28oBP1V7B+4c6UW1DBVqN6HYbyPNhiCCDocWfMwkppKWtf2gjISQ+9u6Dg9J41mv8AU2xgh9pATvTu8sVk7VNVNO3TI5biZXm4K80319U+0ei6ZgobcCpbhJvAv64ueYK+4aqz7EXgXGvN5MTW5yt4YTi4ycJX1b98zn8eNla0et1PrMq+xhP3m/cVypFrKKopz76HTzw6MjMrDQgkHqMUE8x0iidz9UE4gy3eZtNKRlHexC4iyZXkAyzwp+OP2LH/AORTX+XE+TK9RrDNFJ+GKu211E2lRTunx7MW+YU9dSyk6BJVJPw1xec1yS7UFCSidBk7T5YZixJJJJ6SedyYNk3GbwxDDsWdm7yTxaOllqqmKCIas7aYoaOKipYqeIbkH2ntP7jzTReq3SRgORNyxz0cUkrhI0LMegAanFDlCtmAeqcQJ9rYgs9gotNUM7jtbfgXFYxswU6IPL+2Hr6p/wDUI8t2Gkkb6Tk+hJZEOquRiO4ba8HUxrIh3HUYuuV4J4zU2wgHpMX9sSI8bMjqVYHQg8UDXFqylU1SrLVMYYz0L75xBlyzwAAUoc9778TWCzzLo1FGPivJxd8pSU6NNRMZEG8ofpDBGnEy77KxXib6pH2Lxso2rgYTWyry5BpH8F6vdbobaiytTPJEdxZfdxTZqtM5AaRoj9cYjlilUPG6up7VOvN5uoeHtwnUcuBtfkedTZ2ht67Ou/Tp0xYajL6RBKR1SYjfwm5zisp61tW2tte4YIIOhHHpql4HDKd3aO/GbqCN4objCv0iFk4mmMs5e02K2rT4xIfzPFzbaVpZ1q4V0jmPKHc/Eo/Y5OrG8bn8SBxbHbGuNckRHs15Uh+GERUVVUaKo0A6vJGkqMkihlYaEHtGL/l+SgczwgtTk/c+BxTV1XSPtU87ofgcUOc5k0WshDjxpuOKG82+uA4CdS3gO5uZmiSaGSJxqrqVPzxWU70tVNA/TG5XndcW/MFyoSAspePwPvGKG62y8KEPsqjwnt8sVFNLA2jDd2HsPEAJwlNO/wBGJsLbKluk
KvmcTWsVFulo5ZNzdDDsxFky3L9OaZj8hhcp2Yf6Tnzc4XLNlH/1AfNjiGz2uAgx0UQPeV142Zolks1Vr7ujDzB4lV7HJ1InbI4/Ek8QAkgDGXrWLfQrtD20nKf9B1mREkRkdQykaEHF/wAuPRlqilBaDtHanoVipBUkHvGLdmm4UmiyHho+5un5HFuv9urwFSTYk8D7jzGcqHg6mKrUbpRst5rz6sVYEEgjoIxaM1aAU1x5cfQJO0eeJKCOVBNSyBkYajQ4ittQ/wBIbA+OIrZTp9LVjhIYk+iij5c7mmXg7LUfXKrxMw+ysNnh71U/YvEytbPXK4TONYodG827B1sqGBBGoPSMZgy0Ytuqok1TpeMdnlg+gEggg4tmaK6j0SUmaLuY7x5HFuvNDcVHAy6P2o25uNfaH122TxAauBtp5rgjTn7PZ57pOFXkxKeW/YMUVHBQ06QQLoi/aT3nqF9t73C3SwodHBDr8SOzDqyMVYEEHQg+hBtOo7zjORCC3QeCM+lVLMFA1JOgxZLeLfb4odOWRtP/ADHrhxmLLeoesok+MkY/MYII9KO0bBkYqw6CMWvN08OzHWgyp4/eGKOupa2MSU8ocfiPMcXMlv8AUrlJsjSOXlpzlHQ1VbKIqeIu2Ick1bAGWpjQ9wBbAyQdR/13/pijo4KKnSCBNFUfMnvPUs22fg39fhXksdJR3Hv9Fvj4SupU75kH44znJrc4k8MI9OVrf63cVkcaxw8s+fZ1/MuXgQ9bSJ8ZUH5ji09VUUsgkglZHHaDi2ZxQgJXJof4i/qMU9VT1KB4JVde9Tr6c0W71y3M6jWSHljy7ebo6WSrqYoIxynbQYt1up7dTrDCo+s3ax6rPDHUQyQyLqjqQRi40T0NZNTv0o24947DjL6cJeKEd0mv2b8ZqfbvVR9UIv4egYy1b/UrbGWGkkvLbr5GoxmexequaynX2Lnlge4eNT1VRTOHhldG71OmKLONZFotVGJh3jktiizLaqvQcNwbd0m7GqOu4hlIxeqA0Nxnh9zXaTyPNZLgV6+eY/6cW7zbq+dKANFDWoN6nYfGUo9u8xHwI5xfX4S7Vzf8xH2eixUBrrlBERyAdp/IYG4AD9wTQxzRvHIoZGBBBxe7VJbKto95jbfG3eOPrikulfRkGCodB3dI+zFzustz4Fpo0EqAgsu7aHNZKnCV1REffi1Hmp6vdab1q3VUOmpaM6eY3jGS49a+pc+7D+ZxXPwlZUv4pXP4+jJ1DwVHJVMOVMdF/lHV9ebu9sjuVG8TbnG9G7jieCSCV4pFKup0IPUqKrko6mKojPKRtcW6401wp1mhb+Ze1T1fLsIppb638Niv2a4Y6knFPC880USDlO4UfPFLAtNTwwp0IgUfLqZIHTirvtrpNRJUqW8Kco4rM6tvWkptPrSYa9XuvlWJKiQs50Cx7sWa3PQ0oEsjPM+92JJ+XOZpsvrMZrIE9qg5YHvL1OkraqjlEtPKyN3jFjzQ1bPHS1MQEjdDr0Hq1XUVtsnvIakc01QXIkA6Cw0HoyjScPcxKRyYVLfM7h1EkAanFZfrXSa8JUqW8Kco4rM6OdRSU4H1nxV3i5VmvDVLkeEbhjU4t9sq7hKI4Iye9uwYs9iprYmv05yOU5/Ic9mLLRBerok+Lxj8xjTqVsn9XuFLL4ZVJ6tmptLLP8WQfj6Mm0vB0Es5G+WT8F9Es8MQLSSKg72OmKjMtng1HrIc9yDXE+doRugpGb4udMTZxuj/AOGsUfkNcSZivMnTWOPLQYirLtVSpElTO7sdAA5xZrZLRQazzvLM/wBLViQPgOPV19HRptVE6p59JxXZ0UarRwa/XfFZebjWk8NUuV8I3DB9GmLLlmet2ZqjWOD8WxTUlPSRLFBGEQd3UL7liOq26ijASbpKdj4lhkhdo5EKup0IPUQdDigm4ehpZfFEp/DqucDpaPOZfRS5qeioYKWClXVF3sx7cVOZrvPr/wBRsDuQaYknmlO1JIznvY68SkpJ6yZIYELOxxZbHBbItdzzsOU/6DjV95oKAHhphteAb2xcM4Vc2qUiCFPF0tiWaWZy8sjOx6Sx1PEihkmkWONCzsdABiyZWjp9metAeXpEfYuAAOdlR3jZUcoxG5h2HF1r8yW2XZlqdUP0HCjQ4/aa9f7s/dGP2mvX+7P2DH7S3r/eN9gxV1tTWOHqJNtx26DqWV6hZ7PANd8eqHqka7TaYzkv/wAT5TLx6KinrahIIV1ZvwHecWi0U9sgCIAZCOW/fxbhmC20IIeUPJ4E3nFxzXX1WqQ+wj+r9LDOzklmJJ6SeLbbXV3GURwJu95+xcWmyUlsj5ADSkcqQ9QqaaCpheKZAyMN4OL7l+W3OZY9Xpydzdq/A9VypcxR1vASHSKfQeTdUpxyz5YzXTGW1VYA3gBx/wCJ40UTyyJGilmYgADtJxYrPHbKUAgGdxrI36elnVAWZgAOknFfmq3UuqxEzv8AV6PtxcMx3Kt1UycHH4E3YJJ4oGuLPlaoq9maq1ih7vebFLSU9JEsUEYRB2DqUsUc0bxyIGRhoQcX6ySW2fVATTueQ3d8D1RSQQRiwXIXC3xux9qnIfzHU6YfSOKyJZYmVhqCCp8jitp2paqeBumNyvE0xlexcAgrahfaMPZqfdB7fRPUwU8ZkmlVF72OmK/ONPFqlHEZW8bblxXXevrjrPOxHhG5Rx6Ohqq2YRQRFm/AeeLPlimotmWfSWf/ANV6rVUsFVA8MyBkYaEYvVjntkxO94GPIf8AQ9UyxcvUrgqOdIpuQ3wPYepxrsoBhxqhGM40nBXFZgN0yfiu7iZYsXrLrWVCexU8gH3jisulDQprPOq9y9p+WLhnKVtUoogg8b7zipq6mqk255mdu9jxxiz5Yqa7Zln1ig/FsUdDS0UIip4wq/ifPq88EU8TxSoGRhoQcX3LstAxmh1enJ+aefUxrqNMWOepqLbA9RGyyAaHaGmunb1GBNptewenOlHwlBwoG+GTX5N6aMUvDK1STwS7yF6W+GKzNNXIghpEFPCBoAvTph5HkYu7FmPSSdTzFPTT1MqxQxl3boAxZsrQUuzNV6STdIX3VwBp1l0V1KsAQRoQcZhy4aQtVUqkwe8vg6jbrPW3F9IIzs9rncowtNZMvKHnYVFX2LiDNlZJc4ZJSFp9dkxjuOAQQCDqCOoRJsoPTd6UVNJPCf8AUjZfnhlKsVPSDoebtVmq7lLpENIweVIegYtlppLdFsQpyj9Jz0nrjKrAqwBBGhBxmSxmgl4eBT6u5+4eegp5qiRY4o2dz0BRqcUWWqWjiFVd5lVRvEeuLlmk7Hq1tjEMI3bWmh+WHdnYszEsekn0ZWuXrlvETtrLBop+K9nPwptOO4cScaxn4Yv9N6tdqtOwvtDybfzViy7LcGE02qU4+1/LFPTw00SxQxhEUbgOs64muNDB/i1US+bDE2a7PF0TNIfqLibO0I/waNz/ADMBiXOdwbXg4Yk+04q8wXSsieKaYGNulQoHOAa4teV6usAln9hB3n6RGJ7zabLG0FtiWSXoMn9zitu
FXWymSolLn8B5DiWS5Nbq+Ob3DyZB3qcI6SIroQVYAg89Auia9/EYagjGdqbYqqafxoVPmvM5ey6asrVVSkQD6K+PCIqKFUAADQAdWZ1UaswA+OKm/Wmm1D1aE9y8rFTnWlXUQUzue9joMVGb7pLqI+DiHwGpxPdLhUa8LVyt8NrBJPP2601txfZgiJXtc7lGEpLJl9A9S4nqtNQv9hi6ZirrgSmvBw+Bf1PHype9Ctvnb+kx/wD551EZiN27A4uc6XhLa0nbFIG+R3cxlywGucVNQulOp3DxnCqqKFUAADQAdVuVXVUsJkgo2n7wDppirzZdpCVTZg+AG/8AHFRXVlSdZqiR/M9ShglnkWOJGdz0ADXFFlmmpIhVXaZUUb+D1xcM06R+rWyIQxDcH00PyGHkeRi7sWYnUk7yeYRmRlZSQQdQRixXAXKghlJ9oOQ/8wwyMvSOZSJ27MJAi9O88e80wqaKoiI+nEwwRoSONYLC9wkE0oK0ynefF8BiONIo1RFCqo0AHV7rYKK4qWK8HN2Ov64uVrq7dNwc6bvdcdDdQ0Jxass1lbpLN7CDxN0nyGJrtaLLGYLbEss3Q0h/vituFXXScJUSlj2DsHkObyVWcHWTUxO6Rdoea4XR1BPaMNTqejdg07/DHAyeHHBSeE44KTwnHAyeHAp37TphIUXs15mddU8sXmm9WudXFpuEhI8jv4tgsUlxlEkmq06nefF8BiKKOGNI40Coo0AHUqnOEVPPLC1E+0jFTysftvD/ALJ/v4XO1J71JKPmDiLN1ok+k0iea4gvNsn04Osi8idDhWVhqCCPRV0lPWQtDPGGQ4vdhntkm2urwE7n7vgeet9qrLhJswREjtY7lGEorJl9BJVOJ6rpC/2GLrmKuuGqbXBw9ka/rjXnKCqakq4KhelHB8xiimSaJWQ6qyhlPwPUWGoI+GM60vB1sE4G6RND5rxabM9zpoUhi4IIo0A2Mftfd++L7mP2wu3fF93H7YXb/h+7gZyug6UhPywM63Dtp4fxwM7VfbSRfacDO9R20cf2nC54f3qEfJ8Jnen9+jceTA4TOVrb6STL8sJmuyt/rsPNDhMw2d+itT56jCXS3SfQrIT/AOYwskbjkup8jrxc4W4w1a1aLyJho38w4sFdWU51hqJE8mOKbN11h0EhSUfWGKXOlI+gnp3Q968oYW82WuiaM1MZVhoVfd+eL5YjRkz0rcJTHtG/Y5uGCaeRY4o2dz0BRqcUWWYKWMVV2mVEG/g9cXDNIRPVrXEIYhu29ND8hiSR5HLuxZj0knUnn8m3LhaQ07ty4Du/kPUs50nC25pAN8Ugf5HceppPNGdUldfIkYp8w3an02at2Hc/KxR51bULV0w/mTFDdKGvXWnmDHtXoYei4UUVdSy08nQw3HuPYcV1HNRVMkEo0ZT9o7xx9cCWRQQHYA9I15kYtWWKutAlm9hB4m6Tia7WeyRmC3RLLN2v/c4rrjV10pkqJSx7B2DyHUbFXmhuUEpOiE7D+TYgfaTqN1phU0k0R9+Nlw6lWZT0g6dVjlkicPG5Vh0EHTFozcwKw1+8dko/X0ZotIraQzxr7aEa+a9QoqCqrpRFTxFm/AeeEpLPYFElWwqKvTURjoXF0zBXXAlS/Bxdka9TGMsV3rVtp2J1YLsN5r1Gcaxn4YvtP6vdqyPs4QsPJt/VQCegYpLLc6vTgqV9O9uSPQRjMdt9QuD7A0il5afqOdVWYgKCSewYoMstwfrVylFPAN+h3McVmYoqeI0loiEMfbJ7zYeR3Ys7EsTqSeq5JrSk1RSk9IEi+YwDqAeoMNQRjOtPsV0E38SPQ+a9SprZX1RHAU0jfHTdimybcJNDNJHEPvHFNk63RaGZ5JT90YprVbqXTgaWNT36aniZktvr1ucqNZYuWn6jB5u12SsuLaouxF2yNuUY9bsliXZplFVV9r9gOLhdKy4Sbc8pPco3KOr2er9TuVLNruDgN5HccQNtIOo53ptqhjlH+nN+Dc/SWuvqyOAp3Yd+mgxRZLmbQ1c4QeFN5xR2C10mhSnVm8T8o4AAGgGg45xmS2eo17Mi6RS6sn6jmqOSkibhJ4mlIPJj6AfM4rb3XVaCLbEcI3CKPkrjXrAOmMu1ZqbdSSE7zGFPmu7qOaYeFtVYO6MN9085S2+sqzpBA7+Q3Yo8mVT6NVTLGPCvKOKPLlqpNCIOEbvk34CqoAUADm75bVuNDJEB7ReVGfiMOjIzKw0YHQj9yZLJNrj+Er9RvSBqGqHfBJ+XMxQyzOEjjZ2PYo1OKHKNxn0aciBfjvOKLK1spdC6GZ+9/wC2EjSNQqKFA7ANOfzbaeBmFdEvIkOj/Bv3JkxNm1Qnvdz1G7f5Of8Aov8Alx6O21tc+zTwM/eeweZxQZNRdHrZto+BMUtDSUibFPAiD4DeepVVNFVQSQSrqjrocXO3y2+rkgk7DyT3jv8A3HliLg7VRD/i1+069Ruv+Tn/AKT/AJcWgtVdXvpBCSO1juUYt2UKWHR6t+FfwjcuIoo4kCRoqqOgAaenQ6A6bvQQRp1C/wBnW5UvIAE8e9D+mJI3jdkdSGU6EH9wgakDFri4GkhTwxIv2DqN2/yVR/Rk/L0jGWrVZ6tBJLJwsw6Ym3AYSNI1CIoVR0AbhxacAwnVdd+NITuCqT3Yq9zr5dRzNYPWUaspk9sBy1HvDB/cFthM9fSReKVR+OKcaR9Ru3+SqP6Mn5YPpgnlgkWSJyrqdxGLHmaKr2YKshJ+gN2PxRMYaV3A1IbC18ofaKrv6cVR1KHvXqWabJ6vIa2BfZOeWB7p/cGVYOFvMB7EDPiMaIvUbt/kqj+jJ+XFBOMu5l+hSVr/AASQ/keJIQ1M0Y+kW1xwD/DErh9jTsUDqU0Mc8TxSKGRgQQcXm1vbKxojvQ7427x1/I8G1U1cvcip944HUbt/kqj+jJ+XHy5mTZ2KOsbd0RyH8j1D//EADwRAAIBAgMFBQgBAQcFAQAAAAECAwARBCExEiAwQGEFBhBBcRMUIjJCUVKBUJEVIzNEYGKCVGOQkrGh/9oACAEDAQE/AP8AzZgE6UsDnXKlhRfK9SG7sf8ARYBOQpMOTmxpUVdB4TSbI2Rqf9FpAzZtkKVFUZDxkkCDrRJJueYUXIoxSA22a9g4F6II4uVuAASbDmUhZ+gpIlTcZgqkmnYsxJ5gZEUtrCw8XjV/X706Mhz5TDp9RqZbOeWRGfQUkKrrmd6WTbboOaha6DpuEAixFSQEZrpwjEdhWH2z4AFyBSjZUCsSM1PFSJ26ClgQa51ZFF7AVI5didwAsbAUmHAzagAN6eT6B++bw7WYr996SENmMjTKVNjwI/kX0qaK3xDTfw63a/28MQLpf7HhqrMbAVHCq5nM+M0u2bDTcjgZszkKVFUWA35ZAi9ecU7LA0DcA7zorixp42Q9N9clHp4Sx7JuNN6FdlB18JBdG4UcZc9KVAosPGeSw2R4qrMbAVHCq5nM8AkAEmncu1+dga6W+2+QCLGpIPNd0eJAIINSRlG6bii7AbhFiRwI4Sc20oAAWHixsC
aYliSaAJ0FJB5tWHGbngzybR2RoOew7We334Lxq/rTxshz8RqNx0Dixp1Kmx8YBeT03JRaRt4AmooQM213cQ2QX70kDNmchSIqaCmyU1h/lb14E0myLDU8+hswPCIBFjUsJXMaeA1G7IgcUylTY+GGHzHcxA+IHdRGc2AqOJU6ne2Re9s/GTJG9KgFoxvswVSTTMWJJ/gIm2kB4csVs10oajeljDjrRBBsagFkHXcxI+EHruJATm2QpVCiwHCnP92aiFo135pNprDQfwOHbMrxJIbEFd+SIP0NKLKBuTC8Z8EiZ+gpIkTqeJiPkHrSZKvpvTSbIsNT/BI2ywPJEgamjNGPqo4hfIE17x/soYhPMEUrq2hpxdWHSo4AM242I+gdaG6zBQSaZizEn+Dga6DpxiQKadRkuZoySt0rYvqaCirDxKeYNJMQbP8A1rXeecDJczRmkPnQlkH1Gknvk27LnLGN6d7nZHLom3lfOjC48r0QRw4Gs9vvxpRLf4tKUrwCL1AxuUO7NL9K7sD3GydRuNniF3ZH2FJ8+YBsb1FKGyOtMqtqKbDj6TTRuuo4INiDSnaUHjNEjeVMjx9RQYHduPvW2tB7OGAo4h/ICvbyfevbSflRkc/Ud6E2kXcXPENuyvtt0HNRTbWTa+LwI2mRp4nTgYdrgr9uQkg80oMRkwouKLmiSeLAP7wbkWcsh3J32VsNTzkU30tuPAraZGnjdNRvRNsuDyEkgQdaZixueQifYcHcw/1nckfbcnnYZvpbcsDTwA5rlTKymxG7C+0g+44jMqi5NHEL5LXvP+2mYsSTyUEl/hP68HNlb0NYf5D6+Mz7KW8zz8Mv0t+t0qGFiKfD+a0VI1HjC+y/Q8NiFBJp3Lm55UEggikbaUGpco2qD/DHjM+056fwEMu18J13ioOopsOp0NqaF18vCNtpAeFiDZQPueXw7ZlanP8Admo8kX08JW2UJ/gQSDeo3Dr14DIjaikQJexy4WIHwg9eXQ2dTWIPwj1pclX08MQ12C/b+CRyjXoEEAjkmUMCDToVNjy8puIuvgTYE0Tck8osTtoKXD/ka9nGovapHDNkLDiQSWOydOTZVYWIqSDZBYHllCuI/izHhO1kt9+SWJ20Wlw/5GljRdB4M6oLmpJC/pxopvpbk3F0YctB/iDwxBu4H28ACdBQhkPlQwx82oYdBQijH0iiqKLkCpHDHIWG+qs2gpcP+RpY0XQbkkwXIZmixY3J5CKYrk2lAgi45JhZmHXlcP8A4n68Gg2mJLUsMY8qAA0G4zBRc1JIXPTeWNm0FJh1HzZ0ABpuEgC5qSYnJdOMKjWFxkK9jH+Nexj/ABr2Mf40qhdOSmFpDynY+CXHY0QNoUf/AOUiNHO8bCzKSCOo32YKLmnkLnpurE7eWVJAi65ned1QXNPIznpyAJBuKilD5HXlZ02lv5jlO6CbXaMj/jEa7xYYYbtcuBZZhtbxIAualkLt03EgdtchSQovlc78k4XJczRYsbk8kCQbiopA468rKmw5Hlyfc2AiPFzkakIK734bbwkGIAzje36alN1B3Zpb/CNPAAk2ApcOxzY2pY0XQb7Mqi5NSTM2QyHKgkEEVHKHHXlJk2l6jkgCxAAzNdj4L3Ls+CH6rbTerV2nhvesDiYfNkNvUZioCdkqdVO5NLb4RrSozaClw4+o0FC6DgSTKuQzNMxY3J5cEg3FRSh8jrykoAc25Huz2d73jhK4vHDZj1by8e08P7r2tio9FZiy+jZ+LXtlrSwLq2Z4JIAuakmJyXIc3FLtfC2vIvIqamryS6ZLRgUIba8gK7CwAwXZ8SEWdhtP6nx73wbGIwuJA1BQ/riPIqDrTuznPnYZdoWOo4xIAuaaZmOygpIPNzc+M6bL38jx+72B987RjuPgj+NqHj3nw3tuypW84yHFRG6KeFLKFyGZokk3PNBGOgNCCQ+VDDHzahh18yaWJFNwOK8yrkMzQjeQ3c2FKqqLAbkibakUQRxu62B93wHtmHxzG/6Gm5iYhNh5ojo6Ff6iolKNJGdVYjgyy7OS68wInP00MM3maGHQa3NBEGijkHkVBmaLSS6ZLSRKnU788f1D98Xs3svFY6eMLE3syw2ntkBUaLGioosqgADoN3teD3ftnEoBk52x/wAs+BNLs5DXlkUMbFrUsCetBVGgHJEgC5NNMzHZQUsHm5ua04MWFabFJArBS5spOlzWLwGLwb7E8LL18jwcB2F2hjiCkRSP83yFdn92MBhbNKPbSfdtKVVUBVAAGgG93vg2MThMQPMFT+t+WUILDWr3z5dJWTqKR1cZci8yrkMzQSSQ3c2FKqqLAcOUshSVcmVgRWHaDtDAwyOiukiAkEXrGd0sFKS2HdoT9tVqXul2knyGNx62pu7fbA/yt/RhR7B7XH+Tev7E7W/6KWk7A7XfTBv+7CsL3Rx0hBndIh/7GsD3d7Owlm9n7Rx9T0AALDgd6cP7bsp384nDVGbop3ZZQgsNaJJJJ5JcOSAdqvdj+Ve7t+QowSUY3GqnxVipuKjlDjrxmdUGZraklyUWFJEqdTxWXaUiu6ON2oZsGxzQ7S+h5HGQifCzwn60YVCCu2h1UkHdMKE3N69hH1r3ePrXu8fWvd0617uv3Ne7L+Rr3Yfka92/3UcMfyo4d+lewk+1eyk/Gijj6TvYd7rs/bdKqdQKaBD0o4dvI0Y5FN7Go5drJsjwyQNaaYk2jH7pYfNzc0OPgsU2A7QgxA+W9m9DSMrqrKbgi4PI9rQe69r4lNFY7Q/5cmQDqKMUZ+mmw/4mmRl1HgrFWBFKwYAjgWHCeZVyGZoRvIbubClRV0HIyrtIRXdjGnE9nLGxu8J2D6eXI98INmbCYkDUFD+uWIBFqkg80/p4QvstY6HkGZVFya2pJflyWkiVOp5Tuzifdu1DCTZJlt+xyPefD+27KlbzjYOKiN0XlmkRdW8YX2k6jjNNnZMzSxEm8huftyxkaCaGdPmRwf6VBKs0Mcq6OoYfvkMVCJ8NNEdHRl/qKhBXbQ6q1uSLourU2IUaAmjiHOgAou51Y7kL7L9DxHkVNda2ZJNfhFKioMhy8i7SMK7q4v2/ZwiJ+KFtn9cj2jF7DtbGRjTbJH7z47Oi6mmxH4imldtW4ML7S9RwmDHIUsarnqea7rTmHtR4PplQ/wBRnyPeiP2fa6P+cS8RnVdTTYgeQvTSu3nxI32GB/hexb/25hLfc/8Azke+K2xOBf7qw4JIGtNOg0zppnbpyED3Gyf4Tu+u125CfxRj/wDnI98tcB6vvs6rqabEfiKZmbU8kCQQRSMHUH+D7qJt9qzv+MR5Hvl82A9X3WdF1NPOxyXKiSeVik2G6Vr/AAXc2O5x0vVV5Hvl82A9X3JpJFyAsPvvN81Z0mh5GGW3wtp/AubKx6V3Si2OzC/5yseR75fNgPV9wgEWNSQlc103bbTgUYxak8+SgkuNk6/wExtGa7Fh9j2XhE/7YJ/efI98vmwHq+9LD9S7gya9bQoC1+SBINxUb7a359kMkkMQ1dwKjQIiINFUDke+WuA9X35of
qXkP//Z) ``` ! pip install pymystem3 ! pip install --force-reinstall pymorphy2 !pip install pymorphy2-dicts-ru import pymorphy2 import re morph = pymorphy2.MorphAnalyzer() # убираем все небуквенные символы regex = re.compile("[А-Яа-яA-z]+") def words_only(text, regex=regex): try: return regex.findall(text.lower()) except: return [] for i in train.comment[10].split(): lemmas = morph.parse(i) print(lemmas[0]) from functools import lru_cache @lru_cache(maxsize=128) def lemmatize_word(token, pymorphy=morph): return pymorphy.parse(token)[0].normal_form def lemmatize_text(text): return [lemmatize_word(w) for w in text] tokens = words_only(train.comment[10]) print(lemmatize_text(tokens)) from nltk.corpus import stopwords import nltk nltk.download('stopwords') mystopwords = stopwords.words('russian') def remove_stopwords(lemmas, stopwords = mystopwords): return [w for w in lemmas if not w in stopwords] lemmas = lemmatize_text(tokens) print(*remove_stopwords(lemmas)) def remove_stopwords(lemmas, stopwords = mystopwords): return [w for w in lemmas if not w in stopwords and len(w) > 3] print(*remove_stopwords(lemmas)) def clean_text(text): tokens = words_only(text) lemmas = lemmatize_text(tokens) return remove_stopwords(lemmas) for i in range(20): print(* clean_text(train.comment[i])) from tqdm.auto import trange new_comments = [] for i in trange(len(train.comment), desc='loop'): new_comments.append(" ".join(clean_text(train.comment[i]))) new_comments[:10] vec3 = CountVectorizer(ngram_range=(1, 2)) # строим BoW для слов bow3 = vec3.fit_transform(new_comments) list(vec3.vocabulary_.items())[100:120] bow3 clf3 = LogisticRegression(random_state=0, max_iter=500, class_weight='balanced') clf3.fit(bow3, train.toxic) pred = clf3.predict(bow3) print(classification_report(pred, train.toxic)) test new_commentstest = [] for i in trange(len(test.comment), desc='loop'): new_commentstest.append(" ".join(clean_text(test.comment[i]))) bow_test_pred3 = test.copy() bow_test_pred3['newcomment'] = new_commentstest bow_test_pred3.tail() bow_test_pred3['toxic'] = clf3.predict(vec3.transform(bow_test_pred3.newcomment)) bow_test_pred3['toxic'] = bow_test_pred3['toxic'].astype(int) bow_test_pred3.drop('comment', axis=1, inplace=True) bow_test_pred3.drop('newcomment', axis=1, inplace=True) bow_test_pred3 confusion_matrix(bow_test_pred2.toxic, bow_test_pred3.toxic) bow_test_pred3.to_csv('bow_v3.csv', index=False) # !kaggle competitions submit -c toxic-comments-classification-apdl-2021 -f bow_v3.csv -m "kirill_setdekov bow3 with preprocessing" !pip install scikit-learn==0.24 from sklearn.ensemble import RandomForestClassifier from sklearn.experimental import enable_halving_search_cv # noqa from sklearn.model_selection import HalvingGridSearchCV ``` nor run -too slow ``` # rnd_reg = RandomForestClassifier( ) # # hyper-parameter space # param_grid_RF = { # 'n_estimators' : [10,20,50,100,200,500,1000], # 'max_features' : [0.6,0.8,"auto","sqrt"], # } # search_two = HalvingGridSearchCV(rnd_reg, param_grid_RF, factor=5, scoring='accuracy', # n_jobs=-1, random_state=0, verbose=2).fit(bow3, train.toxic) # search_two.best_params_ rnd_reg_2 = RandomForestClassifier(n_estimators=1000, verbose=5, n_jobs=-1) search_no = rnd_reg_2.fit(bow3, train.toxic) bow_test_pred4 = test.copy() bow_test_pred4['newcomment'] = new_commentstest bow_test_pred4.tail() bow_test_pred4['toxic'] = search_no.predict(vec3.transform(bow_test_pred4.newcomment)) bow_test_pred4['toxic'] = bow_test_pred4['toxic'].astype(int) bow_test_pred4.drop('comment', axis=1, 
inplace=True) bow_test_pred4.drop('newcomment', axis=1, inplace=True) bow_test_pred4 confusion_matrix(bow_test_pred4.toxic, bow_test_pred3.toxic) bow_test_pred4.to_csv('bow_v4.csv', index=False) !kaggle competitions submit -c toxic-comments-classification-apdl-2021 -f bow_v4.csv -m "kirill_setdekov bow4 with preprocessing and RF" ```
true
code
0.279878
null
null
null
null
# Minimal end-to-end causal analysis with ```cause2e``` This notebook shows a minimal example of how ```cause2e``` can be used as a standalone package for end-to-end causal analysis. It illustrates how we can proceed in stringing together many causal techniques that have previously required fitting together various algorithms from separate sources with unclear interfaces. Additionally, the numerous techniques have been packed into only two easy-to-use functions for causal discovery and causal estimation. Hopefully, you will find this notebook helpful in guiding you through the process of setting up your own causal analyses for custom problems. The overall structure should always be the same regardless of the application domain. For more advanced features, check out the other notebooks. ### Imports By the end of this notebook, you will probably be pleasantly surprised by the fact that we did not have to import lots of different packages to perform a full causal analysis consisting of different subtasks. ``` import os from cause2e import path_mgr, knowledge, discovery ``` ## Set up paths to data and output directories This step is conveniently handled by the ```PathManager``` class, which avoids having to wrestle with paths throughout the multistep causal analysis. If we want to perform the analysis in a directory ```'dirname'``` that contains ```'dirname/data'``` and ```'dirname/output'``` as subdirectories, we can also use ```PathManagerQuick``` for an even easier setup. The ```experiment_name``` argument is used for generating output files with meaningful names, in case we want to study multiple scenarios (e.g. with varying model parameters). For this analysis, we use the sprinkler dataset. Unfortunately, there are still some problems to be sorted out with categorical data in the estimation step, but continuous and discrete data work fine. Therefore, we use a version of the dataset where only the seasons ```'Spring'``` and ```'Summer'``` are present, such that we can replace these values by 0 and 1. ``` cwd = os.getcwd() wd = os.path.dirname(cwd) paths = path_mgr.PathManagerQuick(experiment_name='sprinkler', data_name='sprinkler.csv', directory=wd ) ``` ## Learn the causal graph from data and domain knowledge Model-based causal inference leverages qualitative knowledge about pairwise causal connections to obtain unbiased estimates of quantitative causal effects. The qualitative knowledge is encoded in the causal graph, so we must recover this graph before we can start actually estimating the desired effects. For learning the graph from data and domain knowledge, we use the ```StructureLearner``` class. ``` learner = discovery.StructureLearner(paths) ``` ### Read the data The ```StructureLearner``` has reading methods for csv and parquet files. ``` learner.read_csv(index_col=0) ``` The first step in the analysis should be an assessment of which variables we are dealing with. In the sprinkler dataset, each sample tells us - the current season - whether it is raining - whether our lawn sprinkler is activated - whether our lawn is slippery - whether our lawn is wet. ``` learner.variables ``` ### Preprocess the data As mentioned above, currently there are problems in the estimation step with categorical data, so we use this occasion to showcase ```cause2e```'s built-in preprocessing functionalities. We define a function that replaces instances of ```'Summer'``` by 1, and instances of ```'Spring'``` by 0. Afterwards we apply it to our data and throw out the categorical ```'Season'``` column. 
For more preprocessing options, check out the corresponding preprocessing notebook.

```
def is_summer(data, col_name):
    return (data[col_name] == 'Summer').apply(int)

learner.combine_variables(name='Season_binary', func=is_summer, input_cols=['Season'], keep_old=False)
```

It is necessary to communicate to the ```StructureLearner``` whether the variables are discrete, continuous, or both. We check how many unique values each variable takes on in our sample and deduce that all variables are discrete.

```
learner.data.nunique()
```

This information is passed to the ```StructureLearner``` by indicating the exact sets of discrete and continuous variables.

```
learner.discrete = learner.variables
learner.continuous = set()
```

### Provide domain knowledge

Humans can often infer parts of the causal graph from domain knowledge. The nodes are always just the variables in the data, so the problem of finding the right graph comes down to selecting the right edges between them. As a reminder: the correct causal graph has an edge from variable A to variable B if and only if variable A directly influences variable B (changing the value of variable A changes the value of variable B if we keep all other variables fixed).

There are three ways of passing domain knowledge for the graph search:

- Indicate which edges must be present in the causal graph.
- Indicate which edges must not be present in the causal graph.
- Indicate a temporal order in which the variables have been created. This is then used to generate forbidden edges, since the future can never influence the past.

In this example, we use the ```knowledge.EdgeCreator``` to prescribe that

- no variables are direct causes of the season,
- the lawn being slippery is not a direct cause of any other variable,
- turning the sprinkler on or off directly affects the wetness of the lawn,
- turning the sprinkler on or off does not directly affect the weather.

```
edge_creator = knowledge.EdgeCreator()
edge_creator.forbid_edges_from_groups({'Season_binary'}, incoming=learner.variables)
edge_creator.forbid_edges_from_groups({'Slippery'}, outgoing=learner.variables)
edge_creator.require_edge('Sprinkler', 'Wet')
edge_creator.forbid_edge('Sprinkler', 'Rain')
```

There is a fourth way of passing knowledge which is not used in learning the graph, but in validating the quantitative estimates resulting from our end-to-end causal analysis. We often know beforehand what some of the quantitative effects should look like, e.g.

- turning the sprinkler on should have a positive overall effect (-> average treatment effect; read below if you are not familiar with the types of causal effects) on the lawn being wet and
- making the lawn wet should have a positive overall effect on the lawn being slippery.

Instead of checking manually at the end whether our expectations have been met, we can automate this validation by using the ```knowledge.ValidationCreator```. For illustration, we also add two more validations that should fail:

- the sprinkler has a negative natural direct effect on the weather and
- the natural indirect effect of the lawn being slippery on the season is between 0.2 and 0.4 (remember to normalize your data before such a validation if they are not measured on the same scale).
``` validation_creator = knowledge.ValidationCreator() validation_creator.add_expected_effect(('Sprinkler', 'Wet', 'nonparametric-ate'), ('greater', 0)) validation_creator.add_expected_effect(('Wet', 'Slippery', 'nonparametric-ate'), ('greater', 0)) validation_creator.add_expected_effect(('Sprinkler', 'Rain', 'nonparametric-nde'), ('less', 0)) validation_creator.add_expected_effect(('Slippery', 'Season_binary', 'nonparametric-nie'), ('between', 0.2, 0.4)) ``` We pass the knowledge to the ```StructureLearner``` and check if it has been correctly received. ``` learner.set_knowledge(edge_creator=edge_creator, validation_creator=validation_creator) ``` ### Apply a structure learning algorithm Now that the ```StructureLearner``` has received the data and the domain knowledge, we can try to recover the original graph using causal discovery methods provided by the internally called ```py-causal``` package. There are many parameters that can be tuned (choice of algorithm, search score, independence test, hyperparameters, ...) and we can get an overview by calling some informative methods of the learner. Reasonable default arguments are provided (FGES with CG-BIC score for possibly mixed datatypes and respecting domain knowledge), so we use these for our minimal example. ``` learner.run_quick_search() ``` The output of the search is a proposed causal graph. We can ignore the warning about stopping the Java Virtual Machine (needed by ```py-causal``` which is a wrapper around the ```TETRAD``` software that is written in Java) if we do not run into any problems. If the algorithm cannot orient all edges, we need to do this manually. Therefore, the output includes a list of all undirected edges, so we do not miss them in complicated graphs with many variables and edges. In our case, all the edges are already oriented. The result seems reasonable: - The weather depends on the season. - The sprinkler use also depends on the season. - The lawn will be wet if it rains or if the sprinkler is activated. - The lawn will be slippery if it is wet. ### Saving the graph ```Cause2e``` allows us to save the result of our search to different file formats with the ```StructureLearner.save_graphs``` method. The name of the file is determined by the ```experiment_name``` parameter from the ```PathManager```. If the result of the graph search is already a directed acyclic graph that respects our domain knowledge, the graph is automatically saved, as we can see from the above output. Check out the graph postprocessing notebook for information on how to proceed when the result of the search needs further adjustments. ## Estimate causal effects from the graph and the data After we have successfully recovered the causal graph from data and domain knowledge, we can use it to estimate quantitative causal effects between the variables in the graph. It is pleasant that we can use the same graph and data to estimate multiple causal effects, e.g. the one that the Sprinkler has on the lawn being slippery, as well as the one that the season has on the rain probability, without having to repeat the previous steps. Once we have managed to qualitatively model the data generating process, we are already in a very good position. The remaining challenges can be tackled with the core functionality from the ```DoWhy``` package, which we have wrapped into a single easy-to-use convenience method. 
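For readers who are curious about what such a convenience method does internally, here is a rough, hedged sketch of how a single effect could be estimated with ```DoWhy``` directly. This is not ```cause2e```'s actual internal code; the preprocessed DataFrame ```df``` and the learnt graph as a string ```graph_str``` (e.g. in DOT or GML format) are assumed to be available.

```
# Illustrative sketch only (not cause2e's internal implementation):
# estimating a single average treatment effect with DoWhy, given the learnt graph.
from dowhy import CausalModel

model = CausalModel(data=df,            # assumed: preprocessed pandas DataFrame
                    treatment='Sprinkler',
                    outcome='Wet',
                    graph=graph_str)    # assumed: graph from the search as a DOT/GML string
identified_estimand = model.identify_effect()
estimate = model.estimate_effect(identified_estimand,
                                 method_name='backdoor.linear_regression')
print(estimate.value)
```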
Usually, all estimation topics are handled by the ```estimator.Estimator```, but the ```StructureLearner``` has the possibility to run a quick analysis of all causal effects with preset parameters. For more detailed analyses, check out the other notebooks that describe the causal identification and estimation process step by step. ``` learner.run_all_quick_analyses() ``` The output consists of a detailed analysis of the causal effects in our system. ### Heatmaps The first three images are heatmaps, where the (i, j)-entry shows the causal effect of variable i on variable j. The three heatmaps differ in the type of causal effect that they are describing: - **Average Treatment Effect (ATE)**: Shows how the outcome variable varies if we vary the treatment variable. This comprises direct and indirect effects. The sprinkler influences the lawn being slippery, even if this does not happen directly, but via its influence on the lawn being wet. - **Natural Direct Effect (NDE)**: Shows how the outcome variable varies if we vary the treatment variable and keep all other variables fixed. This comprises only direct effects. The sprinkler does not directly influence the lawn being slippery, as we can read off from the heatmap. - **Natural Indirect Effect (NIE)**: Shows the difference between ATE and NDE. By definition, this comprises only indirect effects. The sprinkler has a strong indirect influence on the lawn being slippery, as we can read off from the heatmap. In our example, we can easily identify from the graph if an effect is direct or indirect, but in examples where a variable simultaneously has a direct and an indirect influence on another variable, it is very challenging to separate the effects without resorting to the algebraic methods that ```cause2e``` uses internally. ### Validations The next output shows if our model has passed each of the validations, based on the expected causal effects that we have communicated before running the causal end-to-end analysis. If we are interested in a specific effect, say, the effect of the sprinkler on the lawn being slippery, the estimation of this effect by our learnt causal model can be trusted more if the estimation for other effects match our expectations. We see that the results of the validations turned out exactly as described above (in practice we would not want validations to fail, this was only for demonstrative purposes). ### Numeric tables The three numeric tables show the same information as the three previous heatmaps, only in quantitative instead of visual form. ### PDF report ```Cause2e``` automatically generates a pdf report that contains - the causal graph indicating all qualitative relationships, - the three heatmaps visualizing all quantitative causal effects, - the results of the validations, - the three numeric tables reporting all quantitative causal effects. This is helpful if we want to communicate our findings to other people, or if we want to modify the analysis at a later time and compare the outcome for both methods. ## Discussion of the results The heatmaps show the effects that we would expect given our causal graph: - There is less rain in summer than in spring. - Sprinklers are more often turned on in summer than in spring. - Rain increases the wetness of the lawn. - Turning the sprinkler on also increases the wetness of the lawn. - Wetting the lawn causes it to be slippery. 
It is interesting to see that the first two effects roughly cancel each other out, resulting in a small ATE of 0.1 of ```'Season_binary'``` on ```'Slippery'``` and ```'Wet'```. In general, it is a good strategy to look at the heatmaps to discover the qualitative nature of the different causal effects and then inspect the numeric tables for the exact numbers if needed.

Another noteworthy entry is the overall effect of ```'Sprinkler'``` on ```'Wet'```. The result is 0.638, so turning on the sprinkler makes it more likely for the lawn to be wet, as it should. However, we might ask ourselves: "Why is the effect not 1? Whenever we turn on the sprinkler, the lawn will be wet!" This can be explained by looking at the definition of our chosen effect type, the nonparametric average treatment effect (ATE): the ATE tells us how much (on average) we change the outcome by changing the treatment. In our case, we can distinguish between two possible scenarios: if it is raining, then the lawn is wet anyway, so turning the sprinkler on does not change the outcome at all. Only if it is not raining does turning on the sprinkler change the state of the lawn to wet. We can convince ourselves that this is the correct explanation by looking at the proportion of samples where it is not raining.

```
1 - sum(learner.data['Rain']) / len(learner.data)
```

We recover the same number of 0.638. Additionally, we can restrict our data to the instances where it is not raining. If we now repeat the causal analysis, the effect is indeed 1 (ignore the warnings caused by the now degenerate dataset). This procedure can be generalized to analyzing other conditional causal effects.

```
learner.data = learner.data[learner.data['Rain']==0]
learner.run_all_quick_analyses()
```
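As a small illustration of that generalization (an addition on our part, not part of the original analysis), the same filtering pattern can be applied to any other variable. Note that ```learner.data``` has already been restricted to the dry samples in the cell above, so the data would first have to be re-read and preprocessed again before applying a different condition.

```
# Sketch: conditional analysis on the summer samples instead of the dry samples.
# Assumes learner.data has first been restored to the full preprocessed dataset
# (e.g. by re-reading the csv and repeating the preprocessing steps above).
learner.data = learner.data[learner.data['Season_binary'] == 1]
learner.run_all_quick_analyses()
```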
# Nearest neighbors This notebook illustrates the classification of the nodes of a graph by the [k-nearest neighbors algorithm](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm), based on the labels of a few nodes. ``` from IPython.display import SVG import numpy as np from sknetwork.data import karate_club, painters, movie_actor from sknetwork.classification import KNN from sknetwork.embedding import GSVD from sknetwork.visualization import svg_graph, svg_digraph, svg_bigraph ``` ## Graphs ``` graph = karate_club(metadata=True) adjacency = graph.adjacency position = graph.position labels_true = graph.labels seeds = {i: labels_true[i] for i in [0, 33]} knn = KNN(GSVD(3), n_neighbors=1) labels_pred = knn.fit_transform(adjacency, seeds) precision = np.round(np.mean(labels_pred == labels_true), 2) precision image = svg_graph(adjacency, position, labels=labels_pred, seeds=seeds) SVG(image) # soft classification (here probability of label 1) knn = KNN(GSVD(3), n_neighbors=2) knn.fit(adjacency, seeds) membership = knn.membership_ scores = membership[:,1].toarray().ravel() image = svg_graph(adjacency, position, scores=scores, seeds=seeds) SVG(image) ``` ## Directed graphs ``` graph = painters(metadata=True) adjacency = graph.adjacency position = graph.position names = graph.names rembrandt = 5 klimt = 6 cezanne = 11 seeds = {cezanne: 0, rembrandt: 1, klimt: 2} knn = KNN(GSVD(3), n_neighbors=2) labels = knn.fit_transform(adjacency, seeds) image = svg_digraph(adjacency, position, names, labels=labels, seeds=seeds) SVG(image) # soft classification membership = knn.membership_ scores = membership[:,0].toarray().ravel() image = svg_digraph(adjacency, position, names, scores=scores, seeds=[cezanne]) SVG(image) ``` ## Bipartite graphs ``` graph = movie_actor(metadata=True) biadjacency = graph.biadjacency names_row = graph.names_row names_col = graph.names_col inception = 0 drive = 3 budapest = 8 seeds_row = {inception: 0, drive: 1, budapest: 2} knn = KNN(GSVD(3), n_neighbors=2) labels_row = knn.fit_transform(biadjacency, seeds_row) labels_col = knn.labels_col_ image = svg_bigraph(biadjacency, names_row, names_col, labels_row, labels_col, seeds_row=seeds_row) SVG(image) # soft classification membership_row = knn.membership_row_ membership_col = knn.membership_col_ scores_row = membership_row[:,1].toarray().ravel() scores_col = membership_col[:,1].toarray().ravel() image = svg_bigraph(biadjacency, names_row, names_col, scores_row=scores_row, scores_col=scores_col, seeds_row=seeds_row) SVG(image) ```
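The score reported above for the karate club graph (stored in the variable ```precision```, although it measures accuracy over all nodes) also counts the two seed nodes, whose labels were supplied to the classifier. As a small extension, not part of the original notebook, one can evaluate only on the nodes that were not given as seeds:

```
# Sketch: accuracy restricted to the non-seed nodes of the karate club graph.
graph = karate_club(metadata=True)
adjacency = graph.adjacency
labels_true = graph.labels
seeds = {0: labels_true[0], 33: labels_true[33]}

knn = KNN(GSVD(3), n_neighbors=1)
labels_pred = knn.fit_transform(adjacency, seeds)

mask = np.array([i not in seeds for i in range(len(labels_true))])
np.round(np.mean(labels_pred[mask] == labels_true[mask]), 2)
```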
# WorkFlow ## Classes ## Load the data ## Test Modelling ## Modelling **<hr>** ## Classes ``` NAME = "change the conv2d" BATCH_SIZE = 32 import os import cv2 import torch import numpy as np def load_data(img_size=112): data = [] index = -1 labels = {} for directory in os.listdir('./data/'): index += 1 labels[f'./data/{directory}/'] = [index,-1] print(len(labels)) for label in labels: for file in os.listdir(label): filepath = label + file img = cv2.imread(filepath,cv2.IMREAD_GRAYSCALE) img = cv2.resize(img,(img_size,img_size)) img = img / 255.0 data.append([ np.array(img), labels[label][0] ]) labels[label][1] += 1 for _ in range(12): np.random.shuffle(data) print(len(data)) np.save('./data.npy',data) return data import torch def other_loading_data_proccess(data): X = [] y = [] print('going through the data..') for d in data: X.append(d[0]) y.append(d[1]) print('splitting the data') VAL_SPLIT = 0.25 VAL_SPLIT = len(X)*VAL_SPLIT VAL_SPLIT = int(VAL_SPLIT) X_train = X[:-VAL_SPLIT] y_train = y[:-VAL_SPLIT] X_test = X[-VAL_SPLIT:] y_test = y[-VAL_SPLIT:] print('turning data to tensors') X_train = torch.from_numpy(np.array(X_train)) y_train = torch.from_numpy(np.array(y_train)) X_test = torch.from_numpy(np.array(X_test)) y_test = torch.from_numpy(np.array(y_test)) return [X_train,X_test,y_train,y_test] ``` **<hr>** ## Load the data ``` REBUILD_DATA = True if REBUILD_DATA: data = load_data() np.random.shuffle(data) X_train,X_test,y_train,y_test = other_loading_data_proccess(data) ``` ## Test Modelling ``` import torch import torch.nn as nn import torch.nn.functional as F # class Test_Model(nn.Module): # def __init__(self): # super().__init__() # self.conv1 = nn.Conv2d(1, 6, 5) # self.pool = nn.MaxPool2d(2, 2) # self.conv2 = nn.Conv2d(6, 16, 5) # self.fc1 = nn.Linear(16 * 25 * 25, 120) # self.fc2 = nn.Linear(120, 84) # self.fc3 = nn.Linear(84, 36) # def forward(self, x): # x = self.pool(F.relu(self.conv1(x))) # x = self.pool(F.relu(self.conv2(x))) # x = x.view(-1, 16 * 25 * 25) # x = F.relu(self.fc1(x)) # x = F.relu(self.fc2(x)) # x = self.fc3(x) # return x class Test_Model(nn.Module): def __init__(self): super().__init__() self.pool = nn.MaxPool2d(2, 2) self.conv1 = nn.Conv2d(1, 32, 5) self.conv3 = nn.Conv2d(32,64,5) self.conv2 = nn.Conv2d(64, 128, 5) self.fc1 = nn.Linear(128 * 10 * 10, 512) self.fc2 = nn.Linear(512, 256) self.fc4 = nn.Linear(256,128) self.fc3 = nn.Linear(128, 36) def forward(self, x,shape=False): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv3(x))) x = self.pool(F.relu(self.conv2(x))) if shape: print(x.shape) x = x.view(-1, 128 * 10 * 10) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc4(x)) x = self.fc3(x) return x device = torch.device('cuda') model = Test_Model().to(device) preds = model(X_test.reshape(-1,1,112,112).float().to(device),True) preds[0] optimizer = torch.optim.SGD(model.parameters(),lr=0.1) criterion = nn.CrossEntropyLoss() EPOCHS = 5 loss_logs = [] from tqdm import tqdm PROJECT_NAME = "Sign-Language-Recognition" def test(net,X,y): correct = 0 total = 0 net.eval() with torch.no_grad(): for i in range(len(X)): real_class = torch.argmax(y[i]).to(device) net_out = net(X[i].view(-1,1,112,112).to(device).float()) net_out = net_out[0] predictied_class = torch.argmax(net_out) if predictied_class == real_class: correct += 1 total += 1 return round(correct/total,3) import wandb len(os.listdir('./data/')) import random # index = random.randint(0,29) # print(index) # wandb.init(project=PROJECT_NAME,name=NAME) # for _ in 
tqdm(range(EPOCHS)): # for i in range(0,len(X_train),BATCH_SIZE): # X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device) # y_batch = y_train[i:i+BATCH_SIZE].to(device) # model.to(device) # preds = model(X_batch.float()) # loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long)) # optimizer.zero_grad() # loss.backward() # optimizer.step() # wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index])}) # wandb.finish() import matplotlib.pyplot as plt import pandas as pd df = pd.Series(loss_logs) df.plot.line(figsize=(12,6)) test(model,X_test,y_test) test(model,X_train,y_train) preds X_testing = X_train y_testing = y_train correct = 0 total = 0 model.eval() with torch.no_grad(): for i in range(len(X_testing)): real_class = torch.argmax(y_testing[i]).to(device) net_out = model(X_testing[i].view(-1,1,112,112).to(device).float()) net_out = net_out[0] predictied_class = torch.argmax(net_out) # print(predictied_class) if str(predictied_class) == str(real_class): correct += 1 total += 1 print(round(correct/total,3)) # for real,pred in zip(y_batch,preds): # print(real) # print(torch.argmax(pred)) # print('\n') ``` ## Modelling ``` # conv2d_output # conv2d_1_ouput # conv2d_2_ouput # output_fc1 # output_fc2 # output_fc4 # max_pool2d_keranl # max_pool2d # num_of_linear # activation # best num of epochs # best optimizer # best loss ## best lr class Test_Model(nn.Module): def __init__(self,conv2d_output=128,conv2d_1_ouput=32,conv2d_2_ouput=64,output_fc1=512,output_fc2=256,output_fc4=128,output=36,activation=F.relu,max_pool2d_keranl=2): super().__init__() print(conv2d_output) print(conv2d_1_ouput) print(conv2d_2_ouput) print(output_fc1) print(output_fc2) print(output_fc4) print(activation) self.conv2d_output = conv2d_output self.pool = nn.MaxPool2d(max_pool2d_keranl) self.conv1 = nn.Conv2d(1, conv2d_1_ouput, 5) self.conv3 = nn.Conv2d(conv2d_1_ouput,conv2d_2_ouput,5) self.conv2 = nn.Conv2d(conv2d_2_ouput, conv2d_output, 5) self.fc1 = nn.Linear(conv2d_output * 10 * 10, output_fc1) self.fc2 = nn.Linear(output_fc1, output_fc2) self.fc4 = nn.Linear(output_fc2,output_fc4) self.fc3 = nn.Linear(output_fc4, output) self.activation = activation def forward(self, x,shape=False): x = self.pool(self.activation(self.conv1(x))) x = self.pool(self.activation(self.conv3(x))) x = self.pool(self.activation(self.conv2(x))) if shape: print(x.shape) x = x.view(-1, self.conv2d_output * 10 * 10) x = self.activation(self.fc1(x)) x = self.activation(self.fc2(x)) x = self.activation(self.fc4(x)) x = self.fc3(x) return x # conv2d_output # conv2d_1_ouput # conv2d_2_ouput # output_fc1 # output_fc2 # output_fc4 # max_pool2d_keranl # max_pool2d # num_of_linear # best num of epochs # best loss ## best lr # batch size EPOCHS = 3 BATCH_SIZE = 32 # conv2d_output # conv2d_1_ouput # conv2d_2_ouput # output_fc1 # output_fc2 # output_fc4 # max_pool2d_keranl # max_pool2d # num_of_linear # activation = # best num of epochs # best optimizer = # best loss ## best lr def get_loss(criterion,y,model,X): preds = model(X.view(-1,1,112,112).to(device).float()) preds.to(device) loss = criterion(preds,torch.tensor(y,dtype=torch.long).to(device)) loss.backward() return loss.item() optimizers = [torch.optim.SGD,torch.optim.Adadelta,torch.optim.Adagrad,torch.optim.Adam,torch.optim.AdamW,torch.optim.SparseAdam,torch.optim.Adamax] for optimizer in optimizers: model = Test_Model(activation=nn.ReLU()) criterion = 
optimizer(model.parameters(),lr=0.1) wandb.init(project=PROJECT_NAME,name=f'optimizer-{optimizer}') for _ in tqdm(range(EPOCHS)): for i in range(0,len(X_train),BATCH_SIZE): X_batch = X_train[i:i+BATCH_SIZE] y_batch = y_train[i:i+BATCH_SIZE] model.to(device) preds = model(X_batch.float()) loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long)) optimizer.zero_grad() loss.backward() optimizer.step() wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index]),'val_loss':get_loss(criterion,y_test,model,X_test)}) print(f'{torch.argmax(preds[index])} \n {y_batch[index]}') print(f'{torch.argmax(preds[1])} \n {y_batch[1]}') print(f'{torch.argmax(preds[2])} \n {y_batch[2]}') print(f'{torch.argmax(preds[3])} \n {y_batch[3]}') print(f'{torch.argmax(preds[4])} \n {y_batch[4]}') wandb.finish() # activations = [nn.ELU(),nn.LeakyReLU(),nn.PReLU(),nn.ReLU(),nn.ReLU6(),nn.RReLU(),nn.SELU(),nn.CELU(),nn.GELU(),nn.SiLU(),nn.Tanh()] # for activation in activations: # model = Test_Model(activation=activation) # optimizer = torch.optim.SGD(model.parameters(),lr=0.1) # criterion = nn.CrossEntropyLoss() # index = random.randint(0,29) # print(index) # wandb.init(project=PROJECT_NAME,name=f'activation-{activation}') # for _ in tqdm(range(EPOCHS)): # for i in range(0,len(X_train),BATCH_SIZE): # X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device) # y_batch = y_train[i:i+BATCH_SIZE].to(device) # model.to(device) # preds = model(X_batch.float()) # loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long)) # optimizer.zero_grad() # loss.backward() # optimizer.step() # wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index]),'val_loss':get_loss(criterion,y_test,model,X_test)}) # print(f'{torch.argmax(preds[index])} \n {y_batch[index]}') # print(f'{torch.argmax(preds[1])} \n {y_batch[1]}') # print(f'{torch.argmax(preds[2])} \n {y_batch[2]}') # print(f'{torch.argmax(preds[3])} \n {y_batch[3]}') # print(f'{torch.argmax(preds[4])} \n {y_batch[4]}') # wandb.finish() for real,pred in zip(y_batch,preds): print(real) print(torch.argmax(pred)) print('\n') ```
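In the optimizer-comparison loop above, the result of ```optimizer(model.parameters(),lr=0.1)``` is assigned to ```criterion``` and then used as the loss function, the batches are neither reshaped to ```(N, 1, 112, 112)``` nor moved to the GPU, and ```index``` is never defined. Below is a hedged sketch of how that experiment could look with those issues fixed. The ```accuracy``` helper is new, and the optimizer list is restricted to optimizers that work with dense parameters (```SparseAdam```, for instance, does not).

```
# Sketch (under the assumptions stated above) of the optimizer comparison.
def accuracy(net, X, y):
    # Fraction of correctly classified samples, evaluated batch by batch.
    net.eval()
    preds = []
    with torch.no_grad():
        for i in range(0, len(X), BATCH_SIZE):
            out = net(X[i:i+BATCH_SIZE].view(-1, 1, 112, 112).to(device).float())
            preds.append(out.argmax(dim=1).cpu())
    return (torch.cat(preds) == y).float().mean().item()

optimizer_classes = [torch.optim.SGD, torch.optim.Adagrad, torch.optim.Adam, torch.optim.AdamW]
for optimizer_class in optimizer_classes:
    model = Test_Model(activation=nn.ReLU()).to(device)
    optimizer = optimizer_class(model.parameters(), lr=0.1)  # the optimizer... (lr kept at 0.1 to mirror the original cell)
    criterion = nn.CrossEntropyLoss()                        # ...and the loss are separate objects
    wandb.init(project=PROJECT_NAME, name=f'optimizer-{optimizer_class.__name__}')
    for _ in tqdm(range(EPOCHS)):
        model.train()  # switch back to training mode (accuracy() puts the net in eval mode)
        for i in range(0, len(X_train), BATCH_SIZE):
            X_batch = X_train[i:i+BATCH_SIZE].view(-1, 1, 112, 112).to(device).float()
            y_batch = y_train[i:i+BATCH_SIZE].to(device).long()
            preds = model(X_batch)
            loss = criterion(preds, y_batch)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        wandb.log({'loss': loss.item(),
                   'accuracy': accuracy(model, X_train, y_train) * 100,
                   'val_accuracy': accuracy(model, X_test, y_test) * 100})
    wandb.finish()
```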
<!-- dom:TITLE: Week 2 January 11-15: Introduction to the course and start Variational Monte Carlo -->
# Week 2 January 11-15: Introduction to the course and start Variational Monte Carlo
<!-- dom:AUTHOR: Morten Hjorth-Jensen Email [email protected] at Department of Physics and Center for Computing in Science Education, University of Oslo, Oslo, Norway & Department of Physics and Astronomy and Facility for Rare Ion Beams, Michigan State University, East Lansing, Michigan, USA -->
<!-- Author: -->
**Morten Hjorth-Jensen Email [email protected]**, Department of Physics and Center for Computing in Science Education, University of Oslo, Oslo, Norway and Department of Physics and Astronomy and Facility for Rare Ion Beams, Michigan State University, East Lansing, Michigan, USA

Date: **Jan 14, 2021**

Copyright 1999-2021, Morten Hjorth-Jensen Email [email protected]. Released under CC Attribution-NonCommercial 4.0 license

## Overview of week 2

**Topics.**

* Introduction to the course and overview of topics to be covered
* Introduction to Variational Monte Carlo methods, the Metropolis algorithm, statistics and Markov chain theory

**Teaching material, videos and written material.**

* Asynchronous videos
* Lecture notes and reading assignments
* Additional (often recommended) background material

## Textbook

There is no single textbook which covers the material to be discussed. For each week, however, we will, in addition to our own lecture notes, send links to additional literature. This can be articles or chapters from other textbooks. A useful textbook is

* [Bernd A. Berg, *Markov Chain Monte Carlo Simulations and their Statistical Analysis*, World Scientific, 2004](https://www.worldscientific.com/worldscibooks/10.1142/5602), chapters 1, 2

This book has its main focus on spin models, but many of the concepts are general. Chapters 1 and 2 contain a good discussion of the statistical foundation.

## Aims

* Be able to apply central many-particle methods like the Variational Monte Carlo method to properties of many-fermion systems and many-boson systems.
* Understand how to simulate quantum mechanical systems with many interacting particles. The methods are relevant for atomic, molecular, solid state, materials science, nanotechnology, quantum chemistry and nuclear physics.
* Learn to manage and structure larger projects, with unit tests, object orientation and writing clean code.
* Learn about a proper statistical analysis of large data sets.
* Learn to optimize, with convex optimization methods, functions that depend on many variables.
* Parallelization and code optimizations.

## Lectures and ComputerLab

* Lectures: Thursday (2.15pm-4pm). First time January 14. Last lecture May 6.
* Computerlab: Thursday (4.15pm-7pm), first time January 14, last lab session May 6.
* Weekly plans and all other information are on the webpage of the course.
* **First project to be handed in March 26.**
* **Second and final project to be handed in May 31.**
* There is no final exam, only project work.

## Course Format

* Two compulsory projects. Electronic reports only. You are free to choose your format. We use devilry to hand in the projects.
* Evaluation and grading: the two projects count 1/2 each of the final mark. No exam.
* The computer lab (room 397 in the Physics building) has no PCs, so please bring your own laptops. C/C++ is the default programming language, but programming languages like Fortran2008, Rust, Julia, and/or Python can also be used.
All source codes discussed during the lectures can be found at the webpage of the course. ## Topics covered in this course * Parallelization (MPI and OpenMP), high-performance computing topics. Choose between Python, Fortran2008 and/or C++ as programming languages. * Algorithms for Monte Carlo Simulations (multidimensional integrals), Metropolis-Hastings and importance sampling algorithms. Improved Monte Carlo methods. * Statistical analysis of data from Monte Carlo calculations, bootstrapping, jackknife and blocking methods. * Eigenvalue solvers * For project 2 there will be at least three variants: a. Variational Monte Carlo for fermions b. Hartree-Fock theory for fermions c. Coupled cluster theory for fermions (iterative methods) d. Neural networks and Machine Learning to solve the same problems as in project 1 e. Eigenvalue problems with deep learning methods f. Possible project on quantum computing ## Topics covered in this course * Search for minima in multidimensional spaces (conjugate gradient method, steepest descent method, quasi-Newton-Raphson, Broyden-Jacobian). Convex optimization, gradient methods * Iterative methods for solutions of non-linear equations. * Object orientation * Data analysis and resampling techniques * Variational Monte Carlo (VMC) for 'ab initio' studies of quantum mechanical many-body systems. * Simulation of two- and three-dimensional systems like quantum dots or atoms and molecules or systems from solid state physics * **Simulation of trapped bosons using VMC (project 1, default)** * **Machine learning and neural networks (project 2, default, same system as in project 1)** * Extension of project 1 to fermionic systems (project 2) * Coupled cluster theory (project 2, depends on interest) * Other quantum-mechanical methods and systems can be tailored to one's interests (Hartree-Fock Theory, Many-body perturbation theory, time-dependent theories and more). ## Quantum Monte Carlo Motivation Most quantum mechanical problems of interest in for example atomic, molecular, nuclear and solid state physics consist of a large number of interacting electrons and ions or nucleons. The total number of particles $N$ is usually sufficiently large that an exact solution cannot be found. Typically, the expectation value for a chosen hamiltonian for a system of $N$ particles is $$ \langle H \rangle = \frac{\int d\boldsymbol{R}_1d\boldsymbol{R}_2\dots d\boldsymbol{R}_N \Psi^{\ast}(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N) H(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N) \Psi(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N)} {\int d\boldsymbol{R}_1d\boldsymbol{R}_2\dots d\boldsymbol{R}_N \Psi^{\ast}(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N) \Psi(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N)}, $$ an in general intractable problem. This integral is actually the starting point in a Variational Monte Carlo calculation. **Gaussian quadrature: Forget it**! Given 10 particles and 10 mesh points for each degree of freedom and an ideal 1 Tflops machine (all operations take the same time), how long will it take to compute the above integral? The lifetime of the universe is of the order of $10^{17}$ s. 
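A minimal back-of-the-envelope sketch of that estimate (assuming three spatial dimensions per particle and, very optimistically, a single floating point operation per evaluation of the integrand):

```
# Rough cost estimate for brute-force quadrature of the 10-particle integral above.
mesh_points = 10
particles = 10
dimensions = 3 * particles                # 30 integration variables
evaluations = mesh_points ** dimensions   # 10^30 integrand evaluations
flops = 1e12                              # ideal 1 Tflops machine
seconds = evaluations / flops             # about 10^18 s
print(f"{seconds:.1e} s, compared to ~1e17 s for the lifetime of the universe")
```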
## Quantum Monte Carlo Motivation As an example from the nuclear many-body problem, we have Schroedinger's equation as a differential equation $$ \hat{H}\Psi(\boldsymbol{r}_1,..,\boldsymbol{r}_A,\alpha_1,..,\alpha_A)=E\Psi(\boldsymbol{r}_1,..,\boldsymbol{r}_A,\alpha_1,..,\alpha_A) $$ where $$ \boldsymbol{r}_1,..,\boldsymbol{r}_A, $$ are the coordinates and $$ \alpha_1,..,\alpha_A, $$ are sets of relevant quantum numbers such as spin and isospin for a system of $A$ nucleons ($A=N+Z$, $N$ being the number of neutrons and $Z$ the number of protons). ## Quantum Monte Carlo Motivation There are $$ 2^A\times \left(\begin{array}{c} A\\ Z\end{array}\right) $$ coupled second-order differential equations in $3A$ dimensions. For a nucleus like beryllium-10 this number is **215040**. This is a truely challenging many-body problem. Methods like partial differential equations can at most be used for 2-3 particles. ## Various many-body methods * Monte-Carlo methods * Renormalization group (RG) methods, in particular density matrix RG * Large-scale diagonalization (Iterative methods, Lanczo's method, dimensionalities $10^{10}$ states) * Coupled cluster theory, favoured method in quantum chemistry, molecular and atomic physics. Applications to ab initio calculations in nuclear physics as well for large nuclei. * Perturbative many-body methods * Green's function methods * Density functional theory/Mean-field theory and Hartree-Fock theory The physics of the system hints at which many-body methods to use. ## Quantum Monte Carlo Motivation **Pros and Cons of Monte Carlo.** * Is physically intuitive. * Allows one to study systems with many degrees of freedom. Diffusion Monte Carlo (DMC) and Green's function Monte Carlo (GFMC) yield in principle the exact solution to Schroedinger's equation. * Variational Monte Carlo (VMC) is easy to implement but needs a reliable trial wave function, can be difficult to obtain. This is where we will use Hartree-Fock theory to construct an optimal basis. * DMC/GFMC for fermions (spin with half-integer values, electrons, baryons, neutrinos, quarks) has a sign problem. Nature prefers an anti-symmetric wave function. PDF in this case given distribution of random walkers. * The solution has a statistical error, which can be large. * There is a limit for how large systems one can study, DMC needs a huge number of random walkers in order to achieve stable results. * Obtain only the lowest-lying states with a given symmetry. Can get excited states with extra labor. ## Quantum Monte Carlo Motivation **Where and why do we use Monte Carlo Methods in Quantum Physics.** * Quantum systems with many particles at finite temperature: Path Integral Monte Carlo with applications to dense matter and quantum liquids (phase transitions from normal fluid to superfluid). Strong correlations. * Bose-Einstein condensation of dilute gases, method transition from non-linear PDE to Diffusion Monte Carlo as density increases. * Light atoms, molecules, solids and nuclei. * Lattice Quantum-Chromo Dynamics. Impossible to solve without MC calculations. * Simulations of systems in solid state physics, from semiconductors to spin systems. Many electrons active and possibly strong correlations. ## Quantum Monte Carlo Motivation We start with the variational principle. 
Given a hamiltonian $H$ and a trial wave function $\Psi_T$, the variational principle states that the expectation value of $\langle H \rangle$, defined through $$ E[H]= \langle H \rangle = \frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})H(\boldsymbol{R})\Psi_T(\boldsymbol{R})} {\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}, $$ is an upper bound to the ground state energy $E_0$ of the hamiltonian $H$, that is $$ E_0 \le \langle H \rangle . $$ In general, the integrals involved in the calculation of various expectation values are multi-dimensional ones. Traditional integration methods such as the Gauss-Legendre will not be adequate for say the computation of the energy of a many-body system. ## Quantum Monte Carlo Motivation The trial wave function can be expanded in the eigenstates of the hamiltonian since they form a complete set, viz., $$ \Psi_T(\boldsymbol{R})=\sum_i a_i\Psi_i(\boldsymbol{R}), $$ and assuming the set of eigenfunctions to be normalized one obtains $$ \frac{\sum_{nm}a^*_ma_n \int d\boldsymbol{R}\Psi^{\ast}_m(\boldsymbol{R})H(\boldsymbol{R})\Psi_n(\boldsymbol{R})} {\sum_{nm}a^*_ma_n \int d\boldsymbol{R}\Psi^{\ast}_m(\boldsymbol{R})\Psi_n(\boldsymbol{R})} =\frac{\sum_{n}a^2_n E_n} {\sum_{n}a^2_n} \ge E_0, $$ where we used that $H(\boldsymbol{R})\Psi_n(\boldsymbol{R})=E_n\Psi_n(\boldsymbol{R})$. In general, the integrals involved in the calculation of various expectation values are multi-dimensional ones. The variational principle yields the lowest state of a given symmetry. ## Quantum Monte Carlo Motivation In most cases, a wave function has only small values in large parts of configuration space, and a straightforward procedure which uses homogenously distributed random points in configuration space will most likely lead to poor results. This may suggest that some kind of importance sampling combined with e.g., the Metropolis algorithm may be a more efficient way of obtaining the ground state energy. The hope is then that those regions of configurations space where the wave function assumes appreciable values are sampled more efficiently. ## Quantum Monte Carlo Motivation The tedious part in a VMC calculation is the search for the variational minimum. A good knowledge of the system is required in order to carry out reasonable VMC calculations. This is not always the case, and often VMC calculations serve rather as the starting point for so-called diffusion Monte Carlo calculations (DMC). DMC is a way of solving exactly the many-body Schroedinger equation by means of a stochastic procedure. A good guess on the binding energy and its wave function is however necessary. A carefully performed VMC calculation can aid in this context. ## Quantum Monte Carlo Motivation * Construct first a trial wave function $\psi_T(\boldsymbol{R},\boldsymbol{\alpha})$, for a many-body system consisting of $N$ particles located at positions $\boldsymbol{R}=(\boldsymbol{R}_1,\dots ,\boldsymbol{R}_N)$. The trial wave function depends on $\alpha$ variational parameters $\boldsymbol{\alpha}=(\alpha_1,\dots ,\alpha_M)$. * Then we evaluate the expectation value of the hamiltonian $H$ $$ E[H]=\langle H \rangle = \frac{\int d\boldsymbol{R}\Psi^{\ast}_{T}(\boldsymbol{R},\boldsymbol{\alpha})H(\boldsymbol{R})\Psi_{T}(\boldsymbol{R},\boldsymbol{\alpha})} {\int d\boldsymbol{R}\Psi^{\ast}_{T}(\boldsymbol{R},\boldsymbol{\alpha})\Psi_{T}(\boldsymbol{R},\boldsymbol{\alpha})}. $$ * Thereafter we vary $\alpha$ according to some minimization algorithm and return to the first step. 
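Schematically, the outer variational loop described by the bullet points above looks as follows. This is only a sketch: a concrete one-dimensional implementation follows later in these notes, and the stand-in energy function below uses the exact harmonic oscillator expression derived there instead of a Metropolis Monte Carlo estimate.

```
import numpy as np

def expectation_value(alpha):
    # Stand-in: the exact E[alpha] for the 1D harmonic oscillator (derived later);
    # in a real VMC run this would be a Metropolis Monte Carlo estimate of <H>.
    return 0.25 * (alpha**2 + 1.0 / alpha**2)

alphas = np.linspace(0.4, 1.4, 21)           # grid of variational parameters
energies = [expectation_value(a) for a in alphas]
best_alpha = alphas[int(np.argmin(energies))]
best_alpha, min(energies)
```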
## Quantum Monte Carlo Motivation **Basic steps.** Choose a trial wave function $\psi_T(\boldsymbol{R})$. $$ P(\boldsymbol{R})= \frac{\left|\psi_T(\boldsymbol{R})\right|^2}{\int \left|\psi_T(\boldsymbol{R})\right|^2d\boldsymbol{R}}. $$ This is our new probability distribution function (PDF). The approximation to the expectation value of the Hamiltonian is now $$ E[H(\boldsymbol{\alpha})] = \frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R},\boldsymbol{\alpha})H(\boldsymbol{R})\Psi_T(\boldsymbol{R},\boldsymbol{\alpha})} {\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R},\boldsymbol{\alpha})\Psi_T(\boldsymbol{R},\boldsymbol{\alpha})}. $$ ## Quantum Monte Carlo Motivation Define a new quantity <!-- Equation labels as ordinary links --> <div id="eq:locale1"></div> $$ E_L(\boldsymbol{R},\boldsymbol{\alpha})=\frac{1}{\psi_T(\boldsymbol{R},\boldsymbol{\alpha})}H\psi_T(\boldsymbol{R},\boldsymbol{\alpha}), \label{eq:locale1} \tag{1} $$ called the local energy, which, together with our trial PDF yields <!-- Equation labels as ordinary links --> <div id="eq:vmc1"></div> $$ E[H(\boldsymbol{\alpha})]=\int P(\boldsymbol{R})E_L(\boldsymbol{R}) d\boldsymbol{R}\approx \frac{1}{N}\sum_{i=1}^N E_L(\boldsymbol{R_i},\boldsymbol{\alpha}) \label{eq:vmc1} \tag{2} $$ with $N$ being the number of Monte Carlo samples. ## Quantum Monte Carlo The Algorithm for performing a variational Monte Carlo calculations runs thus as this * Initialisation: Fix the number of Monte Carlo steps. Choose an initial $\boldsymbol{R}$ and variational parameters $\alpha$ and calculate $\left|\psi_T^{\alpha}(\boldsymbol{R})\right|^2$. * Initialise the energy and the variance and start the Monte Carlo calculation. * Calculate a trial position $\boldsymbol{R}_p=\boldsymbol{R}+r*step$ where $r$ is a random variable $r \in [0,1]$. * Metropolis algorithm to accept or reject this move $w = P(\boldsymbol{R}_p)/P(\boldsymbol{R})$. * If the step is accepted, then we set $\boldsymbol{R}=\boldsymbol{R}_p$. * Update averages * Finish and compute final averages. Observe that the jumping in space is governed by the variable *step*. This is Called brute-force sampling. Need importance sampling to get more relevant sampling, see lectures below. ## Quantum Monte Carlo: hydrogen atom The radial Schroedinger equation for the hydrogen atom can be written as $$ -\frac{\hbar^2}{2m}\frac{\partial^2 u(r)}{\partial r^2}- \left(\frac{ke^2}{r}-\frac{\hbar^2l(l+1)}{2mr^2}\right)u(r)=Eu(r), $$ or with dimensionless variables <!-- Equation labels as ordinary links --> <div id="eq:hydrodimless1"></div> $$ -\frac{1}{2}\frac{\partial^2 u(\rho)}{\partial \rho^2}- \frac{u(\rho)}{\rho}+\frac{l(l+1)}{2\rho^2}u(\rho)-\lambda u(\rho)=0, \label{eq:hydrodimless1} \tag{3} $$ with the hamiltonian $$ H=-\frac{1}{2}\frac{\partial^2 }{\partial \rho^2}- \frac{1}{\rho}+\frac{l(l+1)}{2\rho^2}. $$ Use variational parameter $\alpha$ in the trial wave function <!-- Equation labels as ordinary links --> <div id="eq:trialhydrogen"></div> $$ u_T^{\alpha}(\rho)=\alpha\rho e^{-\alpha\rho}. \label{eq:trialhydrogen} \tag{4} $$ ## Quantum Monte Carlo: hydrogen atom Inserting this wave function into the expression for the local energy $E_L$ gives $$ E_L(\rho)=-\frac{1}{\rho}- \frac{\alpha}{2}\left(\alpha-\frac{2}{\rho}\right). 
$$ A simple variational Monte Carlo calculation results in <table border="1"> <thead> <tr><th align="center"> $\alpha$ </th> <th align="center">$\langle H \rangle $</th> <th align="center"> $\sigma^2$</th> <th align="center">$\sigma/\sqrt{N}$</th> </tr> </thead> <tbody> <tr><td align="center"> 7.00000E-01 </td> <td align="center"> -4.57759E-01 </td> <td align="center"> 4.51201E-02 </td> <td align="center"> 6.71715E-04 </td> </tr> <tr><td align="center"> 8.00000E-01 </td> <td align="center"> -4.81461E-01 </td> <td align="center"> 3.05736E-02 </td> <td align="center"> 5.52934E-04 </td> </tr> <tr><td align="center"> 9.00000E-01 </td> <td align="center"> -4.95899E-01 </td> <td align="center"> 8.20497E-03 </td> <td align="center"> 2.86443E-04 </td> </tr> <tr><td align="center"> 1.00000E-00 </td> <td align="center"> -5.00000E-01 </td> <td align="center"> 0.00000E+00 </td> <td align="center"> 0.00000E+00 </td> </tr> <tr><td align="center"> 1.10000E+00 </td> <td align="center"> -4.93738E-01 </td> <td align="center"> 1.16989E-02 </td> <td align="center"> 3.42036E-04 </td> </tr> <tr><td align="center"> 1.20000E+00 </td> <td align="center"> -4.75563E-01 </td> <td align="center"> 8.85899E-02 </td> <td align="center"> 9.41222E-04 </td> </tr> <tr><td align="center"> 1.30000E+00 </td> <td align="center"> -4.54341E-01 </td> <td align="center"> 1.45171E-01 </td> <td align="center"> 1.20487E-03 </td> </tr> </tbody> </table> ## Quantum Monte Carlo: hydrogen atom We note that at $\alpha=1$ we obtain the exact result, and the variance is zero, as it should. The reason is that we then have the exact wave function, and the action of the hamiltionan on the wave function $$ H\psi = \mathrm{constant}\times \psi, $$ yields just a constant. The integral which defines various expectation values involving moments of the hamiltonian becomes then $$ \langle H^n \rangle = \frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})H^n(\boldsymbol{R})\Psi_T(\boldsymbol{R})} {\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}= \mathrm{constant}\times\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})} {\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}=\mathrm{constant}. $$ **This gives an important information: the exact wave function leads to zero variance!** Variation is then performed by minimizing both the energy and the variance. ## [Quantum Monte Carlo for bosons](https://github.com/mortele/variational-monte-carlo-fys4411) For bosons in a harmonic oscillator-like trap we will use is a spherical (S) or an elliptical (E) harmonic trap in one, two and finally three dimensions, with the latter given by <!-- Equation labels as ordinary links --> <div id="trap_eqn"></div> $$ \begin{equation} V_{ext}(\mathbf{r}) = \Bigg\{ \begin{array}{ll} \frac{1}{2}m\omega_{ho}^2r^2 & (S)\\ \strut \frac{1}{2}m[\omega_{ho}^2(x^2+y^2) + \omega_z^2z^2] & (E) \label{trap_eqn} \tag{5} \end{array} \end{equation} $$ where (S) stands for symmetric and <!-- Equation labels as ordinary links --> <div id="_auto1"></div> $$ \begin{equation} \hat{H} = \sum_i^N \left( \frac{-\hbar^2}{2m} { \bigtriangledown }_{i}^2 + V_{ext}({\bf{r}}_i)\right) + \sum_{i<j}^{N} V_{int}({\bf{r}}_i,{\bf{r}}_j), \label{_auto1} \tag{6} \end{equation} $$ as the two-body Hamiltonian of the system. 
## [Quantum Monte Carlo for bosons](https://github.com/mortele/variational-monte-carlo-fys4411) We will represent the inter-boson interaction by a pairwise, repulsive potential <!-- Equation labels as ordinary links --> <div id="_auto2"></div> $$ \begin{equation} V_{int}(|\mathbf{r}_i-\mathbf{r}_j|) = \Bigg\{ \begin{array}{ll} \infty & {|\mathbf{r}_i-\mathbf{r}_j|} \leq {a}\\ 0 & {|\mathbf{r}_i-\mathbf{r}_j|} > {a} \end{array} \label{_auto2} \tag{7} \end{equation} $$ where $a$ is the so-called hard-core diameter of the bosons. Clearly, $V_{int}(|\mathbf{r}_i-\mathbf{r}_j|)$ is zero if the bosons are separated by a distance $|\mathbf{r}_i-\mathbf{r}_j|$ greater than $a$ but infinite if they attempt to come within a distance $|\mathbf{r}_i-\mathbf{r}_j| \leq a$. ## [Quantum Monte Carlo for bosons](https://github.com/mortele/variational-monte-carlo-fys4411) Our trial wave function for the ground state with $N$ atoms is given by <!-- Equation labels as ordinary links --> <div id="eq:trialwf"></div> $$ \begin{equation} \Psi_T(\mathbf{R})=\Psi_T(\mathbf{r}_1, \mathbf{r}_2, \dots \mathbf{r}_N,\alpha,\beta)=\prod_i g(\alpha,\beta,\mathbf{r}_i)\prod_{i<j}f(a,|\mathbf{r}_i-\mathbf{r}_j|), \label{eq:trialwf} \tag{8} \end{equation} $$ where $\alpha$ and $\beta$ are variational parameters. The single-particle wave function is proportional to the harmonic oscillator function for the ground state <!-- Equation labels as ordinary links --> <div id="_auto3"></div> $$ \begin{equation} g(\alpha,\beta,\mathbf{r}_i)= \exp{[-\alpha(x_i^2+y_i^2+\beta z_i^2)]}. \label{_auto3} \tag{9} \end{equation} $$ ## [Quantum Monte Carlo for bosons](https://github.com/mortele/variational-monte-carlo-fys4411) For spherical traps we have $\beta = 1$ and for non-interacting bosons ($a=0$) we have $\alpha = 1/2a_{ho}^2$. The correlation wave function is <!-- Equation labels as ordinary links --> <div id="_auto4"></div> $$ \begin{equation} f(a,|\mathbf{r}_i-\mathbf{r}_j|)=\Bigg\{ \begin{array}{ll} 0 & {|\mathbf{r}_i-\mathbf{r}_j|} \leq {a}\\ (1-\frac{a}{|\mathbf{r}_i-\mathbf{r}_j|}) & {|\mathbf{r}_i-\mathbf{r}_j|} > {a}. \end{array} \label{_auto4} \tag{10} \end{equation} $$ ### Simple example, the hydrogen atom The radial Schroedinger equation for the hydrogen atom can be written as (when we have gotten rid of the first derivative term in the kinetic energy and used $rR(r)=u(r)$) $$ -\frac{\hbar^2}{2m}\frac{d^2 u(r)}{d r^2}- \left(\frac{ke^2}{r}-\frac{\hbar^2l(l+1)}{2mr^2}\right)u(r)=Eu(r). $$ We will specialize to the case with $l=0$ and end up with $$ -\frac{\hbar^2}{2m}\frac{d^2 u(r)}{d r^2}- \left(\frac{ke^2}{r}\right)u(r)=Eu(r). $$ Then we introduce a dimensionless variable $\rho=r/a$ where $a$ is a constant with dimension length. Multiplying with $ma^2/\hbar^2$ we can rewrite our equations as $$ -\frac{1}{2}\frac{d^2 u(\rho)}{d \rho^2}- \frac{ke^2ma}{\hbar^2}\frac{u(\rho)}{\rho}-\lambda u(\rho)=0. $$ Since $a$ is just a parameter we choose to set $$ \frac{ke^2ma}{\hbar^2}=1, $$ which leads to $a=\hbar^2/mke^2$, better known as the Bohr radius with value $0.053$ nm. Scaling the equations this way does not only render our numerical treatment simpler since we avoid carrying with us all physical parameters, but we obtain also a **natural** length scale. We will see this again and again. In our discussions below with a harmonic oscillator trap, the **natural** lentgh scale with be determined by the oscillator frequency, the mass of the particle and $\hbar$. We have also defined a dimensionless 'energy' $\lambda = Ema^2/\hbar^2$. 
With the rescaled quantities, the ground state energy of the hydrogen atom is $1/2$. The equation we want to solve is now defined by the Hamiltonian $$ H=-\frac{1}{2}\frac{d^2 }{d \rho^2}-\frac{1}{\rho}. $$ As trial wave function we peep now into the analytical solution for the hydrogen atom and use (with $\alpha$ as a variational parameter) $$ u_T^{\alpha}(\rho)=\alpha\rho \exp{-(\alpha\rho)}. $$ Inserting this wave function into the expression for the local energy $E_L$ gives $$ E_L(\rho)=-\frac{1}{\rho}- \frac{\alpha}{2}\left(\alpha-\frac{2}{\rho}\right). $$ To have analytical local energies saves us from computing numerically the second derivative, a feature which often increases our numerical expenditure with a factor of three or more. Integratng up the local energy (recall to bring back the PDF in the integration) gives $\overline{E}[\boldsymbol{\alpha}]=\alpha(\alpha/2-1)$. ### Second example, the harmonic oscillator in one dimension We present here another well-known example, the harmonic oscillator in one dimension for one particle. This will also serve the aim of introducing our next model, namely that of interacting electrons in a harmonic oscillator trap. Here as well, we do have analytical solutions and the energy of the ground state, with $\hbar=1$, is $1/2\omega$, with $\omega$ being the oscillator frequency. We use the following trial wave function $$ \psi_T(x;\alpha) = \exp{-(\frac{1}{2}\alpha^2x^2)}, $$ which results in a local energy $$ \frac{1}{2}\left(\alpha^2+x^2(1-\alpha^4)\right). $$ We can compare our numerically calculated energies with the exact energy as function of $\alpha$ $$ \overline{E}[\alpha] = \frac{1}{4}\left(\alpha^2+\frac{1}{\alpha^2}\right). $$ Similarly, with the above ansatz, we can also compute the exact variance which reads $$ \sigma^2[\alpha]=\frac{1}{4}\left(1+(1-\alpha^4)^2\frac{3}{4\alpha^4}\right)-\overline{E}. $$ Our code for computing the energy of the ground state of the harmonic oscillator follows here. We start by defining directories where we store various outputs. ``` # Common imports import os # Where to save the figures and data files PROJECT_ROOT_DIR = "Results" FIGURE_ID = "Results/FigureFiles" DATA_ID = "Results/VMCHarmonic" if not os.path.exists(PROJECT_ROOT_DIR): os.mkdir(PROJECT_ROOT_DIR) if not os.path.exists(FIGURE_ID): os.makedirs(FIGURE_ID) if not os.path.exists(DATA_ID): os.makedirs(DATA_ID) def image_path(fig_id): return os.path.join(FIGURE_ID, fig_id) def data_path(dat_id): return os.path.join(DATA_ID, dat_id) def save_fig(fig_id): plt.savefig(image_path(fig_id) + ".png", format='png') outfile = open(data_path("VMCHarmonic.dat"),'w') ``` We proceed with the implementation of the Monte Carlo algorithm but list first the ansatz for the wave function and the expression for the local energy ``` %matplotlib inline # VMC for the one-dimensional harmonic oscillator # Brute force Metropolis, no importance sampling and no energy minimization from math import exp, sqrt from random import random, seed import numpy as np import matplotlib.pyplot as plt from numba import jit from decimal import * # Trial wave function for the Harmonic oscillator in one dimension def WaveFunction(r,alpha): return exp(-0.5*alpha*alpha*r*r) # Local energy for the Harmonic oscillator in one dimension def LocalEnergy(r,alpha): return 0.5*r*r*(1-alpha**4) + 0.5*alpha*alpha ``` Note that in the Metropolis algorithm there is no need to compute the trial wave function, mainly since we are just taking the ratio of two exponentials. 
It is then from a computational point view, more convenient to compute the argument from the ratio and then calculate the exponential. Here we have refrained from this purely of pedagogical reasons. ``` # The Monte Carlo sampling with the Metropolis algo # The jit decorator tells Numba to compile this function. # The argument types will be inferred by Numba when the function is called. def MonteCarloSampling(): NumberMCcycles= 100000 StepSize = 1.0 # positions PositionOld = 0.0 PositionNew = 0.0 # seed for rng generator seed() # start variational parameter alpha = 0.4 for ia in range(MaxVariations): alpha += .05 AlphaValues[ia] = alpha energy = energy2 = 0.0 #Initial position PositionOld = StepSize * (random() - .5) wfold = WaveFunction(PositionOld,alpha) #Loop over MC MCcycles for MCcycle in range(NumberMCcycles): #Trial position PositionNew = PositionOld + StepSize*(random() - .5) wfnew = WaveFunction(PositionNew,alpha) #Metropolis test to see whether we accept the move if random() <= wfnew**2 / wfold**2: PositionOld = PositionNew wfold = wfnew DeltaE = LocalEnergy(PositionOld,alpha) energy += DeltaE energy2 += DeltaE**2 #We calculate mean, variance and error energy /= NumberMCcycles energy2 /= NumberMCcycles variance = energy2 - energy**2 error = sqrt(variance/NumberMCcycles) Energies[ia] = energy Variances[ia] = variance outfile.write('%f %f %f %f \n' %(alpha,energy,variance,error)) return Energies, AlphaValues, Variances ``` Finally, the results are presented here with the exact energies and variances as well. ``` #Here starts the main program with variable declarations MaxVariations = 20 Energies = np.zeros((MaxVariations)) ExactEnergies = np.zeros((MaxVariations)) ExactVariance = np.zeros((MaxVariations)) Variances = np.zeros((MaxVariations)) AlphaValues = np.zeros(MaxVariations) (Energies, AlphaValues, Variances) = MonteCarloSampling() outfile.close() ExactEnergies = 0.25*(AlphaValues*AlphaValues+1.0/(AlphaValues*AlphaValues)) ExactVariance = 0.25*(1.0+((1.0-AlphaValues**4)**2)*3.0/(4*(AlphaValues**4)))-ExactEnergies*ExactEnergies #simple subplot plt.subplot(2, 1, 1) plt.plot(AlphaValues, Energies, 'o-',AlphaValues, ExactEnergies,'r-') plt.title('Energy and variance') plt.ylabel('Dimensionless energy') plt.subplot(2, 1, 2) plt.plot(AlphaValues, Variances, '.-',AlphaValues, ExactVariance,'r-') plt.xlabel(r'$\alpha$', fontsize=15) plt.ylabel('Variance') save_fig("VMCHarmonic") plt.show() #nice printout with Pandas import pandas as pd from pandas import DataFrame data ={'Alpha':AlphaValues, 'Energy':Energies,'Exact Energy':ExactEnergies,'Variance':Variances,'Exact Variance':ExactVariance,} frame = pd.DataFrame(data) print(frame) ``` For $\alpha=1$ we have the exact eigenpairs, as can be deduced from the table here. With $\omega=1$, the exact energy is $1/2$ a.u. with zero variance, as it should. We see also that our computed variance follows rather well the exact variance. Increasing the number of Monte Carlo cycles will improve our statistics (try to increase the number of Monte Carlo cycles). The fact that the variance is exactly equal to zero when $\alpha=1$ is that we then have the exact wave function, and the action of the hamiltionan on the wave function $$ H\psi = \mathrm{constant}\times \psi, $$ yields just a constant. 
The integral which defines various expectation values involving moments of the hamiltonian then becomes

$$
\langle H^n \rangle =
   \frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})H^n(\boldsymbol{R})\Psi_T(\boldsymbol{R})}
        {\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}=
\mathrm{constant}\times\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}
{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}=\mathrm{constant}.
$$

**This gives us an important piece of information: the exact wave function leads to zero variance!**

As we will see below, many practitioners therefore perform a minimization on both the energy and the variance.
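As a footnote to the remark above about not needing the wave function itself in the Metropolis test, here is a minimal sketch of that optimization for the one-dimensional harmonic oscillator trial function. It is not used in the code above, which keeps the explicit wave function for pedagogical reasons.

```
# Since psi_T(x) = exp(-0.5*alpha^2*x^2), the Metropolis ratio
# |psi_T(x_new)|^2 / |psi_T(x_old)|^2 reduces to a single exponential of the
# difference of the arguments, so the wave function never has to be evaluated.
from math import exp
from random import random

def metropolis_accept(PositionOld, PositionNew, alpha):
    log_ratio = -alpha*alpha*(PositionNew**2 - PositionOld**2)
    return random() <= exp(log_ratio)
```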
## Step 1: Import Libraries ``` # All imports import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import missingno import seaborn as sns from sklearn.feature_selection import SelectKBest, f_regression from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LinearRegression, Ridge, Lasso from sklearn.svm import SVR from sklearn.neural_network import MLPRegressor from sklearn.ensemble import RandomForestRegressor import warnings warnings.filterwarnings('ignore') # List all the files for dir_name, _, file_names in os.walk('data'): for file_name in file_names: print(os.path.join(dir_name, file_name)) ``` ## Step 2: Reading the Data ``` data_vw = pd.read_csv("data/vw.csv") data_vw.shape data_vw.head() data_vw.describe() missingno.matrix(data_vw) data_vw.isnull().sum() ``` ## Step 3: EDA ``` categorical_features = [feature for feature in data_vw.columns if data_vw[feature].dtype == 'O'] # Getting the count plot for feature in categorical_features: sns.countplot(y=data_vw[feature]) plt.show() # Getting the barplot plt.figure(figsize=(10,5), facecolor='w') sns.barplot(x=data_vw['year'], y=data_vw['price']) sns.barplot(x=data_vw['transmission'], y=data_vw['price']) # Getting the relation b/w milleage and price plt.figure(figsize=(10, 6)) sns.scatterplot(x=data_vw['mileage'], y=data_vw['price'], hue=data_vw['year']) plt.figure(figsize=(5,5)) sns.scatterplot(x=data_vw['mileage'], y=data_vw['price'], hue=data_vw['transmission']) plt.figure(figsize=(10,10)) sns.pairplot(data_vw) ``` ## Step 4: Feature Engineering ``` data_vw.head() ``` Dropping the year column, but instead will create data on how old the car is ``` data_vw['age_of_car'] = 2020 - data_vw['year'] data_vw.drop(['year'], axis=1, inplace=True) # Look at the frequency of the ages sns.countplot(y=data_vw['age_of_car']) # OHE the categorical variables data_vw_extended = pd.get_dummies(data_vw) data_vw_extended.shape sc = StandardScaler() data_vw_extended = pd.DataFrame(sc.fit_transform(data_vw_extended), columns=data_vw_extended.columns) data_vw_extended.head() X_train, X_test, y_train, y_test = train_test_split(data_vw_extended.drop(['price'], axis=1), data_vw_extended[['price']]) X_train.shape, X_test.shape, y_train.shape, y_test.shape ``` ## Step 5: Feature Selection ``` # Select the k best features no_of_features = [] r_2_train = [] r_2_test = [] for k in range(3, 40, 2): selector = SelectKBest(f_regression, k=k) X_train_selector = selector.fit_transform(X_train, y_train) X_test_selector = selector.transform(X_test) lin_reg = LinearRegression() lin_reg.fit(X_train_selector, y_train) no_of_features.append(k) r_2_train.append(lin_reg.score(X_train_selector, y_train)) r_2_test.append(lin_reg.score(X_test_selector, y_test)) sns.lineplot(x=no_of_features, y=r_2_train) sns.lineplot(x=no_of_features, y=r_2_test) ``` k=23 is providing us the best optimal result. 
Hence we train the model on the 23 selected features.
```
selector = SelectKBest(f_regression, k=23)
X_train_selector = selector.fit_transform(X_train, y_train)
X_test_selector = selector.transform(X_test)

column_name = data_vw_extended.drop(['price'], axis=1).columns
column_name[selector.get_support()]
```
## Step 6: Model
```
def regressor_builder(model):
    regressor = model
    regressor.fit(X_train_selector, y_train)
    score = regressor.score(X_test_selector, y_test)
    return regressor, score

list_models = [LinearRegression(), Lasso(), Ridge(), SVR(), RandomForestRegressor(), MLPRegressor()]

model_performance = pd.DataFrame(columns=['Features', 'Model', 'Performance'])
for model in list_models:
    regressor, score = regressor_builder(model)
    # the dict keys must match the columns declared above ('Features', not 'Feature');
    # on newer pandas, use pd.concat instead of DataFrame.append
    model_performance = model_performance.append({"Features": 23,
                                                  "Model": regressor,
                                                  "Performance": score},
                                                 ignore_index=True)
model_performance
```
Random forest provides the best R² on the test split.
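If you want to pull out the strongest model programmatically, a minimal sketch using the `model_performance` DataFrame built above could look like this:

```python
# Sort the collected results and keep the fitted regressor with the highest test R^2.
# Assumes `model_performance` has its 'Model' and 'Performance' columns filled above.
best_row = model_performance.sort_values('Performance', ascending=False).iloc[0]
best_model = best_row['Model']   # e.g. the fitted RandomForestRegressor
print(type(best_model).__name__, best_row['Performance'])

# The fitted model can then be reused directly on the selected test features:
predictions = best_model.predict(X_test_selector)
```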
``` #hide #skip ! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab #default_exp data.transforms #export from fastai.torch_basics import * from fastai.data.core import * from fastai.data.load import * from fastai.data.external import * from sklearn.model_selection import train_test_split #hide from nbdev.showdoc import * ``` # Helper functions for processing data and basic transforms > Functions for getting, splitting, and labeling data, as well as generic transforms ## Get, split, and label For most data source creation we need functions to get a list of items, split them in to train/valid sets, and label them. fastai provides functions to make each of these steps easy (especially when combined with `fastai.data.blocks`). ### Get First we'll look at functions that *get* a list of items (generally file names). We'll use *tiny MNIST* (a subset of MNIST with just two classes, `7`s and `3`s) for our examples/tests throughout this page. ``` path = untar_data(URLs.MNIST_TINY) (path/'train').ls() # export def _get_files(p, fs, extensions=None): p = Path(p) res = [p/f for f in fs if not f.startswith('.') and ((not extensions) or f'.{f.split(".")[-1].lower()}' in extensions)] return res # export def get_files(path, extensions=None, recurse=True, folders=None, followlinks=True): "Get all the files in `path` with optional `extensions`, optionally with `recurse`, only in `folders`, if specified." path = Path(path) folders=L(folders) extensions = setify(extensions) extensions = {e.lower() for e in extensions} if recurse: res = [] for i,(p,d,f) in enumerate(os.walk(path, followlinks=followlinks)): # returns (dirpath, dirnames, filenames) if len(folders) !=0 and i==0: d[:] = [o for o in d if o in folders] else: d[:] = [o for o in d if not o.startswith('.')] if len(folders) !=0 and i==0 and '.' not in folders: continue res += _get_files(p, f, extensions) else: f = [o.name for o in os.scandir(path) if o.is_file()] res = _get_files(path, f, extensions) return L(res) ``` This is the most general way to grab a bunch of file names from disk. If you pass `extensions` (including the `.`) then returned file names are filtered by that list. Only those files directly in `path` are included, unless you pass `recurse`, in which case all child folders are also searched recursively. `folders` is an optional list of directories to limit the search to. ``` t3 = get_files(path/'train'/'3', extensions='.png', recurse=False) t7 = get_files(path/'train'/'7', extensions='.png', recurse=False) t = get_files(path/'train', extensions='.png', recurse=True) test_eq(len(t), len(t3)+len(t7)) test_eq(len(get_files(path/'train'/'3', extensions='.jpg', recurse=False)),0) test_eq(len(t), len(get_files(path, extensions='.png', recurse=True, folders='train'))) t #hide test_eq(len(get_files(path/'train'/'3', recurse=False)),346) test_eq(len(get_files(path, extensions='.png', recurse=True, folders=['train', 'test'])),729) test_eq(len(get_files(path, extensions='.png', recurse=True, folders='train')),709) test_eq(len(get_files(path, extensions='.png', recurse=True, folders='training')),0) ``` It's often useful to be able to create functions with customized behavior. `fastai.data` generally uses functions named as CamelCase verbs ending in `er` to create these functions. `FileGetter` is a simple example of such a function creator. 
``` #export def FileGetter(suf='', extensions=None, recurse=True, folders=None): "Create `get_files` partial function that searches path suffix `suf`, only in `folders`, if specified, and passes along args" def _inner(o, extensions=extensions, recurse=recurse, folders=folders): return get_files(o/suf, extensions, recurse, folders) return _inner fpng = FileGetter(extensions='.png', recurse=False) test_eq(len(t7), len(fpng(path/'train'/'7'))) test_eq(len(t), len(fpng(path/'train', recurse=True))) fpng_r = FileGetter(extensions='.png', recurse=True) test_eq(len(t), len(fpng_r(path/'train'))) #export image_extensions = set(k for k,v in mimetypes.types_map.items() if v.startswith('image/')) #export def get_image_files(path, recurse=True, folders=None): "Get image files in `path` recursively, only in `folders`, if specified." return get_files(path, extensions=image_extensions, recurse=recurse, folders=folders) ``` This is simply `get_files` called with a list of standard image extensions. ``` test_eq(len(t), len(get_image_files(path, recurse=True, folders='train'))) #export def ImageGetter(suf='', recurse=True, folders=None): "Create `get_image_files` partial that searches suffix `suf` and passes along `kwargs`, only in `folders`, if specified" def _inner(o, recurse=recurse, folders=folders): return get_image_files(o/suf, recurse, folders) return _inner ``` Same as `FileGetter`, but for image extensions. ``` test_eq(len(get_files(path/'train', extensions='.png', recurse=True, folders='3')), len(ImageGetter( 'train', recurse=True, folders='3')(path))) #export def get_text_files(path, recurse=True, folders=None): "Get text files in `path` recursively, only in `folders`, if specified." return get_files(path, extensions=['.txt'], recurse=recurse, folders=folders) #export class ItemGetter(ItemTransform): "Creates a proper transform that applies `itemgetter(i)` (even on a tuple)" _retain = False def __init__(self, i): self.i = i def encodes(self, x): return x[self.i] test_eq(ItemGetter(1)((1,2,3)), 2) test_eq(ItemGetter(1)(L(1,2,3)), 2) test_eq(ItemGetter(1)([1,2,3]), 2) test_eq(ItemGetter(1)(np.array([1,2,3])), 2) #export class AttrGetter(ItemTransform): "Creates a proper transform that applies `attrgetter(nm)` (even on a tuple)" _retain = False def __init__(self, nm, default=None): store_attr() def encodes(self, x): return getattr(x, self.nm, self.default) test_eq(AttrGetter('shape')(torch.randn([4,5])), [4,5]) test_eq(AttrGetter('shape', [0])([4,5]), [0]) ``` ### Split The next set of functions are used to *split* data into training and validation sets. The functions return two lists - a list of indices or masks for each of training and validation sets. ``` # export def RandomSplitter(valid_pct=0.2, seed=None): "Create function that splits `items` between train/val with `valid_pct` randomly." def _inner(o): if seed is not None: torch.manual_seed(seed) rand_idx = L(list(torch.randperm(len(o)).numpy())) cut = int(valid_pct * len(o)) return rand_idx[cut:],rand_idx[:cut] return _inner src = list(range(30)) f = RandomSplitter(seed=42) trn,val = f(src) assert 0<len(trn)<len(src) assert all(o not in val for o in trn) test_eq(len(trn), len(src)-len(val)) # test random seed consistency test_eq(f(src)[0], trn) ``` Use scikit-learn train_test_split. 
This allow to *split* items in a stratified fashion (uniformely according to the ‘labels‘ distribution) ``` # export def TrainTestSplitter(test_size=0.2, random_state=None, stratify=None, train_size=None, shuffle=True): "Split `items` into random train and test subsets using sklearn train_test_split utility." def _inner(o, **kwargs): train,valid = train_test_split(range_of(o), test_size=test_size, random_state=random_state, stratify=stratify, train_size=train_size, shuffle=shuffle) return L(train), L(valid) return _inner src = list(range(30)) labels = [0] * 20 + [1] * 10 test_size = 0.2 f = TrainTestSplitter(test_size=test_size, random_state=42, stratify=labels) trn,val = f(src) assert 0<len(trn)<len(src) assert all(o not in val for o in trn) test_eq(len(trn), len(src)-len(val)) # test random seed consistency test_eq(f(src)[0], trn) # test labels distribution consistency # there should be test_size % of zeroes and ones respectively in the validation set test_eq(len([t for t in val if t < 20]) / 20, test_size) test_eq(len([t for t in val if t > 20]) / 10, test_size) #export def IndexSplitter(valid_idx): "Split `items` so that `val_idx` are in the validation set and the others in the training set" def _inner(o): train_idx = np.setdiff1d(np.array(range_of(o)), np.array(valid_idx)) return L(train_idx, use_list=True), L(valid_idx, use_list=True) return _inner items = list(range(10)) splitter = IndexSplitter([3,7,9]) test_eq(splitter(items),[[0,1,2,4,5,6,8],[3,7,9]]) # export def _grandparent_idxs(items, name): def _inner(items, name): return mask2idxs(Path(o).parent.parent.name == name for o in items) return [i for n in L(name) for i in _inner(items,n)] # export def GrandparentSplitter(train_name='train', valid_name='valid'): "Split `items` from the grand parent folder names (`train_name` and `valid_name`)." def _inner(o): return _grandparent_idxs(o, train_name),_grandparent_idxs(o, valid_name) return _inner fnames = [path/'train/3/9932.png', path/'valid/7/7189.png', path/'valid/7/7320.png', path/'train/7/9833.png', path/'train/3/7666.png', path/'valid/3/925.png', path/'train/7/724.png', path/'valid/3/93055.png'] splitter = GrandparentSplitter() test_eq(splitter(fnames),[[0,3,4,6],[1,2,5,7]]) fnames2 = fnames + [path/'test/3/4256.png', path/'test/7/2345.png', path/'valid/7/6467.png'] splitter = GrandparentSplitter(train_name=('train', 'valid'), valid_name='test') test_eq(splitter(fnames2),[[0,3,4,6,1,2,5,7,10],[8,9]]) # export def FuncSplitter(func): "Split `items` by result of `func` (`True` for validation, `False` for training set)." def _inner(o): val_idx = mask2idxs(func(o_) for o_ in o) return IndexSplitter(val_idx)(o) return _inner splitter = FuncSplitter(lambda o: Path(o).parent.parent.name == 'valid') test_eq(splitter(fnames),[[0,3,4,6],[1,2,5,7]]) # export def MaskSplitter(mask): "Split `items` depending on the value of `mask`." def _inner(o): return IndexSplitter(mask2idxs(mask))(o) return _inner items = list(range(6)) splitter = MaskSplitter([True,False,False,True,False,True]) test_eq(splitter(items),[[1,2,4],[0,3,5]]) # export def FileSplitter(fname): "Split `items` by providing file `fname` (contains names of valid items separated by newline)." 
valid = Path(fname).read_text().split('\n') def _func(x): return x.name in valid def _inner(o): return FuncSplitter(_func)(o) return _inner with tempfile.TemporaryDirectory() as d: fname = Path(d)/'valid.txt' fname.write_text('\n'.join([Path(fnames[i]).name for i in [1,3,4]])) splitter = FileSplitter(fname) test_eq(splitter(fnames),[[0,2,5,6,7],[1,3,4]]) # export def ColSplitter(col='is_valid'): "Split `items` (supposed to be a dataframe) by value in `col`" def _inner(o): assert isinstance(o, pd.DataFrame), "ColSplitter only works when your items are a pandas DataFrame" valid_idx = (o.iloc[:,col] if isinstance(col, int) else o[col]).values.astype('bool') return IndexSplitter(mask2idxs(valid_idx))(o) return _inner df = pd.DataFrame({'a': [0,1,2,3,4], 'b': [True,False,True,True,False]}) splits = ColSplitter('b')(df) test_eq(splits, [[1,4], [0,2,3]]) #Works with strings or index splits = ColSplitter(1)(df) test_eq(splits, [[1,4], [0,2,3]]) # does not get confused if the type of 'is_valid' is integer, but it meant to be a yes/no df = pd.DataFrame({'a': [0,1,2,3,4], 'is_valid': [1,0,1,1,0]}) splits_by_int = ColSplitter('is_valid')(df) test_eq(splits_by_int, [[1,4], [0,2,3]]) # export def RandomSubsetSplitter(train_sz, valid_sz, seed=None): "Take randoms subsets of `splits` with `train_sz` and `valid_sz`" assert 0 < train_sz < 1 assert 0 < valid_sz < 1 assert train_sz + valid_sz <= 1. def _inner(o): if seed is not None: torch.manual_seed(seed) train_len,valid_len = int(len(o)*train_sz),int(len(o)*valid_sz) idxs = L(list(torch.randperm(len(o)).numpy())) return idxs[:train_len],idxs[train_len:train_len+valid_len] return _inner items = list(range(100)) valid_idx = list(np.arange(70,100)) splits = RandomSubsetSplitter(0.3, 0.1)(items) test_eq(len(splits[0]), 30) test_eq(len(splits[1]), 10) ``` ### Label The final set of functions is used to *label* a single item of data. ``` # export def parent_label(o): "Label `item` with the parent folder name." return Path(o).parent.name ``` Note that `parent_label` doesn't have anything customize, so it doesn't return a function - you can just use it directly. ``` test_eq(parent_label(fnames[0]), '3') test_eq(parent_label("fastai_dev/dev/data/mnist_tiny/train/3/9932.png"), '3') [parent_label(o) for o in fnames] #hide #test for MS Windows when os.path.sep is '\\' instead of '/' test_eq(parent_label(os.path.join("fastai_dev","dev","data","mnist_tiny","train", "3", "9932.png") ), '3') # export class RegexLabeller(): "Label `item` with regex `pat`." def __init__(self, pat, match=False): self.pat = re.compile(pat) self.matcher = self.pat.match if match else self.pat.search def __call__(self, o): res = self.matcher(str(o)) assert res,f'Failed to find "{self.pat}" in "{o}"' return res.group(1) ``` `RegexLabeller` is a very flexible function since it handles any regex search of the stringified item. Pass `match=True` to use `re.match` (i.e. check only start of string), or `re.search` otherwise (default). For instance, here's an example the replicates the previous `parent_label` results. 
``` f = RegexLabeller(fr'{os.path.sep}(\d){os.path.sep}') test_eq(f(fnames[0]), '3') [f(o) for o in fnames] f = RegexLabeller(r'(\d*)', match=True) test_eq(f(fnames[0].name), '9932') #export class ColReader(DisplayedTransform): "Read `cols` in `row` with potential `pref` and `suff`" def __init__(self, cols, pref='', suff='', label_delim=None): store_attr() self.pref = str(pref) + os.path.sep if isinstance(pref, Path) else pref self.cols = L(cols) def _do_one(self, r, c): o = r[c] if isinstance(c, int) else r[c] if c=='name' else getattr(r, c) if len(self.pref)==0 and len(self.suff)==0 and self.label_delim is None: return o if self.label_delim is None: return f'{self.pref}{o}{self.suff}' else: return o.split(self.label_delim) if len(o)>0 else [] def __call__(self, o, **kwargs): if len(self.cols) == 1: return self._do_one(o, self.cols[0]) return L(self._do_one(o, c) for c in self.cols) ``` `cols` can be a list of column names or a list of indices (or a mix of both). If `label_delim` is passed, the result is split using it. ``` df = pd.DataFrame({'a': 'a b c d'.split(), 'b': ['1 2', '0', '', '1 2 3']}) f = ColReader('a', pref='0', suff='1') test_eq([f(o) for o in df.itertuples()], '0a1 0b1 0c1 0d1'.split()) f = ColReader('b', label_delim=' ') test_eq([f(o) for o in df.itertuples()], [['1', '2'], ['0'], [], ['1', '2', '3']]) df['a1'] = df['a'] f = ColReader(['a', 'a1'], pref='0', suff='1') test_eq([f(o) for o in df.itertuples()], [L('0a1', '0a1'), L('0b1', '0b1'), L('0c1', '0c1'), L('0d1', '0d1')]) df = pd.DataFrame({'a': [L(0,1), L(2,3,4), L(5,6,7)]}) f = ColReader('a') test_eq([f(o) for o in df.itertuples()], [L(0,1), L(2,3,4), L(5,6,7)]) df['name'] = df['a'] f = ColReader('name') test_eq([f(df.iloc[0,:])], [L(0,1)]) ``` ## Categorize - ``` #export class CategoryMap(CollBase): "Collection of categories with the reverse mapping in `o2i`" def __init__(self, col, sort=True, add_na=False, strict=False): if is_categorical_dtype(col): items = L(col.cat.categories, use_list=True) #Remove non-used categories while keeping order if strict: items = L(o for o in items if o in col.unique()) else: if not hasattr(col,'unique'): col = L(col, use_list=True) # `o==o` is the generalized definition of non-NaN used by Pandas items = L(o for o in col.unique() if o==o) if sort: items = items.sorted() self.items = '#na#' + items if add_na else items self.o2i = defaultdict(int, self.items.val2idx()) if add_na else dict(self.items.val2idx()) def map_objs(self,objs): "Map `objs` to IDs" return L(self.o2i[o] for o in objs) def map_ids(self,ids): "Map `ids` to objects in vocab" return L(self.items[o] for o in ids) def __eq__(self,b): return all_equal(b,self) t = CategoryMap([4,2,3,4]) test_eq(t, [2,3,4]) test_eq(t.o2i, {2:0,3:1,4:2}) test_eq(t.map_objs([2,3]), [0,1]) test_eq(t.map_ids([0,1]), [2,3]) test_fail(lambda: t.o2i['unseen label']) t = CategoryMap([4,2,3,4], add_na=True) test_eq(t, ['#na#',2,3,4]) test_eq(t.o2i, {'#na#':0,2:1,3:2,4:3}) t = CategoryMap(pd.Series([4,2,3,4]), sort=False) test_eq(t, [4,2,3]) test_eq(t.o2i, {4:0,2:1,3:2}) col = pd.Series(pd.Categorical(['M','H','L','M'], categories=['H','M','L'], ordered=True)) t = CategoryMap(col) test_eq(t, ['H','M','L']) test_eq(t.o2i, {'H':0,'M':1,'L':2}) col = pd.Series(pd.Categorical(['M','H','M'], categories=['H','M','L'], ordered=True)) t = CategoryMap(col, strict=True) test_eq(t, ['H','M']) test_eq(t.o2i, {'H':0,'M':1}) # export class Categorize(DisplayedTransform): "Reversible transform of category string to `vocab` id" 
loss_func,order=CrossEntropyLossFlat(),1 def __init__(self, vocab=None, sort=True, add_na=False): if vocab is not None: vocab = CategoryMap(vocab, sort=sort, add_na=add_na) store_attr() def setups(self, dsets): if self.vocab is None and dsets is not None: self.vocab = CategoryMap(dsets, sort=self.sort, add_na=self.add_na) self.c = len(self.vocab) def encodes(self, o): try: return TensorCategory(self.vocab.o2i[o]) except KeyError as e: raise KeyError(f"Label '{o}' was not included in the training dataset") from e def decodes(self, o): return Category (self.vocab [o]) #export class Category(str, ShowTitle): _show_args = {'label': 'category'} cat = Categorize() tds = Datasets(['cat', 'dog', 'cat'], tfms=[cat]) test_eq(cat.vocab, ['cat', 'dog']) test_eq(cat('cat'), 0) test_eq(cat.decode(1), 'dog') test_stdout(lambda: show_at(tds,2), 'cat') test_fail(lambda: cat('bird')) cat = Categorize(add_na=True) tds = Datasets(['cat', 'dog', 'cat'], tfms=[cat]) test_eq(cat.vocab, ['#na#', 'cat', 'dog']) test_eq(cat('cat'), 1) test_eq(cat.decode(2), 'dog') test_stdout(lambda: show_at(tds,2), 'cat') cat = Categorize(vocab=['dog', 'cat'], sort=False, add_na=True) tds = Datasets(['cat', 'dog', 'cat'], tfms=[cat]) test_eq(cat.vocab, ['#na#', 'dog', 'cat']) test_eq(cat('dog'), 1) test_eq(cat.decode(2), 'cat') test_stdout(lambda: show_at(tds,2), 'cat') ``` ## Multicategorize - ``` # export class MultiCategorize(Categorize): "Reversible transform of multi-category strings to `vocab` id" loss_func,order=BCEWithLogitsLossFlat(),1 def __init__(self, vocab=None, add_na=False): super().__init__(vocab=vocab,add_na=add_na,sort=vocab==None) def setups(self, dsets): if not dsets: return if self.vocab is None: vals = set() for b in dsets: vals = vals.union(set(b)) self.vocab = CategoryMap(list(vals), add_na=self.add_na) def encodes(self, o): if not all(elem in self.vocab.o2i.keys() for elem in o): diff = [elem for elem in o if elem not in self.vocab.o2i.keys()] diff_str = "', '".join(diff) raise KeyError(f"Labels '{diff_str}' were not included in the training dataset") return TensorMultiCategory([self.vocab.o2i[o_] for o_ in o]) def decodes(self, o): return MultiCategory ([self.vocab [o_] for o_ in o]) #export class MultiCategory(L): def show(self, ctx=None, sep=';', color='black', **kwargs): return show_title(sep.join(self.map(str)), ctx=ctx, color=color, **kwargs) cat = MultiCategorize() tds = Datasets([['b', 'c'], ['a'], ['a', 'c'], []], tfms=[cat]) test_eq(tds[3][0], TensorMultiCategory([])) test_eq(cat.vocab, ['a', 'b', 'c']) test_eq(cat(['a', 'c']), tensor([0,2])) test_eq(cat([]), tensor([])) test_eq(cat.decode([1]), ['b']) test_eq(cat.decode([0,2]), ['a', 'c']) test_stdout(lambda: show_at(tds,2), 'a;c') # if vocab supplied, ensure it maintains its order (i.e., it doesn't sort) cat = MultiCategorize(vocab=['z', 'y', 'x']) test_eq(cat.vocab, ['z','y','x']) test_fail(lambda: cat('bird')) # export class OneHotEncode(DisplayedTransform): "One-hot encodes targets" order=2 def __init__(self, c=None): store_attr() def setups(self, dsets): if self.c is None: self.c = len(L(getattr(dsets, 'vocab', None))) if not self.c: warn("Couldn't infer the number of classes, please pass a value for `c` at init") def encodes(self, o): return TensorMultiCategory(one_hot(o, self.c).float()) def decodes(self, o): return one_hot_decode(o, None) ``` Works in conjunction with ` MultiCategorize` or on its own if you have one-hot encoded targets (pass a `vocab` for decoding and `do_encode=False` in this case) ``` _tfm = OneHotEncode(c=3) 
test_eq(_tfm([0,2]), tensor([1.,0,1])) test_eq(_tfm.decode(tensor([0,1,1])), [1,2]) tds = Datasets([['b', 'c'], ['a'], ['a', 'c'], []], [[MultiCategorize(), OneHotEncode()]]) test_eq(tds[1], [tensor([1.,0,0])]) test_eq(tds[3], [tensor([0.,0,0])]) test_eq(tds.decode([tensor([False, True, True])]), [['b','c']]) test_eq(type(tds[1][0]), TensorMultiCategory) test_stdout(lambda: show_at(tds,2), 'a;c') #hide #test with passing the vocab tds = Datasets([['b', 'c'], ['a'], ['a', 'c'], []], [[MultiCategorize(vocab=['a', 'b', 'c']), OneHotEncode()]]) test_eq(tds[1], [tensor([1.,0,0])]) test_eq(tds[3], [tensor([0.,0,0])]) test_eq(tds.decode([tensor([False, True, True])]), [['b','c']]) test_eq(type(tds[1][0]), TensorMultiCategory) test_stdout(lambda: show_at(tds,2), 'a;c') # export class EncodedMultiCategorize(Categorize): "Transform of one-hot encoded multi-category that decodes with `vocab`" loss_func,order=BCEWithLogitsLossFlat(),1 def __init__(self, vocab): super().__init__(vocab, sort=vocab==None) self.c = len(vocab) def encodes(self, o): return TensorMultiCategory(tensor(o).float()) def decodes(self, o): return MultiCategory (one_hot_decode(o, self.vocab)) _tfm = EncodedMultiCategorize(vocab=['a', 'b', 'c']) test_eq(_tfm([1,0,1]), tensor([1., 0., 1.])) test_eq(type(_tfm([1,0,1])), TensorMultiCategory) test_eq(_tfm.decode(tensor([False, True, True])), ['b','c']) _tfm2 = EncodedMultiCategorize(vocab=['c', 'b', 'a']) test_eq(_tfm2.vocab, ['c', 'b', 'a']) #export class RegressionSetup(DisplayedTransform): "Transform that floatifies targets" loss_func=MSELossFlat() def __init__(self, c=None): store_attr() def encodes(self, o): return tensor(o).float() def decodes(self, o): return TitledFloat(o) if o.ndim==0 else TitledTuple(o_.item() for o_ in o) def setups(self, dsets): if self.c is not None: return try: self.c = len(dsets[0]) if hasattr(dsets[0], '__len__') else 1 except: self.c = 0 _tfm = RegressionSetup() dsets = Datasets([0, 1, 2], RegressionSetup) test_eq(dsets.c, 1) test_eq_type(dsets[0], (tensor(0.),)) dsets = Datasets([[0, 1, 2], [3,4,5]], RegressionSetup) test_eq(dsets.c, 3) test_eq_type(dsets[0], (tensor([0.,1.,2.]),)) #export def get_c(dls): if getattr(dls, 'c', False): return dls.c if getattr(getattr(dls.train, 'after_item', None), 'c', False): return dls.train.after_item.c if getattr(getattr(dls.train, 'after_batch', None), 'c', False): return dls.train.after_batch.c vocab = getattr(dls, 'vocab', []) if len(vocab) > 0 and is_listy(vocab[-1]): vocab = vocab[-1] return len(vocab) ``` ## End-to-end dataset example with MNIST Let's show how to use those functions to grab the mnist dataset in a `Datasets`. First we grab all the images. ``` path = untar_data(URLs.MNIST_TINY) items = get_image_files(path) ``` Then we split between train and validation depending on the folder. ``` splitter = GrandparentSplitter() splits = splitter(items) train,valid = (items[i] for i in splits) train[:3],valid[:3] ``` Our inputs are images that we open and convert to tensors, our targets are labeled depending on the parent directory and are categories. 
``` from PIL import Image def open_img(fn:Path): return Image.open(fn).copy() def img2tensor(im:Image.Image): return TensorImage(array(im)[None]) tfms = [[open_img, img2tensor], [parent_label, Categorize()]] train_ds = Datasets(train, tfms) x,y = train_ds[3] xd,yd = decode_at(train_ds,3) test_eq(parent_label(train[3]),yd) test_eq(array(Image.open(train[3])),xd[0].numpy()) ax = show_at(train_ds, 3, cmap="Greys", figsize=(1,1)) assert ax.title.get_text() in ('3','7') test_fig_exists(ax) ``` ## ToTensor - ``` #export class ToTensor(Transform): "Convert item to appropriate tensor class" order = 5 ``` ## IntToFloatTensor - ``` # export class IntToFloatTensor(DisplayedTransform): "Transform image to float tensor, optionally dividing by 255 (e.g. for images)." order = 10 #Need to run after PIL transforms on the GPU def __init__(self, div=255., div_mask=1): store_attr() def encodes(self, o:TensorImage): return o.float().div_(self.div) def encodes(self, o:TensorMask ): return o.long() // self.div_mask def decodes(self, o:TensorImage): return ((o.clamp(0., 1.) * self.div).long()) if self.div else o t = (TensorImage(tensor(1)),tensor(2).long(),TensorMask(tensor(3))) tfm = IntToFloatTensor() ft = tfm(t) test_eq(ft, [1./255, 2, 3]) test_eq(type(ft[0]), TensorImage) test_eq(type(ft[2]), TensorMask) test_eq(ft[0].type(),'torch.FloatTensor') test_eq(ft[1].type(),'torch.LongTensor') test_eq(ft[2].type(),'torch.LongTensor') ``` ## Normalization - ``` # export def broadcast_vec(dim, ndim, *t, cuda=True): "Make a vector broadcastable over `dim` (out of `ndim` total) by prepending and appending unit axes" v = [1]*ndim v[dim] = -1 f = to_device if cuda else noop return [f(tensor(o).view(*v)) for o in t] # export @docs class Normalize(DisplayedTransform): "Normalize/denorm batch of `TensorImage`" parameters,order = L('mean', 'std'),99 def __init__(self, mean=None, std=None, axes=(0,2,3)): store_attr() @classmethod def from_stats(cls, mean, std, dim=1, ndim=4, cuda=True): return cls(*broadcast_vec(dim, ndim, mean, std, cuda=cuda)) def setups(self, dl:DataLoader): if self.mean is None or self.std is None: x,*_ = dl.one_batch() self.mean,self.std = x.mean(self.axes, keepdim=True),x.std(self.axes, keepdim=True)+1e-7 def encodes(self, x:TensorImage): return (x-self.mean) / self.std def decodes(self, x:TensorImage): f = to_cpu if x.device.type=='cpu' else noop return (x*f(self.std) + f(self.mean)) _docs=dict(encodes="Normalize batch", decodes="Denormalize batch") mean,std = [0.5]*3,[0.5]*3 mean,std = broadcast_vec(1, 4, mean, std) batch_tfms = [IntToFloatTensor(), Normalize.from_stats(mean,std)] tdl = TfmdDL(train_ds, after_batch=batch_tfms, bs=4, device=default_device()) x,y = tdl.one_batch() xd,yd = tdl.decode((x,y)) test_eq(x.type(), 'torch.cuda.FloatTensor' if default_device().type=='cuda' else 'torch.FloatTensor') test_eq(xd.type(), 'torch.LongTensor') test_eq(type(x), TensorImage) test_eq(type(y), TensorCategory) assert x.mean()<0.0 assert x.std()>0.5 assert 0<xd.float().mean()/255.<1 assert 0<xd.float().std()/255.<0.5 #hide nrm = Normalize() batch_tfms = [IntToFloatTensor(), nrm] tdl = TfmdDL(train_ds, after_batch=batch_tfms, bs=4) x,y = tdl.one_batch() test_close(x.mean(), 0.0, 1e-4) assert x.std()>0.9, x.std() #Just for visuals from fastai.vision.core import * tdl.show_batch((x,y)) #hide x,y = cast(x,Tensor),cast(y,Tensor) #Lose type of tensors (to emulate predictions) test_ne(type(x), TensorImage) tdl.show_batch((x,y), figsize=(1,1)) #Check that types are put back by dl. 
``` ## Export - ``` #hide from nbdev.export import notebook2script notebook2script() ```
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import gzip #loading the data from the given file image_size = 28 num_images = 55000 f = gzip.open('train-images-idx3-ubyte.gz','r') f.read(16) buf = f.read(image_size * image_size * num_images) data = np.frombuffer(buf, dtype=np.uint8).astype(np.float32) data = data.reshape(num_images, image_size, image_size, 1) #pritning the images image = np.asarray(data[550]).squeeze() plt.imshow(image) plt.show() #storing the data in the form of matrix X=np.asarray(data[:]) X=X.squeeze() X=X.reshape(X.shape[0],X.shape[2]*X.shape[1]) X=X.T/255 X.shape #knowing the no of features and the no of data points in the given array m=X.shape[1] n=X.shape[0] print(m) print(n) #loading the labels f = gzip.open('train-labels-idx1-ubyte.gz','r') f.read(8) Y = np.zeros((1,m)) for i in range(0,54999): buf = f.read(1) labels = np.frombuffer(buf, dtype=np.uint8).astype(np.int64) Y[0,i]=labels print(Y[0,550]) print(Y.shape) Y1= np.zeros((10,m)) for i in range (0,m): for j in range(0,10): if(j==int(Y[0,i])): Y1[j,i]=1 else: Y1[j,i]=0 Y=Y1 ``` df = pd.read_csv('Downloads/mnist_train.csv',header = None) data = np.array(df) X = (data[:,1:].transpose())/255 m = X.shape[1] n = X.shape[0] Y_orig = data[:,0:1].transpose() Y = np.zeros((10,m)) for i in range(m): Y[int(Y_orig[0,i]),i] = 1 ``` def relu(Z): result = (Z + np.abs(Z))/2 return result def relu_backward(Z): result = (Z + np.abs(Z))/(2*np.abs(Z)) return result def softmax(Z): temp = np.exp(Z) result = temp/np.sum(temp,axis = 0,keepdims = True) return result def initialize_parameters(layer_dims): parameters = {} L = len(layer_dims) - 1 for l in range(1,L + 1): parameters["W" + str(l)] = np.random.randn(layer_dims[l],layer_dims[l-1])*0.01 parameters["b" + str(l)] = np.zeros((layer_dims[l],1)) #print(parameters) return parameters def forward_prop(X,parameters): cache = {} L = len(layer_dims) - 1 A_prev = X for l in range(1,L): Z = parameters["W" + str(l)].dot(A_prev) + parameters["b" + str(l)] A = relu(Z) cache["Z" + str(l)] = Z A_prev = A Z = parameters["W" + str(L)].dot(A_prev) + parameters["b" + str(L)] AL = softmax(Z) cache["Z" + str(L)] = Z return AL,cache def compute_cost(AL,Y): m = AL.shape[1] cost = (np.sum(-(Y * np.log(AL))))/(m) return cost def backward_prop(X,Y,cache,parameters,AL,layer_dims): m = X.shape[1] dparameters = {} L = len(layer_dims) - 1 dZ = AL - Y dparameters["dW" + str(L)] = dZ.dot(relu(cache["Z" + str(L-1)]).transpose())/m #dparameters["dW" + str(L)] = dZ.dot(X.transpose())/m dparameters["db" + str(L)] = np.sum(dZ,axis = 1,keepdims = True)/m for l in range(1,L): dZ = ((parameters["W" + str(L-l+1)].transpose()).dot(dZ)) * (relu_backward(cache["Z" + str(L-l)])) if L-l-1 != 0: dparameters["dW" + str(L-l)] = dZ.dot(relu(cache["Z" + str(L-1-l)]).transpose())/m else: dparameters["dW" + str(L-l)] = dZ.dot(X.transpose())/m dparameters["db" + str(L-l)] = np.sum(dZ,axis = 1,keepdims = True)/m return dparameters def update_parameters(parameters,dparameters,layer_dims,learning_rate): L = len(layer_dims) - 1 for l in range(1,L+1): parameters["W" + str(l)] = parameters["W" + str(l)] - learning_rate*dparameters["dW" + str(l)] parameters["b" + str(l)] = parameters["b" + str(l)] - learning_rate*dparameters["db" + str(l)] return parameters def model(X,Y,layer_dims,learning_rate,num_iters): costs = [] parameters = initialize_parameters(layer_dims) for i in range(num_iters): AL,cache = forward_prop(X,parameters) cost = compute_cost(AL,Y) costs.append(cost) dparameters = 
backward_prop(X,Y,cache,parameters,AL,layer_dims) parameters = update_parameters(parameters,dparameters,layer_dims,learning_rate) print(i,"\t",cost) return parameters,costs #trainig layer_dims = [784,120,10] parameters,costs = model(X,Y,layer_dims,0.5,2000) plt.plot(costs) #training df = pd.read_csv('mnist_test.csv',header = None) data = np.array(df) X_test = (data[:,1:].transpose())/255 Y_test = data[:,0:1].transpose() accuracy = 0 m_test = X_test.shape[1] predict = np.zeros((1,m_test)) A_test,cache = forward_prop(X_test,parameters) for i in range(m_test): max = 0 for j in range(10): if A_test[j,i] > max: max = A_test[j,i] max_index = j predict[0,i] = max_index if predict[0,i] == Y_test[0,i]: accuracy = accuracy + 1 accuracy = (accuracy/m_test)*100 print(accuracy,"%") index = 0 #change index toview different examples index = 897 print("Its a",int(predict[0,index])) plt.imshow(X_test[:,index].reshape(28,28)) ```
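The accuracy loop above can be written more compactly with `np.argmax`; here is a small sketch assuming the `forward_prop`, `parameters`, `X_test` and `Y_test` objects defined above:

```python
import numpy as np

# Vectorized evaluation: pick the class with the highest softmax output per column.
A_test, _ = forward_prop(X_test, parameters)        # shape (10, m_test)
predictions = np.argmax(A_test, axis=0)             # shape (m_test,)
accuracy = np.mean(predictions == Y_test[0, :]) * 100
print(f"test accuracy: {accuracy:.2f}%")
```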
# Backtest Orbit Model In this section, we will cover: - How to create a TimeSeriesSplitter - How to create a BackTester and retrieve the backtesting results - How to leverage the backtesting to tune the hyper-paramters for orbit models ``` %matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt import orbit from orbit.models import LGT, DLT from orbit.diagnostics.backtest import BackTester, TimeSeriesSplitter from orbit.diagnostics.plot import plot_bt_predictions from orbit.diagnostics.metrics import smape, wmape from orbit.utils.dataset import load_iclaims import warnings warnings.filterwarnings('ignore') print(orbit.__version__) # load log-transformed data data = load_iclaims() data.shape ``` The way to gauge the performance of a time-series model is through re-training models with different historic periods and check their forecast within certain steps. This is similar to a time-based style cross-validation. More often, we called it `backtest` in time-series modeling. The purpose of this notebook is to illustrate how to `backtest` a single model using `BackTester` `BackTester` will compose a `TimeSeriesSplitter` within it, but `TimeSeriesSplitter` is useful as a standalone, in case there are other tasks to perform that requires splitting but not backtesting. `TimeSeriesSplitter` implemented each 'slices' as genertor, i.e it can be used in a for loop. You can also retrieve the composed `TimeSeriesSplitter` object from `BackTester` to utilize the additional methods in `TimeSeriesSplitter` Currently, there are two schemes supported for the back-testing engine: expanding window and rolling window. * **expanding window**: for each back-testing model training, the train start date is fixed, while the train end date is extended forward. * **rolling window**: for each back-testing model training, the training window length is fixed but the window is moving forward. ## Create a TimeSeriesSplitter There two main way to splitting a timeseries: expanding and rolling. Expanding window has a fixed starting point, and the window length grows as we move forward in timeseries. It is useful when we want to incoporate all historical information. On the other hand, rolling window has a fixed window length, and the starting point of the window moves forward as we move forward in timeseries. Now, we will illustrate how to use `TimeSeriesSplitter` to split the claims timeseries. ### Expanding window ``` # configs min_train_len = 380 # minimal length of window length forecast_len = 20 # length forecast window incremental_len = 20 # step length for moving forward ex_splitter = TimeSeriesSplitter(df=data, min_train_len=min_train_len, incremental_len=incremental_len, forecast_len=forecast_len, window_type='expanding', date_col='week') print(ex_splitter) ``` We can visualize the splits, green is training window and yellow it the forecasting windown. The starting point is always 0 for three splits but window length increases from 380 to 420. ``` _ = ex_splitter.plot() ``` ### Rolling window ``` # configs min_train_len = 380 # in case of rolling window, this specify the length of window length forecast_len = 20 # length forecast window incremental_len = 20 # step length for moving forward roll_splitter = TimeSeriesSplitter(data, min_train_len=min_train_len, incremental_len=incremental_len, forecast_len=forecast_len, window_type='rolling', date_col='week') ``` We can visualize the splits, green is training window and yellow it the forecasting windown. 
The window length is always 380, while the starting point moves forward 20 weeks each steps. ``` _ = roll_splitter.plot() ``` ### Specifying number of splits User can also define number of splits using `n_splits` instead of specifying minimum training length. That way, minimum training length will be automatically calculated. ``` ex_splitter2 = TimeSeriesSplitter(data, min_train_len=min_train_len, incremental_len=incremental_len, forecast_len=forecast_len, n_splits=5, window_type='expanding', date_col='week') _ = ex_splitter2.plot() ``` ### TimeSeriesSplitter as generator `TimeSeriesSplitter` is implemented as a genetor, therefore we can call `split()` to loop through it. It comes handy even for tasks other than backtest. ``` for train_df, test_df, scheme, key in roll_splitter.split(): print('Initial Claim slice {} rolling mean:{:.3f}'.format(key, train_df['claims'].mean())) ``` ## Create a BackTester Now, we are ready to do backtest, first let's initialize a `DLT` model and a `BackTester`. You pass in `TimeSeriesSplitter` parameters to `BackTester`. ``` # instantiate a model dlt = DLT( date_col='week', response_col='claims', regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'], seasonality=52, estimator='stan-map', ) # configs min_train_len = 100 forecast_len = 20 incremental_len = 100 window_type = 'expanding' bt = BackTester( model=dlt, df=data, min_train_len=min_train_len, incremental_len=incremental_len, forecast_len=forecast_len, window_type=window_type, ) ``` ## Backtest fit and predict The most expensive portion of backtesting is fitting the model iteratively. Thus, we separate the api calls for `fit_predict` and `score` to avoid redundant computation for multiple metrics or scoring methods ``` bt.fit_predict() ``` Once `fit_predict()` is called, the fitted models and predictions can be easily retrieved from `BackTester`. Here the data is grouped by the date, split_key, and whether or not that observation is part of the training or test data ``` predicted_df = bt.get_predicted_df() predicted_df.head() ``` We also provide a plotting utility to visualize the predictions against the actuals for each split. ``` plot_bt_predictions(predicted_df, metrics=smape, ncol=2, include_vline=True); ``` Users might find this useful for any custom computations that may need to be performed on the set of predicted data. Note that the columns are renamed to generic and consistent names. Sometimes, it might be useful to match the data back to the original dataset for ad-hoc diagnostics. This can easily be done by merging back to the orignal dataset ``` predicted_df.merge(data, left_on='date', right_on='week') ``` ## Backtest Scoring The main purpose of `BackTester` are the evaluation metrics. Some of the most widely used metrics are implemented and built into the `BackTester` API. The default metric list is **smape, wmape, mape, mse, mae, rmsse**. ``` bt.score() ``` It is possible to filter for only specific metrics of interest, or even implement your own callable and pass into the `score()` method. For example, see this function that uses last observed value as a predictor and computes the `mse`. Or `naive_error` which computes the error as the delta between predicted values and the training period mean. 
Note that these are not really useful error metrics, just examples of the kind of callables you can pass in.
```
def mse_naive(test_actual):
    actual = test_actual[1:]
    predicted = test_actual[:-1]
    return np.mean(np.square(actual - predicted))

def naive_error(train_actual, test_predicted):
    train_mean = np.mean(train_actual)
    return np.mean(np.abs(test_predicted - train_mean))

bt.score(metrics=[mse_naive, naive_error])
```
It doesn't take additional time to refit and predict the model, since the results are stored when `fit_predict()` is called. Check the docstrings for the criteria a callable must satisfy to be supported by this API.

In some cases, we may want to evaluate our metrics on both train and test data. To do this you can call score again with the following indicator
```
bt.score(include_training_metrics=True)
```
## Backtest Get Models

In cases where `BackTester` doesn't cut it or for more custom use-cases, there's an interface to export the `TimeSeriesSplitter` and predicted data, as shown earlier. It's also possible to get each of the fitted models for deeper diving
```
fitted_models = bt.get_fitted_models()
model_1 = fitted_models[0]
model_1.get_regression_coefs()
```
`BackTester` composes a `TimeSeriesSplitter` within it, but `TimeSeriesSplitter` can also be created on its own as a standalone object. See the section on `TimeSeriesSplitter` above for more details on how to use the splitter. All of the additional `TimeSeriesSplitter` args can also be passed into `BackTester` on instantiation
```
ts_splitter = bt.get_splitter()
_ = ts_splitter.plot()
```
## Hyperparameter Tuning

After seeing the results from the backtest, users may wish to fine-tune the hyperparameters. Orbit also provides a `grid_search_orbit` utility for parameter searching. It uses `BackTester` under the hood, so users can compare backtest metrics for different parameter combinations.
```
from orbit.utils.params_tuning import grid_search_orbit

# defining the search space for the level smoothing parameter and the seasonality smoothing parameter
param_grid = {
    'level_sm_input': [0.3, 0.5, 0.8],
    'seasonality_sm_input': [0.3, 0.5, 0.8],
}

# configs
min_train_len = 380 # in case of rolling window, this specifies the length of the training window
forecast_len = 20 # length of forecast window
incremental_len = 20 # step length for moving forward

best_params, tuned_df = grid_search_orbit(param_grid,
                                          model=dlt,
                                          df=data,
                                          min_train_len=min_train_len,
                                          incremental_len=incremental_len,
                                          forecast_len=forecast_len,
                                          metrics=None,
                                          criteria=None,
                                          verbose=True)

tuned_df.head() # backtest output for each parameter combination searched

best_params # output best parameters
```
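As a follow-up, the tuned values can be fed back into a final model. The sketch below assumes `best_params` comes back as a list holding the winning parameter dict; check the printed output above to confirm the exact structure returned by your orbit version.

```python
# Hypothetical follow-up: refit a DLT on the full history with the tuned smoothing inputs.
# `best_params[0]` is assumed to be a dict like {'level_sm_input': ..., 'seasonality_sm_input': ...}.
dlt_tuned = DLT(
    date_col='week',
    response_col='claims',
    regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'],
    seasonality=52,
    estimator='stan-map',
    **best_params[0],
)
dlt_tuned.fit(df=data)
```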
# Numeric representation of words and texts

In this notebook we will present ways of representing textual values by means of a numeric representation. We will use pandas; if you want to understand a bit about pandas, [see this notebook](pandas.ipynb). So do not forget to install the pandas module:

``pip3 install pandas``

In machine learning we very often need a numeric representation of a given value. For example:
```
import pandas as pd

df_jogos = pd.DataFrame([ ["boa","nublado","não"],
                          ["boa","chuvoso","não"],
                          ["média","nublado","sim"],
                          ["fraca","chuvoso","não"]],
                        columns=["disposição","tempo","jogar volei?"])
df_jogos
```
If we want to map each column (from now on called an attribute) to a value, the simplest way to do the transformation is to map that attribute directly to a numeric value. See the example below. Here we have two attributes, the player's disposition (`disposição`) and the weather (`tempo`), and we want to predict whether the player will play volleyball or not (`jogar volei?`). Both the attributes and the class can be mapped to numbers. Moreover, the attribute `disposição` represents a scale, which makes this kind of transformation well suited for it.
```
from typing import Dict

def mapeia_atributo_para_int(df_data: pd.DataFrame, coluna: str, dic_nom_to_int: Dict[str, int]):
    # maps each nominal value of the column to the corresponding integer
    for i, valor in enumerate(df_data[coluna]):
        valor_int = dic_nom_to_int[valor]
        df_data[coluna].iat[i] = valor_int

df_jogos = pd.DataFrame([ ["boa","nublado","sim"],
                          ["boa","chuvoso","não"],
                          ["média","ensolarado","sim"],
                          ["fraca","chuvoso","não"]],
                        columns=["disposição","tempo","jogar volei?"])

dic_disposicao = {"boa":3,"média":2,"fraca":1}
mapeia_atributo_para_int(df_jogos, "disposição", dic_disposicao)

dic_tempo = {"ensolarado":3,"nublado":2,"chuvoso":1}
mapeia_atributo_para_int(df_jogos, "tempo", dic_tempo)

dic_volei = {"sim":1, "não":0}
mapeia_atributo_para_int(df_jogos, "jogar volei?", dic_volei)

df_jogos
```
## Binarization of categorical attributes

We can binarize the categorical attributes so that each attribute value becomes a column that receives `0` when that value is absent and `1` otherwise. In our example:
```
from preprocessamento_atributos import BagOfItems

df_jogos = pd.DataFrame([ [4,"boa","nublado","sim"],
                          [3,"boa","chuvoso","não"],
                          [2,"média","ensolarado","sim"],
                          [1,"fraca","chuvoso","não"]],
                        columns=["id","disposição","tempo","jogar volei?"])
dic_disposicao = {"boa":3,"média":2,"fraca":1}
bag_of_tempo = BagOfItems(0) # see the implementation of this method in preprocessamento_atributos.py
df_jogos_bot = bag_of_tempo.cria_bag_of_items(df_jogos,["tempo"])
df_jogos_bot
```
Since the test set contains many values you do not know in advance, doing it this way could leave attributes that occur only in the test set completely zeroed in the training set, which is useless. For example:
```
df_jogos_treino = df_jogos[:2]
df_jogos_treino

df_jogos_teste = df_jogos[2:]
df_jogos_teste
```
## A real example

Consider this real example of movies and their actors ([obtained on Kaggle](https://www.kaggle.com/rounakbanik/the-movies-dataset)):
```
import pandas as pd

df_amostra = pd.read_csv("movies_amostra.csv")
df_amostra
```
In this example, the columns that represent the main actors can be binarized. In our case, we can put all the actors into a "Bag of Items". The actors are represented by the columns `ator_1`, `ator_2`, ..., `ator_5`.
Below is a suggestion of how to do this on the dataset:
```
import pandas as pd
from preprocessamento_atributos import BagOfItems

obj_bag_of_actors = BagOfItems(min_occur=3) # boa = bag of actors ;)
df_amostra_boa = obj_bag_of_actors.cria_bag_of_items(df_amostra,["ator_1","ator_2","ator_3","ator_4","ator_5"])
df_amostra_boa
```
Note that we end up with a lot of attributes, one per actor. Even though it is usually better to have fewer, more informative attributes, a machine learning method may still be able to use this many attributes effectively. In particular, the [linear SVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html) and [RandomForest](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) are methods that tend to do well on this kind of data.

This is the most practical way to do it. However, in machine learning we usually split our data into at least train and test sets, where the training set is the data you have full access to and the test set should reproduce a sample of the real world. Suppose the training set contains rare actors that do not occur in the test set; in that case those attributes would be useless for the test. This can make the result reproduce the real world less faithfully - in this case, it is quite possible that the difference is almost insignificant. But if we want to do it the "more correct" way, we have to consider only the training data for this:
```
# assuming 80% of the sample is training data
df_treino_amostra = df_amostra.sample(frac=0.8, random_state=2)
df_teste_amostra = df_amostra.drop(df_treino_amostra.index)

# min_occur=3 defines the minimum number of occurrences for an actor to be considered,
# since an actor who appeared in only a few movies may be less relevant for predicting the genre
obj_bag_of_actors = BagOfItems(min_occur=3)
df_treino_amostra_boa = obj_bag_of_actors.cria_bag_of_items(df_treino_amostra,["ator_1","ator_2","ator_3","ator_4","ator_5"])
df_teste_amostra_boa = obj_bag_of_actors.aplica_bag_of_items(df_teste_amostra,["ator_1","ator_2","ator_3","ator_4","ator_5"])
```
## Bag of Words representation

We often have texts that are relevant for a given machine learning task, so we need to represent them for our machine learning method. The most usual way to do this is the `Bag of Words`, in which each word is an attribute and its value is the frequency of that word in the text (or some other value that indicates the importance of that word in the text). For example, take the sentences `A casa é grande` and `A casa é verde verde`, where each sentence is a different instance. The representation would be as follows:
```
dic_bow = {"a":[1,1],
           "casa":[1,1],
           "é":[1,1],
           "grande":[1,0],
           "verde":[0,2]}
df_bow = pd.DataFrame.from_dict(dic_bow)
df_bow
```
In the approach above we used the frequency of a term to define its importance in the text. However, some terms have a very high frequency and a very low importance: articles and prepositions, for example, since they do not discriminate the text. One way to measure the discriminative power of words is the `TF-IDF` metric. To compute it, we first compute the frequency of a term in the document (TF) and then multiply it by the IDF. The formula for the TF-IDF of term $i$ in document (or instance) $j$ is the following:

\begin{equation}
TFIDF_{ij} = TF_{ij} \times IDF_i
\end{equation}

\begin{equation}
TF_{ij} = log(f_{ij})
\end{equation}

where $f_{ij}$ is the frequency of term $i$ in document $j$.
The `log` is used to smooth very high values, and the $IDF$ (Inverse Document Frequency) of term $i$ is computed as follows:

\begin{equation}
IDF_i = log(\frac{N}{n_i})
\end{equation}

where $N$ is the number of documents in the collection and $n_i$ is the number of documents in which term $i$ occurs. The more discriminative a term is, the fewer documents it is expected to occur in and, consequently, the higher its $IDF$ will be. For example, consider the words `de`, `bebida` and `cerveja` ("of", "drink" and "beer"). `cerveja` is a more discriminative word than `bebida`, and `bebida` is more discriminative than the preposition `de`. The less discriminative terms will most likely be the more frequent ones. For example, if we have a collection of 1000 documents, `de` could occur in 900 documents, `bebida` in 500 and `cerveja` in 100 documents. If we do the calculation, we see that the more discriminative a term is, the higher its IDF:
```
import math

N = 1000
n_de = 900
n_bebida = 500
n_cerveja = 100

IDF_de = math.log(N/n_de)
IDF_bebida = math.log(N/n_bebida)
IDF_cerveja = math.log(N/n_cerveja)

print(f"IDF_de: {IDF_de}\tIDF_bebida:{IDF_bebida}\tIDF_cerveja:{IDF_cerveja}")
```
The `scikit-learn` library also provides a [TfidfVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) class that transforms a text into a vector of attributes, using TF-IDF as the value for the relevance of each term. See an example on the `resumo` (plot summary) column of our movie dataset:
```
import pandas as pd
from preprocessamento_atributos import BagOfWords

df_amostra = pd.read_csv("datasets/movies_amostra.csv")
bow_amostra = BagOfWords()
df_bow_amostra = bow_amostra.cria_bow(df_amostra,"resumo")
df_bow_amostra
```
Since there are many attributes, it may look like the result was not generated correctly. But if you filter the words of a particular summary you will see that it is fine:
```
df_bow_amostra[["in","lake", "high"]]
```
Do not feel restricted to these representations. You can try to build more succinct representations, for example: to preprocess the data about the movie crew (actors, director and writer), compute the number of comedy movies that crew members participated in and then the number of action movies. In this case, since you will be using the class label, you must use **only** the training data. For the summary, you can use keywords: for example, build a list of keywords related to "action" and count how many of those keywords occur in each summary.
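As a minimal illustration of the `TfidfVectorizer` mentioned above, here is a sketch assuming the same `movies_amostra.csv` file and its `resumo` column (on older scikit-learn versions use `get_feature_names()` instead of `get_feature_names_out()`):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd

df_amostra = pd.read_csv("datasets/movies_amostra.csv")

# Fit TF-IDF on the plot summaries; missing summaries are replaced by empty strings.
vectorizer = TfidfVectorizer()
X_tfidf = vectorizer.fit_transform(df_amostra["resumo"].fillna(""))

print(X_tfidf.shape)                            # (number of movies, vocabulary size)
print(vectorizer.get_feature_names_out()[:10])  # first few terms of the vocabulary
```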
# Hash Codes Consider the challenges associated with the 16-bit hashcode for a character string `s` that sums the Unicode values of the characters in `s`. For example, let `s = "stop"`. It's unicode character representation is: ``` for char in "stop": print(char + ': ' + str(ord(char))) sum([ord(x) for x in "stop"]) ``` If we then sum these unicode values, we arrive as the following hash code: ``` stop -----------> 454 ``` The problem is, the following strings will all map to the same value! ``` stop -----------> 454 pots -----------> 454 tops -----------> 454 spot -----------> 454 ``` A better hash code would take into account the _position_ of our characters. ## Polynomial Hash code If we refer to the characters of our string as $x_0, x_1, \dots, x_n$, we can then chose a non-zero constant, $a \neq 1$, and use a hash code: $$a^{n-1} x_0 + a^{n-2} x_1 + \dots + a^1 x_{n-1} + a^0 x_{n}$$ This is simply a polynomial in $a$ that has our $x_i$ values as it's coefficients. This is known as a **polynomial** hash code. ``` 1 << 32 2**32 2 << 2 ``` ## Investigate hash map uniformity ``` import random import numpy as np import matplotlib.pyplot as plt %config InlineBackend.figure_format='retina' n = 0 prime = 109345121 scale = 1 + random.randrange(prime - 1) shift = random.randrange(prime) def my_hash_func(k, upper): table = upper * [None] hash_code = hash(k) compressed_code = (hash_code * scale + shift) % prime % len(table) return compressed_code upper = 1000 inputs = list(range(0, upper)) hash_results = [] for i in inputs: hash_results.append(my_hash_func(i, upper)) plt.figure(figsize=(15,10)) plt.plot(inputs, hash_results) plt.figure(figsize=(15,10)) plt.scatter(inputs, hash_results) def moving_average(x, w): return np.convolve(x, np.ones(w), 'valid') / w averages_over_window_size_5 = moving_average(hash_results, 5) plt.hist(averages_over_window_size_5) l = [4, 7, 9, 13, 1, 3, 7] l1 = [1, 4, 7]; l2 = [3, 9, 13] def merge_sort(l): size = len(l) midway = size // 2 first_half = l[:midway] second_half = l[midway:] if len(first_half) > 1 or len(second_half) > 1: sorted_first_half = merge_sort(first_half) sorted_second_half = merge_sort(second_half) else: sorted_first_half = first_half sorted_second_half = second_half sorted_l = merge(sorted_first_half, sorted_second_half) return sorted_l def merge(l1, l2): """Merge two sorted lists.""" i = 0 j = 0 lmerged = [] while (i <= len(l1) - 1) or (j <= len(l2) - 1): if i == len(l1): lmerged.extend(l2[j:]) break if j == len(l2): lmerged.extend(l1[i:]) break if (i < len(l1)) and (l1[i] < l2[j]): lmerged.append(l1[i]) i += 1 else: lmerged.append(l2[j]) j += 1 return lmerged merge_sort(l) l = [random.choice(list(range(1000))) for x in range(1000)] %%time res = sorted(l) %%time res = merge_sort(l) ```
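To tie the polynomial hash code formula from earlier in this section to code, here is a small sketch; the constant `a = 33` is just an illustrative choice (the text only requires a non-zero constant different from 1), and the 16-bit mask mirrors the 16-bit hash code mentioned at the start.

```python
def polynomial_hash(s, a=33, bits=16):
    """Polynomial hash code: a^(n-1)*x_0 + ... + a*x_(n-2) + x_(n-1), kept to `bits` bits."""
    h = 0
    for char in s:
        # Horner's rule: multiply the running value by `a` and add the next character code.
        h = (h * a + ord(char)) & ((1 << bits) - 1)
    return h

# Unlike the additive code, anagrams now map to (generally) different values.
for word in ["stop", "pots", "tops", "spot"]:
    print(word, polynomial_hash(word))
```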
``` !date import numpy as np, pandas as pd, matplotlib.pyplot as plt, seaborn as sns %matplotlib inline sns.set_context('paper') sns.set_style('darkgrid') ``` # Mixture Model in PyMC3 Original NB by Abe Flaxman, modified by Thomas Wiecki ``` import pymc3 as pm, theano.tensor as tt # simulate data from a known mixture distribution np.random.seed(12345) # set random seed for reproducibility k = 3 ndata = 500 spread = 5 centers = np.array([-spread, 0, spread]) # simulate data from mixture distribution v = np.random.randint(0, k, ndata) data = centers[v] + np.random.randn(ndata) plt.hist(data); # setup model model = pm.Model() with model: # cluster sizes a = pm.constant(np.array([1., 1., 1.])) p = pm.Dirichlet('p', a=a, shape=k) # ensure all clusters have some points p_min_potential = pm.Potential('p_min_potential', tt.switch(tt.min(p) < .1, -np.inf, 0)) # cluster centers means = pm.Normal('means', mu=[0, 0, 0], sd=15, shape=k) # break symmetry order_means_potential = pm.Potential('order_means_potential', tt.switch(means[1]-means[0] < 0, -np.inf, 0) + tt.switch(means[2]-means[1] < 0, -np.inf, 0)) # measurement error sd = pm.Uniform('sd', lower=0, upper=20) # latent cluster of each observation category = pm.Categorical('category', p=p, shape=ndata) # likelihood for each observed value points = pm.Normal('obs', mu=means[category], sd=sd, observed=data) # fit model with model: step1 = pm.Metropolis(vars=[p, sd, means]) step2 = pm.ElemwiseCategoricalStep(var=category, values=[0, 1, 2]) tr = pm.sample(10000, step=[step1, step2]) ``` ## Full trace ``` pm.plots.traceplot(tr, ['p', 'sd', 'means']); ``` ## After convergence ``` # take a look at traceplot for some model parameters # (with some burn-in and thinning) pm.plots.traceplot(tr[5000::5], ['p', 'sd', 'means']); # I prefer autocorrelation plots for serious confirmation of MCMC convergence pm.autocorrplot(tr[5000::5], ['sd']) ``` ## Sampling of cluster for individual data point ``` i=0 plt.plot(tr['category'][5000::5, i], drawstyle='steps-mid') plt.axis(ymin=-.1, ymax=2.1) def cluster_posterior(i=0): print('true cluster:', v[i]) print(' data value:', np.round(data[i],2)) plt.hist(tr['category'][5000::5,i], bins=[-.5,.5,1.5,2.5,], rwidth=.9) plt.axis(xmin=-.5, xmax=2.5) plt.xticks([0,1,2]) cluster_posterior(i) ```
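To summarize the posterior numerically rather than only visually, one possible follow-up using the same burn-in and thinning convention as above:

```python
# Posterior means of the mixture parameters, with the same burn-in (5000) and thinning (5) as above.
trace_thin = tr[5000::5]
print('cluster weights:', trace_thin['p'].mean(axis=0))
print('cluster centers:', trace_thin['means'].mean(axis=0))
print('measurement sd :', trace_thin['sd'].mean())
```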
### Neural style transfer in PyTorch This tutorial implements the "slow" neural style transfer based on the VGG19 model. It closely follows the official neural style tutorial you can find [here](http://pytorch.org/tutorials/advanced/neural_style_tutorial.html). __Note:__ if you didn't sit through the explanation of neural style transfer in the on-campus lecture, you're _strongly recommended_ to follow the link above instead of this notebook. ``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline from matplotlib.pyplot import imread from skimage.transform import resize, rotate import torch, torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable # desired size of the output image imsize = 512 # REDUCE THIS TO 128 IF THE OPTIMIZATION IS TOO SLOW FOR YOU def image_loader(image_name): image = resize(imread(image_name), [imsize, imsize]) image = image.transpose([2,0,1]) / image.max() image = Variable(dtype(image)) # fake batch dimension required to fit network's input dimensions image = image.unsqueeze(0) return image use_cuda = torch.cuda.is_available() print("torch", torch.__version__) if use_cuda: print("Using GPU.") else: print("Not using GPU.") dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor ``` ### Draw input images ``` !mkdir -p images !wget https://github.com/yandexdataschool/Practical_DL/raw/fall21/week10_interpretability/bonus_style_transfer/images/wave.jpg -O images/wave.jpg style_img = image_loader("images/wave.jpg").type(dtype) !wget http://cdn.cnn.com/cnnnext/dam/assets/170809210024-trump-nk.jpg -O images/my_img.jpg content_img = image_loader("images/my_img.jpg").type(dtype) assert style_img.size() == content_img.size(), \ "we need to import style and content images of the same size" def imshow(tensor, title=None): image = tensor.clone().cpu() # we clone the tensor to not do changes on it image = image.view(3, imsize, imsize) # remove the fake batch dimension image = image.numpy().transpose([1,2,0]) plt.imshow(image / np.max(image)) if title is not None: plt.title(title) plt.figure(figsize=[12,6]) plt.subplot(1,2,1) imshow(style_img.data, title='Style Image') plt.subplot(1,2,2) imshow(content_img.data, title='Content Image') ``` ### Define Style Transfer Losses As shown in the lecture, we define two loss functions: content and style losses. Content loss is simply a pointwise mean squared error of high-level features while style loss is the error between gram matrices of intermediate feature layers. To obtain the feature representations we use a pre-trained VGG19 network. ``` import torchvision.models as models cnn = models.vgg19(pretrained=True).features # move it to the GPU if possible: if use_cuda: cnn = cnn.cuda() class ContentLoss(nn.Module): def __init__(self, target, weight): super(ContentLoss, self).__init__() # we 'detach' the target content from the tree used self.target = target.detach() * weight self.weight = weight def forward(self, input): self.loss = F.mse_loss(input * self.weight, self.target) return input.clone() def backward(self, retain_graph=True): self.loss.backward(retain_graph=retain_graph) return self.loss def gram_matrix(input): a, b, c, d = input.size() # a=batch size(=1) # b=number of feature maps # (c,d)=dimensions of a f. map (N=c*d) features = input.view(a * b, c * d) # resise F_XL into \hat F_XL G = torch.mm(features, features.t()) # compute the gram product # we 'normalize' the values of the gram matrix # by dividing by the number of element in each feature maps. 
return G.div(a * b * c * d) class StyleLoss(nn.Module): def __init__(self, target, weight): super(StyleLoss, self).__init__() self.target = target.detach() * weight self.weight = weight def forward(self, input): self.G = gram_matrix(input) self.G.mul_(self.weight) self.loss = F.mse_loss(self.G, self.target) return input.clone() def backward(self, retain_graph=True): self.loss.backward(retain_graph=retain_graph) return self.loss ``` ### Style transfer pipeline We can now define a unified "model" that computes all the losses on the image triplet (content image, style image, optimized image) so that we could optimize them with backprop (over image pixels). ``` content_weight=1 # coefficient for content loss style_weight=1000 # coefficient for style loss content_layers=('conv_4',) # use these layers for content loss style_layers=('conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5') # use these layers for style loss content_losses = [] style_losses = [] model = nn.Sequential() # the new Sequential module network # move these modules to the GPU if possible: if use_cuda: model = model.cuda() i = 1 for layer in list(cnn): if isinstance(layer, nn.Conv2d): name = "conv_" + str(i) model.add_module(name, layer) if name in content_layers: # add content loss: target = model(content_img).clone() content_loss = ContentLoss(target, content_weight) model.add_module("content_loss_" + str(i), content_loss) content_losses.append(content_loss) if name in style_layers: # add style loss: target_feature = model(style_img).clone() target_feature_gram = gram_matrix(target_feature) style_loss = StyleLoss(target_feature_gram, style_weight) model.add_module("style_loss_" + str(i), style_loss) style_losses.append(style_loss) if isinstance(layer, nn.ReLU): name = "relu_" + str(i) model.add_module(name, layer) if name in content_layers: # add content loss: target = model(content_img).clone() content_loss = ContentLoss(target, content_weight) model.add_module("content_loss_" + str(i), content_loss) content_losses.append(content_loss) if name in style_layers: # add style loss: target_feature = model(style_img).clone() target_feature_gram = gram_matrix(target_feature) style_loss = StyleLoss(target_feature_gram, style_weight) model.add_module("style_loss_" + str(i), style_loss) style_losses.append(style_loss) i += 1 if isinstance(layer, nn.MaxPool2d): name = "pool_" + str(i) model.add_module(name, layer) # *** ``` ### Optimization We can now optimize both style and content loss over input image. ``` input_image = Variable(content_img.clone().data, requires_grad=True) optimizer = torch.optim.Adam([input_image], lr=0.1) num_steps = 300 for i in range(num_steps): # correct the values of updated input image input_image.data.clamp_(0, 1) model(input_image) style_score = 0 content_score = 0 for sl in style_losses: style_score += sl.backward() for cl in content_losses: content_score += cl.backward() if i % 10 == 0: # <--- adjust the value to see updates more frequently print('Step # {} Style Loss : {:4f} Content Loss: {:4f}'.format( i, style_score.data.item(), content_score.item())) plt.figure(figsize=[10,10]) imshow(input_image.data) plt.show() loss = style_score + content_score optimizer.step(lambda: loss) optimizer.zero_grad() # a last correction... input_image.data.clamp_(0, 1) ``` ### Final image ``` plt.figure(figsize=[10,10]) imshow(input_image.data) plt.show() ```
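If you want to keep the stylized result rather than only display it, the optimized tensor can be converted back into a regular image array and written to disk. The snippet below is only a sketch, assuming the `input_image`, `imsize`, `np` and `plt` names defined in the cells above; the output path is an arbitrary choice.

```python
# Hedged sketch: persist the stylized output produced by the optimization loop above.
out = input_image.data.clone().cpu()      # copy the optimized tensor off the GPU
out = out.view(3, imsize, imsize)         # drop the fake batch dimension
out = out.numpy().transpose([1, 2, 0])    # CHW -> HWC for image libraries
out = np.clip(out, 0.0, 1.0)              # keep pixel values in a valid range
plt.imsave("images/stylized.jpg", out)    # arbitrary output filename
```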
## Implementing a 1D convnet

In Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It **takes as input 3D tensors with shape (samples, time, features) and also returns similarly-shaped 3D tensors**. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor.

Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task already seen previously.

```
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence

max_features = 10000  # number of words to consider as features
max_len = 500  # cut texts after this number of words (among top max_features most common words)

print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')

print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
```

**1D convnets are structured in the same way as their 2D counterparts**: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer, turning the 3D outputs into 2D outputs and allowing us to add one or more `Dense` layers to the model, for classification or regression.

One difference, though, is the fact that **we can afford to use larger convolution windows with 1D convnets**. Indeed, with a 2D convolution layer, a 3x3 convolution window contains `3*3 = 9` feature vectors, but with a 1D convolution layer, a convolution window of size 3 would only contain 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9.

This is our example 1D convnet for the IMDB dataset:

```
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras.optimizers import RMSprop

model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))

model.summary()

model.compile(
    optimizer=RMSprop(lr=1e-4),
    loss='binary_crossentropy',
    metrics=['acc']
)

history = model.fit(
    x_train, y_train,
    epochs=10,
    batch_size=128,
    validation_split=0.2
)
```

Here are our training and validation results: validation accuracy is slightly lower than that of the LSTM example we used two sections ago, but runtime is faster, both on CPU and GPU (albeit the exact speedup will vary greatly depending on your exact configuration). At that point, we could re-train this model for the right number of epochs (8) and run it on the test set (see the sketch below). This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
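The notebook itself does not include that final step, so the following is only a sketch of what it could look like, reusing the layer stack and data defined above; the epoch count of 8 comes from the text, and `final_model` is just a hypothetical name.

```python
# Hedged sketch: rebuild the same convnet, train for the suggested 8 epochs,
# then score it on the held-out IMDB test set.
final_model = Sequential()
final_model.add(layers.Embedding(max_features, 128, input_length=max_len))
final_model.add(layers.Conv1D(32, 7, activation='relu'))
final_model.add(layers.MaxPooling1D(5))
final_model.add(layers.Conv1D(32, 7, activation='relu'))
final_model.add(layers.GlobalMaxPooling1D())
final_model.add(layers.Dense(1))
final_model.compile(optimizer=RMSprop(lr=1e-4), loss='binary_crossentropy', metrics=['acc'])

final_model.fit(x_train, y_train, epochs=8, batch_size=128)
test_loss, test_acc = final_model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)
```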
``` import matplotlib.pyplot as plt acc = history.history['acc'] val_acc = history.history['val_acc'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.legend() plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` ## Combining CNNs and RNNs to process long sequences Because 1D convnets process input patches independently, **they are not sensitive to the order of the timesteps** (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, in order to be able to recognize longer-term patterns, one could stack many convolution layers and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that's still a fairly weak way to induce order-sensitivity. One way to evidence this weakness is to try 1D convnets on the temperature forecasting problem from the previous notebook, where **order-sensitivity was key to produce good predictions**: ``` import numpy as np import os # Import data data_dir = './datasets/jena' fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv') f = open(fname) data = f.read() f.close() lines = data.split('\n') header = lines[0].split(',') lines = lines[1:] print(header) print() print(len(lines)) # Preprocessing float_data = np.zeros((len(lines), len(header) - 1)) for i, line in enumerate(lines): values = [float(x) for x in line.split(',')[1:]] float_data[i, :] = values mean = float_data[:200000].mean(axis=0) float_data -= mean std = float_data[:200000].std(axis=0) float_data /= std # Create datasets def generator(data, lookback, delay, min_index, max_index, shuffle=False, batch_size=128, step=6): if max_index is None: max_index = len(data) - delay - 1 i = min_index + lookback while 1: if shuffle: rows = np.random.randint(min_index + lookback, max_index, size=batch_size) else: if i + batch_size >= max_index: i = min_index + lookback rows = np.arange(i, min(i + batch_size, max_index)) i += len(rows) samples = np.zeros((len(rows), lookback // step, data.shape[-1])) targets = np.zeros((len(rows),)) for j, row in enumerate(rows): indices = range(rows[j] - lookback, rows[j], step) samples[j] = data[indices] targets[j] = data[rows[j] + delay][1] yield samples, targets lookback = 1440 step = 6 delay = 144 batch_size = 128 train_gen = generator( float_data, lookback=lookback, delay=delay, min_index=0, max_index=200000, shuffle=True, step=step, batch_size=batch_size ) val_gen = generator( float_data, lookback=lookback, delay=delay, min_index=200001, max_index=300000, step=step, batch_size=batch_size ) test_gen = generator( float_data, lookback=lookback, delay=delay, min_index=300001, max_index=None, step=step, batch_size=batch_size ) # This is how many steps to draw from `val_gen` in order to see the whole validation set: val_steps = (300000 - 200001 - lookback) // batch_size # This is how many steps to draw from `test_gen` in order to see the whole test set: test_steps = (len(float_data) - 300001 - lookback) // batch_size from tensorflow.keras.models import Sequential from tensorflow.keras import layers from tensorflow.keras.optimizers import RMSprop model = Sequential() model.add(layers.Conv1D(32, 5, activation='relu', input_shape=(None, float_data.shape[-1]))) 
model.add(layers.MaxPooling1D(3)) model.add(layers.Conv1D(32, 5, activation='relu')) model.add(layers.MaxPooling1D(3)) model.add(layers.Conv1D(32, 5, activation='relu')) model.add(layers.GlobalMaxPooling1D()) model.add(layers.Dense(1)) model.compile(optimizer=RMSprop(), loss='mae') history = model.fit( train_gen, steps_per_epoch=500, epochs=20, validation_data=val_gen, validation_steps=val_steps ) import matplotlib.pyplot as plt loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(loss)) plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` The validation MAE stays in the low 0.40s: **we cannot even beat our common-sense baseline using the small convnet**. Again, this is because **our convnet looks for patterns anywhere in the input timeseries, and has no knowledge of the temporal position of a pattern it sees** (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails at producing meaningful results here. **This limitation of convnets was not an issue on IMDB**, because **patterns of keywords that are associated with a positive or a negative sentiment will be informative independently of where they are found in the input sentences**. One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before a RNN. **This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs**, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the step parameter of the generator). Here, we will chose (somewhat arbitrarily) to use a `step` twice smaller, resulting in twice longer timeseries, where the weather data is being sampled at a rate of one point per 30 minutes. ``` # This was previously set to 6 (one point per hour). Now 3 (one point per 30 min). 
step = 3 lookback = 720 # Unchanged delay = 144 # Unchanged train_gen = generator( float_data, lookback=lookback, delay=delay, min_index=0, max_index=200000, shuffle=True, step=step ) val_gen = generator( float_data, lookback=lookback, delay=delay, min_index=200001, max_index=300000, step=step ) test_gen = generator( float_data, lookback=lookback, delay=delay, min_index=300001, max_index=None, step=step ) val_steps = (300000 - 200001 - lookback) // 128 test_steps = (len(float_data) - 300001 - lookback) // 128 ``` This is our new model, **starting with two `Conv1D` layers and following-up with a `GRU` layer**: ``` model = Sequential() model.add(layers.Conv1D(32, 5, activation='relu',input_shape=(None, float_data.shape[-1]))) model.add(layers.MaxPooling1D(3)) model.add(layers.Conv1D(32, 5, activation='relu')) model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5)) model.add(layers.Dense(1)) model.summary() model.compile(optimizer=RMSprop(), loss='mae') history = model.fit( train_gen, steps_per_epoch=500, epochs=20, validation_data=val_gen, validation_steps=val_steps ) loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(loss)) plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` Judging from the validation loss, **this setup is not quite as good as the regularized GRU alone, but it's significantly faster**. It is looking at twice more data, which in this case doesn't appear to be hugely helpful, but may be important for other datasets.
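The notebook stops at the validation curves; if you also want a number on the held-out split, you can score the combined model with the test generator defined earlier. This is only a sketch, assuming the `model`, `test_gen`, `test_steps` and `std` objects from the cells above.

```python
# Hedged sketch: evaluate the CNN + GRU model on the test generator defined earlier.
# `test_steps` makes the (infinite) generator cover the test split exactly once.
test_mae = model.evaluate(test_gen, steps=test_steps)
print('Test MAE (normalized units):', test_mae)

# The data was normalized column-wise; column 1 is the temperature, so multiplying
# by its standard deviation converts the MAE back to degrees Celsius.
print('Test MAE (degrees Celsius):', test_mae * std[1])
```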
# **[Adversarial Disturbances for Controller Verification](http://proceedings.mlr.press/v144/ghai21a/ghai21a.pdf)** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google/nsc-tutorial/blob/main/controller-verification.ipynb) ## Housekeeping Imports [jax](https://github.com/google/jax), numpy, scipy, plotting utils... ``` #@title import jax import itertools import numpy as onp import jax.numpy as np import matplotlib.pyplot as plt import ipywidgets as widgets from jax.numpy.linalg import inv, pinv from scipy.linalg import solve_discrete_are as dare from jax import jit, grad, hessian from IPython import display from toolz.dicttoolz import valmap, itemmap from itertools import chain def liveplot(costss, xss, wss, cmax=30, cumcmax=15, wmax=2, xmax=20, logcmax=100, logcumcmax=1000): cummean = lambda x: np.cumsum(np.array(x))/np.arange(1, len(x)+1) cumcostss = valmap(cummean, costss) disturbances = valmap(lambda x: list(map(lambda w: w[0], x)), wss) plt.style.use('seaborn') colors = { "Zero Control": "gray", "LQR / H2": "green", "Finite-horizon LQR / H2": "teal", "Optimal LQG for GRW": "aqua", "Robust / Hinf Control": "orange", "GPC": "red" } fig, ax = plt.subplots(3, 2, figsize=(21, 12)) costssline = {} for Cstr, costs in costss.items(): costssline[Cstr], = ax[0, 0].plot([], label=Cstr, color=colors[Cstr]) ax[0, 0].set_xlabel("Time") ax[0, 0].set_ylabel("Instantaneous Cost") ax[0, 0].set_ylim([-1, cmax]) ax[0, 0].set_xlim([0, 100]) ax[0, 0].legend() cumcostssline = {} for Cstr, costs in cumcostss.items(): cumcostssline[Cstr], = ax[0, 1].plot([], label=Cstr, color=colors[Cstr]) ax[0, 1].set_xlabel("Time") ax[0, 1].set_ylabel("Average Cost") ax[0, 1].set_ylim([-1, cumcmax]) ax[0, 1].set_xlim([0, 100]) ax[0, 1].legend() perturblines = {} for Cstr, W in disturbances.items(): perturblines[Cstr], = ax[1, 0].plot([], label=Cstr, color=colors[Cstr]) ax[1, 0].set_xlabel("Time") ax[1, 0].set_ylabel("Generated Disturbances") ax[1, 0].set_ylim([-wmax, wmax]) ax[1, 0].set_xlim([0, 100]) ax[1, 0].legend() pointssline, trailssline = {}, {} for Cstr, C in xss.items(): pointssline[Cstr], = ax[1,1].plot([], [], label=Cstr, color=colors[Cstr], ms=20, marker='s') trailssline[Cstr], = ax[1,1].plot([], [], label=Cstr, color=colors[Cstr], lw=2) ax[1, 1].set_xlabel("Position") ax[1, 1].set_ylabel("") ax[1, 1].set_ylim([-1, 6]) ax[1, 1].set_xlim([-xmax, xmax]) ax[1, 1].legend() logcostssline = {} for Cstr, costs in costss.items(): logcostssline[Cstr], = ax[2, 0].plot([1], label=Cstr, color=colors[Cstr]) ax[2, 0].set_xlabel("Time") ax[2, 0].set_ylabel("Instantaneous Cost (Log Scale)") ax[2, 0].set_xlim([0, 100]) ax[2, 0].set_ylim([0.1, logcmax]) ax[2, 0].set_yscale('log') ax[2, 0].legend() logcumcostssline = {} for Cstr, costs in cumcostss.items(): logcumcostssline[Cstr], = ax[2, 1].plot([1], label=Cstr, color=colors[Cstr]) ax[2, 1].set_xlabel("Time") ax[2, 1].set_ylabel("Average Cost (Log Scale)") ax[2, 1].set_xlim([0, 100]) ax[2, 1].set_ylim([0.1, logcumcmax]) ax[2, 1].set_yscale('log') ax[2, 1].legend() def livedraw(t): for Cstr, costsline in costssline.items(): costsline.set_data(np.arange(t), costss[Cstr][:t]) for Cstr, cumcostsline in cumcostssline.items(): cumcostsline.set_data(np.arange(t), cumcostss[Cstr][:t]) for i, (Cstr, pointsline) in enumerate(pointssline.items()): pointsline.set_data(xss[Cstr][t][0], i) for Cstr, perturbline in perturblines.items(): perturbline.set_data(np.arange(t), disturbances[Cstr][:t]) for i, (Cstr, trailsline) in 
enumerate(trailssline.items()): trailsline.set_data(list(map(lambda x: x[0], xss[Cstr][max(t-10, 0):t])), i) for Cstr, logcostsline in logcostssline.items(): logcostsline.set_data(np.arange(t), costss[Cstr][:t]) for Cstr, logcumcostsline in logcumcostssline.items(): logcumcostsline.set_data(np.arange(t), cumcostss[Cstr][:t]) return chain(costssline.values(), cumcostssline.values(), perturblines.values(), pointssline.values(), trailssline.values(), logcostssline.values(), logcumcostssline.values()) print("🧛 reanimating :) meanwhile...") livedraw(99) plt.show() from matplotlib import animation anim = animation.FuncAnimation(fig, livedraw, frames=100, interval=50, blit=True) from IPython.display import HTML display.clear_output(wait=True) return HTML(anim.to_html5_video()) ``` ## A simple dynamical system Defines a discrete-time [double-integrator](https://en.wikipedia.org/wiki/Double_integrator) -- a simple linear dynamical system that mirrors 1d kinematics -- along with a quadratic cost. Below $\mathbf{x}_t$ is the state, $\mathbf{u}_t$ is the control input (or action), $\mathbf{w}_t$ is the disturbance. $$ \mathbf{x}_{t+1} = A\mathbf{x}_t + B\mathbf{u}_t + \mathbf{w}_t, \qquad c(\mathbf{x},\mathbf{u}) = \mathbf{x}^\top Q \mathbf{x} + \mathbf{u}^\top R \mathbf{u}$$ $$ A = \begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix},\quad B = \begin{bmatrix} 0\\ 1 \end{bmatrix}, \quad Q = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}, \quad R = \begin{bmatrix} 1 \end{bmatrix}$$ In the task of controller verification, the **verifier** selects $\mathbf{w}_t$ adaptively as a function of past state-action pairs $(\mathbf{x}_s,\mathbf{u}_s:s\leq t)$. ``` dx, du, T = 2, 1, 100 A, B = np.array([[1.0, 1.0], [0.0, 1.0]]), np.array([[0.0], [1.0]]) Q, R = np.eye(dx), np.eye(du) dyn = lambda x, u, w, t: A @ x + B @ u + w cost = lambda x, u, t: x.T @ A @ x + u.T @ R @ u # A basic control loop. # (x, z) is the environ-controller state. # w is disturbance and z_w disturbance generator state def eval(control, disturbance): x, z, z_w = np.zeros(dx), None, None for t in range(T): u, z = control(x, z, t) w, z_w = disturbance(x, u, z_w, t) c = cost(x, u, t) yield (x, u, w, c) x = dyn(x, u, w, t) ``` ## Control Algorithms The segment below puts forth a few basic control strategies, whose performance characteristics we would like to verify. + **Zero Control**: Executes $\mathbf{u}=\mathbf{0}$. + **LQR / H2**: A discrete-time [linear-quadratic regulator](https://en.wikipedia.org/wiki/Linear%E2%80%93quadratic_regulator). + **Finite-horizon LQR / H2**: A finite-horizon variant of the above. + **Robust / $H_\infty$ Control**: A worst-case [robust](https://en.wikipedia.org/wiki/H-infinity_methods_in_control_theory) controller. + **GPC**: [Gradient-perturbation](https://arxiv.org/abs/1902.08721) controller. 
``` #@title def zero(): return lambda x, z, t: (np.zeros(du), z) def h2(A=A, B=B, Q=Q, R=R): P = dare(A, B, Q, R) K = - inv(R + B.T @ P @ B) @ (B.T @ P @ A) return lambda x, z, t: (K @ x, z) def h2nonstat(A=A, B=B, Q=Q, R=R, T=T): dx, du = B.shape P, K = [np.zeros((dx, dx)) for _ in range(T + 1)], [np.zeros((du, dx)) for _ in range(T)] P[T] = Q for t in range(T - 1, -1, -1): P[t] = Q + A.T @ P[t + 1] @ A - (A.T @ P[t + 1] @ B) @ inv(R + B.T @ P[t + 1] @ B) @ (B.T @ P[t + 1] @ A) K[t] = - inv(R + B.T @ P[t + 1] @ B) @ (B.T @ P[t + 1] @ A) return lambda x, z, t: (K[t] @ x, z) def hinf(A=A, B=B, Q=Q, R=R, T=T, gamma=1.0): dx, du = B.shape P, K = [np.zeros((dx, dx)) for _ in range(T + 1)], [np.zeros((du, dx)) for _ in range(T)], P[T] = Q for t in range(T - 1, -1, -1): Lambda = np.eye(dx) + (B @ inv(R) @ B.T - gamma ** -2 * np.eye(dx)) @ P[t + 1] P[t] = Q + A.T @ P[t + 1] @ pinv(Lambda) @ A K[t] = - np.linalg.inv(R) @ B.T @ P[t + 1] @ pinv(Lambda) @ A return lambda x, z, t: (K[t] @ x, z) def gpc(A=A, B=B, Q=Q, R=R, T=T, H=3, M=3, lr=0.01, dyn=dyn, cost=cost): dx, du = B.shape P = dare(A, B, Q, R) K = - np.array(inv(R + B.T @ P @ B) @ (B.T @ P @ A)) def proxy(E, off, W): y = np.zeros(dx) for h in range(H): v = K @ y + np.tensordot(E, W[h: h + M], axes=([0, 2], [0, 1])) y = dyn(y, v, W[h + M], h + M) v = K @ y + np.tensordot(E, W[h: h + M], axes=([0, 2], [0, 1])) c = cost(y, v, None) return c proxygrad = jit(grad(proxy, argnums=(0, 1))) def gpc_u(x, z, t): if z is None or t == 0: z = np.zeros(dx), np.zeros(du), np.zeros((H + M, dx)), np.zeros((M, du, dx)), np.zeros(du) xprev, uprev, W, E, off = z W = jax.ops.index_update(W, 0, x - A @ xprev - B @ uprev) W = np.roll(W, -1, axis=0) if t >= H + M: Edelta, offdelta = proxygrad(E, off, W) E -= lr * Edelta off -= lr * offdelta u = K @ x + np.tensordot(E, W[-M:], axes=([0, 2], [0, 1])) + off return u, (x, u, W, E, off) return gpc_u def controllers(gamma, H, M, lr): return { "Zero Control": zero(), "LQR / H2": h2(), "Finite-horizon LQR / H2": h2nonstat(), "Robust / Hinf Control": hinf(gamma=gamma), "GPC": gpc(H=H, M=M, lr=lr), } ``` ## [Memory Online Trust Region](https://arxiv.org/abs/2012.06695) (**MOTR**) disturbances This is an online learning approach to disturbance generation, akin to nonstochastic control but with the role of control and disturbance swapped. 
``` # Author: Udaya Ghai ([email protected]) def motr(A=A, B=B, Q=Q, R=R, r_off=0.5, r_E= 1.0, T=T, H=3, M=3, lr=0.001, dyn=dyn, cost=cost): dx, du = B.shape def proxy(E, off, U, X): x = X[0] for h in range(H): w = np.tensordot(E, U[h: h + M], axes=([0, 2], [0, 1])) + off x = dyn(x, U[h + H], w, h+M) return np.sum(x.T @ Q @ x) proxygrad = jit(grad(proxy, argnums=(0, 1))) proxyhess = jit(hessian(proxy)) def project(x, r): norm_x = np.linalg.norm(x) return x if norm_x < r else (r / norm_x) * x def motr_w(x, u, z_w, t): if z_w is None or t == 0: z_w = np.zeros((H+M, du, 1)),np.zeros((H, dx, 1)), np.zeros((M, dx, du)), np.ones((dx, 1)) U, X, E, off = z_w U = jax.ops.index_update(U, 0, u) U = np.roll(U, -1, axis=0) X = jax.ops.index_update(X, 0, np.reshape(x, (dx,1))) X = np.roll(X, -1, axis=0) if t >= H + M: Edelta, offdelta = proxygrad(E, off, U, X) E = project(E + lr*Edelta, r_E) off = project(off + lr * offdelta, r_off) w = np.tensordot(E, U[-M:], axes=([0, 2], [0, 1])) + off return np.squeeze(w), (U, X, E, off) return motr_w #@title MOTR Pertrubation #@markdown Environment Parameters motr_offset_radius = 1 #@param {type:"slider", min:0, max:2, step:0.01} motr_radius = 0.4 #@param {type:"slider", min:0, max:2, step:0.01} motr_lookback = 5 #@param {type:"slider", min:1, max:20, step:1} motr_memory = 5 #@param {type:"slider", min:1, max:20, step:1} motr_gen = motr(r_off=motr_offset_radius, r_E=motr_radius, M=motr_memory, H=motr_lookback) #@markdown Constant Pertrubation: Control parameters hinf_log_gamma = 2 #@param {type:"slider", min:-2, max:5, step:0.01} hinf_gamma = 10**(hinf_log_gamma) gpc_lookback = 5 #@param {type:"slider", min:1, max:20, step:1} gpc_memory = 5 #@param {type:"slider", min:1, max:20, step:1} gpc_log_lr = -3 #@param {type:"slider", min:-5, max:0, step:0.01} gpc_lr = 10**(gpc_log_lr) Cs = controllers(hinf_gamma, gpc_lookback, gpc_memory, gpc_lr) print("🧛 evaluating controllers") traces = {Cstr: list(zip(*eval(C, motr_gen))) for Cstr, C in Cs.items()} xss = valmap(lambda x: x[0], traces) uss = valmap(lambda x: x[1], traces) wss = valmap(lambda x: x[2], traces) costss = valmap(lambda x: x[3], traces) liveplot(costss, xss, wss, 250, 200, 4, 20, 10**5, 10**5) ```
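To put the adversarial numbers in context, it can be useful to rerun the same controllers against a non-adaptive baseline disturbance. The snippet below is not part of the original notebook, only a sketch: it feeds i.i.d. Gaussian noise (the 0.5 scale is an arbitrary assumption) through the same `eval` loop and `liveplot` helper defined above.

```python
# Hedged sketch: compare the controllers above against i.i.d. Gaussian disturbances
# instead of the adaptive MOTR verifier. The noise scale of 0.5 is an arbitrary choice.
def gaussian(scale=0.5):
    def gaussian_w(x, u, z_w, t):
        # stateless disturbance: independent Gaussian noise at every step
        return scale * onp.random.randn(dx), z_w
    return gaussian_w

traces_iid = {Cstr: list(zip(*eval(C, gaussian()))) for Cstr, C in Cs.items()}
xss_iid = valmap(lambda x: x[0], traces_iid)
wss_iid = valmap(lambda x: x[2], traces_iid)
costss_iid = valmap(lambda x: x[3], traces_iid)
liveplot(costss_iid, xss_iid, wss_iid)
```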
## Appendix (Application of the mutual fund theorem) ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import FinanceDataReader as fdr import pandas as pd ticker_list = ['069500'] df_list = [fdr.DataReader(ticker, '2015-01-01', '2016-12-31')['Change'] for ticker in ticker_list] df = pd.concat(df_list, axis=1) #df.columns = ['005930', '000660', '005935', '035420', '005380', '207940', '012330', '068270', '051910', '055550', '069500'] df.columns = ['KODEX200'] r = df.dropna() rf = 0.0125 #df = df.resample('Y').agg(lambda x:x.mean()*252) # Calculate basic summary statistics for individual stocks stock_volatility = r.std() * np.sqrt(252) stock_return = r.mean() * 252 alpha = stock_return.values sigma = stock_volatility.values # cov_inv = np.linalg.inv(cov) # temp = np.dot(cov_inv, (stock_return- rf)) # theta_opt = temp / temp.sum() # optimal weight in Risky Mutual fund # alpha = np.dot(theta_opt, stock_return) # 0.5941 # sigma = np.sqrt(cov.dot(theta_opt).dot(theta_opt)) ``` ## (5B), (7B) ``` # g_B = 0 # in case of age over retirement (Second scenario in Problem(B)) X0 = 150. # Saving account at the beginning l = 3 t = 45 # age in case of age over retirement (Second scenario in Problem(B)) gamma = -3. # risk averse measure phi = rf + (alpha -rf)**2 / (2 * sigma**2 * (1-gamma)) # temporal function for f_B rho = 0.04 # impatience factor for utility function beta = 4.59364 # parameter for mu delta = 0.05032 # parameter for mu rf=0.02 def f_B(t): if t < 65: ds = 0.01 T = 65 T_tilde = 110 value = 0 for s in np.arange(T, T_tilde, ds): w_s = np.exp(-rho*s/(1-gamma)) tmp = (10**(beta + delta*s - 10)- 10**(beta + delta*t - 10))/(delta * np.log(10)) value += np.exp(-1/(1-gamma)*(tmp - gamma*tmp - gamma*phi *(s-t))) * w_s * ds f = np.exp(-1/(1-gamma) *(tmp - gamma*tmp + gamma*phi*(T-t))) * value return f else: # 65~ ds = 0.01 T_tilde = 110 value = 0 for s in np.arange(t, T_tilde, ds): w_s = np.exp(-rho*s/(1-gamma)) tmp = (10**(beta + delta*s - 10)- 10**(beta + delta*t - 10))/(delta * np.log(10)) value += np.exp(-1/(1-gamma)*(tmp - gamma*tmp - gamma*phi *(s-t))) * w_s * ds return value # def f_B(t): # ds = 0.01 # T_tilde = 110 # value = 0 # for s in np.arange(t, T_tilde, ds): # w_s = np.exp(-rho*s/(1-gamma)) # tmp = (10**(beta + delta*s - 10)- 10**(beta + delta*t - 10))/(delta * np.log(10)) # value += np.exp(- tmp + gamma/(1-gamma) * phi *(s-t)) * w_s * ds # return value # def V_B(t, x): # f_b = f_B(t) # value_fcn = 1/gamma * f_b **(1-gamma) * x **gamma # return value_fcn def C_star(t,X): w_t = np.exp(-rho*t/(1-gamma)) f_b = f_B(t) c_t = w_t/f_b * X return c_t def g_B(t, l): ds=0.01 value = 0. T=65 # retirement if t < T: for s in np.arange(t, T, ds): tmp = (10**(beta + delta*s - 10)- 10**(beta + delta*t - 10))/(delta * np.log(10)) value += np.exp(-tmp)*l * ds return value else: return 0. 
pi_opt = (alpha-rf)/(sigma**2 *(1-gamma)) * (X0 + g_B(t, l))/X0 # Optimal weight for Risky Asset (7B) print(pi_opt) # 0.25 # print(C_star(t, X)) ``` ## Simulation ``` import time start = time.time() dt = 1 def mu(t): # Mortality rate in next year value = (10**(beta + delta*(t+dt) - 10)- 10**(beta + delta*t - 10))/(delta * np.log(10)) return value n_simulation = 10000 Asset = np.empty(37) Asset_stack = [] C_stack = [] for i in range(n_simulation): Asset[0] = 150 # initial wealth C_list = [] for t in range(45, 81): if t < 65: # before retirement l_t = 3 # payment to pension fund pi_opt = (alpha-rf)/(sigma**2 *(1-gamma)) * (Asset[t-45] + g_B(t, l_t))/Asset[t-45] C_t = 0 # Z = np.random.randn() Asset[t-45+1] = Asset[t-45]*np.exp(((1-pi_opt)*rf + pi_opt*alpha + mu(t)+ l_t/Asset[t-45] \ -pi_opt**2 * sigma**2/2)*dt + pi_opt * sigma * np.sqrt(dt) * Z) else : # after retirement l_t = 0 # payment duty is 0 after retirement pi_opt = (alpha-rf)/(sigma**2 *(1-gamma)) * (Asset[t-45] + g_B(t, l_t))/Asset[t-45] C_t = C_star(t=t, X = Asset[t-45]) Z = np.random.randn() Asset[t-45+1] = Asset[t-45]*np.exp(((1-pi_opt)*rf + pi_opt*alpha + mu(t)- C_t/Asset[t-45] \ -pi_opt**2 * sigma**2/2)*dt + pi_opt * sigma * np.sqrt(dt) * Z) C_list.append(C_t) Asset_stack.append(list(Asset)) C_stack.append(C_list) end = time.time() print(end - start) ``` ## Check the Simulation Result ``` Asset_mean = np.mean(Asset_stack, axis=0) #(37,) C_mean = np.mean(C_stack, axis=0) # (16,1) plt.rcParams['figure.figsize'] = [30, 15] plt.rcParams.update({'font.size': 30}) plt.title('Retirement planning') plt.xlabel('Age') plt.ylabel('Won(1000000)') plt.plot(range(45,81),Asset_mean[:-1], label='Wealth') plt.plot(range(65,81),C_mean, '--', color = 'r', label="Pension") plt.legend() plt.grid() pi_opt_list=[] for t in range(45, 81): if t < 65: l_t = 3 else : l_t = 0 pi_opt = (alpha-rf)/(sigma**2 *(1-gamma)) * (Asset_mean[:-1][t-45] + g_B(t, l_t))/Asset_mean[:-1][t-45] pi_opt_list.append(pi_opt) plt.title('Optimal weight of risky-asset changing following ages') plt.xlabel('Age') plt.ylabel('Weight') plt.bar(range(45,81),np.array(pi_opt_list).squeeze()) ```
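Since the plot above only shows the Monte Carlo mean, a quick way to gauge dispersion is to add percentile bands over the simulated wealth paths. This is only a sketch, assuming the `Asset_stack` list and `Asset_mean` array produced by the simulation above; the 5th/95th percentile levels are an arbitrary choice.

```python
# Hedged sketch: 5th/95th percentile bands of the simulated wealth paths.
Asset_arr = np.array(Asset_stack)                     # shape (n_simulation, 37)
p05, p95 = np.percentile(Asset_arr, [5, 95], axis=0)  # pointwise percentiles over paths

plt.title('Wealth paths: mean and 5th-95th percentile band')
plt.xlabel('Age')
plt.ylabel('Won(1000000)')
plt.plot(range(45, 81), Asset_mean[:-1], label='Mean wealth')
plt.fill_between(range(45, 81), p05[:-1], p95[:-1], alpha=0.3, label='5th-95th percentile')
plt.legend()
plt.grid()
```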
# Deploy a Trained MXNet Model In this notebook, we walk through the process of deploying a trained model to a SageMaker endpoint. If you recently ran [the notebook for training](get_started_mnist_deploy.ipynb) with %store% magic, the `model_data` can be restored. Otherwise, we retrieve the model artifact from a public S3 bucket. ``` # setups import os import json import boto3 import sagemaker from sagemaker.mxnet import MXNetModel from sagemaker import get_execution_role, Session sess = Session() role = get_execution_role() %store -r mx_mnist_model_data try: mx_mnist_model_data except NameError: import json # copy a pretrained model from a public public to your default bucket with open("code/config.json", "r") as f: CONFIG = json.load(f) bucket = CONFIG["public_bucket"] s3 = boto3.client("s3") key = "datasets/image/MNIST/model/mxnet-training-2020-11-21-01-38-01-009/model.tar.gz" target = os.path.join("/tmp", "model.tar.gz") s3.download_file(bucket, key, target) # upload to default bucket mx_mnist_model_data = sess.upload_data( path=os.path.join("/tmp", "model.tar.gz"), bucket=sess.default_bucket(), key_prefix="model/mxnet", ) print(mx_mnist_model_data) ``` ## MXNet Model Object The `MXNetModel` class allows you to define an environment for making inference using your model artifact. Like `MXNet` class we discussed [in this notebook for training an MXNet model](get_started_mnist_train.ipynb), it is high level API used to set up a docker image for your model hosting service. Once it is properly configured, it can be used to create a SageMaker Endpoint on an EC2 instance. The SageMaker endpoint is a containerized environment that uses your trained model to make inference on incoming data via RESTful API calls. Some common parameters used to initiate the `MXNetModel` class are: - entry_point: A user defined python file to be used by the inference container as handlers of incoming requests - source_dir: The directory of the `entry_point` - role: An IAM role to make AWS service requests - model_data: the S3 bucket URI of the compressed model artifact. It can be a path to a local file if the endpoint is to be deployed on the SageMaker instance you are using to run this notebook (local mode) - framework_version: version of the MXNet package to be used - py_version: python version to be used We elaborate on the `entry_point` below. ``` model = MXNetModel( entry_point="inference.py", source_dir="code", role=role, model_data=mx_mnist_model_data, framework_version="1.7.0", py_version="py3", ) ``` ### Entry Point for the Inference Image Your model artifacts pointed by `model_data` is pulled by the `MXNetModel` and it is decompressed and saved in in the docker image it defines. They become regular model checkpoint files that you would produce outside SageMaker. This means in order to use your trained model for serving, you need to tell `MXNetModel` class how to a recover a MXNet model from the static checkpoint. Also, the deployed endpoint interacts with RESTful API calls, you need to tell it how to parse an incoming request to your model. These two instructions needs to be defined as two functions in the python file pointed by `entry_point`. By convention, we name this entry point file `inference.py` and we put it in the `code` directory. To tell the inference image how to load the model checkpoint, you need to implement a function called `model_fn`. This function takes one positional argument - `model_dir`: the directory of the static model checkpoints in the inference image. 
The return of `model_fn` is an MXNet model. In this example, the `model_fn` looks like:

```python
def model_fn(model_dir):
    """Load the gluon model. Called once when hosting service starts.
    :param: model_dir The directory where model files are stored.
    :return: a model (in this case a Gluon network)
    """
    net = gluon.SymbolBlock.imports(
        symbol_file=os.path.join(model_dir, 'compiled-symbol.json'),
        input_names=['data'],
        param_file=os.path.join(model_dir, 'compiled-0000.params'))
    return net
```

Next, you need to tell the hosting service how to handle the incoming data. This includes:

* How to parse the incoming request
* How to use the trained model to make inference
* How to return the prediction to the caller of the service

You do it by implementing a function called `transform_fn`. This function takes 4 positional arguments:

- `net`: the return from `model_fn`
- `data`: the payload of the incoming request
- `content_type`: the content type of the incoming request
- `accept_type`: the content type of the response

In this example, the `transform_fn` looks like:

```python
def transform_fn(net, data, input_content_type, output_content_type):
    assert input_content_type=='application/json'
    assert output_content_type=='application/json'

    # parsed should be a 1d array of length 784
    parsed = json.loads(data)
    parsed = parsed['inputs']

    # convert to numpy array
    arr = np.array(parsed).reshape(-1, 1, 28, 28)

    # convert to mxnet ndarray
    nda = mx.nd.array(arr)

    output = net(nda)

    prediction = mx.nd.argmax(output, axis=1)
    response_body = json.dumps(prediction.asnumpy().tolist())

    return response_body, output_content_type
```

The `content_type` is used by the function to parse the `data`. In the example above, the function requires the content type of the payload to be a JSON string and it parses the JSON string into a Python dictionary with `json.loads`. Moreover, it assumes the parsed dictionary contains a key `inputs` that maps to the input data to be consumed by the model. It also assumes the input data is a flattened 1D array representation that can be reshaped into a numpy array of shape (-1, 1, 28, 28). The input images of an MXNet model follow the NCHW convention. It also assumes the input data is already normalized and can be readily consumed by the neural network.

After the inference, the function uses `accept_type` to encode the prediction into the content type of the response. In this example, the function requires the caller of the service to accept a JSON string.

The return of `transform_fn` is always a tuple of the encoded response body and the content type to be accepted by the caller.

## Execute the inference container

Once the `MXNetModel` class is initiated, we can call its `deploy` method to run the container for the hosting service. Some common parameters needed to call the `deploy` method are:

- initial_instance_count: the number of SageMaker instances to be used to run the hosting service.
- instance_type: the type of SageMaker instance to run the hosting service. Set it to `local` if you want to run the hosting service on the local SageMaker instance. Local mode is typically used for debugging.
- serializer: A python callable used to serialize (encode) the request data.
- deserializer: A python callable used to deserialize (decode) the response data.

Commonly used serializers and deserializers are implemented in the `sagemaker.serializers` and `sagemaker.deserializers` submodules of the SageMaker Python SDK.
Since in the `transform_fn` we declared that the incoming requests are JSON-encoded, we need to use a JSON serializer to encode the incoming data into a JSON string. Also, since we declared the return content type to be a JSON string, we need to use a JSON deserializer to parse the response into a Python object (in this case, an integer representing the predicted hand-written digit).

<span style="color:red"> Note: local mode is not supported in SageMaker Studio </span>

```
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

# set local_mode to False if you want to deploy on a remote
# SageMaker instance

local_mode = False

if local_mode:
    instance_type = "local"
else:
    instance_type = "ml.c4.xlarge"

predictor = model.deploy(
    initial_instance_count=1,
    instance_type=instance_type,
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
)
```

The `predictor` we get above can be used to make prediction requests against a SageMaker endpoint. For more information, check [the api reference for SageMaker Predictor](https://sagemaker.readthedocs.io/en/stable/api/inference/predictors.html#sagemaker.predictor.Predictor)

Now, let's test the endpoint with some dummy data.

```
import random

dummy_data = {"inputs": [random.random() for _ in range(784)]}
```

In `transform_fn`, we declared that the parsed data is a python dictionary with a key `inputs` and its value should be a 1D array of length 784. Hence, the definition of `dummy_data`.

```
res = predictor.predict(dummy_data)
print("Predicted digit:", *map(int, res))
```

If the input data does not look exactly like `dummy_data`, the endpoint will raise an exception. This is because of the stringent way we defined the `transform_fn`. Let's test the following example.

```
dummy_data = [random.random() for _ in range(784)]
```

When this `dummy_data` is parsed in `transform_fn`, it does not have an `inputs` field, so `transform_fn` will crash.

```
# uncomment the following line to make inference on incorrectly formatted input data
# res = predictor.predict(dummy_data)
```

Now, let's use the real MNIST test set to test the endpoint. We download the MNIST test set from a public S3 bucket and normalize the input data.

```
import random
import boto3
import matplotlib.pyplot as plt
import os
import numpy as np
import gzip
import json

%matplotlib inline

# Download MNIST test set from a public bucket
with open("code/config.json", "rb") as f:
    CONFIG = json.load(f)

fname = "t10k-images-idx3-ubyte.gz"
bucket = CONFIG["public_bucket"]
key = "datasets/image/MNIST/" + fname
target = os.path.join("/tmp", fname)

s3 = boto3.client("s3")
if not os.path.exists(target):
    s3.download_file(bucket, key, target)

# parse to numpy
with gzip.open(target, "rb") as f:
    images = np.frombuffer(f.read(), np.uint8, offset=16).reshape(-1, 28, 28)

# randomly sample 16 images to inspect
mask = random.sample(range(images.shape[0]), 16)
samples = images[mask]

# plot the images
fig, axs = plt.subplots(nrows=1, ncols=16, figsize=(16, 1))
for i, splt in enumerate(axs):
    splt.imshow(samples[i])
```

First, let us use the model to infer the samples one-by-one. This is the typical use case for an online application.
```
# convert to float and normalize the input
def normalize(x, axis):
    eps = np.finfo(float).eps
    mean = np.mean(x, axis=axis, keepdims=True)
    # avoid division by zero
    std = np.std(x, axis=axis, keepdims=True) + eps
    return (x - mean) / std

samples = normalize(samples.astype(np.float32), axis=(1, 2))  # mean 0; std 1

res = []
for img in samples:
    data = {"inputs": img.flatten().tolist()}
    res.append(predictor.predict(data)[0])

print("Predictions: ", *map(int, res))
```

Since in `transform_fn` the parsed numpy array can take on any value for its batch dimension, we can send the entire `samples` at once and let the model do a batch inference.

```
data = {"inputs": samples.tolist()}
res = predictor.predict(data)
print("Predictions: ", *map(int, res))
```

## Test and debug the entry point before deployment

When deploying a model to a SageMaker endpoint, it is a good practice to test the entry point. The following snippet shows you how you can test and debug the `model_fn` and `transform_fn` you implemented in the entry point for the inference image.

```
!pygmentize code/test_inference.py
```

The `test` function simulates how the inference container works. It pulls the model artifact and loads the model into memory by calling `model_fn` and passing it `model_dir`. When it receives a request, it calls `transform_fn` and passes it the loaded model, the payload of the request, the request content type and the response content type.

Implementing such a test function helps you debug the entry point before putting it into production. If `test` runs correctly, then you can be certain that if the incoming data and its content type are what they are supposed to be, then the endpoint is going to work as expected.

## (Optional) Clean up

If you do not plan to use the endpoint, you should delete it to free up some computation resources. If you used local mode, you will need to manually delete the docker container bound to port 8080 (the port that listens to the incoming requests).

```
import os

if not local_mode:
    predictor.delete_endpoint()
else:
    # detach the inference container from port 8080 (in local mode)
    os.system("docker container ls | grep 8080 | awk '{print $1}' | xargs docker container rm -f")
```
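As a side note, a deployed endpoint can also be invoked without the SageMaker Python SDK, for example from an application that only has `boto3` available. The sketch below is not part of the original notebook; it assumes a still-existing endpoint, a placeholder endpoint name, and the JSON request/response contract defined in `transform_fn` above.

```python
# Hedged sketch: call a deployed endpoint with the low-level runtime client instead of Predictor.
# "<your-endpoint-name>" is a placeholder; use predictor.endpoint_name while the endpoint exists.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")
payload = {"inputs": [0.0] * 784}  # same JSON contract that transform_fn expects

response = runtime.invoke_endpoint(
    EndpointName="<your-endpoint-name>",
    ContentType="application/json",
    Accept="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))
```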
``` %matplotlib inline ``` PyTorch 1.0 Distributed Trainer with Amazon AWS =============================================== **Author**: `Nathan Inkawhich <https://github.com/inkawhich>`_ **Edited by**: `Teng Li <https://github.com/teng-li>`_ In this tutorial we will show how to setup, code, and run a PyTorch 1.0 distributed trainer across two multi-gpu Amazon AWS nodes. We will start with describing the AWS setup, then the PyTorch environment configuration, and finally the code for the distributed trainer. Hopefully you will find that there is actually very little code change required to extend your current training code to a distributed application, and most of the work is in the one-time environment setup. Amazon AWS Setup ---------------- In this tutorial we will run distributed training across two multi-gpu nodes. In this section we will first cover how to create the nodes, then how to setup the security group so the nodes can communicate with eachother. Creating the Nodes ~~~~~~~~~~~~~~~~~~ In Amazon AWS, there are seven steps to creating an instance. To get started, login and select **Launch Instance**. **Step 1: Choose an Amazon Machine Image (AMI)** - Here we will select the ``Deep Learning AMI (Ubuntu) Version 14.0``. As described, this instance comes with many of the most popular deep learning frameworks installed and is preconfigured with CUDA, cuDNN, and NCCL. It is a very good starting point for this tutorial. **Step 2: Choose an Instance Type** - Now, select the GPU compute unit called ``p2.8xlarge``. Notice, each of these instances has a different cost but this instance provides 8 NVIDIA Tesla K80 GPUs per node, and provides a good architecture for multi-gpu distributed training. **Step 3: Configure Instance Details** - The only setting to change here is increasing the *Number of instances* to 2. All other configurations may be left at default. **Step 4: Add Storage** - Notice, by default these nodes do not come with a lot of storage (only 75 GB). For this tutorial, since we are only using the STL-10 dataset, this is plenty of storage. But, if you want to train on a larger dataset such as ImageNet, you will have to add much more storage just to fit the dataset and any trained models you wish to save. **Step 5: Add Tags** - Nothing to be done here, just move on. **Step 6: Configure Security Group** - This is a critical step in the configuration process. By default two nodes in the same security group would not be able to communicate in the distributed training setting. Here, we want to create a **new** security group for the two nodes to be in. However, we cannot finish configuring in this step. For now, just remember your new security group name (e.g. launch-wizard-12) then move on to Step 7. **Step 7: Review Instance Launch** - Here, review the instance then launch it. By default, this will automatically start initializing the two instances. You can monitor the initialization progress from the dashboard. Configure Security Group ~~~~~~~~~~~~~~~~~~~~~~~~ Recall that we were not able to properly configure the security group when creating the instances. Once you have launched the instance, select the *Network & Security > Security Groups* tab in the EC2 dashboard. This will bring up a list of security groups you have access to. Select the new security group you created in Step 6 (i.e. launch-wizard-12), which will bring up tabs called *Description, Inbound, Outbound, and Tags*. 
First, select the *Inbound* tab and *Edit* to add a rule to allow "All Traffic" from "Sources" in the launch-wizard-12 security group. Then select the *Outbound* tab and do the exact same thing. Now, we have effectively allowed all Inbound and Outbound traffic of all types between nodes in the launch-wizard-12 security group. Necessary Information ~~~~~~~~~~~~~~~~~~~~~ Before continuing, we must find and remember the IP addresses of both nodes. In the EC2 dashboard find your running instances. For both instances, write down the *IPv4 Public IP* and the *Private IPs*. For the remainder of the document, we will refer to these as the **node0-publicIP**, **node0-privateIP**, **node1-publicIP**, and **node1-privateIP**. The public IPs are the addresses we will use to SSH in, and the private IPs will be used for inter-node communication. Environment Setup ----------------- The next critical step is the setup of each node. Unfortunately, we cannot configure both nodes at the same time, so this process must be done on each node separately. However, this is a one time setup, so once you have the nodes configured properly you will not have to reconfigure for future distributed training projects. The first step, once logged onto the node, is to create a new conda environment with python 3.6 and numpy. Once created activate the environment. :: $ conda create -n nightly_pt python=3.6 numpy $ source activate nightly_pt Next, we will install a nightly build of Cuda 9.0 enabled PyTorch with pip in the conda environment. :: $ pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu90/torch_nightly.html We must also install torchvision so we can use the torchvision model and dataset. At this time, we must build torchvision from source as the pip installation will by default install an old version of PyTorch on top of the nightly build we just installed. :: $ cd $ git clone https://github.com/pytorch/vision.git $ cd vision $ python setup.py install And finally, **VERY IMPORTANT** step is to set the network interface name for the NCCL socket. This is set with the environment variable ``NCCL_SOCKET_IFNAME``. To get the correct name, run the ``ifconfig`` command on the node and look at the interface name that corresponds to the node's *privateIP* (e.g. ens3). Then set the environment variable as :: $ export NCCL_SOCKET_IFNAME=ens3 Remember, do this on both nodes. You may also consider adding the NCCL\_SOCKET\_IFNAME setting to your *.bashrc*. An important observation is that we did not setup a shared filesystem between the nodes. Therefore, each node will have to have a copy of the code and a copy of the datasets. For more information about setting up a shared network filesystem between nodes, see `here <https://aws.amazon.com/blogs/aws/amazon-elastic-file-system-shared-file-storage-for-amazon-ec2/>`__. Distributed Training Code ------------------------- With the instances running and the environments setup we can now get into the training code. Most of the code here has been taken from the `PyTorch ImageNet Example <https://github.com/pytorch/examples/tree/master/imagenet>`__ which also supports distributed training. This code provides a good starting point for a custom trainer as it has much of the boilerplate training loop, validation loop, and accuracy tracking functionality. However, you will notice that the argument parsing and other non-essential functions have been stripped out for simplicity. 
In this example we will use `torchvision.models.resnet18 <https://pytorch.org/docs/stable/torchvision/models.html#torchvision.models.resnet18>`__ model and will train it on the `torchvision.datasets.STL10 <https://pytorch.org/docs/stable/torchvision/datasets.html#torchvision.datasets.STL10>`__ dataset. To accomodate for the dimensionality mismatch of STL-10 with Resnet18, we will resize each image to 224x224 with a transform. Notice, the choice of model and dataset are orthogonal to the distributed training code, you may use any dataset and model you wish and the process is the same. Lets get started by first handling the imports and talking about some helper functions. Then we will define the train and test functions, which have been largely taken from the ImageNet Example. At the end, we will build the main part of the code which handles the distributed training setup. And finally, we will discuss how to actually run the code. Imports ~~~~~~~ The important distributed training specific imports here are `torch.nn.parallel <https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel>`__, `torch.distributed <https://pytorch.org/docs/stable/distributed.html>`__, `torch.utils.data.distributed <https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler>`__, and `torch.multiprocessing <https://pytorch.org/docs/stable/multiprocessing.html>`__. It is also important to set the multiprocessing start method to *spawn* or *forkserver* (only supported in Python 3), as the default is *fork* which may cause deadlocks when using multiple worker processes for dataloading. ``` import time import sys import torch if __name__ == '__main__': torch.multiprocessing.set_start_method('spawn') import torch.nn as nn import torch.nn.parallel import torch.distributed as dist import torch.optim import torch.utils.data import torch.utils.data.distributed import torchvision.transforms as transforms import torchvision.datasets as datasets import torchvision.models as models from torch.multiprocessing import Pool, Process ``` Helper Functions ~~~~~~~~~~~~~~~~ We must also define some helper functions and classes that will make training easier. The ``AverageMeter`` class tracks training statistics like accuracy and iteration count. The ``accuracy`` function computes and returns the top-k accuracy of the model so we can track learning progress. Both are provided for training convenience but neither are distributed training specific. ``` class AverageMeter(object): """Computes and stores the average and current value""" def __init__(self): self.reset() def reset(self): self.val = 0 self.avg = 0 self.sum = 0 self.count = 0 def update(self, val, n=1): self.val = val self.sum += val * n self.count += n self.avg = self.sum / self.count def accuracy(output, target, topk=(1,)): """Computes the precision@k for the specified values of k""" with torch.no_grad(): maxk = max(topk) batch_size = target.size(0) _, pred = output.topk(maxk, 1, True, True) pred = pred.t() correct = pred.eq(target.view(1, -1).expand_as(pred)) res = [] for k in topk: correct_k = correct[:k].view(-1).float().sum(0, keepdim=True) res.append(correct_k.mul_(100.0 / batch_size)) return res ``` Train Functions ~~~~~~~~~~~~~~~ To simplify the main loop, it is best to separate a training epoch step into a function called ``train``. This function trains the input model for one epoch of the *train\_loader*. 
The only distributed training artifact in this function is setting the `non\_blocking <https://pytorch.org/docs/stable/notes/cuda.html#use-pinned-memory-buffers>`__ attributes of the data and label tensors to ``True`` before the forward pass. This allows asynchronous GPU copies of the data meaning transfers can be overlapped with computation. This function also outputs training statistics along the way so we can track progress throughout the epoch. The other function to define here is ``adjust_learning_rate``, which decays the initial learning rate at a fixed schedule. This is another boilerplate trainer function that is useful to train accurate models. ``` def train(train_loader, model, criterion, optimizer, epoch): batch_time = AverageMeter() data_time = AverageMeter() losses = AverageMeter() top1 = AverageMeter() top5 = AverageMeter() # switch to train mode model.train() end = time.time() for i, (input, target) in enumerate(train_loader): # measure data loading time data_time.update(time.time() - end) # Create non_blocking tensors for distributed training input = input.cuda(non_blocking=True) target = target.cuda(non_blocking=True) # compute output output = model(input) loss = criterion(output, target) # measure accuracy and record loss prec1, prec5 = accuracy(output, target, topk=(1, 5)) losses.update(loss.item(), input.size(0)) top1.update(prec1[0], input.size(0)) top5.update(prec5[0], input.size(0)) # compute gradients in a backward pass optimizer.zero_grad() loss.backward() # Call step of optimizer to update model params optimizer.step() # measure elapsed time batch_time.update(time.time() - end) end = time.time() if i % 10 == 0: print('Epoch: [{0}][{1}/{2}]\t' 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t' 'Data {data_time.val:.3f} ({data_time.avg:.3f})\t' 'Loss {loss.val:.4f} ({loss.avg:.4f})\t' 'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\t' 'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format( epoch, i, len(train_loader), batch_time=batch_time, data_time=data_time, loss=losses, top1=top1, top5=top5)) def adjust_learning_rate(initial_lr, optimizer, epoch): """Sets the learning rate to the initial LR decayed by 10 every 30 epochs""" lr = initial_lr * (0.1 ** (epoch // 30)) for param_group in optimizer.param_groups: param_group['lr'] = lr ``` Validation Function ~~~~~~~~~~~~~~~~~~~ To track generalization performance and simplify the main loop further we can also extract the validation step into a function called ``validate``. This function runs a full validation step of the input model on the input validation dataloader and returns the top-1 accuracy of the model on the validation set. Again, you will notice the only distributed training feature here is setting ``non_blocking=True`` for the training data and labels before they are passed to the model. 
``` def validate(val_loader, model, criterion): batch_time = AverageMeter() losses = AverageMeter() top1 = AverageMeter() top5 = AverageMeter() # switch to evaluate mode model.eval() with torch.no_grad(): end = time.time() for i, (input, target) in enumerate(val_loader): input = input.cuda(non_blocking=True) target = target.cuda(non_blocking=True) # compute output output = model(input) loss = criterion(output, target) # measure accuracy and record loss prec1, prec5 = accuracy(output, target, topk=(1, 5)) losses.update(loss.item(), input.size(0)) top1.update(prec1[0], input.size(0)) top5.update(prec5[0], input.size(0)) # measure elapsed time batch_time.update(time.time() - end) end = time.time() if i % 100 == 0: print('Test: [{0}/{1}]\t' 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t' 'Loss {loss.val:.4f} ({loss.avg:.4f})\t' 'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\t' 'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format( i, len(val_loader), batch_time=batch_time, loss=losses, top1=top1, top5=top5)) print(' * Prec@1 {top1.avg:.3f} Prec@5 {top5.avg:.3f}' .format(top1=top1, top5=top5)) return top1.avg ``` Inputs ~~~~~~ With the helper functions out of the way, now we have reached the interesting part. Here is where we will define the inputs for the run. Some of the inputs are standard model training inputs such as batch size and number of training epochs, and some are specific to our distributed training task. The required inputs are: - **batch\_size** - batch size for *each* process in the distributed training group. Total batch size across distributed model is batch\_size\*world\_size - **workers** - number of worker processes used with the dataloaders in each process - **num\_epochs** - total number of epochs to train for - **starting\_lr** - starting learning rate for training - **world\_size** - number of processes in the distributed training environment - **dist\_backend** - backend to use for distributed training communication (i.e. NCCL, Gloo, MPI, etc.). In this tutorial, since we are using several multi-gpu nodes, NCCL is suggested. - **dist\_url** - URL to specify the initialization method of the process group. This may contain the IP address and port of the rank0 process or be a non-existant file on a shared file system. Here, since we do not have a shared file system this will incorporate the **node0-privateIP** and the port on node0 to use. ``` print("Collect Inputs...") # Batch Size for training and testing batch_size = 32 # Number of additional worker processes for dataloading workers = 2 # Number of epochs to train for num_epochs = 2 # Starting Learning Rate starting_lr = 0.1 # Number of distributed processes world_size = 4 # Distributed backend type dist_backend = 'nccl' # Url used to setup distributed training dist_url = "tcp://172.31.22.234:23456" ``` Initialize process group ~~~~~~~~~~~~~~~~~~~~~~~~ One of the most important parts of distributed training in PyTorch is to properly setup the process group, which is the **first** step in initializing the ``torch.distributed`` package. To do this, we will use the ``torch.distributed.init_process_group`` function which takes several inputs. First, a *backend* input which specifies the backend to use (i.e. NCCL, Gloo, MPI, etc.). An *init\_method* input which is either a url containing the address and port of the rank0 machine or a path to a non-existant file on the shared file system. 
Note, to use the file init\_method, all machines must have access to the file; similarly, for the url method, all machines must be able to communicate on the network, so make sure to configure any firewalls and network settings to accommodate. The *init\_process\_group* function also takes *rank* and *world\_size* arguments which specify the rank of this process when run and the number of processes in the collective, respectively. The *init\_method* input can also be "env://". In this case, the address and port of the rank0 machine will be read from the following two environment variables respectively: MASTER_ADDR, MASTER_PORT. If the *rank* and *world\_size* arguments are not specified in the *init\_process\_group* function, they can both be read from the following two environment variables respectively as well: RANK, WORLD_SIZE. Another important step, especially when each node has multiple gpus, is to set the *local\_rank* of this process. For example, if you have two nodes, each with 8 GPUs, and you wish to train with all of them, then $world\_size=16$ and each node will have a process with local rank 0-7. This local\_rank is used to set the device (i.e. which GPU to use) for the process and is later used to set the device when creating a distributed data parallel model. It is also recommended to use the NCCL backend in this hypothetical environment as NCCL is preferred for multi-gpu nodes. ``` print("Initialize Process Group...") # Initialize Process Group # v1 - init with url dist.init_process_group(backend=dist_backend, init_method=dist_url, rank=int(sys.argv[1]), world_size=world_size) # v2 - init with file # dist.init_process_group(backend="nccl", init_method="file:///home/ubuntu/pt-distributed-tutorial/trainfile", rank=int(sys.argv[1]), world_size=world_size) # v3 - init with environment variables # dist.init_process_group(backend="nccl", init_method="env://", rank=int(sys.argv[1]), world_size=world_size) # Establish Local Rank and set device on this node local_rank = int(sys.argv[2]) dp_device_ids = [local_rank] torch.cuda.set_device(local_rank) ``` Initialize Model ~~~~~~~~~~~~~~~~ The next major step is to initialize the model to be trained. Here, we will use a resnet18 model from ``torchvision.models``, but any model may be used. First, we initialize the model and place it in GPU memory. Next, we make the model ``DistributedDataParallel``, which handles the distribution of the data to and from the model and is critical for distributed training. The ``DistributedDataParallel`` module also handles the averaging of gradients across the world, so we do not have to explicitly average the gradients in the training step. It is important to note that this is a blocking function, meaning program execution will wait at this function until *world\_size* processes have joined the process group. Also, notice we pass our device ids list as a parameter which contains the local rank (i.e. GPU) we are using. Finally, we specify the loss function and optimizer to train the model with.
``` print("Initialize Model...") # Construct Model model = models.resnet18(pretrained=False).cuda() # Make model DistributedDataParallel model = torch.nn.parallel.DistributedDataParallel(model, device_ids=dp_device_ids, output_device=local_rank) # define loss function (criterion) and optimizer criterion = nn.CrossEntropyLoss().cuda() optimizer = torch.optim.SGD(model.parameters(), starting_lr, momentum=0.9, weight_decay=1e-4) ``` Initialize Dataloaders ~~~~~~~~~~~~~~~~~~~~~~ The last step in preparation for the training is to specify which dataset to use. Here we use the `STL-10 dataset <https://cs.stanford.edu/~acoates/stl10/>`__ from `torchvision.datasets.STL10 <https://pytorch.org/docs/stable/torchvision/datasets.html#torchvision.datasets.STL10>`__. The STL10 dataset is a 10 class dataset of 96x96px color images. For use with our model, we resize the images to 224x224px in the transform. One distributed training specific item in this section is the use of the ``DistributedSampler`` for the training set, which is designed to be used in conjunction with ``DistributedDataParallel`` models. This object handles the partitioning of the dataset across the distributed environment so that not all models are training on the same subset of data, which would be counterproductive. Finally, we create the ``DataLoader``'s which are responsible for feeding the data to the processes. The STL-10 dataset will automatically download on the nodes if they are not present. If you wish to use your own dataset you should download the data, write your own dataset handler, and construct a dataloader for your dataset here. ``` print("Initialize Dataloaders...") # Define the transform for the data. Notice, we must resize to 224x224 with this dataset and model. transform = transforms.Compose( [transforms.Resize(224), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) # Initialize Datasets. STL10 will automatically download if not present trainset = datasets.STL10(root='./data', split='train', download=True, transform=transform) valset = datasets.STL10(root='./data', split='test', download=True, transform=transform) # Create DistributedSampler to handle distributing the dataset across nodes when training # This can only be called after torch.distributed.init_process_group is called train_sampler = torch.utils.data.distributed.DistributedSampler(trainset) # Create the Dataloaders to feed data to the training and validation steps train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=(train_sampler is None), num_workers=workers, pin_memory=False, sampler=train_sampler) val_loader = torch.utils.data.DataLoader(valset, batch_size=batch_size, shuffle=False, num_workers=workers, pin_memory=False) ``` Training Loop ~~~~~~~~~~~~~ The last step is to define the training loop. We have already done most of the work for setting up the distributed training so this is not distributed training specific. The only detail is setting the current epoch count in the ``DistributedSampler``, as the sampler shuffles the data going to each process deterministically based on epoch. After updating the sampler, the loop runs a full training epoch, runs a full validation step then prints the performance of the current model against the best performing model so far. After training for num\_epochs, the loop exits and the tutorial is complete. 
Notice, since this is an exercise, we are not saving models, but one may wish to keep track of the best performing model and then save it at the end of training (see `here <https://github.com/pytorch/examples/blob/master/imagenet/main.py#L184>`__). ``` best_prec1 = 0 for epoch in range(num_epochs): # Set epoch count for DistributedSampler train_sampler.set_epoch(epoch) # Adjust learning rate according to schedule adjust_learning_rate(starting_lr, optimizer, epoch) # train for one epoch print("\nBegin Training Epoch {}".format(epoch+1)) train(train_loader, model, criterion, optimizer, epoch) # evaluate on validation set print("Begin Validation @ Epoch {}".format(epoch+1)) prec1 = validate(val_loader, model, criterion) # remember best prec@1 and save checkpoint if desired # is_best = prec1 > best_prec1 best_prec1 = max(prec1, best_prec1) print("Epoch Summary: ") print("\tEpoch Accuracy: {}".format(prec1)) print("\tBest Accuracy: {}".format(best_prec1)) ``` Running the Code ---------------- Unlike most of the other PyTorch tutorials, this code cannot be run directly out of this notebook. To run it, download the .py version of this file (or convert it using `this <https://gist.github.com/chsasank/7218ca16f8d022e02a9c0deb94a310fe>`__) and upload a copy to both nodes. The astute reader will have noticed that we hardcoded the **node0-privateIP** and $world\_size=4$ but pass the *rank* and *local\_rank* inputs as arg[1] and arg[2] command line arguments, respectively. Once uploaded, open two ssh terminals into each node. - On the first terminal for node0, run ``$ python main.py 0 0`` - On the second terminal for node0, run ``$ python main.py 1 1`` - On the first terminal for node1, run ``$ python main.py 2 0`` - On the second terminal for node1, run ``$ python main.py 3 1`` The programs will start and, after printing "Initialize Model...", wait for all four processes to join the process group. Notice the first argument is not repeated, as this is the unique global rank of the process. The second argument is repeated, as that is the local rank of the process running on the node. If you run ``nvidia-smi`` on each node, you will see two processes on each node, one running on GPU0 and one on GPU1. We have now completed the distributed training example! Hopefully you can see how you would use this tutorial to help train your own models on your own datasets, even if you are not using the exact same distributed environment. If you are using AWS, don't forget to **SHUT DOWN YOUR NODES** if you are not using them or you may find an uncomfortably large bill at the end of the month. **Where to go next** - Check out the `launcher utility <https://pytorch.org/docs/stable/distributed.html#launch-utility>`__ for a different way of kicking off the run (a minimal sketch is shown below) - Check out the `torch.multiprocessing.spawn utility <https://pytorch.org/docs/master/multiprocessing.html#spawning-subprocesses>`__ for another easy way of kicking off multiple distributed processes. The `PyTorch ImageNet Example <https://github.com/pytorch/examples/tree/master/imagenet>`__ has it implemented and can demonstrate how to use it. - If possible, set up an NFS so you only need one copy of the dataset
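For reference, the launcher utility mentioned above can start the same two-node, four-process run without passing the ranks by hand. The snippet below is only a sketch: it assumes the older ``torch.distributed.launch`` interface and reuses the **node0-privateIP** and port from earlier, and ``main.py`` would first need to be adapted to read its rank information from the environment (the ``env://`` variant shown in the process group section) instead of from positional arguments.

```
# On node0 (node_rank 0) -- sketch only, adapt main.py to read ranks from the environment first
python -m torch.distributed.launch --nproc_per_node=2 --nnodes=2 --node_rank=0 \
    --master_addr=172.31.22.234 --master_port=23456 main.py

# On node1 (node_rank 1)
python -m torch.distributed.launch --nproc_per_node=2 --nnodes=2 --node_rank=1 \
    --master_addr=172.31.22.234 --master_port=23456 main.py
```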
``` # Copyright 2021 Google LLC # Use of this source code is governed by an MIT-style # license that can be found in the LICENSE file or at # https://opensource.org/licenses/MIT. # Notebook authors: Kevin P. Murphy ([email protected]) # and Mahmoud Soliman ([email protected]) # This notebook reproduces figures for chapter 1 from the book # "Probabilistic Machine Learning: An Introduction" # by Kevin Murphy (MIT Press, 2021). # Book pdf is available from http://probml.ai ``` <a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a> <a href="https://colab.research.google.com/github/probml/pml-book/blob/main/pml1/figure_notebooks/chapter1_introduction_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Figure 1.1:<a name='1.1'></a> <a name='iris'></a> Three types of Iris flowers: Setosa, Versicolor and Virginica. Used with kind permission of Dennis Kramb and SIGNA ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.1_A.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.1_B.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.1_C.png" width="256"/> ## Figure 1.2:<a name='1.2'></a> <a name='cat'></a> Illustration of the image classification problem. From https://cs231n.github.io/ . Used with kind permission of Andrej Karpathy ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.2.png" width="256"/> ## Figure 1.3:<a name='1.3'></a> <a name='irisPairs'></a> Visualization of the Iris data as a pairwise scatter plot. On the diagonal we plot the marginal distribution of each feature for each class. The off-diagonals contain scatterplots of all possible pairs of features. 
Figure(s) generated by [iris_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n iris_plot.py ``` ## Figure 1.4:<a name='1.4'></a> <a name='dtreeIrisDepth2'></a> Example of a decision tree of depth 2 applied to the Iris data, using just the petal length and petal width features. Leaf nodes are color coded according to the predicted class. The number of training samples that pass from the root to a node is shown inside each box; we show how many values of each class fall into this node. This vector of counts can be normalized to get a distribution over class labels for each node. We can then pick the majority class. Adapted from Figures 6.1 and 6.2 of <a href='#Geron2019'>[Aur19]</a> . To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/master/notebooks/iris_dtree.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.4_A.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.4_B.png" width="256"/> ## Figure 1.5:<a name='1.5'></a> <a name='linreg'></a> (a) Linear regression on some 1d data. (b) The vertical lines denote the residuals between the observed output value for each input (blue circle) and its predicted value (red cross). The goal of least squares regression is to pick a line that minimizes the sum of squared residuals. Figure(s) generated by [linreg_residuals_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_residuals_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n linreg_residuals_plot.py ``` ## Figure 1.6:<a name='1.6'></a> <a name='polyfit2d'></a> Linear and polynomial regression applied to 2d data. Vertical axis is temperature, horizontal axes are location within a room. 
Data was collected by some remote sensing motes at Intel's lab in Berkeley, CA (data courtesy of Romain Thibaux). (a) The fitted plane has the form $ f ( \bm x ) = w_0 + w_1 x_1 + w_2 x_2$. (b) Temperature data is fitted with a quadratic of the form $ f ( \bm x ) = w_0 + w_1 x_1 + w_2 x_2 + w_3 x_1^2 + w_4 x_2^2$. Figure(s) generated by [linreg_2d_surface_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_2d_surface_demo.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n linreg_2d_surface_demo.py ``` ## Figure 1.7:<a name='1.7'></a> <a name='linregPoly'></a> (a-c) Polynomials of degrees 2, 14 and 20 fit to 21 datapoints (the same data as in \cref fig:linreg ). (d) MSE vs degree. Figure(s) generated by [linreg_poly_vs_degree.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_poly_vs_degree.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n linreg_poly_vs_degree.py ``` ## Figure 1.8:<a name='1.8'></a> <a name='eqn:irisClustering'></a> (a) A scatterplot of the petal features from the iris dataset. (b) The result of unsupervised clustering using $K=3$. Figure(s) generated by [iris_kmeans.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_kmeans.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n iris_kmeans.py ``` ## Figure 1.9:<a name='1.9'></a> <a name='pcaDemo'></a> (a) Scatterplot of iris data (first 3 features). Points are color coded by class. (b) We fit a 2d linear subspace to the 3d data using PCA. The class labels are ignored. Red dots are the original data, black dots are points generated from the model using $ \bm x = \mathbf W \bm z + \bm \mu $, where $ \bm z $ are latent points on the underlying inferred 2d linear manifold. 
Figure(s) generated by [iris_pca.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_pca.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n iris_pca.py ``` ## Figure 1.10:<a name='1.10'></a> <a name='humanoid'></a> Examples of some control problems. (a) Space Invaders Atari game. From https://gym.openai.com/envs/SpaceInvaders-v0/ . (b) Controlling a humanoid robot in the MuJuCo simulator so it walks as fast as possible without falling over. From https://gym.openai.com/envs/Humanoid-v2/ ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.10_A.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.10_B.png" width="256"/> ## Figure 1.11:<a name='1.11'></a> <a name='cake'></a> The three types of machine learning visualized as layers of a chocolate cake. This figure (originally from https://bit.ly/2m65Vs1 ) was used in a talk by Yann LeCun at NIPS'16, and is used with his kind permission ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.11.png" width="256"/> ## Figure 1.12:<a name='1.12'></a> <a name='emnist'></a> (a) Visualization of the MNIST dataset. Each image is $28 \times 28$. There are 60k training examples and 10k test examples. We show the first 25 images from the training set. 
Figure(s) generated by [mnist_viz_tf.py](https://github.com/probml/pyprobml/blob/master/scripts/mnist_viz_tf.py) [emnist_viz_pytorch.py](https://github.com/probml/pyprobml/blob/master/scripts/emnist_viz_pytorch.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n mnist_viz_tf.py try_deimport() %run -n emnist_viz_pytorch.py ``` ## Figure 1.13:<a name='1.13'></a> <a name='CIFAR'></a> (a) Visualization of the Fashion-MNIST dataset <a href='#fashion'>[XRV17]</a> . The dataset has the same size as MNIST, but is harder to classify. There are 10 classes: T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle-boot. We show the first 25 images from the training set. Figure(s) generated by [fashion_viz_tf.py](https://github.com/probml/pyprobml/blob/master/scripts/fashion_viz_tf.py) [cifar_viz_tf.py](https://github.com/probml/pyprobml/blob/master/scripts/cifar_viz_tf.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n fashion_viz_tf.py try_deimport() %run -n cifar_viz_tf.py ``` ## Figure 1.14:<a name='1.14'></a> <a name='imagenetError'></a> (a) Sample images from the \bf ImageNet dataset <a href='#ILSVRC15'>[Rus+15]</a> . This subset consists of 1.3M color training images, each of which is $256 \times 256$ pixels in size. There are 1000 possible labels, one per image, and the task is to minimize the top-5 error rate, i.e., to ensure the correct label is within the 5 most probable predictions. Below each image we show the true label, and a distribution over the top 5 predicted labels. If the true label is in the top 5, its probability bar is colored red. Predictions are generated by a convolutional neural network (CNN) called ``AlexNet'' (\cref sec:alexNet ). From Figure 4 of <a href='#Krizhevsky12'>[KSH12]</a> . Used with kind permission of Alex Krizhevsky. (b) Misclassification rate (top 5) on the ImageNet competition over time. 
Used with kind permission of Andrej Karpathy ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.14_A.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.14_B.png" width="256"/> ## Figure 1.15:<a name='1.15'></a> <a name='termDoc'></a> Example of a term-document matrix, where raw counts have been replaced by their TF-IDF values (see \cref sec:tfidf ). Darker cells are larger values. From https://bit.ly/2kByLQI . Used with kind permission of Christoph Carl Kling ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.15.png" width="256"/> ## References: <a name='Geron2019'>[Aur19]</a> G. Aur'elien "Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques for BuildingIntelligent Systems (2nd edition)". (2019). <a name='Krizhevsky12'>[KSH12]</a> A. Krizhevsky, I. Sutskever and G. Hinton. "Imagenet classification with deep convolutional neural networks". (2012). <a name='ILSVRC15'>[Rus+15]</a> O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg and L. Fei-Fei. "ImageNet Large Scale Visual Recognition Challenge". In: ijcv (2015). <a name='fashion'>[XRV17]</a> H. Xiao, K. Rasul and R. Vollgraf. "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms". abs/1708.07747 (2017). arXiv: 1708.07747
# Copy Task Plots ``` import matplotlib.pyplot as plt import numpy as np import pandas as pd from glob import glob import json import os import sys sys.path.append(os.path.abspath(os.getcwd() + "./../")) %matplotlib inline ``` ## Load training history To generate the models and training history used in this notebook, run the following commands: ``` mkdir ./notebooks/copy ./train.py --seed 1 --task copy --checkpoint-interval 500 --checkpoint-path ./notebooks/copy ./train.py --seed 10 --task copy --checkpoint-interval 500 --checkpoint-path ./notebooks/copy ./train.py --seed 100 --task copy --checkpoint-interval 500 --checkpoint-path ./notebooks/copy ./train.py --seed 1000 --task copy --checkpoint-interval 500 --checkpoint-path ./notebooks/copy ``` ``` batch_num = 40000 files = glob("./copy/*-{}.json".format(batch_num)) files # Read the metrics from the .json files history = [json.loads(open(fname, "rt").read()) for fname in files] training = np.array([(x['cost'], x['loss'], x['seq_lengths']) for x in history]) print("Training history (seed x metric x sequence) =", training.shape) # Average every dv values across each (seed, metric) dv = 1000 training = training.reshape(len(files), 3, -1, dv).mean(axis=3) print(training.shape) # Average the seeds training_mean = training.mean(axis=0) training_std = training.std(axis=0) print(training_mean.shape) print(training_std.shape) fig = plt.figure(figsize=(12, 5)) # X axis is normalized to thousands x = np.arange(dv / 1000, (batch_num / 1000) + (dv / 1000), dv / 1000) # Plot the cost # plt.plot(x, training_mean[0], 'o-', linewidth=2, label='Cost') plt.errorbar(x, training_mean[0], yerr=training_std[0], fmt='o-', elinewidth=2, linewidth=2, label='Cost') plt.grid() plt.yticks(np.arange(0, training_mean[0][0]+5, 5)) plt.ylabel('Cost per sequence (bits)') plt.xlabel('Sequence (thousands)') plt.title('Training Convergence', fontsize=16) ax = plt.axes([.57, .55, .25, .25], facecolor=(0.97, 0.97, 0.97)) plt.title("BCELoss") plt.plot(x, training_mean[1], 'r-', label='BCE Loss') plt.yticks(np.arange(0, training_mean[1][0]+0.2, 0.2)) plt.grid() plt.show() loss = history[3]['loss'] cost = history[3]['cost'] seq_lengths = history[3]['seq_lengths'] unique_sls = set(seq_lengths) all_metric = list(zip(range(1, batch_num+1), seq_lengths, loss, cost)) fig = plt.figure(figsize=(12, 5)) plt.ylabel('Cost per sequence (bits)') plt.xlabel('Iteration (thousands)') plt.title('Training Convergence (Per Sequence Length)', fontsize=16) for sl in unique_sls: sl_metrics = [i for i in all_metric if i[1] == sl] x = [i[0] for i in sl_metrics] y = [i[3] for i in sl_metrics] num_pts = len(x) // 50 total_pts = num_pts * 50 x_mean = [i.mean()/1000 for i in np.split(np.array(x)[:total_pts], num_pts)] y_mean = [i.mean() for i in np.split(np.array(y)[:total_pts], num_pts)] plt.plot(x_mean, y_mean, label='Seq-{}'.format(sl)) plt.yticks(np.arange(0, 80, 5)) plt.legend(loc=0) plt.show() ``` # Evaluate ``` import torch from IPython.display import Image as IPythonImage from PIL import Image, ImageDraw, ImageFont import io from tasks.copytask import dataloader from train import evaluate from tasks.copytask import CopyTaskModelTraining model = CopyTaskModelTraining() model.net.load_state_dict(torch.load("./copy/copy-task-10-batch-40000.model")) seq_len = 60 _, x, y = next(iter(dataloader(1, 1, 8, seq_len, seq_len))) result = evaluate(model.net, model.criterion, x, y) y_out = result['y_out'] def cmap(value): pixval = value * 255 low = 64 high = 240 factor = (255 - low - (255-high)) / 255 return 
int(low + pixval * factor) def draw_sequence(y, u=12): seq_len = y.size(0) seq_width = y.size(2) inset = u // 8 pad = u // 2 width = seq_len * u + 2 * pad height = seq_width * u + 2 * pad im = Image.new('L', (width, height)) draw = ImageDraw.ImageDraw(im) draw.rectangle([0, 0, width, height], fill=250) for i in range(seq_len): for j in range(seq_width): val = 1 - y[i, 0, j].data[0] draw.rectangle([pad + i*u + inset, pad + j*u + inset, pad + (i+1)*u - inset, pad + (j+1)*u - inset], fill=cmap(val)) return im def im_to_png_bytes(im): png = io.BytesIO() im.save(png, 'PNG') return bytes(png.getbuffer()) def im_vconcat(im1, im2, pad=8): assert im1.size == im2.size w, h = im1.size width = w height = h * 2 + pad im = Image.new('L', (width, height), color=255) im.paste(im1, (0, 0)) im.paste(im2, (0, h+pad)) return im def make_eval_plot(y, y_out, u=12): im_y = draw_sequence(y, u) im_y_out = draw_sequence(y_out, u) im = im_vconcat(im_y, im_y_out, u//2) w, h = im.size pad_w = u * 7 im2 = Image.new('L', (w+pad_w, h), color=255) im2.paste(im, (pad_w, 0)) # Add text font = ImageFont.truetype("./fonts/PT_Sans-Web-Regular.ttf", 13) draw = ImageDraw.ImageDraw(im2) draw.text((u,4*u), "Targets", font=font) draw.text((u,13*u), "Outputs", font=font) return im2 im = make_eval_plot(y, y_out, u=8) IPythonImage(im_to_png_bytes(im)) ``` ## Create an animated GIF Lets see how the prediction looks like in each checkpoint that we saved. ``` seq_len = 80 _, x, y = next(iter(dataloader(1, 1, 8, seq_len, seq_len))) frames = [] font = ImageFont.truetype("./fonts/PT_Sans-Web-Regular.ttf", 13) for batch_num in range(500, 10500, 500): model = CopyTaskModelTraining() model.net.load_state_dict(torch.load("./copy/copy-task-10-batch-{}.model".format(batch_num))) result = evaluate(model.net, model.criterion, x, y) y_out = result['y_out'] frame = make_eval_plot(y, y_out, u=10) w, h = frame.size frame_seq = Image.new('L', (w, h+40), color=255) frame_seq.paste(frame, (0, 40)) draw = ImageDraw.ImageDraw(frame_seq) draw.text((10, 10), "Sequence Num: {} (Cost: {})".format(batch_num, result['cost']), font=font) frames += [frame_seq] im = frames[0] im.save("./copy-train-80.gif", save_all=True, append_images=frames[1:], loop=0, duration=1000) im = frames[0] im.save("./copy-train-80-fast.gif", save_all=True, append_images=frames[1:], loop=0, duration=100) ```
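To preview the result without leaving the notebook, the saved animation can be displayed inline with the ``IPythonImage`` helper imported above. This is just a convenience sketch using the path written in the previous cell; depending on the notebook frontend, the GIF may animate or only show its first frame.

```
# Display the saved animation inline (path assumes the file written above)
IPythonImage(filename="./copy-train-80.gif")
```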
## TFMA Notebook example This notebook describes how to export your model for TFMA and demonstrates the analysis tooling it offers. Note: Please make sure to follow the instructions in [README.md](https://github.com/tensorflow/tfx/blob/master/tfx/examples/chicago_taxi/README.md) when running this notebook ## Setup Import necessary packages. ``` import apache_beam as beam import os import preprocess import shutil import tensorflow as tf import tensorflow_data_validation as tfdv import tensorflow_model_analysis as tfma from google.protobuf import text_format from tensorflow.python.lib.io import file_io from tensorflow_transform.beam.tft_beam_io import transform_fn_io from tensorflow_transform.coders import example_proto_coder from tensorflow_transform.saved import saved_transform_io from tensorflow_transform.tf_metadata import dataset_schema from tensorflow_transform.tf_metadata import schema_utils from trainer import task from trainer import taxi ``` Helper functions and some constants for running the notebook locally. ``` BASE_DIR = os.getcwd() DATA_DIR = os.path.join(BASE_DIR, 'data') OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output') # Base dir containing train and eval data TRAIN_DATA_DIR = os.path.join(DATA_DIR, 'train') EVAL_DATA_DIR = os.path.join(DATA_DIR, 'eval') # Base dir where TFT writes training data TFT_TRAIN_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_train') TFT_TRAIN_FILE_PREFIX = 'train_transformed' # Base dir where TFT writes eval data TFT_EVAL_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_eval') TFT_EVAL_FILE_PREFIX = 'eval_transformed' TF_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tf') # Base dir where TFMA writes eval data TFMA_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tfma') SERVING_MODEL_DIR = 'serving_model_dir' EVAL_MODEL_DIR = 'eval_model_dir' def get_tft_train_output_dir(run_id): return _get_output_dir(TFT_TRAIN_OUTPUT_BASE_DIR, run_id) def get_tft_eval_output_dir(run_id): return _get_output_dir(TFT_EVAL_OUTPUT_BASE_DIR, run_id) def get_tf_output_dir(run_id): return _get_output_dir(TF_OUTPUT_BASE_DIR, run_id) def get_tfma_output_dir(run_id): return _get_output_dir(TFMA_OUTPUT_BASE_DIR, run_id) def _get_output_dir(base_dir, run_id): return os.path.join(base_dir, 'run_' + str(run_id)) def get_schema_file(): return os.path.join(OUTPUT_DIR, 'schema.pbtxt') ``` Clean up output directories. ``` shutil.rmtree(TFT_TRAIN_OUTPUT_BASE_DIR, ignore_errors=True) shutil.rmtree(TFT_EVAL_OUTPUT_BASE_DIR, ignore_errors=True) shutil.rmtree(TF_OUTPUT_BASE_DIR, ignore_errors=True) shutil.rmtree(get_schema_file(), ignore_errors=True) ``` ## Compute and visualize descriptive data statistics ``` # Compute stats over training data. train_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(TRAIN_DATA_DIR, 'data.csv')) # Visualize training data stats. tfdv.visualize_statistics(train_stats) ``` ## Infer a schema ``` # Infer a schema from the training data stats. schema = tfdv.infer_schema(statistics=train_stats, infer_feature_shape=False) tfdv.display_schema(schema=schema) ``` ## Check evaluation data for errors ``` # Compute stats over eval data. eval_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(EVAL_DATA_DIR, 'data.csv')) # Compare stats of eval data with training data. tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats, lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET') # Check eval data for errors by validating the eval data stats using the previously inferred schema. 
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema) tfdv.display_anomalies(anomalies) # Update the schema based on the observed anomalies. # Relax the minimum fraction of values that must come from the domain for feature company. company = tfdv.get_feature(schema, 'company') company.distribution_constraints.min_domain_mass = 0.9 # Add new value to the domain of feature payment_type. payment_type_domain = tfdv.get_domain(schema, 'payment_type') payment_type_domain.value.append('Prcard') # Validate eval stats after updating the schema updated_anomalies = tfdv.validate_statistics(eval_stats, schema) tfdv.display_anomalies(updated_anomalies) ``` ## Freeze the schema Now that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state. ``` file_io.recursive_create_dir(OUTPUT_DIR) file_io.write_string_to_file(get_schema_file(), text_format.MessageToString(schema)) ``` ## Preprocess Inputs transform_data is defined in preprocess.py and uses the tensorflow_transform library to perform preprocessing. The same code is used for both local preprocessing in this notebook and preprocessing in the Cloud (via Dataflow). ``` # Transform eval data preprocess.transform_data(input_handle=os.path.join(EVAL_DATA_DIR, 'data.csv'), outfile_prefix=TFT_EVAL_FILE_PREFIX, working_dir=get_tft_eval_output_dir(0), schema_file=get_schema_file(), pipeline_args=['--runner=DirectRunner']) print('Done') # Transform training data preprocess.transform_data(input_handle=os.path.join(TRAIN_DATA_DIR, 'data.csv'), outfile_prefix=TFT_TRAIN_FILE_PREFIX, working_dir=get_tft_train_output_dir(0), schema_file=get_schema_file(), pipeline_args=['--runner=DirectRunner']) print('Done') ``` ## Compute statistics over transformed data ``` # Compute stats over transformed training data. TRANSFORMED_TRAIN_DATA = os.path.join(get_tft_train_output_dir(0), TFT_TRAIN_FILE_PREFIX + "*") transformed_train_stats = tfdv.generate_statistics_from_tfrecord(data_location=TRANSFORMED_TRAIN_DATA) # Visualize transformed training data stats and compare to raw training data. # Use 'Feature search' to focus on a feature and see statistics pre- and post-transformation. tfdv.visualize_statistics(transformed_train_stats, train_stats, lhs_name='TRANSFORMED', rhs_name='RAW') ``` ## Prepare the Model To use TFMA, export the model into an **EvalSavedModel** by calling ``tfma.export.export_eval_savedmodel``. ``tfma.export.export_eval_savedmodel`` is analogous to ``estimator.export_savedmodel`` but exports the evaluation graph as opposed to the training or inference graph. Notice that one of the inputs is ``eval_input_receiver_fn``, which is analogous to ``serving_input_receiver_fn`` for ``estimator.export_savedmodel``. For more details, refer to the documentation for TFMA on GitHub. Construct the **EvalSavedModel** after training is completed. ``` def run_experiment(hparams): """Run the training and evaluate using the high level API""" # Train and evaluate the model as usual. estimator = task.train_and_maybe_evaluate(hparams) # Export TFMA's special EvalSavedModel eval_model_dir = os.path.join(hparams.output_dir, EVAL_MODEL_DIR) receiver_fn = lambda: eval_input_receiver_fn(hparams.tf_transform_dir) tfma.export.export_eval_savedmodel( estimator=estimator, export_dir_base=eval_model_dir, eval_input_receiver_fn=receiver_fn) def eval_input_receiver_fn(working_dir): # Extract feature spec from the schema.
raw_feature_spec = schema_utils.schema_as_feature_spec(schema).feature_spec serialized_tf_example = tf.placeholder( dtype=tf.string, shape=[None], name='input_example_tensor') # First we deserialize our examples using the raw schema. features = tf.parse_example(serialized_tf_example, raw_feature_spec) # Now that we have our raw examples, we must process them through tft _, transformed_features = ( saved_transform_io.partially_apply_saved_transform( os.path.join(working_dir, transform_fn_io.TRANSFORM_FN_DIR), features)) # The key MUST be 'examples'. receiver_tensors = {'examples': serialized_tf_example} # NOTE: Model is driven by transformed features (since training works on the # materialized output of TFT), but slicing will happen on raw features. features.update(transformed_features) return tfma.export.EvalInputReceiver( features=features, receiver_tensors=receiver_tensors, labels=transformed_features[taxi.transformed_name(taxi.LABEL_KEY)]) print('Done') ``` ## Train and export the model for TFMA ``` def run_local_experiment(tft_run_id, tf_run_id, num_layers, first_layer_size, scale_factor): """Helper method to train and export the model for TFMA. The caller specifies the input and output directory by providing run ids. The optional parameters allow the user to change the model for the time series view. Args: tft_run_id: The run id for the preprocessing. Identifies the folder containing training data. tf_run_id: The run id for this training run. Identifies where the exported model will be written to. num_layers: The number of hidden layers. first_layer_size: The size of the first hidden layer. scale_factor: The scale factor between each layer in the hidden layers. """ hparams = tf.contrib.training.HParams( # Inputs: are tf-transformed materialized features train_files=os.path.join(get_tft_train_output_dir(tft_run_id), TFT_TRAIN_FILE_PREFIX + '-00000-of-*'), eval_files=os.path.join(get_tft_eval_output_dir(tft_run_id), TFT_EVAL_FILE_PREFIX + '-00000-of-*'), schema_file=get_schema_file(), # Output: dir for trained model job_dir=get_tf_output_dir(tf_run_id), tf_transform_dir=get_tft_train_output_dir(tft_run_id), # Output: dir for both the serving model and eval_model which will go into tfma # evaluation output_dir=get_tf_output_dir(tf_run_id), train_steps=10000, eval_steps=5000, num_layers=num_layers, first_layer_size=first_layer_size, scale_factor=scale_factor, num_epochs=None, train_batch_size=40, eval_batch_size=40) run_experiment(hparams) print('Done') run_local_experiment(tft_run_id=0, tf_run_id=0, num_layers=4, first_layer_size=100, scale_factor=0.7) print('Done') ``` ## Run TFMA to compute metrics For local analysis, TFMA offers a helper method ``tfma.run_model_analysis`` ``` help(tfma.run_model_analysis) ``` #### You can also write your own custom pipeline if you want to perform extra transformations on the data before evaluation. ``` def run_tfma(slice_spec, tf_run_id, tfma_run_id, input_csv, schema_file, add_metrics_callbacks=None): """A simple wrapper function that runs tfma locally. A function that does extra transformations on the data and then runs model analysis. Args: slice_spec: The slicing spec for how to slice the data. tf_run_id: An id to construct the model directories with. tfma_run_id: An id to construct output directories with. input_csv: The evaluation data in csv format. schema_file: The file holding a text-serialized schema for the input data. add_metrics_callbacks: Optional list of callbacks for computing extra metrics.
Returns: An EvalResult that can be used with TFMA visualization functions. """ eval_model_base_dir = os.path.join(get_tf_output_dir(tf_run_id), EVAL_MODEL_DIR) eval_model_dir = os.path.join(eval_model_base_dir, next(os.walk(eval_model_base_dir))[1][0]) eval_shared_model = tfma.default_eval_shared_model( eval_saved_model_path=eval_model_dir, add_metrics_callbacks=add_metrics_callbacks) schema = taxi.read_schema(schema_file) print(eval_model_dir) display_only_data_location = input_csv with beam.Pipeline() as pipeline: csv_coder = taxi.make_csv_coder(schema) raw_data = ( pipeline | 'ReadFromText' >> beam.io.ReadFromText( input_csv, coder=beam.coders.BytesCoder(), skip_header_lines=True) | 'ParseCSV' >> beam.Map(csv_coder.decode)) # Examples must be in clean tf-example format. coder = taxi.make_proto_coder(schema) raw_data = ( raw_data | 'ToSerializedTFExample' >> beam.Map(coder.encode)) _ = (raw_data | 'ExtractEvaluateAndWriteResults' >> tfma.ExtractEvaluateAndWriteResults( eval_shared_model=eval_shared_model, slice_spec=slice_spec, output_path=get_tfma_output_dir(tfma_run_id), display_only_data_location=input_csv)) return tfma.load_eval_result(output_path=get_tfma_output_dir(tfma_run_id)) print('Done') ``` #### You can also compute metrics on slices of your data in TFMA. Slices can be specified using ``tfma.slicer.SingleSliceSpec``. Below are examples of how slices can be specified. ``` # An empty slice spec means the overall slice, that is, the whole dataset. OVERALL_SLICE_SPEC = tfma.slicer.SingleSliceSpec() # Data can be sliced along a feature column # In this case, data is sliced along feature column trip_start_hour. FEATURE_COLUMN_SLICE_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_hour']) # Data can be sliced by crossing feature columns # In this case, slices are computed for trip_start_day x trip_start_month. FEATURE_COLUMN_CROSS_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_day', 'trip_start_month']) # Metrics can be computed for a particular feature value. # In this case, metrics are computed for all data where trip_start_hour is 12. FEATURE_VALUE_SPEC = tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 12)]) # It is also possible to mix column cross and feature value cross. # In this case, data where trip_start_hour is 12 will be sliced by trip_start_day. COLUMN_CROSS_VALUE_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)]) ALL_SPECS = [ OVERALL_SLICE_SPEC, FEATURE_COLUMN_SLICE_SPEC, FEATURE_COLUMN_CROSS_SPEC, FEATURE_VALUE_SPEC, COLUMN_CROSS_VALUE_SPEC ] ``` #### Let's run TFMA! ``` tf.logging.set_verbosity(tf.logging.INFO) tfma_result_1 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'), tf_run_id=0, tfma_run_id=1, slice_spec=ALL_SPECS, schema_file=get_schema_file()) print('Done') ``` ## Visualization: Slicing Metrics To see the slices, either use the name of the column (by setting slicing_column) or provide a tfma.slicer.SingleSliceSpec (by setting slicing_spec). If neither is provided, the overall slice will be displayed. The default visualization is **slice overview** when the number of slices is small. It shows the value of a metric for each slice, sorted by another metric. It is also possible to set a threshold to filter out slices with smaller weights. This view also supports **metrics histogram** as an alternative visualization. It is also the default view when the number of slices is large.
The results will be divided into buckets and the number of slices / total weights / both can be visualized. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band. To reset the range, double click the band. Filtering can be used to remove outliers in the visualization and the metrics table below. ``` # Show data sliced along feature column trip_start_hour. tfma.view.render_slicing_metrics( tfma_result_1, slicing_column='trip_start_hour') # Show metrics sliced by COLUMN_CROSS_VALUE_SPEC above. tfma.view.render_slicing_metrics(tfma_result_1, slicing_spec=COLUMN_CROSS_VALUE_SPEC) # Show overall metrics. tfma.view.render_slicing_metrics(tfma_result_1) ``` ## Visualization: Plots TFMA offers a number of built-in plots. To see them, add them to ``add_metrics_callbacks``. ``` tf.logging.set_verbosity(tf.logging.INFO) tfma_vis = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'), tf_run_id=0, tfma_run_id='vis', slice_spec=ALL_SPECS, schema_file=get_schema_file(), add_metrics_callbacks=[ # calibration_plot_and_prediction_histogram computes calibration plot and prediction # distribution at different thresholds. tfma.post_export_metrics.calibration_plot_and_prediction_histogram(), # auc_plots enables precision-recall curve and ROC visualization at different thresholds. tfma.post_export_metrics.auc_plots() ]) print('Done') ``` Plots must be visualized for an individual slice. To specify a slice, use ``tfma.slicer.SingleSliceSpec``. In the example below, we are using ``tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 1)])`` to specify the slice where trip_start_hour is 1. Plots are interactive: - Drag to pan - Scroll to zoom - Right click to reset the view Simply hover over the desired data point to see more details. ``` tfma.view.render_plot(tfma_vis, tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 1)])) ``` #### Custom metrics In addition to plots, it is also possible to compute additional metrics not present at export time, or custom metrics, using ``add_metrics_callbacks``. All metrics in ``tf.metrics`` are supported in the callback and can be used to compose other metrics: https://www.tensorflow.org/api_docs/python/tf/metrics In the cells below, the false negative rate is computed as an example. ``` # Defines a callback that adds FNR to the result. def add_fnr_for_threshold(threshold): def _add_fnr_callback(features_dict, predictions_dict, labels_dict): metric_ops = {} prediction_tensor = tf.cast( predictions_dict.get(tf.contrib.learn.PredictionKey.LOGISTIC), tf.float64) fn_value_op, fn_update_op = tf.metrics.false_negatives_at_thresholds(tf.squeeze(labels_dict), tf.squeeze(prediction_tensor), [threshold]) tp_value_op, tp_update_op = tf.metrics.true_positives_at_thresholds(tf.squeeze(labels_dict), tf.squeeze(prediction_tensor), [threshold]) fnr = fn_value_op[0] / (fn_value_op[0] + tp_value_op[0]) metric_ops['FNR@' + str(threshold)] = (fnr, tf.group(fn_update_op, tp_update_op)) return metric_ops return _add_fnr_callback tf.logging.set_verbosity(tf.logging.INFO) tfma_fnr = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'), tf_run_id=0, tfma_run_id='fnr', slice_spec=ALL_SPECS, schema_file=get_schema_file(), add_metrics_callbacks=[ # Simply add the call here. add_fnr_for_threshold(0.75) ]) tfma.view.render_slicing_metrics(tfma_fnr, slicing_spec=FEATURE_COLUMN_SLICE_SPEC) ``` ## Visualization: Time Series It is important to track how your model is doing over time.
TFMA offers two modes to show how your model performs over time. **Multiple model analysis** shows how a model performs from one version to another. This is useful early on to see how the addition of new features, a change in modeling technique, etc., affects the performance. TFMA offers a convenient method. ``` help(tfma.multiple_model_analysis) ``` **Multiple data analysis** shows how a model performs on different evaluation data sets. This is useful to ensure that model performance does not degrade over time. TFMA offers a convenient method. ``` help(tfma.multiple_data_analysis) ``` It is also possible to compose a time series manually. ``` # Create different models. # Run some experiments with different hidden layer configurations. run_local_experiment(tft_run_id=0, tf_run_id=1, num_layers=3, first_layer_size=200, scale_factor=0.7) run_local_experiment(tft_run_id=0, tf_run_id=2, num_layers=4, first_layer_size=240, scale_factor=0.5) print('Done') tfma_result_2 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'), tf_run_id=1, tfma_run_id=2, slice_spec=ALL_SPECS, schema_file=get_schema_file()) tfma_result_3 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'), tf_run_id=2, tfma_run_id=3, slice_spec=ALL_SPECS, schema_file=get_schema_file()) print('Done') ``` Like plots, the time series view must be visualized for a slice too. In the example below, we are showing the overall slice. Select a metric to see its time series graph. Hover over each data point to get more details. ``` eval_results = tfma.make_eval_results([tfma_result_1, tfma_result_2, tfma_result_3], tfma.constants.MODEL_CENTRIC_MODE) tfma.view.render_time_series(eval_results, OVERALL_SLICE_SPEC) ``` Serialized results can also be used to construct a time series, so there is no need to re-run TFMA for models that have already been evaluated in a long running pipeline. ``` # Visualize the results in a Time Series. In this case, we are showing the slice specified. eval_results_from_disk = tfma.load_eval_results([get_tfma_output_dir(1), get_tfma_output_dir(2), get_tfma_output_dir(3)], tfma.constants.MODEL_CENTRIC_MODE) tfma.view.render_time_series(eval_results_from_disk, FEATURE_VALUE_SPEC) ```
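Beyond the interactive widgets, the loaded results can also be inspected programmatically, which is useful for asserting on metrics in a long running pipeline. The exact attribute layout of an ``EvalResult`` varies between TFMA releases, so the sketch below is illustrative only; it assumes ``slicing_metrics`` is exposed as (slice key, metrics) pairs.

```
# Sketch: dump per-slice metrics from one of the results computed above.
# Assumes EvalResult.slicing_metrics yields (slice_key, metrics) pairs,
# which may differ in other TFMA versions.
for slice_key, metrics in tfma_result_1.slicing_metrics:
    print(slice_key)
    print(metrics)
```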
# DeepDreaming with TensorFlow >[Loading and displaying the model graph](#loading) >[Naive feature visualization](#naive) >[Multiscale image generation](#multiscale) >[Laplacian Pyramid Gradient Normalization](#laplacian) >[Playing with feature visualzations](#playing) >[DeepDream](#deepdream) This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science: - visualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (see [GoogLeNet](http://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](http://storage.googleapis.com/deepdream/visualz/vgg16/index.html) galleries) - embed TensorBoard graph visualizations into Jupyter notebooks - produce high-resolution images with tiled computation ([example](http://storage.googleapis.com/deepdream/pilatus_flowers.jpg)) - use Laplacian Pyramid Gradient Normalization to produce smooth and colorful visuals at low cost - generate DeepDream-like images with TensorFlow (DogSlugs included) The network under examination is the [GoogLeNet architecture](http://arxiv.org/abs/1409.4842), trained to classify images into one of 1000 categories of the [ImageNet](http://image-net.org/) dataset. It consists of a set of layers that apply a sequence of transformations to the input image. The parameters of these transformations were determined during the training process by a variant of gradient descent algorithm. The internal image representations may seem obscure, but it is possible to visualize and interpret them. In this notebook we are going to present a few tricks that allow to make these visualizations both efficient to generate and even beautiful. Impatient readers can start with exploring the full galleries of images generated by the method described here for [GoogLeNet](http://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](http://storage.googleapis.com/deepdream/visualz/vgg16/index.html) architectures. ``` # boilerplate code from __future__ import print_function import os from io import BytesIO import numpy as np from functools import partial import PIL.Image from IPython.display import clear_output, Image, display, HTML import tensorflow as tf ``` <a id='loading'></a> ## Loading and displaying the model graph The pretrained network can be downloaded [here](https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip). Unpack the `tensorflow_inception_graph.pb` file from the archive and set its path to `model_fn` variable. Alternatively you can uncomment and run the following cell to download the network: ``` #!wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip inception5h.zip model_fn = 'tensorflow_inception_graph.pb' # creating TensorFlow session and loading the model graph = tf.Graph() sess = tf.InteractiveSession(graph=graph) with tf.gfile.FastGFile(model_fn, 'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) t_input = tf.placeholder(np.float32, name='input') # define the input tensor imagenet_mean = 117.0 t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0) tf.import_graph_def(graph_def, {'input':t_preprocessed}) ``` To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of particular channel of a particular convolutional layer of the neural network. 
The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore. ``` layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name] feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers] print('Number of layers', len(layers)) print('Total number of feature channels:', sum(feature_nums)) # Helper functions for TF Graph visualization def strip_consts(graph_def, max_const_size=32): """Strip large constant values from graph_def.""" strip_def = tf.GraphDef() for n0 in graph_def.node: n = strip_def.node.add() n.MergeFrom(n0) if n.op == 'Const': tensor = n.attr['value'].tensor size = len(tensor.tensor_content) if size > max_const_size: tensor.tensor_content = "<stripped %d bytes>"%size return strip_def def rename_nodes(graph_def, rename_func): res_def = tf.GraphDef() for n0 in graph_def.node: n = res_def.node.add() n.MergeFrom(n0) n.name = rename_func(n.name) for i, s in enumerate(n.input): n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:]) return res_def def show_graph(graph_def, max_const_size=32): """Visualize TensorFlow graph.""" if hasattr(graph_def, 'as_graph_def'): graph_def = graph_def.as_graph_def() strip_def = strip_consts(graph_def, max_const_size=max_const_size) code = """ <script> function load() {{ document.getElementById("{id}").pbtxt = {data}; }} </script> <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()> <div style="height:600px"> <tf-graph-basic id="{id}"></tf-graph-basic> </div> """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand())) iframe = """ <iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe> """.format(code.replace('"', '&quot;')) display(HTML(iframe)) # Visualizing the network graph. Be sure expand the "mixed" nodes to see their # internal structure. We are going to visualize "Conv2D" nodes. tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1))) show_graph(tmp_def) ``` <a id='naive'></a> ## Naive feature visualization Let's start with a naive way of visualizing these. Image-space gradient ascent! ``` # Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity # to have non-zero gradients for features with negative initial activations. layer = 'mixed4d_3x3_bottleneck_pre_relu' channel = 139 # picking some feature channel to visualize # start with a gray image with a little noise img_noise = np.random.uniform(size=(224,224,3)) + 100.0 def showarray(a, fmt='jpeg'): a = np.uint8(np.clip(a, 0, 1)*255) f = BytesIO() PIL.Image.fromarray(a).save(f, fmt) display(Image(data=f.getvalue())) def visstd(a, s=0.1): '''Normalize the image range for visualization''' return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5 def T(layer): '''Helper for getting layer output tensor''' return graph.get_tensor_by_name("import/%s:0"%layer) def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0): t_score = tf.reduce_mean(t_obj) # defining the optimization objective t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation! 
img = img0.copy() for i in range(iter_n): g, score = sess.run([t_grad, t_score], {t_input:img}) # normalizing the gradient, so the same step size should work g /= g.std()+1e-8 # for different layers and networks img += g*step print(score, end = ' ') clear_output() showarray(visstd(img)) render_naive(T(layer)[:,:,:,channel]) ``` <a id="multiscale"></a> ## Multiscale image generation Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale. With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality. ``` def tffunc(*argtypes): '''Helper that transforms TF-graph generating function into a regular one. See "resize" function below. ''' placeholders = list(map(tf.placeholder, argtypes)) def wrap(f): out = f(*placeholders) def wrapper(*args, **kw): return out.eval(dict(zip(placeholders, args)), session=kw.get('session')) return wrapper return wrap # Helper function that uses TF to resize an image def resize(img, size): img = tf.expand_dims(img, 0) return tf.image.resize_bilinear(img, size)[0,:,:,:] resize = tffunc(np.float32, np.int32)(resize) def calc_grad_tiled(img, t_grad, tile_size=512): '''Compute the value of tensor t_grad over the image in a tiled way. Random shifts are applied to the image to blur tile boundaries over multiple iterations.''' sz = tile_size h, w = img.shape[:2] sx, sy = np.random.randint(sz, size=2) img_shift = np.roll(np.roll(img, sx, 1), sy, 0) grad = np.zeros_like(img) for y in range(0, max(h-sz//2, sz),sz): for x in range(0, max(w-sz//2, sz),sz): sub = img_shift[y:y+sz,x:x+sz] g = sess.run(t_grad, {t_input:sub}) grad[y:y+sz,x:x+sz] = g return np.roll(np.roll(grad, -sx, 1), -sy, 0) def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4): t_score = tf.reduce_mean(t_obj) # defining the optimization objective t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation! img = img0.copy() for octave in range(octave_n): if octave>0: hw = np.float32(img.shape[:2])*octave_scale img = resize(img, np.int32(hw)) for i in range(iter_n): g = calc_grad_tiled(img, t_grad) # normalizing the gradient, so the same step size should work g /= g.std()+1e-8 # for different layers and networks img += g*step print('.', end = ' ') clear_output() showarray(visstd(img)) render_multiscale(T(layer)[:,:,:,channel]) ``` <a id="laplacian"></a> ## Laplacian Pyramid Gradient Normalization This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? 
One way to achieve this is through the [Laplacian pyramid](https://en.wikipedia.org/wiki/Pyramid_%28image_processing%29#Laplacian_pyramid) decomposition. We call the resulting technique _Laplacian Pyramid Gradient Normailzation_. ``` k = np.float32([1,4,6,4,1]) k = np.outer(k, k) k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32) def lap_split(img): '''Split the image into lo and hi frequency components''' with tf.name_scope('split'): lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME') lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1]) hi = img-lo2 return lo, hi def lap_split_n(img, n): '''Build Laplacian pyramid with n splits''' levels = [] for i in range(n): img, hi = lap_split(img) levels.append(hi) levels.append(img) return levels[::-1] def lap_merge(levels): '''Merge Laplacian pyramid''' img = levels[0] for hi in levels[1:]: with tf.name_scope('merge'): img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi return img def normalize_std(img, eps=1e-10): '''Normalize image by making its standard deviation = 1.0''' with tf.name_scope('normalize'): std = tf.sqrt(tf.reduce_mean(tf.square(img))) return img/tf.maximum(std, eps) def lap_normalize(img, scale_n=4): '''Perform the Laplacian pyramid normalization.''' img = tf.expand_dims(img,0) tlevels = lap_split_n(img, scale_n) tlevels = list(map(normalize_std, tlevels)) out = lap_merge(tlevels) return out[0,:,:,:] # Showing the lap_normalize graph with TensorBoard lap_graph = tf.Graph() with lap_graph.as_default(): lap_in = tf.placeholder(np.float32, name='lap_in') lap_out = lap_normalize(lap_in) show_graph(lap_graph) def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4): t_score = tf.reduce_mean(t_obj) # defining the optimization objective t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation! # build the laplacian normalization graph lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n)) img = img0.copy() for octave in range(octave_n): if octave>0: hw = np.float32(img.shape[:2])*octave_scale img = resize(img, np.int32(hw)) for i in range(iter_n): g = calc_grad_tiled(img, t_grad) g = lap_norm_func(g) img += g*step print('.', end = ' ') clear_output() showarray(visfunc(img)) render_lapnorm(T(layer)[:,:,:,channel]) ``` <a id="playing"></a> ## Playing with feature visualizations We got a nice smooth image using only 10 iterations per octave. In case of running on GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate wide diversity of patterns. ``` render_lapnorm(T(layer)[:,:,:,65]) ``` Lower layers produce features of lower complexity. ``` render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101]) ``` There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern. ``` render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4) ``` <a id="deepdream"></a> ## DeepDream Now let's reproduce the [DeepDream algorithm](https://github.com/google/deepdream/blob/master/dream.ipynb) with TensorFlow. ``` def render_deepdream(t_obj, img0=img_noise, iter_n=10, step=1.5, octave_n=4, octave_scale=1.4): t_score = tf.reduce_mean(t_obj) # defining the optimization objective t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation! 
# split the image into a number of octaves img = img0 octaves = [] for i in range(octave_n-1): hw = img.shape[:2] lo = resize(img, np.int32(np.float32(hw)/octave_scale)) hi = img-resize(lo, hw) img = lo octaves.append(hi) # generate details octave by octave for octave in range(octave_n): if octave>0: hi = octaves[-octave] img = resize(img, hi.shape[:2])+hi for i in range(iter_n): g = calc_grad_tiled(img, t_grad) img += g*(step / (np.abs(g).mean()+1e-7)) print('.',end = ' ') clear_output() showarray(img/255.0) ``` Let's load some image and populate it with DogSlugs (in case you've missed them). ``` img0 = PIL.Image.open('pilatus800.jpg') img0 = np.float32(img0) showarray(img0/255.0) render_deepdream(tf.square(T('mixed4c')), img0) ``` Note that results can differ from the [Caffe](https://github.com/BVLC/caffe)'s implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset. Using an arbitrary optimization objective still works: ``` render_deepdream(T(layer)[:,:,:,139], img0) ``` Don't hesitate to use higher resolution inputs (also increase the number of octaves)! Here is an [example](http://storage.googleapis.com/deepdream/pilatus_flowers.jpg) of running the flower dream over the bigger image. We hope that the visualization tricks described here may be helpful for analyzing representations learned by neural networks or find their use in various artistic applications.
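Following the suggestion above about higher-resolution inputs with more octaves, here is a minimal sketch; the upscaled size and octave count are arbitrary choices for illustration, not values from the original notebook.

```
# Sketch: dream over an upscaled copy of the same test image with more octaves.
# PIL's resize takes a (width, height) tuple; the target size here is arbitrary.
big = np.float32(PIL.Image.open('pilatus800.jpg').resize((1600, 1200)))
render_deepdream(tf.square(T('mixed4c')), big, octave_n=6)
```

Because `render_deepdream` computes gradients tile by tile, memory use stays bounded even for the larger input.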
We will use this notebook to calculate and visualize statistics of our chess move dataset. This will allow us to better understand our limitations and help diagnose problems we may encounter down the road when training/defining our model. ``` import pdb import numpy as np import matplotlib.pyplot as plt %matplotlib inline def get_move_freqs(moves, sort=True): freq_dict = {} for move in moves: if move not in freq_dict: freq_dict[move] = 0 freq_dict[move] = freq_dict[move] + 1 tuples = [(w, c) for w, c in freq_dict.items()] if sort: tuples = sorted(tuples, key=lambda x: -x[1]) return (tuples, moves) def plot_frequency(counts, move_limit=1000): # limit to the n most frequent moves n = 1000 counts = counts[0:n] # from: http://stackoverflow.com/questions/30690619/python-histogram-using-matplotlib-on-top-words moves = [x[0] for x in counts] values = [int(x[1]) for x in counts] bar = plt.bar(range(len(moves)), values, color='green', alpha=0.4) plt.xlabel('Move Index') plt.ylabel('Frequency') plt.title('Move Frequency Chart') plt.show() def plot_uniq_over_count(moves, interval=0.01): xs, ys = [], [] for i in range(0, len(moves), int(len(moves) * interval)): chunk = moves[0:i] uniq = list(set(chunk)) xs.append(len(chunk)) ys.append(len(uniq)) plt.plot(xs, ys) plt.ticklabel_format(style='sci', axis='x', scilimits=(0, 0)) plt.xlabel('Moves') plt.ylabel('Unique Moves') plt.show() def plot_game_lengths(game_lengths): xs = [g[0] for g in game_lengths] ys = [g[1] for g in game_lengths] bar = plt.bar(xs, ys, color='blue', alpha=0.4) plt.xlabel('Half-moves per game') plt.ylabel('Frequency') plt.title('Game Length') plt.show() def plot_repeat_states(moves): uniq_states = {} moves_in_game = '' for move in moves: moves_in_game = moves_in_game + ' ' + move if moves_in_game not in uniq_states: uniq_states[moves_in_game] = 0 uniq_states[moves_in_game] = uniq_states[moves_in_game] + 1 if is_game_over_move(move): moves_in_game = '' vals = [] d = {} for state, count in sorted(uniq_states.items(), key=lambda x: (-x[1], x[0])): vals.append((count, state)) # move_count = len(state.split()) # if move_count not in d: # d[move_count] = 0 # d[move_count] = d[move_count] + 1 vals.append([c for c, s in vals]) plt.plot(vals) plt.xlim([0, 100]) plt.xlabel('Board State') plt.ylabel('Frequency') plt.title('Frequency of Board State') plt.show() # vals = [(length, count) for length, count in sorted(d.items(), key=lambda x: -x[0])] # pdb.set_trace() # plt.bar(vals) # plt.xlim([0, 1000]) # plt.xlabel('Moves in State') # plt.ylabel('Frequency') # print('{} uniq board states'.format(len(list(uniq_states.keys())))) def get_game_lengths(moves): game_lengths = {} total_games = 0 current_move = 1 for move in moves: if is_game_over_move(move): if current_move not in game_lengths: game_lengths[current_move] = 0 game_lengths[current_move] = game_lengths[current_move] + 1 current_move = 1 total_games = total_games + 1 else: current_move = current_move + 1 print(total_games) return [(k, v) for k, v in game_lengths.items()], total_games def is_game_over_move(move): return move in ('0-1', '1-0', '1/2-1/2') ``` Load our concatonated moves data. ``` with open('../data/train_moves.txt', 'r') as f: moves = f.read().split(' ') print('{} moves loaded'.format(len(moves))) counts, moves = get_move_freqs(moves) game_lengths, total_games = get_game_lengths(moves) # plot_repeat_states(moves) ``` ## Plot Move Frequency Here we can see which moves appear most frequently in the dataset. These moves are the most popular moves played by chess champions. 
```
plot_frequency(counts)
```

We list the most common moves along with the percentage of the entire moves dataset that each move represents.

```
top_n = 10
for w in counts[0:top_n]:
    print((w[0]).ljust(8), '{:.2f}%'.format((w[1]/len(moves)) * 100.00))
```

## Plot Unique Moves

Here we compare the number of unique moves against the total move count. Notice that the number of unique moves converges towards a constant as the total number of moves increases. This suggests that there is a subset of all possible moves that actually make sense for a chess champion to play.

```
plot_uniq_over_count(moves)
```

## Plot Game Lengths

```
plot_game_lengths(game_lengths)

top_n = 10
sorted_lengths = sorted(game_lengths, key=lambda x: -x[1])
for l in sorted_lengths[0:top_n]:
    print((str(l[0])).ljust(8), '{:.3f}%'.format((l[1]/total_games) * 100.00))
```
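As a numeric complement to the game-length plot, here is a small sketch (reusing the `game_lengths` tuples computed above) of the weighted mean and median number of half-moves per game:

```
# Expand the (length, count) tuples into one sample per game and summarize.
lengths = np.repeat([l for l, c in game_lengths], [c for l, c in game_lengths])
print('Mean half-moves per game:   {:.1f}'.format(lengths.mean()))
print('Median half-moves per game: {:.0f}'.format(np.median(lengths)))
```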
# Classification of Chest and Abdominal X-rays

Code Source: Lakhani, P., Gray, D.L., Pett, C.R. et al. J Digit Imaging (2018) 31: 283. https://doi.org/10.1007/s10278-018-0079-6

The code to download and prepare the dataset has been modified from the original source code.

```
# load requirements for the Keras library
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import Adam

!rm -rf /content/*

# Download dataset
!wget https://github.com/paras42/Hello_World_Deep_Learning/raw/9921a12c905c00a88898121d5dc538e3b524e520/Open_I_abd_vs_CXRs.zip
!ls /content

# unzip
!unzip /content/Open_I_abd_vs_CXRs.zip

# dimensions of our images
img_width, img_height = 299, 299

# directory and image information
train_data_dir = 'Open_I_abd_vs_CXRs/TRAIN/'
validation_data_dir = 'Open_I_abd_vs_CXRs/VAL/'

# epochs = number of passes through the training data
# batch_size = number of images processed at the same time
train_samples = 65
validation_samples = 10
epochs = 20
batch_size = 5

# build the Inception V3 network, using pretrained weights from ImageNet
# remove the top fully connected layers with include_top=False
base_model = applications.InceptionV3(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))

# build a classifier model to put on top of the convolutional model
# This consists of a global average pooling layer and a fully connected layer with 256 nodes
# Then apply dropout and a sigmoid activation
model_top = Sequential()
model_top.add(GlobalAveragePooling2D(input_shape=base_model.output_shape[1:], data_format=None))
model_top.add(Dense(256, activation='relu'))
model_top.add(Dropout(0.5))
model_top.add(Dense(1, activation='sigmoid'))
model = Model(inputs=base_model.input, outputs=model_top(base_model.output))

# Compile the model using the Adam optimizer with common values and binary cross-entropy loss
# Use a low learning rate (lr) for transfer learning
model.compile(optimizer=Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0),
              loss='binary_crossentropy', metrics=['accuracy'])

# Some on-the-fly augmentation options
train_datagen = ImageDataGenerator(
    rescale=1./255,          # Rescale pixel values to 0-1 to aid CNN processing
    shear_range=0.2,         # 0-1 range for shearing
    zoom_range=0.2,          # 0-1 range for zoom
    rotation_range=20,       # 0-180 range, degrees of rotation
    width_shift_range=0.2,   # 0-1 range horizontal translation
    height_shift_range=0.2,  # 0-1 range vertical translation
    horizontal_flip=True     # set True or False
)

val_datagen = ImageDataGenerator(
    rescale=1./255           # Rescale pixel values to 0-1 to aid CNN processing
)

# Directory, image size, and batch size were already specified above
# Class mode is set to 'binary' for a 2-class problem
# The generator randomly shuffles and presents images in batches to the network
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary'
)

validation_generator = val_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary'
)

# Fine-tune the pretrained Inception V3 model using the data generator
# Specify steps per epoch (number of samples / batch_size)
history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_samples//batch_size,
    epochs=epochs,
validation_data=validation_generator, validation_steps=validation_samples//batch_size ) # import matplotlib library, and plot training curve import matplotlib.pyplot as plt print(history.history.keys()) plt.figure() plt.plot(history.history['acc'],'orange', label='Training accuracy') plt.plot(history.history['val_acc'],'blue', label='Validation accuracy') plt.plot(history.history['loss'],'red', label='Training loss') plt.plot(history.history['val_loss'],'green', label='validation loss') plt.legend() plt.show() # import numpy and keras preprocessing libraries import numpy as np from keras.preprocessing import image # load, resize, and display test images img_path = 'Open_I_abd_vs_CXRs/TEST/abd2.png' img_path2 = 'Open_I_abd_vs_CXRs/TEST/chest2.png' img = image.load_img(img_path, target_size=(img_width, img_height)) img2 = image.load_img(img_path2, target_size=(img_width, img_height)) plt.imshow(img) plt.show() # convert image to numpy array, so Keras can render a prediction img = image.img_to_array(img) # expand array from 3 dimensions (height, width, channels) to 4 dimensions (batch size, height, width, channels) # rescale pixel values to 0-1 x = np.expand_dims(img, axis=0) * 1./255 # get prediction on test image score = model.predict(x) print('Predicted:', score, 'Chest X-ray' if score < 0.5 else 'Abd X-ray') # display and render a prediction for the 2nd image plt.imshow(img2) plt.show() img2 = image.img_to_array(img2) x = np.expand_dims(img2, axis=0) * 1./255 score = model.predict(x) print('Predicted:', score, 'Chest X-ray' if score < 0.5 else 'Abd X-ray') ```
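If the fine-tuned network should be kept for later use, it can be persisted with the standard Keras model-saving API; a minimal sketch (the file name is an arbitrary choice):

```
# Save the fine-tuned model (architecture + weights) so it can be reused
# without re-training; the file name is arbitrary.
model.save('inceptionv3_cxr_vs_abd.h5')

# In a fresh session it can be restored with:
# from keras.models import load_model
# model = load_model('inceptionv3_cxr_vs_abd.h5')
```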
<a href="https://colab.research.google.com/github/hadisotudeh/zestyAI_challenge/blob/main/Zesty_AI_Data_Scientist_Assignment_%7C_Hadi_Sotudeh.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <center> <h1><b>Zesty AI Data Science Interview Task - Hadi Sotudeh</b></h1> </center> To perform this task, I had access to the [`2009 RESIDENTIAL ENERGY CONSUMPTION SURVEY`](https://www.eia.gov/consumption/residential/data/2009/index.php?view=microdata) to predict `electricity consumption`. </br> </br> Libraries available in Python such as `scikit-learn` and `fastai` were employed to perform this machine learning regression task. </br> </br> First, I need to install the notebook dependencies, import the relevant libraries, download the dataset, and have them available in Google Colab (next cell). ## Install Dependencies, Import Libraries, and Download the dataset ``` %%capture # install dependencies !pip install fastai --upgrade # Import Libraries # general libraries import warnings import os from datetime import datetime from tqdm import tqdm_notebook as tqdm # machine learning libraries import pandas as pd import matplotlib import numpy as np import matplotlib.pyplot as plt from fastai.tabular.all import * from sklearn.ensemble import RandomForestRegressor from pandas_profiling import ProfileReport import joblib from xgboost import XGBRegressor from lightgbm import LGBMRegressor # model interpretation library from sklearn.inspection import plot_partial_dependence %%capture #download the dataset ! wget https://www.eia.gov/consumption/residential/data/2009/csv/recs2009_public.csv ``` ## Set Global parameters The electric consumption is located in the `KWH` field of the dataset. ``` #show plots inside the jupyter notebook %matplotlib inline # pandas settings to show more columns are rows in the jupyter notebook pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 50000) # don't show warnings warnings.filterwarnings('ignore') # dataset file path dataset = "recs2009_public.csv" # target variable to predict dep_var = "KWH" ``` ## Read the dataset from CSV files, Perform Data Cleaning, and Feature Engineering Following a typical machine learning project, I first clean up the dataset to prevent data-leakage related features or non-relevant features.</br></br>It is important to mention that I did not first look at each column to figure out which feature to keep or not. What I did first was to train a model and iteratively look at the feature importances and check their meanings in the dataset documentation to figure out what features to remove to prevent data leakage.</br></br>In addition, a group of features with high correlations were identified and only one of them in each group was kept. ``` # read the train file df = pd.read_csv(dataset) # remove data-leakage and non-relevant features non_essential_features = ["KWHSPH","KWHCOL","KWHWTH","KWHRFG","KWHOTH","BTUEL","BTUELSPH","BTUELCOL", "BTUELWTH","BTUELRFG","BTUELOTH","DOLLAREL","DOLELSPH","DOLELCOL","DOLELWTH", "DOLELRFG","DOLELOTH","TOTALBTUOTH","TOTALBTURFG","TOTALDOL","ELWATER", "TOTALBTUWTH","TOTALBTU","ELWARM","TOTALBTUCOL","TOTALDOLCOL", "REPORTABLE_DOMAIN","TOTALDOLWTH","TOTALBTUSPH","TOTCSQFT","TOTALDOLSPH", "BTUNG", "BTUNGSPH", "BTUNGWTH","BTUNGOTH","DOLLARNG","DOLNGSPH","DOLNGWTH","DOLNGOTH", "DIVISION" ] df.drop(columns = non_essential_features, inplace=True) # take the log of dependent variable ('price'). More details are in the training step. 
df[dep_var] = np.log(df[dep_var])
```

Next, I create train and validation sets with a random 80%/20% split of the dataset.

```
splits = RandomSplitter(valid_pct=0.2)(range_of(df))
procs = [Categorify, FillMissing]
cont, cat = cont_cat_split(df, 1, dep_var=dep_var)
to = TabularPandas(df, procs, cat, cont, y_names=dep_var, splits = splits)
```

The following cell shows 5 random instances of the dataset (after cleaning and feature engineering).

```
to.show(5)
```

## Train the ML Model

Since model interpretation is also important to me, I chose a random forest for prediction as well as for interpretation and knowledge discovery.

```
def rf(xs, y, n_estimators=40, max_features=0.5, min_samples_leaf=5, **kwargs):
    "random forest regressor"
    return RandomForestRegressor(n_jobs=-1, n_estimators=n_estimators, max_features=max_features,
                                 min_samples_leaf=min_samples_leaf, oob_score=True).fit(xs, y)

xs,y = to.train.xs,to.train.y
valid_xs,valid_y = to.valid.xs,to.valid.y

m = rf(xs, y)
```

The predictions are evaluated with the [Root-Mean-Squared-Error (RMSE) between the logarithm of the predicted value and the logarithm of the observed value](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/overview/evaluation) (taking logs means that errors in predicting high and low electricity consumption affect the result equally).

```
def r_mse(pred,y): return round(math.sqrt(((pred-y)**2).mean()), 6)
def m_rmse(m, xs, y): return r_mse(m.predict(xs), y)
```

Print the Root Mean Squared Error of the logarithmic `KWH` on the train set:

```
m_rmse(m, xs, y)
```

Print the Root Mean Squared Error of the logarithmic `KWH` on the validation set:

```
m_rmse(m, valid_xs, valid_y)
```

Calculate the feature importances, remove unimportant features, and re-train the model.

```
def rf_feat_importance(m, df):
    return pd.DataFrame({'cols':df.columns, 'imp':m.feature_importances_}).sort_values('imp', ascending=False)

# show the top 10 features
fi = rf_feat_importance(m, xs)
fi[:10]
```

Only keep features with an importance of more than 0.005 for re-training.

```
to_keep = fi[fi.imp>0.005].cols
print(f"features to keep are : {list(to_keep)}")
```

Some of the features to keep for re-training are:

1. `TOTALDOLOTH`: Total cost for appliances, electronics, lighting, and miscellaneous
2. `PELHOTWA`: Who pays for electricity used for water heating
3. `ACROOMS`: Number of rooms cooled
4. `TOTALDOLRFG`: Total cost for refrigerators, in whole dollars
5. `REGIONC`: Census Region
6. `TEMPNITEAC`: Temperature at night (summer)

```
xs_imp = xs[to_keep]
valid_xs_imp = valid_xs[to_keep]
m = rf(xs_imp, y)
```

Print the loss function of the re-trained model on the train and validation sets.

```
m_rmse(m, xs_imp, y), m_rmse(m, valid_xs_imp, valid_y)
```

Check the correlation among the final features and adjust the set of features to remove at the beginning of the code.

```
import scipy.stats
from scipy.cluster import hierarchy as hc

def cluster_columns(df, figsize=(10,6), font_size=12):
    corr = np.round(scipy.stats.spearmanr(df).correlation, 4)
    corr_condensed = hc.distance.squareform(1-corr)
    z = hc.linkage(corr_condensed, method='average')
    fig = plt.figure(figsize=figsize)
    hc.dendrogram(z, labels=df.columns, orientation='left', leaf_font_size=font_size)
    plt.show()

cluster_columns(xs_imp)
```

Store the re-trained model.
``` joblib.dump(m, 'model.joblib') ``` ## Interpret the Model and Do Knowledge Discovery When I plot the feature importances of the trained model, I can clearly see that `TOTALDOLOTH` (Total cost for appliances, electronics, lighting, and miscellaneous uses in whole dollars) is the most important factor for the model to make its decisions. ``` def plot_fi(fi): return fi.plot('cols', 'imp', 'barh', figsize=(12,7), legend=False) plot_fi(rf_feat_importance(m, xs_imp)); ``` In this section, I make use of the [Partial Dependence Plots](https://christophm.github.io/interpretable-ml-book/pdp.html) to interpret the learned function (ML model) and understand how this function makes decisions and predicts house prices for sale.</br></br>The 1D-feature plots show by changing one unit (increase or decrease) of the feature shown in the x-axis, how much the predicted dependent variable (`log KWH`) changes on average. ``` explore_cols = ['TOTALDOLOTH','TOTALDOLRFG','ACROOMS','TEMPHOMEAC','TEMPNITEAC','CDD30YR','CUFEETNGOTH','WASHLOAD','CUFEETNG'] explore_cols_vals = ["Total cost for appliances, electronics, lighting, and miscellaneous uses, in whole dollars", "Total cost for refrigerators, in whole dollars", "Number of rooms cooled", "Temperature when someone is home during the day (summer)", "Temperature at night (summer)", "Cooling degree days, 30-year average 1981-2010, base 65F", "Natural Gas usage for other purposes (all end-uses except SPH and WTH), in hundred cubic feet", "Frequency clothes washer used", "Total Natural Gas usage, in hundred cubic feet"] for index, col in enumerate(explore_cols): fig,ax = plt.subplots(figsize=(12, 4)) plot_partial_dependence(m, valid_xs_imp, [col], grid_resolution=20, ax=ax); x_label = explore_cols_vals[index] plt.xlabel(x_label) ``` The 2D-feature plots show by changing one unit (increase or decrease) of the features shown in the x and y axes, how much the dependent variable changes. </br> </br> Here, the plot shows how much the model (learned function) changes its `log KWH` prediction on average when the two dimensions on the x and y axes change. ``` paired_features = [("TEMPNITEAC","TEMPHOMEAC"),("CUFEETNG","CUFEETNGOTH")] paired_features_vals = [("Temperature at night (summer)","Temperature when someone is home during the day (summer)"), ("Total Natural Gas usage, in hundred cubic feet","Natural Gas usage for other purposes (all end-uses except SPH and WTH), in hundred cubic feet")] for index, pair in enumerate(paired_features): fig,ax = plt.subplots(figsize=(8, 8)) plot_partial_dependence(m, valid_xs_imp, [pair], grid_resolution=20, ax=ax); x_label = paired_features_vals[index][0] y_label = paired_features_vals[index][1] plt.xlabel(x_label) plt.ylabel(y_label) ``` ## THE END!
## Bengaluru House Price ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt pd.set_option("display.max_rows", None, "display.max_columns", None) df1=pd.read_csv("Dataset/Bengaluru_House_Data.csv") df1.head() ``` ### Data Cleaning ``` df1.info() df1.isnull().sum() df1.groupby('area_type')['area_type'].agg('count') df2=df1.drop(['area_type','availability','society','balcony'], axis='columns') df2.head() df2.isnull().sum() df2.shape df2['location'].fillna(df2['location'].mode().values[0],inplace=True) df2['size'].fillna(df2['size'].mode().values[0],inplace=True) df2['bath'].fillna(df2['bath'].mode().values[0],inplace=True) df2.isnull().sum() df2['size'].unique() df2['bhk']=df2['size'].apply(lambda x: int(x.split(' ')[0])) df2=df2.drop(['size'],axis='columns') df2.head() df2['bhk'].unique() df2['total_sqft'].unique() ``` ###### Dimension Reduction ``` def infloat(x): try: float(x) except: return False return True df2[~df2['total_sqft'].apply(infloat)].head(10) def convert(x): token=x.split('-') if(len(token)==2): return (float(token[0])+float(token[1]))/2 try: return float(x) except: return 1600 df2['total_sqft']=df2['total_sqft'].apply(convert) df2.head() df2.loc[410] df2.isnull().sum() df2['total_sqft'].agg('mean') df2['bath'].unique() df3=df2.copy() df3['price_per_sqft']=(df3['price']*100000/df3['total_sqft']).round(2) df3.head() df3.location.unique() stats=df3.groupby('location')['location'].agg('count').sort_values(ascending=False) stats location_stat_less_than_10=stats[stats<=10] location_stat_less_than_10 df3['location']=df3['location'].apply(lambda x:'others' if x in location_stat_less_than_10 else x) len(df3.location.unique()) df3.head(10) df3[df3['total_sqft']/df3['bhk']<300].head() df3.shape df4=df3[~(df3['total_sqft']/df3['bhk']<300)] df4.shape df4.price_per_sqft.describe() def remove(df): df_out = pd.DataFrame() for key, subdf in df.groupby('location'): m=np.mean(subdf.price_per_sqft) st=np.std(subdf.price_per_sqft) reduced_df=subdf[(subdf.price_per_sqft >(m-st)) & (subdf.price_per_sqft<=(m+st))] df_out = pd.concat([df_out, reduced_df],ignore_index=True) return df_out df5=remove(df4) df5.shape def draw(df,location): bhk2=df[ (df.location==location) & (df.bhk==2)] bhk3=df[ (df.location==location) & (df.bhk==3)] plt.rcParams['figure.figsize']=(15,10) plt.scatter(bhk2.total_sqft,bhk2.price,color='blue') plt.scatter(bhk3.total_sqft,bhk3.price,color='green',marker='+') draw(df5,'Rajaji Nagar') import matplotlib matplotlib.rcParams['figure.figsize']=(15,10) plt.hist(df5.price_per_sqft,rwidth=.8) df5.bath.unique() df5[df5.bath>df5.bhk+2] df6=df5[df5.bath<df5.bhk+2] df6.shape df6.head() df6=df6.drop(['price_per_sqft'],axis='columns') df6.head() dummies=pd.get_dummies(df6.location) dummies.head(3) dummies.shape df7=pd.concat([df6,dummies.drop('others',axis='columns')],axis='columns') df7.shape df7.head(3) df8=df7.drop('location',axis='columns') df8.head(3) df8.shape x=df8.drop('price',axis='columns') x.head(2) y=df8['price'] y.head() from sklearn.model_selection import train_test_split x_train,x_test,y_train,y_test=train_test_split(x,y, test_size=0.2,random_state=10) from sklearn.linear_model import LinearRegression lr=LinearRegression() lr.fit(x_train,y_train) y_pred=lr.predict(x_test) from sklearn.metrics import r2_score r2_score(y_pred,y_test) lr.score(x_test,y_test) from sklearn.model_selection import ShuffleSplit from sklearn.model_selection import cross_val_score cv=ShuffleSplit(n_splits=5, test_size=.2,random_state=0) 
cross_val_score(LinearRegression(),x,y,cv=cv) from sklearn.ensemble import RandomForestRegressor rfg=RandomForestRegressor(n_estimators=50) rfg.fit(x_train,y_train) r2_score(y_test,rfg.predict(x_test)) rfg.score(x_test,y_test) cross_val_score(RandomForestRegressor(),x,y,cv=cv) x.columns X=x def predict_price(location,sqft,bath,bhk): loc_index = np.where(X.columns==location)[0][0] x=np.zeros(len(X.columns)) x[0]=sqft x[1]=bath x[2]=bhk if loc_index>=0: x[loc_index]=1 return lr.predict([x])[0] predict_price('1st Phase JP Nagar',1000,4,5) predict_price('Indira Nagar',1000,2,2) import pickle with open('banglore_home_price_model.pickle','wb') as f: pickle.dump(lr,f) import json columns={ 'data_columns' : [col.lower() for col in X.columns] } with open("columns.json","w") as f: f.write(json.dumps(columns)) ```
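To show how the two exported artifacts fit together, here is a sketch that reloads them and predicts the same way `predict_price` does; the helper name `predict_from_artifacts` is hypothetical and not part of the original notebook.

```
# Reload the pickled model and the saved column order.
with open('banglore_home_price_model.pickle', 'rb') as f:
    loaded_model = pickle.load(f)
with open('columns.json', 'r') as f:
    data_columns = json.loads(f.read())['data_columns']

def predict_from_artifacts(location, sqft, bath, bhk):
    # Build the feature vector in the saved column order (one-hot for location).
    x = np.zeros(len(data_columns))
    x[0], x[1], x[2] = sqft, bath, bhk
    if location.lower() in data_columns:
        x[data_columns.index(location.lower())] = 1
    return loaded_model.predict([x])[0]

predict_from_artifacts('1st Phase JP Nagar', 1000, 2, 2)
```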
# Lecture 7. Sparse matrices and direct methods for solving large sparse systems

## Plan for today's lecture

- Dense unstructured matrices and distributed storage
- Sparse matrices and their storage formats
- Fast implementation of sparse matrix-vector multiplication
- Gaussian elimination for sparse matrices: orderings
- Fill-in and graphs: separators
- Graph Laplacian

## Large dense matrices

- If a matrix is very large, it does not fit into memory
- Possible ways to work with such matrices:
  - If the matrix is **structured**, e.g. block Toeplitz with Toeplitz blocks (covered in later lectures), compressed storage is possible
  - For unstructured matrices, **distributed memory** helps
- MPI for processing matrices stored in a distributed way

### Distributed memory and MPI

- Split the matrix into blocks and store them on different machines
- Each machine has its own address space and cannot corrupt data on other machines
- The machines then pass data to each other to aggregate the result of the computation
- [MPI (Message Passing Interface)](https://en.wikipedia.org/wiki/Message_Passing_Interface) is the standard for distributed-memory parallel computing

### Example: matrix-vector multiplication

- Suppose you want to compute the product $Ax$ and the matrix $A$ does not fit into memory
- In this case you can split the matrix into blocks and place them on different machines
- Possible strategies:
  - 1D block partitioning uses only rows
  - 2D block partitioning uses both rows and columns

#### Example of 1D block partitioning

<img src="./1d_block.jpg">

#### Total time of matrix-vector multiplication with 1D block partitioning

- Each machine stores $n / p$ full rows and $n / p$ elements of the vector $x$
- The total number of operations is $n^2 / p$
- The total time for sending and writing data is $t_s \log p + t_w n$, where $t_s$ is the time unit for sending and $t_w$ is the time unit for writing

#### Example of 2D block partitioning

<img src="./2d_block.png" width=400>

#### Total time of matrix-vector multiplication with 2D block partitioning

- Each machine stores a block of size $n / \sqrt{p}$ and $n / \sqrt{p}$ elements of the vector
- The total number of operations is $n^2 / p$
- The total time for sending and writing data is approximately $t_s \log p + t_w (n/\sqrt{p}) \log p$, where $t_s$ is the time unit for sending and $t_w$ is the time unit for writing

### Packages supporting distributed data storage

- [ScaLAPACK](http://www.netlib.org/scalapack/)
- [Trilinos](https://trilinos.org/)

In Python you can use [mpi4py](https://mpi4py.readthedocs.io/en/stable/) for parallel implementations of your algorithms.

- PyTorch supports distributed training and data storage, see details [here](https://pytorch.org/tutorials/intermediate/dist_tuto.html)

### Summary on working with large dense unstructured matrices

- Distributed storage of matrices
- MPI
- Packages that use block computations
- Different approaches to block computations

## Sparse matrices

- The limiting factor in solving linear algebra problems with dense matrices is the memory needed to store them: $N^2$ elements.
- Sparse matrices, in which most elements are zero, can at least be stored in memory.
- The main questions: can we solve the following problems for sparse matrices?
  - solving linear systems
  - computing eigenvalues and eigenvectors
  - computing matrix functions

## Applications of sparse matrices

Sparse matrices arise in the following areas:

- mathematical modelling and the solution of partial differential equations
- graph processing, e.g. social network analysis
- recommender systems
- in general, wherever the relations between objects are "sparse".

### Sparse matrices help in computational graph theory

- Graphs are represented by adjacency matrices, which are most often sparse
- Numerical solution of graph-theoretic problems reduces to operations with these sparse matrices
  - Graph clustering and community detection
  - Ranking
  - Random walks
  - And others...
- Example: probably the largest available hyperlink graph contains 3.5 billion web pages and 128 billion hyperlinks; see more details [here](http://webdatacommons.org/hyperlinkgraph/)
- Various medium-sized graphs for testing your algorithms are available in the [Stanford Large Network Dataset Collection](https://snap.stanford.edu/data/)

### Florida sparse matrix collection

- A large number of sparse matrices from various applications can be found in the [Florida sparse matrix collection](http://www.cise.ufl.edu/research/sparse/matrices/).

```
from IPython.display import IFrame
IFrame('http://yifanhu.net/GALLERY/GRAPHS/search.html', 500, 500)
```

### Sparse matrices and deep learning

- DNNs have a very large number of parameters
- Some of them may be redundant
- How can we reduce the number of parameters without a serious loss of accuracy?
- The [sparse variational dropout method](https://github.com/ars-ashuha/variational-dropout-sparsifies-dnn) gives substantially sparse filters in DNNs with almost no loss of accuracy!

## Constructing sparse matrices

- We can generate sparse matrices with the **scipy.sparse** package
- Matrices of very large size can be defined

Useful functions when creating sparse matrices:

- ```spdiags``` creates a diagonal matrix with given diagonals
- ```kron``` computes the Kronecker product (definition below) of sparse matrices
- arithmetic operations for sparse matrices are also overloaded

### Kronecker product

For matrices $A\in\mathbb{R}^{n\times m}$ and $B\in\mathbb{R}^{l\times k}$ the Kronecker product is defined as the block matrix

$$ A\otimes B = \begin{bmatrix}a_{11}B & \dots & a_{1m}B \\ \vdots & \ddots & \vdots \\ a_{n1}B & \dots & a_{nm}B\end{bmatrix}\in\mathbb{R}^{nl\times mk}. $$

Main properties:

- bilinearity
- $(A\otimes B) (C\otimes D) = AC \otimes BD$
- Let $\mathrm{vec}(X)$ be the column-wise vectorization operator of a matrix. Then $\mathrm{vec}(AXB) = (B^T \otimes A) \mathrm{vec}(X).$

```
import numpy as np
import scipy as sp
import scipy.sparse
from scipy.sparse import csc_matrix, csr_matrix
import matplotlib.pyplot as plt
import scipy.linalg
import scipy.sparse.linalg
%matplotlib inline
n = 5
ex = np.ones(n);
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csc_matrix(A)
plt.spy(A, aspect='equal', marker='.', markersize=5)
```

### Sparsity pattern

- The ```spy``` command draws the sparsity pattern of a given matrix: pixel $(i, j)$ is shown in the figure if the corresponding matrix element is nonzero.
- The sparsity pattern is very important for understanding the complexity of sparse linear algebra algorithms.
- Often the sparsity pattern is enough to analyze how "hard" it will be to work with a given matrix.

### Definition of sparse matrices

- Sparse matrices are matrices in which the number of nonzero elements is much smaller than the total number of elements.
- Because of this, you can perform basic linear algebra operations (first of all, solving linear systems) much faster than with dense matrices.

## What we need in order to see how this works

- **Question 1:** How do we store sparse matrices in memory?
- **Question 2:** How do we multiply a sparse matrix by a vector fast?
- **Question 3:** How do we solve linear systems with sparse matrices fast?

### Storage of sparse matrices

There are many storage formats for sparse matrices; the most important are:

- COO (coordinate format)
- LIL (list of lists)
- CSR (compressed sparse row)
- CSC (compressed sparse column)
- block variants

In ```scipy``` there are constructors for each of these formats, e.g. ```scipy.sparse.lil_matrix(A)```.

#### Coordinate format (COO)

- The simplest storage format for a sparse matrix is the coordinate format.
- In this format a sparse matrix is a set of indices and the values at these indices.

```python
i, j, val
```

where ```i, j``` are arrays of indices and ```val``` is the array of matrix elements. <br>

- Thus, we need to store $3\cdot$**nnz** elements, where **nnz** denotes the number of nonzero elements in the matrix.

**Q:** What is good and what is bad about this format?

#### Main drawbacks

- It is not optimal in terms of memory (why?)
- It is not optimal for matrix-vector multiplication (why?)
- It is not optimal for deleting an element (why?)

The first two drawbacks are fixed by the CSR format.

**Q**: which format fixes the third drawback?

#### Compressed sparse row (CSR)

In the CSR format the matrix is also stored with three arrays, but different ones:

```python
ia, ja, sa
```

where:

- **ia** (row starts) is an integer array of length $n+1$
- **ja** (column indices) is an integer array of length **nnz**
- **sa** (matrix elements) is a real array of length **nnz**

<img src="https://www.karlrupp.net/wp-content/uploads/2016/02/csr_storage_sparse_marix.png" width=60% />

So, in total we need to store $2\cdot{\bf nnz} + n+1$ elements.

### Sparse matrices in PyTorch and TensorFlow

- PyTorch supports sparse matrices in the COO format
- Gradient computation for operations with such matrices is only partially supported; see the list and discussion [here](https://github.com/pytorch/pytorch/issues/9674)
- TensorFlow also supports sparse matrices in the COO format
- The list of supported operations is given [here](https://www.tensorflow.org/api_docs/python/tf/sparse), and gradient support is limited as well

### The CSR format makes sparse matrix-vector multiplication (SpMV) fast

```python
for i in range(n):
    for k in range(ia[i], ia[i+1]):
        y[i] += sa[k] * x[ja[k]]
```

```
import numpy as np
import scipy as sp
import scipy.sparse
import scipy.sparse.linalg
from scipy.sparse import csc_matrix, csr_matrix, coo_matrix
import matplotlib.pyplot as plt
%matplotlib inline
n = 1000
ex = np.ones(n);
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csr_matrix(A)
rhs = np.ones(n * n)
B = coo_matrix(A)
%timeit A.dot(rhs)
%timeit B.dot(rhs)
```

We can see that **CSR** is faster, and the less structured the sparsity pattern is, the larger the speed-up.
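To make the CSR pseudocode above concrete, here is a small sketch of the same loop written against the `indptr`/`indices`/`data` arrays that `scipy` exposes, verified against `A.dot` on a small random matrix (pure Python, so it is meant for illustration only, not speed):

```
import numpy as np
import scipy.sparse

A_small = scipy.sparse.random(200, 200, density=0.05, format='csr')
x_small = np.random.randn(200)

def csr_matvec(mat, vec):
    y = np.zeros(mat.shape[0])
    for i in range(mat.shape[0]):
        # mat.indptr plays the role of ia, mat.indices of ja, mat.data of sa
        for k in range(mat.indptr[i], mat.indptr[i + 1]):
            y[i] += mat.data[k] * vec[mat.indices[k]]
    return y

print(np.allclose(csr_matvec(A_small, x_small), A_small.dot(x_small)))
```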
### Sparse matrices and efficiency

- Using sparse matrices reduces complexity
- But they are not well suited for parallel/GPU implementations
- They do not reach peak efficiency because of random data access.
- Usually, a peak performance of about $10\%-15\%$ is considered good.

### Recall how the efficiency of operations is measured

- The standard way to measure the efficiency of linear algebra operations is **flops** (the number of floating point operations per second)
- Let us measure the efficiency of matrix-vector multiplication for a dense and a sparse matrix

```
import numpy as np
import time
n = 4000
a = np.random.randn(n, n)
v = np.random.randn(n)
t = time.time()
np.dot(a, v)
t = time.time() - t
print('Time: {0: 3.1e}, Efficiency: {1: 3.1e} Gflops'.\
      format(t, ((2 * n ** 2)/t) / 10 ** 9))

n = 4000
ex = np.ones(n);
a = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
rhs = np.random.randn(n)
t = time.time()
a.dot(rhs)
t = time.time() - t
print('Time: {0: 3.1e}, Efficiency: {1: 3.1e} Gflops'.\
      format(t, (3 * n) / t / 10 ** 9))
```

### Random data access and cache misses

- Initially, all elements of the matrix and the vector are stored in main memory (RAM – Random Access Memory)
- If you want to compute a matrix-vector product, some of the matrix and vector elements are moved to the cache (a small amount of fast memory); see the [lecture on the Strassen algorithm and matrix multiplication](https://github.com/amkatrutsa/nla2020_ozon/blob/master/lectures/lecture4/lecture4.ipynb)
- After that, the CPU takes data from the cache, processes it, and returns the result back to the cache
- If the CPU needs data that is not yet in the cache, this is called a cache miss
- When a cache miss happens, the required data is moved from main memory to the cache

**Q**: what if there is no free space in the cache?

- The more cache misses, the slower the computation

### Cache layout and LRU

<img src="./cache_scheme.png" width="500">

#### Matrix-vector multiplication in the CSR format

```python
for i in range(n):
    for k in range(ia[i], ia[i+1]):
        y[i] += sa[k] * x[ja[k]]
```

- Which part of these operations leads to cache misses?
- How can this problem be solved?
### Reordering reduces the number of cache misses

- If ```ja``` stores elements consecutively, they can be moved to the cache together and the number of cache misses decreases
- This happens when the sparse matrix is **banded**, or at least block diagonal
- We can turn a given sparse matrix into a banded or block-diagonal one with *permutations*
  - Let $P$ be a row permutation matrix and $Q$ a column permutation matrix
  - $A_1 = PAQ$ is a matrix with a smaller bandwidth than $A$
  - $y = Ax \to \tilde{y} = A_1 \tilde{x}$, where $\tilde{x} = Q^{\top}x$ and $\tilde{y} = Py$
- The [separated block diagonal form](http://albert-jan.yzelman.net/PDFs/yzelman09-rev.pdf) is designed to minimize the number of cache misses
- It can also be extended to the two-dimensional case, where not only rows but also columns are separated

#### Example – SBD in the one-dimensional case

<img src="./sbd.png" width="400">

## Methods for solving linear systems with sparse matrices

- Direct methods
  - LU decomposition
  - Various reordering methods that minimize the fill-in of the factors
- Krylov methods

```
n = 10
ex = np.ones(n);
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csr_matrix(A)
rhs = np.ones(n * n)
sol = sp.sparse.linalg.spsolve(A, rhs)
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot(sol)
ax1.set_title('Not reshaped solution')
ax2.contourf(sol.reshape((n, n), order='f'))
ax2.set_title('Reshaped solution')
```

## LU decomposition of a sparse matrix

- Why can a sparse linear system be solved faster than a dense one? With which method?
- In the LU decomposition of a matrix $A$, the factors $L$ and $U$ can also be sparse:

$$A = L U$$

- And solving a linear system with a sparse triangular matrix can be done very fast.

<font color='red'> Note that the inverse of a sparse matrix is NOT sparse! </font>

```
n = 7
ex = np.ones(n);
a = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
b = np.array(np.linalg.inv(a.toarray()))
print(a.toarray())
print(b)
```

## But the factors...

- $L$ and $U$ are usually sparse
- For a tridiagonal matrix they are even bidiagonal!

```
from scipy.sparse.linalg import splu
T = splu(a.tocsc(), permc_spec="NATURAL")
plt.spy(T.L)
```

Note that ```splu``` with the default value of the ```permc_spec``` parameter gives a permutation that does not produce bidiagonal factors:

```
from scipy.sparse.linalg import splu
T = splu(a.tocsc())
plt.spy(T.L)
print(T.perm_c)
```

## Two-dimensional case

In the two-dimensional case things are much worse:

```
n = 20
ex = np.ones(n);
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csc_matrix(A)
T = scipy.sparse.linalg.spilu(A)
plt.spy(T.L, marker='.', color='k', markersize=8)
```

For the right permutation, in the two-dimensional case the number of nonzeros in $L$ grows as $\mathcal{O}(N \log N)$. The complexity, however, is $\mathcal{O}(N^{3/2})$.

## Sparse matrices and graph theory

- The number of nonzeros in the LU factors is closely related to graph theory.
- The ``networkx`` package can be used to visualize graphs given only their adjacency matrix.
```
import networkx as nx
n = 10
ex = np.ones(n);
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csc_matrix(A)
G = nx.Graph(A)
nx.draw(G, pos=nx.spectral_layout(G), node_size=10)
```

## Fill-in

- The fill-in of a matrix consists of the elements that were **zeros** but became **nonzeros** during the execution of the algorithm.
- The fill-in can be different for different permutations.

So, before factorizing the matrix we need to reorder its elements so that the fill-in of the factors is as small as possible.

**Example**

$$A = \begin{bmatrix} * & * & * & * & *\\ * & * & 0 & 0 & 0 \\ * & 0 & * & 0 & 0 \\ * & 0 & 0& * & 0 \\ * & 0 & 0& 0 & * \end{bmatrix} $$

- If we eliminate the elements from top to bottom, we get a dense matrix.
- However, we can preserve sparsity if the elimination is carried out from bottom to top.
- Details are on the next slides.

## Gaussian elimination for sparse matrices

- Given a matrix $A$ such that $A=A^*>0$.
- Compute its Cholesky decomposition $A = LL^*$.

The factor $L$ can be dense even if $A$ is sparse:

$$ \begin{bmatrix} * & * & * & * \\ * & * & & \\ * & & * & \\ * & & & * \end{bmatrix} = \begin{bmatrix} * & & & \\ * & * & & \\ * & * & * & \\ * & * & * & * \end{bmatrix} \begin{bmatrix} * & * & * & * \\ & * & * & * \\ & & * & * \\ & & & * \end{bmatrix} $$

**Q**: how do we make the factors sparse, i.e. minimize the fill-in?

## Gaussian elimination and permutation

- We need to find a permutation of indices such that the factors are sparse, i.e. we compute the Cholesky decomposition of the matrix $PAP^\top$, where $P$ is a permutation matrix.
- For the example from the previous slide

$$ P \begin{bmatrix} * & * & * & * \\ * & * & & \\ * & & * & \\ * & & & * \end{bmatrix} P^\top = \begin{bmatrix} * & & & * \\ & * & & * \\ & & * & * \\ * & * & * & * \end{bmatrix} = \begin{bmatrix} * & & & \\ & * & & \\ & & * & \\ * & * & * & * \end{bmatrix} \begin{bmatrix} * & & & * \\ & * & & * \\ & & * & * \\ & & & * \end{bmatrix} $$

where

$$ P = \begin{bmatrix} & & & 1 \\ & & 1 & \\ & 1 & & \\ 1 & & & \end{bmatrix} $$

- This form of the matrix gives sparse factors in the LU decomposition

```
import numpy as np
import scipy.sparse as spsp
import scipy.sparse.linalg as spsplin
import scipy.linalg as splin
import matplotlib.pyplot as plt
%matplotlib inline
A = spsp.coo_matrix((np.random.randn(10), ([0, 0, 0, 0, 1, 1, 2, 2, 3, 3], [0, 1, 2, 3, 0, 1, 0, 2, 0, 3])))
print("Original matrix")
plt.spy(A)
plt.show()
lu = spsplin.splu(A.tocsc(), permc_spec="NATURAL")
print("L factor")
plt.spy(lu.L)
plt.show()
print("U factor")
plt.spy(lu.U)
plt.show()
print("Column permutation:", lu.perm_c)
print("Row permutation:", lu.perm_r)
```

### Block case

$$ PAP^\top = \begin{bmatrix} A_{11} & & A_{13} \\ & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33}\end{bmatrix} $$

then

$$ PAP^\top = \begin{bmatrix} A_{11} & 0 & 0 \\ 0 & A_{22} & 0 \\ A_{31} & A_{32} & A_{33} - A_{31}A_{11}^{-1} A_{13} - A_{32}A_{22}^{-1}A_{23} \end{bmatrix} \begin{bmatrix} I & 0 & A_{11}^{-1}A_{13} \\ 0 & I & A_{22}^{-1}A_{23} \\ 0 & 0 & I\end{bmatrix} $$

- The block $ A_{33} - A_{31}A_{11}^{-1} A_{13} - A_{32}A_{22}^{-1}A_{23}$ is the Schur complement of the block-diagonal matrix $\begin{bmatrix} A_{11} & 0 \\ 0 & A_{22} \end{bmatrix}$
- We have reduced the problem to solving smaller linear systems with the matrices $A_{11}$ and $A_{22}$

### How do we find the permutation?
- The main idea comes from graph theory
- A sparse matrix can be viewed as the **adjacency matrix** of a graph: vertices $i$ and $j$ are connected by an edge if the corresponding matrix element is nonzero.

### Example

The graphs for the matrix $\begin{bmatrix} * & * & * & * \\ * & * & & \\ * & & * & \\ * & & & * \end{bmatrix}$ and for the matrix $\begin{bmatrix} * & & & * \\ & * & & * \\ & & * & * \\ * & * & * & * \end{bmatrix}$ look as follows:

<img src="./graph_dense.png" width=300 align="center"> and <img src="./graph_sparse.png" width=300 align="center">

* Why is the second ordering better than the first one?

### Graph separator

**Definition.** A separator of a graph $G$ is a set of vertices $S$ whose removal leaves at least two connected components.

A separator $S$ gives the following way of numbering the vertices of the graph $G$:

- Find a separator $S$ whose removal leaves connected components $T_1$, $T_2$, $\ldots$, $T_k$
- Number the vertices in $S$ from $N − |S| + 1$ to $N$
- Recursively, number the vertices in each component:
  - in $T_1$ from $1$ to $|T_1|$
  - in $T_2$ from $|T_1| + 1$ to $|T_1| + |T_2|$
  - and so on
  - If a component is small enough, the numbering inside it is arbitrary

### Separator and matrix structure: example

A separator for the two-dimensional Laplacian matrix

$$ A_{2D} = I \otimes A_{1D} + A_{1D} \otimes I, \quad A_{1D} = \mathrm{tridiag}(-1, 2, -1), $$

looks as follows

<img src='./separator.png' width=300> </img>

If we first number the indices in $\alpha$, then in $\beta$, and finally the indices in the separator $\sigma$, we obtain the matrix

$$ PAP^\top = \begin{bmatrix} A_{\alpha\alpha} & & A_{\alpha\sigma} \\ & A_{\beta\beta} & A_{\beta\sigma} \\ A_{\sigma\alpha} & A_{\sigma\beta} & A_{\sigma\sigma}\end{bmatrix}, $$

which has the desired structure.

- Thus, the problem of finding a permutation has been reduced to the problem of finding a graph separator!

### Nested dissection

- For the blocks $A_{\alpha\alpha}$, $A_{\beta\beta}$ we can continue the dissection recursively
- Once the recursion is finished, the blocks $A_{\sigma\alpha}$ and $A_{\sigma\beta}$ have to be eliminated.
- This makes the block in the position of $A_{\sigma\sigma}\in\mathbb{R}^{n\times n}$ **dense**.
- Computing the Cholesky decomposition of this block costs $\mathcal{O}(n^3) = \mathcal{O}(N^{3/2})$, where $N = n^2$ is the total number of vertices.
- So the overall complexity is $\mathcal{O}(N^{3/2})$

## Packages for nested dissection

- MUltifrontal Massively Parallel sparse direct Solver ([MUMPS](http://mumps.enseeiht.fr/))
- [Pardiso](https://www.pardiso-project.org/)
- [Umfpack, part of the SuiteSparse package](http://faculty.cse.tamu.edu/davis/suitesparse.html)

They have interfaces for C/C++, Fortran, and Matlab.

### Summary on nested dissection

- The numbering problem is reduced to finding a separator
- A divide-and-conquer approach
- It continues recursively on two (or more) subsets of vertices after each dissection
- In theory, nested dissection gives optimal complexity (why?)
## Separators in practice

- Computing a separator is a **nontrivial problem!**
- Designing graph partitioning methods has been an active research area for many years

Existing approaches:

- Spectral partitioning (uses the eigenvectors of the **graph Laplacian**): details below
- Geometric partitioning (for meshes with given vertex coordinates): [overview and analysis](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.31.4886&rep=rep1&type=pdf)
- Iterative swapping ([Kernighan-Lin, 1970](http://xilinx.asia/_hdl/4/eda.ee.ucla.edu/EE201A-04Spring/kl.pdf), [Fiduccia-Mattheyses, 1982](https://dl.acm.org/citation.cfm?id=809204))
- Breadth-first search ([Lipton, Tarjan 1979](http://www.cs.princeton.edu/courses/archive/fall06/cos528/handouts/sepplanar.pdf))
- Multilevel recursive bisection (the most practical heuristic) ([overview](https://people.csail.mit.edu/jshun/6886-s18/lectures/lecture13-1.pdf) and [paper](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.499.4130&rep=rep1&type=pdf)). The package for this kind of partitioning is called METIS; it is written in C and available [here](http://glaros.dtc.umn.edu/gkhome/views/metis)

## Spectral graph partitioning

- The idea of spectral partitioning goes back to the work of Miroslav Fiedler, who studied the connectivity of graphs ([paper](https://dml.cz/bitstream/handle/10338.dmlcz/101168/CzechMathJ_23-1973-2_11.pdf)).
- We want to split the vertices of the graph into 2 sets.
- Consider +1/-1 labels of the vertices and the **loss function**

$$E_c(x) = \sum_{j} \sum_{i \in N(j)} (x_i - x_j)^2, \quad N(j) \text{ denotes the set of neighbours of vertex } j. $$

We want a balanced partition, therefore

$$\sum_i x_i = 0 \quad \Longleftrightarrow \quad x^\top e = 0, \quad e = \begin{bmatrix}1 & \dots & 1\end{bmatrix}^\top,$$

and since the labels are +1/-1, we also have

$$\sum_i x^2_i = n \quad \Longleftrightarrow \quad \|x\|_2^2 = n.$$

## Graph Laplacian

The loss function $E_c$ can be written as (check why)

$$E_c = (Lx, x)$$

where $L$ is the **graph Laplacian**, a symmetric matrix defined by

$$L_{ii} = \mbox{degree of vertex } i,$$

$$L_{ij} = -1, \quad \mbox{if } i \ne j \mbox{ and there is an edge},$$

and $0$ otherwise.

- The row sums of $L$ are zero, so $0$ is an eigenvalue with the all-ones eigenvector.
- The eigenvalues are nonnegative (why?).

## Partitioning as an optimization problem

- Minimizing $E_c$ under the constraints above leads to a partition that minimizes the number of edges cut between the two parts (and hence gives a small separator) while keeping the partition balanced.
- We now relax this integer quadratic programming problem to a continuous quadratic programming problem

$$E_c(x) = (Lx, x)\to \min_{\substack{x^\top e =0, \\ \|x\|_2^2 = n}}$$

## Fiedler vector

- The solution of this minimization problem is the eigenvector of $L$ corresponding to the **second** smallest eigenvalue (it is called the Fiedler vector)
- Indeed,

$$ \min_{\substack{x^\top e =0, \\ \|x\|_2^2 = n}} (Lx, x) = n \cdot \min_{{x^\top e =0}} \frac{(Lx, x)}{(x, x)} = n \cdot \min_{{x^\top e =0}} R(x), \quad R(x) \text{ is the Rayleigh quotient} $$

- Since $e$ is the eigenvector corresponding to the smallest eigenvalue, on the subspace $x^\top e = 0$ we obtain the second smallest eigenvalue.
- The sign of $x_i$ determines which part of the graph vertex $i$ belongs to.
- It remains to understand how to compute this vector.
- We know the power method, but it finds the eigenvector corresponding to the eigenvalue of largest absolute value.
- Iterative methods for eigenvalue problems will be discussed later in the course...

```
import numpy as np
import scipy.sparse.linalg as spsplin
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx

kn = nx.read_gml('karate.gml')
print("Number of vertices = {}".format(kn.number_of_nodes()))
print("Number of edges = {}".format(kn.number_of_edges()))
nx.draw_networkx(kn, node_color="red")  # Draw the graph

Laplacian = nx.laplacian_matrix(kn).asfptype()
plt.spy(Laplacian, markersize=5)
plt.title("Graph laplacian")
plt.axis("off")
plt.show()

eigval, eigvec = spsplin.eigsh(Laplacian, k=2, which="SM")
print("The 2 smallest eigenvalues =", eigval)

plt.scatter(np.arange(len(eigvec[:, 1])), np.sign(eigvec[:, 1]))
plt.show()

print("Sum of elements in Fiedler vector = {}".format(np.sum(eigvec[:, 1].real)))
nx.draw_networkx(kn, node_color=np.sign(eigvec[:, 1]))
```

### Summary of the spectral partitioning example

- We called a SciPy function that finds a given number of smallest eigenvalues and the corresponding eigenvectors (other options are possible)
- The details of the methods implemented in these functions will be discussed soon
- The Fiedler vector gives a simple way of partitioning a graph
- To split a graph into more parts, use several eigenvectors of the Laplacian as feature vectors and run a clustering algorithm on them, e.g. $k$-means

### Fiedler vector and the algebraic connectivity of a graph

**Definition.** The algebraic connectivity of a graph is the second smallest eigenvalue of its graph Laplacian.

**Claim.** The algebraic connectivity of a graph is greater than 0 if and only if the graph is connected.

## Minimal degree orderings

- The idea is to eliminate the rows and/or columns with the fewest nonzeros first, update the fill-in, and repeat.
- An efficient implementation is a problem in its own right (elements have to be added and removed).
- In practice it is often the best choice for medium-size problems.
- SciPy [uses](https://docs.scipy.org/doc/scipy-1.3.0/reference/generated/scipy.sparse.linalg.splu.html) this approach applied to different matrices ($A^{\top}A$, $A + A^{\top}$)

## Key points of today's lecture

- Large dense matrices and distributed computations
- Sparse matrices: applications and storage formats
- Efficient sparse matrix-by-vector multiplication
- LU decomposition of sparse matrices: fill-in and row permutations
- Fill-in minimization: separators and graph partitioning
- Nested dissection
- Spectral graph partitioning: the graph Laplacian and the Fiedler vector

```
from IPython.core.display import HTML
def css_styling():
    styles = open("./styles/custom.css", "r").read()
    return HTML(styles)
css_styling()
```
# quant-econ Solutions: Modeling Career Choice Solutions for http://quant-econ.net/py/career.html ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt from quantecon import DiscreteRV, compute_fixed_point from career import CareerWorkerProblem ``` ## Exercise 1 Simulate job / career paths. In reading the code, recall that `optimal_policy[i, j]` = policy at $(\theta_i, \epsilon_j)$ = either 1, 2 or 3; meaning 'stay put', 'new job' and 'new life'. ``` wp = CareerWorkerProblem() v_init = np.ones((wp.N, wp.N))*100 v = compute_fixed_point(wp.bellman_operator, v_init, verbose=False) optimal_policy = wp.get_greedy(v) F = DiscreteRV(wp.F_probs) G = DiscreteRV(wp.G_probs) def gen_path(T=20): i = j = 0 theta_index = [] epsilon_index = [] for t in range(T): if optimal_policy[i, j] == 1: # Stay put pass elif optimal_policy[i, j] == 2: # New job j = int(G.draw()) else: # New life i, j = int(F.draw()), int(G.draw()) theta_index.append(i) epsilon_index.append(j) return wp.theta[theta_index], wp.epsilon[epsilon_index] theta_path, epsilon_path = gen_path() fig, axes = plt.subplots(2, 1, figsize=(10, 8)) for ax in axes: ax.plot(epsilon_path, label='epsilon') ax.plot(theta_path, label='theta') ax.legend(loc='lower right') plt.show() ``` ## Exercise 2 The median for the original parameterization can be computed as follows ``` wp = CareerWorkerProblem() v_init = np.ones((wp.N, wp.N))*100 v = compute_fixed_point(wp.bellman_operator, v_init) optimal_policy = wp.get_greedy(v) F = DiscreteRV(wp.F_probs) G = DiscreteRV(wp.G_probs) def gen_first_passage_time(): t = 0 i = j = 0 while 1: if optimal_policy[i, j] == 1: # Stay put return t elif optimal_policy[i, j] == 2: # New job j = int(G.draw()) else: # New life i, j = int(F.draw()), int(G.draw()) t += 1 M = 25000 # Number of samples samples = np.empty(M) for i in range(M): samples[i] = gen_first_passage_time() print(np.median(samples)) ``` To compute the median with $\beta=0.99$ instead of the default value $\beta=0.95$, replace `wp = CareerWorkerProblem()` with `wp = CareerWorkerProblem(beta=0.99)` The medians are subject to randomness, but should be about 7 and 11 respectively. Not surprisingly, more patient workers will wait longer to settle down to their final job ## Exercise 3 Here’s the code to reproduce the original figure ``` from matplotlib import cm wp = CareerWorkerProblem() v_init = np.ones((wp.N, wp.N))*100 v = compute_fixed_point(wp.bellman_operator, v_init) optimal_policy = wp.get_greedy(v) fig, ax = plt.subplots(figsize=(6,6)) tg, eg = np.meshgrid(wp.theta, wp.epsilon) lvls=(0.5, 1.5, 2.5, 3.5) ax.contourf(tg, eg, optimal_policy.T, levels=lvls, cmap=cm.winter, alpha=0.5) ax.contour(tg, eg, optimal_policy.T, colors='k', levels=lvls, linewidths=2) ax.set_xlabel('theta', fontsize=14) ax.set_ylabel('epsilon', fontsize=14) ax.text(1.8, 2.5, 'new life', fontsize=14) ax.text(4.5, 2.5, 'new job', fontsize=14, rotation='vertical') ax.text(4.0, 4.5, 'stay put', fontsize=14) ``` Now we want to set `G_a = G_b = 100` and generate a new figure with these parameters. To do this replace: wp = CareerWorkerProblem() with: wp = CareerWorkerProblem(G_a=100, G_b=100) In the new figure, you will see that the region for which the worker will stay put has grown because the distribution for $\epsilon$ has become more concentrated around the mean, making high-paying jobs less realistic
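For convenience, here is a minimal sketch of the modified run for Exercise 3. It assumes the same `CareerWorkerProblem` interface used above; everything apart from the constructor call is unchanged, and the plotting step is left as a comment because it mirrors the previous cell exactly.

```
# Exercise 3 with a more concentrated distribution for epsilon
wp = CareerWorkerProblem(G_a=100, G_b=100)
v_init = np.ones((wp.N, wp.N)) * 100
v = compute_fixed_point(wp.bellman_operator, v_init)
optimal_policy = wp.get_greedy(v)
# ... then rebuild the contour plot exactly as in the previous cell ...
```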
# IMPORTS

```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
```

# READ THE DATA

```
data = pd.read_csv('./input/laptops.csv', encoding='latin-1')
data.head(10)
```

# MAIN EDA BLOCK

```
print(f'Data Shape\nRows: {data.shape[0]}\nColumns: {data.shape[1]}')
print('=' * 30)
data.info()
data.describe()

data['Product'] = data['Product'].str.split('(').apply(lambda x: x[0])
data['Cpu_Speed'] = data['Cpu'].str.split(' ').apply(lambda x: x[-1]).str.replace('GHz', '')
data['Cpu_Vender'] = data['Cpu'].str.split(' ').apply(lambda x: x[0])
# Keep the CPU type as a list of words. The original condition
# "x[1]=='Celeron' and 'Pentium' and 'Xeon'" only ever tested for 'Celeron',
# and the final fallback returned a string, which the join below split into characters.
data['Cpu_Type'] = data['Cpu'].str.split(' ').apply(
    lambda x: x[1:4] if x[1] in ('Celeron', 'Pentium', 'Xeon')
    else (x[1:3] if (x[1] == 'Core' or x[0] == 'AMD') else x[1:2]))
data['Cpu_Type'] = data['Cpu_Type'].apply(lambda x: ' '.join(x))
data['Cpu_Type']
data.head(10)

split_mem = data['Memory'].str.split(' ', n=1, expand=True)
data['Storage_Type'] = split_mem[1]
data['Memory'] = split_mem[0]
data['Memory'].unique()
data.head(10)

data['Ram'] = data['Ram'].str.replace('GB', '')
df_mem = data['Memory'].str.split(r'(\d+)', expand=True)
data['Memory'] = pd.to_numeric(df_mem[1])
data.rename(columns={'Memory': 'Memory (GB or TB)'}, inplace=True)

def mem(x):
    # unused helper; the lambdas below do the same TB -> GB conversion
    if x == 1:
        return 1024
    elif x == 2:
        return 2048

data['Memory (GB or TB)'] = data['Memory (GB or TB)'].apply(lambda x: 1024 if x == 1 else x)
data['Memory (GB or TB)'] = data['Memory (GB or TB)'].apply(lambda x: 2048 if x == 2 else x)
data.rename(columns={'Memory (GB or TB)': 'Storage (GB)'}, inplace=True)
data.head(10)

data['Weight'] = data['Weight'].str.replace('kg', '')
data.head(10)

gpu_distr_list = data['Gpu'].str.split(' ')
data['Gpu_Vender'] = data['Gpu'].str.split(' ').apply(lambda x: x[0])
data['Gpu_Type'] = data['Gpu'].str.split(' ').apply(lambda x: x[1:])
data['Gpu_Type'] = data['Gpu_Type'].apply(lambda x: ' '.join(x))
data.head(10)

data['Touchscreen'] = data['ScreenResolution'].apply(lambda x: 1 if 'Touchscreen' in x else 0)
data['Ips'] = data['ScreenResolution'].apply(lambda x: 1 if 'IPS' in x else 0)

def cat_os(op_s):
    if op_s == 'Windows 10' or op_s == 'Windows 7' or op_s == 'Windows 10 S':
        return 'Windows'
    elif op_s == 'macOS' or op_s == 'Mac OS X':
        return 'Mac'
    else:
        return 'Other/No OS/Linux'

data['OpSys'] = data['OpSys'].apply(cat_os)

# Note: the column created above is 'Storage_Type'; using "Storage Type" here
# would silently create an all-NaN column.
data = data.reindex(columns=["Company", "TypeName", "Inches", "Touchscreen", "Ips",
                             "Cpu_Vender", "Cpu_Type", "Ram", "Storage (GB)", "Storage_Type",
                             "Gpu_Vender", "Gpu_Type", "Weight", "OpSys", "Price_euros"])
data.head(10)

data['Ram'] = data['Ram'].astype('int')
data['Storage (GB)'] = data['Storage (GB)'].astype('int')
data['Weight'] = data['Weight'].astype('float')
data.info()

sns.set(rc={'figure.figsize': (9, 5)})
data['Company'].value_counts().plot(kind='bar')
sns.barplot(x=data['Company'], y=data['Price_euros'])
data['TypeName'].value_counts().plot(kind='bar')
sns.barplot(x=data['TypeName'], y=data['Price_euros'])

cpu_distr = data['Cpu_Type'].value_counts()[:10].reset_index()
cpu_distr
# The 10-row aggregate does not align with a hue column taken from the full frame,
# so plot the counts on their own.
sns.barplot(x=cpu_distr['index'], y=cpu_distr['Cpu_Type'])

gpu_distr = data['Gpu_Type'].value_counts()[:10].reset_index()
gpu_distr
sns.barplot(x=gpu_distr['index'], y=gpu_distr['Gpu_Type'])

sns.barplot(x=data['OpSys'], y=data['Price_euros'])

corr_data = data.corr()
corr_data['Price_euros'].sort_values(ascending=False)
sns.heatmap(data.corr(), annot=True)

X = data.drop(columns=['Price_euros'])
y = np.log(data['Price_euros'])

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=42)

from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import r2_score, mean_absolute_error
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, ExtraTreesRegressor
from xgboost import XGBRegressor
from sklearn.ensemble import VotingRegressor, StackingRegressor

step1 = ColumnTransformer(transformers=[
    ('col_inf', OneHotEncoder(sparse=False, handle_unknown='ignore'), [0, 1, 5, 6, 9, 10, 11, 13])
], remainder='passthrough')

rf = RandomForestRegressor(n_estimators=350, random_state=3, max_samples=0.5, max_features=0.75, max_depth=15)
gbdt = GradientBoostingRegressor(n_estimators=100, max_features=0.5)
xgb = XGBRegressor(n_estimators=25, learning_rate=0.3, max_depth=5)
et = ExtraTreesRegressor(n_estimators=100, random_state=3, max_samples=0.5, max_features=0.75, max_depth=10)

step2 = VotingRegressor([('rf', rf), ('gbdt', gbdt), ('xgb', xgb), ('et', et)], weights=[5, 1, 1, 1])

pipe = Pipeline([
    ('step1', step1),
    ('step2', step2)])

pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)

print('R2 score', r2_score(y_test, y_pred))
print('MAE', mean_absolute_error(y_test, y_pred))
```
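Since the target was log-transformed above (`y = np.log(data['Price_euros'])`), predictions come back on the log scale. A small illustrative follow-up (the held-out row picked here is arbitrary) is to undo the transform with `np.exp` when reading individual predictions:

```
sample = X_test.iloc[[0]]            # one held-out laptop, picked arbitrarily
log_price = pipe.predict(sample)[0]
print('Predicted price: %.0f EUR' % np.exp(log_price))
print('Actual price:    %.0f EUR' % np.exp(y_test.iloc[0]))
```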
# Visualizing invasive and non-invasive EEG data [Liberty Hamilton, PhD](https://csd.utexas.edu/research/hamilton-lab) Assistant Professor, University of Texas at Austin Department of Speech, Language, and Hearing Sciences and Department of Neurology, Dell Medical School Welcome! In this notebook we will be discussing how to look at time series electrophysiological 🧠 data that is recorded noninvasively at the scalp (scalp electroencephalography or EEG), or invasively in patients who are undergoing surgical treatment for epilepsy (sometimes called intracranial EEG or iEEG, also called stereo EEG/sEEG, or electrocorticography/ECoG). ### Python libraries you will be using in this tutorial: * MNE-python * matplotlib * numpy ![MNE-python logo](https://mne.tools/stable/_static/mne_logo.png) MNE-python is open source python software for exploring and analyzing human neurophysiological data (EEG/MEG/iEEG). ### What you will learn to do * Load some sample EEG data * Load some sample intracranial EEG data * Plot the raw EEG data/iEEG data * Plot the power spectrum of your data * Epoch data according to specific task conditions (sentences) * Plot all epochs and averaged evoked activity * Plot average evoked activity in response to specific task conditions (ERPs) * Plot by channel as well as averaging across channels * Plot EEG activity at specific time points on the scalp (topomaps) * Customize your plots ### Other Resources: * [MNE-python tutorials](https://mne.tools/stable/auto_tutorials/index.html) -- This has many additional resources above and beyond that also include how to preprocess your data, remove artifacts, and more! <a id="basics1"></a> # 1. The basics: loading in your data ``` !pip install matplotlib==3.2 import mne # This is the mne library import numpy as np # This gives us the power of numpy, which is just generally useful for array manipulation %matplotlib inline from matplotlib import pyplot as plt from matplotlib import cm datasets = {'ecog': '/home/jovyan/data/we_eeg_viz_data/ecog/sub-S0006/S0006_ecog_hg.fif', 'eeg': '/home/jovyan/data/we_eeg_viz_data/eeg/sub-MT0002/MT0002-eeg.fif'} event_files = {'ecog': '/home/jovyan/data/we_eeg_viz_data/ecog/sub-S0006/S0006_eve.txt', 'eeg': '/home/jovyan/data/we_eeg_viz_data/eeg/sub-MT0002/MT0002_eve.txt'} stim_file = '/home/jovyan/data/we_eeg_viz_data/stimulus_list.csv' # Get some information about the stimuli (here, the names of the sound files that were played) ev_names=np.genfromtxt(stim_file, skip_header=1, delimiter=',',dtype=np.str, usecols=[1],encoding='utf-8') ev_nums=np.genfromtxt(stim_file, skip_header=1, delimiter=',',dtype=np.int, usecols=[0], encoding='utf-8') event_id = dict() for i, ev_name in enumerate(ev_names): event_id[ev_name] = ev_nums[i] ``` ## 1.1. Choose which dataset to look at (start with EEG) For the purposes of this tutorial, we'll be looking at some scalp EEG and intracranial EEG datasets from my lab. Participants provided written informed consent for participation in our research. These data were collected from two distinct participants listening to sentences from the [TIMIT acoustic-phonetic corpus](https://catalog.ldc.upenn.edu/LDC93S1). This is a database of English sentences spoken by multiple talkers from throughout the United States, and has been used in speech recognition research, neuroscience research, and more! The list of stimuli is in the `stimulus_list.csv` file. Each stimulus starts with either a "f" or a "m" to indicate a female or male talker. 
The rest of the alphanumeric string has to do with other characteristics of the talkers that we won't go into here. The stimulus timings have been provided for you in the event files (ending with the suffix `_eve.txt`). We'll talk about those more later.

### EEG Data

The EEG data was recorded with a 64-channel [BrainVision ActiCHamp](https://www.brainproducts.com/productdetails.php?id=74) system. These data are part of an ongoing project in our lab and are unpublished. You can find similar (larger) datasets from [Broderick et al.](https://datadryad.org/stash/dataset/doi:10.5061/dryad.070jc), or Bradley Voytek's lab has a list of [Open Electrophysiology datasets](https://github.com/openlists/ElectrophysiologyData).

### The ECoG Data

The ECoG data was recorded from 106 electrodes across multiple regions of the brain while our participant listened to TIMIT sentences. This is a smaller subset of sentences than the EEG dataset and so is a bit faster to load. The areas we recorded from are labeled according to a clinical montage. For iEEG and ECoG datasets, these names are rarely standardized, so it can be hard to know exactly what is what without additional information. Here, each channel is named according to the general location of the electrode probe to which it belongs.

| Device | General location |
|---|---|
| RAST | Right anterior superior temporal |
| RMST | Right middle superior temporal |
| RPST | Right posterior superior temporal |
| RPPST | Right posterior parietal/superior temporal |
| RAIF | Right anterior insula |
| RPI | Right posterior insula |
| ROF | Right orbitofrontal |
| RAC | Right anterior cingulate |

```
data_type = 'eeg'  # Can choose from 'eeg' or 'ecog'
```

## 1.2. Load the data

This next command loads the data from our fif file of interest. The `preload=True` flag means that the data will be loaded (necessary for some operations). If `preload=False`, you can still perform some aspects of this tutorial, and this is a great option if you have a large dataset and would like to look at some of the header information and metadata before you start to analyze it.

```
raw = mne.io.read_raw_fif(datasets[data_type], preload=True)
```

There is a lot of useful information in the info structure. For example, we can get the sampling frequency (`raw.info['sfreq']`), the channel names (`raw.info['ch_names']`), the channel types and locations (in `raw.info['chs']`), and whether any filtering operations have been performed already (`raw.info['highpass']` and `raw.info['lowpass']` show the cut-offs for the data).

```
print(raw.info)

sampling_freq = raw.info['sfreq']
nchans = raw.info['nchan']
print('The sampling frequency of our data is %d'%(sampling_freq))
print('Here is our list of %d channels: '%nchans)
print(raw.ch_names)

eeg_colors = {'eeg': 'k', 'eog': 'steelblue'}
fig = raw.plot(show=False, color=eeg_colors, scalings='auto');
fig.set_figwidth(8)
fig.set_figheight(4)
```

<a id="plots2"></a>
# 2. Let's make some plots!

MNE-python makes creating some plots *super easy*, which is great for data quality checking, exploration, and eventually manuscript figure generation. For example, one might wish to plot the power spectral density (PSD), which shows how the power in the signal is distributed across frequencies; with MNE-python this takes a single line of code, as shown below.

## 2.2. Power spectral density

```
raw.plot_psd();
```

## 2.3. Sensor positions (for EEG)

For EEG, MNE-python also has convenient functions for showing the location of the sensors used. Here, we have a 64-channel montage.
You can also use this information to help interpret some of your plots if you're plotting a single channel or a group of channels. For ECoG, we will not be plotting sensors in this way. If you would like read more about that process, please see [this tutorial](https://mne.tools/stable/auto_tutorials/misc/plot_ecog.html). You can also check out [Noah Benson's session](https://neurohackademy.org/course/introduction-to-the-geometry-and-structure-of-the-human-brain/) (happening in parallel with this tutorial!) for plotting 3D brains. ``` if data_type == 'eeg': raw.plot_sensors(kind='topomap',show_names=True); ``` Ok, awesome! So now we know where the sensors are, how densely they tile the space, and what their names are. *Knowledge = Power!* So what if we wanted to look at the power spectral density plot we saw above by channel? We can use `plot_psd_topo` for that! There are also customizable options for playing with the colors. ``` if data_type == 'eeg': raw.plot_psd_topo(fig_facecolor='w', axis_facecolor='w', color='k'); ``` Finally, this one works for both EEG and ECoG. Here we are looking at the power spectral density plot again, but taking the average across trials and showing +/- 1 standard deviation from the mean across channels. ``` raw.plot_psd(area_mode='std', average=True); ``` Finally, we can plot these same figures using a narrower frequency range, and looking at a smaller set of channels using `picks`. For `plot_psd` and other functions, `picks` is a list of integer indices corresponding to your channels of interest. You can choose these by their number, or you can use the convenient `mne.pick_channels` function to choose them by name. For example, in EEG, we often see strong responses to auditory stimuli at the top of the head, so here we will restrict our EEG channels to a few at the top of the head at the midline. For ECoG, we are more likely to see responses to auditory stimuli in temporal lobe electrodes (potentially RPPST, RPST, RMST, RAST), so we'll try those. ``` if data_type == 'eeg': picks = mne.pick_channels(raw.ch_names, include=['Pz','CPz','Cz','FCz','Fz','C1','C2','FC1','FC2','CP1','CP2']) elif data_type == 'ecog': picks = mne.pick_channels(raw.ch_names, include=['RPPST9','RPPST10','RPPST11']) raw.plot_psd(picks = picks, fmin=1, fmax=raw.info['sfreq']/2, xscale='log'); ``` ## Plotting responses to events Ok, so this is all well and good. We can plot our raw data, the power spectrum, and the locations of the sensors. But what if we care about responses to the stimuli we described above? What if we want to look at responses to specific sentences, or the average response across all sentences, or something else? How can we determine which EEG sensors or ECoG electrodes respond to the speech stimuli? Enter.... *Epoching!* MNE-python gives you a very convenient way of rearranging your data according to events of interest. These can actually even be found automatically from a stimulus channel, if you have one (using [`mne.find_events`](https://mne.tools/stable/generated/mne.find_events.html)), which we won't use here because we already have the timings from another procedure. You can also find other types of epochs, like those based on EMG or [eye movements (EOG)](https://mne.tools/stable/generated/mne.preprocessing.find_eog_events.html). Here, we will load our event files (ending with `_eve.txt`). These contain information about the start sample, stop sample, and event ID for each stimulus. Each row in the file is one stimulus. 
The timings are in samples rather than in seconds, so if you are creating these on your own, pay attention to your sampling rate (in `raw.info['sfreq']`). ``` # Load some events. The format of these is start sample, end sample, and event ID. events = mne.read_events(event_files[data_type]) print(events) num_events = len(events) unique_stimuli = np.unique(np.array(events)[:,2]) num_unique = len(unique_stimuli) print('There are %d total events, corresponding to %d unique stimuli'%(num_events, num_unique)) ``` ## Epochs Great. So now that we have the events, we will "epoch" our data, which basically uses these timings to split up our data into trials of a given length. We will also set some parameters for data rejection to get rid of noisy trials. ``` # Set some rejection criteria. This will be based on the peak-to-peak # amplitude of your data. if data_type=='eeg': reject = {'eeg': 60e-6} # Higher than peak to peak amplitude of 60 µV will be rejected scalings = None units = None elif data_type=='ecog': reject = {'ecog': 10} # Higher than Z-score of 10 will be rejected scalings = {'ecog': 1} # Don't rescale these as if they should be in µV units = {'ecog': 'Z-score'} tmin = -0.2 tmax = 1.0 epochs = mne.Epochs(raw, events, tmin=tmin, tmax=tmax, baseline=(None, 0), reject=reject, verbose=True) ``` So what's in this epochs data structure? If we look at it, we can see that we have an entry for each event ID, and we can see how many times that stimulus was played. You can also see whether baseline correction was done and for what time period, and whether any data was rejected. ``` epochs ``` Now, you could decide at this point that you just want to work with the data directly as a numpy array. Luckily, that's super easy to do! We can just call `get_data()` on our epochs data structure, and this will output a matrix of `[events x channels x time points]`. If you do not limit the channel type, you will get all of them (including any EOG, stimulus channels, or other non-EEG/ECoG channels). ``` ep_data = epochs.get_data() print(ep_data.shape) ``` ## Plotting Epoched data Ok... so we are getting ahead of ourselves. MNE-python provides a lot of ways to plot our data so that we don't have to deal with writing functions to do this ourselves! For example, if we'd like to plot the EEG/ECoG for all of the single trials we just loaded, along with an average across all of these trials (and channels of interest), we can do that easily with `epochs.plot_image()`. ``` epochs.plot_image(combine='mean', scalings=scalings, units=units) ``` As before, we can choose specific channels to look at instead of looking at all of them at once. For which method do you think this would make the most difference? Why? ``` if data_type == 'eeg': picks = mne.pick_channels(raw.ch_names, include=['Fz','FCz','Cz','CPz','Pz']) elif data_type == 'ecog': picks = mne.pick_channels(raw.ch_names, include=['RPPST9','RPPST10','RPPST11']) epochs.plot_image(picks = picks, combine='mean', scalings=scalings, units=units) ``` We can also sort the trials, if we would like. This can be very convenient if you have reaction times or some other portion of the trial where reordering would make sense. Here, we'll just pick a channel and order by the mean activity within each trial. 
``` if data_type == 'eeg': picks = mne.pick_channels(raw.ch_names, include=['CP6']) elif data_type == 'ecog': picks = mne.pick_channels(raw.ch_names, include=['RPPST2']) # Get the data as a numpy array eps_data = epochs.get_data() # Sort the data new_order = eps_data[:,picks[0],:].mean(1).argsort(0) epochs.plot_image(picks=picks, order=new_order, scalings=scalings, units=units) ``` ## Other ways to view epoched data For EEG, another way to view these epochs by trial is using the scalp topography information. This allows us to quickly assess differences across the scalp in response to the stimuli. What do you notice about the responses? ``` if data_type == 'eeg': epochs.plot_topo_image(vmin=-30, vmax=30, fig_facecolor='w',font_color='k'); ``` ## Comparing epochs of different trial types So far we have just shown averages of activity across many different sentences. However, as mentioned above, the sentences come from multiple male and female talkers. So -- one quick split we could try is just to compare the responses to female vs. male talkers. This is relatively simple with the TIMIT stimuli because their file name starts with "f" or "m" to indicate this. ``` # Make lists of the event ID numbers corresponding to "f" and "m" sentences f_evs = [] m_evs = [] for k in event_id.keys(): if k[0] == 'f': f_evs.append(event_id[k]) elif k[0] == 'm': m_evs.append(event_id[k]) print(unique_stimuli) f_evs_new = [v for v in f_evs if v in unique_stimuli] m_evs_new = [v for v in m_evs if v in unique_stimuli] # Epoch the data separately for "f" and "m" epochs f_epochs = mne.Epochs(raw, events, event_id=f_evs_new, tmin=tmin, tmax=tmax, reject=reject) m_epochs = mne.Epochs(raw, events, event_id=m_evs_new, tmin=tmin, tmax=tmax, reject=reject) ``` Now we can plot the epochs just as we did above. ``` f_epochs.plot_image(combine='mean', show=False, scalings=scalings, units=units) m_epochs.plot_image(combine='mean', show=False, scalings=scalings, units=units) ``` Cool! So now we have a separate plot for the "f" and "m" talkers. However, it's not super convenient to compare the traces this way... we kind of want them on the same axis. MNE easily allows us to do this too! Instead of using the epochs, we can create `evoked` data structures, which are averaged epochs. You can [read more about evoked data structures here](https://mne.tools/dev/auto_tutorials/evoked/plot_10_evoked_overview.html). ## Compare evoked data ``` evokeds = {'female': f_epochs.average(), 'male': m_epochs.average()} mne.viz.plot_compare_evokeds(evokeds, show_sensors='upper right',picks=picks); ``` If we actually want errorbars on this plot, we need to do this a bit differently. We can use the `iter_evoked()` method on our epochs structures to create a dictionary of conditions for which we will plot our comparisons with `plot_compare_evokeds`. ``` evokeds = {'f':list(f_epochs.iter_evoked()), 'm':list(m_epochs.iter_evoked())} mne.viz.plot_compare_evokeds(evokeds, picks=picks); ``` ## Plotting scalp topography For EEG, another common plot you may see is a topographic map showing activity (or other data like p-values, or differences between conditions). In this example, we'll show the activity at -0.2, 0, 0.1, 0.2, 0.3, and 1 second. You can also of course choose just one time to look at. 
``` if data_type == 'eeg': times=[tmin, 0, 0.1, 0.2, 0.3, tmax] epochs.average().plot_topomap(times, ch_type='eeg', cmap='PRGn', res=32, outlines='skirt', time_unit='s'); ``` We can also plot arbitrary data using `mne.viz.plot_topomap`, and passing in a vector of data matching the number of EEG channels, and `raw.info` to give specifics on those channel locations. ``` if data_type == 'eeg': chans = mne.pick_types(raw.info, eeg=True) data = np.random.randn(len(chans),) plt.figure() mne.viz.plot_topomap(data, raw.info, show=True) ``` We can even animate these topo maps! This won't work well in jupyterhub, but feel free to try on your own! ``` if data_type == 'eeg': fig,anim=epochs.average().animate_topomap(blit=False, times=np.linspace(tmin, tmax, 100)) ``` ## A few more fancy EEG plots If we want to get especially fancy, we can also use `plot_joint` with our evoked data (or averaged epoched data, as shown below). This allows us to combine the ERPs for individual channels with topographic maps at time points that we specify. Pretty awesome! ``` if data_type == 'eeg': epochs.average().plot_joint(picks='eeg', times=[0.1, 0.2, 0.3]) ``` # What if I need more control? - matplotlib alternatives If you feel you need more specific control over your plots, it's easy to get the data into a usable format for plotting with matplotlib. You can export both the raw and epoched data using the `get_data()` function, which will allow you to save your data as a numpy array `[ntrials x nchannels x ntimepoints]`. Then, you can do whatever you want with the data! Throw it into matplotlib, use seaborn, or whatever your heart desires! ``` if data_type == 'eeg': picks = mne.pick_channels(raw.ch_names, include=['Fz','FCz','Cz','CPz','Pz']) elif data_type == 'ecog': picks = mne.pick_channels(raw.ch_names, include=['RPPST9','RPPST10','RPPST11']) f_data = f_epochs.get_data(picks=picks) m_data = m_epochs.get_data(picks=picks) times = f_epochs.times print(f_data.shape) ``` ## Plot evoked data with errorbars We can recreate some similar plots to those in MNE-python with some of the matplotlib functions. Here we'll create something similar to what was plotted in `plot_compare_evokeds`. ``` def plot_errorbar(x, ydata, label=None, axlines=True, alpha=0.5, **kwargs): ''' Plot the mean +/- standard error of ydata. Inputs: x : vector of x values ydata : matrix of your data (this will be averaged along the 0th dimension) label : A string containing the label for this plot axlines : [bool], whether to draw the horizontal and vertical axes alpha: opacity of the standard error area ''' ymean = ydata.mean(0) ystderr = ydata.std(0)/np.sqrt(ydata.shape[0]) plt.plot(x, ydata.mean(0), label=label, **kwargs) plt.fill_between(x, ymean+ystderr, ymean-ystderr, alpha=alpha, **kwargs) if axlines: plt.axvline(0, color='k', linestyle='--') plt.axhline(0, color='k', linestyle='--') plt.gca().set_xlim([x.min(), x.max()]) plt.figure() plot_errorbar(times, f_data.mean(0), label='female') plot_errorbar(times, m_data.mean(0), label='male') plt.xlabel('Time (s)') plt.ylabel('Z-scored high gamma') plt.legend() ``` ## ECoG Exercise: 1. If you wanted to look at each ECoG electrode individually to find which ones have responses to the speech data, how would you do this? 2. Can you plot the comparison between "f" and "m" trials for each electrode as a subplot (try using `plt.subplot()` from `matplotlib`) ``` # Get the data for f trials # Get the data for m trials # Loop through each channel, and create a set of subplots for each ``` # Hooray, the End! 
You did it! Go forth and use MNE-python in your own projects, or even contribute to the code! 🧠
``` import pandas as pd import os import s3fs # for reading from S3FileSystem import json %matplotlib inline import matplotlib.pyplot as plt import torch.nn as nn import torch import torch.utils.model_zoo as model_zoo import numpy as np import torchvision.models as models # To get ResNet18 # From - https://github.com/cfotache/pytorch_imageclassifier/blob/master/PyTorch_Image_Inference.ipynb import torch from torch import nn from torch import optim import torch.nn.functional as F from torchvision import datasets, transforms from PIL import Image from torch.autograd import Variable from torch.utils.data.sampler import SubsetRandomSampler ``` # Prepare the Model ``` SAGEMAKER_PATH = r'/home/ec2-user/SageMaker' MODEL_PATH = os.path.join(SAGEMAKER_PATH, r'sidewalk-cv-assets19/pytorch_pretrained/models/20e_slid_win_no_feats_r18.pt') os.path.exists(MODEL_PATH) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") device # Use PyTorch's ResNet18 # https://stackoverflow.com/questions/53612835/size-mismatch-for-fc-bias-and-fc-weight-in-pytorch model = models.resnet18(num_classes=5) model.to(device) model.load_state_dict(torch.load(MODEL_PATH)) model.eval() ``` # Prep Data ``` # From Galen test_transforms = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) #device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # the dataset loads the files into pytorch vectors #image_dataset = TwoFileFolder(dir_containing_crops, meta_to_tensor_version=2, transform=data_transform) # the dataloader takes these vectors and batches them together for parallelization, increasing performance #dataloader = torch.utils.data.DataLoader(image_dataset, batch_size=4, shuffle=True, num_workers=4) # this is the number of additional features provided by the dataset #len_ex_feats = image_dataset.len_ex_feats #dataset_size = len(image_dataset) # Load in the data data_dir = 'images' data = datasets.ImageFolder(data_dir, transform=test_transforms) classes = data.classes !ls -a images !rm -f -r images/.ipynb_checkpoints/ # Examine the classes based on folders... 
# Need to make sure that we don't get a .ipynb_checkpoints as a folder # Discussion here - https://forums.fast.ai/t/how-to-remove-ipynb-checkpoint/8532/19 classes num = 10 indices = list(range(len(data))) print(indices) np.random.shuffle(indices) idx = indices[:num] test_transforms = transforms.Compose([transforms.Resize(224), transforms.ToTensor(), ]) #sampler = SubsetRandomSampler(idx) loader = torch.utils.data.DataLoader(data, batch_size=num) dataiter = iter(loader) images, labels = dataiter.next() len(images) # Look at the first image images[0] len(labels) labels ``` # Execute Inference on 2 Sample Images ``` # Note on how to make sure the model and the input tensors are both on cuda device (gpu) # https://discuss.pytorch.org/t/runtimeerror-input-type-torch-cuda-floattensor-and-weight-type-torch-floattensor-should-be-the-same/21782/6 def predict_image(image, model): image_tensor = test_transforms(image).float() image_tensor = image_tensor.unsqueeze_(0) input = Variable(image_tensor) input = input.to(device) output = model(input) index = output.data.cpu().numpy().argmax() return index, output to_pil = transforms.ToPILImage() #images, labels = get_random_images(5) fig=plt.figure(figsize=(10,10)) for ii in range(len(images)): image = to_pil(images[ii]) index, output = predict_image(image, model) print(f'index: {index}') print(f'output: {output}') sub = fig.add_subplot(1, len(images), ii+1) res = int(labels[ii]) == index sub.set_title(str(classes[index]) + ":" + str(res)) plt.axis('off') plt.imshow(image) plt.show() res ``` # Comments and Questions What's the order of the labels (and how I should order the folders for the input data?) This file implies that there are different orders https://github.com/ProjectSidewalk/sidewalk-cv-assets19/blob/master/GSVutils/sliding_window.py ```label_from_int = ('Curb Cut', 'Missing Cut', 'Obstruction', 'Sfc Problem') pytorch_label_from_int = ('Missing Cut', "Null", 'Obstruction', "Curb Cut", "Sfc Problem")```
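Until the label-ordering question above is settled against the training code, one way to read the predictions is to tentatively assume the 5-element `pytorch_label_from_int` tuple quoted from `sliding_window.py` (it at least matches `num_classes=5`). This is a hypothesis to verify, not a confirmed mapping:

```
# Hypothetical mapping: verify against the training code before trusting it.
pytorch_label_from_int = ('Missing Cut', 'Null', 'Obstruction', 'Curb Cut', 'Sfc Problem')

index, output = predict_image(to_pil(images[0]), model)
print('Predicted label (under the assumed ordering):', pytorch_label_from_int[index])
```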
The first thing we need to do is to download the dataset from Kaggle. We use the [Enron dataset](https://www.kaggle.com/wcukierski/enron-email-dataset), which is the biggest public email dataset available. To do so we will use GDrive and download the dataset within a Drive folder to be used by Colab. ``` from google.colab import drive drive.mount('/content/gdrive') import os os.environ['KAGGLE_CONFIG_DIR'] = "/content/gdrive/My Drive/Kaggle" %cd /content/gdrive/My Drive/Kaggle ``` We can download the dataset from Kaggle and save it in the GDrive folder. This needs to be done only the first time. ``` # !kaggle datasets download -d wcukierski/enron-email-dataset # unzipping the zip files # !unzip \*.zip ``` Now we are finally ready to start working with the dataset, accessible as a CSV file called `emails.csv`. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import email import re if not 'emails_df' in locals(): emails_df = pd.read_csv('./emails.csv') # Use only a subpart of the whole dataset to avoid exceeding RAM # emails_df = emails_df[:10000] print(emails_df.shape) emails_df.head() print(emails_df['message'][0]) # Convert to message objects from the message strings messages = list(map(email.message_from_string, emails_df['message'])) def get_text_from_email(msg): parts = [] for part in msg.walk(): if part.get_content_type() == 'text/plain': parts.append( part.get_payload() ) text = ''.join(parts) return text emails = pd.DataFrame() # Parse content from emails emails['content'] = list(map(get_text_from_email, messages)) import gc # Remove variables from memory del messages del emails_df gc.collect() def normalize_text(text): text = text.lower() # creating a space between a word and the punctuation following it to separate words # and compact repetition of punctuation # eg: "he is a boy.." => "he is a boy ." text = re.sub(r'([.,!?]+)', r" \1 ", text) # replacing everything with space except (a-z, A-Z, ".", "?", "!", ",", "'") text = re.sub(r"[^a-zA-Z?.!,']+", " ", text) # Compact spaces text = re.sub(r'[" "]+', " ", text) # Remove forwarded messages text = text.split('forwarded by')[0] text = text.strip() return text emails['content'] = list(map(normalize_text, emails['content'])) # Drop samples with empty content text after normalization emails['content'].replace('', np.nan, inplace=True) emails.dropna(subset=['content'], inplace=True) pd.set_option('max_colwidth', -1) emails.head(50) ``` In the original paper, the dataset is built with 8billion emails where the context is provided by the email date, subject and previous message if the user is replying. Unfortunately, in the Enron dataset it's not possible to build the reply relationship between emails. In order thus to generate the context of a sentence, we train the sequence-to-sequence model to predict the sentence completion from pairs of split sentences. For instance, the sentence `here is our forecast` is split in the following pairs within the dataset: ``` [ ('<start> here is <end>', '<start> our forecast <end>'), ('<start> here is our <end>', '<start> forecast <end>') ] ``` ``` # Skip long sentences, which increase maximum length a lot when padding # and make the number of parameters to train explode SENTENCE_MAX_WORDS = 20 def generate_dataset (emails): contents = emails['content'] output = [] vocabulary_sentences = [] for content in contents: # Skip emails longer than one sentence if (len(content) > SENTENCE_MAX_WORDS * 5): continue sentences = content.split(' . 
') for sentence in sentences: # Remove user names from start or end of sentence. This is just an heuristic # but it's more efficient than compiling the list of names and removing all of them sentence = re.sub("(^\w+\s,\s)|(\s,\s\w+$)", "", sentence) words = sentence.split(' ') if ((len(words) > SENTENCE_MAX_WORDS) or (len(words) < 2)): continue vocabulary_sentences.append('<start> ' + sentence + ' <end>') for i in range(1, len(words) - 1): input_data = '<start> ' + ' '.join(words[:i+1]) + ' <end>' output_data = '<start> ' + ' '.join(words[i+1:]) + ' <end>' data = (input_data, output_data) output.append(data) return output, vocabulary_sentences pairs, vocabulary_sentences = generate_dataset(emails) print(len(pairs)) print(len(vocabulary_sentences)) print(*pairs[:10], sep='\n') print(*vocabulary_sentences[:10], sep='\n') ``` This is where the fun begins. The dataset is finally available and we start working on the analysis by using [Keras](https://keras.io/) and [TensorFlow](https://www.tensorflow.org/). ``` import tensorflow as tf from tensorflow import keras np.random.seed(42) ``` We need to transform the text corpora into sequences of integers (each integer being the index of a token in a dictionary) by using keras `Tokenizer`. We also limit to the 10k most frequent words, deleting uncommon words from sentences. Normally we would use two tokenizers, one for the input strings and a different one for the output text, but in this case we are predicting the same vocabulary in both cases. All the words in the output texts are available also in the input texts because of how dataset pairs are generated. Also since we will apply the "teacher forcing" technique during training, we need both the target data and the (target + 1 timestep) data. ``` vocab_max_size = 10000 def tokenize(text): tokenizer = keras.preprocessing.text.Tokenizer(filters='', num_words=vocab_max_size) tokenizer.fit_on_texts(text) return tokenizer input = [pair[0] for pair in pairs] output = [pair[1] for pair in pairs] tokenizer = tokenize(vocabulary_sentences) encoder_input = tokenizer.texts_to_sequences(input) decoder_input = tokenizer.texts_to_sequences(output) decoder_target = [ [decoder_input[seqN][tokenI + 1] for tokenI in range(len(decoder_input[seqN]) - 1)] for seqN in range(len(decoder_input))] # Convert to np.array encoder_input = np.array(encoder_input) decoder_input = np.array(decoder_input) decoder_target = np.array(decoder_target) from sklearn.model_selection import train_test_split encoder_input_train, encoder_input_test, decoder_input_train, decoder_input_test, decoder_target_train, decoder_target_test = train_test_split(encoder_input, decoder_input, decoder_target, test_size=0.2) print(encoder_input_train.shape, encoder_input_test.shape) print(decoder_input_train.shape, decoder_input_test.shape) print(decoder_target_train.shape, decoder_target_test.shape) def max_length(t): return max(len(i) for i in t) max_length_in = max_length(encoder_input) max_length_out = max_length(decoder_input) encoder_input_train = keras.preprocessing.sequence.pad_sequences(encoder_input_train, maxlen=max_length_in, padding="post") decoder_input_train = keras.preprocessing.sequence.pad_sequences(decoder_input_train, maxlen=max_length_out, padding="post") decoder_target_train = keras.preprocessing.sequence.pad_sequences(decoder_target_train, maxlen=max_length_out, padding="post") encoder_input_test = keras.preprocessing.sequence.pad_sequences(encoder_input_test, maxlen=max_length_in, padding="post") decoder_input_test = 
keras.preprocessing.sequence.pad_sequences(decoder_input_test, maxlen=max_length_out, padding="post") decoder_target_test = keras.preprocessing.sequence.pad_sequences(decoder_target_test, maxlen=max_length_out, padding="post") print(max_length_in, max_length_out) # Shuffle the data in unison p = np.random.permutation(len(encoder_input_train)) encoder_input_train = encoder_input_train[p] decoder_input_train = decoder_input_train[p] decoder_target_train = decoder_target_train[p] q = np.random.permutation(len(encoder_input_test)) encoder_input_test = encoder_input_test[q] decoder_input_test = decoder_input_test[q] decoder_target_test = decoder_target_test[q] import math batch_size = 128 vocab_size = vocab_max_size if len(tokenizer.word_index) > vocab_max_size else len(tokenizer.word_index) # Rule of thumb of embedding size: vocab_size ** 0.25 # https://stackoverflow.com/questions/48479915/what-is-the-preferred-ratio-between-the-vocabulary-size-and-embedding-dimension embedding_dim = math.ceil(vocab_size ** 0.25) latent_dim = 192 # Latent dimensionality of the encoding space. print(vocab_size, embedding_dim) ``` Here we define the RNN models. We start with the Encoder-Decoder model used in training which leverages the "teacher forcing technique". Therefore, it will receive as input `encoder_input` and `decoder_input` datasets. Then the second model is represented by the inference Decoder which will receive as input the encoded states of the input sequence and the predicted token of the previous time step. Both models use GRU units to preserve the context state, which have been shown to be more accurate than LSTM units and simpler to use since they have only one state. ``` # GRU Encoder encoder_in_layer = keras.layers.Input(shape=(max_length_in,)) encoder_embedding = keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim) encoder_bi_gru = keras.layers.Bidirectional(keras.layers.GRU(units=latent_dim, return_sequences=True, return_state=True)) # Discard the encoder output and use hidden states (h) and memory cells states (c) # for forward (f) and backward (b) layer encoder_out, fstate_h, bstate_h = encoder_bi_gru(encoder_embedding(encoder_in_layer)) state_h = keras.layers.Concatenate()([fstate_h, bstate_h]) # GRUDecoder decoder_in_layer = keras.layers.Input(shape=(None,)) decoder_embedding = keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim) decoder_gru = keras.layers.GRU(units=latent_dim * 2, return_sequences=True, return_state=True) # Discard internal states in training, keep only the output sequence decoder_gru_out, _ = decoder_gru(decoder_embedding(decoder_in_layer), initial_state=state_h) decoder_dense_1 = keras.layers.Dense(128, activation="relu") decoder_dense = keras.layers.Dense(vocab_size, activation="softmax") decoder_out_layer = decoder_dense(keras.layers.Dropout(rate=0.2)(decoder_dense_1(keras.layers.Dropout(rate=0.2)(decoder_gru_out)))) # Define the model that uses the Encoder and the Decoder model = keras.models.Model([encoder_in_layer, decoder_in_layer], decoder_out_layer) def perplexity(y_true, y_pred): return keras.backend.exp(keras.backend.mean(keras.backend.sparse_categorical_crossentropy(y_true, y_pred))) model.compile(optimizer='adam', loss="sparse_categorical_crossentropy", metrics=[perplexity]) model.summary() keras.utils.plot_model(model, "encoder-decoder.png", show_shapes=True) epochs = 10 history = model.fit([encoder_input_train, decoder_input_train], decoder_target_train, batch_size=batch_size, epochs=epochs, validation_split=0.2) def 
plot_history(history):
    plt.plot(history.history['loss'], label="Training loss")
    plt.plot(history.history['val_loss'], label="Validation loss")
    plt.legend()

plot_history(history)

scores = model.evaluate([encoder_input_test[:1000], decoder_input_test[:1000]], decoder_target_test[:1000])
print("%s: %.2f" % (model.metrics_names[1], scores[1]))

# Inference Decoder
encoder_model = keras.models.Model(encoder_in_layer, state_h)

state_input_h = keras.layers.Input(shape=(latent_dim * 2,))
inf_decoder_out, decoder_h = decoder_gru(decoder_embedding(decoder_in_layer), initial_state=state_input_h)
inf_decoder_out = decoder_dense(decoder_dense_1(inf_decoder_out))
inf_model = keras.models.Model(inputs=[decoder_in_layer, state_input_h], outputs=[inf_decoder_out, decoder_h])

keras.utils.plot_model(encoder_model, "encoder-model.png", show_shapes=True)
keras.utils.plot_model(inf_model, "inference-model.png", show_shapes=True)

def tokenize_text(text):
    text = '<start> ' + text.lower() + ' <end>'
    text_tensor = tokenizer.texts_to_sequences([text])
    text_tensor = keras.preprocessing.sequence.pad_sequences(text_tensor, maxlen=max_length_in, padding="post")
    return text_tensor

# Reversed map from a tokenizer index to a word
index_to_word = dict(map(reversed, tokenizer.word_index.items()))

# Greedily decode an encoded input tensor using the inference encoder (encoder_model) and decoder (inf_model)
def decode_sequence(input_tensor):
    # Encode the input as state vectors.
    state = encoder_model.predict(input_tensor)
    target_seq = np.zeros((1, 1))
    target_seq[0, 0] = tokenizer.word_index['<start>']
    curr_word = "<start>"
    decoded_sentence = ''

    i = 0
    while curr_word != "<end>" and i < (max_length_out - 1):
        output_tokens, h = inf_model.predict([target_seq, state])

        curr_token = np.argmax(output_tokens[0, 0])
        if (curr_token == 0):
            break
        curr_word = index_to_word[curr_token]
        decoded_sentence += ' ' + curr_word

        target_seq[0, 0] = curr_token
        state = h
        i += 1

    return decoded_sentence

def tokens_to_seq(tokens):
    words = list(map(lambda token: index_to_word[token] if token != 0 else '', tokens))
    return ' '.join(words)
```

Let's test the inference model with some inputs.

```
texts = [
    'here is',
    'have a',
    'please review',
    'please call me',
    'thanks for',
    'let me',
    'Let me know',
    'Let me know if you',
    'this sounds',
    'is this call going to',
    'can you get',
    'is it okay',
    'it should',
    'call if there\'s',
    'gave her a',
    'i will let',
    'i will be',
    'may i get a copy of all the',
    'how is our trade',
    'this looks like a',
    'i am fine with the changes',
    'please be sure this'
]

output = list(map(lambda text: (text, decode_sequence(tokenize_text(text))), texts))
output_df = pd.DataFrame(output, columns=["input", "output"])
output_df.head(len(output))
```

The predicted outputs are actually quite good: the grammar is correct and the completions make logical sense. Some outputs also show that the predictions are personalized to the Enron dataset, for instance `here is - the latest version of the presentation`. The completion `please review - the attached outage report` also shows a personalized prediction. This is consistent with the goal of the task.

Save the Tokenizer and the Keras models for usage within the browser.

```
import json

with open('word_dict-final.json', 'w') as file:
    json.dump(tokenizer.word_index, file)

encoder_model.save('./encoder-model-final.h5')
inf_model.save('./inf-model-final.h5')
```
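A possible next step, not covered in the original notebook, is converting the saved Keras models with the TensorFlow.js converter so they can actually be loaded in the browser. This assumes the `tensorflowjs` Python package is installed; the output directories are illustrative:

```
!pip install tensorflowjs
!tensorflowjs_converter --input_format keras ./encoder-model-final.h5 ./tfjs/encoder
!tensorflowjs_converter --input_format keras ./inf-model-final.h5 ./tfjs/decoder
```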