## Introduction

In this chapter, we'll begin looking into how we can do more with matplotlib than simply creating a bunch of static visualizations. Specifically, in this lesson, we'll look into the built-in interactivity that matplotlib provides, and we'll also see how we can take advantage of matplotlib's event handling API to insert our own code into our chosen backend's event loop, achieving much of the same functionality that some of the more popular browser-based visualization libraries provide.

Now, this is usually the point in the lesson where I would say something like "let's go ahead and get started by running the standard setup code", but in this case, that code is not so standard. If you take a look at the code in the next cell, everything should look pretty familiar to you, with the exception of the very first line. On line 1, we're calling the `%matplotlib` magic command, as we always do to set up our notebook; however, we would normally pass in the `inline` parameter here to specify that we want to set up our notebook for displaying matplotlib figures inline. In this lesson though, we'll be using the `notebook` option that was added in matplotlib version 1.4. This option works with IPython version 3 or higher, but if you're running an earlier version, you can still get the same functionality; you'll just need to pass in the `nbagg` option instead of `notebook`. What the `notebook` option does is use the `NBAgg` backend created by Phil Elson to enable interactive figures in a Jupyter notebook. Using this option, we will be able to use nearly all of the interactive features that matplotlib supports, all from within our Jupyter notebook. So, let's go ahead and run the setup code below to get our notebook ready to display interactive versions of our matplotlib plots inline.

```
%matplotlib notebook

import matplotlib.pyplot as plt
import numpy as np
from IPython.display import set_matplotlib_formats

set_matplotlib_formats('retina')
```

Now that we have everything set up, let's start off with a simple plot of some random data to give us something to play around with. The next cell plots some random, normally distributed data onto a 2-dimensional scatterplot for the purpose of experimentation.

```
plt.scatter(np.random.randn(100), np.random.randn(100));
```

## Built-in Functionality

The very first thing to notice here is that our figure looks a bit different. It now has some sort of title bar across its top, with what looks like a power button at its right edge, and along the bottom left edge, it now has a series of buttons. These buttons are where we will find the majority of the interactivity that the `NBAgg` backend provides, but let's ignore them for now and instead just try mousing over the figure itself. Notice that as soon as the cursor enters the figure, a pair of x and y coordinates shows up at the bottom right edge of our plot. This readout updates with every movement of the mouse, allowing us to see the actual values of a particular data point in our graph by simply hovering over it.

Now, let's move onto those buttons and see what they can do for us. As you mouse over the buttons, you'll see a description of what each button does to the right, where the x and y coordinates were displayed before. Let's click on the fourth button in the group, the one with the cross in it. This is the pan and zoom button.
Clicking on this one will allow you to click on an area of the graph and drag it around the screen (i.e., panning), or right-click (or two-finger-click for Mac users) and drag to zoom in and out. Play around with the panning and zooming functionality a bit to get a feel for it. The next button in the group, the one with the empty square, is the zoom to rectangle button. This mode allows you to draw a rectangle around an area of the graph that you want to zoom in on. Go ahead and give it a try now to see how it works.

Now that we've played around with our graph a bit, we may have left ourselves in a state that we don't want to be in. That's where the first three buttons come into play. We can use the second and third buttons to move back and forth through each view we've just created, just like you would with a web browser. And the first button will reset the graph entirely. Finally, the last button in the group, the one with the floppy disk---a device most of you probably haven't even seen in real life---is the save button, and clicking it will allow you to download a copy of the plot to your local hard drive. Aside from that, the only thing left is that power button at the top. Clicking on that will turn off all of the interactive features of the plot. I'd be careful clicking on that one though; once you click it, there's no going back without running the code in the cell again.

## Event Handling

Having the ability to pan and zoom is all well and good, but wouldn't it be great if you could respond to things like mouse clicks and key presses? Well, fortunately, you can, through the extensive [events API][1] that matplotlib provides, and hooking your code into the event loop is super simple. Just call the [`mpl_connect`][2] method on the current figure's canvas object and pass in the name of an event and a callback function, and you're all set up to handle that event. When the specified event occurs, your callback function will be called, and it will be passed one of several [`matplotlib.backend_bases.Event`][3] objects specific to the type of event your callback function is handling.

### Handling Mouse Button Events

As an example of the events API in action, we're going to create a figure that will allow us to plot some random data by simply clicking on a point in the graph. Let's start by creating our callback function. The code in the next cell creates a handler function that takes a `matplotlib.backend_bases.Event` object, specifically an instance of the `matplotlib.backend_bases.MouseEvent` class, and will use the x and y coordinates where the user clicked to plot a handful of normally distributed, random data points. One thing to notice is that we check which button was clicked before plotting the points, and we only do so if the left mouse button was clicked, i.e., button 1.

[1]: http://matplotlib.org/users/event_handling.html
[2]: http://matplotlib.org/api/backend_bases_api.html#matplotlib.backend_bases.FigureCanvasBase.mpl_connect
[3]: http://matplotlib.org/api/backend_bases_api.html#matplotlib.backend_bases.Event

```
def button_press_handler(event):
    if event.button == 1:
        sigma = 0.05
        n = 10
        xs = sigma * np.random.randn(n) + event.xdata
        ys = sigma * np.random.randn(n) + event.ydata
        plt.plot(xs, ys, 'bo')
```

Then, we'll create a new `Figure` object, grab that figure's axes object, and use it to make a couple of changes to our plot. First, for purely aesthetic purposes, we'll turn on the grid lines for the plot.
Then, and this is the most important piece, we'll turn off auto scaling. The reason for doing this is that, with auto scaling on, the graph will go nuts rescaling itself to fit all of our data points as tightly as possible. After that, we'll get the canvas object from our figure and call the `mpl_connect` method to register our callback function with the button press event. Now, if we run the code below, we should get an empty plot that will allow us to add data points to it by picking a point around which our callback function will randomly place a small number of data points.

```
fig = plt.figure(figsize=(6, 4))
ax = fig.gca()
ax.grid('on')

# If we don't turn this off, the graph will go nuts scaling
# itself nearly every time we add a few more points.
ax.set_autoscale_on(False)

# Connect the event listeners to the figure's canvas.
fig.canvas.mpl_connect('button_press_event', button_press_handler)

plt.show(fig)
```

And that's really all it takes to add a bit of event handling to our plots, but we're not done just yet.

### Ignoring Mouse Events Outside of Normal Mode

Though our button press event handler works fairly well, we do have one major problem with the figure above. If we click on one of the built-in interactivity buttons, like the pan and zoom or zoom to rectangle buttons, and try to use the chosen functionality in our graph, it will work, but we'll also end up plotting some data points everywhere we click. What we really want is to ignore all clicks when anything other than normal mode is turned on. To do that, we'll need to grab a reference to the toolbar, so we can check what mode we're in. The following code is exactly the same event handler as above, but we added a few extra lines to prevent us from plotting points when using one of the built-in modes. On line 4, we get a reference to the toolbar from the current figure manager. Then, on line 5, if anything other than normal mode (i.e., no mode) is selected, we simply exit the handler; otherwise, we plot some data points.

```
def button_press_handler(event):
    # Get the toolbar and make sure that we are not in zoom or pan mode.
    # If we are, just exit without doing anything.
    toolbar = plt.get_current_fig_manager().toolbar
    if toolbar.mode != "":
        pass
    elif event.button == 1:
        sigma = 0.05
        n = 10
        xs = sigma * np.random.randn(n) + event.xdata
        ys = sigma * np.random.randn(n) + event.ydata
        plt.plot(xs, ys, 'bo')
```

So, we've now fixed our problem, but why not add one more bit of functionality while we're at it?

### Handling Key Press Events

In the next cell, we'll create another handler function, this time for key press events. In our handler, we'll check if the user has pressed the 'r' key (for regression), and if so, we'll gather all of the data points from the axes object, perform a linear regression on the data points, and plot the resultant line.

```
def key_press_handler(event):
    if event.key.lower() == 'r':
        ax = plt.gca()
        xs = [x for l in ax.lines for x in l.get_xdata()]
        ys = [y for l in ax.lines for y in l.get_ydata()]
        m, b = np.polyfit(xs, ys, 1)
        xs = np.linspace(*plt.xlim(), num=2)
        plt.plot(xs, m*xs+b, 'r--')
```

Now, we just need to run the same code as we did above to create our plot; this time though, we'll connect both event handlers to their respective events. So, let's run it now and see how it works.

```
fig = plt.figure(figsize=(6, 4))
ax = fig.gca()
ax.grid('on')

# If we don't turn this off, the graph will go nuts scaling
# itself with the first couple of points chosen.
ax.set_autoscale_on(False)

# Connect the event listeners to the figure's canvas.
fig.canvas.mpl_connect('button_press_event', button_press_handler)
fig.canvas.mpl_connect('key_press_event', key_press_handler)

plt.show(fig)
```

And, of course, you remembered to close all of the figures you've created along the way, right?

```
plt.close('all')
```

## Conclusion

And that's going to bring us to the end. In this lesson, we learned about the default interactivity that matplotlib provides for us right out of the box. We learned how to use this functionality, as well as other forms of interactivity, in a Jupyter notebook by passing the `notebook`, or `nbagg`, option to IPython's `matplotlib` magic command. And, finally, we learned about event handling, and we created a sample plot that handled both key press and mouse button press events. There are still quite a few more events that you can listen for; we've really only scratched the surface of what the events API has to offer, so I encourage you to read through the [documentation][1] to see what all is possible.

[1]: http://matplotlib.org/users/event_handling.html
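For reference, here is a minimal, hypothetical sketch (not part of the lesson above) of two of those other event types, `'motion_notify_event'` and `'scroll_event'`; the handler names are made up for illustration, but the event names and the `mpl_connect` call are the same API used throughout this lesson.

```
def motion_handler(event):
    # event.inaxes is None when the cursor is outside of any axes.
    if event.inaxes is not None:
        print('mouse at ({:.2f}, {:.2f})'.format(event.xdata, event.ydata))

def scroll_handler(event):
    # event.step is positive when scrolling up and negative when scrolling down.
    print('scrolled {} step(s)'.format(event.step))

fig = plt.figure()
fig.canvas.mpl_connect('motion_notify_event', motion_handler)
fig.canvas.mpl_connect('scroll_event', scroll_handler)
plt.show(fig)
```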
<a href="https://colab.research.google.com/github/aonekoda/ml_one_day/blob/main/LogisticRegression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Logistic Regression

```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

data = pd.read_csv('https://raw.githubusercontent.com/aonekoda/ml_one_day/main/2.01.%20Admittance.csv')
data.head()

# Replace all No entries with 0, and all Yes entries with 1
data['Admitted'] = data['Admitted'].map({'Yes': 1, 'No': 0})
data.head()
```

### Variables

Check the independent and dependent variables. Logistic regression is fundamentally a model whose dependent variable is binary.

```
# Create the dependent and independent variables
y = data['Admitted']
x1 = data['SAT']
```

### Scatter plot

Draw a scatter plot of the loaded data.

```
# Create a scatter plot of x1 (SAT, no constant) and y (Admitted)
plt.scatter(x1, y, color='C0')
# Don't forget to label your axes!
plt.xlabel('SAT', fontsize = 20)
plt.ylabel('Admitted', fontsize = 20)
plt.show()
```

### Define Logistic Regression Model

### Preventing overfitting with regularization

The `LogisticRegression` parameter `C` controls the strength of regularization. Regularization lets us adjust the model's complexity; it works by shrinking the weights (coefficients) during training.

* L1 - keeps only the important variables (some coefficients can become exactly 0).
* L2 - the default; shrinks the weights of unimportant variables toward 0.
* The default value of `LogisticRegression`'s `penalty` is 'l2'; it can be changed to 'l1'.
* L2 regularization acts as weight decay.
* L1 regularization (feature selection) drives weights all the way to 0.
* Regularization helps with multicollinearity.
* Regularization prevents overfitting.
* The parameter `C` controls the strength of regularization.
* Decreasing `C` increases the strength of regularization, simplifying the model and preventing overfitting.

```
X = x1.values.reshape(-1,1)

model_sk = LogisticRegression(solver='liblinear', C=100.0)
model_sk.fit(X, y)
print(model_sk.intercept_, model_sk.coef_)
```

### Plot Logistic Regression Curve

```
xx = np.linspace(1300, 2100, 100)
mu = 1.0/(1 + np.exp(-model_sk.coef_[0][0]*xx - model_sk.intercept_[0]))

plt.scatter(x1, y, color='C0')
plt.xlabel('SAT', fontsize = 20)
plt.ylabel('Admitted', fontsize = 20)
plt.plot(xx, mu, 'r')
plt.show()
```

### Prediction

Make predictions for new values.

```
model_sk.predict([[1520]])

model_sk.predict([[1980]])

model_sk.predict_proba([[1980]])
```

# Multi-Class Logistic Regression

* Load the iris dataset from scikit-learn.
* The classes have already been converted to integer labels.
* 0=Iris-Setosa, 1=Iris-Versicolor, 2=Iris-Virginica.

```
from sklearn import datasets
import numpy as np

iris = datasets.load_iris()
print(iris.DESCR)

X = iris.data[:, [2, 3]]
y = iris.target

print('Class labels:', np.unique(y))
```

### Splitting the data

Split the data into 70% training data and 30% test data:

```
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

print('Label counts in y:', np.bincount(y))
print('Label counts in y_train:', np.bincount(y_train))
print('Label counts in y_test:', np.bincount(y_test))
```

### Standardization

```
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
```

### Visualizing the decision boundary

```
from matplotlib.colors import ListedColormap

def plot_decision_regions(X, y, classifier, resolution=0.02):

    # Set up the markers and the colormap.
    markers = ('s', 'x', 'o', '^', 'v')
    colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
    cmap = ListedColormap(colors[:len(np.unique(y))])

    # Plot the decision boundary.
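    # The decision surface is drawn by classifying every point of a dense grid that
    # spans both features at the given resolution, then shading the plane with
    # contourf according to the predicted class; the training samples are overlaid
    # afterwards, one marker and color per class.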
    x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
                           np.arange(x2_min, x2_max, resolution))
    Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
    Z = Z.reshape(xx1.shape)
    plt.contourf(xx1, xx2, Z, alpha=0.3, cmap=cmap)
    plt.xlim(xx1.min(), xx1.max())
    plt.ylim(xx2.min(), xx2.max())

    for idx, cl in enumerate(np.unique(y)):
        plt.scatter(x=X[y == cl, 0],
                    y=X[y == cl, 1],
                    alpha=0.8,
                    c=colors[idx],
                    marker=markers[idx],
                    label=cl)
```

### MultiClass Logistic Regression

```
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(solver='lbfgs', multi_class='auto', C=.1, random_state=1)
lr.fit(X_train_std, y_train)

plot_decision_regions(X_train_std, y_train, classifier=lr)
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
```

### Prediction

```
# Test data
X_test_std[:1, :]

# Predict the class with the logistic regression model
lr.predict(X_test_std[:1, :])

# Check the predicted probability for each class from the logistic regression model
lr.predict_proba(X_test_std[:1, :])
```
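To make the earlier point about the regularization parameter `C` concrete, here is a small sketch (not part of the original notebook) that refits the multi-class model with a few values of `C` on the standardized iris split defined above; smaller `C` means stronger regularization and, for the default L2 penalty, typically smaller coefficients.

```
from sklearn.linear_model import LogisticRegression
import numpy as np

for C in [100.0, 1.0, 0.01]:
    clf = LogisticRegression(solver='lbfgs', multi_class='auto', C=C, random_state=1)
    clf.fit(X_train_std, y_train)
    # Mean absolute coefficient size shrinks as C decreases (stronger regularization).
    print('C={:>6}: mean |coef|={:.4f}, test accuracy={:.3f}'.format(
        C, np.abs(clf.coef_).mean(), clf.score(X_test_std, y_test)))
```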
```
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import util as u
```

# validating easy/hard pos/neg mining

## embedding distances across time

```
m = list(u.slurp_manifest("imgs/02_20c_10o.c00_r00.manifest"))
e = np.load('runs/04/c00_r00.embeddings.npy')

adjacent_distances = []
for i in range(len(e)-1):
    adjacent_distances.append(np.dot(e[i], e[i+1].T))
    #adjacent_distances.append(np.linalg.norm(e[i]-e[i+1]))

ax = sns.distplot(adjacent_distances)
ax.set_title('embedding cosine sim between adjacent frames')
ax.set_xlabel('cosine sim')
ax.set_ylabel('density')
ax.figure.savefig("blog_imgs/embed_sim_over_time.adjacent.png")

random_pair_distances = []
for _ in range(len(adjacent_distances)):
    i, j = np.random.randint(0, len(adjacent_distances), (2,))
    random_pair_distances.append(np.dot(e[i], e[j].T))

ax = sns.distplot(random_pair_distances)
ax.set_title('embedding cosine sim between random frame pairs')
ax.set_xlabel('cosine sim')
ax.set_ylabel('density')
ax.figure.savefig("blog_imgs/embed_sim_over_time.random.png")
```

## embedding distances across camera positions

```
m = list(u.slurp_manifest("imgs/perturbed.manifest"))
e = np.load('perturbed.embeddings.trained.npy')

camera_distances = [0]
sims = [1]
for line in open("perturbed_cameras.ssv", "r").readlines():
    cols = line.strip().split(" ")
    assert len(cols) == 5
    i, cs = int(cols[0]), float(cols[4])
    camera_distances.append(cs)
    sims.append(np.dot(e[0], e[i].T))

ax = sns.scatterplot(sims, camera_distances)
ax.set_title('effect on embedding by perturbing camera (trained model)')
ax.set_xlabel('cosine sim')
ax.set_ylabel('camera perturbation')
#ax.figure.savefig("blog_imgs/embed_sims_over_camera_perturbation.trained.png")

m = list(u.slurp_manifest("imgs/perturbed.manifest"))
e = np.load('perturbed.embeddings.untrained.npy')

camera_distances = [0]
sims = [1]
for line in open("perturbed_cameras.ssv", "r").readlines():
    cols = line.strip().split(" ")
    assert len(cols) == 5
    i, cs = int(cols[0]), float(cols[4])
    camera_distances.append(cs)
    sims.append(np.dot(e[0], e[i].T))

ax = sns.scatterplot(sims, camera_distances)
ax.set_title('effect on embedding by perturbing camera (untrained model)')
ax.set_xlabel('cosine sim')
ax.set_ylabel('camera perturbation')
ax.figure.savefig("blog_imgs/embed_sims_over_camera_perturbation.untrained.png")
```

# completely random hacking

```
e_w = np.load('e_works.npy')
e_t = np.load('e_test.npy')
assert e_w.shape == e_t.shape
np.all(np.isclose(e_w, e_t))

e_t[:5,:5]

e2 = np.load('runs/04/c00_r00.embeddings.npy')
e2[:5,:5]

np.all(np.isclose(e, e2))

sns.distplot(sims.flatten())

e1 = np.load('runs/02/c00_r00.embeddings.npy')
e2 = np.load('runs/02/c01.embeddings.npy')
e1.shape, e2.shape

sns.distplot(np.dot(e1, e1.T).flatten())

sims = np.dot(e1[:5], e2.T)
sims.reshape((-1)).shape

sns.distplot(sims.flatten())

e1[:5,:5]

e2[:5,:5]

e = np.load('test.embeddings.npy')
e[:5,:5]

sims = np.dot(e, e.T).flatten()
np.min(sims), np.max(sims)

import util as u
manifest_a = list(u.slurp_manifest('runs/01/c00_r00.manifest'))
manifest_b = list(u.slurp_manifest('runs/01/c01.manifest'))
e_a = np.load("runs/02/c00_r00.embeddings.npy")
e_b = np.load("runs/02/c01.embeddings.npy")
print(e1.shape, e2.shape)
print(manifest_a.index('runs/01/imgs/c00/r00/f666.png'),
      manifest_b.index('runs/01/imgs/c01/r00/f670.png'))

idxs_a = range(709)
sims = np.dot(e_a[idxs_a], e_b.T)
print(sims.shape)
print(sims[:3,:3])

top_N = np.argmax(sims, axis=1)
top_N[:5]

for e_a_idx, closest_e_b_idx in enumerate(top_N):
    print("e_a_idx", e_a_idx,
"closest_e_b_idx", closest_e_b_idx) print(np.dot(e_a[e_a_idx], e_b[closest_e_b_idx].T)) print(manifest_a[e_a_idx], manifest_b[closest_e_b_idx]) if e_a_idx > 10: break np.argsort(sims).shape[closest_e_b_idx] top_5 = np.argsort(sims.T)[:,-5:] top_5.shape top_5[3] ```
# Multi-Objective Optimization Ax API

### Using the Service API

For multi-objective optimization (MOO) in the `AxClient`, objectives are specified through the `ObjectiveProperties` dataclass. An `ObjectiveProperties` requires a boolean `minimize`, and also accepts an optional floating point `threshold`. If a `threshold` is not specified, Ax will infer it through the use of heuristics. If the user knows the region of interest (because they have specs or prior knowledge), then specifying the thresholds is preferable to inferring them. But if the user would need to guess, inferring is preferable. To learn more about how to choose a threshold, see [Set Objective Thresholds to focus candidate generation in a region of interest](#Set-Objective-Thresholds-to-focus-candidate-generation-in-a-region-of-interest).

See the [Service API Tutorial](/tutorials/gpei_hartmann_service.html) for more information on running experiments with the Service API.

```
from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties
import torch

# Plotting imports and initialization
from ax.utils.notebook.plotting import render, init_notebook_plotting
from ax.plot.pareto_utils import compute_posterior_pareto_frontier
from ax.plot.pareto_frontier import plot_pareto_frontier
init_notebook_plotting()

# Load our sample 2-objective problem
from botorch.test_functions.multi_objective import BraninCurrin
branin_currin = BraninCurrin(negate=True).to(
    dtype=torch.double,
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)

ax_client = AxClient()
ax_client.create_experiment(
    name="moo_experiment",
    parameters=[
        {
            "name": f"x{i+1}",
            "type": "range",
            "bounds": [0.0, 1.0],
        }
        for i in range(2)
    ],
    objectives={
        # `threshold` arguments are optional
        "a": ObjectiveProperties(minimize=False, threshold=branin_currin.ref_point[0]),
        "b": ObjectiveProperties(minimize=False, threshold=branin_currin.ref_point[1])
    },
    overwrite_existing_experiment=True,
    is_test=True,
)
```

### Create an Evaluation Function

In the case of MOO experiments, evaluation functions can be any arbitrary function that takes in a `dict` of parameter names mapped to values and returns a `dict` of objective names mapped to a `tuple` of mean and SEM values.

```
def evaluate(parameters):
    evaluation = branin_currin(torch.tensor([parameters.get("x1"), parameters.get("x2")]))
    # In our case, standard error is 0, since we are computing a synthetic function.
    # Set standard error to None if the noise level is unknown.
    return {"a": (evaluation[0].item(), 0.0), "b": (evaluation[1].item(), 0.0)}
```

### Run Optimization

```
for i in range(25):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to external system.
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))
```

### Plot Pareto Frontier

```
objectives = ax_client.experiment.optimization_config.objective.objectives
frontier = compute_posterior_pareto_frontier(
    experiment=ax_client.experiment,
    data=ax_client.experiment.fetch_data(),
    primary_objective=objectives[1].metric,
    secondary_objective=objectives[0].metric,
    absolute_metrics=["a", "b"],
    num_points=20,
)
render(plot_pareto_frontier(frontier, CI_level=0.90))
```

# Deep Dive

In the rest of this tutorial, we will show two algorithms available in Ax for multi-objective optimization and visualize how they compare to each other and to quasirandom search.
MOO covers the case where we care about multiple outcomes in our experiment but we do not know beforehand a specific weighting of those objectives (covered by `ScalarizedObjective`) or a specific constraint on one objective (covered by `OutcomeConstraint`s) that will produce the best result. The solution in this case is to find a whole Pareto frontier, a surface in outcome-space containing points that can't be improved in one outcome without worsening another. This shows us the tradeoffs between objectives that we can choose to make.

### Problem Statement

Optimize a list of M objective functions $\bigl(f^{(1)}(x), \ldots, f^{(M)}(x)\bigr)$ over a bounded search space $\mathcal X \subset \mathbb R^d$. We assume $f^{(i)}$ are expensive-to-evaluate black-box functions with no known analytical expression and no observed gradients. For instance, consider a machine learning model where we're interested in maximizing accuracy and minimizing inference time, with $\mathcal X$ the set of possible configurations.

### Pareto Optimality

In a multi-objective optimization problem, there typically is no single best solution. Rather, the *goal* is to identify the set of Pareto optimal solutions, such that any improvement in one objective means deteriorating another. Provided with the Pareto set, decision-makers can select an objective trade-off according to their preferences. In the plot below, the red dots are the Pareto optimal solutions (assuming both objectives are to be minimized).

![pareto front](attachment:pareto_front%20%281%29.png)

### Evaluating the Quality of a Pareto Front (Hypervolume)

Given a reference point $r \in \mathbb R^M$, which we represent as a list of M `ObjectiveThreshold`s, one for each coordinate, the hypervolume (HV) of a Pareto set $\mathcal P = \{y_i\}_{i=1}^{|\mathcal P|}$ is the volume of the space dominated (superior in every one of our M objectives) by $\mathcal P$ and bounded by the point $r$. The reference point should be set to be slightly worse (10% is reasonable) than the worst value of each objective that a decision maker would tolerate. In the figure below, the grey area is the hypervolume in this 2-objective problem.

![hv_figure](attachment:hv_figure%20%281%29.png)

### Set Objective Thresholds to focus candidate generation in a region of interest

The plots below show three different sets of points generated by the qEHVI algorithm with different objective thresholds (aka reference points). Note that here we use absolute thresholds, but thresholds can also be relative to a status_quo arm.

The first plot shows the points without the `ObjectiveThreshold`s visible (they're set far below the origin of the graph).

The second shows the points generated with (-18, -6) as thresholds. The regions violating the thresholds are greyed out. Only the white region in the upper right exceeds both thresholds; points in this region dominate the intersection of these thresholds (this intersection is the reference point). Only points in this region contribute to the hypervolume objective. A few exploration points are not in the valid region, but almost all the rest of the points are.

The third shows points generated with a very strict pair of thresholds, (-18, -2). Only the white region in the upper right exceeds both thresholds. Many points do not lie in the dominating region, but they are still more concentrated there than in the second example.
![objective_thresholds_comparison.png](attachment:objective_thresholds_comparison.png)

### Further Information

A deeper explanation of the qEHVI and qParEGO algorithms this notebook explores can be found at https://arxiv.org/abs/2006.05078, and the underlying BoTorch implementation has a researcher-oriented tutorial at https://botorch.org/tutorials/multi_objective_bo.

## Setup

```
import pandas as pd
from ax import *
import numpy as np

from ax.metrics.noisy_function import NoisyFunctionMetric
from ax.service.utils.report_utils import exp_to_df
from ax.runners.synthetic import SyntheticRunner

# Factory methods for creating multi-objective optimization models.
from ax.modelbridge.factory import get_MOO_EHVI, get_MOO_PAREGO

# Analysis utilities, including a method to evaluate hypervolumes
from ax.modelbridge.modelbridge_utils import observed_hypervolume
```

## Define experiment configurations

### Search Space

```
x1 = RangeParameter(name="x1", lower=0, upper=1, parameter_type=ParameterType.FLOAT)
x2 = RangeParameter(name="x2", lower=0, upper=1, parameter_type=ParameterType.FLOAT)

search_space = SearchSpace(
    parameters=[x1, x2],
)
```

### MultiObjectiveOptimizationConfig

To optimize multiple objectives, we must create a `MultiObjective` containing the metrics we'll optimize and a `MultiObjectiveOptimizationConfig` (which contains `ObjectiveThreshold`s) instead of our more typical `Objective` and `OptimizationConfig`.

We define `NoisyFunctionMetric`s to wrap our synthetic Branin-Currin problem's outputs. Add noise to see how robust our different optimization algorithms are.

```
class MetricA(NoisyFunctionMetric):
    def f(self, x: np.ndarray) -> float:
        return float(branin_currin(torch.tensor(x))[0])

class MetricB(NoisyFunctionMetric):
    def f(self, x: np.ndarray) -> float:
        return float(branin_currin(torch.tensor(x))[1])

metric_a = MetricA("a", ["x1", "x2"], noise_sd=0.0, lower_is_better=False)
metric_b = MetricB("b", ["x1", "x2"], noise_sd=0.0, lower_is_better=False)

mo = MultiObjective(
    objectives=[Objective(metric=metric_a), Objective(metric=metric_b)],
)

objective_thresholds = [
    ObjectiveThreshold(metric=metric, bound=val, relative=False)
    for metric, val in zip(mo.metrics, branin_currin.ref_point)
]

optimization_config = MultiObjectiveOptimizationConfig(
    objective=mo,
    objective_thresholds=objective_thresholds,
)
```

## Define experiment creation utilities

These construct our experiment, then initialize with Sobol points before we fit a Gaussian Process model to those initial points.

```
# Reasonable defaults for number of quasi-random initialization points and for subsequent model-generated trials.
N_INIT = 6
N_BATCH = 25

def build_experiment():
    experiment = Experiment(
        name="pareto_experiment",
        search_space=search_space,
        optimization_config=optimization_config,
        runner=SyntheticRunner(),
    )
    return experiment

## Initialize with Sobol samples
def initialize_experiment(experiment):
    sobol = Models.SOBOL(search_space=experiment.search_space)

    for _ in range(N_INIT):
        experiment.new_trial(sobol.gen(1)).run()

    return experiment.fetch_data()
```

# Sobol

We use quasirandom points as a fast baseline for evaluating the quality of our multi-objective optimization algorithms.
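Before running the full Sobol baseline in the next cell, here is a tiny standalone illustration of what "quasirandom points" means. This snippet uses PyTorch's `SobolEngine` directly, purely as a sketch; it is not how Ax generates its points (Ax uses `Models.SOBOL`, as shown below), and the dimension and seed are arbitrary choices for illustration.

```
import torch
from torch.quasirandom import SobolEngine

# Eight 2-D Sobol points in [0, 1]^2 -- spread more evenly than i.i.d. uniform draws.
sobol = SobolEngine(dimension=2, scramble=True, seed=0)
print(sobol.draw(8))
```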
```
sobol_experiment = build_experiment()
sobol_data = initialize_experiment(sobol_experiment)

sobol_model = Models.SOBOL(
    experiment=sobol_experiment,
    data=sobol_data,
)
sobol_hv_list = []
for i in range(N_BATCH):
    generator_run = sobol_model.gen(1)
    trial = sobol_experiment.new_trial(generator_run=generator_run)
    trial.run()
    exp_df = exp_to_df(sobol_experiment)
    outcomes = np.array(exp_df[['a', 'b']], dtype=np.double)
    # Fit a GP-based model in order to calculate hypervolume.
    # We will not use this model to generate new points.
    dummy_model = get_MOO_EHVI(
        experiment=sobol_experiment,
        data=sobol_experiment.fetch_data(),
    )
    try:
        hv = observed_hypervolume(modelbridge=dummy_model)
    except:
        hv = 0
        print("Failed to compute hv")
    sobol_hv_list.append(hv)
    print(f"Iteration: {i}, HV: {hv}")

sobol_outcomes = np.array(exp_to_df(sobol_experiment)[['a', 'b']], dtype=np.double)
```

## qEHVI

Expected HyperVolume Improvement. This is our current recommended algorithm for multi-objective optimization, when a reference point is already known.

```
ehvi_experiment = build_experiment()
ehvi_data = initialize_experiment(ehvi_experiment)

ehvi_hv_list = []
ehvi_model = None
for i in range(N_BATCH):
    ehvi_model = get_MOO_EHVI(
        experiment=ehvi_experiment,
        data=ehvi_data,
    )
    generator_run = ehvi_model.gen(1)
    trial = ehvi_experiment.new_trial(generator_run=generator_run)
    trial.run()
    ehvi_data = Data.from_multiple_data([ehvi_data, trial.fetch_data()])

    exp_df = exp_to_df(ehvi_experiment)
    outcomes = np.array(exp_df[['a', 'b']], dtype=np.double)
    try:
        hv = observed_hypervolume(modelbridge=ehvi_model)
    except:
        hv = 0
        print("Failed to compute hv")
    ehvi_hv_list.append(hv)
    print(f"Iteration: {i}, HV: {hv}")

ehvi_outcomes = np.array(exp_to_df(ehvi_experiment)[['a', 'b']], dtype=np.double)
```

## Plot qEHVI Pareto Frontier based on model posterior

The plotted points are samples from the fitted model's posterior, not observed samples.

```
frontier = compute_posterior_pareto_frontier(
    experiment=ehvi_experiment,
    data=ehvi_experiment.fetch_data(),
    primary_objective=metric_b,
    secondary_objective=metric_a,
    absolute_metrics=["a", "b"],
    num_points=20,
)

render(plot_pareto_frontier(frontier, CI_level=0.90))
```

## qParEGO

This is a good alternative algorithm for multi-objective optimization when qEHVI runs too slowly or produces poor results.

```
parego_experiment = build_experiment()
parego_data = initialize_experiment(parego_experiment)

parego_hv_list = []
parego_model = None
for i in range(N_BATCH):
    parego_model = get_MOO_PAREGO(
        experiment=parego_experiment,
        data=parego_data,
    )
    generator_run = parego_model.gen(1)
    trial = parego_experiment.new_trial(generator_run=generator_run)
    trial.run()
    parego_data = Data.from_multiple_data([parego_data, trial.fetch_data()])

    exp_df = exp_to_df(parego_experiment)
    outcomes = np.array(exp_df[['a', 'b']], dtype=np.double)
    try:
        hv = observed_hypervolume(modelbridge=parego_model)
    except:
        hv = 0
        print("Failed to compute hv")
    parego_hv_list.append(hv)
    print(f"Iteration: {i}, HV: {hv}")

parego_outcomes = np.array(exp_to_df(parego_experiment)[['a', 'b']], dtype=np.double)
```

## Plot qParEGO Pareto Frontier based on model posterior

The plotted points are samples from the fitted model's posterior, not observed samples.
```
frontier = compute_posterior_pareto_frontier(
    experiment=parego_experiment,
    data=parego_experiment.fetch_data(),
    primary_objective=metric_b,
    secondary_objective=metric_a,
    absolute_metrics=["a", "b"],
    num_points=20,
)

render(plot_pareto_frontier(frontier, CI_level=0.90))
```

## Plot empirical data

#### Plot observed hypervolume, with color representing the iteration that a point was generated on.

To examine the optimization process from another perspective, we plot the collected observations under each algorithm, where the color corresponds to the BO iteration at which the point was collected. The plot on the right for $q$EHVI shows that $q$EHVI quickly identifies the Pareto front and most of its evaluations are very close to the Pareto front. $q$ParEGO also has many observations close to the Pareto front, but it relies on optimizing random scalarizations, which is a less principled way of optimizing the Pareto front compared to $q$EHVI, which explicitly focuses on improving the Pareto front. Sobol generates random points and has few points close to the Pareto front.

```
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
from matplotlib.cm import ScalarMappable

fig, axes = plt.subplots(1, 3, figsize=(20,6))
algos = ["Sobol", "parEGO", "EHVI"]
outcomes_list = [sobol_outcomes, parego_outcomes, ehvi_outcomes]
cm = plt.cm.get_cmap('viridis')
BATCH_SIZE = 1

n_results = N_BATCH*BATCH_SIZE + N_INIT
batch_number = torch.cat([torch.zeros(N_INIT), torch.arange(1, N_BATCH+1).repeat(BATCH_SIZE, 1).t().reshape(-1)]).numpy()
for i, train_obj in enumerate(outcomes_list):
    x = i
    sc = axes[x].scatter(train_obj[:n_results, 0], train_obj[:n_results,1], c=batch_number[:n_results], alpha=0.8)
    axes[x].set_title(algos[i])
    axes[x].set_xlabel("Objective 1")
    axes[x].set_xlim(-150, 5)
    axes[x].set_ylim(-15, 0)
axes[0].set_ylabel("Objective 2")
norm = plt.Normalize(batch_number.min(), batch_number.max())
sm = ScalarMappable(norm=norm, cmap=cm)
sm.set_array([])
fig.subplots_adjust(right=0.9)
cbar_ax = fig.add_axes([0.93, 0.15, 0.01, 0.7])
cbar = fig.colorbar(sm, cax=cbar_ax)
cbar.ax.set_title("Iteration")
```

# Hypervolume statistics

We track the hypervolume of the space dominated by points that dominate the reference point.

#### Plot the results

The plot below shows a common metric of multi-objective optimization performance when the true Pareto frontier is known: the log difference between the hypervolume of the true Pareto front and the hypervolume of the approximate Pareto front identified by each algorithm. The log hypervolume difference is plotted at each step of the optimization for each of the algorithms. The plot shows that $q$EHVI vastly outperforms $q$ParEGO, which outperforms the Sobol baseline.
```
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline

iters = np.arange(1, N_BATCH + 1)
log_hv_difference_sobol = np.log10(branin_currin.max_hv - np.asarray(sobol_hv_list))[:N_BATCH + 1]
log_hv_difference_parego = np.log10(branin_currin.max_hv - np.asarray(parego_hv_list))[:N_BATCH + 1]
log_hv_difference_ehvi = np.log10(branin_currin.max_hv - np.asarray(ehvi_hv_list))[:N_BATCH + 1]

fig, ax = plt.subplots(1, 1, figsize=(8, 6))
ax.plot(iters, log_hv_difference_sobol, label="Sobol", linewidth=1.5)
ax.plot(iters, log_hv_difference_parego, label="qParEGO", linewidth=1.5)
ax.plot(iters, log_hv_difference_ehvi, label="qEHVI", linewidth=1.5)
ax.set(xlabel='number of observations (beyond initial points)', ylabel='Log Hypervolume Difference')
ax.legend(loc="lower right")
```
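To make the hypervolume definition used above concrete, here is a small, self-contained sketch (independent of Ax's `observed_hypervolume` utility) of the 2-objective case: it computes the area dominated by a set of points and bounded by a reference point when both objectives are maximized. The point set and reference values below are made up purely for illustration.

```
import numpy as np

def hypervolume_2d(points, ref):
    """Area dominated by `points` and bounded by `ref` (both objectives maximized)."""
    pts = np.asarray(points, dtype=float)
    pts = pts[np.all(pts > ref, axis=1)]   # only points that actually beat the reference
    pts = pts[np.argsort(-pts[:, 0])]      # sweep from the largest objective-1 value down
    hv, y_level = 0.0, ref[1]
    for x, y in pts:
        if y > y_level:                    # non-dominated point: adds a horizontal slab
            hv += (x - ref[0]) * (y - y_level)
            y_level = y
    return hv

# Three made-up Pareto points and a reference at the origin: the staircase area is 6.0.
print(hypervolume_2d([[3.0, 1.0], [2.0, 2.0], [1.0, 3.0]], ref=[0.0, 0.0]))
```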
# Galaxy morphometry

Primarily, morphometrics can be used to address galaxy morphology. Different works use a non-parametric approach to measure a galaxy's shape characteristics. `Lotz et al. (2004)` characterize galaxies by their Concentration and Asymmetry. Furthermore, `Ferrari et al. (2015)` include Shannon entropy (information entropy) to quantify the distribution of pixel values. More recently, `Rosa et al. (2018)` characterized a galaxy's morphology using the second moment of the gradient of the images through Gradient Pattern Analysis.

One of the critical features of non-parametric morphology estimation is understanding how a given parameter can reliably separate elliptical and spiral galaxies. Each metric measures patterns and forms in the image: Concentration (C) measures how tightly pixels are distributed in the galaxy structure; Asymmetry (A) measures the irregularity of the form of the galaxy's disk and bulge; Smoothness (S) measures the presence of small structures in the galaxy's disk (star-forming regions); Entropy (H) measures the heterogeneity of the pixel distribution; and G2 analyses the asymmetric vector field (variation of flux intensity). When applied to an image of a clearly elliptical galaxy, C will usually have a high value, and A, S, H, and G2 will have low values; for a clearly spiral galaxy, it is the opposite. Combining these values can provide a solid intuition of the class of unknown galaxies.

Besides using the morphometric parameters to study the actual physical processes molding galaxies directly, they can also be used in Machine Learning (ML) methods to discriminate ETGs from LTGs. `Barchi et al. (2020)` conduct a thorough study of ML using the CASHG2 system as the primary input information, with Galaxy Zoo 1 (GZ1) `(LINTOTT et al., 2008; LINTOTT et al., 2011)` providing the "true" classification. These are the main ingredients for the training step. Several ML algorithms were tested: Decision Tree (DT); Support Vector Machine (SVM); and Multilayer Perceptron (MLP). DT had a slightly better performance than the other two, with an overall accuracy (OA) of 98.5% when dealing only with bright galaxies. When applying a Deep Learning (DL) technique, they find only a small increase in OA, to 99.5%.

CyMorph can also be applied to:

- X-ray maps tracing gas from the ICM. By combining the results of the galaxy's morphology and the ICM hot gas X-ray morphology properties, we plan to investigate how the cluster's ICM morphological properties (high asymmetry, for example) affect member galaxies' morphology.
- Tracing galaxy properties typical of $\gamma$-ray burst (GRB) hosts, to prioritize targets with similar characteristics during gravitational wave optical follow-ups `(Santana-Silva et al., in prep.)`.

In summary, CyMorph can extract five different metrics: Concentration (C), Asymmetry (A), Smoothness (S), Entropy (H), and Gradient Pattern Analysis (G2). Each of these metrics (except G2) ranges from 0 (min) to 1 (max) depending on the particular features of the content of the image. These metrics have a wide range of applications in further research.

## Metrics

### Concentration

Concentration is straightforward to grasp intuitively; it simply measures how the flux is distributed over the galaxy profile, that is, whether it is concentrated in the center (bulge) of the galaxy or spread over the whole profile. The practical process to perform this kind of measurement consists of several steps.
The literature offers different approaches to calculating concentration `(BERSHADY et al., 2000; GRAHAM et al., 2001b; ABRAHAM et al., 1994)`. Here we follow the method proposed in `Conselice (2003)` and `Lotz et al. (2004)`. Figure 1 shows $R_p$ and $2*R_p$ (left panel) and the $\eta$ profile (right panel) of a spiral galaxy.

<figure>
<img src='imgs/rp_eta_ppt.png' alt='missing' class="center"/>
</figure>
<center><i>Figure 1</i></center>

By definition, Concentration is the ratio of the two radii enclosing two different fractions of the total flux of a galaxy. It is defined by $C = \log_{10}(R_1/R_2)$, where $R_1$ and $R_2$ are the radii enclosing the chosen fractions of the total flux; they are also called the outer and inner radii. The flux fractions can in principle be chosen arbitrarily, anywhere between 100% and 0% of the total flux (containing all of the flux and none of it), and several studies have used different pairs of $R_1$ and $R_2$ `(LOTZ et al., 2004; FERRARI et al., 2015)`. Panel A in Figure 2 shows the accumulated flux curve for an elliptical galaxy, and panel B for a spiral galaxy. From the comparison, it can be seen that the flux growth of the elliptical galaxy slows considerably already at $R_2$, reflecting the highly concentrated center of this galaxy. In the case of the spiral galaxy, the accumulated flux keeps increasing more uniformly even beyond $R_2$.

<figure>
<img src='imgs/c_showcase.png' alt='missing' class="center"/>
</figure>
<center><i>Figure 2</i></center>

The processing starts by calculating the accumulated flux intensity curve and the $\eta$ profile on the cleaned image. Galaxies are resolved objects with poorly defined edges and do not all have the same radial surface brightness profile, so some care is required to define the flux associated with each object. The $\eta$ profile is necessary to obtain the Petrosian radius ($R_p$), because the total flux of a galaxy is defined as the accumulated flux within $2*R_p$. The aperture $2*R_p$ is large enough to contain nearly all the flux for typical galaxy profiles, but small enough that the impact of sky noise is negligible `Blanton et al. (2001)`. Because it is based on the $\eta$ profile, it is also largely insensitive to variations in the limiting surface brightness and in redshift (in the sense of distance), providing reliable results for galaxies with a high signal-to-noise ratio. We follow `Blanton et al. (2001)` and `Strateva et al. (2001)` and set $R_p$ at $\eta = 0.2$, as shown in the right panel of Figure 1. The left panel of Figure 1 shows the apertures at $R_p$ and $2*R_p$. Finally, it becomes possible to set the radii that contain the chosen fractions of the total flux. Intuitively, and as can be seen in Figure 2, an elliptical galaxy has its flux strongly concentrated in the center, so the inner radius $R_2$ is very small compared with $R_1$, which pushes the ratio up. The opposite occurs with a spiral galaxy: although much of the flux is still concentrated in the center, the spiral arms add flux at larger radii, so $R_2$ grows closer to $R_1$ and the ratio is driven down.

### Asymmetry

Asymmetry may be the most uncomplicated metric to extract; its simplicity lies in the fact that it is not necessary to perform any additional operation on the $segmented\_image$. Tracing the asymmetry distribution in the galaxy profile can help to reveal dynamic processes in galaxies. This is especially true for the collisionless stellar component, which tracks the matter distribution more closely.
Galaxies disturbed by interactions or mergers with another galaxy, for instance, will tend to have high asymmetries `(CONSELICE et al., 2000)`. Asymmetry, by definition, measures the degree of irregularity of the galaxy profile. To obtain it, CyMorph rotates the segmented image by $180^{\circ}$ and runs a $for$ loop over both images (segmented and rotated). This loop compares each $[i,j]$ pixel of the two images, and if both are non-zero (containing flux counts), the pixels are stored in $list1$ and $list2$ — the lists of pixels for the segmented and rotated images, respectively. Figure 3 shows how the correlation works when we collect the pixels of the input (Panel A) and rotated (Panel B) images. Panel C shows the visual correlation of $list1$ and $list2$, the segmented and rotated pixels respectively.

<figure>
<img src='imgs/a_showcase.png' alt='missing' class="center"/>
</figure>
<center><i>Figure 3</i></center>

The next step is the calculation of the correlation coefficient between the pixel lists of the segmented and rotated images. CyMorph uses the Pearson and Spearman rank `(PRESS, 2005)` correlation coefficients between the two lists. In the case of an elliptical galaxy, the correlation coefficient will be high, since the pixels of an elliptical galaxy are fairly homogeneously distributed. In the case of a spiral galaxy, where the pixel values have a much higher gradient, two pixels at the same position after rotation will have very different values. The formula to calculate A is $A = 1 - spearmanr(list1, list2)$. When the correlation coefficient is high (meaning that there is no significant difference between pixel values in the two images), asymmetry will be low (the case of an elliptical galaxy). When the correlation coefficient is low, the asymmetry will be high (the case of a spiral galaxy). As can be seen in Figure 3, if one rotates a spiral galaxy, the correlation will be low, since spiral arms and similar irregularities mean that corresponding pixels have very different flux values. It is the opposite for an elliptical galaxy: elliptical galaxies have a nearly perfectly symmetric flux distribution, so the correlation will be high.

### Smoothness

Smoothness (or clumpiness) is calculated very similarly to asymmetry. It computes the Pearson and Spearman rank correlation coefficients between the segmented image and its smoothed version `(ABRAHAM et al., 1996; CONSELICE, 2003; FERRARI et al., 2015)`. Instead of a rotation, we apply a second-order Butterworth filter to smooth the original image. This filter provides the advantage of continuous, adaptive control of the degree of smoothing applied to the image `(KASZYNSKI; PISKOROWSKI, 2006; PEDRINI; SCHWARTZ, 2008; SAUTTER, 2018)`. Intuitively, if one smooths an elliptical galaxy, the result is almost the same image, because these galaxies are naturally smooth. Scanning the images, storing the pixels in lists, and calculating the coefficient will therefore yield a high correlation between the segmented and smoothed images. Spiral galaxies produce the opposite result: the correlation between the images is low and the clumpiness is high. The formula to obtain S is $S = 1 - spearmanr(list1, list2)$.
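A minimal sketch of the A and S computations as just described: collect the pixel pairs where both images contain flux and take one minus the Spearman rank correlation. The $180^{\circ}$ rotation matches the text, but the smoothing step here uses a Gaussian blur from `scipy.ndimage` purely as a stand-in for CyMorph's second-order Butterworth filter, and the `sigma` value is an arbitrary illustration.

```
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import spearmanr

def paired_nonzero_pixels(img_a, img_b):
    """Keep the pixel pairs where both images contain flux (non-zero), as in the text."""
    mask = (img_a != 0) & (img_b != 0)
    return img_a[mask], img_b[mask]

def asymmetry(segmented_image):
    """A = 1 - Spearman rank correlation between the image and its 180-degree rotation."""
    rotated = np.rot90(segmented_image, 2)
    list1, list2 = paired_nonzero_pixels(segmented_image, rotated)
    rho, _ = spearmanr(list1, list2)
    return 1.0 - rho

def smoothness(segmented_image, sigma=2.0):
    """S = 1 - Spearman rank correlation between the image and a smoothed copy.

    A Gaussian blur stands in here for CyMorph's second-order Butterworth filter.
    """
    smoothed = gaussian_filter(segmented_image, sigma=sigma)
    list1, list2 = paired_nonzero_pixels(segmented_image, smoothed)
    rho, _ = spearmanr(list1, list2)
    return 1.0 - rho

# Toy check: a smooth, symmetric blob should give low A and S; pure noise gives high values.
yy, xx = np.mgrid[-32:32, -32:32]
blob = np.exp(-(xx**2 + yy**2) / 200.0)
rng = np.random.default_rng(1)
noise = rng.random((64, 64))
print(asymmetry(blob), smoothness(blob))    # both close to 0
print(asymmetry(noise), smoothness(noise))  # clearly larger
```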
The nomenclature can be confusing, but the logic behind this metric is that spiral galaxies present small structures inside the disk that contribute to high clumpiness values, while elliptical galaxies, being naturally smooth, have a high correlation and a low clumpiness value. From Figure 4 we can see that smoothing the spiral galaxy produces a significant difference, while for an elliptical galaxy the change would be almost unnoticeable. The level of change in the image after smoothing is the key factor in the metric calculation. Figure 4 shows how the correlation works when we collect the pixels of the input and smoothed images. Panel C shows the visual correlation of $list1$ and $list2$, the original and smoothed pixels respectively.

<figure>
<img src='imgs/s_showcase.png' alt='missing' class="center"/>
</figure>
<center><i>Figure 4</i></center>

### Entropy

Entropy (H) works very similarly to concentration in the sense that it measures pixel density/frequency in a given number of bins. The number of entropy bins is a parameter that can be tuned to better fit the data at hand; it determines into how many bins the flux distribution will be split. In a simplified manner, H measures the distribution of pixel values in the image by dividing the image into an arbitrary number of bins. The extraction of H does not require any additional image manipulation. The input image pixels are raveled (converted to a 1-d array) and used to calculate the values of the histogram (frequency of the flux) and the bin edges with $numpy.histogram$. This process is illustrated in Figure 5. It is worth noting how the flux of the elliptical galaxy is concentrated in a small area, while the spiral galaxy's flux occupies a broader range of the flux distribution. The next step is the normalization of the frequency counts by the maximum count, followed by the calculation of the entropy value with:

\begin{equation}
H(I)=-\sum_{k}^{K} p\left(I_{k}\right) \log \left[p\left(I_{k}\right)\right]
\end{equation}
<center><i>Equation 1</i></center>

where:

- $p(I_{k})$ - the probability of occurrence of the value $I_{k}$
- $K$ - the number of bins into which the data was split

For discrete variables, $H$ reaches its maximum value for a uniform distribution, when $p(I_{k}) = 1/K$ for all $k$, resulting in $H_{max} = \log K$. The minimum entropy is that of a delta function, for which $H = 0$. Hence, we can obtain the normalized entropy with:

\begin{equation}
\widetilde{H}(I)=\frac{H(I)}{H_{\max }} \quad 0 \leqslant \widetilde{H}(I) \leqslant 1
\end{equation}
<center><i>Equation 2</i></center>

Elliptical galaxies are expected to have low entropy (owing to the natural homogeneity of their flux, i.e. pixel value, distribution), and spiral galaxies are expected to have high entropy values (since they present irregular structures and therefore naturally heterogeneous pixel values) `(BISHOP, 2007; FERRARI et al., 2015)`. The top row of Figure 5 shows the flux distribution of an elliptical galaxy when the image is raveled and used as input for $numpy.histogram$. The narrow distribution limited to low counts, with a fast-decreasing slope, characterizes the flux concentration of the elliptical profile. We see the opposite for a spiral galaxy (middle row), where the flux is spread over a wider range of the flux axis.
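A minimal sketch of the normalized entropy calculation just described (Figure 5, below, illustrates the corresponding histograms). The bin count of 100 is an illustrative choice, and here the histogram counts are converted to probabilities that sum to one, a simplification of the normalization step mentioned above.

```
import numpy as np

def normalized_entropy(segmented_image, n_bins=100):
    """H_tilde = H / log(K), from the histogram of the raveled pixel values."""
    pixels = segmented_image.ravel()
    pixels = pixels[pixels > 0]                 # keep only pixels that contain flux
    counts, _ = np.histogram(pixels, bins=n_bins)
    p = counts / counts.sum()                   # bin probabilities (sum to one)
    p = p[p > 0]                                # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log(p)) / np.log(n_bins))

# Toy check: uniformly spread values give H_tilde near 1; identical values give 0.
rng = np.random.default_rng(0)
print(normalized_entropy(rng.random((64, 64))))      # close to 1
print(normalized_entropy(np.full((64, 64), 3.0)))    # exactly 0
```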
<figure>
<img src='imgs/e_showcase.png' alt='missing' class="center"/>
</figure>
<center><i>Figure 5</i></center>

### Gradient Pattern Analysis (second moment)

The second moment of Gradient Pattern Analysis (GPA), in short G2, can be considered the most complex metric to calculate. According to `Sautter (2018)`, GPA has four moments, but for galaxy morphology classification only the first and second are currently used. `Rosa et al. (2018)` showed that an improved and revised version of the second moment (with due modifications) gives the best results in separating galaxies when compared with the classic CAS classification `(CONSELICE, 2003)`. We have performed a revision and optimization of the code to comply with the original definition.

#### Extraction process

The first step is to convert all zeros in the image to $numpy.nan$. This ensures that the border pixels of the image are ignored during the gradient field generation; otherwise, they would contribute detrimentally to the final result. We then generate the gradient field with $numpy.gradient(segmented\_image)$. In the next step, we generate an asymmetric gradient field. This process consists of locating pairs of pixels at the same distance from the center and comparing their modulus (strength) and phase (direction). If the two pixels of a given pair have the same modulus but opposite phases, they are considered symmetric and removed. This is done for all unique pairs of pixels (to exclude repetition). The resulting lattice is called an asymmetric vector field, because all symmetric pairs have been removed. Next, we obtain the count of asymmetric vectors, their vector sum, and the sum of their moduli. To determine whether the vectors are aligned and have the same magnitude, we calculate the $confluence$ using Equation 3:

\begin{equation}
confluence = \left(\frac{\left|\sum_{i}^{V_{A}} v_{a}^{i}\right|}{\sum_{i}^{V_{A}}\left|v_{a}^{i}\right|}\right)
\end{equation}
<center><i>Equation 3</i></center>

where:

- $v_{a}^{i}$ - the asymmetric vectors
- $V_{A}$ - the count of asymmetric vectors

The final G2 value is obtained with Equation 4 `(ROSA et al., 1999; RAMOS et al., 2000; ROSA et al., 2003; SAUTTER, 2018)`:

\begin{equation}
G_{2} = \frac{V_{A}} {V - V_{C}} (2-confluence)
\end{equation}
<center><i>Equation 4</i></center>

where:

- $V$ - the total number of valid pixels
- $V_{C}$ - the number of contour pixels
- $V_{A}$ - the number of asymmetric vectors
- $2$ - a normalization factor

Figure 6 shows the results in two limiting cases: random noise, resulting in the maximum G2 value (as very few pairs end up canceled), and Gaussian noise, resulting in the minimum G2 value (as almost all pixel pairs end up canceled).

<figure>
<img src='imgs/g2_example.png' alt='missing' class="center"/>
</figure>
<center><i>Figure 6</i></center>

It is possible to fine-tune G2. It has two tolerances: $modulus\_tolerance$ and $phase\_tolerance$. $modulus\_tolerance$ serves as the threshold for the minimum acceptable difference in strength between two vectors. It is worth noting that the moduli are normalized by the maximum value during the calculation, so $modulus\_tolerance$ can be influential even at low values; it ranges from 0 (no tolerance at all) to 1 (any two pixels are considered to have the same modulus). $phase\_tolerance$ serves as the threshold for the minimum acceptable angle difference between two pixels (their vectors).
It ranges from 0 (no tolerance) to $\pi$ (any two pixels are considered to have the same angle).

The top row in Figure 6 shows the results expected for a simulated input image with Gaussian noise. In this case, almost all the vectors should cancel out, the asymmetric gradient field should be empty, and the resulting value of G2 should be near 0 (the minimum value of G2). The bottom row shows the results expected for a simulated input image with random noise. In this case, almost all the vectors are preserved, and the resulting value of G2 should be near 2.0 (the maximum value of G2).

Intuitively, elliptical galaxies, being naturally smooth and round, have many matching vectors, and as a consequence many vectors are canceled. In the case of spiral galaxies, the internal structures distort the gradient field, resulting in many vectors being preserved. Figure 7 visualizes this behavior. The top row shows the results expected for elliptical galaxies: generally, most of the vectors should cancel out, resulting in a low value of G2 ($G2 < 0.5$). The bottom row shows the results expected for spiral galaxies: most of the vectors are preserved, and the resulting value of G2 should be high ($G2 > 1.5$).

<figure>
<img src='imgs/g2_showcase.png' alt='missing' class="center"/>
</figure>
<center><i>Figure 7</i></center>
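A rough sketch of the G2 extraction steps described above. It follows the recipe in the text (zeros converted to NaN, `numpy.gradient`, cancellation of symmetric pairs, confluence, Equation 4), but the pairing of opposite pixels via a point reflection through the image centre, the default tolerances, and the omission of the contour-pixel count are simplifying assumptions, not CyMorph's exact implementation.

```
import numpy as np

def g2(segmented_image, modulus_tolerance=0.05, phase_tolerance=0.1):
    """Simplified second moment of GPA (G2); illustrative, not CyMorph's exact code."""
    img = segmented_image.astype(float).copy()
    img[img == 0] = np.nan                      # ignore background/border pixels
    gy, gx = np.gradient(img)
    modulus = np.hypot(gx, gy)
    phase = np.arctan2(gy, gx)

    valid = ~np.isnan(modulus)
    v_total = int(valid.sum())
    if v_total == 0:
        return 0.0
    norm_mod = modulus / np.nanmax(modulus)     # moduli normalized by the maximum

    # Pair each pixel with its point reflection through the image centre.
    same_mod = np.abs(norm_mod - np.flip(norm_mod)) <= modulus_tolerance
    phase_diff = np.abs(np.abs(phase - np.flip(phase)) - np.pi)
    opposite_phase = phase_diff <= phase_tolerance
    symmetric = valid & ~np.isnan(np.flip(modulus)) & same_mod & opposite_phase

    asym = valid & ~symmetric                   # asymmetric vectors V_A
    v_a = int(asym.sum())
    if v_a == 0:
        return 0.0

    sum_vec = np.array([gx[asym].sum(), gy[asym].sum()])
    denom = modulus[asym].sum()
    confluence = np.linalg.norm(sum_vec) / denom if denom > 0 else 1.0
    v_contour = 0                               # contour pixels not counted in this sketch
    return (v_a / (v_total - v_contour)) * (2.0 - confluence)

# Toy check: random noise keeps most vectors (G2 near 2); a smooth symmetric blob cancels them (G2 near 0).
rng = np.random.default_rng(0)
noise = rng.random((33, 33)) + 0.1
yy, xx = np.mgrid[-16:17, -16:17]
blob = np.exp(-(xx**2 + yy**2) / 50.0)
print(f"G2(noise) = {g2(noise):.2f},  G2(blob) = {g2(blob):.2f}")
```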
```
import torch
import torchvision
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
from torchsummary import summary
from torch.utils.data import random_split
import matplotlib.pyplot as plt
import numpy as np
import warnings
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
from tqdm import tqdm

warnings.filterwarnings('ignore')

# HYPERPARAMS
num_epochs = 15
batch_size = 128
lr = 0.01
dropout_value = 0.0  # the dropout layer is defined below but not used in forward()

# CONSTANTS
NUM_CLASSES = 10
# MNIST is loaded with ToTensor() only (no Normalize), so the "unnormalize"
# step in display_image() is a no-op with these values.
DATA_MEAN = (0.0,)
DATA_STD = (1.0,)

# cuda availability
cuda_available = torch.cuda.is_available()
device = "cuda" if cuda_available else "cpu"

train = torchvision.datasets.MNIST('./data', train=True, download=True,
                                   transform=transforms.Compose([
                                       transforms.ToTensor()
                                   ]))
test = torchvision.datasets.MNIST('./data', train=False, download=True,
                                  transform=transforms.Compose([
                                      transforms.ToTensor()
                                  ]))

SEED = 1

# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)

# For reproducibility
torch.manual_seed(SEED)
if cuda:
    torch.cuda.manual_seed(SEED)

# dataloader arguments - something you'll fetch these from cmdprmt
dataloader_args = dict(shuffle=True, batch_size=batch_size, num_workers=2, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)

# train dataloader
train_loader = torch.utils.data.DataLoader(train, **dataloader_args)

# test dataloader
test_loader = torch.utils.data.DataLoader(test, **dataloader_args)

import matplotlib.pyplot as plt
import numpy as np


def display_image(image, title: str = "Class label"):
    """
    This function takes in normalized tensors, un-normalizes them
    and displays the image as output.

    Args:
    ----
    image: Image which we want to plot.
    title: Label for that image.
    """
    image = image.numpy().transpose((1, 2, 0))  # (C, H, W) --> (H, W, C)

    # Convert mean and std to numpy arrays
    mean = np.asarray(DATA_MEAN)
    std = np.asarray(DATA_STD)

    # Unnormalize the image
    image = std * image + mean
    image = np.clip(image, 0, 1)

    print(title)
    fig = plt.figure()  # Create a new figure
    fig.set_figheight(15)
    fig.set_figwidth(15)
    ax = fig.add_subplot(111)
    ax.axis("off")  # Switch off the axis
    ax.imshow(image)


# Iterate over and get 1 batch of training data
data, targets = next(iter(train_loader))

# make_grid takes all tensors (batch) and joins them into a single big tensor image (almost)
batch_grid = torchvision.utils.make_grid(data)
display_image(batch_grid, title=[str(cls.item()) for cls in targets])


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Input Block
        self.convblock1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=8, kernel_size=(3, 3), padding=0, bias=False),
            nn.ReLU(),
        )  # output_size = 26

        # CONVOLUTION BLOCK 1
        self.convblock2 = nn.Sequential(
            nn.Conv2d(in_channels=8, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
            nn.ReLU(),
        )  # output_size = 24

        # TRANSITION BLOCK 1
        self.convblock3 = nn.Sequential(
            nn.Conv2d(in_channels=16, out_channels=4, kernel_size=(1, 1), padding=0, bias=False),
        )  # output_size = 24
        self.pool1 = nn.MaxPool2d(2, 2)  # output_size = 12

        # CONVOLUTION BLOCK 2
        self.convblock4 = nn.Sequential(
            nn.Conv2d(in_channels=4, out_channels=8, kernel_size=(3, 3), padding=0, bias=False),
            nn.ReLU(),
        )  # output_size = 10
        self.convblock5 = nn.Sequential(
            nn.Conv2d(in_channels=8, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
            nn.ReLU(),
        )  # output_size = 8
        self.convblock6 = nn.Sequential(
            nn.Conv2d(in_channels=16, out_channels=48, kernel_size=(3, 3), padding=0, bias=False),
            nn.ReLU(),
        )  # output_size = 6

        # OUTPUT BLOCK
        self.gap = nn.Sequential(
            nn.AvgPool2d(kernel_size=6)
        )  # output_size = 1
        self.convblock8 = nn.Sequential(
            nn.Conv2d(in_channels=48, out_channels=16, kernel_size=(1, 1), padding=0, bias=False),
        )

        self.classifier = nn.Linear(16, 10)
        self.dropout = nn.Dropout(dropout_value)

    def forward(self, x):
        x = self.convblock1(x)
        x = self.convblock2(x)
        x = self.convblock3(x)
        x = self.pool1(x)
        x = self.convblock4(x)
        x = self.convblock5(x)
        x = self.convblock6(x)
        x = self.gap(x)
        x = self.convblock8(x)
        x = x.view(-1, 16)
        x = self.classifier(x)
        return F.log_softmax(x, dim=-1)


use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
model = Net().to(device)
summary(model, input_size=(1, 28, 28))

from tqdm import tqdm

train_losses = []
test_losses = []
train_acc = []
test_acc = []


def train(model, device, train_loader, optimizer, epoch):
    model.train()
    pbar = tqdm(train_loader)
    correct = 0
    processed = 0
    for batch_idx, (data, target) in enumerate(pbar):
        # get samples
        data, target = data.to(device), target.to(device)

        # Init
        optimizer.zero_grad()
        # In PyTorch, we need to set the gradients to zero before starting backpropagation because PyTorch accumulates the gradients on subsequent backward passes.
        # Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly.

        # Predict
        y_pred = model(data)

        # Calculate loss
        loss = F.nll_loss(y_pred, target)
        train_losses.append(loss.item())

        # Backpropagation
        loss.backward()
        optimizer.step()

        # Update pbar-tqdm
        pred = y_pred.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
        correct += pred.eq(target.view_as(pred)).sum().item()
        processed += len(data)

        pbar.set_description(desc=f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}')
        train_acc.append(100*correct/processed)


def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    test_losses.append(test_loss)

    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

    test_acc.append(100. * correct / len(test_loader.dataset))


model = Net().to(device)
optimizer = optim.Adam(model.parameters(), lr=0.05)
EPOCHS = 15

for epoch in range(EPOCHS):
    print("EPOCH:", epoch)
    train(model, device, train_loader, optimizer, epoch)
    test(model, device, test_loader)

%matplotlib inline
import matplotlib.pyplot as plt

fig, axs = plt.subplots(2, 2, figsize=(15, 10))
axs[0, 0].plot(train_losses)
axs[0, 0].set_title("Training Loss")
axs[1, 0].plot(train_acc[4000:])
axs[1, 0].set_title("Training Accuracy")
axs[0, 1].plot(test_losses)
axs[0, 1].set_title("Test Loss")
axs[1, 1].plot(test_acc)
axs[1, 1].set_title("Test Accuracy")
```
# Exploratory Data Analysis + Features ``` import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt from math import pi import warnings # current version of seaborn generates a bunch of warnings that we'll ignore warnings.filterwarnings("ignore") import seaborn as sns sns.set(style="white", color_codes=True) %matplotlib inline import scipy from scipy.stats import describe from sklearn.model_selection import KFold from sklearn.metrics import mean_squared_error import lightgbm as lgb import xgboost as xgb import sys from fastai.structured import * from fastai.column_data import * from sklearn.model_selection import * train = pd.read_csv("../input/train.csv") # the train dataset is now a Pandas DataFrame test = pd.read_csv("../input/test.csv") # the train dataset is now a Pandas DataFrame # Let's see what's in the trainings data - Jupyter notebooks print the result of the last thing you do train.head() ``` ## Shape of the data ``` print("Santander Value Prediction Challenge train - rows:",train.shape[0]," columns:", train.shape[1]) print("Santander Value Prediction Challenge test - rows:",test.shape[0]," columns:", test.shape[1]) train.head() test.head() ``` ## Missing values ``` train.isnull().values.any() test.isnull().values.any() ``` ## Types of Feature ``` dtype_df = train.dtypes.reset_index() dtype_df.columns = ["Count", "Column Type"] dtype_df.groupby("Column Type").aggregate('count').reset_index() ``` ## Distribution of Target Variable ``` plt.title("Distribution of Target") sns.distplot(train['target'].dropna(),color='blue', kde=True,bins=100) plt.show() ``` ### Violin distribution of target ``` sns.set_style("whitegrid") ax = sns.violinplot(x=train.target.values) plt.show() plt.title("Distribution of log(target)") sns.distplot(np.log1p(train['target']).dropna(),color='blue', kde=True,bins=100) plt.show() sns.set_style("whitegrid") ax = sns.violinplot(x=np.log(1+train.target.values)) plt.show() ``` ## Identifying features that are highly correlated with target ``` labels = [] values = [] for col in train.columns: if col not in ["ID", "target"]: labels.append(col) values.append(np.corrcoef(train[col].values, train["target"].values)[0,1]) corr_df = pd.DataFrame({'columns_labels':labels, 'corr_values':values}) corr_df = corr_df.sort_values(by='corr_values') corr_df = corr_df[(corr_df['corr_values']>0.25) | (corr_df['corr_values']<-0.25)] ind = np.arange(corr_df.shape[0]) width = 0.9 fig, ax = plt.subplots(figsize=(10,6)) rects = ax.barh(ind, np.array(corr_df.corr_values.values), color='black') ax.set_yticks(ind) ax.set_yticklabels(corr_df.columns_labels.values, rotation='horizontal') ax.set_xlabel("Correlation coefficient") ax.set_title("Correlation coefficient of the variables") plt.show() ``` ## Correlation matrix of the most highly correlated features ``` temp_df = train[corr_df.columns_labels.tolist()] corrmat = temp_df.corr(method='pearson') f, ax = plt.subplots(figsize=(12, 12)) sns.heatmap(corrmat, vmax=1., square=True, cmap=plt.cm.BrBG) plt.title("Important variables correlation map", fontsize=15) plt.show() ``` ## Sparsity ``` sparsity = { col: (train[col] == 0).mean() for idx, col in enumerate(train) } sparsity = pd.Series(sparsity) fig = plt.figure(figsize=[7,12]) ax = fig.add_subplot(211) ax.hist(sparsity, range=(0,1), bins=100) ax.set_xlabel('Sparsity of Features') ax.set_ylabel('Number of Features') ax = fig.add_subplot(212) ax.hist(sparsity, range=(0.8,1), bins=100) ax.set_xlabel('Sparsity of Features') ax.set_ylabel('Number of Features') 
plt.show() cat_flds = [] bs = 64 test = test.set_index('ID') train['target'] = np.log(train['target']) x, y, nas = proc_df(train, 'target', skip_flds=['ID']) df_train_x, df_val_x, df_train_y, df_val_y= train_test_split(x, y, test_size=0.1, random_state=42) model_data = ColumnarModelData.from_data_frames( '.', df_train_x, df_val_x, df_train_y, df_val_y, cat_flds, bs, is_reg=True, is_multi=False, test_df=test) emb_szs = [] n_cont = len(df_train_x.columns) emb_drop = 0.0 out_sz = 1 szs = [400, 50] drops = [0.0,0.0] learner = model_data.get_learner(emb_szs, n_cont, emb_drop, out_sz, szs, drops) learner.lr_find2(start_lr=1, end_lr=1000, num_it=500) learner.sched.plot() learner.unfreeze() lr = 0.105 learner.fit(lr, 10, cycle_len=2) preds = learner.predict(is_test = True) train_describe = train.describe() train_describe test_describe = test.describe() test_describe plt.figure(figsize=(12, 5)) plt.hist(train.target.values, bins=100) plt.title('Histogram target counts') plt.xlabel('Count') plt.ylabel('Target') plt.show() plt.figure(figsize=(30, 5)) x = train.iloc[1] plt.hist(x) plt.title('Histogram target counts') plt.xlabel('Count') plt.ylabel('Log 1+Target') plt.show() ``` *This is a highly skewed distribution, so let's try to re-plot it with with log transform of the target.* ``` plt.figure(figsize=(12, 5)) plt.hist(np.log(1+train.target.values), bins=100) plt.title('Histogram target counts') plt.xlabel('Count') plt.ylabel('Log 1+Target') plt.show() sns.set_style("whitegrid") ax = sns.violinplot(x=np.log(1+train.target.values)) plt.show() ``` *Let's take a look at the statistics of the Log(1+target)* ``` train_log_target = train[['target']] train_log_target['target'] = np.log(1+train['target'].values) train_log_target.describe() ``` *We see that the statistical properties of teh Log(1+Target) distribution are much more amenable.* *Now let's take a look at columns with constant value.* ``` constant_train = train.loc[:, (train == train.iloc[0]).all()].columns.tolist() constant_test = test.loc[:, (test == test.iloc[0]).all()].columns.tolist() print('Number of constant columns in the train set:', len(constant_train)) print('Number of constant columns in the test set:', len(constant_test)) ``` > So this is interesting: there are 256 constant columns in the train set, but none in the test set. These constant columns are thus most likely an artifact of the way that the train and test sets were constructed, and not necessarily irrelevant in their own right. This is yet another byproduct of having a very small dataset. For most problems it would be useful to take a look at the description of these columns, but in this competition they are anonymized, and thus would not yield any useful information. So let's subset the colums that we'd use to just those that are not constant. ``` columns_to_use = test.columns.tolist() del columns_to_use[0] # Remove 'ID' columns_to_use = [x for x in columns_to_use if x not in constant_train] #Remove all 0 columns len(columns_to_use) ``` > So we have the total of 4735 columns to work with. However, as mentioned earlier, most of these columns seem to be filled predominatly with zeros. Let's try to get a better sense of this data. ``` describe(train[columns_to_use].values, axis=None) ``` > If we treat all the train matrix values as if they belonged to a single row vector, we see a huge amount of varience, far exceeding the similar variance for the target variable. Now let's plot it to see how diverse the numerical values are. 
``` plt.figure(figsize=(12, 5)) plt.hist(train[columns_to_use].values.flatten(), bins=50) plt.title('Histogram all train counts') plt.xlabel('Count') plt.ylabel('Value') plt.show() ``` > Most of the values are heavily concentrated around 0 Let's see with the log plot.. ``` plt.figure(figsize=(12, 5)) plt.hist(np.log(train[columns_to_use].values.flatten()+1), bins=50) plt.title('Log Histogram all train counts') plt.xlabel('Count') plt.ylabel('Log value') plt.show() ``` > Only marginal improvement - there is a verly small bump close to 15. Let's try out with violin plot ``` sns.set_style("whitegrid") ax = sns.violinplot(x=np.log(train[columns_to_use].values.flatten()+1)) plt.show() ``` *Not really - the plot looks nicer, but the overall shape is almost same.* let's take a look at the distribution of non-zero values. ``` train_nz = np.log(train[columns_to_use].values.flatten()+1) train_nz = train_nz[np.nonzero(train_nz)] plt.figure(figsize=(12, 5)) plt.hist(train_nz, bins=50) plt.title('Log Histogram nonzero train counts') plt.xlabel('Count') plt.ylabel('Log value') plt.show() sns.set_style("whitegrid") ax = sns.violinplot(x=train_nz) plt.show() describe(train_nz) ``` Let's do the same thing with the test data. ``` test_nz = np.log(test[columns_to_use].values.flatten()+1) test_nz = test_nz[np.nonzero(test_nz)] plt.figure(figsize=(12, 5)) plt.hist(test_nz, bins=50) plt.title('Log Histogram nonzero test counts') plt.xlabel('Count') plt.ylabel('Log value') plt.show() sns.set_style("whitegrid") ax = sns.violinplot(x=test_nz) plt.show() describe(test_nz) ``` *Again, we see that these distributions look similar, but they are definitely not the same.* let's take a closer look at the shape and content of the train data. We want to get a better numerical grasp of the true extent of zeros. ``` train[columns_to_use].values.flatten().shape ((train[columns_to_use].values.flatten())==0).mean() ``` *Almost 97% of all values in the train dataframe are zeros. That looks pretty sparse to me, but let's see how much variation is there between different columns.* ``` train_zeros = pd.DataFrame({'Percentile':((train[columns_to_use].values)==0).mean(axis=0), 'Column' : columns_to_use}) train_zeros.head() describe(train_zeros.Percentile.values) ``` *It seems that the vast majority of columns have 95+ percent of zeros in them. 
Let's see how would that look on a plot.* ``` plt.figure(figsize=(12, 5)) plt.hist(train_zeros.Percentile.values, bins=50) plt.title('Histogram percentage zeros train counts') plt.xlabel('Count') plt.ylabel('Value') plt.show() describe(np.log(train[columns_to_use].values+1), axis=None) describe(test[columns_to_use].values, axis=None) describe(np.log(test[columns_to_use].values+1), axis=None) test_zeros = pd.DataFrame({'Percentile':(np.log(1+test[columns_to_use].values)==0).mean(axis=0), 'Column' : columns_to_use}) test_zeros.head() describe(test_zeros.Percentile.values) y = np.log(1+train.target.values) y.shape y ``` ## Predictive Modeling ``` train_1 = lgb.Dataset(train[columns_to_use],y ,feature_name = "auto") params = {'boosting_type': 'gbdt', 'objective': 'regression', 'metric': 'rmse', 'learning_rate': 0.0105, 'num_leaves': 100, 'feature_fraction': 0.4, 'bagging_fraction': 0.6, 'max_depth': 5, 'min_child_weight': 10} clf = lgb.train(params, train_1, num_boost_round = 400, verbose_eval=True) preds = clf.predict(test[columns_to_use]) preds sample_submission = pd.read_csv("../input/sample_submission.csv") sample_submission.target = np.exp(preds)-1 sample_submission.to_csv('simple_lgbm.csv', index=False) sample_submission.head() nr_splits = 5 random_state = 1054 y_oof = np.zeros((y.shape[0])) total_preds = 0 kf = KFold(n_splits=nr_splits, shuffle=True, random_state=random_state) for i, (train_index, val_index) in enumerate(kf.split(y)): print('Fitting fold', i+1, 'out of', nr_splits) X_train, X_val = train[columns_to_use].iloc[train_index], train[columns_to_use].iloc[val_index] y_train, y_val = y[train_index], y[val_index] train_1 = lgb.Dataset(X_train,y_train ,feature_name = "auto") val = lgb.Dataset(X_val ,y_val ,feature_name = "auto") clf = lgb.train(params,train_1,num_boost_round = 400,verbose_eval=True) total_preds += clf.predict(test[columns_to_use])/nr_splits pred_oof = clf.predict(X_val) y_oof[val_index] = pred_oof print('Fold error', np.sqrt(mean_squared_error(y_val, pred_oof))) print('Total error', np.sqrt(mean_squared_error(y, y_oof))) params['max_depth'] = 4 y_oof_2 = np.zeros((y.shape[0])) total_preds_2 = 0 kf = KFold(n_splits=nr_splits, shuffle=True, random_state=random_state) for i, (train_index, val_index) in enumerate(kf.split(y)): print('Fitting fold', i+1, 'out of', nr_splits) X_train, X_val = train[columns_to_use].iloc[train_index], train[columns_to_use].iloc[val_index] y_train, y_val = y[train_index], y[val_index] train_1 = lgb.Dataset(X_train,y_train ,feature_name = "auto") val = lgb.Dataset(X_val ,y_val ,feature_name = "auto") clf = lgb.train(params,train_1,num_boost_round = 400,verbose_eval=True) total_preds_2 += clf.predict(test[columns_to_use])/nr_splits pred_oof = clf.predict(X_val) y_oof_2[val_index] = pred_oof print('Fold error', np.sqrt(mean_squared_error(y_val, pred_oof))) print('Total error', np.sqrt(mean_squared_error(y, y_oof_2))) params['max_depth'] = 6 y_oof_3 = np.zeros((y.shape[0])) total_preds_3 = 0 kf = KFold(n_splits=nr_splits, shuffle=True, random_state=random_state) for i, (train_index, val_index) in enumerate(kf.split(y)): print('Fitting fold', i+1, 'out of', nr_splits) X_train, X_val = train[columns_to_use].iloc[train_index], train[columns_to_use].iloc[val_index] y_train, y_val = y[train_index], y[val_index] train_1 = lgb.Dataset(X_train,y_train ,feature_name = "auto") val = lgb.Dataset(X_val ,y_val ,feature_name = "auto") clf = lgb.train(params,train_1,num_boost_round = 400,verbose_eval=True) total_preds_3 += 
clf.predict(test[columns_to_use])/nr_splits pred_oof = clf.predict(X_val) y_oof_3[val_index] = pred_oof print('Fold error', np.sqrt(mean_squared_error(y_val, pred_oof))) print('Total error', np.sqrt(mean_squared_error(y, y_oof_3))) params['max_depth'] = 7 y_oof_4 = np.zeros((y.shape[0])) total_preds_4 = 0 kf = KFold(n_splits=nr_splits, shuffle=True, random_state=random_state) for i, (train_index, val_index) in enumerate(kf.split(y)): print('Fitting fold', i+1, 'out of', nr_splits) X_train, X_val = train[columns_to_use].iloc[train_index], train[columns_to_use].iloc[val_index] y_train, y_val = y[train_index], y[val_index] train_1 = lgb.Dataset(X_train,y_train ,feature_name = "auto") val = lgb.Dataset(X_val ,y_val ,feature_name = "auto") clf = lgb.train(params,train_1,num_boost_round = 400,verbose_eval=True) total_preds_4 += clf.predict(test[columns_to_use])/nr_splits pred_oof = clf.predict(X_val) y_oof_4[val_index] = pred_oof print('Fold error', np.sqrt(mean_squared_error(y_val, pred_oof))) print('Total error', np.sqrt(mean_squared_error(y, y_oof_4))) params['max_depth'] = 8 y_oof_5 = np.zeros((y.shape[0])) total_preds_5 = 0 kf = KFold(n_splits=nr_splits, shuffle=True, random_state=random_state) for i, (train_index, val_index) in enumerate(kf.split(y)): print('Fitting fold', i+1, 'out of', nr_splits) X_train, X_val = train[columns_to_use].iloc[train_index], train[columns_to_use].iloc[val_index] y_train, y_val = y[train_index], y[val_index] train_1 = lgb.Dataset(X_train,y_train ,feature_name = "auto") val = lgb.Dataset(X_val ,y_val ,feature_name = "auto") clf = lgb.train(params,train_1,num_boost_round = 400,verbose_eval=True) total_preds_5 += clf.predict(test[columns_to_use])/nr_splits pred_oof = clf.predict(X_val) y_oof_5[val_index] = pred_oof print('Fold error', np.sqrt(mean_squared_error(y_val, pred_oof))) print('Total error', np.sqrt(mean_squared_error(y, y_oof_5))) params['max_depth'] = 10 y_oof_6 = np.zeros((y.shape[0])) total_preds_6 = 0 kf = KFold(n_splits=nr_splits, shuffle=True, random_state=random_state) for i, (train_index, val_index) in enumerate(kf.split(y)): print('Fitting fold', i+1, 'out of', nr_splits) X_train, X_val = train[columns_to_use].iloc[train_index], train[columns_to_use].iloc[val_index] y_train, y_val = y[train_index], y[val_index] train_1 = lgb.Dataset(X_train,y_train ,feature_name = "auto") val = lgb.Dataset(X_val ,y_val ,feature_name = "auto") clf = lgb.train(params,train_1,num_boost_round = 400,verbose_eval=True) total_preds_6 += clf.predict(test[columns_to_use])/nr_splits pred_oof = clf.predict(X_val) y_oof_6[val_index] = pred_oof print('Fold error', np.sqrt(mean_squared_error(y_val, pred_oof))) print('Total error', np.sqrt(mean_squared_error(y, y_oof_6))) params['max_depth'] = 12 y_oof_7 = np.zeros((y.shape[0])) total_preds_7 = 0 kf = KFold(n_splits=nr_splits, shuffle=True, random_state=random_state) for i, (train_index, val_index) in enumerate(kf.split(y)): print('Fitting fold', i+1, 'out of', nr_splits) X_train, X_val = train[columns_to_use].iloc[train_index], train[columns_to_use].iloc[val_index] y_train, y_val = y[train_index], y[val_index] train_1 = lgb.Dataset(X_train,y_train ,feature_name = "auto") val = lgb.Dataset(X_val ,y_val ,feature_name = "auto") clf = lgb.train(params,train_1,num_boost_round = 400,verbose_eval=True) total_preds_7 += clf.predict(test[columns_to_use])/nr_splits pred_oof = clf.predict(X_val) y_oof_7[val_index] = pred_oof print('Fold error', np.sqrt(mean_squared_error(y_val, pred_oof))) print('Total error', 
np.sqrt(mean_squared_error(y, y_oof_7))) print('Total error', np.sqrt(mean_squared_error(y, 1.4*(1.6*y_oof_7-0.6*y_oof_6)-0.4*y_oof_5))) print('Total error', np.sqrt(mean_squared_error(y, -0.5*y_oof-0.5*y_oof_2-y_oof_3 +3*y_oof_4))) print('Total error', np.sqrt(mean_squared_error(y, 0.75*(1.4*(1.6*y_oof_7-0.6*y_oof_6)-0.4*y_oof_5)+ 0.25*(-0.5*y_oof-0.5*y_oof_2-y_oof_3 +3*y_oof_4)))) sub_preds = (0.75*(1.4*(1.6*total_preds_7-0.6*total_preds_6)-0.4*total_preds_5)+ 0.25*(-0.5*total_preds-0.5*total_preds_2-total_preds_3 +3*total_preds_4)) sample_submission.target = np.exp(sub_preds)-1 sample_submission.to_csv('submission_1.csv', index=False) sample_submission.head() params = {'objective': 'reg:linear', 'eval_metric': 'rmse', 'eta': 0.01, 'max_depth': 10, 'subsample': 0.6, 'colsample_bytree': 0.6, 'alpha':0.001, 'random_state': 42, 'silent': True} y_oof_8 = np.zeros((y.shape[0])) total_preds_8 = 0 dtest = xgb.DMatrix(test[columns_to_use]) kf = KFold(n_splits=nr_splits, shuffle=True, random_state=random_state) for i, (train_index, val_index) in enumerate(kf.split(y)): print('Fitting fold', i+1, 'out of', nr_splits) X_train, X_val = train[columns_to_use].iloc[train_index], train[columns_to_use].iloc[val_index] y_train, y_val = y[train_index], y[val_index] train_1 = xgb.DMatrix(X_train, y_train) val = xgb.DMatrix(X_val, y_val) watchlist = [(train_1, 'train'), (val, 'val')] clf = xgb.train(params, train_1, 1000, watchlist, maximize=False, early_stopping_rounds = 60, verbose_eval=100) total_preds_8 += clf.predict(dtest, ntree_limit=clf.best_ntree_limit)/nr_splits pred_oof = clf.predict(val, ntree_limit=clf.best_ntree_limit) y_oof_8[val_index] = pred_oof print('Fold error', np.sqrt(mean_squared_error(y_val, pred_oof))) print('Total error', np.sqrt(mean_squared_error(y, y_oof_8))) print('Total error', np.sqrt(mean_squared_error(y, 0.7*(0.75*(1.4*(1.6*y_oof_7-0.6*y_oof_6)-0.4*y_oof_5)+0.25*(-0.5*y_oof-0.5*y_oof_2-y_oof_3+3*y_oof_4))+0.3*y_oof_8))) sub_preds = (0.7*(0.75*(1.4*(1.6*total_preds_7-0.6*total_preds_6)-0.4*total_preds_5)+0.25*(-0.5*total_preds-0.5*total_preds_2-total_preds_3+3*total_preds_4))+0.3*total_preds_8) sample_submission.target = np.exp(sub_preds)-1 sample_submission.to_csv('blended_submission_2.csv', index=False) sample_submission.head() ```
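The per-depth cross-validation blocks above are identical except for `max_depth`, so they could be collected into a single helper and run in a loop. The sketch below is one way to do that, assuming `train`, `test`, `columns_to_use`, `y`, `params`, `nr_splits` and `random_state` are defined as in the cells above; the helper name `run_lgb_cv` is our own, not part of the original notebook.

```
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error

def run_lgb_cv(max_depth, params, X, y, X_test, nr_splits=5, random_state=1054):
    """KFold LightGBM CV for one max_depth; returns OOF and averaged test predictions."""
    fold_params = dict(params, max_depth=max_depth)
    y_oof = np.zeros(y.shape[0])
    test_preds = np.zeros(X_test.shape[0])
    kf = KFold(n_splits=nr_splits, shuffle=True, random_state=random_state)
    for i, (train_index, val_index) in enumerate(kf.split(y)):
        print('Fitting fold', i + 1, 'out of', nr_splits)
        X_train, X_val = X.iloc[train_index], X.iloc[val_index]
        y_train, y_val = y[train_index], y[val_index]
        clf = lgb.train(fold_params, lgb.Dataset(X_train, y_train), num_boost_round=400)
        test_preds += clf.predict(X_test) / nr_splits
        y_oof[val_index] = clf.predict(X_val)
        print('Fold error', np.sqrt(mean_squared_error(y_val, y_oof[val_index])))
    print('Total error', np.sqrt(mean_squared_error(y, y_oof)))
    return y_oof, test_preds

# Reproduce the per-depth runs in one loop instead of copy-pasted cells.
oof_by_depth, test_by_depth = {}, {}
for depth in [5, 4, 6, 7, 8, 10, 12]:
    oof_by_depth[depth], test_by_depth[depth] = run_lgb_cv(
        depth, params, train[columns_to_use], y, test[columns_to_use],
        nr_splits=nr_splits, random_state=random_state)
```

Keeping the out-of-fold and test predictions in dictionaries keyed by depth also makes it easier to experiment with the blending weights afterwards.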
``` %matplotlib inline import openpathsampling as paths import numpy as np import matplotlib.pyplot as plt import matplotlib import os import openpathsampling.visualize as ops_vis from IPython.display import SVG matplotlib.rcParams.update({'font.size': 16, 'xtick.labelsize': 12, 'ytick.labelsize': 12}) ``` # Analyzing the flexible path length simulation Load the file, and from the file pull out the engine (which tells us what the timestep was) and the move scheme (which gives us a starting point for much of the analysis). ``` filename = "alanine_dipeptide_tps.nc" ``` Opening a large storage file can take some time. In addition, the `AnalysisStorage` object tries to pre-cache things, which makes opening it take longer, but can make analysis faster. ``` %%time flexible = paths.Storage(filename, mode='r') engine = flexible.engines[0] flex_scheme = flexible.schemes[0] print("File size: {0} for {1} steps, {2} snapshots".format( flexible.file_size_str, len(flexible.steps), len(flexible.snapshots), )) ``` That tells us a little about the file we're dealing with. Note that the number of snapshots listed is twice the number of configurations. That's because every snapshot object stores a virtual version of its velocity-reversed copy. Both share the same configuration and velocity storage; one just flips the signs on the velocities. Now we'll start analyzing the contents of that file. We used a very simple move scheme (only shooting), so the main information that the `move_summary` gives us is the acceptance of the only kind of move in that scheme. See the MSTIS examples for more complicated move schemes, where you want to make sure that the frequency at which the move runs is close to what was expected. ``` %%time flex_scheme.move_summary(flexible.steps) ``` #### Replica history tree and decorrelated trajectories The `PathTree` object gives us both the history tree (often called the "move tree") and the number of decorrelated trajectories. It takes the history of a given replica (in our case, replica 0, the only one) and builds a description of the move history. A `PathTree` is made for a certain set of Monte Carlo steps. First, we make a tree of only the first 20 steps in order to visualize it. (All 10000 steps would be unwieldy.) Note that only the accepted steps are shown by default (see examples for path tree visualization for customization details.) After the visualization, we make a second `PathTree` of all the steps, in order to count the number of decorrelated trajectories. ``` tree = ops_vis.PathTree( flexible.steps[0:21], ops_vis.ReplicaEvolution( replica=0 ) ) with open("flexible_path_tree.svg", mode='w') as f: f.write(tree.svg()) SVG(tree.svg()) print("Decorrelated trajectories (first 20 steps):", len(tree.generator.decorrelated_trajectories)) %%time full_tree = ops_vis.PathTree( flexible.steps, ops_vis.ReplicaEvolution( replica=0 ) ) print("Total decorrelated trajectories:", len(full_tree.generator.decorrelated_trajectories)) ``` #### Path length distribution Flexible length TPS gives a distribution of path lengths. Here we calculate the length of every accepted trajectory, then histogram those lengths, and calculate the maximum and average path lengths. We can also use `engine.snapshot_timestep` to convert the count of frames to time, including correct units. 
``` path_lengths = [len(step.active[0].trajectory) for step in flexible.steps] plt.hist(path_lengths, bins=40, alpha=0.5); plt.xlabel("Path length (frames)") plt.ylabel("Number of paths") #print("Maximum:", max(path_lengths), "("+str(max(path_lengths)*engine.snapshot_timestep)+")") #print("Average:", "{0:.2f}".format(np.mean(path_lengths)), "("+(np.mean(path_lengths)*engine.snapshot_timestep).format("%.3f")+")") plt.tight_layout() ``` #### Path density histogram Next we will create a path density histogram. Calculating the histogram itself is quite easy: first we reload the collective variables we want to plot it in (we choose the phi and psi angles). Then we create the empty path density histogram, by telling it which CVs to use and how to make the histogram (bin sizes, etc.). Finally, we build the histogram by giving it the list of active trajectories to histogram. ``` from openpathsampling.numerics import HistogramPlotter2D psi = flexible.cvs['psi'] phi = flexible.cvs['phi'] deg = 180.0 / np.pi path_density = paths.PathDensityHistogram(cvs=[phi, psi], left_bin_edges=(-180/deg,-180/deg), bin_widths=(2.0/deg,2.0/deg)) %%time path_dens_counter = path_density.histogram([s.active[0].trajectory for s in flexible.steps]) ``` Now we've built the path density histogram, and we want to visualize it. The `HistogramPlotter2D` class used below is convenient for this case: it takes the histogram, the desired plot tick labels and limits, and passes additional `matplotlib` named arguments on to `plt.pcolormesh`. ``` tick_labels = np.arange(-np.pi, np.pi+0.01, np.pi/4) plotter = HistogramPlotter2D(path_density, xticklabels=tick_labels, yticklabels=tick_labels, label_format="{:4.2f}") ax = plotter.plot(cmap="Blues") ```
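The commented-out lines in the path-length cell above sketch the frame-to-time conversion; spelled out, and assuming `engine.snapshot_timestep` carries proper units as the text describes, it could look like this:

```
# path_lengths and engine come from the cells above
dt = engine.snapshot_timestep
print("Maximum:", max(path_lengths), "frames (", max(path_lengths) * dt, ")")
print("Average: {0:.2f} frames (".format(np.mean(path_lengths)), np.mean(path_lengths) * dt, ")")
```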
# Introduction <hr style="border:2px solid black"> </hr> <div class="alert alert-block alert-warning"> <font color=black> **What?** `__call__` method </font> </div> # Definition <hr style="border:2px solid black"> </hr> <div class="alert alert-block alert-info"> <font color=black> - `__call__` is a built-in method which enables us to write classes whose instances behave like functions and can be called like a function. - In practice: `object()` is shorthand for `object.__call__()` </font> </div> # `__call__` vs. `__init__` <hr style="border:2px solid black"> </hr> <div class="alert alert-block alert-info"> <font color=black> - `__init__()` is the class constructor, which builds an instance of a class, whereas `__call__` makes such an instance callable like a function, so it can be re-configured after construction. - Technically `__init__` is called once by `__new__` when an object is created, so that it can be initialised - But there are many scenarios where, once you are done with an object, you may find a need for a "new" one. With `__call__` you can redefine the same object as if it were new. </font> </div> # Example #1 <hr style="border:2px solid black"> </hr> ``` class Example(): def __init__(self): print("Instance created") # Defining __call__ method def __call__(self): print("Instance is called via special method __call__") e = Example() e.__init__() e.__call__() ``` # Example #2 <hr style="border:2px solid black"> </hr> ``` class Product(): def __init__(self): print("Instance created") # Defining __call__ method def __call__(self, a, b): print("Instance is called via special method __call__") print(a*b) p = Product() p.__init__() # p is being called as if it were a function p(2,3) # The cell above is equivalent to this call p.__call__(2,3) ``` # Example #3 <hr style="border:2px solid black"> </hr> ``` class Stuff(object): def __init__(self, x, y, Range): super(Stuff, self).__init__() self.x = x self.y = y self.Range = Range def __call__(self, x, y): self.x = x self.y = y print("__call__ with (%d, %d)" % (self.x, self.y)) def __del__(self): del self.x del self.y del self.Range s = Stuff(1, 2, 3) s.x s(7,8) s.x ``` # Example #4 <hr style="border:2px solid black"> </hr> ``` class Sum(): def __init__(self, x, y): self.x = x self.y = y print("__init__ with (%d, %d)" % (self.x, self.y)) def __call__(self, x, y): self.x = x self.y = y print("__call__ with (%d, %d)" % (self.x, self.y)) def sum(self): return self.x + self.y sum_1 = Sum(2,2) sum_1.sum() sum_1 = Sum(2,2) sum_1(3,3) sum_1 = Sum(2,2) # This is equivalent to sum_1.__call__(3,3) # You can also do this sum_1 = Sum(2,2)(3,3) ``` # References <hr style="border:2px solid black"> </hr> <div class="alert alert-block alert-warning"> <font color=black> - https://www.geeksforgeeks.org/__call__-in-python/ </font> </div>
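As a closing practical note, here is a small sketch of our own (not taken from the reference above) showing a common real-world use of `__call__`: an object that is configured once in `__init__` and then used like a plain function.

```
class LinearModel():
    def __init__(self, slope, intercept):
        self.slope = slope
        self.intercept = intercept

    def __call__(self, x):
        # model(x) is shorthand for model.__call__(x)
        return self.slope * x + self.intercept

model = LinearModel(2, 1)
print(model(3))           # evaluates 2*3 + 1 via __call__
print(model.__call__(3))  # identical, just more verbose
print(callable(model))    # True, because the class defines __call__
```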
``` !kill -9 -1 #KILL SWITCH TO FREE UP MEMORY, DO NOT RUN ALL CELLS IN THE NOTEBOOK AT ONCE # Code to read csv file into colaboratory: !pip install -U -q PyDrive from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # 1. Authenticate and create the PyDrive client. auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) #2. Get the file downloadedTest = drive.CreateFile({'id':'1rGfRRaEqKAT0EwyF8uZRTDbfVBDajiE_'}) # replace the id with id of file you want to access downloadedTest.GetContentFile('test.csv') downloadedValidation = drive.CreateFile({'id':'16kjxp9JOgUVExDHlcTGGS4eGjoYP6m0N'}) # replace the id with id of file you want to access downloadedValidation.GetContentFile('validation.csv') downloadedValidation = drive.CreateFile({'id':'1EoOp6UvVrEMZDKnyhOH27U3LFYsjgDqW'}) # replace the id with id of file you want to access downloadedValidation.GetContentFile('pCTR.csv') ls import matplotlib.pyplot as plt import pandas as pd import numpy as np #Read file as panda dataframe test = pd.read_csv('test.csv') validation = pd.read_csv('validation.csv') pCTR = pd.read_csv('pCTR.csv') validation.shape pCTR.shape pCTR.mean() recalibration = lambda x: x/(x+(1-x)/0.025) pCTRCalibrated = pCTR.applymap(recalibration) pCTRCalibrated.mean() validationwithpCTR = validation validationwithpCTR['pCTR'] = pCTRCalibrated['pred1'] print(list(validationwithpCTR.columns)) #keep relevant columns for hyperparameter tuning tuningHyperP =validationwithpCTR[['click','slotprice','payprice','pCTR','advertiser']] tuningHyperP.head() C_values = [7,7.5,8,8.5,9,9.5,10,10.5,11,11.5,12,12.5,13,13.5,14,14.5,15] lambdaValues =[5.2*10**-7,9*10**-8, 1*10**-7, 3*10**-7,7*10**-7,9*10**-7,1*10**-6] def hyperparameterTuneORTB1(c,lambdaP,tuningHyperP): print("C = " +str(c) + " lambda" +str(lambdaP)) budget = 6250000 FinalImpressions = 0 Calculatedclicks = 0 for index, row in tuningHyperP.iterrows(): pCTR = row['pCTR'] payprice = row['payprice'] click = row['click'] slotprice = row['slotprice'] bid = np.sqrt(c/lambdaP*pCTR+c**2) - c if bid>budget: continue elif bid > payprice and bid >= slotprice and click==1: ImpressionsCount = 1 clickCount = 1 subtract = payprice elif bid > payprice and bid >= slotprice and click==0: ImpressionsCount = 1 clickCount=0 subtract = payprice else: ImpressionsCount = 0 clickCount=0 subtract = 0 Calculatedclicks = Calculatedclicks + clickCount FinalImpressions = ImpressionsCount + FinalImpressions budget = budget - subtract #print(budget) budgetDepleted = 6250000-budget CTR = Calculatedclicks/FinalImpressions*100 CPM = budgetDepleted/FinalImpressions CPC = budgetDepleted/Calculatedclicks/1000 budgetDepletedFinal = budgetDepleted/1000 print("Final Impressions: " + str(FinalImpressions) + " Clicks: " + str(Calculatedclicks) + " Budget Depleted: " + str(budgetDepletedFinal) + " CPC: " + str(CPC) + " CTR (Percent): " + str(CTR) + " CPM: " + str(CPM) ) return for i in C_values: for j in lambdaValues: hyperparameterTuneORTB1(i,j,tuningHyperP) #hyperparameterTuneORTB1(8,5.2*10**-7,tuningHyperP) #BEST RESULTS/PARAMETERS (FROM XGBoost) #C = 9 lambda5.2e-07 #WON Impressions: 151693 Clicks: 161 Budget Depleted: 6249.978 CPC: 38.81973913043478 CTR (Percent): 0.10613541824606278 CPM: 41.201492488117445 def hyperparameterTuneORTB2(c,lambdaP,tuningHyperP): print("C = " +str(c) + " lambda" +str(lambdaP)) budget = 6250000 FinalImpressions = 0 
Calculatedclicks = 0 for index, row in tuningHyperP.iterrows(): pCTR = row['pCTR'] payprice = row['payprice'] click = row['click'] slotprice = row['slotprice'] a = (pCTR+np.sqrt(c**2*lambdaP**2+pCTR**2))/ (c*lambdaP) b = (c*lambdaP)/(pCTR+np.sqrt(c**2*lambdaP**2+pCTR**2)) bid = c*(a**(1/3) - b*(1/3)) if bid>budget: continue elif bid > payprice and bid >= slotprice and click==1: ImpressionsCount = 1 clickCount = 1 subtract = payprice elif bid > payprice and bid >= slotprice and click==0: ImpressionsCount = 1 clickCount=0 subtract = payprice else: ImpressionsCount = 0 clickCount=0 subtract = 0 Calculatedclicks = Calculatedclicks + clickCount FinalImpressions = ImpressionsCount + FinalImpressions budget = budget - subtract #print(budget) budgetDepleted = 6250000-budget CTR = Calculatedclicks/FinalImpressions*100 CPM = budgetDepleted/FinalImpressions CPC = budgetDepleted/Calculatedclicks/1000 budgetDepletedFinal = budgetDepleted/1000 print("Final Impressions: " + str(FinalImpressions) + " Clicks: " + str(Calculatedclicks) + " Budget Depleted: " + str(budgetDepletedFinal) + " CPC: " + str(CPC) + " CTR (Percent): " + str(CTR) + " CPM: " + str(CPM) ) return #ORTB2: C_values2 = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20] lambdaValues2 =[0.99*10**-7] for k in C_values2: for n in lambdaValues2: hyperparameterTuneORTB2(k,n,tuningHyperP) #BEST ORTB2 FROM XGBoost: #C = 7.0 lambda9.9e-08 #Final Impressions: 159127 Clicks: 149 Budget Depleted: 6249.964 CPC: 41.94606711409396 CTR (Percent): 0.0936359008841994 CPM: 39.27657782777278 # #baseBid * np.exp(pCTR/avgCTR) def hyperparameterTuneOWN(baseBid,avgCTR,tuningHyperP): print("baseBid = " +str(baseBid)) budget = 6250000 FinalImpressions = 0 Calculatedclicks = 0 for index, row in tuningHyperP.iterrows(): pCTR = row['pCTR'] payprice = row['payprice'] click = row['click'] slotprice = row['slotprice'] bid = baseBid *(pCTR/avgCTR)**2 if bid>budget: continue elif bid > payprice and bid >= slotprice and click==1: ImpressionsCount = 1 clickCount = 1 subtract = payprice elif bid > payprice and bid >= slotprice and click==0: ImpressionsCount = 1 clickCount=0 subtract = payprice else: ImpressionsCount = 0 clickCount=0 subtract = 0 Calculatedclicks = Calculatedclicks + clickCount FinalImpressions = ImpressionsCount + FinalImpressions budget = budget - subtract #print(budget) budgetDepleted = 6250000-budget CTR = Calculatedclicks/FinalImpressions*100 CPM = budgetDepleted/FinalImpressions CPC = budgetDepleted/Calculatedclicks/1000 budgetDepletedFinal = budgetDepleted/1000 print("Final Impressions: " + str(FinalImpressions) + " Clicks: " + str(Calculatedclicks) + " Budget Depleted: " + str(budgetDepletedFinal) + " CPC: " + str(CPC) + " CTR (Percent): " + str(CTR) + " CPM: " + str(CPM) ) return iterator = np.linspace(203,204,21) for i in iterator: hyperparameterTuneOWN(i,0.00073,tuningHyperP) #Best Hyperparameters for this (XGBoost) #baseBid = 203.35 #Final Impressions: 111098 Clicks: 105 Budget Depleted: 6249.989 CPC: 59.52370476190476 CTR (Percent): 0.09451115231597329 CPM: 56.256539271634054 4**2 ```
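The three bidding strategies tuned above can also be written as small standalone functions, which makes them easier to test in isolation. The sketch below is based on the formulas inside the loops; note that the ORTB2 loop writes its second term as `b*(1/3)` (i.e. `b/3`), whereas the usual ORTB2 closed form uses the cube root `b**(1/3)`, which is what this sketch assumes. The example values are the roughly best hyperparameters noted in the comments above.

```
import numpy as np

def ortb1_bid(pctr, c, lam):
    # ORTB1: bid = sqrt(c/lambda * pCTR + c^2) - c
    return np.sqrt(c / lam * pctr + c ** 2) - c

def ortb2_bid(pctr, c, lam):
    # ORTB2: bid = c * (term^(1/3) - term^(-1/3)),
    # with term = (pCTR + sqrt(c^2*lam^2 + pCTR^2)) / (c*lam)
    term = (pctr + np.sqrt(c ** 2 * lam ** 2 + pctr ** 2)) / (c * lam)
    return c * (term ** (1 / 3) - term ** (-1 / 3))

def quadratic_bid(pctr, base_bid, avg_ctr):
    # Custom strategy: base bid scaled by the squared pCTR ratio
    return base_bid * (pctr / avg_ctr) ** 2

print(ortb1_bid(0.00073, c=9, lam=5.2e-7))
print(ortb2_bid(0.00073, c=7, lam=0.99e-7))
print(quadratic_bid(0.00073, base_bid=203.35, avg_ctr=0.00073))
```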
[Table of contents](../toc.ipynb) # Python semantics Remember that semantics is the meaning of the language. ## Variables are actually pointers The next slides on this topic (variables are actually pointers) are a condensed version of notebook 03 from *A Whirlwind Tour of Python* [[VanderPlas2016]](./references.bib), which is under CC0 license. Assigning a variable in Python is very easy, just use the equal sign: ```python my_var = 7 ``` ### Comparison with C However, in contrast to other languages like C, the above line of code should be read as: **`my_var` points to a memory bucket which currently contains the integer seven.** This is very different from C, where a similar line of code would be ```C int my_var = 7; ``` This C code line could be read as: **A container called `my_var` is defined to store integers and it currently contains seven.** ### Consequences Because Python variables just point to some object in the memory: * there is no need to declare a variable type * the variable type may change * and this is the reason why Python is called a dynamically typed language Hence you can do things like this in Python which won't work in statically typed languages like C. ``` my_var = 7 # integer my_var = 7.1 # float my_var = "Some string" # string my_var = [0, 1, "a list with a string"] # list with integers and a string ``` ### Variables as pointers in practice Because variables are actually pointers, you might wonder how this code is interpreted. ``` x = [0, 1, 2, 3] y = x print(y) x[3] = 99 print(y) ``` Here, the last entry of x was changed, and because y points to x, printing y shows the changed values. However, if we point x to a different bucket, y still points to the "old" bucket. ``` x = 77.7 print(y) ``` ### Is this safe? You might wonder if this will cause trouble in equations. But it is safe because: * Numbers, strings and other basic types are immutable * This means you cannot change their value. You can only change which value a variable points to. ``` x = 10 y = x x += 5 # Here the variable is actually changed so that it points to another integer, which is 15. Hence, y does not change. print("x =", x) print("y =", y) ``` ## Python loves objects! Because there is no need to define the type of a variable, it is often said that Python is type-free. **This is wrong!** If no type is given, the Python interpreter selects a type and we can read that type. ``` x = 7 type(x) x = [0, 1, 2, "string"] type(x) x = {"a": 3, "b": 4.4} type(x) ``` Hence, Python has types, but the types are not linked to the variable - they are linked to the object. You have seen here some of the basic types like `int` (integer), `list`, and `dict` (dictionary). ### Everything is an object You can access different properties of objects with the period `.` The last line of code set x as a pointer to a dictionary, and the keys of a dictionary can be accessed with the `.keys()` method. ``` x = {"a": 3, "b": 4.4} x.keys() ``` Also very basic objects like integers have attributes and methods. 
``` x = 9 print(x.real) # real attribute of x print(x.bit_length()) # compute bitlength of x with .bitlenght() method ``` ## Operators Python's operators can be categorized into: * Arithmetic Operators * Comparison (Relational) Operators * Assignment Operators * Logical Operators * Bitwise Operators * Membership Operators * Identity Operators ### Arithmetic operators The arithmetic operators in Python are: Operator | Description | Code example --- | --- | --- Addition | Sum of two variables | `a + b` Subtraction | Difference of two variables | `a - b` Multiplication | Product of two variables | `a * b` Exponentiation | - | `a ** b` Division | Quotient of two variables | `a / b` Floor division | Quotient without fractional part | `a // b` Modulus | Returns integer which remains after division | `a % b` Negation | Negative of variable | `-a` Matrix product | Introduced in Python 3.5, requires numpy | `a @ b` ``` # here some operators in combination ((5 + 3) / 4) ** 2 # true division 21 / 2 # floor division 21 // 2 # modulus 21 % 2 ``` ### Comparison operators The comparison operators in Python return a bool (`True` or `False`) and these operators are: Code example | Description --- | --- `a == b` | a equal b `a != b` | a not equal b `a < b` | a less than b `a <= b` | a less or equal b `a > b` | a greater than b `a >= b` | a greater or equal b ``` a = 1; b = 5 a < b a != b a > b # check floor division 25 // 2 == 12 # 30 is between 24 and 50 24 < 30 < 50 ``` ### Assignment operators Beside the simple assignment `=`, there are: Code example | Description --- | --- `a += 5` | Add And `a -= 5` | Subtract And `a *= 5` | Multiply And `a **= 5` | Exponent And `a /= 5` | Division And `a %= 5` | Modulus And `a //= 5` | Floor division And ``` a = 5 a += 5 print(a) a -= 10 print(a) ``` ### Logical operators These operators are designed for boolean variables. There are logical `and`, `or`, `not`. The operator `xor` is missing but can be constructed. ``` a = True; b = False a and b a or b not(a, b) ``` ### Bitwise operators * These operators are rather advanced and barely used for standard tasks. * They compare the binary representation of numbers, which can be accessed with `bin()` function. ``` bin(10) ``` Which is read as: `0b` binary format, $1 \cdot 2^3 + 0 \cdot 2^2 + 1 \cdot 2^1 + 0 \cdot 2^0 = 10$ Fore sake of completeness the operators are: * Bitwise AND `a & b` * Bitwise OR `a | b` * Bitwise XOR `a ^ b` * Bitshift left `a << b` * Bitshift right `a >> b` * Binary complement (flipping the bit) `~a` ### Membership operators The membership operators are designed to find values in lists, tuples or strings. The two operators are `in` and `not in` and they return `True` or `False`. ``` a = 2 b = [0, 1, 2, 3, 4] a in b c = 8 c not in b strg = "hi here is a text" "text" in strg ``` ### Identity operators They compare memory location of objects and are called `is` and `is not`. ``` a = 5; b = 2 a is b a = 10; b = a a is b ```
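The text above notes that a logical `xor` operator is missing but can be constructed from the others; one minimal way to do that:

```
a = True; b = False

# xor is true when exactly one of the operands is true
print((a or b) and not (a and b))

# for plain booleans, inequality does the same job
print(a != b)
```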
![](../images/logos.jpg "MiCMOR, KIT Campus Alpin") **[MiCMOR](https://micmor.kit.edu) [SummerSchool "Environmental Data Science: From Data Exploration to Deep Learning"](https://micmor.kit.edu/sites/default/files/MICMoR%20Summer%20School%202019%20Flyer.pdf)** IMK-IFU KIT Campus Alpin, Sept. 4 - 13 2019, Garmisch-Partenkirchen, Germany. --- # Data ingestion and validation ... testing for data. We can now load data. That's nice - but how do we verify that we get what we expect? Does this batch of data differ from previous sensor measurements, for instance? Is there something broken? Any drifts in the data? Normally, this would lead to some really messy code where we'd build huge piles of `if else` blocks to make sure that the data is in an expected range etc. A much nicer tool to encode our **expectations** about the data is `great_expectations`. This great package allows you to write down your expectations on the data and export them to an expectations file. When new data arrives or data changes, you can validate this new data against those expectations. This is very useful if you have a data pipeline or stream data into your processing... ``` %matplotlib inline %load_ext autoreload %autoreload 2 import pandas as pd import great_expectations as ge import json import matplotlib import numpy as np import matplotlib.pyplot as plt ``` For toying around, we first start with a very famous dataset: the Titanic passenger survival data. It's included with great_expectations, but we pull it from their GitHub first. ``` ! wget https://raw.githubusercontent.com/great-expectations/great_expectations/develop/examples/data/Titanic.csv ``` Next, we read it with `read_csv`. However, instead of the plain pandas read_csv, we use the version from `great_expectations`. It works the same... ``` titanic_df = ge.read_csv("Titanic.csv") titanic_df.head() titanic_df.Age.hist(); ``` Let's define our first expectation: the mean of column Age should be between 20 and 40. The response is True (it's valid) - and we get some extra stats as a JSON response. ``` titanic_df.expect_column_mean_to_be_between("Age", 20,40) ``` Let's do another one. Age should range between 0 and 80. ``` titanic_df.expect_column_values_to_be_between("Age", 0,80) ``` Now something more difficult: the Name column should match this complicated RegEx. If you don't know what regexes are - don't worry. It's a compact way to match text patterns with very little code 😁... What is interesting in this line is the last argument. We specify that 95% of all data should fulfill this expectation. ``` titanic_df.expect_column_values_to_match_regex('Name', '[A-Z][a-z]+(?: \([A-Z][a-z]+\))?, ', mostly=.95) ``` Now some more: the entries for feature column "Sex" should either be "male" or "female". These are the only two values in the allowed set(). ``` titanic_df.expect_column_values_to_be_in_set('Sex', ['male', 'female']) ``` Feature "Survived" should be a boolean - thus only 1 and 0 are allowed. ``` titanic_df.expect_column_values_to_be_in_set('Survived', [1, 0]) ``` Now test if PClass values are either 1st, 2nd or 3rd. ``` titanic_df.expect_column_values_to_be_in_set('PClass', ['1st', '2nd', '3rd']) ``` Hm. This fails, and it is due to a '*' in the data. Since it failed, this expectation is not recorded. 
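If we did want this expectation to be recorded, we could either tolerate a small fraction of unexpected values with the `mostly` argument (as in the regex example above) or add the stray '*' to the allowed set. A quick sketch of both options:

```
# Allow up to 1% of the rows to fall outside the set...
titanic_df.expect_column_values_to_be_in_set('PClass', ['1st', '2nd', '3rd'], mostly=0.99)

# ...or accept '*' explicitly as a known placeholder value
titanic_df.expect_column_values_to_be_in_set('PClass', ['1st', '2nd', '3rd', '*'])
```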
We can have a look at all recorded expectations: ``` print(json.dumps(titanic_df.get_expectation_suite(), indent=2)) ``` Now we could dump this expectation suite to disk and validate whether any new data coming in fulfills these expectations: ``` # titanic_df.save_expectation_suite('titanic_expectations.json') ```
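A minimal sketch of that last step, assuming the classic `great_expectations` DataFrame API used in this notebook (where a `ge` DataFrame exposes `validate()`); `Titanic_new.csv` is a hypothetical file standing in for a new batch of data.

```
import great_expectations as ge

# Persist the suite we just built (same call as the commented line above)
titanic_df.save_expectation_suite('titanic_expectations.json')

# Later: read a new batch and check it against the same expectations
new_batch = ge.read_csv('Titanic_new.csv')
results = new_batch.validate(expectation_suite=titanic_df.get_expectation_suite())
print(results)  # reports overall success plus per-expectation details
```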
# Object-Oriented Programming

## Keywords

* [Encapsulation](#encapsulation)
* [Abstraction](#abstraction)
* [Inheritance](#inheritance)
* [Polymorphism](#polymorphism)

## Examples

### Encapsulation

* NOT enforced in Python
* But still useful to signal intent to users
* Methods/Variables
  * Public
    * Accessible by anyone
  * Protected
    * Intended for use within the class and its subclasses
  * Private
    * Only accessible by the object itself

#### Naming Conventions

* Private methods/attributes have a double underscore prefix
  * self.__my_private_variable
  * self.__private_function( )
* Protected ones have a single underscore prefix
  * self._my_favorite_functions( )

#### Open to the World

```
class Person:
    def __init__(self, height: int, weight: int, age: int):
        self.height = height
        self.weight = weight
        self.age = age

# Easily accessible
bruce = Person(170, 73, 31)
bruce.height

# Can control the attribute
bruce.weight = 70
bruce.weight
```

#### Make It Private

```
class Person:
    def __init__(self, height: int, weight: int, age: int):
        self.height = height
        self.__weight = weight  # Private information
        self.__age = age        # Private information

    def get_weight(self) -> int:
        # Can control what the user gets
        return self.__weight - 2

# Can't access it anymore
bruce = Person(170, 75, 35)
bruce.__weight

# Have to access it using the method
bruce.get_weight()
```

### Inheritance and Abstraction

#### Abstraction

* A way of hiding complexity and showing/providing only what the user needs

#### Inheritance

* Great for preventing repetitive code
* Used to define shared methods/attributes for child classes

```
class Person:
    def __init__(self, height: int, weight: int, age: int):
        self.height = height
        self._weight = weight  # Protected information
        self._age = age        # Protected information

    def get_weight(self):
        pass

    def get_height(self):  # Will be inherited
        return self.height

class Adult(Person):
    def __init__(self, height: int, weight: int, age: int, job: str):
        super().__init__(height, weight, age)  # Inherit attributes and abstract method
        self.job = job                         # Adult class specific attribute

    def get_weight(self) -> int:
        return self._weight - 2

class Kid(Person):
    def __init__(self, height: int, weight: int, age: int):
        super().__init__(height, weight, age)  # Inherit attributes and abstract method

    def get_weight(self) -> str:
        return "?"

bruce = Adult(170, 75, 35, 'bioinformatician')

# Inherited method
bruce.get_height()

# Accessing a now protected attribute through a method
bruce.get_weight()

jack_jack = Kid(78, 11, 1.5)
jack_jack.get_weight()
```

### Polymorphism

* Modifying inherited methods to fit the child class
* Known as __Method Overriding__

```
class Person:
    def __init__(self, height: int, weight: int, age: int):
        self.height = height
        self._weight = weight  # Protected information
        self._age = age        # Protected information

    def get_weight(self):
        return self._weight

    def get_height(self):  # Will be inherited
        return self.height

    def get_age(self):
        return self._age

class Adult(Person):
    def __init__(self, height: int, weight: int, age: int, job: str):
        super().__init__(height, weight, age)  # Inherit attributes and abstract method
        self.job = job                         # Adult class specific attribute

    def get_weight(self) -> int:
        return self._weight - 2

class Kid(Person):
    def __init__(self, height: int, weight: int, age: int):
        super().__init__(height, weight, age)  # Inherit attributes and abstract method

    def get_weight(self) -> str:
        return "?"

    def get_height(self) -> int:
        return self.height + 3

bruce = Adult(170, 75, 35, 'bioinformatician')
bruce.get_height()  # Adult does not override get_height(), so the parent version is used

jack_jack = Kid(78, 11, 1.5)
jack_jack.get_height()  # Kid's overridden get_height() adds 3
```
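As a side note on the "Make It Private" example above, Python's privacy is only name mangling: a double-underscore attribute is renamed to `_ClassName__attribute` rather than being truly hidden. A minimal sketch, using a stripped-down version of the `Person` class from above:

```
class Person:
    def __init__(self, weight: int):
        self.__weight = weight    # stored internally as _Person__weight

bruce = Person(75)

# Direct access fails because of name mangling
try:
    bruce.__weight
except AttributeError as err:
    print(err)

# The mangled name still works, which is why privacy is a convention, not a guarantee
print(bruce._Person__weight)      # 75
```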
# Chapter 3: Analyzing Chicago's Best Sandwich Restaurants

Up to Chapter 2, the data we handled came in the form of files, whether Excel or plain text, and we concentrated on the basics of Python and a few modules while trying to get some results. From Chapter 3 on, we talk about getting data directly from the internet. Even if we don't grandly call it web scraping, we should know the basics, even if the goal is just a single line of information. Unlike Chapters 1 and 2, this chapter starts with the basics of Beautiful Soup, a module for fetching the contents of web pages, and then works toward this chapter's goal: compiling a list of Chicago's best sandwich restaurants. Of course, there are things to learn and practice along the way.

```
from bs4 import BeautifulSoup
```

We import BeautifulSoup from bs4.

```
page = open("data/03. test_first.html",'r').read()
soup = BeautifulSoup(page, 'html.parser')
print(soup.prettify())
```

Since we are reading an HTML file that was downloaded beforehand, we can read it with the open command and the read option ('r'). If you want to see the whole content of the HTML page you just read, use the prettify() option; the output is indented and easy to read. The code above stores the entire example HTML in the variable soup. If you want to know which tags are contained one level below soup, use the children attribute.

```
list(soup.children)

html = list(soup.children)[2]
html
```

Since soup holds the whole document, this is how you can reach the html tag inside it, which gives the result above. Inspecting the children of html again:

```
list(html.children)
```

```
body = list(html.children)[3]
body
```

In this way you can explore tags using children and parent, or you can get there in a single step.

```
soup.body
```

You can also find it directly like this.

```
list(body.children)
```

You can likewise check the list of children inside the body tag. We will come back to the list data type later; for now just think of it as an array. Stepping down level by level as in the code above, and encoding that structure in your code, has the advantage of making you think systematically, but it is not practical for large, complex pages. If you already know which tag you need, you will mostly use the find or find_all commands.

```
soup.find_all('p')
```

This finds all p tags. Of course, when you only want one, you can use the find command.

```
soup.find('p')
```

Used like this, it returns the very first p tag.

```
soup.find_all('p', class_='outer-text')
```

It is also possible to look for p tags whose class is outer-text, like this.

```
soup.find_all(class_='outer-text')
```

Or you can search for outer-text by class name alone.

```
soup.find_all(id='first')
```

You can also find the tags whose id is first.

```
soup.find('p')
```

However, since the find command only returns the first matching tag, you need a different approach when you want the following ones.

```
soup.head
```

This is the content of soup's head. Here we can use the next_sibling command.

```
soup.head.next_sibling
```

Right after soup's head there is a newline character.

```
soup.head.next_sibling.next_sibling
```

Applying it once more brings us to the body tag, which sits at the same level as head.

```
body.p
```

Likewise, starting from the first p tag, applying next_sibling twice as above moves you to the next p tag.

```
for each_tag in soup.find_all('p'):
    print(each_tag.get_text())
```

The get_text() command lets us take only the text inside a tag.

```
body.get_text()
```

Calling get_text() on the whole body shows the entire text, with a newline (\n) wherever a tag used to be.

```
links = soup.find_all('a')
links
```

We found the a tags, which represent clickable links.

```
for each in links:
    href = each['href']
    text = each.string
    print(text + '->' + href)
```

## 3-2 Finding the tags you want with Chrome's developer tools

You cannot identify the tags of a web page just by staring at Beautiful Soup's output. A convenient way to find out which tag marks the spot you want is to use the developer tools of the Chrome web browser. Go to https://finance.naver.com/marketindex/. There, we want to pull the exchange rate out of the text that reads "USD 1,179.60 won".

```
from urllib.request import urlopen
```

First, since we are accessing a URL this time, we import the urlopen function from urllib.

```
url = "https://finance.naver.com/marketindex/"
page = urlopen(url)

soup = BeautifulSoup(page, "html.parser")

print(soup.prettify())
```

Then we read the page. Even printed with prettify(), it is not easy to inspect. Since we already noted the tag we need, we can access it as shown below.

```
soup.find_all('span','value')[0].string
```

Just to be safe we searched with find_all, and since the result comes back as a list, we selected the first element.

## 3-3 Hands-on: Accessing the site that introduces Chicago's best sandwich restaurants

Now we will access the Chicago Magazine homepage that introduces Chicago's best sandwich shops and collect information about them. The address is http://goo.gl/wAtvls; the original address is long, so Google's URL Shortener was used.
```
import requests
from bs4 import BeautifulSoup
from urllib.request import urlopen

headers = {'User-Agent': 'Mozilla/5.0'}
url = 'http://www.chicagomag.com/Chicago-Magazine/November-2012/Best-Sandwiches-Chicago/'
html = requests.get(url, headers = headers).text
soup = BeautifulSoup(html, "html.parser")

soup
```

Running the code above fetches the entire HTML of the page. (In the book the address was split into url_base and url_sub and then joined into url again, simply because the full address did not fit on one printed line.)

```
print(soup.find_all('div','sammy'))
```

Using the tag we identified, we used the find_all command to look for div tags of class sammy. Looking closely at the contents, this seems to be exactly what we were after. To be more certain, checking the length with the len command gives 50.

```
len(soup.find_all('div','sammy'))
```

As the Chicago Magazine article title already says, there are 50 restaurants, so a length of 50 suggests we found them correctly. Checking just the first one, all the information we want appears to be there.

```
print(soup.find_all('div','sammy')[0])
```

## 3-4 Extracting and organizing the data we want from the page

Now let's walk through how to get the information we want out of the div sammy tags.

```
tmp_one = soup.find_all('div','sammy')[0]
type(tmp_one)
```

The result of find_all has the type bs4.element.Tag, which means we can apply the tag-searching commands (find, find_all) to that variable again.

```
tmp_one.find(class_='sammyRank')
```

So we use the find command once more to look for sammyRank, and there it is. Now we just need to take the text.

```
tmp_one.find(class_='sammyRank').get_text()
```

Using the get_text() command does the trick, and with that we have the ranking.

```
tmp_one.find(class_='sammyListing').get_text()
```

Getting sammyListing, we now also have the menu name and the shop name, although they come out together.

```
tmp_one.find('a')['href']
```

We can also take the href information from the a tag and store the address that would open when clicked. In the sammyListing output, the menu name and the shop name appear together, so they need to be separated. One easy way to handle that structure is a regular expression. There is no need to find regular expressions intimidating; the working principle of this book is "if you understand what it means, just use it".

```
import re

tmp_string = tmp_one.find(class_='sammyListing').get_text()

re.split(('\n|\r\n'), tmp_string)

print(re.split(('\n|\r\n'), tmp_string)[0])
print(re.split(('\n|\r\n'), tmp_string)[1])
```

First, naturally, we import re to use regular expressions. The only command we need from re is split. As the name says, it splits the string wherever the specified pattern matches. We want to split on either \n or \r\n, so we use the OR operator (|, the character that shares a key with \ above the Enter key) and call re.split(('\n|\r\n'), tmp_string). The result has two parts: the first becomes the menu name and the second the shop name. Also, the href result is not always of the same kind across the other 49 entries: sometimes it comes back as a relative path, sometimes as an absolute one. This is where the urljoin command from urllib comes in. With it, URLs that are already absolute are left alone, while relative URLs are converted to absolute ones.

```
from urllib.parse import urljoin

rank = []
main_menu = []
cafe_name = []
url_add = []

list_soup = soup.find_all('div', 'sammy')

for item in list_soup:
    rank.append(item.find(class_='sammyRank').get_text())

    tmp_string = item.find(class_='sammyListing').get_text()

    main_menu.append(re.split(('\n|\r\n'), tmp_string)[0])
    cafe_name.append(re.split(('\n|\r\n'), tmp_string)[1])

    url_add.append(urljoin(url, item.find('a')['href']))
```

The code above first creates empty lists to store the rank (rank), the main menu name (main_menu), the cafe name (cafe_name), and the URL of each entry (url_add). It then loops (for) over the 50 items found with find_all('div','sammy') and, using the .append command, adds each piece of information we just extracted to the corresponding list. Once this code has run,

```
rank[:5]
```

the rankings are stored correctly,

```
main_menu[:5]
```

so are the menu names,

```
cafe_name[:5]
```

and the cafe names came through fine as well.

```
url_add[:5]
```

Finally, the URLs arrived correctly too.

```
len(rank), len(main_menu), len(cafe_name), len(url_add)
```

Just in case, checking the lengths of the four variables also looks fine. We don't want to leave this data sitting in four separate lists, so let's use pandas.

```
import pandas as pd

data = {'Rank':rank, 'Menu':main_menu, 'Cafe':cafe_name, 'URL':url_add}
df = pd.DataFrame(data)
df.head()
```

We just define the name of each column and pass in the corresponding data. It looks good, and pandas organized it nicely. The one thing left to polish is the column order, so we tidy that up as well.
```
df = pd.DataFrame(data, columns=['Rank','Cafe','Menu','URL'])
df.head(5)
```

This is the result. As a first pass, we have now read the parts we wanted from a single page and organized them in the shape we wanted. Let's go one step further, but just in case, we save the result first.

```
df.to_csv('data/03. best_sandwiches_list_chicago.csv', sep=',', encoding='UTF-8')
```

## 3-4 Automatically accessing multiple web pages and collecting the information we want

Up to section 3-3 we built the part that fetches information about Chicago's 50 best sandwich shops from Chicago Magazine. Because it was a single HTML page, there was no need to move between pages; we only had to understand one page and pull out its contents. However, clicking the detailed description of each menu leads to yet another magazine article.

```
df = pd.read_csv('data/03. best_sandwiches_list_chicago.csv', index_col=0)
df.head()
```

We read back what we saved earlier. The goal is to take the 50 entries in the URL column of the table above and, from each page, get the shop's address, the price of its signature sandwich, and its phone number. So let's look at the first URL, work through it as practice, and then apply a loop to all 50.

```
df['URL'][0]
```

Here is the first URL, another Chicago Magazine page. We read this address with Beautiful Soup.

```
headers = {'User-Agent': 'Mozilla/5.0'}
url = (df['URL'][0])
html = requests.get(url, headers = headers).text
soup_tmp = BeautifulSoup(html, "html.parser")

soup_tmp

print(soup_tmp.find('p', 'addy'))
```

All the information we want is here: the address, the price, and even the phone number. From this point it looks like we can pull it out as text and split it on whitespace.

```
price_tmp = soup_tmp.find('p', 'addy').get_text()
price_tmp
```

First we pulled it out with .get_text(); everything we need is in there.

```
price_tmp.split()
```

Applying split() gives the result above. The thing to notice right away is that the price is the very first element. The number of items in the middle can change with the address format, but the last element is the web address and the second-to-last is the phone number. In a Python list, the last element can be addressed with -1. First, let's grab the price.

```
price_tmp.split()[0]
```

Since a period (.) is always attached at the end of the price, we trim it as shown below.

```
price_tmp.split()[0][:-1]
```

Now we want to select from the second element of the split result up to the third from the end, but even then the result is still a list, and what we actually want is a single string. The command for this is join.

```
''.join(price_tmp.split()[1:-2])
```

It can be used like this, and that gives us the address.

## 3-10 Tracking movie rating changes based on Naver movie scores

Naver has a site that shows movie ratings. There you can see which movies are popular, and you can also check their popularity on past dates. Going to https://goo.gl/f5cHRG, you will find the information sorted by movie rating. Here we use Chrome's developer tools to check the tag around the part where the movie titles appear.

```
from bs4 import BeautifulSoup
import pandas as pd
```

First, we simply import Beautiful Soup and pandas.

```
from urllib.request import urlopen

url_base = "http://movie.naver.com/"
url_syb = "movie/sdb/rank/rmovie.nhn?sel=cur&date=20170804"

page = urlopen(url_base+url_syb)

soup = BeautifulSoup(page, "html.parser")

soup
```

Let's read that address once. As mentioned before, it is only written as url_base+url_syb because the code was too long to fit on a printed page.

```
soup.find_all('div', 'tit5')

soup.find_all('div', 'tit5')[0].a.string
```

This is how we can pick out just the titles.

```
soup.find_all('td','point')[0].string
```

Next, we can get the rating points.

```
date = pd.date_range('2017-5-1', periods=100, freq='D')
date
```

Now we define the dates as the 100 days starting on May 1 and collect all the movie information for those dates.

```
import urllib
from tqdm import tqdm_notebook

movie_date = []
movie_name = []
movie_point = []

for today in tqdm_notebook(date):
    html = "http://movie.naver.com/" + \
           "movie/sdb/rank/rmovie.nhn?sel=cur&date={date}"

    response = urlopen(html.format(date=urllib.parse.quote(today.strftime('%Y%m%d'))))

    soup = BeautifulSoup(response, "html.parser")

    end = len(soup.find_all('td', 'point'))

    movie_date.extend([today for n in range(0, end)])
    movie_name.extend([soup.find_all('div', 'tit5')[n].a.string for n in range(0, end)])
    movie_point.extend([soup.find_all('td', 'point')[n].string for n in range(0, end)])
```

It is not difficult. The reason the html variable contains {date} in curly braces is that, in the response line below it, {date} is treated as a placeholder and its content is substituted. After that, we read the titles and the points.
```
movie = pd.DataFrame({'date':movie_date, 'name':movie_name, 'point':movie_point})
movie['point'] = movie['point'].astype(float)
movie.head()
```

We store what we read in pandas. This table holds the movies and their points by date. If you would rather view the data as total points per movie instead of by date, you can use pivot_table.

```
import numpy as np

movie_unique = pd.pivot_table(movie, index=['name'], aggfunc=np.sum)

movie_best = movie_unique.sort_values(by='point', ascending=False)
movie_best.head()
```

Here we have to aggregate with aggfunc set to np.sum so that the table can be sorted by each movie's total score. Summing the scores over the 100 days starting May 1, the output shows the five highest-scoring movies.

```
tmp = movie.query('name == ["노무현입니다"]')
tmp
```

Alternatively, as in the code above, we can select only the movie '노무현입니다' and inspect how its rating changes from day to day.

```
import matplotlib.pyplot as plt
%matplotlib inline

plt.figure(figsize=(12,8))
plt.plot(tmp['date'], tmp['point'])
plt.legend(loc='best')
plt.grid()
plt.show()
```

Naturally, we can also plot just this movie over time; the result is the graph above.

## 3-11 Tracking rating changes over time for each movie

```
movie_pivot = pd.pivot_table(movie, index=["date"], columns=['name'], values=['point'])
movie_pivot.head()
```

Using pivot_table, the data that was organized by date can easily be rearranged, as above, with dates along the rows and movie titles along the columns.

```
movie_pivot.columns = movie_pivot.columns.droplevel()
movie_pivot.head()
```

Let's clean up the extra column level that pivot_table attached to the column labels.

```
import platform
from matplotlib import font_manager, rc
plt.rcParams['axes.unicode_minus'] = False

if platform.system() == 'Darwin':
    rc('font', family='AppleGothic')
    print('Mac version')
elif platform.system() == 'Windows':
    path = "c:/Windows/Fonts/malgun.ttf"
    font_name = font_manager.FontProperties(fname=path).get_name()
    rc('font', family=font_name)
    print('Windows version')
elif platform.system() == 'Linux':
    path = "/usr/share/fonts/NanumFont/NanumGothicBold.ttf"
    font_name = font_manager.FontProperties(fname=path).get_name()
    plt.rc('font', family=font_name)
    print('Linux version')
else:
    print('Unknown system... sorry~~~~')
```

This configures matplotlib to handle Korean fonts.

```
movie_pivot.plot(y=['군함도', '노무현입니다','택시운전사','다크 나이트'], figsize=(12,6))
plt.legend(loc='best')
plt.grid()
plt.show()
```

Let's pick a few movies of interest and look at how their ratings change over time. The result is shown above. The performance of '노무현입니다' and '택시운전사' stands out, and we can follow how their ratings evolve. The rating of '군함도', which was quite controversial at the time, also appears. Even if it feels a little late, it is genuinely reassuring to be able to confirm for ourselves that movies like these ('노무현입니다', '택시운전사') receive attention and maintain high ratings.

Source: "파이썬으로 데이터 주무르기" (Playing with Data Using Python)
# Data manipulation and analyses (in Python)

## Introduction

You can do complex biological data manipulation and analyses using the `pandas` python package (or, by switching kernels, using `R`!)

We will look at pandas here, which provides `R`-like functions for data manipulation and analyses. `pandas` is built on top of NumPy. Most importantly, it offers an R-like `DataFrame` object: a multidimensional array with explicit row and column names that can contain heterogeneous types of data as well as missing values, which would not be possible using numpy arrays. `pandas` also implements a number of powerful data operations for filtering, grouping and reshaping data similar to R or spreadsheet programs.

## Installing Pandas

`pandas` requires NumPy. See the [Pandas documentation](http://pandas.pydata.org/). If you installed Anaconda, you already have Pandas installed. Otherwise, you can `sudo apt-get install` it.

Assuming `pandas` is installed, you can import it and check the version:

```
import pandas as pd
pd.__version__

import scipy as sc
```

Just as we generally import NumPy under the alias ``np``, we will import Pandas under the alias ``pd``:

### Reminder about tabbing and help!

As you read through these chapters, don't forget that Jupyter gives you the ability to quickly explore the contents of a package or the methods applicable to an object by using the tab-completion feature. Also, documentation of various functions can be accessed using the ``?`` character. For example, to display all the contents of the pandas namespace, you can type

```ipython
In [1]: pd.<TAB>
```

And to display Pandas's built-in documentation, you can use this:

```ipython
In [2]: pd?
```

## `dataframes`

The dataframe is the main data object in pandas.

### importing data

Dataframes can be created from multiple sources - e.g. CSV files, Excel files, and JSON.

```
MyDF = pd.read_csv('../silbiocomp/Practicals/Data/testcsv.csv', sep=',')
MyDF
```

### Creating dataframes

You can also create dataframes using a python dictionary like syntax:

```
MyDF = pd.DataFrame({
   'col1': ['Var1', 'Var2', 'Var3', 'Var4'],
   'col2': ['Grass', 'Rabbit', 'Fox', 'Wolf'],
   'col3': [1, 2, sc.nan, 4]
})

MyDF
```

### Examining your data

```
# Displays the top 5 rows. Accepts an optional int parameter - num. of rows to show
MyDF.head()

# Similar to head, but displays the last rows
MyDF.tail()

# The dimensions of the dataframe as a (rows, cols) tuple
MyDF.shape

# The number of rows. Equal to df.shape[0]
len(MyDF)

# An array of the column names
MyDF.columns

# Columns and their types
MyDF.dtypes

# Returns the data as a two-dimensional array
MyDF.values

# Displays descriptive stats for all columns
MyDF.describe()
```

OK, I am going to stop this brief intro to Jupyter with pandas here! I think you can already see the potential value of Jupyter for data analyses and visualization. As I mentioned above, you can also use R (e.g., using `tidyr` + `ggplot`) for this.

## Readings and Resources

* [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook)
* A [Jupyter + pandas quickstart tutorial](http://nikgrozev.com/2015/12/27/pandas-in-jupyter-quickstart-and-useful-snippets/)
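As a small addendum to the filtering, grouping and reshaping operations mentioned in the introduction, here is a minimal sketch (the column names and values are made up purely for illustration):

```
import pandas as pd

df = pd.DataFrame({
    'species': ['Fox', 'Fox', 'Rabbit', 'Rabbit', 'Wolf'],
    'site':    ['A', 'B', 'A', 'B', 'A'],
    'count':   [3, 1, 10, 12, 2]
})

# Filtering: keep only rows matching a condition
df[df['count'] > 2]

# Grouping: total count per species
df.groupby('species')['count'].sum()

# Reshaping: species x site table of counts
df.pivot_table(index='species', columns='site', values='count', aggfunc='sum')
```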
# Introduction to the Monte Carlo method

----

Start by defining the [Gibbs (or Boltzmann) distribution](https://en.wikipedia.org/wiki/Boltzmann_distribution):

$$P(\alpha) = e^{-E(\alpha)/kT}$$

This expression defines the probability of observing a particular configuration of spins, $\alpha$. As you can see, the probability of $\alpha$ decays exponentially with increasing energy of $\alpha$, $E(\alpha)$, where $k$ is the Boltzmann constant, $k = 1.38064852 \times 10^{-23} J/K$, and $T$ is the temperature in Kelvin.

## What defines the energy of a configuration of spins?

Given a configuration of spins (e.g., $\uparrow\downarrow\downarrow\uparrow\downarrow$) we can define the energy using what is referred to as an Ising Hamiltonian:

$$ \hat{H}' = \frac{\hat{H}}{k} = -\frac{J}{k}\sum_{<ij>} s_is_j,$$

where $s_i=1$ if the $i^{th}$ spin is `up` and $s_i=-1$ if it is `down`, the brackets $<ij>$ indicate a sum over spins that are connected, and $J$ is a constant that determines the energy scale. The energy here has been divided by the Boltzmann constant to yield units of temperature.

Let's consider the following case, which has the sites connected in a single 1D line:

$$\alpha = \uparrow-\downarrow-\downarrow-\uparrow-\downarrow.$$

What is the energy of such a configuration? The neighboring products $s_is_j$ are $-1, +1, -1, -1$, so

$$ E(\alpha)' = \frac{E(\alpha)}{k} = -\frac{J}{k}(-1 + 1 - 1 - 1) = +2J/k$$

## P1: Write a class that defines a spin configuration

## P2: Write a class that defines the 1D hamiltonian, containing a function that computes the energy of a configuration

## Properties

For any fixed state, $\alpha$, the `magnetization` ($M$) is proportional to the _excess_ number of spins pointing up or down, while the energy is given by the Hamiltonian:

$$M(\alpha) = N_{\text{up}}(\alpha) - N_{\text{down}}(\alpha).$$

As a dynamical, fluctuating system, each time you measure the magnetization the system might be in a different state ($\alpha$) and so you'll get a different number! However, we already know what the probability of measuring any particular $\alpha$ is, so in order to compute the average magnetization, $\left<M\right>$, we just need to multiply the magnetization of each possible configuration times the probability of it being measured, and then add them all up!

$$ \left<M\right> = \sum_\alpha M(\alpha)P(\alpha).$$

In fact, any average value can be obtained by adding up the value of each individual configuration multiplied by its probability:

$$ \left<E\right> = \sum_\alpha E(\alpha)P(\alpha).$$

This means that to obtain any average value (also known as an `expectation value`) computationally, we must compute both the value and the probability of all possible configurations. This becomes extremely expensive as the number of spins ($N$) increases.

## P3: Write a function that computes the magnetization of a spin configuration

## Q2: How many configurations are possible for: (a) N=10? (b) N=100? (c) N=1000?

## Q3: What is the energy for (++-+---+--+)?
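Since P1-P3 leave the implementation open, here is one minimal sketch of how the classes and the magnetization function could look. The class and method names are my own choices, not prescribed by the exercise:

```
import numpy as np

class SpinConfiguration:
    """P1: a 1D configuration of spins, stored as +1 / -1 values."""
    def __init__(self, spins):
        self.spins = np.array(spins, dtype=int)   # e.g. [1, -1, -1, 1, -1]

    def magnetization(self):
        """P3: N_up - N_down, which is simply the sum of the +/-1 spins."""
        return int(self.spins.sum())

class IsingHamiltonian1D:
    """P2: open-chain 1D Ising Hamiltonian, energies in units of temperature (E/k)."""
    def __init__(self, J=1.0):
        self.J = J

    def energy(self, config):
        # E/k = -(J/k) * sum over connected neighbors of s_i * s_{i+1}
        s = config.spins
        return -self.J * np.sum(s[:-1] * s[1:])

# The worked example from above: up, down, down, up, down
alpha = SpinConfiguration([1, -1, -1, 1, -1])
ham = IsingHamiltonian1D(J=1.0)
print(ham.energy(alpha))          # +2.0, i.e. +2J/k
print(alpha.magnetization())      # -1 (2 up, 3 down)

# Q3 configuration: (++-+---+--+)
q3 = SpinConfiguration([+1, +1, -1, +1, -1, -1, -1, +1, -1, -1, +1])
print(ham.energy(q3))             # also +2.0 for this open chain
```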
``` import numpy as np import pandas as pd import tensorflow_datasets as tfds import os from sklearn.datasets import load_files import shutil from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.model_selection import cross_val_score from sklearn.linear_model import LogisticRegression from sklearn.model_selection import GridSearchCV from sklearn.pipeline import make_pipeline import mglearn from zipfile import ZipFile ``` # Logistic Regression for 10 vs. 37 BPs Several ideas from the code below come from Andreas C. Müller & Sarah Guido ``` #unzip dataset if os.path.isdir('decisions') == False: ZipFile("decisions.zip").extractall("decisions") df = pd.read_csv('metadata1037.csv') df ids = list(df['doc_id']) bp_label = list(df['bp']) bp_set = list(set(df['bp'])) bp_set if os.path.isdir('bp1037') == False: os.mkdir('bp1037') if os.path.isdir('bp1037/10') == False: os.mkdir('bp1037/10') if os.path.isdir('bp1037/37') == False: os.mkdir('bp1037/37') for index in range(len(ids)): if bp_label[index] == '[10]': string = 'decisions/decisions/'+ids[index]+'.txt' shutil.copyfile(string, 'bp1037/10/'+ids[index]+'.txt') elif bp_label[index] == '[37]': string = 'decisions/decisions/'+ids[index]+'.txt' shutil.copyfile(string, 'bp1037/37/'+ids[index]+'.txt') bp_10_37 = load_files("bp1037") bp_train, label_train = bp_10_37.data, bp_10_37.target print("type of text_train: {}".format(type(bp_train))) print("length of text_train: {}".format(len(bp_train))) print("text_train[1]:\n{}".format(bp_train[1].decode("utf-8"))) def numeric_masker(dataset): '''Substitues all integers in a text dataset by an empty string Input: dataset made of texts Output: the same dataset made of texts, only with masked integers''' masked_set = [] for index in range(len(dataset)): new_string = '' for word in range(len(dataset[index])): if dataset[index][word].isnumeric()==False: new_string += dataset[index][word] else: new_string += '' masked_set.append(new_string) return masked_set #balances dataset to have same number of bps 10 and 37 bp_10 = [bp_train[index] for index in range(len(label_train)) if label_train[index]==0] print(len(bp_10),' ', list(label_train).count(0)) #choose randomly from bp 10 same number elements as bp 37 random_indexes = np.random.permutation(len(bp_10)) bp_10_bal = [bp_10[index] for index in random_indexes[:list(label_train).count(1)]] print(len(bp_10_bal),' ', list(label_train).count(1)) #produces the balanced set bp_37 = [bp_train[index] for index in range(len(label_train)) if label_train[index]==1] bp_train_bal = bp_10_bal+bp_37 bp_train_bal = np.array(bp_train_bal) label_train_bal = list(np.zeros(len(bp_10_bal)))+list(np.ones(len(bp_37))) label_train_bal = np.array(label_train_bal) print(len(bp_train_bal),' ', len(label_train_bal)) uft_dataset = [bp_train_bal[index].decode("utf-8") for index in range(len(bp_train_bal))] masked_dataset = numeric_masker(uft_dataset) masked_dataset[0] pipe = make_pipeline(TfidfVectorizer(min_df=5, norm=None), LogisticRegression()) param_grid = {'logisticregression__C': [0.001, 0.01, 0.1, 1, 10]} grid = GridSearchCV(pipe, param_grid, cv=5) grid.fit(masked_dataset, label_train_bal) print("Best cross-validation score: {:.2f}".format(grid.best_score_)) vectorizer = grid.best_estimator_.named_steps["tfidfvectorizer"] # transform the training dataset X_train = vectorizer.transform(masked_dataset) # find maximum value for each of the features over the dataset max_value = X_train.max(axis=0).toarray().ravel() sorted_by_tfidf = max_value.argsort() # get feature names 
feature_names = np.array(vectorizer.get_feature_names()) mglearn.tools.visualize_coefficients( grid.best_estimator_.named_steps["logisticregression"].coef_, feature_names, n_top_features=40) ``` Súmula Vinculante (Binding Precedent) 10: A decision by a fractional body of a court that sets aside the application of a law or normative act of the public authorities, in whole or in part, without expressly declaring it unconstitutional, violates the full-bench reservation clause (Federal Constitution, Article 97). Súmula Vinculante (Binding Precedent) 37: The Judiciary, which has no legislative function, may not increase the salaries of public servants on the grounds of isonomy (equal treatment) ``` #deletes extra folder for splitting the dataset shutil.rmtree('bp1037') ``` # One vs. All Regression for BPs ``` #unzip dataset if os.path.isdir('decisions_allbps') == False: ZipFile("decisions_allbps.zip").extractall("decisions_allbps") df = pd.read_csv('metadata.csv') df ids = list(df['doc_id']) minister = list(df['minister']) bp_label = list(df['bp']) bp_set = list(set(df['bp'])) bp_set_single_bp = [element for element in bp_set if len(element)<5] bp_set_single_bp if os.path.isdir('allbp') == False: os.mkdir('allbp') if os.path.isdir('allbp/3') == False: os.mkdir('allbp/3') if os.path.isdir('allbp/33') == False: os.mkdir('allbp/33') if os.path.isdir('allbp/37') == False: os.mkdir('allbp/37') if os.path.isdir('allbp/4') == False: os.mkdir('allbp/4') if os.path.isdir('allbp/14') == False: os.mkdir('allbp/14') if os.path.isdir('allbp/17') == False: os.mkdir('allbp/17') if os.path.isdir('allbp/26') == False: os.mkdir('allbp/26') if os.path.isdir('allbp/20') == False: os.mkdir('allbp/20') if os.path.isdir('allbp/10') == False: os.mkdir('allbp/10') if os.path.isdir('allbp/11') == False: os.mkdir('allbp/11') for index in range(len(ids)): string = 'decisions_allbps/decisions/'+ids[index]+'.txt' if bp_label[index] == '[3]': shutil.copyfile(string, 'allbp/3/'+ids[index]+'.txt') elif bp_label[index] == '[33]': shutil.copyfile(string, 'allbp/33/'+ids[index]+'.txt') elif bp_label[index] == '[37]': shutil.copyfile(string, 'allbp/37/'+ids[index]+'.txt') elif bp_label[index] == '[4]': shutil.copyfile(string, 'allbp/4/'+ids[index]+'.txt') elif bp_label[index] == '[14]': shutil.copyfile(string, 'allbp/14/'+ids[index]+'.txt') elif bp_label[index] == '[17]': shutil.copyfile(string, 'allbp/17/'+ids[index]+'.txt') elif bp_label[index] == '[26]': shutil.copyfile(string, 'allbp/26/'+ids[index]+'.txt') elif bp_label[index] == '[20]': shutil.copyfile(string, 'allbp/20/'+ids[index]+'.txt') elif bp_label[index] == '[10]': shutil.copyfile(string, 'allbp/10/'+ids[index]+'.txt') elif bp_label[index] == '[11]': shutil.copyfile(string, 'allbp/11/'+ids[index]+'.txt') bp = load_files("allbp") bp_train, label_train = bp.data, bp.target print("type of text_train: {}".format(type(bp_train))) print("length of text_train: {}".format(len(bp_train))) print("text_train[1]:\n{}".format(bp_train[1].decode("utf-8"))) label_set = list(set(label_train)) for element in label_set: print(element, '---->', list(label_train).count(element)) #take only 760 elements of each bp bp_train_bal = [] label_train_bal = [] for bp in label_set: bp_only = [bp_train[index] for index in range(len(label_train)) if label_train[index]==bp] random_indexes = np.random.permutation(len(bp_only)) bp_only = [bp_only[index] for index in random_indexes[:760]] bp_train_bal += bp_only label_train_bal += [bp for i in range(len(bp_only))] uft_dataset = [bp_train_bal[index].decode("utf-8") for index in range(len(bp_train_bal))] masked_dataset = 
numeric_masker(uft_dataset) masked_dataset[0] pipe = make_pipeline(TfidfVectorizer(min_df=5, norm=None), LogisticRegression()) param_grid = {'logisticregression__C': [0.001, 0.01, 0.1, 1, 10]} grid = GridSearchCV(pipe, param_grid, cv=5) grid.fit(masked_dataset, label_train_bal) print("Best cross-validation score: {:.2f}".format(grid.best_score_)) vectorizer = grid.best_estimator_.named_steps["tfidfvectorizer"] # transform the training dataset X_train = vectorizer.transform(masked_dataset) # find maximum value for each of the features over the dataset max_value = X_train.max(axis=0).toarray().ravel() sorted_by_tfidf = max_value.argsort() # get feature names feature_names = np.array(vectorizer.get_feature_names()) for bp in range(len(grid.best_estimator_.named_steps["logisticregression"].coef_)): test_coef = grid.best_estimator_.named_steps["logisticregression"].coef_[bp] idx = np.argpartition(test_coef, -40)[-40:] indices = idx[np.argsort((-test_coef)[idx])] most_important_feature = feature_names[indices] print(bp, '---->', most_important_feature) ``` # One vs. All Ministers ``` #remove nan minister entry ministers_set = list(set(df['minister'])) ministers_set.pop(0) ministers_set for element in ministers_set: print(element, '---->',minister.count(element)) #use only ministers that have more than 700 citations ministers_set_copy = ministers_set[:] for element in ministers_set_copy: if minister.count(element)<700: ministers_set.remove(element) ministers_set #make a folder to store the data if os.path.isdir('minister') == False: os.mkdir('minister') #make a folder for each limit, allowing then to use sklearn.datasets.load_files for minis in ministers_set: if os.path.isdir('minister/'+str(minis)) == False: os.mkdir('minister/'+str(minis)) for index in range(len(ids)): if minister[index] in ministers_set: string = 'decisions_allbps/decisions/'+ids[index]+'.txt' shutil.copyfile(string, 'minister/'+str(minister[index])+'/'+ ids[index]+'.txt') ministers = load_files("minister") #creates train and label sets minister_train, label_train = ministers.data, ministers.target print("type of text_train: {}".format(type(minister_train))) print("length of text_train: {}".format(len(minister_train))) print("text_train[1]:\n{}".format(minister_train[1].decode("utf-8"))) label_set = list(set(label_train)) for element in label_set: print(element, '---->', list(label_train).count(element)) #take only 700 elements of each bp minister_train_bal = [] label_train_bal = [] #balances the dataset so every minister have same representativity for minis in label_set: minister_only = [minister_train[index] for index in range(len(label_train)) if label_train[index]==minis] random_indexes = np.random.permutation(len(minister_only)) minister_only = [minister_only[index] for index in random_indexes[:700]] #picks 700 random documents for each minister minister_train_bal += minister_only label_train_bal += [minis for i in range(len(minister_only))] #make a list of words to mask minister_names = [minis.split() for minis in ministers_set] flat_names = [item for sublist in minister_names for item in sublist] flat_names += ['carmen', 'luis', 'lucia'] #substitute gender specific words that can be used to relate to some ministers flat_names += ['relatora', 'relator', 'ministro','ministra','minis', 'turma', 'primeira', 'segunda', 'min'] cap_names = [minis.capitalize() for minis in flat_names] up_names = [minis.upper() for minis in flat_names] mask_list = flat_names+cap_names+up_names #add puntuation to everything comma_list = [word+',' 
for word in mask_list] dot_list = [word+'.' for word in mask_list] mask_list += comma_list+dot_list mask_list def masker(dataset, list_of_masked): '''Substitues words that are in the masked list by an empty string Input: dataset made of texts Output: the same dataset made of texts, only with masked words''' masked_set = [] for index in range(len(dataset)): new_string = '' splitted_doc = dataset[index].split() for word in range(len(splitted_doc)): if splitted_doc[word] in list_of_masked: new_string += '' else: new_string += ' '+ splitted_doc[word] masked_set.append(new_string) return masked_set uft_dataset = [minister_train_bal[index].decode("utf-8") for index in range(len(minister_train_bal))] masked_dataset = masker(uft_dataset, mask_list) masked_dataset[0] pipe = make_pipeline(TfidfVectorizer(min_df=5, norm=None), LogisticRegression()) param_grid = {'logisticregression__C': [0.001, 0.01, 0.1, 1, 10]} grid = GridSearchCV(pipe, param_grid, cv=5) grid.fit(masked_dataset, label_train_bal) print("Best cross-validation score: {:.2f}".format(grid.best_score_)) vectorizer = grid.best_estimator_.named_steps["tfidfvectorizer"] # transform the training dataset X_train = vectorizer.transform(masked_dataset) # find maximum value for each of the features over the dataset max_value = X_train.max(axis=0).toarray().ravel() sorted_by_tfidf = max_value.argsort() # get feature names feature_names = np.array(vectorizer.get_feature_names()) for minis in range(len(grid.best_estimator_.named_steps["logisticregression"].coef_)): test_coef = grid.best_estimator_.named_steps["logisticregression"].coef_[minis] idx = np.argpartition(test_coef, -40)[-40:] indices = idx[np.argsort((-test_coef)[idx])] #shows 40 most important words for deciding belonging to each class most_important_feature = feature_names[indices] print(minis, '---->', most_important_feature) #deletes extra folder for splitting the dataset shutil.rmtree('minister') ```
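The models above are scored only through GridSearchCV's internal cross-validation. As a possible extension, not part of the original analysis, a held-out test split could give an unbiased per-class evaluation of the minister classifier. A minimal sketch, assuming `masked_dataset`, `label_train_bal`, `ministers` and `grid` as defined in the cells above:

```
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hold out 20% of the balanced, masked documents for testing
X_tr, X_te, y_tr, y_te = train_test_split(
    masked_dataset, label_train_bal, test_size=0.2,
    random_state=0, stratify=label_train_bal)

# Reuse the same pipeline and parameter grid, fitting on the training split only
grid.fit(X_tr, y_tr)
print("Best cross-validation score on the training split: {:.2f}".format(grid.best_score_))

# Evaluate the refitted best pipeline on the untouched test split
y_pred = grid.predict(X_te)
print(classification_report(y_te, y_pred, target_names=ministers.target_names))
```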
# **Open-Source**: Examples for style transfer & time series Source: [https://github.com/d-insight/code-bank.git](https://github.com/d-insight/code-bank.git) License: [MIT License](https://opensource.org/licenses/MIT). See open source [license](LICENSE) in the Code Bank repository. ------------- ## Overview In this exercise, we leverage open-source tools to show the power of re-using existing work from the data science community. We will (1) convert the style of an image, based on a pre-trained open-source model, and (2) use cutting-edge models for time series predictions __Main exercise 1__ This *neural style transfer* takes a *content image* and a *style reference image* (e.g. by Picasso, Kandinsky, Van Gogh). The goal is to "paint" the content image in the style of the reference image, using neural networks. Original paper: *A Neural Algorithm of Artistic Style* by [Gatys et al. (2015)](https://arxiv.org/abs/1508.06576) __Main exercise 2__ In the second part, we will explore how to build and manipulate a time series, train a forecasting model, and evaluate the predictions. --------- ## Part 0: Setup ``` # Imports import os # Style transfer import tensorflow as tf import tensorflow_hub as hub import IPython.display as display import matplotlib.pyplot as plt import matplotlib as mpl import numpy as np import PIL.Image import time import functools # Load compressed models from tensorflow_hub os.environ['TFHUB_MODEL_LOAD_FORMAT'] = 'COMPRESSED' # Time series import pandas as pd from darts import TimeSeries from darts.models import ( NaiveDrift, Prophet, ExponentialSmoothing, AutoARIMA, Theta ) from darts.metrics import mape, mase # Define plotting format mpl.rcParams['figure.figsize'] = (12, 12) mpl.rcParams['axes.grid'] = False import warnings warnings.filterwarnings("ignore") import logging logging.disable(logging.CRITICAL) # Helper functions def tensor_to_image(tensor): tensor = tensor*255 tensor = np.array(tensor, dtype=np.uint8) if np.ndim(tensor)>3: assert tensor.shape[0] == 1 tensor = tensor[0] return PIL.Image.fromarray(tensor) def load_img(path_to_img): max_dim = 512 img = tf.io.read_file(path_to_img) img = tf.image.decode_image(img, channels=3) img = tf.image.convert_image_dtype(img, tf.float32) shape = tf.cast(tf.shape(img)[:-1], tf.float32) long_dim = max(shape) scale = max_dim / long_dim new_shape = tf.cast(shape * scale, tf.int32) img = tf.image.resize(img, new_shape) img = img[tf.newaxis, :] return img def imshow(image, title=None): if len(image.shape) > 3: image = tf.squeeze(image, axis=0) plt.imshow(image) if title: plt.title(title) ``` # **MAIN EXERCISE 1** ## Part 1: Upload image (optional) Upload an image to the directory of this notebook. You can (1) upload an image from your computer or (2) copy an image from the web. We encourage the former – it's more fun. **Q 1:** Upload image titled `myImage.jpg` and replace the existing `myImage.jpg`. ## Part 2: Choose style image Now comes the creative part. Choose one of the available style reference images below. **Q 1:** Define the path to the style image in the `/styles` directory. Hint: Create a variable called `content_path` to reference the style image you want to use. **Q 2:** Show the style image. Hint: Use the `load_img()` and `imshow()` helper functions. ## Part 3: Apply open-source model We now download an open-source, pre-trained neural network to "paint" our image in the style above. The model is available on the TensorFlow Hub [here](https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2). 
```
# Download the model
hub_model = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2')
```

**Q 1:** Apply the style to your image. Hint: Use the `hub_model()` with the `content_image` and `style_image` as inputs.

**Q 2:** Plot the stylized image.

# **MAIN EXERCISE 2**

## Part 4: Load and prepare data

We will use the well known [monthly airline passengers dataset](https://github.com/jbrownlee/Datasets/blob/master/monthly-airline-passengers.csv).

A `TimeSeries` simply represents a univariate or multivariate time series, with a proper time index. It is a wrapper around a `pandas.DataFrame`, and it can be built in a few different ways:

* From an entire Pandas `DataFrame` directly
* From a time index and an array of corresponding values
* From a subset of Pandas `DataFrame` columns, indicating which are the time column and the values columns.

```
df = pd.read_csv('data/AirPassengers.csv', delimiter=",")
series = TimeSeries.from_dataframe(df, 'Month', ['#Passengers'])

mpl.rcParams['figure.figsize'] = (8, 8)
series.plot(grid=True, lw=3)
```

**Q 1:** Create a training and validation series and plot.

Let's split our `TimeSeries` into a training and a validation series. Note: in general, it is also a good practice to keep a test series aside and never touch it until the end of the process. Here, we just build a training and a validation series for simplicity.

The training series will be a `TimeSeries` containing values until January 1958 (excluded), and the validation series a `TimeSeries` containing the rest:

## Part 5: Fit different time series models

`darts` is built to make it easy to train and validate several models in a unified way. Let's train a few of them and compute their respective mean absolute percentage error (MAPE) on the validation set.

**Q 1:** Evaluate the following time series models: `NaiveDrift() ExponentialSmoothing() Prophet() AutoARIMA() Theta()`.

Hint 1: The above models are all readily available in `darts`.

Hint 2: Write a model evaluation helper function that takes one of the above models as an input.

Here, we only built these models with their default parameters. We can probably do better if we fine-tune model-specific parameters to our problem. We skip this step here, but encourage you to try it out yourself and see by how much you can improve model performance.

## Part 6: Plot the best model

Finally, we plot how well the predictions fit the actual values in the validation set.

**Q 1:** Re-fit the best performing model from the preceding part and save the predictions in a variable, so we can plot the predictions later.

**Q 2:** Plot predicted vs. actual values in the validation dataset.
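For reference, here is one possible shape for the Part 5 helper and the Part 6 plot. This is a hedged sketch, not the official solution: the names `train` and `val` assume you split the series as suggested above, e.g. with `train, val = series.split_before(pd.Timestamp('19580101'))`.

```
# Hedged sketch of Parts 5 and 6. `train` and `val` are assumed to come from the split
# described above, e.g.: train, val = series.split_before(pd.Timestamp('19580101'))
def eval_model(model, train, val):
    model.fit(train)                    # fit on the training series only
    forecast = model.predict(len(val))  # one forecast point per validation time step
    print("{} obtains MAPE: {:.2f}%".format(model, mape(val, forecast)))
    return forecast

for m in [NaiveDrift(), ExponentialSmoothing(), Prophet(), AutoARIMA(), Theta()]:
    eval_model(m, train, val)

# Part 6: re-fit whichever model gave the lowest MAPE in your run (shown here with
# Theta() as a placeholder) and plot its forecast against the actual series.
best_model = Theta()
best_model.fit(train)
prediction = best_model.predict(len(val))

series.plot(label='actual')
prediction.plot(label='forecast', lw=3)
plt.legend()
```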
``` import torch import torch.nn as nn import torch.nn.functional as F from utils.match import match ``` # MultiBoxLoss ``` class MultiBoxLoss(nn.Module): def __init__(self, jaccard_thresh=0.5, neg_pos=3, device='cpu'): super(MultiBoxLoss, self).__init__() self.jaccard_thresh = jaccard_thresh # 0.5 self.negpos_ratio = neg_pos # 3:1 Hard Negative Mining self.device = device def forward(self, predictions, targets): loc_data, conf_data, dbox_list = predictions num_batch = loc_data.size(0) num_dbox = loc_data.size(1) num_classes = conf_data.size(2) conf_t_label = torch.LongTensor(num_batch, num_dbox).to(self.device) loc_t = torch.Tensor(num_batch, num_dbox, 4).to(self.device) for idx in range(num_batch): truths = targets[idx][:, :-1].to(self.device) # BBox labels = targets[idx][:, -1].to(self.device) dbox = dbox_list.to(self.device) variance = [0.1, 0.2] match(self.jaccard_thresh, truths, dbox, variance, labels, loc_t, conf_t_label, idx) pos_mask = conf_t_label > 0 # torch.Size([num_batch, 8732]) pos_idx = pos_mask.unsqueeze(pos_mask.dim()).expand_as(loc_data) loc_p = loc_data[pos_idx].view(-1, 4) loc_t = loc_t[pos_idx].view(-1, 4) loss_l = F.smooth_l1_loss(loc_p, loc_t, reduction='sum') batch_conf = conf_data.view(-1, num_classes) loss_c = F.cross_entropy( batch_conf, conf_t_label.view(-1), reduction='none') num_pos = pos_mask.long().sum(1, keepdim=True) loss_c = loss_c.view(num_batch, -1) # torch.Size([num_batch, 8732]) loss_c[pos_mask] = 0 _, loss_idx = loss_c.sort(1, descending=True) _, idx_rank = loss_idx.sort(1) num_neg = torch.clamp(num_pos*self.negpos_ratio, max=num_dbox) # torch.Size([num_batch, 8732]) neg_mask = idx_rank < (num_neg).expand_as(idx_rank) # pos_mask:torch.Size([num_batch, 8732])→pos_idx_mask:torch.Size([num_batch, 8732, 21]) pos_idx_mask = pos_mask.unsqueeze(2).expand_as(conf_data) neg_idx_mask = neg_mask.unsqueeze(2).expand_as(conf_data) # torch.Size([num_pos+num_neg, 21]) conf_hnm = conf_data[(pos_idx_mask+neg_idx_mask).gt(0) ].view(-1, num_classes) # torch.Size([pos+neg]) conf_t_label_hnm = conf_t_label[(pos_mask+neg_mask).gt(0)] loss_c = F.cross_entropy(conf_hnm, conf_t_label_hnm, reduction='sum') N = num_pos.sum() loss_l /= N loss_c /= N return loss_l, loss_c ```
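The notebook defines the loss but never shows it in use. Below is a hypothetical usage sketch: `net`, `images` and `targets` stand for the SSD network, an image batch and its box/label annotations, all of which live elsewhere in this kind of project and are not defined here.

```
# Hypothetical usage sketch: `net`, `images` and `targets` are assumed to exist
# in the surrounding SSD project; they are not defined in this notebook.
criterion = MultiBoxLoss(jaccard_thresh=0.5, neg_pos=3, device='cpu')
optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)

net.train()
outputs = net(images)                         # (loc predictions, conf predictions, default boxes)
loss_l, loss_c = criterion(outputs, targets)  # localization loss and confidence loss
loss = loss_l + loss_c                        # total SSD loss

optimizer.zero_grad()
loss.backward()
optimizer.step()
```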
## Scientific Python

The goal of today's lab is to get familiar with data representation and manipulation in Python.

To do the lab, you should make a copy of the repository on your machine (either by downloading it directly or by cloning/forking it from https://github.com/chagaz/ma2823_2016), then open this file from Jupyter. If you don't know how to start Jupyter, try ```jupyter notebook``` from a terminal (*not* from a Python console).

### 1. Let us check that your installation is working

```
# scientific python
import numpy as np
import scipy as sp

# interactive plotting
%pylab inline
```

The previous command is one of the "magics" of Jupyter. As indicated by the message you have gotten, it imports numpy and matplotlib. See http://ipython.readthedocs.io/en/stable/interactive/magics.html?highlight=pylab%20inline for details.

### 2. Numeric Python: Numpy arrays

NumPy arrays are a fundamental structure for scientific computing. Numpy arrays are homogeneous (i.e. all objects they contain have the same type) multi-dimensional arrays, which we’ll use among other things to represent vectors and matrices.

Let us explore some basic Numpy commands.

#### Creating arrays

```
# Create a random array of size 3 x 5
X = np.random.random((3, 5))

# Create an array of zeros of size 3 x 5
np.zeros((3, 5))

# Create an array of ones of size 3 x 5
np.ones((3, 5))

# Create the identity matrix of size 4 x 4
np.eye(4)

# Visualize X
print(X)

# The dimensions of X are accessible via
print(X.shape)

# The total number of elements of X is accessible via
print(X.size)
```

#### Accessing elements, rows, and columns of arrays

Remember, in Python indices start at 0.

```
# Get a single element: X[0,1]
print(X[0, 1])

# Get a row
print(X[0, :])
print(X[0])
print("shape of a row vector:", X[0].shape)

# Get a column
print(X[:, 3])
```

#### Array manipulation

We use 2-dimensional arrays to represent matrices, and can do basic linear algebra operations on them.

```
# Transposing an array
print(X.T)

# Applying the same transformation to all entries in an array
# Multiply all entries of X by 2:
print(2*X)

# Add 1 to all entries of X

# Compute the array that has as entries the logarithm (base 2) of the entries of X

# Square all entries of X

# Compute the array that has as entries the logarithm (base 10) of the entries of X

# Element-wise matrix multiplication
print(X*X)

# Matrix multiplication
print(np.dot(X, X.T))
print(X.dot(X.T))

# Create a random array B of size 5 x 4

# Multiply X by B

# Get the diagonal of X. Note that X is not square.
np.diag(X)

# Compute the trace of X
np.trace(X)
```

More complex linear algebra operations are available via numpy.linalg: http://docs.scipy.org/doc/numpy/reference/routines.linalg.html

```
# Compute the determinant of XX'
np.linalg.det(X.dot(X.T))

# Compute the eigenvalues and eigenvectors of XX'
np.linalg.eig(X.dot(X.T))

# Compute the inverse of XX'
np.linalg.inv(X.dot(X.T))
```

For more on arrays, you can refer to http://docs.scipy.org/doc/numpy/reference/arrays.html.

For more about NumPy, you can refer to:
* The Tentative NumPy Tutorial at http://wiki.scipy.org/Tentative_NumPy_Tutorial;
* The NumPy documentation at http://docs.scipy.org/doc/numpy/index.html.

### 3. Scientific Python with SciPy

SciPy is a collection of mathematical algorithms and convenience functions built on Numpy.
It has specialized submodules for integration, Fourier transforms, optimization, statistics and much more: http://docs.scipy.org/doc/scipy/reference/ It offers more linear algebra manipulation tools than Numpy: http://docs.scipy.org/doc/scipy/reference/linalg.html#module-scipy.linalg SciPy is particularly useful to manipulate sparse matrices. It often happens that the data we manipulate contains a lot of zeros. In this case, storing all the zeros is inefficient, and it is much more efficient to use data structures meant for sparse matrices. This can be done with the scipy.sparse submodule, which allows to store sparse matrices efficiently, and implements many interesting functions (linear algebra, sparse solvers, graph algorithms, etc.). http://docs.scipy.org/doc/scipy/reference/sparse.html#module-scipy.sparse ### 4. Plotting with Matplotlib Visualization is an important part of machine learning. Plotting your data will allow you to have a better feel for it (how are the features distributed, are there outliers, etc.). Plotting measures of performance (whether ROC curves or single-valued performance measures, with error bars) allows you to rapidly compare methods. matplotlib is a very flexible data visualization package, partially inspired by MATLAB. #### Lines ``` # Plotting a sinusoide # create an array of 100 equally-spaced points between 0 and 10 (to serve as x coordinates) x = np.linspace(0, 10, 100) # create the y coordinates y = np.sin(x) plt.plot(x, y) # Tweak some options plt.plot(x, y, color='orange', linestyle='--', linewidth=3) # Plot the individual points plt.plot(x, y, color='orange', marker='x', linestyle='') # Plot multiple lines plt.plot(x, y, color='orange', linewidth=2, label='sine') plt.plot(x, np.cos(x), color='blue', linewidth=2, label='cosine') plt.legend() # Add a title and caption and label the axes plt.plot(x, y, color='orange', linewidth=2, label='sine') plt.plot(x, np.cos(x), color='blue', linewidth=2, label='cosine') plt.legend(loc='lower left', fontsize=14) plt.title("Sinusoides", fontsize=14) plt.xlabel("$f(x)$", fontsize=16) plt.ylabel("$sin(x)$", fontsize=16) # Save the plot plt.plot(x, y, color='orange', linewidth=2, label='sine') plt.plot(x, np.cos(x), color='blue', linewidth=2, label='cosine') plt.legend(loc='lower left', fontsize=14) plt.title("Sinusoides", fontsize=14) plt.xlabel("$x$", fontsize=16) plt.ylabel("$f(x)$", fontsize=16) plt.savefig("my_sinusoide.png") # Add to the previous plot a sinusoide of half the amplitude and twice the frequency of the sine one. # Plot the line in green and give each line a different line style. ``` #### Scatterplots ``` # Create 500 points with random (x, y) coordinates x = np.random.normal(size=500) y = np.random.normal(size=500) # Plot them plt.scatter(x, y) # Use the same ranges for both axes plt.scatter(x, y) plt.xlim([-4, 4]) plt.ylim([-4, 4]) # Add a title and axis captions to the previous plot. Change the marker style and color. ``` #### Heatmaps Matplotlib will automatically assign a color to each numerical value, based on a color map. 
For more about color maps see: http://matplotlib.org/users/colormaps.html http://matplotlib.org/1.2.1/examples/pylab_examples/show_colormaps.html ``` # Create a random 50 x 100 array X = np.random.random((50, 100)) heatmap = plt.pcolor(X, cmap=plt.cm.Blues) plt.colorbar(heatmap) ``` #### Histograms ``` # Create a random vector (normally distributed) of size 5000 X = np.random.normal(size=(5000,)) # Plot the histogram of its values over 50 bins h = plt.hist(X, bins=50, color='orange', histtype='stepfilled') ``` #### Images ``` # create an image x = np.linspace(1, 12, 100) # transform an array of shape (100,) into an array of shape (100, 1) y = x[:, np.newaxis] y = y * np.cos(y) # Create an image matrix: image[i,j] = y cos(y)[i] * sin(x)[j] image = y * np.sin(x) # show the image (the origin is, by default, at the top-left corner!) plt.imshow(image, cmap=plt.cm.prism) # Contour plot - note that origin here is at the bottom-left by default! # A contour line or isoline of a function of two variables is a curve along which the function has a constant value. contours = plt.contour(image, cmap=plt.cm.prism) plt.clabel(contours, inline=1, fontsize=10) ``` Many more types of plots and functionalities to label axes, display legends, etc. are available. The matplotlib gallery (http://matplotlib.org/gallery.html) is a good place to start to get an idea of what is possible and how to do it. Note that there are many more plotting libraries for Python. Two of the more popular are: * Seaborn (based on Matplotlib, but with more aesthetically pleasing defaults): http://stanford.edu/~mwaskom/software/seaborn/index.html * Bokeh (creates interactive plots): http://bokeh.pydata.org/
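Section 3 above describes `scipy.sparse` in prose only, so here is a small hedged example of what storing and using a sparse matrix looks like in practice (the sizes and the sparsity threshold are arbitrary):

```
from scipy import sparse

# A 1000 x 1000 matrix with roughly 1% non-zero entries: the dense version stores
# a million floats, the CSR version only the non-zero values and their indices.
D = np.random.random((1000, 1000))
D[D < 0.99] = 0
S = sparse.csr_matrix(D)
print(S.nnz, "non-zero entries out of", D.size)

# Most linear algebra still works, e.g. a sparse matrix-vector product:
v = np.random.random(1000)
w = S.dot(v)
```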
``` #export from fastai.imports import * from fastai.data.all import * from fastai.optimizer import * from fastai.learner import * from fastai.callback.core import * from torch.utils.data import TensorDataset # default_exp test_utils ``` # Synthetic Learner > For quick testing of the training loop and Callbacks ``` #export from torch.utils.data import TensorDataset #export def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(bs*n, 1) return TensorDataset(x, a*x + b + 0.1*torch.randn(bs*n, 1)) train_ds = get_data(n_train) valid_ds = get_data(n_valid) device = default_device() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, num_workers=0) return DataLoaders(train_dl, valid_dl, device=device) #export class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b # export @delegates(Learner.__init__) def synth_learner(n_trn=10, n_val=2, cuda=False, lr=1e-3, data=None, model=None, **kwargs): if data is None: data=synth_dbunch(n_train=n_trn,n_valid=n_val, cuda=cuda) if model is None: model=RegModel() return Learner(data, model, lr=lr, loss_func=MSELossFlat(), opt_func=partial(SGD, mom=0.9), **kwargs) #export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) ``` ## Install Utils ``` #export def get_env(name): "Return env var value if it's defined and not an empty string, or return Unknown" res = os.environ.get(name,'') return res if len(res) else "Unknown" #export def try_import(module): "Try to import `module`. Returns module's object on success, None on failure" try: return importlib.import_module(module) except: return None #export def nvidia_smi(cmd = "nvidia-smi"): try: res = run(cmd) except OSError as e: return None return res res = nvidia_smi() #export def nvidia_mem(): try: mem = run("nvidia-smi --query-gpu=memory.total --format=csv,nounits,noheader") except: return None return mem.strip().split('\n') nvidia_mem() #export def show_install(show_nvidia_smi:bool=False): "Print user's setup information" import fastai, platform, fastprogress rep = [] opt_mods = [] rep.append(["=== Software ===", None]) rep.append(["python", platform.python_version()]) rep.append(["fastai", fastai.__version__]) rep.append(["fastprogress", fastprogress.__version__]) rep.append(["torch", torch.__version__]) # nvidia-smi smi = nvidia_smi() if smi: match = re.findall(r'Driver Version: +(\d+\.\d+)', smi) if match: rep.append(["nvidia driver", match[0]]) available = "available" if torch.cuda.is_available() else "**Not available** " rep.append(["torch cuda", f"{torch.version.cuda} / is {available}"]) # no point reporting on cudnn if cuda is not available, as it # seems to be enabled at times even on cpu-only setups if torch.cuda.is_available(): enabled = "enabled" if torch.backends.cudnn.enabled else "**Not enabled** " rep.append(["torch cudnn", f"{torch.backends.cudnn.version()} / is {enabled}"]) rep.append(["\n=== Hardware ===", None]) gpu_total_mem = [] nvidia_gpu_cnt = 0 if smi: mem = nvidia_mem() nvidia_gpu_cnt = len(ifnone(mem, [])) if nvidia_gpu_cnt: rep.append(["nvidia gpus", nvidia_gpu_cnt]) torch_gpu_cnt = torch.cuda.device_count() if torch_gpu_cnt: rep.append(["torch devices", torch_gpu_cnt]) # information for each gpu for i in range(torch_gpu_cnt): rep.append([f" - gpu{i}", 
(f"{gpu_total_mem[i]}MB | " if gpu_total_mem else "") + torch.cuda.get_device_name(i)]) else: if nvidia_gpu_cnt: rep.append([f"Have {nvidia_gpu_cnt} GPU(s), but torch can't use them (check nvidia driver)", None]) else: rep.append([f"No GPUs available", None]) rep.append(["\n=== Environment ===", None]) rep.append(["platform", platform.platform()]) if platform.system() == 'Linux': distro = try_import('distro') if distro: # full distro info rep.append(["distro", ' '.join(distro.linux_distribution())]) else: opt_mods.append('distro'); # partial distro info rep.append(["distro", platform.uname().version]) rep.append(["conda env", get_env('CONDA_DEFAULT_ENV')]) rep.append(["python", sys.executable]) rep.append(["sys.path", "\n".join(sys.path)]) print("\n\n```text") keylen = max([len(e[0]) for e in rep if e[1] is not None]) for e in rep: print(f"{e[0]:{keylen}}", (f": {e[1]}" if e[1] is not None else "")) if smi: if show_nvidia_smi: print(f"\n{smi}") else: if torch_gpu_cnt: print("no nvidia-smi is found") else: print("no supported gpus found on this system") print("```\n") print("Please make sure to include opening/closing ``` when you paste into forums/github to make the reports appear formatted as code sections.\n") if opt_mods: print("Optional package(s) to enhance the diagnostics can be installed with:") print(f"pip install {' '.join(opt_mods)}") print("Once installed, re-run this utility to get the additional information") #hide show_install(True) ``` ## - Export ``` #hide from nbdev.export import * notebook2script() ```
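Since this module exists precisely for quick testing of the training loop, here is a hedged usage sketch (exact numbers depend on the random seed):

```
# Hedged sketch: train the synthetic learner briefly and check that the RegModel
# parameters move towards the generating values a=2, b=3 used by synth_dbunch.
learn = synth_learner(lr=0.1)
learn.fit(10)
print(learn.model.a.item(), learn.model.b.item())  # should end up close to 2 and 3
```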
# Eager Execution Adapted from: https://www.tensorflow.org/get_started/eager ``` import os import matplotlib.pyplot as plt import tensorflow as tf import tensorflow.contrib.eager as tfe tf.enable_eager_execution() import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt train_dataset_fp = '../data/iris_training.csv' !head -n5 {train_dataset_fp} ``` ## Csv parser ``` def parse_csv(line): example_defaults = [[0.], [0.], [0.], [0.], [0]] parsed_line = tf.decode_csv(line, example_defaults) features = tf.reshape(parsed_line[:-1], shape=(4,)) label = tf.reshape(parsed_line[-1], shape=()) return features, label ``` ## Dataset API ``` train_dataset = tf.data.TextLineDataset(train_dataset_fp) train_dataset = train_dataset.skip(1) train_dataset = train_dataset.map(parse_csv) train_dataset = train_dataset.shuffle(buffer_size=1000) train_dataset = train_dataset.batch(32) train_dataset features, label = tfe.Iterator(train_dataset).next() features label ``` ## Model Note that the model is outputting the logits, not the softmax probabilities. ``` model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)), tf.keras.layers.Dense(10, activation="relu"), tf.keras.layers.Dense(3) ]) model ``` model behaves like a function: ``` model(features) ``` In eager mode we can access the values of the weights directly: ``` for i, v in enumerate(model.variables): print("Weight shape: ", v.shape) print("Weight tensor: ", v) print() ``` ## Loss Loss is sparse categorical cross entropy ``` def loss(model, x, y): y_ = model(x) return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_) loss(model, features, label) ``` ## Gradients In eager mode we can evaluate the gradients ``` def grad(model, inputs, targets): with tfe.GradientTape() as tape: loss_value = loss(model, inputs, targets) return tape.gradient(loss_value, model.variables) grads = grad(model, features, label) for i, g in enumerate(grads): print("Gradient shape: ", g.shape) print("Gradient tensor: ", g) print() ``` ## Optimizer Let's use simple gradient descent ``` optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) ``` ## Training Loop ``` train_loss_results = [] train_accuracy_results = [] num_epochs = 201 # Loop over epochs for epoch in range(num_epochs): # accumulators for mean loss and accuracy epoch_loss_avg = tfe.metrics.Mean() epoch_accuracy = tfe.metrics.Accuracy() # loop on dataset, for each batch: for x, y in tfe.Iterator(train_dataset): # Calculate gradients grads = grad(model, x, y) # Apply gradients to the weights optimizer.apply_gradients(zip(grads, model.variables), global_step=tf.train.get_or_create_global_step()) # accumulate loss epoch_loss_avg(loss(model, x, y)) # calculate predictions y_pred = tf.argmax(model(x), axis=1, output_type=tf.int32) # acccumulate accuracy epoch_accuracy(y_pred, y) # end epoch train_loss_results.append(epoch_loss_avg.result()) train_accuracy_results.append(epoch_accuracy.result()) if epoch % 50 == 0: print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_loss_avg.result(), epoch_accuracy.result())) ``` ## Plot Metrics ``` fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8)) fig.suptitle('Training Metrics') axes[0].set_ylabel("Loss", fontsize=14) axes[0].plot(train_loss_results) axes[1].set_ylabel("Accuracy", fontsize=14) axes[1].set_xlabel("Epoch", fontsize=14) axes[1].plot(train_accuracy_results) plt.show() ``` ## Test ``` test_fp = '../data/iris_test.csv' test_dataset = tf.data.TextLineDataset(test_fp) 
test_dataset = test_dataset.skip(1)             # skip header row
test_dataset = test_dataset.map(parse_csv)      # parse each row with the function created earlier
test_dataset = test_dataset.shuffle(1000)       # randomize
test_dataset = test_dataset.batch(32)           # use the same batch size as the training set

test_accuracy = tfe.metrics.Accuracy()

for (x, y) in tfe.Iterator(test_dataset):
    prediction = tf.argmax(model(x), axis=1, output_type=tf.int32)
    test_accuracy(prediction, y)

print("Test set accuracy: {:.3%}".format(test_accuracy.result()))

class_ids = ["Iris setosa", "Iris versicolor", "Iris virginica"]

predict_dataset = tf.convert_to_tensor([
    [5.1, 3.3, 1.7, 0.5,],
    [5.9, 3.0, 4.2, 1.5,],
    [6.9, 3.1, 5.4, 2.1]
])

predictions = model(predict_dataset)

for i, logits in enumerate(predictions):
    class_idx = tf.argmax(logits).numpy()
    name = class_ids[class_idx]
    print("Example {} prediction: {}".format(i, name))
```
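As noted earlier, the model outputs logits rather than probabilities. If you also want class probabilities for the predictions above, a small hedged addition:

```
# Convert the logits of the final predictions into softmax probabilities.
probabilities = tf.nn.softmax(predictions).numpy()
for i, p in enumerate(probabilities):
    class_idx = np.argmax(p)
    print("Example {}: {} with probability {:.1%}".format(i, class_ids[class_idx], p[class_idx]))
```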
# Circular motion

## A simple one-link manipulator

Consider the simple manipulator below, consisting of a homogeneous rod of mass $m=0.1$kg and length $l=1$m. The rod is rotating about the single joint with a constant angular velocity of $\omega=2$rad/s.

<img src="single-link-circular-motion.svg" alt="One-link manipulator" width="400">

### What is the centripetal acceleration at the endpoint?

```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Parameters. SI units, of course
m = 0.1
l = 1.0
omega = 2.0

# The formula for centripetal acceleration is
a_c = omega**2 * l

print("The centripetal acceleration at the endpoint is %1.2f m/s^2" % a_c)
```

### What is the centripetal acceleration at the center of mass?

```
# %load com-acceleration.py
# The center of mass for a homogeneous rod is of course in the middle of the rod
from IPython.display import display, Math, Latex
display(Latex(r"The centripetal acceleration at the center of mass is %1.2f $m/s^2$" % (omega**2*l/2)))
```

## Derivation of the acceleration for uniform circular motion

Define the x- and y-axis as in the figure below, and take the angle $\theta$ to be the angle to the x-axis.

<img src="single-link-polar-axes.png" alt="One-link manipulator polar coordinates" width="400">

We have defined unit vectors in a polar coordinate system attached to the moving link
\begin{align}
e_r &= \begin{bmatrix}\cos\theta\\\sin\theta\end{bmatrix}\\
e_\theta &= \begin{bmatrix}-\sin\theta\\\cos\theta\end{bmatrix}
\end{align}
With these unit vectors it is easy to express the position $p$ of the endpoint with respect to the origin, which is located at the joint:
$$ p(t) = l e_r(t) = l \begin{bmatrix}\cos\theta\\\sin\theta\end{bmatrix}. $$
In order to find the velocity of the endpoint we take the time-derivative of the position
\begin{align}
v(t) &= \frac{d}{dt} p(t) = \left( \frac{d}{dt} l \right) e_r + l \frac{d}{dt} e_r\\
&= l \frac{d}{dt} \begin{bmatrix}\cos\theta\\\sin\theta\end{bmatrix}\\
&= l \frac{d}{d\theta}\begin{bmatrix}\cos\theta\\\sin\theta\end{bmatrix} \frac{d\theta}{dt}\\
&= l \omega \begin{bmatrix}-\sin\theta\\\cos\theta\end{bmatrix} = l \omega e_\theta.
\end{align}
We see that the *speed*, the magnitude of the velocity, $|v| = l \omega$ is proportional to both the distance from the center of rotation and to the angular velocity. The velocity vector is tangential to the path of the endpoint, and if $\omega$ is positive, then the direction of the velocity is in the positive direction of $e_\theta$.

### Exercise: derive the centripetal acceleration by differentiating the velocity

Also: Discuss the magnitude and direction of the centripetal acceleration.

```
# %load centripetal-acceleration-solution.py
display(Latex("""
The acceleration becomes
\\begin{align}
a(t) &= \\frac{d}{dt} v(t) = \\frac{d}{dt} l \\omega e_\\theta
= \\frac{d}{dt} l \\omega \\begin{bmatrix} -\\sin\\theta \\\\ \\cos\\theta \\end{bmatrix} \\\\
&= \\frac{d}{dt} \\left( l \\omega \\right) e_\\theta + l \\omega \\frac{d}{dt} \\begin{bmatrix} -\\sin\\theta \\\\ \\cos\\theta \\end{bmatrix} \\\\
&= 0 + l \\omega^2 \\begin{bmatrix} -\\cos\\theta \\\\ -\\sin\\theta \\end{bmatrix} = - l \\omega^2 e_r.
\\end{align}
"""))
```
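As a quick numerical sanity check of the derivation (not part of the original exercise), we can differentiate the endpoint position twice with finite differences and compare the magnitude with $l\omega^2$:

```
# Hedged numerical check: finite-difference the position twice and compare |a|
# with the analytical value l*omega**2 = 4 m/s^2.
t = np.linspace(0, 2*np.pi/omega, 10001)
theta = omega*t
p = l*np.array([np.cos(theta), np.sin(theta)])  # endpoint position over one revolution
v = np.gradient(p, t, axis=1)                   # numerical velocity
a = np.gradient(v, t, axis=1)                   # numerical acceleration
a_mag = np.linalg.norm(a, axis=0)
print("numerical |a| = %1.3f m/s^2, analytical l*omega^2 = %1.3f m/s^2"
      % (np.median(a_mag), l*omega**2))
```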
# Fitting XMM-Newton data with the APEC model # The Astrophysical Plasma Emission Code (APEC, [Smith et al. 2001](https://ui.adsabs.harvard.edu/abs/2001ApJ...556L..91S/abstract)) is a state-of-the-art code to model the X-ray emission of optically thin astrophysical plasma in collisional equilibrium. APEC has been widely used in the literature to study the spectra of a wide variety of sources, such as galaxy clusters and groups, supernova remnants, or stellar coronae. The code self-consistently models the continuum (bremsstrahlung) emission as well as the pseudo-continuum and over 10,000 individual emission lines. Astromodels includes a native Python implementation of APEC based on [``pyatomdb``](https://atomdb.readthedocs.io/en/master/). As the model is only available when an installation of ``pyatomdb`` can be found, the user must make sure ``pyatomdb`` is installed and the required atomic data files have been downloaded. ## Setting up pyatomdb ## ``pyatomdb`` is easily installable using pip: $] pip install pyatomdb When running the code for the first time, the user will be requested to download the ATOMDB database locally $] python -c "import pyatomdb" This command will prompt the user to choose a directory to store the atomic data (usually ``$HOME/atomdb``). Once the data have been downloaded, the ATOMDB environment variable must be set to indicate the location of the atomic data $] export ATOMDB=$HOME/atomdb OK, we're all set! ## Loading the APEC model ## Once ``pyatomdb`` is properly installed, it is available as a "Function1D" object ``` %%capture from threeML import * modapec = APEC() ``` The intensity of the various lines in the model is set relative to the Solar abundance. Therefore, one must set the Solar abundance table to predict the line intensity, using the ``init_session`` method of the APEC class. By default, i.e. if the ``init_session`` method is ran with no argument, the code defaults to [Anders & Grevesse (1989)](https://ui.adsabs.harvard.edu/abs/1989GeCoA..53..197A/abstract). ``` modapec.init_session(abund_table='AG89') ``` If, for instance, we wish to initialize the APEC model to use the [Lodders & Palme (2009)](https://www.lpi.usra.edu/meetings/metsoc2009/pdf/5154.pdf) table, we pass modapec.init_session(abund_table='Lodd09') ``` modapec.display() ``` The parameters of the model are set in the following way ``` modapec.kT.value = 3.0 # 3 keV temperature modapec.K.value = 1e-3 # Normalization, proportional to emission measure modapec.redshift.value = 0. # Source redshift modapec.abund.value = 0.3 # The metal abundance of each element is set to 0.3 times the Solar abundance ``` Now let's see how the model depends on the input temperature... 
``` import numpy as np energies = np.logspace(-1., 1.5, 1000) # Set up the energy grid ktgrid = [0.2,0.5,1.0,2.0,3.0,5.0,7.0,9.0,12.0,15.0] # Temperature grid import matplotlib.pyplot as plt import matplotlib.colors as colors import matplotlib.cm as cmx %matplotlib inline plt.clf() fig=plt.figure(figsize=(13,10)) ax = fig.add_axes([0.12, 0.12, 0.85, 0.85]) for item in (ax.get_xticklabels() + ax.get_yticklabels()): item.set_fontsize(18) nspec = len(ktgrid) values = range(nspec) cm = plt.get_cmap('gist_rainbow') cNorm = colors.Normalize(vmin=0, vmax=values[-1]) scalarMap = cmx.ScalarMappable(norm=cNorm, cmap=cm) ccc=[] for i in range(nspec): ccc.append(scalarMap.to_rgba(i)) for i in range(nspec): modapec.kT.value = ktgrid[i] plt.plot(energies,modapec(energies),color=ccc[i],label='kT=%g keV'%(ktgrid[i])) plt.xscale('log') plt.yscale('log') plt.xlabel('Energy [keV]',fontsize=28) plt.ylabel('Photon Flux',fontsize=28) plt.axis([0.1,15.,1e-8,1.0]) plt.title('Z=0.3 $Z_\odot$ and varying temperature',fontsize=28) plt.legend(fontsize=22, ncol=2) ``` Now let's see how the model depends on metallicity for a temperature of 1 keV... ``` Zgrid = [0., 0.1, 0.3, 0.5, 1., 2.] # Metallicities wrt Solar modapec.kT.value = 1.0 plt.clf() fig=plt.figure(figsize=(13,10)) ax = fig.add_axes([0.12, 0.12, 0.85, 0.85]) for item in (ax.get_xticklabels() + ax.get_yticklabels()): item.set_fontsize(18) nspec = len(Zgrid) values = range(nspec) cm = plt.get_cmap('gist_rainbow') cNorm = colors.Normalize(vmin=0, vmax=values[-1]) scalarMap = cmx.ScalarMappable(norm=cNorm, cmap=cm) ccc=[] for i in range(nspec): ccc.append(scalarMap.to_rgba(i)) for i in range(nspec): modapec.abund.value = Zgrid[i] plt.plot(energies,modapec(energies),color=ccc[i],label='$Z/Z_\odot$=%g'%(Zgrid[i])) plt.xscale('log') plt.yscale('log') plt.xlabel('Energy [keV]',fontsize=28) plt.ylabel('Photon Flux',fontsize=28) plt.axis([0.1,15.,1e-7,0.1]) plt.title('kT=1 keV and varying metallicity',fontsize=28) plt.legend(fontsize=22) ``` ## Comparison with XSPEC ## To test the implementation, let's simulate a spectrum with XSPEC and fit it within 3ML using the APEC model. We simulate an _XMM-Newton_/EPIC-pn spectrum with a temperature of 5 keV, a metallicity of 0.3 Solar, a normalization of unity and a redshift of 0.1. The simulated model is absorbed by photo-electric absorption (PhAbs model) with a column density of $10^{21}$ cm$^{-2}$. We start by declaring the model, ``` phabs = PhAbs() phabs.NH.value = 0.1 # A value of 1 corresponds to 1e22 cm-2 phabs.NH.fix = True # NH is fixed phabs.init_xsect(abund_table='AG89') modapec.kT = 3.0 # Initial values modapec.K = 0.1 modapec.redshift = 0.1 modapec.abund = 0.3 mod_comb = phabs * modapec ``` Now we load the simulated spectrum in 3ML... ``` xmm_pha = get_path_of_data_file("datasets/xmm/pnS004-A2443_reg2.fak") xmm_rmf = get_path_of_data_file("datasets/xmm/pnS004-A2443_reg2.rmf") xmm_arf = get_path_of_data_file("datasets/xmm/pnS004-A2443_reg2.arf") ogip = OGIPLike("ogip", observation=xmm_pha, response=xmm_rmf, arf_file=xmm_arf) pts = PointSource('mysource',0,0,spectral_shape=mod_comb) ``` Let's have a look at the loaded spectrum... ``` fig = ogip.view_count_spectrum() fig.set_size_inches(13,10) ax = fig.get_axes()[0] ax.set_xlim(left=0.3,right=14.) 
ax.set_xlabel('Energy [keV]',fontsize=28) ax.set_ylabel('Rate [counts s$^{-1}$ keV$^{-1}$]',fontsize=28) ``` Here we set up the likelihood and fit the data ``` ogip.remove_rebinning() ogip.set_active_measurements('0.5-10.') ogip.rebin_on_source(20) model = Model(pts) jl = JointLikelihood(model,DataList(ogip)) result = jl.fit() ``` The fitted values are within 1% of the input ones from XSPEC, showing that the native 3ML implementation is accurate. We can visualize the fit in the following way ``` fig = display_spectrum_model_counts(jl,data_color='blue',model_color='red', min_rate=5e-4) fig.set_size_inches(13,10) ```
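As an additional hedged cross-check (not part of the original XSPEC comparison), the parameters of `mod_comb` hold the best-fit values after the fit, so we can evaluate the absorbed model on a fine grid and inspect it directly:

```
# Hedged sketch: evaluate the absorbed best-fit model over the fitted 0.5-10 keV band.
energies = np.logspace(np.log10(0.5), 1.0, 500)
fig = plt.figure(figsize=(8, 6))
plt.loglog(energies, mod_comb(energies))
plt.xlabel('Energy [keV]')
plt.ylabel('Photon Flux')
plt.title('Best-fit absorbed APEC model')
```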
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import arviz as az import seaborn as sns %matplotlib inline az.style.use("arviz-darkgrid") from jax import numpy as jnp from jax import lax from jax.random import PRNGKey import numpyro from numpyro.infer import SVI, Predictive, ELBO from numpyro.optim import Adam from numpyro.infer.autoguide import AutoLaplaceApproximation import numpyro.diagnostics as diag import numpyro.distributions as dist from causalgraphicalmodels import CausalGraphicalModel as CGM from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LinearRegression as OLS url = r"https://raw.githubusercontent.com/fehiepsi/rethinking-numpyro/master/data/WaffleDivorce.csv" A = 'MedianAgeMarriage' D = 'Divorce' M = 'Marriage' data = pd.read_csv(url, sep=';')[[M,A,D]] data.head() scalers = dict() for col in [A,M,D]: data[col + "_origin"] = data[col] scalers[col] = StandardScaler(copy=False).fit(data[[col]]) data[col] = scalers[col].transform(data[[col + "_origin"]]) sns.pairplot(data[[M,A,D]], kind='reg'); ``` # 5H1: is data consistent with M->A->D ``` def model(M, A=None, D=None): # M->A aA = numpyro.sample('aA', dist.Normal(0,0.2)) bAM = numpyro.sample('bAM', dist.Normal(0, 0.5)) muA = aA + bAM*M sigmaA = numpyro.sample('sigmaA', dist.Exponential(1)) numpyro.deterministic('muA', muA) A = numpyro.sample('A', dist.Normal(muA, sigmaA), obs=A) # A-> D <- M aD = numpyro.sample('aD', dist.Normal(0,0.2)) bA = numpyro.sample('bA', dist.Normal(0, 0.5)) bM = numpyro.sample('bM', dist.Normal(0, 0.5)) muD = aD + bA*A +bM*M sigmaD = numpyro.sample('sigmaD', dist.Exponential(1)) numpyro.deterministic('muD', muD) numpyro.sample('D', dist.Normal(muD, sigmaD), obs=D) guide = AutoLaplaceApproximation(model) svi = SVI(model, guide, Adam(1), ELBO(), M=data[M].values, A=data[A].values, D=data[D].values) state, loss = lax.scan(lambda x, i: svi.update(x), svi.init(PRNGKey(1)), np.zeros(1000)) param = svi.get_params(state) post = guide.sample_posterior(PRNGKey(1), param, (1000,)) pred = Predictive(model, post, return_sites=['muA','A','muD','D']) M_seq = np.linspace(data[M].min(), data[M].max(), 20) A_seq = np.zeros(20) post.update(pred(PRNGKey(2), M=M_seq, A=A_seq)) muD_mean = np.mean(post['muD'], 0) muD_PI = np.percentile(post['muD'], q=(5.5, 94.5), axis=0) D_PI = np.percentile(post['D'], q=(5.5, 94.5), axis=0) az.plot_pair(data[[M, D]].to_dict(orient="list")) plt.plot(M_seq, muD_mean, "b") plt.fill_between(M_seq, muD_PI[0], muD_PI[1], color="b", alpha=0.4) plt.fill_between(M_seq, D_PI[0], D_PI[1], color="k", alpha=0.2) plt.title('Counterfactual effect of M on D with A=0\nM->A->D ; M->D'); ``` # => M->D is spurious. So M->A->D is true # 5H2: fit model with M->A->D. 
compute counterfactual effect with half M ``` def model2(M, A=None, D=None): # M->A aA = numpyro.sample('aA', dist.Normal(0,0.2)) bM = numpyro.sample('bM', dist.Normal(0, 0.5)) muA = aA + bM*M sigmaA = numpyro.sample('sigmaA', dist.Exponential(1)) numpyro.deterministic('muA', muA) A = numpyro.sample('A', dist.Normal(muA, sigmaA), obs=A) # A-> D aD = numpyro.sample('aD', dist.Normal(0,0.2)) bA = numpyro.sample('bA', dist.Normal(0, 0.5)) muD = aD + bA*A sigmaD = numpyro.sample('sigmaD', dist.Exponential(1)) numpyro.deterministic('muD', muD) numpyro.sample('D', dist.Normal(muD, sigmaD), obs=D) guide2 = AutoLaplaceApproximation(model2) svi2 = SVI(model2, guide2, Adam(1), ELBO(), M=data[M].values, A=data[A].values, D=data[D].values) state2, loss2 = lax.scan(lambda x, i: svi2.update(x), svi2.init(PRNGKey(1)), np.zeros(1000)) param2 = svi2.get_params(state2) post2 = guide2.sample_posterior(PRNGKey(1), param2, (1000,)) pred2 = Predictive(model2, post2, return_sites=['muD','D']) M_seq2 = np.linspace(data[M+'_origin'].min(), data[M+'_origin'].max(), 20) M_seq2 = np.ravel(scalers[M].transform(M_seq2.reshape(-1,1))) post2.update(pred2(PRNGKey(2), M=M_seq2)) muD_mean2 = np.mean(post2['muD'], 0) muD_PI2 = np.percentile(post2['muD'], q=(5.5, 94.5), axis=0) D_PI2 = np.percentile(post2['D'], q=(5.5, 94.5), axis=0) az.plot_pair(data[[M,D]].to_dict(orient="list")) plt.plot(M_seq2, muD_mean2, "b") plt.fill_between(M_seq2, muD_PI2[0], muD_PI2[1], color="b", alpha=0.4) plt.fill_between(M_seq2, D_PI2[0], D_PI2[1], color="k", alpha=0.2) plt.title('Total counterfactual effect from M->D'); data[M+'_half'] = scalers[M].transform(data[[M+"_origin"]]/2) post2 = guide2.sample_posterior(PRNGKey(1), param2, (1000,)) pred2 = Predictive(model2, post2, return_sites=['D']) np.mean( pred2(PRNGKey(2), M=data[M+"_half"].values)['D'] - \ pred2(PRNGKey(2), M=data[M].values)['D'], axis=0) f = lambda x: scalers[D].inverse_transform(np.array(x)) np.mean( f(pred2(PRNGKey(2), M=data[M+"_half"].values)['D']) - \ f(pred2(PRNGKey(2), M=data[M].values)['D']), axis=0) scalers[D].scale_ ``` # => By halving the `M` rate, `D` rate is reduce about 1 SD # 5H3: Milk example: M->N->K, M->K. 
compute counterfactual effect doubling M ``` url2 = r"https://raw.githubusercontent.com/fehiepsi/rethinking-numpyro/master/data/milk.csv" M2 = 'mass' N = 'neocortex.perc' K = 'kcal.per.g' data2 = pd.read_csv(url2, sep=';')[[M2,N,K]] data2.dropna(inplace=True) data2.head() sns.pairplot(data2[[M2,N,K]], kind='reg'); scalers2 = dict() for col in [M2,N,K]: data2[col + "_origin"] = data2[col] scalers2[col] = StandardScaler(copy=False).fit(data2[[col]]) data2[col] = scalers2[col].transform(data2[[col + "_origin"]]) def model3(M, N=None, K=None): # M->N aN = numpyro.sample('aN', dist.Normal(0, 0.2)) bM = numpyro.sample('bM', dist.Normal(0, 0.5)) muN = aN + bM*M sigmaN = numpyro.sample('sigmaN', dist.Exponential(1)) numpyro.deterministic('muN', muN) N = numpyro.sample('N', dist.Normal(muN, sigmaN), obs=N) # M,N -> K aK = numpyro.sample('aK', dist.Normal(0,0.2)) bM2 = numpyro.sample('bM2', dist.Normal(0, 0.5)) bN = numpyro.sample('bN', dist.Normal(0, 0.5)) muK = aK + bM2*M + bN*N sigmaK = numpyro.sample('sigmaK', dist.Exponential(1)) numpyro.deterministic('muK', muK) numpyro.sample('K', dist.Normal(muK, sigmaK), obs=K) guide3 = AutoLaplaceApproximation(model3) svi3 = SVI(model3, guide3, Adam(1), ELBO(), M=data2[M2].values, N=data2[N].values, K=data2[K].values) init_state = svi3.init(PRNGKey(2)) state3, loss3 = lax.scan(lambda x, i: svi3.update(x), init_state, np.zeros(1000)) param3 = svi3.get_params(state3) post3 = guide3.sample_posterior(PRNGKey(1), param3, (1000,)) pred3 = Predictive(model3, post3, return_sites=['muN','N','muK','K']) M_seq3 = np.linspace(data2[M2+'_origin'].min(), data2[M2+'_origin'].max(), 20) M_seq3 = np.ravel(scalers2[M2].transform(M_seq3.reshape(-1,1))) post3.update(pred3(PRNGKey(2), M=M_seq3)) muK_mean = np.mean(post3['muK'], 0) muK_PI = np.percentile(post3['muK'], q=(5.5, 94.5), axis=0) K_PI = np.percentile(post3['K'], q=(5.5, 94.5), axis=0) plt.plot(M_seq3, muK_mean, "b") plt.fill_between(M_seq3, muK_PI[0], muK_PI[1], color="b", alpha=0.4) plt.fill_between(M_seq3, K_PI[0], K_PI[1], color="k", alpha=0.2) plt.title('Total Counterfactual effect of M on K\nM->N->K ; M->K'); post3 = guide3.sample_posterior(PRNGKey(1), param3, (1000,)) pred3 = Predictive(model3, post3, return_sites=['muN','N','muK','K']) M_seq3 = np.linspace(data2[M2+'_origin'].min(), data2[M2+'_origin'].max(), 20) M_seq3 = np.ravel(scalers2[M2].transform(M_seq3.reshape(-1,1))) N_seq3 = np.zeros(20) post3.update(pred3(PRNGKey(2), M=M_seq3, N=N_seq3)) muK_mean = np.mean(post3['muK'], 0) muK_PI = np.percentile(post3['muK'], q=(5.5, 94.5), axis=0) K_PI = np.percentile(post3['K'], q=(5.5, 94.5), axis=0) plt.plot(M_seq3, muK_mean, "b") plt.fill_between(M_seq3, muK_PI[0], muK_PI[1], color="b", alpha=0.4) plt.fill_between(M_seq3, K_PI[0], K_PI[1], color="k", alpha=0.2) plt.title('Counterfactual effect of M on K with N=0\nM->N->K ; M->K'); data2[M2+'_x2'] = scalers2[M2].transform(data2[[M2+"_origin"]]*2) post3 = guide3.sample_posterior(PRNGKey(1), param3, (1000,)) pred3 = Predictive(model3, post3, return_sites=['K']) np.mean( pred3(PRNGKey(1), M=data2[M2+"_x2"].values)['K'] - \ pred3(PRNGKey(1), M=data2[M2].values)['K'], axis=0) ``` # => By doubling the mass, kcal.per.g is reduced by the amount of about 0.04 - 0.2 SD.
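Since this effect is quoted in standard deviations of the standardized `K` variable, it can help to translate it back into the original kcal.per.g units using the `StandardScaler` fitted earlier (`scalers2[K]`). This is a minimal sketch; the effect size used is just an illustrative value from the quoted 0.04-0.2 SD range, not a computed result.

```
# A minimal sketch: convert an effect expressed in standard deviations of K
# back into the original kcal.per.g units, using the scaler fitted above.
effect_in_sd = -0.1                    # illustrative value from the ~0.04-0.2 SD range
one_sd_kcal = scalers2[K].scale_[0]    # one SD of kcal.per.g in original units
print(f"{effect_in_sd:+.2f} SD of K is roughly {effect_in_sd * one_sd_kcal:+.3f} kcal.per.g")
```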
## A collection of bode plot functions

HTML output built with: `jupyter nbconvert --to html bodes.ipynb`

Biquad coefficient formulas from the Audio EQ Cookbook: https://webaudio.github.io/Audio-EQ-Cookbook/Audio-EQ-Cookbook.txt

```
from math import *
import cmath

import matplotlib.pyplot as plt


def db_from_lin(gain):
    return log(gain, 10.0) * 20.0


def lin_from_db(decibels):
    return pow(10.0, decibels * 0.05)
```

### First order bode low pass

\begin{align*}
H(j\omega) = \frac{\omega_c}{j\omega + \omega_c}
\end{align*}

```
def first_order_bode_low_pass(f_hz, f_cutoff_hz):
    jw = pi * 2.0 * f_hz * 1.0j
    wc = f_cutoff_hz * pi * 2.0
    h = wc / (jw + wc)
    return abs(h)
```

### First order low pass filter with cutoff at 2kHz

```
cutoff_hz = 2000

amplitude = []
x = []
for i in range(20, 20000, 10):
    n = first_order_bode_low_pass(i, cutoff_hz)
    amplitude.append(db_from_lin(n.real))
    x.append(i)

plt.xlim([20, 20000])
plt.ylim([-36, 12])
plt.semilogx(x, amplitude)
```

### Second order bode low pass

\begin{align*}
H(j\omega) = \frac{\omega_c^2}{(j\omega)^2 + j\omega(\omega_c / Q) + \omega_c^2}
\end{align*}

```
def second_order_bode_low_pass(f_hz, f_cutoff_hz, q_factor):
    jw = pi * 2.0 * f_hz * 1.0j
    wc = pi * 2.0 * f_cutoff_hz
    h = wc ** 2 / (jw ** 2 + jw * (wc / q_factor) + wc ** 2)
    return abs(h)
```

Second order low pass filter with cutoff at 1kHz, Q of 2

```
data = []
x = []
for i in range(20, 20000, 10):
    n = second_order_bode_low_pass(i, 1000, 2)
    data.append(db_from_lin(n.real))
    x.append(i)

plt.xlim([20, 20000])
plt.ylim([-36, 12])
plt.semilogx(x, data)
```

### First order bode high pass

\begin{align*}
H(j\omega) = \frac{j\omega}{j\omega + \omega_c}
\end{align*}

```
def first_order_bode_high_pass(f_hz, f_cutoff_hz):
    jw = pi * 2.0 * f_hz * 1.0j
    wc = f_cutoff_hz * pi * 2.0
    h = jw / (jw + wc)
    return abs(h)
```

First order high pass filter with cutoff at 1kHz

```
data = []
x = []
for i in range(20, 20000, 10):
    n = first_order_bode_high_pass(i, 1000)
    data.append(db_from_lin(n.real))
    x.append(i)

plt.xlim([20, 20000])
plt.ylim([-36, 12])
plt.semilogx(x, data)
```

### Second order bode high pass

\begin{align*}
H(j\omega) = \frac{(j\omega)^2}{(j\omega)^2 + j\omega(\omega_c / Q) + \omega_c^2}
\end{align*}

```
def second_order_bode_high_pass(f_hz, f_cutoff_hz, q_factor):
    jw = pi * 2.0 * f_hz * 1.0j
    wc = pi * 2.0 * f_cutoff_hz
    h = jw ** 2 / (jw ** 2 + jw * (wc / q_factor) + wc ** 2)
    return abs(h)
```

Second order high pass filter with cutoff at 1kHz, Q of 2

```
data = []
x = []
for i in range(20, 20000, 10):
    n = second_order_bode_high_pass(i, 1000, 2)
    data.append(db_from_lin(n.real))
    x.append(i)

plt.xlim([20, 20000])
plt.ylim([-36, 12])
plt.semilogx(x, data)
```

### General biquad bode plot

```
def biquad_bode(f_hz, biquad_coefs, sampleRate):
    # Normalize the coefficients by a0, then evaluate H(z) on the unit circle;
    # z_inv below is z^{-1} = exp(-j * 2*pi*f / fs).
    a0 = biquad_coefs[3]
    b0, b1, b2, a0, a1, a2 = [x / a0 for x in biquad_coefs]
    z_inv = cmath.exp(-(pi * 2.0) * f_hz * 1.0j / sampleRate)
    numerator = (b0 * 1) + (b1 * z_inv) + (b2 * z_inv ** 2)
    denominator = (a1 * z_inv) + (a2 * z_inv ** 2) + 1.0
    return abs(numerator / denominator)
```

### Peaking biquad

```
def biquad_peaking(f_cutoff_hz, q_value, gain, sampleRate):
    a = sqrt(gain)
    omega = 2.0 * pi * f_cutoff_hz / sampleRate
    alpha = sin(omega) / (2.0 * q_value)
    b0 = 1 + alpha * a
    b1 = -2 * cos(omega)
    b2 = 1 - alpha * a
    a0 = 1 + alpha / a
    a1 = b1
    a2 = 1 - alpha / a
    return b0, b1, b2, a0, a1, a2
```

Peaking biquad with centre frequency 1kHz, Q of 2, at -9.0dB

```
amplitude = []
x = []
for i in range(20, 20000, 10):
    biquad_coefs = biquad_peaking(1000, 2, lin_from_db(-9.0), 96000)
    n = biquad_bode(i, biquad_coefs, 96000)
    amplitude.append(db_from_lin(n.real))
    x.append(i)

plt.xlim([20, 20000])
plt.ylim([-36, 12])
plt.semilogx(x, amplitude)
```

### Biquad low shelf

```
def biquad_low_shelf(f_cutoff_hz, q_value, gain, sampleRate):
    omega = 2.0 * pi * f_cutoff_hz / sampleRate
    a = sqrt(gain)
    alpha = sin(omega) / (2.0 * q_value)
    omega_c = cos(omega)
    a_plus1 = a + 1
    a_minus1 = a - 1
    beta = 2 * sqrt(a) * alpha
    b0 = a * (a_plus1 - a_minus1 * omega_c + beta)
    b1 = 2 * a * (a_minus1 - a_plus1 * omega_c)
    b2 = a * (a_plus1 - a_minus1 * omega_c - beta)
    a0 = a_plus1 + a_minus1 * omega_c + beta
    a1 = -2 * (a_minus1 + a_plus1 * omega_c)
    a2 = a_plus1 + a_minus1 * omega_c - beta
    return b0, b1, b2, a0, a1, a2
```

Low shelf biquad with cutoff at 1kHz, Q of 0.7, at -9.0dB

```
amplitude = []
x = []
for i in range(20, 20000, 10):
    biquad_coefs = biquad_low_shelf(1000, .7, lin_from_db(-9.0), 96000)
    n = biquad_bode(i, biquad_coefs, 96000)
    amplitude.append(db_from_lin(n.real))
    x.append(i)

plt.xlim([20, 20000])
plt.ylim([-36, 12])
plt.semilogx(x, amplitude)
```

### Biquad high shelf

```
def biquad_high_shelf(f_cutoff_hz, q_value, gain, sampleRate):
    omega = 2.0 * pi * f_cutoff_hz / sampleRate
    a = sqrt(gain)
    alpha = sin(omega) / (2.0 * q_value)
    omega_c = cos(omega)
    a_plus1 = a + 1
    a_minus1 = a - 1
    beta = 2 * sqrt(a) * alpha
    b0 = a * (a_plus1 + a_minus1 * omega_c + beta)
    b1 = -2 * a * (a_minus1 + a_plus1 * omega_c)
    b2 = a * (a_plus1 + a_minus1 * omega_c - beta)
    a0 = a_plus1 - a_minus1 * omega_c + beta
    a1 = 2 * (a_minus1 - a_plus1 * omega_c)
    a2 = a_plus1 - a_minus1 * omega_c - beta
    return b0, b1, b2, a0, a1, a2
```

High shelf biquad with cutoff at 1kHz, Q of 0.7, at -9.0dB

```
amplitude = []
x = []
for i in range(20, 20000, 10):
    biquad_coefs = biquad_high_shelf(1000, .7, lin_from_db(-9.0), 96000)
    n = biquad_bode(i, biquad_coefs, 96000)
    amplitude.append(db_from_lin(n.real))
    x.append(i)

plt.xlim([20, 20000])
plt.ylim([-36, 12])
plt.semilogx(x, amplitude)
```

### Combining biquads

```
amplitude = []
x = []
for i in range(20, 20000, 10):
    # Cascaded filters multiply in the linear magnitude domain.
    n = 1
    biquad_coefs = biquad_high_shelf(4000, .7, lin_from_db(3.0), 96000)
    n *= biquad_bode(i, biquad_coefs, 96000)
    biquad_coefs = biquad_low_shelf(500, .7, lin_from_db(-9.0), 96000)
    n *= biquad_bode(i, biquad_coefs, 96000)
    biquad_coefs = biquad_peaking(1000, 2, lin_from_db(-9.0), 96000)
    n *= biquad_bode(i, biquad_coefs, 96000)
    n *= second_order_bode_high_pass(i, 100, 1)
    n *= second_order_bode_low_pass(i, 15000, 1)
    amplitude.append(db_from_lin(n.real))
    x.append(i)

plt.xlim([20, 20000])
plt.ylim([-36, 12])
plt.semilogx(x, amplitude)
```
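Each of the plots above repeats the same sweep-and-plot boilerplate. As a tidier alternative (a sketch, not part of the original notebook; the helper name `plot_bode_magnitude` is mine), the frequency sweep can be wrapped in a small function that accepts any of the magnitude-response callables defined above:

```
# A small helper (not in the original notebook) wrapping the repeated
# sweep-and-plot boilerplate. `response_fn` is any callable that maps a
# frequency in Hz to a linear magnitude.
def plot_bode_magnitude(response_fn, f_start=20, f_stop=20000, f_step=10):
    freqs = list(range(f_start, f_stop, f_step))
    magnitude_db = [db_from_lin(response_fn(f)) for f in freqs]
    plt.xlim([f_start, f_stop])
    plt.ylim([-36, 12])
    plt.semilogx(freqs, magnitude_db)

# Example usage with the functions defined above:
plot_bode_magnitude(lambda f: second_order_bode_low_pass(f, 1000, 2))
plot_bode_magnitude(lambda f: biquad_bode(f, biquad_peaking(1000, 2, lin_from_db(-9.0), 96000), 96000))
```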
# [Day 4: Giant Squid](https://adventofcode.com/2021/day/4) ``` import dataclasses as dc ``` ## Part 1 ``` example_data = [ "7,4,9,5,11,17,23,2,0,14,21,24,10,16,13,6,15,25,12,22,18,20,8,19,3,26,1", "", "22 13 17 11 0", " 8 2 23 4 24", "21 9 14 16 7", " 6 10 3 18 5", " 1 12 20 15 19", "", " 3 15 0 2 22", " 9 18 13 17 5", "19 8 7 25 23", "20 11 10 24 4", "14 21 16 12 6", "", "14 21 17 24 4", "10 16 15 9 19", "18 8 23 26 20", "22 11 13 6 5", " 2 0 12 3 7", ] @dc.dataclass class Number: value: int marked: int = 0 @dc.dataclass class Board: board: list[Number] = dc.field(default_factory=list) def add_row(self, row): self.board.append([Number(int(v)) for v in row]) def is_valid(self): return len(self.board) > 0 def mark(self, number): for row in self.board: for col in row: if col.value == number: col.marked = 1 def is_winner(self): cols_marked_sums = [0] * len(self.board[0]) for row in self.board: # horizontal rows_marked = sum([col.marked for col in row]) if rows_marked == len(row): return True # vertical for i, col in enumerate(row): cols_marked_sums[i] += col.marked # no winning row, check for winning column for cols_marked_sum in cols_marked_sums: if cols_marked_sum == len(self.board): return True return False def score(self, number): sum_unmarked = 0 for row in self.board: for col in row: if not col.marked: sum_unmarked += col.value return sum_unmarked * number class Game1: def __init__(self, game_data): self.numbers = [int(v) for v in game_data[0].split(",")] self.boards = [] current_board = Board() for line in game_data[1:]: line = line.strip().split() if line: current_board.add_row(line) else: if current_board.is_valid(): self.boards.append(current_board) current_board = Board() if current_board.is_valid(): self.boards.append(current_board) def play_first_win(self): for number in self.numbers: for board in self.boards: board.mark(number) if board.is_winner(): return board.score(number) return None game = Game1(example_data) print(f"Check part 1 numbers: {len(game.numbers) == 27}") print(f"Check part 1 boards: {len(game.boards) == 3}") print(f"Check part 1: {game.play_first_win() == 4512}") with open(r"..\data\Day 04 input.txt", "r") as fh_in: game_data = fh_in.readlines() game = Game1(game_data) print(f"Answer part 1 check numbers: {len(game.numbers) == 100}") print(f"Answer part 1 check boards: {len(game.boards) == 100}") print(f"Answer part 1: {game.play_first_win()}") ``` ## Part 2 ``` class Game2(Game1): def play_last_win(self): for number in self.numbers: for i in range(len(self.boards) - 1, -1, -1): self.boards[i].mark(number) if self.boards[i].is_winner(): board_score = self.boards[i].score(number) del self.boards[i] if not self.boards: # no more boards: last winner return board_score return None game = Game2(example_data) print(f"Check part 2: {game.play_last_win() == 1924}") game = Game2(game_data) print(f"Answer part 2: {game.play_last_win()}") ```
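Since the `Board` class does most of the work here, a tiny standalone illustration (a hypothetical toy board, not part of the puzzle input or solution) can make its API clearer: mark a complete row on a 2x2 board, then check the winning condition and the score.

```
# A minimal illustration of the Board API defined above (toy 2x2 board).
demo = Board()
demo.add_row(["1", "2"])
demo.add_row(["3", "4"])
for drawn in (1, 2):
    demo.mark(drawn)
print(demo.is_winner())  # True: the first row is fully marked
print(demo.score(2))     # unmarked values (3 + 4) * last number drawn (2) = 14
```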
# Clustering

```
import os
import glob

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

sns.set(rc={'figure.figsize': (16, 9)})
%matplotlib inline
```

## Local

```
path = 'dataset/*.csv'
```

## Colab

```
from google.colab import drive

# Mount the root of the Google Drive under the name 'drive'
drive.mount('/content/drive')

path = '/content/drive/My Drive/wids-taipei/brazilian-ecommerce/'

# Change the current working directory to the dataset location,
# then glob the CSV files relative to it
os.chdir(path)
path = '*.csv'
```

## Exploring the dataset

### About the dataset

This dataset has information about the customer and its location. Use it to identify unique customers in the orders dataset and to find the orders' delivery location. In this system each order is assigned to a unique `customer_id`. This means that the same customer will get different ids for different orders. The purpose of having a `customer_unique_id` in the dataset is to allow you to identify customers that made repurchases at the store. Otherwise you would find that each order had a different customer associated with it.

- customer_id: key to the orders dataset. Each order has a unique customer_id.
- customer_unique_id: unique identifier of a customer.
- customer_zip_code_prefix: first five digits of customer zip code
- customer_city: customer city name
- customer_state: customer state

### Data Schema

The data is divided into multiple datasets for better understanding and organization. Please refer to the following data schema when working with it:

![Data Schema](https://i.imgur.com/HRhd2Y0.png)

### Read data from CSV file with pandas

```
filenames = glob.glob(path)

pd_list = {}
for filename in filenames:
    name = filename.split("/")[-1].split(".")[0]
    pd_list[name] = pd.read_csv(filename)

for key, value in pd_list.items():
    print(key)

cates = pd_list['product_category_name_translation'].set_index('product_category_name')
product_name = pd_list['olist_products_dataset'].set_index('product_id')
c_to_o = pd_list['olist_orders_dataset'].set_index('customer_id')["order_id"]


def categorized(series):
    return [product_name.loc[i]["product_category_name"] for i in series]


def concat(series):
    try:
        return [j for i in series for j in p.loc[c_to_o.loc[i]]["cate"]]
    except:
        return []


# Note: the customer_id 'c306eca42d32507b970739b5b6a5a33a' causes a failed
# lookup, hence the try/except fallback to an empty list in concat() above.
order = pd.DataFrame(pd_list['olist_order_items_dataset'][["order_id", "product_id"]])

p = pd.DataFrame(pd_list['olist_order_items_dataset'].groupby('order_id')["product_id"].apply(list))
p["cate"] = pd.DataFrame(p["product_id"].apply(categorized))
# p.head(50)

customer = pd.DataFrame(pd_list['olist_customers_dataset'][["customer_unique_id", "customer_id"]])

b = pd.DataFrame(pd_list['olist_customers_dataset'].groupby('customer_unique_id')["customer_id"].apply(list))
b.head()

b["cates"] = b["customer_id"].apply(concat)
# b["cates"].head()

b["cates"].head()
```
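Since the dataset description above hinges on the difference between `customer_id` (one per order) and `customer_unique_id` (one per person), a quick sanity check along the following lines can make that relationship concrete. This is a sketch, assuming `pd_list` has been populated as above:

```
# A minimal sketch: confirm that customer_id is order-level while
# customer_unique_id groups repeat purchasers.
customers = pd_list['olist_customers_dataset']

n_ids = customers['customer_id'].nunique()
n_unique = customers['customer_unique_id'].nunique()
repeaters = (customers.groupby('customer_unique_id')['customer_id']
                      .nunique() > 1).sum()

print(f"{n_ids} customer_ids map to {n_unique} unique customers; "
      f"{repeaters} customers appear in more than one order")
```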
``` from IPython.core.display import HTML def css_styling(): styles = open("./styles/custom.css", "r").read() return HTML(styles) css_styling() ``` # Introduction to Programming ### Who am I? - Name: __Hemma__ (名) __Philamore__ (姓) - From: __England__ - Lab: __Matsuno mechatronics lab__, Kyoto University, Katsura Campus - Research: __Bio-inspired, energy autonomous robots__ - Contact: [email protected]__ <img src="img/row_bot.jpg" alt="Drawing" style="width: 300px;"/> Today is an introductory seminar. __Part 1:__ Course Overview - What topics the course will cover. - Activities we will be doing. - How you will be assessed. __Part 2:__ Practical Introduction - Hands-on Practice - How to install software at home (and on-campus) ## Why study programming? - Increased use of computing in everyday life. - A tool you can use for the other subjects you study. - A growing sector of the jobs market. - Coding in jobs not traditionally related to computing. - It's fun! ## Why study Python? - Free and open source - Easy to learn - "High level" - Community - Increasingly used in industry ## Course Goal To develop: - A good standalone programming toolkit. - Skills to improve the quality of your work in other subjects. - A fundamental base from which to start developing further as a programmer. ## Course Entry Level - Beginner, no prior programming knowledge. - Fundamental engineering background knowledge (1st year undergraduate level). ## Course Structure __Part 1: Fundamentals of programming:__ <br>An introduction to the core functions and features of the Python language. <br>A focus on the use of Python for solving engineering-related problems. (~10 weeks) - Data types and basic operations e.g. arithmetic. - Control Flow - Data Structures - Functions (user-defined and built-in) - Arrays and numerical computation - Visually representing data - Inputs and outputs - Testing __Part 2: Applications of programming:__ <br>Practising and demonstrating the use of the skills you have learnt. - Writing your own programs to express and solve engineering problems. - Incorporating examples from your other subjects where possible. - Structured examples to combine and reinforce the skills you have learnt. - Coursework assessment. __Version Control and Testing:__ - Code management skills. - Essential techniques used by professional programmers. - Storing, documenting, and sharing your code with others. - Improve the quality of your work. ## Lesson format In each class we will use a variety of learning methods: __Interactive Textbook:__ <br>In a moment you are going to download the interactive textbook. <br>First half of the course notes (the second will be added soon). <br>A chapter for each seminar. <br>Each chapter has several examples for you to complete in class. __Slides:__ <br>New material will be explained on slides. <br>Code will be demonstrated using slides. <br>We will complete examples together... __Whiteboards:__ <br>In-class feedback. <br>Sharing answers with the group. __Review Exercises:__ <br>At the end of each chapter there are exercises for you to complete independently. <br>Complete unfinished questions for homework. <br>For me to check your progress. <br>Answers will be used in subsequent classes. I will release example solutions weekly. <br>Please use these to check your work. <br>Review exercises are not assessed but are good preparation for the exam. <br>I strongly advise you to complete them. ## Assessment __50% Coursework__ - You will choose a problem from one of your other courses.
- You will express and solve the problem by writing a computer program. - There will be two lessons (after week 10) assigned to this task. - Remaining work must be completed independently for homework. - A sample problem will be available for you to complete if you cannot find a suitable problem to use from another module. *Purpose*: - To use Python programming to improve the quality of your work in the other subjects on your course. - If you choose your problem wisely you may even be able to submit it as homework for __both__ this course and the course the problem was taken from. __50% Exam:__ <br>Week 15 <br>Open-book (interactive textbook). <br>Therefore I highly recommend you follow and complete all examples in class so you have as much information as possible when it comes to the exam. <br>If you miss a seminar, try to catch up by completing the examples yourself. The tools we will use on this course may be completely new to you. We will use the remaining part of the seminar to set up the software you will be using through a __practical introduction.__ - __Git__ - Creating an online repository for your work. - Accessing the interactive textbook online. - Saving your changes in an online repository. <br> - __Jupyter notebook__: Interactive Python coding. <br> - __Home software installation__ - Anaconda (Python + Jupyter notebook) - Git <br> - __On-Campus software installation__ - Git ## Git __What is Git?__ Git is *version control* software. __What is version control software?__ Software that manages changes to a project. It tracks changes without overwriting any part of the project. Typically, when you save a file, for example a Word document, you either: - overwrite the previous version (save) - save the file under a new name (save as) This means we either: - Lose the previous version - End up with multiple files In programming we often want to: - make a small change to our program - check that our change works before moving on. - easily revert to a previous version if we don't like the changes It makes sense to save incremental versions of our work. That way, if we break something we can just go back to the previous version. But this can lead to many files: <img src="img/many_files_same_name.png" alt="Drawing" style="width: 300px;"/> How can we tell what each one does? We could try giving them meaningful names: <img src="img/many_files.gif" alt="Drawing" style="width: 300px;"/> But the name can only tell us a little bit of information... ...before they start getting really long! <img src="img/many_files_different_names.png" alt="Drawing" style="width: 300px;"/> Things get very confusing! And many files take up lots of space on your computer. ### How Git works Git creates a folder in the same directory as your file. The folder containing both the file being tracked and the Git folder is now referred to as a repository or "repo". (The Git folder is hidden.) You can keep any type of file in a repository (code files, text files, image files...). Git logs changes you make to the file. It can track multiple files within this directory. It stores a *commit message* with each change you store (or *commit*), saying what you changed: <img src="img/git_commit_.png" alt="Drawing" style="width: 300px;"/> So if you make a mistake, you can just reset to a previous version. <img src="img/git_reset.png" alt="Drawing" style="width: 300px;"/> When you commit changes, Git does not save two versions of the same file. Git only saves the __difference__ between two files.
This minimises the amount of space that tracking your changes takes up on your computer. __Example:__ Between files r3 and r4, the information saved is > -juice <br> > +soup <img src="img/git_diff.png" alt="Drawing" style="width: 500px;"/> ### Advantages and Disadvantages of Git A __great thing__ about Git is that it was made by programmers for programmers. It has an *enormous* range of functionality. A __problem__ with so much freedom is that it can be easy to get things wrong. Git can be difficult to use. To keep things nice and easy we will learn only the basics of using Git. Even this basic understanding will give you essential skills that are used every day by professional programmers. A __problem__ with Git is that it was made by programmers for programmers. We have to use the command line (or Terminal) to access it. There is no graphical user interface. It can be difficult to visualise what is going on. <img src="img/git_command_line.png" alt="Drawing" style="width: 500px;"/> ## GitHub To provide a visual interface we can use an online *host site* to store and view code... GitHub.com is a "code hosting site". It provides a visual interface to: - view code - view changes to code (*commits*) - share and collaborate with others. There are many code hosting sites; however, Github has a large community of users. So for programmers, it works like a social media site such as Facebook or Instagram. <img src="img/github-logo.jpg" alt="Drawing" style="width: 200px;"/> A repo can be a local folder on your computer. A repo can also be a storage space on GitHub or another online host site. <img src="img/github-logo.jpg" alt="Drawing" style="width: 200px;"/> Please log on to the computer. Let's start by downloading your interactive textbook from github.com Open a web browser and go to: https://github.com/hphilamore/ILAS_python [You can find it by Googling "github hphilamore" and selecting ILAS_python] This is a __repository__. It is an online directory where this project, the textbook, is stored. We can look at previous versions of the code by selecting *commits*... We can easily view the difference ("diff") between the previous and current version. You are going to download a personal copy of the textbook to your user area. ### Introduction to the Command Line. We are going to download the textbook using the command line. To open the terminal: - press "win key" + "R" - type: __cmd__ - press enter A terminal will launch. <img src="img/KUterminal.png" alt="Drawing" style="width: 500px;"/> The *command prompt* will say something like: C:¥Users¥Username: The C tells us that we are on the C drive of the computer. Let's switch to the M drive where the user (you!) can save files. In the terminal type: >`M:` ...and press enter. You should see the command prompt change. <img src="img/KUterminalMdrive.png" alt="Drawing" style="width: 700px;"/> To see what is on the M drive type: >`dir` ...and press enter. You will see all the folders in your personal user area. Double click on the computer icon on the desktop. Double click on Home Directory (M:). You should see the same folders as those listed in the terminal. To navigate to documents type: >`cd Documents` cd stands for "change directory". Type: >`dir` again to view the contents of your Documents folder. We can move down the filesystem of the computer by typing: >`cd` followed by the name of the folder we want to move to.
The folder must be: - on the same branch - one step from our current location <img src="img/directory_tree.gif" alt="Drawing" style="width: 200px;"/> To move back up by one step, type: >`cd ..` Try this now. <img src="img/directory_tree.gif" alt="Drawing" style="width: 200px;"/> We can move by more than one step by separating the names of the folders using the symbol: ¥ (note, this is \ or / on US and European computers, depending on the operating system) <img src="img/directory_tree.gif" alt="Drawing" style="width: 200px;"/> For example, now try navigating to any folder in your Documents folder by typing: >`cd Documents¥folder_name` where `folder_name` is the name of the folder to move to. And now let's go back to the main Documents folder by typing: > cd .. ## "Cloning" the Textbook Using Git Go to the Github site we opened earlier. We are going to download a copy of the textbook from an online *repository*. This is referred to as *cloning*. This will allow you to work on the textbook and save it locally on a computer. Click the button "Clone or download" and copy the link by pressing Ctrl + C <img src="img/clone-URL.png" alt="Drawing" style="width: 500px;"/> In the terminal type `git clone`. After the word `clone` __leave a space__ and then paste the URL that you just copied: > `git clone` &nbsp; PASTE_COPIED_URL_HERE `Clone` copies all the files from the repository at the URL you have entered. In the terminal type: > `dir` A folder called "ILAS_python" should have appeared. Go into the folder and view the content by typing: >`cd ILAS_python` ><br>`dir` Hint: If you start typing a folder or file name and press "tab", the name autocompletes! Try it for yourself e.g. in the Documents directory type: >`cd ILAS` then press "tab". ## Creating an Online Github Account The __online Github repository__ that you cloned the textbook from belongs to me. You are going to create your own online Github user account. You will use Github to update the online version of your textbook with the changes you make to the version stored locally on the university M drive. This means you can easily access it from outside the Kyoto University system, for example, to complete your homework. I will use your online repositories to view your work and check your progress during the course. Open https://github.com Click "Sign up" at the top right hand corner. <img src="img/github_signup.png" alt="Drawing" style="width: 500px;"/> Follow the steps to create an account, the same way as you would for a social media site. Choose a user name, email address, and password. <img src="img/github-signup.png" alt="Drawing" style="width: 300px;"/> Use the confirmation email to complete your registration. ## Creating an Online GitHub Repository Now we are going to set up your first online repository. Click the + sign in the top right corner. Choose "New repository". <img src="img/github_newrepo.png" alt="Drawing" style="width: 500px;"/> Choose a repository name (e.g. Python Textbook, python_textbook, Intro_to_python) <img src="img/github_namerepo.jpg" alt="Drawing" style="width: 300px;"/> Leave the other settings as they are for now. We will learn about these later in the course. Click the button "Create repository". <img src="img/github_create_repo.jpg" alt="Drawing" style="width: 300px;"/> ## Adding Files to an Online Github Repository Your online repository is currently empty. We are now going to link your local repository (stored on the computer on the M drive) to your online repository (stored at github.com).
In the terminal, make sure you are __inside__ the folder named ILAS_python. If you are not, then navigate to the folder using >`cd` Enter the username that you registered when setting up your account on GitHub: >`git config --global user.name "username"` Enter the email address that you registered when setting up your account on GitHub: >`git config --global user.email "[email protected]"` Copy the URL of your repo from the "Quick setup" section of the GitHub page that should have appeared. <img src="img/github_copyurl.png" alt="Drawing" style="width: 300px;"/> __NOTE__ <br>Earlier we copied the URL of __my repository__ (https://github.com/hphilamore/ILAS_python.git). <br>We used it to tell the computer where to copy files __from__. <br>Now we are copying the URL of __your repository__ (https://github.com/yourGithub/yourRepo.git). <br>We will now use a similar procedure to tell the computer where it should copy files __to__. First we will disconnect your local repo from __my__ online repo. <br>In the terminal type >`git remote rm origin` <br>The command removes (`rm`) a remote (`remote`) URL from your local repository. (`origin` is a name that was given by default to the URL you cloned the repository from.) Second we will connect your local repo to __your__ online repo. <br>In the terminal type `git remote add origin` <br>After the word `origin` __leave a space__ and then paste the URL that you just copied: >`git remote add origin` &nbsp; PASTE_COPIED_URL_HERE <br>The command connects (`add`) a remote (`remote`) URL to your local repository using: - a name for your remote (let's use origin, again) - a URL (the URL you just copied) Finally, type: >`git push -u origin master` The command uploads (`push`) the contents of your *local repository* to a *remote repository* using: - a remote name (ours is "origin") - a *branch* of your repository (this is a more advanced feature of github. We will use the default branch only. It is called "master") `-u` sets the remote repository, `origin`, (and branch, `master`) as the default. So from now on, you only need to type `git push` to upload the contents of your local repository. A new window may open: <img src="img/GitHubLogin.png" alt="Drawing" style="width: 200px;"/> If a new window opens, enter your github login details then return to the terminal. If the window does not appear, skip this step and return to the terminal. A prompt to enter your Github login details should have appeared. Enter your login details. <img src="img/GitHubTermLogin.png" alt="Drawing" style="width: 500px;"/> You should see a few lines of code appear, ending with the message: >`Branch master set up to track remote branch master from origin` Now look again at your online GitHub page. Click on the "code" tab to reload the page. <img src="img/github_code.png" alt="Drawing" style="width: 300px;"/> The textbook (comprising several jupyter notebook (.ipynb) files) should now have appeared in your online repository. ## Jupyter Notebook There are different ways of writing and running Python code. For the first part of this course we will be writing code in the Python interactive execution environment, __Jupyter notebook__. A Jupyter notebook lets you write and execute Python code in a web browser. You can write and execute code in sections. This allows you to immediately view the output of each section. For this reason Jupyter notebooks are widely used in scientific computing.
To launch Jupyter notebook go to: Start >> Programs >> Programming >> Anaconda3 >> JupyterNotebook <br>(Start >> すべてのプログラム >> Programming >> Anaconda3 >> JupyterNotebook) and double click the Jupyter notebook icon to launch the application. You should see the content of your documents folder appear in the window that opens. Open the folder ILAS_python which contains the interactive textbook. The textbook is written as a Jupyter notebook; each chapter (or seminar) is a separate .ipynb file. This format allows you to: - add to the textbook - run your code and view the outcome within the file <img src="img/jupyter-file-browser.png" alt="Drawing" style="width: 500px;"/> Click on the file 0_Introduction.ipynb to open it. If the following error message appears, choose `Python [python3]` from the drop-down menu and click OK. <img src="img/kernel_not_found.png" alt="Drawing" style="width: 450px;"/> You will see the contents of the slides that you have been following today. Scroll down to find __THIS POINT__ in the notebook. <img src="img/youarehere.png" alt="Drawing" style="width: 300px;"/> A Jupyter notebook is made up of a number of cells. Each cell can contain Python code. Let's learn how to: - create a new notebook - write code - run code At the top of the screen click: File >> New Notebook >> Python 3 To execute a cell: - Click on it (a green box will appear around it once selected) - Press "shift" + "enter" When executed: - the code in the cell will run - the output of the cell will be displayed beneath the cell. ### Your First Python Code Python can be used like a calculator to do basic arithmetic. In the empty cell in the new notebook in the next tab of your web browser (Untitled) enter: <br> 1 + 2 Press "shift" + "enter" Your notebook should look something like this: <img src="img/jupyter_cell_one.png" alt="Drawing" style="width: 500px;"/> In Python we can create variables that store values. In the next cell create the variable `x` which has the value `2 + 3`: ```python x = 2 + 3 ``` To display the value of x we need to "print" it. Type: ```python print(x) ``` Press "shift" + "enter" to run the cell. Your notebook should look something like this: <img src="img/jupyter_celltwo.png" alt="Drawing" style="width: 500px;"/> Variables are shared between cells. Therefore, executing: ```python y = x - 2 print(y) ``` in the next cell should give the following result in your notebook: <img src="img/jupyter_cell_three.png" alt="Drawing" style="width: 500px;"/> Cells are executed in the order that the user runs them. By convention, Jupyter notebooks are __expected__ to be run from top to bottom. If you skip a cell or run the cells in a different order you may get unexpected behaviour. For example, in the next cell enter: ```python x = 10 ``` then re-run the cell above containing: ```python y = x - 2 print(y) ``` and you should see the value of y change. The original value of `x` has been replaced by the new definition `x = 10`. <img src="img/jupyter_change_value.png" alt="Drawing" style="width: 500px;"/> Now run the cell containing: ```python x = 2 + 3 ``` then the cell containing: ```python y = x - 2 print(y) ``` and you will see the value of `y` change back to its original value. <img src="img/jupyter_value_change2.png" alt="Drawing" style="width: 500px;"/> To run all the cells sequentially from top to bottom click: Cell >> Run All This menu also contains other options for running cells. Close the file Untitled.ipynb. Delete the file Untitled.ipynb by selecting the check box next to the file name...
<img src="img/select_jupyter_untitled.png" alt="Drawing" style="width: 200px;"/> ...and clicking the red bin icon: <img src="img/delete_jupyter_untitled.png" alt="Drawing" style="width: 200px;"/> ## Adding to the Python Textbook Throughout this course you will develop new skills by completing excercises in the interactive textbook. At the end of the course you will have all of your notes and practise excercises in one place, accessible from almost anywhere. Let's practise making a change to this chapter of the texbook and saving the change to your Git repository. Below is a series of print statements. ```PYTHON print(2 + 1) print(5 - 2) print(1 + 1) print(2 - 1) print(1 + 5) ``` Select all the print statements and press "ctrl" + "c" to copy them. Now paste them in __the next cell__ (below this one) using "ctrl" + "v". Select the cell into which you have pasted the print statements by clicking on it (a green box should appear around it) and press "shift" + "enter" to run the cell. <img src="img/change.jpg" alt="Drawing" style="width: 300px;"/> ``` print(2 + 1) print(5 - 2) print(1 + 1) print(2 - 1) print(1 + 5) ``` Save your changes by clicking the button in the top left of this window (jupyter notebook). <img src="img/Jupytersave.png" alt="Drawing" style="width: 300px;"/> You have saved your changes to your personal copy of the textbook on the computer M drive. ## Tracking changes using Git We are now going to: - use Git to record the changes you make to the textbook. - upload it to your online GitHub repository so that you can access it online. Git has a two-step process for saving changes. 1. Select files for which to log changes (__"add"__) 1. Log changes (__"commit"__) This is an advanced feature. For now, we will choose to track (__add__) all the files in our directory as this is simpler than selecting to track some and not others. When files have been __added__ but not yet __commited__, we say they have been *staged*. <img src="img/git-local-workflow.png" alt="Drawing" style="width: 300px;"/> In the terminal type: >`git add -A` to take a snapshot of the changes to all (`-A`) the files in your local directory. <br>These changes are now "staged". Now type the following, replacing `"A short message explaining your changes"` with a message of your own between " " quotation marks: >`git commit -m "A short message explaining your changes"` This saves the changes with a message (`-m`) you can refer to to remind you what you changed. <br> To avoid losing any changes, these commands (`add` and `commit`) are usually executed in immediate succession. There is one last __very important step__ we need to do... ## Updating your Online GitHub Repository We have updated the Git repository held on the computer. The last thing we need to do is to update your online repository. We do this using the `push` command. <img src="img/git-local-remote-workflow-cropped.png" alt="Drawing" style="width: 500px;"/> You used the `push` command when you originally uploaded the textbook to your repository. Enter exactly the same code into the terminal: Type: git push Enter your GitHub login details when prompted. ### Checking that your changes have appeared in your online repo. Go to your GitHub page in your web browser and open the file 0_Introduction. Scroll down to where you made the change. 
Hint: look for the marker:

<img src="img/change.jpg" alt="Drawing" style="width: 100px;"/>

## Installing Software for Home Use

It is highly recommended that you download and install the software we will use in class:
- Jupyter notebook (Anaconda)
- Git

You will need to use this software to complete homework assignments and prepare for the exam. Both are free to download and install.

Anaconda (which includes Jupyter notebook) can be downloaded from: https://www.anaconda.com/download/

><br>Python 3.6 version and Python 2.7 version are available.
><br>Choose Python 3.6 version

Git can be downloaded from: https://github.com/git-for-windows/git/releases/tag/v2.14.1.windows.1

>Choose Git-2.14.1-64-bit.exe if you have a 64 bit operating system.
><br> Choose Git-2.14.1-32-bit.exe if you have a 32 bit operating system.

An easy-to-follow download wizard will launch for each piece of software.

## Installing Git for On-Campus Use

Git is only available in the computer lab (Room North wing 21, Academic Center Bldg., Yoshida-South Campus). If you want to use Git on a Kyoto University computer outside of the computer lab you will need to install Git in your local user area. The following instructions tell you how to do this.

__IMPORTANT NOTE__ If you are going to use your personal computer to complete work in/outside of the seminars, you DO NOT need to complete this step.

Download the Git program from here: https://github.com/git-for-windows/git/releases/tag/v2.14.1.windows.1

The version you need is: PortableGit-2.14.1-32-bit.7z.exe

When prompted, choose to __run the file__ 実行(R).

<img src="img/GitHubInstallRun.png" alt="Drawing" style="width: 500px;"/>

When prompted, change the location to save the file to: M:¥Documents¥PortableGit

<img src="img/GitLocation.png" alt="Drawing" style="width: 400px;"/>

Press OK

The download may take some time. Once the download has completed...

To open the terminal:
- press "win key" + "R"
- type: __cmd__
- press enter

In the terminal type:

>`M:`

...and press enter, to switch to the M drive. You should see the command prompt change.

<img src="img/KUterminalMdrive.png" alt="Drawing" style="width: 700px;"/>

To navigate to documents type:

>`cd Documents`

cd stands for "change directory".

You should now see a folder called PortableGit in the contents list of the __Documents__ folder.

Type:

>`cd PortableGit`

to move into your PortableGit folder.

To check that Git has installed, type:

>`git-bash.exe`

A new terminal window will open. In this window type:

>`git --version`

If Git has been installed, the version of the program will be displayed. You should see something like this:

<img src="img/git-version.gif" alt="Drawing" style="width: 500px;"/>

Close the window.

The final thing we need to do is to tell the computer where to look for the Git program.

Move one step up from the Git folder. In the original terminal window, type:

> `cd ..`

Now enter the following in the terminal:

> PATH=M:¥Documents¥PortableGit¥bin;%PATH%

(you may need to have your keyboard set to JP to achieve this)

<img src="img/windows_change_lang.png" alt="Drawing" style="width: 400px;"/>

You can type this or __copy and paste__ it from the README section on the GitHub page we looked at earlier.
<img src="img/readme_.png" alt="Drawing" style="width: 500px;"/> __Whenever to use Git on a Kyoto University computer outside of the computer lab (Room North wing 21, Academic Center Bldg., Yoshida-South Campus), you must first open a terminal and type the line of code above to tell the computer where to look for the Git program.__ The program Git has its own terminal commands. Each one starts with the word `git` You can check git is working by typing: >`git status` You should see something like this: <img src="img/git-version.gif" alt="Drawing" style="width: 500px;"/> # Summary - Course structure: - Part 1: Fundamentals of programming (~10 weeks) - Part 2: Applications of programming (~4 weeks). - Version Control and Testing (throughout) - Assessment: - 50% course work (mostly in-class) - 50% exam - New Software: - Git - GitHub - Jupyter notebook # Homework Download and install Jupyter notebook and Git on your personal computer. Send me an email with a link to your online GitHub repository: [email protected] # Next Seminar If possible, please bring your personal computer (on which you have installed Jupyter notebook and Git) to class. We are going to complete an excercise: __setting up a local repository on your personal computer.__ If you cannot bring your personal computer with you, you can practise using a laptop provided in class, but you wil need to repeat the steps at home in your own time.
``` import pandas as pd import re import ipaddress as ip import csv gen_delims = [":", "/", "?", "#", "[", "]", "@"] sub_delims = ["!", "$", "&", "'", "(", ")", "*", "+", ",", ";", "="] reserved_characters = gen_delims + sub_delims unreserved_characters = ["-", ".", "_", "~"] def transform_class(x): return 1 if x == 'good' else 0 def load_known_tlds(): with open('known_tlds.txt', 'r') as file: tlds = file.read().split('\n') tlds = [tld.lower() for tld in tlds] return tlds load_known_tlds() def load_suspicious_tlds(): with open('top_abused_tlds.txt') as file: tlds = file.read().split('\n') tlds = [tld.lower() for tld in tlds] return tlds load_suspicious_tlds() def load_suspicious_words(): with open('suspicious_words.txt') as file: words = file.read().split('\n') words = [tld.lower() for tld in words] return words load_suspicious_words() def scrape_tld(url): try: index = url.index('/') except ValueError: index = len(url) - 1 dot_index = url.rfind('.', 0, index) try: index = url.index(':', dot_index, index + 1) except ValueError: () return url[(dot_index + 1):index] def len_of_url(url): return len(url) def is_tld_in_known_list(tld, known_list): return 1 if tld in known_list else 0 def is_tld_in_suspicious_list(tld, suspicious_list): return 1 if tld in suspicious_list else 0 def does_url_contain_ip_address(url): try: if ip.ip_address(url): return 1 except ValueError: return 0 def len_of_deep_url(url): try: index = url.index('/') return len(url[index:]) except ValueError: return len(url) def number_of_gen_delimiters_in_url(url): return sum([1 if item in url else 0 for item in gen_delims]) def number_of_sub_delimiters_in_url(url): return sum([1 if item in url else 0 for item in sub_delims]) def number_of_reserved_characters_in_url(url): return number_of_gen_delimiters_in_url(url) + number_of_sub_delimiters_in_url(url) def number_of_unreserved_special_characters_in_url(url): return sum([1 if item in url else 0 for item in unreserved_characters]) def number_of_sub_domains(url): try: index = url.index('/') return len(url[index:].split('.')) except ValueError: return 0 def does_url_contain_http_inside(url): return 1 if 'http' in url[1:] else 0 def remove_http_from_begining(url): if url.startswith('http://www.'): return url[11:] elif url.startswith('http://'): return url[7:] return url def count_suspicious_words_in_url(url, list_of_words): return sum(item in url for item in list_of_words) def count_percent_character_in_url(url): return url.count('%') def count_number_of_digits_in_url(url): return sum(c.isdigit() for c in url) def number_length_ration_in_url(url): return count_number_of_digits_in_url(url) / len_of_url(url) def does_url_contain_equal_sign_after_question_mark(url): try: index = url.index('?') return 1 if '=' in url[index:] else 0 except ValueError: return 0 def does_url_contain_non_standard_port(url): tmp = re.search(":([0-9].?.?.?)", url) try: port = tmp.group(0)[1:] return 1 if (port != '8080') and (port != '80') and (port != '443') else 0 except AttributeError: return 0 file = 'data.csv' df = pd.read_csv(file, converters={'label': transform_class}, low_memory=False) print("Original data length: " + str(len(df))) df.drop_duplicates(subset=None, inplace=True) print("Data without duplicates length: " + str(len(df)) + "\n") print(df.info()) if df.isnull().values.any(): print("Skup sadrzi null vrednosti!\n") else: print("Skup ne sadrzi nijednu null vrednost\n") if df.isna().values.any(): print("Skup sadrzi NaN vrednosti!\n") else: print("Skup ne sadrzi nijednu NaN vrednost\n") df_whois = 
pd.read_csv('whois_data.csv', low_memory=False) df = pd.merge(df, df_whois, how='inner', on='url') df.drop_duplicates(subset=None, inplace=True) x = df.values[:, 0] y = df.values[:, 1] y = y.astype('int') df_whois = df[['rd', 'ed', 'ud']] print(df_whois.head(10)) whois_rd = df_whois.values[:, 0] whois_ed = df_whois.values[:, 1] whois_ud = df_whois.values[:, 2] x = [remove_http_from_begining(item) for item in x] known_tlds = load_known_tlds() abused_tlds = load_suspicious_tlds() suspicious_words = load_suspicious_words() data_columns = ['url', 'url_len', 'tld_in_known', 'tld_in_abused', 'contain_ip', 'deep_url_len', 'num_of_gen_deli', 'num_of_sub_deli', 'num_of_reserved_char', 'num_of_unreserved_spec_char', 'num_of_sub_domains', 'contain_http', 'number_of_suspicious_words', 'number_of_percentage_signs', 'number_of_numbers', 'number_of_numbers_length_of_url_ratio', 'contain_equal_sign_after_question_mark', 'contain_non_standard_port', 'whois_rd', 'whois_ed', 'whois_ud', 'class'] data_list = [data_columns] c = 0 for (i, url) in enumerate(x): url_string = url url_string = url url_len = len_of_url(url) url_tld = scrape_tld(url) is_tld_known = is_tld_in_known_list(url_tld, known_tlds) is_tld_abused = is_tld_in_suspicious_list(url_tld, abused_tlds) url_contain_ip = does_url_contain_ip_address(url) url_deep_url_len = len_of_deep_url(url) url_num_of_gen_delim = number_of_gen_delimiters_in_url(url) url_num_of_sub_delim = number_of_sub_delimiters_in_url(url) url_num_of_res_delim = number_of_reserved_characters_in_url(url) url_num_of_unres_delim = number_of_unreserved_special_characters_in_url(url) url_num_of_sub_domains = number_of_sub_domains(url) url_contain_http = does_url_contain_http_inside(url) number_of_suspicious_words = count_suspicious_words_in_url(url, suspicious_words) number_of_percentage_signs = count_percent_character_in_url(url) number_of_digits = count_number_of_digits_in_url(url) digits_url_length_ratio = number_length_ration_in_url(url) url_contain_es_after_qm = does_url_contain_equal_sign_after_question_mark(url) url_contain_non_standard_port = does_url_contain_non_standard_port(url) url_class = y[i] days_since_created = whois_rd[i] days_until_expires = whois_ed[i] days_since_last_updated = whois_ud[i] data_list.append([url_string, url_len, is_tld_known, is_tld_abused, url_contain_ip, url_deep_url_len, url_num_of_gen_delim, url_num_of_sub_delim, url_num_of_res_delim, url_num_of_unres_delim, url_num_of_sub_domains, url_contain_http, number_of_suspicious_words, number_of_percentage_signs, number_of_digits, digits_url_length_ratio, url_contain_es_after_qm, url_contain_non_standard_port, days_since_created, days_until_expires, days_since_last_updated, url_class]) c += 1 if(c%10000 == 0): print(c) with open('data_for_classification.csv', 'w', encoding='utf-8') as file: writer = csv.writer(file) for row in data_list: writer.writerow(row) ```
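To sanity-check these feature extractors before running them over the whole dataset, it can help to spot-check a single URL. The sketch below is only illustrative: `sample_url` is a made-up address, and the snippet assumes the functions defined above are already in scope.

```python
# Spot-check a few of the feature functions on a made-up URL.
sample_url = "http://www.example-login.com:8080/account/verify?id=123"
url = remove_http_from_begining(sample_url)

print(scrape_tld(url))                          # 'com' for this example
print(len_of_url(url))                          # length of the trimmed URL
print(count_number_of_digits_in_url(url))       # how many digit characters appear
print(does_url_contain_non_standard_port(url))  # 0, since 8080 is treated as standard above
```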
``` %matplotlib inline ``` Auto-tuning a Convolutional Network for x86 CPU =============================================== **Author**: `Yao Wang <https://github.com/kevinthesun>`_, `Eddie Yan <https://github.com/eqy>`_ This is a tutorial about how to tune convolution neural network for x86 CPU. Note that this tutorial will not run on Windows or recent versions of macOS. To get it to run, you will need to wrap the body of this tutorial in a :code:`if __name__ == "__main__":` block. ``` import os import numpy as np import tvm from tvm import relay, autotvm from tvm.relay import testing from tvm.autotvm.tuner import XGBTuner, GATuner, RandomTuner, GridSearchTuner from tvm.autotvm.graph_tuner import DPTuner, PBQPTuner import tvm.contrib.graph_runtime as runtime ``` Define network -------------- First we need to define the network in relay frontend API. We can either load some pre-defined network from :code:`relay.testing` or building :any:`relay.testing.resnet` with relay. We can also load models from MXNet, ONNX and TensorFlow. In this tutorial, we choose resnet-18 as tuning example. ``` def get_network(name, batch_size): """Get the symbol definition and random weight of a network""" input_shape = (batch_size, 3, 224, 224) output_shape = (batch_size, 1000) if "resnet" in name: n_layer = int(name.split("-")[1]) mod, params = relay.testing.resnet.get_workload( num_layers=n_layer, batch_size=batch_size, dtype=dtype ) elif "vgg" in name: n_layer = int(name.split("-")[1]) mod, params = relay.testing.vgg.get_workload( num_layers=n_layer, batch_size=batch_size, dtype=dtype ) elif name == "mobilenet": mod, params = relay.testing.mobilenet.get_workload(batch_size=batch_size, dtype=dtype) elif name == "squeezenet_v1.1": mod, params = relay.testing.squeezenet.get_workload( batch_size=batch_size, version="1.1", dtype=dtype ) elif name == "inception_v3": input_shape = (batch_size, 3, 299, 299) mod, params = relay.testing.inception_v3.get_workload(batch_size=batch_size, dtype=dtype) elif name == "mxnet": # an example for mxnet model from mxnet.gluon.model_zoo.vision import get_model block = get_model("resnet18_v1", pretrained=True) mod, params = relay.frontend.from_mxnet(block, shape={input_name: input_shape}, dtype=dtype) net = mod["main"] net = relay.Function( net.params, relay.nn.softmax(net.body), None, net.type_params, net.attrs ) mod = tvm.IRModule.from_expr(net) else: raise ValueError("Unsupported network: " + name) return mod, params, input_shape, output_shape # Replace "llvm" with the correct target of your CPU. # For example, for AWS EC2 c5 instance with Intel Xeon # Platinum 8000 series, the target should be "llvm -mcpu=skylake-avx512". # For AWS EC2 c4 instance with Intel Xeon E5-2666 v3, it should be # "llvm -mcpu=core-avx2". target = "llvm" batch_size = 1 dtype = "float32" model_name = "resnet-18" log_file = "%s.log" % model_name graph_opt_sch_file = "%s_graph_opt.log" % model_name # Set the input name of the graph # For ONNX models, it is typically "0". input_name = "data" # Set number of threads used for tuning based on the number of # physical CPU cores on your machine. num_threads = 1 os.environ["TVM_NUM_THREADS"] = str(num_threads) ``` Configure tensor tuning settings and create tasks ------------------------------------------------- To get better kernel execution performance on x86 CPU, we need to change data layout of convolution kernel from "NCHW" to "NCHWc". To deal with this situation, we define conv2d_NCHWc operator in topi. We will tune this operator instead of plain conv2d. 
We will use local mode for tuning configuration. RPC tracker mode can be setup similarly to the approach in `tune_relay_arm` tutorial. To perform a precise measurement, we should repeat the measurement several times and use the average of results. In addition, we need to flush the cache for the weight tensors between repeated measurements. This can make the measured latency of one operator closer to its actual latency during end-to-end inference. ``` tuning_option = { "log_filename": log_file, "tuner": "random", "early_stopping": None, "measure_option": autotvm.measure_option( builder=autotvm.LocalBuilder(), runner=autotvm.LocalRunner( number=1, repeat=10, min_repeat_ms=0, enable_cpu_cache_flush=True ), ), } # You can skip the implementation of this function for this tutorial. def tune_kernels( tasks, measure_option, tuner="gridsearch", early_stopping=None, log_filename="tuning.log" ): for i, task in enumerate(tasks): prefix = "[Task %2d/%2d] " % (i + 1, len(tasks)) # create tuner if tuner == "xgb" or tuner == "xgb-rank": tuner_obj = XGBTuner(task, loss_type="rank") elif tuner == "ga": tuner_obj = GATuner(task, pop_size=50) elif tuner == "random": tuner_obj = RandomTuner(task) elif tuner == "gridsearch": tuner_obj = GridSearchTuner(task) else: raise ValueError("Invalid tuner: " + tuner) # do tuning n_trial = len(task.config_space) tuner_obj.tune( n_trial=n_trial, early_stopping=early_stopping, measure_option=measure_option, callbacks=[ autotvm.callback.progress_bar(n_trial, prefix=prefix), autotvm.callback.log_to_file(log_filename), ], ) # Use graph tuner to achieve graph level optimal schedules # Set use_DP=False if it takes too long to finish. def tune_graph(graph, dshape, records, opt_sch_file, use_DP=True): target_op = [ relay.op.get("nn.conv2d"), ] Tuner = DPTuner if use_DP else PBQPTuner executor = Tuner(graph, {input_name: dshape}, records, target_op, target) executor.benchmark_layout_transform(min_exec_num=2000) executor.run() executor.write_opt_sch2record_file(opt_sch_file) ``` Finally, we launch tuning jobs and evaluate the end-to-end performance. ``` def tune_and_evaluate(tuning_opt): # extract workloads from relay program print("Extract tasks...") mod, params, data_shape, out_shape = get_network(model_name, batch_size) tasks = autotvm.task.extract_from_program( mod["main"], target=target, params=params, ops=(relay.op.get("nn.conv2d"),) ) # run tuning tasks tune_kernels(tasks, **tuning_opt) tune_graph(mod["main"], data_shape, log_file, graph_opt_sch_file) # compile kernels with graph-level best records with autotvm.apply_graph_best(graph_opt_sch_file): print("Compile...") with tvm.transform.PassContext(opt_level=3): lib = relay.build_module.build(mod, target=target, params=params) # upload parameters to device ctx = tvm.cpu() data_tvm = tvm.nd.array((np.random.uniform(size=data_shape)).astype(dtype)) module = runtime.GraphModule(lib["default"](ctx)) module.set_input(input_name, data_tvm) # evaluate print("Evaluate inference time cost...") ftimer = module.module.time_evaluator("run", ctx, number=100, repeat=3) prof_res = np.array(ftimer().results) * 1000 # convert to millisecond print( "Mean inference time (std dev): %.2f ms (%.2f ms)" % (np.mean(prof_res), np.std(prof_res)) ) # We do not run the tuning in our webpage server since it takes too long. # Uncomment the following line to run it by yourself. # tune_and_evaluate(tuning_option) ``` Sample Output ------------- The tuning needs to compile many programs and extract feature from them. 
So a high performance CPU is recommended. One sample output is listed below. .. code-block:: bash Extract tasks... Tuning... [Task 1/12] Current/Best: 598.05/2497.63 GFLOPS | Progress: (252/252) | 1357.95 s Done. [Task 2/12] Current/Best: 522.63/2279.24 GFLOPS | Progress: (784/784) | 3989.60 s Done. [Task 3/12] Current/Best: 447.33/1927.69 GFLOPS | Progress: (784/784) | 3869.14 s Done. [Task 4/12] Current/Best: 481.11/1912.34 GFLOPS | Progress: (672/672) | 3274.25 s Done. [Task 5/12] Current/Best: 414.09/1598.45 GFLOPS | Progress: (672/672) | 2720.78 s Done. [Task 6/12] Current/Best: 508.96/2273.20 GFLOPS | Progress: (768/768) | 3718.75 s Done. [Task 7/12] Current/Best: 469.14/1955.79 GFLOPS | Progress: (576/576) | 2665.67 s Done. [Task 8/12] Current/Best: 230.91/1658.97 GFLOPS | Progress: (576/576) | 2435.01 s Done. [Task 9/12] Current/Best: 487.75/2295.19 GFLOPS | Progress: (648/648) | 3009.95 s Done. [Task 10/12] Current/Best: 182.33/1734.45 GFLOPS | Progress: (360/360) | 1755.06 s Done. [Task 11/12] Current/Best: 372.18/1745.15 GFLOPS | Progress: (360/360) | 1684.50 s Done. [Task 12/12] Current/Best: 215.34/2271.11 GFLOPS | Progress: (400/400) | 2128.74 s Done. Compile... Evaluate inference time cost... Mean inference time (std dev): 3.16 ms (0.03 ms)
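Once a tuning run has produced the log files, the expensive `tune_kernels` and `tune_graph` steps do not need to be repeated: the saved records can be applied directly at compile time. This is a minimal sketch under that assumption; it reuses `get_network`, `model_name`, `graph_opt_sch_file`, and `target` from the code above and assumes `resnet-18_graph_opt.log` was written by an earlier run.

```python
# Compile with previously tuned graph-level records, skipping the tuning phase.
# (Assumes graph_opt_sch_file was produced by an earlier call to tune_and_evaluate.)
mod, params, data_shape, out_shape = get_network(model_name, batch_size)

with autotvm.apply_graph_best(graph_opt_sch_file):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build_module.build(mod, target=target, params=params)
```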
## Which journals have published articles on RCR and metadata?

We are interested in ranking journals by their previous interest in reproducible research and metadata. Because PubMed is paginated and the Python packages `eutils` and `entrezpy` are a bit lacking, I chose to use Lens.org searches, and exported all columns except `abstract` using the following search terms:

` reproducible research OR ( reproducibility OR reproducible computational research ) `

` metadata `

Export appears limited to 50k records.

```
import time

import pandas as pd
pd.set_option('display.max_rows', None)

metadata = pd.read_csv('../data/lens/metadata.lens.csv.gz', compression='gzip')
time.sleep(1000)  # don't rush
rcr = pd.read_csv('../data/lens/rcr.lens.csv.gz', compression='gzip')
```

Let's see what columns exist.

```
metadata.columns.tolist()

# pandas is kind of awkward compared to dplyr, let's count unique lens ids
# https://nbviewer.jupyter.org/gist/TomAugspurger/6e052140eaa5fdb6e8c0
meta_jrnls = metadata.groupby(['Source Title']).agg({"Lens ID": "count"}).rename(columns={"Lens ID": "metadata_cnt"})
rcr_jrnls = rcr.groupby(['Source Title']).agg({"Lens ID": "count"}).rename(columns={"Lens ID": "rcr_cnt"})
```

## Top journals for metadata

Let's use a scaled rank to ensure metadata and rcr are roughly comparable.

```
meta_jrnls['meta_rank'] = meta_jrnls['metadata_cnt'].rank()

from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
meta_jrnls['meta_scaled'] = scaler.fit_transform(meta_jrnls['meta_rank'].values.reshape(-1,1))

meta_jrnls.sort_values("metadata_cnt", ascending=False).head(n=100)
```

## Top journals for reproducible research

```
rcr_jrnls['rcr_rank'] = rcr_jrnls['rcr_cnt'].rank()
rcr_jrnls['rcr_scaled'] = scaler.fit_transform(rcr_jrnls['rcr_rank'].values.reshape(-1,1))
rcr_jrnls.sort_values("rcr_cnt", ascending=False).head(n=100)

jrnls = meta_jrnls.merge(rcr_jrnls, how='inner', left_on='Source Title', right_on='Source Title')
jrnls['total'] = jrnls['rcr_scaled'] + jrnls['meta_scaled']
jrnls.sort_values("total", ascending=False).head(n=100)
```

## Top authors by Metadata

```
metadata_auth_counts = metadata['Author/s'].str.split(';').explode().value_counts().rank().to_frame().rename(columns={'Author/s':'meta_auth_cnt'})
metadata_auth_counts['meta_auth_scaled'] = scaler.fit_transform(metadata_auth_counts['meta_auth_cnt'].values.reshape(-1,1))
metadata_auth_counts.head(n=15)

## Top authors by RCR
rcr_auth_counts = rcr['Author/s'].str.split(';').explode().value_counts().rank().to_frame().rename(columns={'Author/s':'rcr_auth_cnt'})
rcr_auth_counts['rcr_auth_scaled'] = scaler.fit_transform(rcr_auth_counts['rcr_auth_cnt'].values.reshape(-1,1))
rcr_auth_counts.head(n=15)
```

## Top authors for both fields

```
authors = metadata_auth_counts.merge(rcr_auth_counts, how='inner', on=None, left_on=None, right_on=None, left_index=True, right_index=True, sort=False, suffixes=('_meta', '_rcr'), copy=True, indicator=False, validate=None)
authors['total'] = authors['rcr_auth_scaled'] + authors['meta_auth_scaled']
authors.sort_values("total", ascending=False).head(n=100)
```
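The scaled-rank idea used above is easier to see on a toy example. The sketch below uses made-up journal counts to show how `rank()` followed by `MinMaxScaler` maps any count distribution onto the interval [0, 1], so the two searches can be summed on an equal footing.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Made-up per-journal counts, just to illustrate the transformation.
toy = pd.DataFrame({"cnt": [3, 50, 7, 1200]}, index=["A", "B", "C", "D"])
toy["rank"] = toy["cnt"].rank()  # 1, 3, 2, 4
toy["scaled"] = MinMaxScaler().fit_transform(toy["rank"].values.reshape(-1, 1))
print(toy)  # 'scaled' runs from 0.0 (smallest count) to 1.0 (largest)
```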
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) # **PySpark Tutorial-8 PySpark Specifics for Spark NLP** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/PySpark/8.PySpark_Specifics_for_Spark_NLP.ipynb) In this notebook, some special Spark NLP annotators have been performed. ### Install PySpark ``` # install PySpark ! pip install -q pyspark==3.2.0 spark-nlp ``` ### Initializing Spark ``` import sparknlp spark = sparknlp.start(spark32=True) # params =>> gpu=False, spark23=False (start with spark 2.3) print("Spark NLP version", sparknlp.version()) print("Apache Spark version:", spark.version) spark # DO NOT FORGET WHEN YOU'RE DONE => spark.stop() from sparknlp.base import * import pandas as pd from sparknlp.functions import * from pyspark.sql.functions import col from pyspark.sql.types import ArrayType, StringType from pyspark.ml.param.shared import HasInputCol, HasOutputCol from pyspark.ml.util import DefaultParamsReadable, DefaultParamsWritable import pyspark.sql.functions as F import pyspark.sql.types as T from pyspark.sql import Row ``` # Annotators and Transformer Concepts In Spark NLP, all Annotators are either Estimators or Transformers as we see in Spark ML. An Estimator in Spark ML is an algorithm which can be fit on a DataFrame to produce a Transformer. E.g., a learning algorithm is an Estimator which trains on a DataFrame and produces a model. A Transformer is an algorithm which can transform one DataFrame into another DataFrame. E.g., an ML model is a Transformer that transforms a DataFrame with features into a DataFrame with predictions. In Spark NLP, there are two types of annotators: AnnotatorApproach and AnnotatorModel AnnotatorApproach extends Estimators from Spark ML, which are meant to be trained through fit(), and AnnotatorModel extends Transformers which are meant to transform data frames through transform(). Some of Spark NLP annotators have a Model suffix and some do not. The model suffix is explicitly stated when the annotator is the result of a training process. Some annotators, such as Tokenizer are transformers but do not contain the suffix Model since they are not trained, annotators. Model annotators have a pre-trained() on its static object, to retrieve the public pre-trained version of a model. Long story short, if it trains on a DataFrame and produces a model, it’s an AnnotatorApproach; and if it transforms one DataFrame into another DataFrame through some models, it’s an AnnotatorModel (e.g. WordEmbeddingsModel) and it doesn’t take Model suffix if it doesn’t rely on a pre-trained annotator while transforming a DataFrame (e.g. Tokenizer). ``` !wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/annotation/english/spark-nlp-basics/sample-sentences-en.txt with open('./sample-sentences-en.txt') as f: print (f.read()) spark_df = spark.read.text('./sample-sentences-en.txt').toDF('text') spark_df.show(truncate=False) ``` ## Spark NLP Annotators ### Document Assembler To get through the process in Spark NLP, we need to get raw data transformed into Document type at first. DocumentAssembler() is a special transformer that does this for us; it creates the first annotation of type Document which may be used by annotators down the road. 
``` documentAssembler = DocumentAssembler()\ .setInputCol("text")\ .setOutputCol("document")\ .setCleanupMode("shrink") doc_df = documentAssembler.transform(spark_df) doc_df.show(truncate=30) doc_df.printSchema() doc_df.select('document.result','document.begin','document.end').show(truncate=False) doc_df.select("document.result").take(1) ``` ### Sentence Detector Finds sentence bounds in raw text. `setCustomBounds(string)`: Custom sentence separator text e.g. `["\n"]` ``` from sparknlp.annotator import * # we feed the document column coming from Document Assembler sentenceDetector = SentenceDetector()\ .setInputCols(['document'])\ .setOutputCol('sentences') sent_df = sentenceDetector.transform(doc_df) sent_df.show(truncate=False) sent_df.select('sentences.result').take(5) ``` ### Tokenizer Identifies tokens with tokenization open standards. It is an **Annotator Approach, so it requires .fit()**. A few rules will help customizing it if defaults do not fit user needs. setExceptions(StringArray): List of tokens to not alter at all. Allows composite tokens like two worded tokens that the user may not want to split. ``` tokenizer = Tokenizer() \ .setInputCols(["document"]) \ .setOutputCol("token") text = 'Peter Parker (Spiderman) is a nice guy and lives in New York but has no e-mail!' spark_df = spark.createDataFrame([[text]]).toDF("text") doc_df = documentAssembler.transform(spark_df) token_df = tokenizer.fit(doc_df).transform(doc_df) token_df.show(truncate=100) token_df.select('token.result').take(1) ``` ### Perceptron Model POS - Part of speech tags Averaged Perceptron model to tag words part-of-speech. Sets a POS tag to each word within a sentence. This is the instantiated model of the PerceptronApproach. For training your own model, please see the documentation of that class. ``` pos = PerceptronModel.pretrained()\ .setInputCols(['document', 'token'])\ .setOutputCol('pos') ``` ## Custom Annotator ### SentenceChecking ``` class SentenceChecking( Transformer, HasInputCol, HasOutputCol, DefaultParamsReadable, DefaultParamsWritable): output_annotation_type = "document" def __init__(self,f,output_annotation_type="document"): super(SentenceChecking, self).__init__() self.f = f def setInputCol(self, value): """ Sets the value of :py:attr:`inputCol`. """ return self._set(inputCol=value) def setOutputCol(self, value): """ Sets the value of :py:attr:`outputCol`. 
""" return self._set(outputCol=value) def _transform(self, dataset): t = Annotation.arrayType() out_col = self.getOutputCol() in_col = dataset[self.getInputCol()] return dataset.withColumn(out_col, map_annotations(self.f, t)(in_col).alias(out_col, metadata={ 'annotatorType': self.output_annotation_type})) def checking_sentences(annotations): anns = [] for a in annotations: result = a.result + " - CHECKED SENTENCE" anns.append(sparknlp.annotation.Annotation(a.annotator_type, a.begin, a.begin + len(result), result, a.metadata, a.embeddings)) return anns ``` ## Creating Pipeline with Custom Annotator ``` document_assembler = DocumentAssembler()\ .setInputCol("text")\ .setOutputCol("document") sentence_detector = SentenceDetector()\ .setInputCols(['document'])\ .setOutputCol('sentences') sentence_checker = SentenceChecking(f=checking_sentences, output_annotation_type="document")\ .setInputCol("sentences")\ .setOutputCol("checked_sentences") tokenizer = Tokenizer()\ .setInputCols(["checked_sentences"])\ .setOutputCol("tokens") pipeline = Pipeline(stages=[document_assembler, sentence_detector, sentence_checker, tokenizer ]) test_string = "This is a sample text with multiple sentences. It aims to show our custom annotator problem." test_data = spark.createDataFrame([[test_string]]).toDF("text") %%time fitted_pipeline = pipeline.fit(test_data) spark_results = fitted_pipeline.transform(test_data) %%time spark_results.show(truncate=False) %%time spark_results.select("checked_sentences").show(truncate=False) spark_results.select("tokens.result").show(truncate=False) ``` ## LightPipeline Spark NLP LightPipelines are Spark ML pipelines converted into a single machine but the multi-threaded task, becoming more than **10x times faster** for smaller amounts of data (small is relative, but 50k sentences are roughly a good maximum). To use them, we simply plug in a trained (fitted) pipeline and then annotate a plain text. We don't even need to convert the input text to DataFrame in order to feed it into a pipeline that's accepting DataFrame as an input in the first place. This feature would be quite useful when it comes to getting a prediction for a few lines of text from a trained ML model. Here is the medium post [Spark NLP 101: LightPipeline](https://medium.com/spark-nlp/spark-nlp-101-lightpipeline-a544e93f20f1) ``` document_assembler = DocumentAssembler()\ .setInputCol("text")\ .setOutputCol("document") sentence_detector = SentenceDetector()\ .setInputCols(['document'])\ .setOutputCol('sentences') tokenizer = Tokenizer()\ .setInputCols(["sentences"])\ .setOutputCol("token") pos = PerceptronModel.pretrained()\ .setInputCols(['document', 'token'])\ .setOutputCol('pos') pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, pos ]) empty_data = spark.createDataFrame([[""]]).toDF("text") model = pipeline.fit(empty_data) ``` **IMPORTANT!** In Lightpipelines, you can not use Custom annotators ``` from sparknlp.base import LightPipeline light_model = LightPipeline(model) light_result = light_model.annotate("John and Peter are brothers. 
However they don't support each other that much.") list(zip(light_result['token'], light_result['pos'])) ``` ------------- # Spark NLP Annotation UDFs ``` documentAssembler = DocumentAssembler()\ .setInputCol("text")\ .setOutputCol("document")\ tokenizer = Tokenizer() \ .setInputCols(["document"]) \ .setOutputCol("token") pos = PerceptronModel.pretrained()\ .setInputCols(['document', 'token'])\ .setOutputCol('pos') pipeline = Pipeline().setStages([ documentAssembler, tokenizer, pos]) data = spark.read.text('./sample-sentences-en.txt').toDF('text') data.show(truncate = False) model = pipeline.fit(data) result = model.transform(data) result.show(5) result.select('pos').show(1, truncate=False) @udf( StringType()) def nn_annotation(res,meta): nn = [] for i,j in zip(res,meta): if i == "NN" or i == "NNP": nn.append(j["word"]) return nn result.withColumn("nn & NNp tokens", nn_annotation(col("pos.result"), col("pos.metadata")))\ .select("nn & NNp tokens")\ .show(truncate=False) ```
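Because `nn_annotation` builds and returns a Python list, an alternative is to declare the UDF with `ArrayType(StringType())` so the resulting column keeps the tokens as a proper array rather than a single string. The following is a minimal sketch using the same `result` DataFrame; `nn_tokens` and the output column name are names introduced here for illustration.

```python
# Hypothetical variant: declare the return type as an array of strings,
# matching the Python list the function actually returns.
@udf(ArrayType(StringType()))
def nn_tokens(res, meta):
    return [j["word"] for i, j in zip(res, meta) if i in ("NN", "NNP")]

result.withColumn("nn_nnp_tokens", nn_tokens(col("pos.result"), col("pos.metadata")))\
      .select("nn_nnp_tokens")\
      .show(truncate=False)
```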
<a href="https://colab.research.google.com/github/mikislin/summer20-Intro-python/blob/master/01_Flow_control_Solutions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## **Problem 1:** List (a) Creat two lists from the following data: one with animals, and second with number of neurons in the nervous system. Keep the same order Sponge: 0 Ciona: 231 C.elegans: 302 Wasp: 7400 Snail: 11000 Zebrafish: 100000 Ant: 250000 Mouse: 71000000 Cat: 760000000 Lemur: 254710000 ``` # creat the lists animals = ['Sponge', 'Ciona', 'C.elegans', 'Wasp', 'Snail', 'Zebrafish', 'Ant', 'Mouse','Cat','Lemur'] num_neurons = [0, 231, 302, 7400, 11000, 100000, 250000, 71000000, 760000000,254710000] whos ``` **(b)** Create a program that asks the user to enter an integer variable from 0 to number of animals in the list, printout a sentence indicating animal and number of neurons ``` var=len(animals) animal_number = input(f"What animal rank from 0 to {var:d}: ") i=int(animal_number) print('Rank', i, animals[i],'has', num_neurons[i], 'neurons') ``` **(c)** add new data to the lists, keeping ascending order Jellyfish: 5600 Ferret: 404000000 Octopus: 500000000 ``` animals.insert(3, 'Jellyfish') num_neurons.insert(3, 5600) animals.extend(('Ferret', 'Octopus')) num_neurons.extend((404000000,500000000)) ``` **(d)** group animals into invertebrate and vertebrate and print out average number of neurons per group ``` invertebrates=animals[0:6]+[animals[8]]+[animals[12]] num_for_invertebrates = num_neurons[0:6]+[num_neurons[8]]+[num_neurons[12]] vertebrates=animals[9:11]+[animals[7]] num_for_vertebrates = num_neurons[9:11]+[num_neurons[7]] print('Average number of neurons in invertebrate group: %d' %int(sum(num_for_invertebrates)/len(num_for_invertebrates))) print('Average number of neurons in vertebrate group: %d' %int(sum(num_for_vertebrates)/len(num_for_vertebrates))) ``` ## **Problem 2:** if statement **(a)** Ask the user for a number. Depending on whether the number is even or odd, print out an appropriate message to the user ``` num = input("Enter a number: ") mod = int(num)% 2 if mod > 0: print("You picked an odd number.") else: print("You picked an even number.") ``` (b) Ask the user for index of three animals to compare. Print animal with higher number of neurons ``` num = input("Enter list of three numbers with no gaps: ") print('from', animals[int(num[0])],animals[int(num[1])],animals[int(num[2])]) if num_neurons[int(num[0])] >= num_neurons[int(num[1])] and \ num_neurons[int(num[0])] >= num_neurons[int(num[2])]: print(animals[int(num[0])]) elif num_neurons[int(num[1])] >= num_neurons[int(num[0])] and \ num_neurons[int(num[1])] >= num_neurons[int(num[2])]: print(animals[int(num[1])]) else: print(animals[int(num[2])]) print('has max neurons') ``` ##**Problem 3** while loop **(a)** print out two list with while loop to replicate input list of animals ordered by number of neurons ``` x= 0 while x<3: animal = animals[x] num = num_neurons[x] print(animal, num) x+=1 ``` **(b)** print out animals with number of neurons smaller than group average ``` ``` ## **Problem 4:** for loop (a) Let's assume that the neurons in every animal are "fully connected" – this means that any given neuron makes a synaptic connection with every other neuron. Furthermore, let's assume that each of these connections consists of exactly one synapse. For example, if a brain has 500 neurons, then each neuron makes 499 connections. 
For every animal from the list, calculate the total number of synapses in a brain. Store the results in a new list called synapse_counts, with the same length as your other two lists.

```
# for each animal, each neuron connects to every other one
# so naively it would be n * (n - 1)
# however, this counts each synapse twice, so let's divide by two:
synapse_counts = []
for counts in num_neurons:
    synapse_counts.append(counts * (counts - 1) / 2)
print(synapse_counts)
```

**(b)** Find out and print which animal has more synapses than another animal has neurons (see the sketch at the end of this notebook).

```
```

Create a dictionary with key = animals and value = num_neurons.

```
# build a single dictionary mapping each animal to its neuron count
neuron_dict = dict(zip(animals, num_neurons))
print(neuron_dict)
```
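The two exercise cells left blank above (Problems 3b and 4b) can be filled in along these lines. This is one possible sketch, assuming the `animals`, `num_neurons` and `synapse_counts` lists defined earlier, and reading "group average" as the overall average across all animals (adjust if the per-group averages from Problem 1(d) were intended).

```
# Possible solutions for the blank exercises above.

# Problem 3 (b): animals with fewer neurons than the overall group average
group_average = sum(num_neurons) / len(num_neurons)
x = 0
while x < len(animals):
    if num_neurons[x] < group_average:
        print(animals[x], num_neurons[x])
    x += 1

# Problem 4 (b): animals whose synapse count exceeds another animal's neuron count
for i in range(len(animals)):
    for j in range(len(animals)):
        if i != j and synapse_counts[i] > num_neurons[j]:
            print(animals[i], 'has more synapses than', animals[j], 'has neurons')
```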
href="#Expected-number-of-appointments-of-the-linear-optimizer" data-toc-modified-id="Expected-number-of-appointments-of-the-linear-optimizer-1.2.4.1"><span class="toc-item-num">1.2.4.1&nbsp;&nbsp;</span>Expected number of appointments of the linear optimizer</a></span></li><li><span><a href="#Expected-number-of-appointments-of-greedy-algorithm" data-toc-modified-id="Expected-number-of-appointments-of-greedy-algorithm-1.2.4.2"><span class="toc-item-num">1.2.4.2&nbsp;&nbsp;</span>Expected number of appointments of greedy algorithm</a></span></li></ul></li><li><span><a href="#Test-on-unseen-data-with-noise" data-toc-modified-id="Test-on-unseen-data-with-noise-1.2.5"><span class="toc-item-num">1.2.5&nbsp;&nbsp;</span>Test on unseen data with noise</a></span></li><li><span><a href="#Conclusion" data-toc-modified-id="Conclusion-1.2.6"><span class="toc-item-num">1.2.6&nbsp;&nbsp;</span>Conclusion</a></span></li></ul></li></ul></li></ul></div> # Contextual Multi Armed Bandits with Constraints Multi-armed bandits is a classic formulation of the exploration versus exploitation problem - It is a hypothetical experiment, where a bandit must choose between multiple slot machines, each with an unknown payout. More background on the problem can be found here: [1](https://en.wikipedia.org/wiki/Multi-armed_bandit), [2](https://lilianweng.github.io/lil-log/2018/01/23/the-multi-armed-bandit-problem-and-its-solutions.html) Contextual multi-armed bandits is an extension of this approach where we factor in the bandit’s environment, or context, when choosing an action. For the scope of this problem, we are going to reduce the contextual bandit problem into a supervised learning problem, where an "oracle" takes into account the context and gives us a score that indicates the probability of success on the next action. ## Matching patients to providers One of the real-world applications of contextual multi armed bandit problem is trying to send patient referrals to a network of providers. Let's consider a patient referral system that matches a new patient with the "best" available provider. The system does not have access to real-time availability of the providers, but it knows the number of new patients the provider can generally take week over week. This is the maximum number of referrals a provider can have (1). Similarly, the providers expect a minimum number of referrals every week, to keep them engaged on the system, and that sets a minimum contraint on the number of referrals (2). When a patient is considered for a provider, there is a lot of information that could be used to make a successful match. Patients have conditions that they seek treatments for, and they might have preferences on gender, race, age, specialization etc of the provider. Similary, the system has data on the provider's history in declining and scheduling referrals. This information containing patient's preferences and provider's history, constitutes the "context" for the problem (3). All three of these together make this use-case equivalent to a contextual multi armed bandit problem with minimum and maximum constraints: - A patient referral can be thought of as "playing" a machine; similarly, providers are slot machines. - The payout of a slot machine is equivalent to the inherent "scheduling rate" of a provider. Scheduling rate of a provider is the proportion of the received referrals scheduled for an appointment. 
- The inherent scheduling rate varies over time, but we will assume that it remains nearly the same if we manage to honor the maximum and minimum constraints. A provider will remain happy if they get a minimum number of referrals each week, and not more than the maximum they want. If we don't honor the constraints, the providers might become disengaged and stop accepting referrals.
- New providers are onboarded onto the system often, which makes exploration a consistent part of the system.
- Even though providers have an "inherent" scheduling rate, it is common for their rate to vary over time due to staffing changes, seasonal variation, business changes, etc.

Unlike a typical MAB problem, where the goal is to lower "regret", this system's goal is to optimize the total number of appointments scheduled by the system, across all providers. Referrals are sent out in real time, which means we cannot do a batch assignment, and that makes this problem tricky.

## Satisfying Constraints with a Perfect Oracle

For the scope of this solution, we are only going to focus on the constraint satisfaction problem. For simplicity, we assume an oracle that accurately predicts the likelihood of success if a patient is sent to a provider. This oracle, in practice, is a well-calibrated supervised learning algorithm that predicts the odds that a referral will get scheduled if we send it to a given provider.

This is clearly a formulation that departs from the standard multi-armed bandit problem, where the focus is on minimizing regret. One might argue that this is not a classic MAB problem anymore. Before implementing this solution, we used Thompson sampling to solve this problem (a classic solution to the MAB problem), but we realized that minimizing regret is not an issue at all. On the contrary, we found that greedy algorithms do just as well as Thompson sampling for this use-case, because new providers are eager to schedule referrals; but as time passes, failure to maintain constraints leads to their dissatisfaction and a drop in their "payout", aka scheduling rate. Also, Thompson sampling does not allow us to put constraints on the providers. The oracle, which is a supervised learning algorithm, uses context for new MHPs to predict their success rate quite well, and it reduced our concern for exploration and regret.

So, this problem can be simplified into the following:

- A system consists of sending m referrals to n providers
- An oracle gives a likelihood for each referral-provider pair
- Providers have an inherent/true scheduling rate
- Each provider has a maximum and minimum constraint on the number of referrals it can receive

The simplest solution could be to use a simple greedy algorithm for sending a referral to a provider. For any given referral, we would send it to the provider with the highest payout, but then there is no guarantee that it will satisfy the min and max constraints. Let's consider an example to understand the problem:

#### Example:

Let's consider a provider MHP1 which is nearing its maximum quota/cap, but is not full yet. Let's say we can only send them two more referrals this week, but they end up being the top pick for four referrals. How do we decide which two to send?
Let's say the referrals, oracle scores, and MHPs look like this:

|    | mhp1 | mhp2 | mhp3 |
|----|------|------|------|
| r1 | 0.7  | 0.65 | 0.2  |
| r2 | 0.8  | 0.7  | 0.75 |
| r3 | 0.9  | 0.2  | 0.1  |
| r4 | 0.7  | 0.24 | 0.3  |

<b>Strategy 1</b>

We suspend the MHP after the quota is met (after the first two referrals in this case). That means:

- r1 and r2 are sent to MHP1
- r3 is sent to MHP2
- r4 is sent to MHP3

What is the expected number of appointments in this case, assuming that the scores from the oracle are true probabilities?<br/>
Expected number of appointments = 0.7 + 0.8 + 0.2 + 0.3 = 2

<b>Strategy 2</b>

- r3 and r4 are sent to MHP1 instead
- r1 to MHP2
- r2 to MHP3

Expected number of appointments = 0.65 + 0.75 + 0.9 + 0.7 = 3 <br/>

So, a slightly different strategy resulted in one extra appointment in this group. This shows that a greedy approach might not be the best approach here.

In this case, we can easily see which two referrals to route to MHP1 because we can see all the referrals. Unfortunately, in production, the referrals are sent in real time, as they come, and we have no information about the future. So, when deciding for r1, we do not know about r2, r3 and r4. We still want to try and optimize the total number of appointments. But how?

### Linear Optimization without Assignment

It is clear that we have reduced this to a standard linear optimization problem: it has an objective function (expected number of appointments), a cost function (scores) and constraints to satisfy. But there is one issue: we cannot send referrals in batches. They are sent in real time, as soon as a patient is received by the system. This means that we cannot do "assignment" like a typical linear optimization problem. This notebook describes a solution that uses linear optimization for this problem, without assignment, by using score adjustments.

### Formulation

#### Classic Binary Optimization

Let's assume we could batch referrals; what would the solution look like?

Let's say there are N MHPs and R referrals, and the oracle gives a probability score for each referral-provider pair.

Volume caps: $V_{0}, V_{1}, V_{2} ... V_{N}$

Boosting minimums: $ B_{0}, B_{1} ... B_{N}$

Let's consider the score matrix

| i\j | mhp_0 | mhp_1 | mhp_N | ... |
|-----|-------|-------|-------|-----|
| r0  | 0.7   | 0.64  | 0.1   | ... |
| r1  | 0.8   | 0.58  | 0.32  | ... |
| r2  | 0.91  | 0.35  | 0.3   | ... |
| ... | ...   | ...   | ...   | ... |

For example, $s_{01} = 0.64$, $s_{22} = 0.35$

Assignment matrix:

| i\j | mhp_0 | mhp_1 | mhp_N | ... |
|-----|-------|-------|-------|-----|
| r0  | 0     | 1     | 0     | ... |
| r1  | 1     | 0     | 0     | ... |
| r2  | 0     | 0     | 0     | ... |
| ... | ...   | ...   | ...   | ... |

<b>Formulation</b>:

$$ \max_{a} \sum \sum a_{ij} s_{ij}$$

$$ \sum_i a_{ij} < V_j \qquad \forall j $$

$$ \sum_i a_{ij} > B_j \qquad \forall j $$

$$ \sum_j a_{ij} = 1 \qquad \forall i $$

So far so good, but this formulation is not very useful, because we cannot send referrals to providers in batches. In the absence of assignments, we compute score adjustments, which are described below.

#### Linear optimization w/o assignment

We force assignment to be on the maximum score per referral, which makes it a purely endogenous variable, as opposed to a decision variable. <br/>
We introduce a new set of decision variables $P$ (for score adjustments) such that $P_j$ is the score adjustment (boost + or penalty -) for each MHP $M_j$

| j | ---> |      |     |     |
|---|------|------|-----|-----|
| p | 0.7  | 0.64 | 0.1 | ... |
<b>Formulation</b>:

$$ \max_{a} \sum \sum a_{ij} s_{ij}$$

$$ \sum_i a_{ij} < V_j \qquad \forall j $$

$$ \sum_j a_{ij} = 1 \qquad \forall i $$

$$ s_{ij} + p_j \leq \sum_k a_{ik} (s_{ik} + p_k) \qquad \forall i \forall j $$

There is a problem with the last equation. <br/>
It is not linear!

#### Convert quadratic to linear using a trick (McCormick envelopes)

Let $ z = xy $ where x is binary and y is continuous, with $ L \leq y \leq U $ where L and U are bounds on y.

Then the following constraints will make it linear:

$$ z \leq Ux$$

$$ z \geq Lx$$

$$ z \leq y - L(1-x) $$

$$ z \geq y - U(1-x) $$

#### Final Formulation

$$ \max_{a} \sum \sum a_{ij} s_{ij} - \sum_j \lvert p_{j} \rvert $$

$$ \sum_i a_{ij} < V_j \qquad \forall j $$

$$ \sum_j a_{ij} = 1 \qquad \forall i $$

$$ s_{ij} + p_j \leq \sum_k a_{ik} s_{ik} + \sum_k z_{ik} \qquad \forall i \forall j$$

$$ z_{ik} = a_{ik} p_k $$

$$ -a_{ik} \leq z_{ik} \leq a_{ik} $$

$$ p_k - (1 - a_{ik}) \leq z_{ik} \leq p_k + (1 - a_{ik}) $$

That's it! <br/>
Time to code ...

```
import os
from scipy.stats import pearsonr as pearson_corr
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import plotly.offline as py
import plotly.figure_factory as ff
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
%matplotlib inline
import pulp
from time import time
```

### Simulation

#### Generate dummy data

```
N_REFERRALS = 1500
N_MHPS = 10
MIN_, MAX_ = 0, None

# base scheduling rates
mhp_names = ["mhp_%d" % i for i in range(N_MHPS)]
mhp_mu = np.random.uniform(0.1, 0.7, N_MHPS)
mhp_sigma = np.random.uniform(0.1, 0.4, N_MHPS)  # standard deviation for probability scores
mhp_supply = np.random.uniform(0.2, 0.7, N_MHPS)  # fraction of NaNs (ineligible referrals) per provider

def generate_referral_data(mus, sigmas):
    df = pd.DataFrame({
        "mhp_%d" % i: np.random.normal(mus[i], sigmas[i], N_REFERRALS)
        for i in range(N_MHPS)
    })
    df = df[df >= 0].fillna(0)  # Replace negative values with zeros
    df[df > 1] = 1.0            # Replace values greater than 1.0 with 1.0
    # Introduce nulls because not all providers are always eligible
    for i, col in enumerate(df.columns):
        frac = mhp_supply[i]
        df.loc[df.sample(frac=frac).index, col] = np.nan
    return df

def generate_maximums():
    mean_cap = int(N_REFERRALS/(N_MHPS)*0.6)
    caps = np.random.normal(mean_cap, int(mean_cap/2), N_MHPS)
    caps = [int(max(10, x)) for x in caps]
    caps[0] = MAX_  # first two providers are not capped
    caps[1] = MAX_
    return caps

def generate_minimums(maximums):
    # randomly pick providers to be boosted with about 40 % probability
    # ** TURNING OFF MINIMUMS JUST TO COMPARE MAXIMUMS TO A SIMPLE GREEDY ALGORITHM
    boosted = (np.random.uniform(0, 1, N_MHPS) > 1.0).astype(int)
    mins = [int(maximums[i] / 2.0) if (boosted[i] and maximums[i] is not None) else 0
            for i in range(N_MHPS)]
    mins[0] = None  # first two providers are hardcoded for control
    mins[1] = 0
    return mins

df = generate_referral_data(mhp_mu, mhp_sigma)
maximums = generate_maximums()
minimums = generate_minimums(maximums)
df.head()
```

#### Dummy Data with maximums and minimums

In this section, we demonstrate the total referrals that an MHP would receive with a greedy strategy.
- "current_referrals" is the count the MHP would get with greedy - "minimum" and "maximums" are the constraints set on them ``` df_vol = pd.DataFrame({ "mean_score": df.mean().round(2), "std_dev_score": df.std().round(2), "supply_shortage": mhp_supply.round(2), "current_referrals": df.idxmax(axis=1).value_counts(), "minimums": minimums, "maximums": maximums, }, index=mhp_names) # .fillna(0) df_vol['current_referrals'] = df_vol.current_referrals.fillna(0).astype(int) df_vol ``` #### Optimization function We use Pulp to code the optimizer forumalated above. ``` def run_linear_optimization(df, maximums, minimums): N_REFERRALS, N_MHPS = df.shape st = time() scores = df.values prob = pulp.LpProblem("Network Optimization", pulp.LpMaximize) adjustment = pulp.LpVariable.dicts("adjustment", range(N_MHPS), lowBound=-1, upBound=1, cat='Continuous') adj_abs = pulp.LpVariable.dicts("adjustment_absolute", range(N_MHPS), lowBound=0, upBound=1, cat='Continuous') z_dummy = pulp.LpVariable.dicts("z_dummy", [(i, j) for i in range(N_REFERRALS) for j in range(N_MHPS)], -1, 1, cat='Continuous') assignment = pulp.LpVariable.dicts("assignment", [(i,j) for i in range(N_REFERRALS) for j in range(N_MHPS)], 0, 1, cat='Integer') # The objective function is added to 'prob' first prob += pulp.lpSum([assignment[i, j]*scores[i, j] for i in range(N_REFERRALS) for j in range(N_MHPS) if pd.notnull(scores[i, j])]) , "expected_appts" # Constraints # 1. Only one MHP assignment per referral for i in range(N_REFERRALS): prob += pulp.lpSum([assignment[i, j] for j in range(N_MHPS) if pd.notnull(scores[i, j])]) == 1.0, "assignment_sum_%d" % i # 2. Volume caps for j in range(N_MHPS): if pd.notnull(maximums[j]): prob += pulp.lpSum([assignment[i, j] for i in range(N_REFERRALS) if pd.notnull(scores[i, j])]) <= maximums[j] # 3. Boosted mins for j in range(N_MHPS): n_referrals_for_mhp = (scores[:, j] >= 0).sum() if pd.notnull(minimums[j]) and (n_referrals_for_mhp > 0): # find the minimum referrals they are eligible for lower_bound = min(minimums[j], n_referrals_for_mhp) if pd.notnull(maximums[j]): # lower_bound should not be too close to upper bound lower_bound = min(lower_bound, int(maximums[j]*0.7)) prob += pulp.lpSum([assignment[i, j] for i in range(N_REFERRALS) if pd.notnull(scores[i, j])]) >= lower_bound for j in range(N_MHPS): # adjustment[j].setInitialValue(0) if pd.isnull(minimums[j]): adjustment[j].upBound = 0 if pd.isnull(maximums[j]): adjustment[j].lowBound = 0 # no constraints for MHPs without limits if pd.isnull(minimums[j]) and pd.isnull(maximums[j]): adj_abs[j].upBound = 0 adj_abs[j].lowBound = 0 for i in range(N_REFERRALS): for j in range(N_MHPS): if pd.isnull(scores[i, j]): assignment[i, j].lowBound = 0 assignment[i, j].upBound = 0 z_dummy[i, j].lowBound = 0 z_dummy[i, j].upBound = 0 if pd.isnull(minimums[j]) and pd.isnull(maximums[j]): # no constraints for MHPs without limits z_dummy[i, j].lowBound = 0 z_dummy[i, j].upBound = 0 # 4. 
    for i in range(N_REFERRALS):
        # Trick to turn quadratic (assignment * adjustment) into Linear
        for k in range(N_MHPS):
            U = 1   # 0 if minimums[k] is None else 1
            L = -1  # 0 if maximums[k] is None else -1
            if pd.isnull(scores[i, k]):
                continue
            if pd.isnull(minimums[k]) and pd.isnull(maximums[k]):
                # no constraints for MHPs without limits
                continue
            prob += z_dummy[i, k] <= U * assignment[i, k]
            prob += z_dummy[i, k] >= L * assignment[i, k]
            prob += z_dummy[i, k] <= adjustment[k] - L * (1 - assignment[i, k])
            prob += z_dummy[i, k] >= adjustment[k] - U * (1 - assignment[i, k])
        for j in range(N_MHPS):
            # <= (pulp.lpSum([assignment[i, k] * (scores[i, k] + adjustment[j]) for k in range(N_MHPS)])
            # Ideally you want this ^ , but it is quadratic, so introducing a dummy variable z_dummy
            if pd.isnull(scores[i, j]):
                continue
            prob += (scores[i, j] + adjustment[j]) <= (pulp.lpSum([assignment[i, k] * scores[i, k]
                                                                   for k in range(N_MHPS)
                                                                   if pd.notnull(scores[i, k])])
                                                       + pulp.lpSum([z_dummy[i, k] for k in range(N_MHPS)]))

    for j in range(N_MHPS):
        prob += adjustment[j] <= adj_abs[j]
        prob += -adjustment[j] <= adj_abs[j]

    print("Optimization Initialized in %.2f seconds" % (time() - st))
    print("Running solver ...")
    pulp.list_solvers(onlyAvailable=True)
    prob.solve(pulp.PULP_CBC_CMD(gapRel=0.1, timeLimit=600))
    # prob.solve(pulp.PULP_CBC_CMD(timeLimit=10, msg=1, gapRel=0.1, logPath="/tmp/pulp.log"))
    # prob.solve(pulp.CPLEX_PY())
    print("Solver status:", pulp.LpStatus[prob.status])
    print("Solver status:", (pulp.LpStatus[prob.status] == 'Optimal'))
    print("Total time: %.2f seconds" % (time() - st))
    print("Value of objective: %.2f" % pulp.value(prob.objective))

    columns = df.columns.to_list()
    adjustment = pd.Series({columns[j]: adjustment[j].varValue for j in range(N_MHPS)})
    return adjustment, assignment
```

#### Run Simulation

Let's send the referral data through the optimizer and look at the results.

```
# minimums = [None] * len(maximums)
adjustment, assignment = run_linear_optimization(df, maximums, minimums)

assign = ["mhp_%d" % j for i in range(N_REFERRALS) for j in range(N_MHPS)
          if assignment[i, j].varValue == 1]
df_vol["new_referrals_using_assignment"] = pd.value_counts(assign)
# df_vol['adjustment'] = pd.Series({"mhp_%d" % j: adjustment[j].varValue for j in range(N_MHPS)})
df_vol['adjustment'] = adjustment

df_new = df.copy(deep=True)
for i in range(N_MHPS):
    cc = "mhp_%d" % i
    df_new[cc] = (df_new[cc] + df_vol['adjustment'].iloc[i])

# Note - the new counts of referrals using adjustment should be exactly the same
# as the counts with assignment (decision variable), because assignment of a referral
# happens to the MHP with maximum `score + adjustment` for the given referral
# df_vol['new_referrals_using_adjustment'] = df_new.idxmax(axis=1).value_counts()
# df_vol[['minimums', 'maximums', 'current_referrals', 'new_referrals_using_assignment', 'adjustment']]
df_vol
```

*The optimizer works!*

As we can see here, the best MHPs (high scheduling rate) got a lot of referrals, and were maxed out when possible. On the other hand, the worst MHPs did not get many referrals.
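As a quick sanity check, a small sketch over the `df_vol` columns built above can confirm that the optimized counts respect the caps (keep in mind the solver may use a reduced lower bound for providers with few eligible referrals, as in constraint 3):

```
# Hypothetical sanity check on the optimizer output (column names as built above)
counts = df_vol['new_referrals_using_assignment'].fillna(0)
caps_ok = counts.le(df_vol['maximums'].fillna(np.inf))
mins_ok = counts.ge(df_vol['minimums'].fillna(0))
print("All maximum caps respected: ", caps_ok.all())
print("All minimum quotas respected:", mins_ok.all())
```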
### Lift over greedy algorithm

#### Expected number of appointments of the linear optimizer

```
def get_expected_appts_linop(df):
    df_new = df.copy(deep=True)
    for i in range(N_MHPS):
        cc = "mhp_%d" % i
        df_new[cc] = (df_new[cc] + df_vol['adjustment'].iloc[i])

    expected_appts = 0
    for i in range(N_REFERRALS):
        row = df_new.iloc[i]
        best_mhp = row.idxmax()
        best_score = df.loc[i, best_mhp]  # Note - this is the original df
        expected_appts += best_score

    print("Expected number of appointments Linop: %d" % expected_appts)
    mhp_counts = df_new.idxmax(axis=1).value_counts()
    return mhp_counts

_ = get_expected_appts_linop(df)
```

#### Expected number of appointments of greedy algorithm

```
def get_expected_appts_greedy(df):
    df_new = df.copy(deep=True)
    referral_counts = {"mhp_%d" % i: 0 for i in range(N_MHPS)}
    expected_appts = 0
    greedy_assignments = []
    overdraft = 0
    for i in range(N_REFERRALS):
        row = df_new.iloc[i]
        best_mhp = row.idxmax()
        if pd.isnull(best_mhp):
            # No MHPs available
            best_mhp = df.iloc[i].idxmax()    # original df
            best_score = df.loc[i, best_mhp]  # original df
        else:
            best_score = df_new.loc[i, best_mhp]
            referral_counts[best_mhp] += 1    # maximum cannot be zero
        expected_appts += best_score
        greedy_assignments.append(best_mhp)
        max_for_best_mhp = df_vol.maximums.loc[best_mhp]
        if (best_mhp in df_new.columns.tolist()) and \
           referral_counts[best_mhp] >= max_for_best_mhp:
            del df_new[best_mhp]  # MHP maxed out, remove it

    print("Expected number of appointments Greedy: %d" % expected_appts)
    print("Expected number of maximum possible appointments: %d" % df.max(axis=1).sum())
    mhp_counts = pd.Series(greedy_assignments).value_counts()
    return mhp_counts

_ = get_expected_appts_greedy(df)
```

### Test on unseen data with noise

```
noise_mu = 0
noise_sigma1 = 0.001
noise_sigma2 = 0.001
mhp_mu_new = [x + np.random.normal(noise_mu, noise_sigma1) for x in mhp_mu]
mhp_sigma_new = [x + np.random.normal(noise_mu, noise_sigma2) for x in mhp_sigma]

df_test = generate_referral_data(mhp_mu_new, mhp_sigma_new)
linop_counts = get_expected_appts_linop(df_test)
greedy_counts = get_expected_appts_greedy(df_test)

df_test_vol = pd.DataFrame({
    "mean_score": df.mean().round(2),
    "std_dev_score": df.std().round(2),
    "current_referrals": df.idxmax(axis=1).value_counts(),
    "minimums": minimums,
    "maximums": maximums,
}, index=mhp_names)  # .fillna(0)
df_test_vol['current_referrals'] = df_vol.current_referrals.fillna(0).astype(int)
df_test_vol['linop_counts'] = linop_counts
df_test_vol['greedy_counts'] = greedy_counts

x = (df_test_vol['linop_counts'] - df_test_vol.maximums)
print("Overdraft linop: %.2f" % x[x>0].sum())
x = (df_test_vol['greedy_counts'] - df_test_vol.maximums)
print("Overdraft greedy: %.2f" % x[x>0].sum())
```

### Conclusion

**As you can see, the linear optimization solution not only has a higher number of appointments than greedy, but also a lower overdraft**. We are counting the score of overdrafted referrals towards the expected number of appointments, which is not true in practice. The greedy algorithm has 707 referrals with 192 overdrafted. Since overdrafted referrals have a high chance of rejection, the actual number of appointments is much lower. If we assume the chance of success for an overdrafted referral to be zero, the total appointments for greedy would be about 600, compared to ~700 for linop, giving over 15% improvement.

**This demo was limited to maximum constraints, where we could show an easy comparison to a greedy algorithm.
When we turn on minimum constraints, the linear optimizer gives even better results than greedy.** Note: we have run many trials of this experiment, and linop is consistently better.
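As an appendix-style aside, the quadratic-to-linear trick from the Formulation section can be checked in isolation. The sketch below uses illustrative variable names and pins x and y to arbitrary values to show that the four constraints force z = x * y for binary x and bounded continuous y.

```
import pulp

# Minimal check of the binary-times-continuous linearization used above.
L, U = -1, 1                                   # bounds on the continuous variable
m = pulp.LpProblem("linearization_demo", pulp.LpMaximize)
x = pulp.LpVariable("x", cat="Binary")
y = pulp.LpVariable("y", lowBound=L, upBound=U, cat="Continuous")
z = pulp.LpVariable("z", lowBound=L, upBound=U, cat="Continuous")

m += z                                         # any objective works; z is fully pinned below

# the four linearization constraints from the formulation
m += z <= U * x
m += z >= L * x
m += z <= y - L * (1 - x)
m += z >= y - U * (1 - x)

# pin x and y to test values; z should come out as x * y
m += x == 1
m += y == 0.4
m.solve(pulp.PULP_CBC_CMD(msg=0))
print(x.varValue, y.varValue, z.varValue)      # expect 1.0, 0.4, 0.4
```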
# [Hackerearth Predict the Emotion Challenge from audio files](https://www.hackerearth.com/challenges/competitive/ia-for-ai/)

![Problem Statement](https://i.ibb.co/Yf1948g/hackimg.png)

## My Bronze Medal 3rd Place Solution Write-Up:

* **Leaderboard Rank**: https://www.hackerearth.com/challenges/competitive/ia-for-ai/leaderboard/

![leaderboard](https://i.ibb.co/wwWnzvM/hackerearth-emotio-detection-leaderboard.png)

* I was initially occupied with my exams and didn't have time. I happened to open HackerEarth and found the competition had been going on for about a month.
* I joined the competition only 3 days before it closed.
* Being time constrained, I didn't have time to experiment a lot. First I thought of using torchaudio models and ran a first draft through them.
* Although a torchaudio model could have done better given more time, to experiment quickly I decided to use CNNs on spectrograms.
* During audio exploration I found the files contained both mp3 and wav. Since I hadn't attended the AMA at the beginning, I didn't find proper metadata regarding the audio files, so I assumed they were sampled at the traditional 44100 Hz.
* I used a 5-fold Stratified KFold.
* The audio lengths varied from 0.5 s to more than 22.0 s. Based on the distribution of lengths, I capped clips at 5 seconds, giving spectrograms of size (128, 455). Files shorter than 5.0 s were padded out to 5 s with zeros.
* Converted the train and test files into mel spectrograms using the librosa audio library and saved them to the drive.
* Chose fastai instead of plain core PyTorch, and changed fastai's code where needed.
* Experimented with 6 model architectures:
  * resnest50
  * efficientnet b0-b4
* Their individual best scores varied from 57 to 59.
* I tried making an ensemble using different weights over 5 models, but it didn't give better results.
* So, for the final submission and best score of **61.0835**, I simply took a mode of my best submissions from the above 6 models.

```
!pip install -q gdown
!pip install -q torchaudio
import albumentations
import pandas as pd
import gdown
import plotly.express as px
import seaborn as sns
import torch,os
from tqdm.notebook import tqdm
import torch.nn as nn
from torch.nn import functional as F
import torchaudio
import librosa
import numpy as np
import IPython
import matplotlib.pyplot as plt
import audioread
import cv2
from sklearn.model_selection import train_test_split
import torch.nn as nn
import random,os,gc,sys
import warnings
warnings.filterwarnings('ignore')
```

### Downloading our data and already preprocessed audio files.
```
url = 'https://drive.google.com/uc?export=download&id=131jmLqPqORC6hdRLaFGb_HEYCtBdPURS'
output = 'dbb3bd26ead211eb.zip'
gdown.download(url, output, quiet=False)

url = 'https://drive.google.com/uc?export=download&id=1zbl2e5pqiN7ghLlcVXL2r5nZP_IIWIGk'
output = 'trainMelspec.zip'
gdown.download(url, output, quiet=False)

url = 'https://drive.google.com/uc?export=download&id=1lpq68t-VvnS1k3uNXA5IlbkzhjbL4b4-'
output = 'testMelspec.zip'
gdown.download(url, output, quiet=False)

!unzip -qq /content/dbb3bd26ead211eb.zip

df = pd.read_csv('/content/dataset/train.csv')
df['file_path'] = df['filename'].apply(lambda x:f'/content/dataset/TrainAudioFiles/{x}')
df.info()
sns.countplot(x=df.emotion);

emotion_map = {'neutral':0, 'joy':1, 'disgust':2, 'surprise':3, 'sadness':4, 'fear':5,'anger':6}
inv_emo = {v: k for k, v in emotion_map.items()};
df['target'] = df['emotion'].map(emotion_map)
df.sample(n=10)

tsdf = pd.read_csv('/content/dataset/test.csv')
tsdf['file_path'] = tsdf['filename'].apply(lambda x:f'/content/dataset/TestAudioFiles/{x}')
tsdf.head(3)

!unzip -qq '/content/trainMelspec.zip' -d '/content/TrainMelSpec'
!unzip -qq '/content/testMelspec.zip' -d '/content/TestMelSpec'

!pip uninstall fastai -y
!pip install -Uqq fastai
from fastai.vision.all import *

def get_audio(x):
    try:
        wave = torchaudio.load(x)[0][0]
    except:
        wave,sr = librosa.load(x,sr=44100)
        wave = torch.from_numpy(wave)
    temp_wv = torch.zeros([550634])
    if wave.numel() < 550634:
        temp_wv[:wave.numel()] = wave[:]
    else:
        temp_wv[:] = wave[:550634]
    return temp_wv[::2].unsqueeze(0)

wave = get_audio('/content/dataset/TrainAudioFiles/38543.wav')

def label_func(fname):
    return int(df[df['filename']==fname.name]['target'].values[0])
```

### StratifiedKFOLD

```
from sklearn.model_selection import StratifiedKFold

df_comb = df.sample(frac=1.,random_state = 2020)
df_comb['kfold'] = -1
y = df_comb['target'].values
kf = StratifiedKFold(n_splits=5,random_state = 2020,shuffle = True)
for fold ,(trn_,val_ )in enumerate(kf.split(X=df_comb,y=y)):
    df_comb.loc[val_,'kfold'] = fold

df_comb['id'] = df_comb['filename'].apply(lambda x:str(x[:-4])+'.png')
trn_df = df_comb
path = '/content'
FOLD = 1
```

### Dataloader

```
def get_data(FOLD,bs=32,sz=128):
    item_tfms = RandomCrop(sz)
    batch_tfms = [Normalize.from_stats(*imagenet_stats)]
    trn_idx,val_idx = trn_df[trn_df.kfold!=FOLD].index, trn_df[trn_df.kfold==FOLD].index
    dls = ImageDataLoaders.from_df(trn_df, path, fn_col = 'id', folder='TrainMelSpec', label_col='target',
                                   bs=bs, y_block=CategoryBlock, item_tfms=None, batch_tfms=batch_tfms, val_idxs=val_idx)
    return dls

dls = get_data(0)
dls.show_batch(figsize=(20,8))

test_data_path = tsdf['filename'].apply(lambda x:f'/content/TestMelSpec/{x[:-4]}.png')

!pip install -q timm
import timm
```

### Finding Optimal Weights for CrossEntropyLoss

```
trn_df[trn_df.kfold==0].shape[0] / (7*np.bincount(trn_df[trn_df.kfold==0]['target'].values))
7*np.bincount(trn_df[trn_df.kfold==0]['target'].values)/trn_df[trn_df.kfold==0].shape[0]
```

### Training models for ensemble

```
models_list = ['efficientnet_b0','efficientnet_b1','efficientnet_b2','efficientnet_b3','efficientnet_b4','resnest50d']
test_sc = []
for mdl in models_list:
    print(f'StratifiedKFOLD_{mdl}')
    for F in tqdm(range(5)):
        dls = get_data(F,bs=64)
        wt = torch.FloatTensor([3.26546392, 1.13659794, 0.34278351, 0.65549828, 0.39089347, 0.47508591, 0.73367698])
        mymodel = timm.create_model(mdl, pretrained=True, num_classes=dls.c)
        learn = Learner(dls, mymodel, loss_func = LabelSmoothingCrossEntropyFlat(),
                        # cbs=[CutMix()],
                        metrics=[accuracy, F1Score(average='macro')]).to_fp16()

        # Callbacks
        cb1 = SaveModelCallback(monitor='accuracy', fname=f'best_{mdl}_fold_{F}')
        cb2 = ReduceLROnPlateau(monitor='accuracy', min_delta=0.01, patience=2, factor=0.2)

        learn.fit_one_cycle(20, 1e-3/2, cbs = [cb1,cb2])
        learn.load(f'best_{mdl}_fold_{F}');
        # learn = learn.to_fp32()

        cb3 = SaveModelCallback(monitor='accuracy', fname=f'best_{mdl}_foldB_{F}')
        learn.fit_one_cycle(10, 1e-5/2, cbs = [cb3])
        learn.load(f'best_{mdl}_foldB_{F}');

        tst_dl = learn.dls.test_dl(test_data_path)
        predictions = learn.get_preds(dl = tst_dl)
        test_sc.append(predictions[0].numpy())

        learn, dls, mymodel = None, None, None
        gc.collect()

test_sc = np.array(test_sc)
test_sc_avg = test_sc.mean(axis=0)
preds = test_sc_avg.argmax(axis=1); preds.shape
test_sc_avg.shape

sub = tsdf[['filename']].copy()
sub['emotion'] = preds
sub['emotion'] = sub['emotion'].map(inv_emo)
sub.to_csv('submission_emotion_detection.csv', index=False)
sub
```
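The final ensemble described in the write-up takes a mode over the six models' best submissions, while the code above averages fold probabilities. A possible sketch of that mode-based ensemble, with purely illustrative CSV file names, could look like this:

```
# Sketch of the "mode of best submissions" ensemble from the write-up.
# The CSV names below are illustrative; each file is assumed to have columns ['filename', 'emotion'].
submission_files = [
    'submission_efficientnet_b0.csv', 'submission_efficientnet_b1.csv',
    'submission_efficientnet_b2.csv', 'submission_efficientnet_b3.csv',
    'submission_efficientnet_b4.csv', 'submission_resnest50d.csv',
]
preds = pd.concat([pd.read_csv(f)['emotion'] for f in submission_files], axis=1)
majority = preds.mode(axis=1)[0]   # row-wise mode; ties fall back to the first modal value
ensemble = pd.read_csv(submission_files[0])[['filename']].copy()
ensemble['emotion'] = majority
ensemble.to_csv('submission_mode_ensemble.csv', index=False)
```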
# RQ1: How many higher education institutions are found in counties with majority underrepresented groups?

Answer: 659

## What are the characteristics of those institutions?

- More private-for-profit
- More 2-year (vs 4-year or above)

```
import pandas as pd
import geopandas as gpd
import geoplot as gplt
import geoplot.crs as gcrs
import matplotlib.pyplot as plt

from tools import tree
from pathlib import Path
from datetime import datetime as dt

today = dt.today().strftime("%d-%b-%y")
today

RAW_DATA = Path("../data/raw/")
INTERIM_DATA = Path("../data/interim/")
PROCESSED_DATA = Path("../data/processed/")
FINAL_DATA = Path("../data/final/")
EXTERNAL_DATA = Path("../data/external/")

tree(PROCESSED_DATA)

counties_data = pd.read_csv(PROCESSED_DATA / 'counties.csv')
institutions_data = pd.read_csv(PROCESSED_DATA / 'institutions_data.csv')
counties_shapes = gpd.read_file(PROCESSED_DATA / 'geodata' / 'tl_2019_us_county.shp')

counties_data.head().T
institutions_data.head().T
counties_shapes.head().T
counties_shapes.sample(5)
counties_shapes.plot();

import us

contiguous_fips = [state.fips for state in us.STATES_CONTIGUOUS]
mask_contiguous_fips = counties_shapes['STATEFP'].isin(contiguous_fips)
mask_contiguous_fips
counties_shapes = counties_shapes[mask_contiguous_fips]

name_to_fips_map = us.states.mapping("name", "fips")
institutions_data['fips_state_code'] = institutions_data['fips_state_code'].map(name_to_fips_map)
mask_contiguous_fips_institutions = institutions_data['fips_state_code'].isin(contiguous_fips)
institutions_data = institutions_data[mask_contiguous_fips_institutions]

counties_data.head()
counties_data['share_underrepresented'] = (
    counties_data['black_alone'] +
    counties_data['latino_alone'] +
    counties_data['american_indian_and_alaska_native'] +
    counties_data['native_hawaiian_and_pacific_islander']
) / counties_data['universe']
counties_data.head().T

subset_counties_data = counties_data[['geoid', 'name', 'share_underrepresented']].copy()
subset_counties_shapes = counties_shapes[['GEOID', 'NAME', 'geometry']].copy()
subset_counties_data['geoid'] = subset_counties_data['geoid'].astype(str).str.zfill(5)
subset_counties_data = subset_counties_data.set_index('geoid')
subset_counties_shapes = subset_counties_shapes.set_index('GEOID')
working_gdf = subset_counties_shapes.join(subset_counties_data)

working_gdf.plot(column = 'share_underrepresented');
gplt.choropleth(working_gdf, projection=gcrs.WebMercator(), hue = 'share_underrepresented');

geo_institutions = gpd.GeoDataFrame(institutions_data,
                                    geometry = gpd.points_from_xy(institutions_data['longitude'],
                                                                  institutions_data['latitude']))
working_gdf.crs
geo_institutions.crs = working_gdf.crs
geo_institutions.crs

gplt.pointplot(geo_institutions)

ax = gplt.choropleth(working_gdf, projection=gcrs.WebMercator(), hue = 'share_underrepresented', figsize = (12,12))
gplt.pointplot(geo_institutions, ax = ax, zorder = 3, alpha = 0.3, color = "red", s = 2)

mask_majority_underrepresented = working_gdf['share_underrepresented'] > 0.50
working_gdf[mask_majority_underrepresented].plot();
working_gdf[~mask_majority_underrepresented].plot();
majority_underrepresented = working_gdf[mask_majority_underrepresented].copy()
majority_underrepresented.head()

institutions_in_majority_underrepresented = gpd.sjoin(geo_institutions, majority_underrepresented,
                                                      how="inner", op="intersects")

ax = gplt.choropleth(working_gdf, projection=gcrs.WebMercator(), hue = 'share_underrepresented', figsize = (12,12))
gplt.pointplot(institutions_in_majority_underrepresented, ax = ax,
               zorder = 3, alpha = 0.3, color = "red", s = 2)

majority_underrepresented.shape
counties_shapes.shape
geo_institutions.shape
institutions_in_majority_underrepresented.shape
institutions_in_majority_underrepresented.head().T

institutions_in_majority_underrepresented['control'].value_counts(normalize = True)
geo_institutions['control'].value_counts(normalize = True)
```

In counties where there's a majority of underrepresented groups, 29% of higher ed institutions are private for-profits, whereas across the United States that number drops to 19%.

```
institutions_in_majority_underrepresented['level'].value_counts(normalize = True)
geo_institutions['level'].value_counts(normalize = True)
```

# Checkpoint

```
tree(PROCESSED_DATA)
PROCESSED_DATA.joinpath("processed_geodata").mkdir()
PROCESSED_DATA.joinpath("processed_institutions").mkdir()
tree(PROCESSED_DATA)

working_gdf.to_file(PROCESSED_DATA / 'processed_geodata' / 'contiguous_us.shp')
geo_institutions.to_file(PROCESSED_DATA / 'processed_institutions' / 'geo_institutions.shp')
```
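A compact summary sketch, using only the frames built above, can pull the headline count and the control-type comparison into one table:

```
# Sketch: headline count plus a side-by-side comparison of institution control types
print("Institutions in majority-underrepresented counties:",
      len(institutions_in_majority_underrepresented))

control_comparison = pd.DataFrame({
    'majority_underrepresented_counties':
        institutions_in_majority_underrepresented['control'].value_counts(normalize=True),
    'all_contiguous_us':
        geo_institutions['control'].value_counts(normalize=True),
}).round(2)
control_comparison
```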
```
from AlbroIRSpectroscopy import *
```
## Read the file into the program:
```
filename = "AlbroFordham.csv"
absorb, waveln = readSpectrum(filename)
```
## Plot the full spectrum:
```
plt.figure(figsize = (20, 4))
plotfull(absorb, waveln)
plt.show()
```
## Trim full spectrum down to a single transition:
```
peak1a = absorb[waveindex(5850):waveindex(5450)]
peak1w = waveln[waveindex(5850):waveindex(5450)]
```
## Plot and label spectrum of a single transition:
```
plt.figure(figsize = (20, 4))
rb1, jr1, pb1, jp1 = detectpeaks(peak1a, peak1w)
maketitle("HCl", 0, 2)
plt.show()
```
## Fit Function to Wavenumber vs Quantum Number
```
plt.figure(figsize = (16, 4))
plt.subplot(121)
ar1, br1, cr1, dr1, rerr1 = fit_r_branch(rb1, jr1, False)
plt.subplot(122)
ap1, bp1, cp1, dp1, perr1 = fit_p_branch(pb1, jp1, False)
plt.show()
```
## Print Out Fitting Parameters
```
print("a_e(P-Branch): ", ap1, "\na_e(R-Branch): ", ar1)
print("B_e(P-Branch): ", bp1, "\nB_e(R-Branch): ", br1)
print("2ve-6vexe(P-Branch):", cp1, "\n2ve-6vexe(R-Branch):", cr1)
print("D(P-Branch): ", dp1, "\nD(R-Branch): ", dr1)
```
## Plotted and labelled spectra for other transitions:
```
peak3a = absorb[waveindex(3100):waveindex(2590)]
peak3w = waveln[waveindex(3100):waveindex(2590)]

plt.figure(figsize = (20, 4))
rb3, jr3, pb3, jp3 = detectpeaks(peak3a, peak3w)
maketitle("HCl", 0, 1)
plt.show()

plt.figure(figsize = (20, 4))
plt.subplot(121)
ar3, br3, cr3, dr3, rerr3 = fit_r_branch(rb3, jr3, True)
plt.subplot(122)
ap3, bp3, cp3, dp3, perr3 = fit_p_branch(pb3, jp3, True)
plt.show()

peak2a = absorb[waveindex(4240):waveindex(3990)]
peak2w = waveln[waveindex(4240):waveindex(3990)]

plt.figure(figsize = (20, 4))
rb2, jr2, pb2, jp2 = detectpeaks(peak2a, peak2w)
maketitle("DCl", 0, 2)
plt.show()

plt.figure(figsize = (20, 4))
plt.subplot(121)
ar2, br2, cr2, dr2, rerr2 = fit_r_branch(rb2, jr2, False)
plt.subplot(122)
ap2, bp2, cp2, dp2, perr2 = fit_p_branch(pb2, jp2, False)
plt.show()

peak4a = absorb[waveindex(2250):waveindex(1875)]
peak4w = waveln[waveindex(2250):waveindex(1875)]

plt.figure(figsize = (20, 4))
rb4, jr4, pb4, jp4 = detectpeaks(peak4a, peak4w)
maketitle("DCl", 0, 1)
plt.show()

plt.figure(figsize = (20, 4))
plt.subplot(121)
ar4, br4, cr4, dr4, rerr4 = fit_r_branch(rb4, jr4, True)
plt.subplot(122)
ap4, bp4, cp4, dp4, perr4 = fit_p_branch(pb4, jp4, True)
plt.show()
```
## Print Out All Fitting Parameters
```
print(ar1, ar3, ar2, ar4)
print(ap1, ap3, ap2, ap4)
print(br1, br3, br2, br4)
print(bp1, bp3, bp2, bp4)
print(cr1, cr3, cr2, cr4)
print(cp1, cp3, cp2, cp4)
print(dr1, dr3, dr2, dr4)
print(dp1, dp3, dp2, dp4)
```
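The `fit_r_branch` and `fit_p_branch` helpers come from the course's `AlbroIRSpectroscopy` module, whose internals are not shown here. As a rough sketch of what such a branch fit typically involves (a generic cubic in the running index m, with made-up peak positions; this is not the module's actual implementation), SciPy's `curve_fit` can be used directly:
```
import numpy as np
from scipy.optimize import curve_fit

def branch_model(m, nu0, b, c, d):
    # generic rovibrational branch: nu(m) = nu0 + b*m + c*m**2 + d*m**3
    return nu0 + b*m + c*m**2 + d*m**3

# hypothetical peak positions (cm^-1) versus running index m
m = np.array([1., 2., 3., 4., 5., 6.])
nu = np.array([2906.2, 2925.9, 2944.9, 2963.2, 2980.9, 2998.0])

popt, pcov = curve_fit(branch_model, m, nu)
perr = np.sqrt(np.diag(pcov))
print("fit parameters:", popt)
print("1-sigma errors:", perr)
```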
``` #hide !pip install -Uqq fastbook import fastbook fastbook.setup_book() #hide from fastbook import * from IPython.display import display,HTML ``` # Data Munging with fastai's Mid-Level API We have seen what `Tokenizer` and `Numericalize` do to a collection of texts, and how they're used inside the data block API, which handles those transforms for us directly using the `TextBlock`. But what if we want to only apply one of those transforms, either to see intermediate results or because we have already tokenized texts? More generally, what can we do when the data block API is not flexible enough to accommodate our particular use case? For this, we need to use fastai's *mid-level API* for processing data. The data block API is built on top of that layer, so it will allow you to do everything the data block API does, and much much more. ## Going Deeper into fastai's Layered API The fastai library is built on a *layered API*. In the very top layer there are *applications* that allow us to train a model in five lines of codes, as we saw in <<chapter_intro>>. In the case of creating `DataLoaders` for a text classifier, for instance, we used the line: ``` from fastai.text.all import * dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test') ``` The factory method `TextDataLoaders.from_folder` is very convenient when your data is arranged the exact same way as the IMDb dataset, but in practice, that often won't be the case. The data block API offers more flexibility. As we saw in the last chapter, we can get the same result with: ``` path = untar_data(URLs.IMDB) dls = DataBlock( blocks=(TextBlock.from_folder(path),CategoryBlock), get_y = parent_label, get_items=partial(get_text_files, folders=['train', 'test']), splitter=GrandparentSplitter(valid_name='test') ).dataloaders(path) ``` But it's sometimes not flexible enough. For debugging purposes, for instance, we might need to apply just parts of the transforms that come with this data block. Or we might want to create a `DataLoaders` for some application that isn't directly supported by fastai. In this section, we'll dig into the pieces that are used inside fastai to implement the data block API. Understanding these will enable you to leverage the power and flexibility of this mid-tier API. > note: Mid-Level API: The mid-level API does not only contain functionality for creating `DataLoaders`. It also has the _callback_ system, which allows us to customize the training loop any way we like, and the _general optimizer_. Both will be covered in <<chapter_accel_sgd>>. ### Transforms When we studied tokenization and numericalization in the last chapter, we started by grabbing a bunch of texts: ``` files = get_text_files(path, folders = ['train', 'test']) txts = L(o.open().read() for o in files[:2000]) ``` We then showed how to tokenize them with a `Tokenizer`: ``` tok = Tokenizer.from_folder(path) tok.setup(txts) toks = txts.map(tok) toks[0] ``` and how to numericalize, including automatically creating the vocab for our corpus: ``` num = Numericalize() num.setup(toks) nums = toks.map(num) nums[0][:10] ``` The classes also have a `decode` method. 
For instance, `Numericalize.decode` gives us back the string tokens: ``` nums_dec = num.decode(nums[0][:10]); nums_dec ``` and `Tokenizer.decode` turns this back into a single string (it may not, however, be exactly the same as the original string; this depends on whether the tokenizer is *reversible*, which the default word tokenizer is not at the time we're writing this book): ``` tok.decode(nums_dec) ``` `decode` is used by fastai's `show_batch` and `show_results`, as well as some other inference methods, to convert predictions and mini-batches into a human-understandable representation. For each of `tok` or `num` in the preceding example, we created an object, called the `setup` method (which trains the tokenizer if needed for `tok` and creates the vocab for `num`), applied it to our raw texts (by calling the object as a function), and then finally decoded the result back to an understandable representation. These steps are needed for most data preprocessing tasks, so fastai provides a class that encapsulates them. This is the `Transform` class. Both `Tokenize` and `Numericalize` are `Transform`s. In general, a `Transform` is an object that behaves like a function and has an optional `setup` method that will initialize some inner state (like the vocab inside `num`) and an optional `decode` that will reverse the function (this reversal may not be perfect, as we saw with `tok`). A good example of `decode` is found in the `Normalize` transform that we saw in <<chapter_sizing_and_tta>>: to be able to plot the images its `decode` method undoes the normalization (i.e., it multiplies by the standard deviation and adds back the mean). On the other hand, data augmentation transforms do not have a `decode` method, since we want to show the effects on images to make sure the data augmentation is working as we want. A special behavior of `Transform`s is that they always get applied over tuples. In general, our data is always a tuple `(input,target)` (sometimes with more than one input or more than one target). When applying a transform on an item like this, such as `Resize`, we don't want to resize the tuple as a whole; instead, we want to resize the input (if applicable) and the target (if applicable) separately. It's the same for batch transforms that do data augmentation: when the input is an image and the target is a segmentation mask, the transform needs to be applied (the same way) to the input and the target. We can see this behavior if we pass a tuple of texts to `tok`: ``` tok((txts[0], txts[1])) ``` ### Writing Your Own Transform If you want to write a custom transform to apply to your data, the easiest way is to write a function. As you can see in this example, a `Transform` will only be applied to a matching type, if a type is provided (otherwise it will always be applied). In the following code, the `:int` in the function signature means that `f` only gets applied to `int`s. That's why `tfm(2.0)` returns `2.0`, but `tfm(2)` returns `3` here: ``` def f(x:int): return x+1 tfm = Transform(f) tfm(2),tfm(2.0) ``` Here, `f` is converted to a `Transform` with no `setup` and no `decode` method. Python has a special syntax for passing a function (like `f`) to another function (or something that behaves like a function, known as a *callable* in Python), called a *decorator*. A decorator is used by prepending a callable with `@` and placing it before a function definition (there are lots of good online tutorials about Python decorators, so take a look at one if this is a new concept for you). 
The following is identical to the previous code: ``` @Transform def f(x:int): return x+1 f(2),f(2.0) ``` If you need either `setup` or `decode`, you will need to subclass `Transform` to implement the actual encoding behavior in `encodes`, then (optionally), the setup behavior in `setups` and the decoding behavior in `decodes`: ``` class NormalizeMean(Transform): def setups(self, items): self.mean = sum(items)/len(items) def encodes(self, x): return x-self.mean def decodes(self, x): return x+self.mean ``` Here, `NormalizeMean` will initialize some state during the setup (the mean of all elements passed), then the transformation is to subtract that mean. For decoding purposes, we implement the reverse of that transformation by adding the mean. Here is an example of `NormalizeMean` in action: ``` tfm = NormalizeMean() tfm.setup([1,2,3,4,5]) start = 2 y = tfm(start) z = tfm.decode(y) tfm.mean,y,z ``` Note that the method called and the method implemented are different, for each of these methods: ```asciidoc [options="header"] |====== | Class | To call | To implement | `nn.Module` (PyTorch) | `()` (i.e., call as function) | `forward` | `Transform` | `()` | `encodes` | `Transform` | `decode()` | `decodes` | `Transform` | `setup()` | `setups` |====== ``` So, for instance, you would never call `setups` directly, but instead would call `setup`. The reason for this is that `setup` does some work before and after calling `setups` for you. To learn more about `Transform`s and how you can use them to implement different behavior depending on the type of the input, be sure to check the tutorials in the fastai docs. ### Pipeline To compose several transforms together, fastai provides the `Pipeline` class. We define a `Pipeline` by passing it a list of `Transform`s; it will then compose the transforms inside it. When you call `Pipeline` on an object, it will automatically call the transforms inside, in order: ``` tfms = Pipeline([tok, num]) t = tfms(txts[0]); t[:20] ``` And you can call `decode` on the result of your encoding, to get back something you can display and analyze: ``` tfms.decode(t)[:100] ``` The only part that doesn't work the same way as in `Transform` is the setup. To properly set up a `Pipeline` of `Transform`s on some data, you need to use a `TfmdLists`. ## TfmdLists and Datasets: Transformed Collections Your data is usually a set of raw items (like filenames, or rows in a DataFrame) to which you want to apply a succession of transformations. We just saw that a succession of transformations is represented by a `Pipeline` in fastai. The class that groups together this `Pipeline` with your raw items is called `TfmdLists`. ### TfmdLists Here is the short way of doing the transformation we saw in the previous section: ``` tls = TfmdLists(files, [Tokenizer.from_folder(path), Numericalize]) ``` At initialization, the `TfmdLists` will automatically call the `setup` method of each `Transform` in order, providing them not with the raw items but the items transformed by all the previous `Transform`s in order. We can get the result of our `Pipeline` on any raw element just by indexing into the `TfmdLists`: ``` t = tls[0]; t[:20] ``` And the `TfmdLists` knows how to decode for show purposes: ``` tls.decode(t)[:100] ``` In fact, it even has a `show` method: ``` tls.show(t) ``` The `TfmdLists` is named with an "s" because it can handle a training and a validation set with a `splits` argument. 
You just need to pass the indices of which elements are in the training set, and which are in the validation set: ``` cut = int(len(files)*0.8) splits = [list(range(cut)), list(range(cut,len(files)))] tls = TfmdLists(files, [Tokenizer.from_folder(path), Numericalize], splits=splits) ``` You can then access them through the `train` and `valid` attributes: ``` tls.valid[0][:20] ``` If you have manually written a `Transform` that performs all of your preprocessing at once, turning raw items into a tuple with inputs and targets, then `TfmdLists` is the class you need. You can directly convert it to a `DataLoaders` object with the `dataloaders` method. This is what we will do in our Siamese example later in this chapter. In general, though, you will have two (or more) parallel pipelines of transforms: one for processing your raw items into inputs and one to process your raw items into targets. For instance, here, the pipeline we defined only processes the raw text into inputs. If we want to do text classification, we also have to process the labels into targets. For this we need to do two things. First we take the label name from the parent folder. There is a function, `parent_label`, for this: ``` lbls = files.map(parent_label) lbls ``` Then we need a `Transform` that will grab the unique items and build a vocab with them during setup, then transform the string labels into integers when called. fastai provides this for us; it's called `Categorize`: ``` cat = Categorize() cat.setup(lbls) cat.vocab, cat(lbls[0]) ``` To do the whole setup automatically on our list of files, we can create a `TfmdLists` as before: ``` tls_y = TfmdLists(files, [parent_label, Categorize()]) tls_y[0] ``` But then we end up with two separate objects for our inputs and targets, which is not what we want. This is where `Datasets` comes to the rescue. ### Datasets `Datasets` will apply two (or more) pipelines in parallel to the same raw object and build a tuple with the result. Like `TfmdLists`, it will automatically do the setup for us, and when we index into a `Datasets`, it will return us a tuple with the results of each pipeline: ``` x_tfms = [Tokenizer.from_folder(path), Numericalize] y_tfms = [parent_label, Categorize()] dsets = Datasets(files, [x_tfms, y_tfms]) x,y = dsets[0] x[:20],y ``` Like a `TfmdLists`, we can pass along `splits` to a `Datasets` to split our data between training and validation sets: ``` x_tfms = [Tokenizer.from_folder(path), Numericalize] y_tfms = [parent_label, Categorize()] dsets = Datasets(files, [x_tfms, y_tfms], splits=splits) x,y = dsets.valid[0] x[:20],y ``` It can also decode any processed tuple or show it directly: ``` t = dsets.valid[0] dsets.decode(t) ``` The last step is to convert our `Datasets` object to a `DataLoaders`, which can be done with the `dataloaders` method. Here we need to pass along a special argument to take care of the padding problem (as we saw in the last chapter). This needs to happen just before we batch the elements, so we pass it to `before_batch`: ``` dls = dsets.dataloaders(bs=64, before_batch=pad_input) ``` `dataloaders` directly calls `DataLoader` on each subset of our `Datasets`. fastai's `DataLoader` expands the PyTorch class of the same name and is responsible for collating the items from our datasets into batches. It has a lot of points of customization, but the most important ones that you should know are: - `after_item`:: Applied on each item after grabbing it inside the dataset. This is the equivalent of `item_tfms` in `DataBlock`. 
- `before_batch`:: Applied on the list of items before they are collated. This is the ideal place to pad items to the same size. - `after_batch`:: Applied on the batch as a whole after its construction. This is the equivalent of `batch_tfms` in `DataBlock`. As a conclusion, here is the full code necessary to prepare the data for text classification: ``` tfms = [[Tokenizer.from_folder(path), Numericalize], [parent_label, Categorize]] files = get_text_files(path, folders = ['train', 'test']) splits = GrandparentSplitter(valid_name='test')(files) dsets = Datasets(files, tfms, splits=splits) dls = dsets.dataloaders(dl_type=SortedDL, before_batch=pad_input) ``` The two differences from the previous code are the use of `GrandparentSplitter` to split our training and validation data, and the `dl_type` argument. This is to tell `dataloaders` to use the `SortedDL` class of `DataLoader`, and not the usual one. `SortedDL` constructs batches by putting samples of roughly the same lengths into batches. This does the exact same thing as our previous `DataBlock`: ``` path = untar_data(URLs.IMDB) dls = DataBlock( blocks=(TextBlock.from_folder(path),CategoryBlock), get_y = parent_label, get_items=partial(get_text_files, folders=['train', 'test']), splitter=GrandparentSplitter(valid_name='test') ).dataloaders(path) ``` But now, you know how to customize every single piece of it! Let's practice what we just learned on this mid-level API for data preprocessing about using a computer vision example now. ## Applying the Mid-Level Data API: SiamesePair A *Siamese model* takes two images and has to determine if they are of the same class or not. For this example, we will use the Pet dataset again and prepare the data for a model that will have to predict if two images of pets are of the same breed or not. We will explain here how to prepare the data for such a model, then we will train that model in <<chapter_arch_details>>. First things first, let's get the images in our dataset: ``` from fastai.vision.all import * path = untar_data(URLs.PETS) files = get_image_files(path/"images") ``` If we didn't care about showing our objects at all, we could directly create one transform to completely preprocess that list of files. We will want to look at those images though, so we need to create a custom type. When you call the `show` method on a `TfmdLists` or a `Datasets` object, it will decode items until it reaches a type that contains a `show` method and use it to show the object. That `show` method gets passed a `ctx`, which could be a `matplotlib` axis for images, or a row of a DataFrame for texts. Here we create a `SiameseImage` object that subclasses `Tuple` and is intended to contain three things: two images, and a Boolean that's `True` if the images are of the same breed. We also implement the special `show` method, such that it concatenates the two images with a black line in the middle. 
Don't worry too much about the part that is in the `if` test (which is to show the `SiameseImage` when the images are Python images, not tensors); the important part is in the last three lines: ``` class SiameseImage(Tuple): def show(self, ctx=None, **kwargs): img1,img2,same_breed = self if not isinstance(img1, Tensor): if img2.size != img1.size: img2 = img2.resize(img1.size) t1,t2 = tensor(img1),tensor(img2) t1,t2 = t1.permute(2,0,1),t2.permute(2,0,1) else: t1,t2 = img1,img2 line = t1.new_zeros(t1.shape[0], t1.shape[1], 10) return show_image(torch.cat([t1,line,t2], dim=2), title=same_breed, ctx=ctx) ``` Let's create a first `SiameseImage` and check our `show` method works: ``` img = PILImage.create(files[0]) s = SiameseImage(img, img, True) s.show(); ``` We can also try with a second image that's not from the same class: ``` img1 = PILImage.create(files[1]) s1 = SiameseImage(img, img1, False) s1.show(); ``` The important thing with transforms that we saw before is that they dispatch over tuples or their subclasses. That's precisely why we chose to subclass `Tuple` in this instance—this way we can apply any transform that works on images to our `SiameseImage` and it will be applied on each image in the tuple: ``` s2 = Resize(224)(s1) s2.show(); ``` Here the `Resize` transform is applied to each of the two images, but not the Boolean flag. Even if we have a custom type, we can thus benefit from all the data augmentation transforms inside the library. We are now ready to build the `Transform` that we will use to get our data ready for a Siamese model. First, we will need a function to determine the classes of all our images: ``` def label_func(fname): return re.match(r'^(.*)_\d+.jpg$', fname.name).groups()[0] ``` For each image our tranform will, with a probability of 0.5, draw an image from the same class and return a `SiameseImage` with a true label, or draw an image from another class and return a `SiameseImage` with a false label. This is all done in the private `_draw` function. There is one difference between the training and validation sets, which is why the transform needs to be initialized with the splits: on the training set we will make that random pick each time we read an image, whereas on the validation set we make this random pick once and for all at initialization. This way, we get more varied samples during training, but always the same validation set: ``` class SiameseTransform(Transform): def __init__(self, files, label_func, splits): self.labels = files.map(label_func).unique() self.lbl2files = {l: L(f for f in files if label_func(f) == l) for l in self.labels} self.label_func = label_func self.valid = {f: self._draw(f) for f in files[splits[1]]} def encodes(self, f): f2,t = self.valid.get(f, self._draw(f)) img1,img2 = PILImage.create(f),PILImage.create(f2) return SiameseImage(img1, img2, t) def _draw(self, f): same = random.random() < 0.5 cls = self.label_func(f) if not same: cls = random.choice(L(l for l in self.labels if l != cls)) return random.choice(self.lbl2files[cls]),same ``` We can then create our main transform: ``` splits = RandomSplitter()(files) tfm = SiameseTransform(files, label_func, splits) tfm(files[0]).show(); ``` In the mid-level API for data collection we have two objects that can help us apply transforms on a set of items, `TfmdLists` and `Datasets`. If you remember what we have just seen, one applies a `Pipeline` of transforms and the other applies several `Pipeline` of transforms in parallel, to build tuples. 
Here, our main transform already builds the tuples, so we use `TfmdLists`: ``` tls = TfmdLists(files, tfm, splits=splits) show_at(tls.valid, 0); ``` And we can finally get our data in `DataLoaders` by calling the `dataloaders` method. One thing to be careful of here is that this method does not take `item_tfms` and `batch_tfms` like a `DataBlock`. The fastai `DataLoader` has several hooks that are named after events; here what we apply on the items after they are grabbed is called `after_item`, and what we apply on the batch once it's built is called `after_batch`: ``` dls = tls.dataloaders(after_item=[Resize(224), ToTensor], after_batch=[IntToFloatTensor, Normalize.from_stats(*imagenet_stats)]) ``` Note that we need to pass more transforms than usual—that's because the data block API usually adds them automatically: - `ToTensor` is the one that converts images to tensors (again, it's applied on every part of the tuple). - `IntToFloatTensor` converts the tensor of images containing integers from 0 to 255 to a tensor of floats, and divides by 255 to make the values between 0 and 1. We can now train a model using this `DataLoaders`. It will need a bit more customization than the usual model provided by `cnn_learner` since it has to take two images instead of one, but we will see how to create such a model and train it in <<chapter_arch_dtails>>. ## Conclusion fastai provides a layered API. It takes one line of code to grab the data when it's in one of the usual settings, making it easy for beginners to focus on training a model without spending too much time assembling the data. Then, the high-level data block API gives you more flexibility by allowing you to mix and match some building blocks. Underneath it, the mid-level API gives you greater flexibility to apply any transformations on your items. In your real-world problems, this is probably what you will need to use, and we hope it makes the step of data-munging as easy as possible. ## Questionnaire 1. Why do we say that fastai has a "layered" API? What does it mean? 1. Why does a `Transform` have a `decode` method? What does it do? 1. Why does a `Transform` have a `setup` method? What does it do? 1. How does a `Transform` work when called on a tuple? 1. Which methods do you need to implement when writing your own `Transform`? 1. Write a `Normalize` transform that fully normalizes items (subtract the mean and divide by the standard deviation of the dataset), and that can decode that behavior. Try not to peek! 1. Write a `Transform` that does the numericalization of tokenized texts (it should set its vocab automatically from the dataset seen and have a `decode` method). Look at the source code of fastai if you need help. 1. What is a `Pipeline`? 1. What is a `TfmdLists`? 1. What is a `Datasets`? How is it different from a `TfmdLists`? 1. Why are `TfmdLists` and `Datasets` named with an "s"? 1. How can you build a `DataLoaders` from a `TfmdLists` or a `Datasets`? 1. How do you pass `item_tfms` and `batch_tfms` when building a `DataLoaders` from a `TfmdLists` or a `Datasets`? 1. What do you need to do when you want to have your custom items work with methods like `show_batch` or `show_results`? 1. Why can we easily apply fastai data augmentation transforms to the `SiamesePair` we built? ### Further Research 1. Use the mid-level API to prepare the data in `DataLoaders` on your own datasets. Try this with the Pet dataset and the Adult dataset from Chapter 1. 1. 
Look at the Siamese tutorial in the fastai documentation to learn how to customize the behavior of `show_batch` and `show_results` for new types of items. Implement it in your own project.

## Understanding fastai's Applications: Wrap Up

Congratulations—you've completed all of the chapters in this book that cover the key practical parts of training models and using deep learning! You know how to use all of fastai's built-in applications, and how to customize them using the data block API and loss functions. You even know how to create a neural network from scratch, and train it! (And hopefully you now know some of the questions to ask to make sure your creations help improve society too.)

The knowledge you already have is enough to create full working prototypes of many types of neural network applications. More importantly, it will help you understand the capabilities and limitations of deep learning models, and how to design a system that's well adapted to them.

In the rest of this book we will be pulling apart those applications, piece by piece, to understand the foundations they are built on. This is important knowledge for a deep learning practitioner, because it is what allows you to inspect and debug models that you build and create new applications that are customized for your particular projects.
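For readers who want to check their attempt at the questionnaire item asking for a full normalization transform, here is a minimal sketch (the class name `NormalizeAll` is mine, not fastai's); it follows the same `setups`/`encodes`/`decodes` pattern as `NormalizeMean` above, just adding the standard deviation:
```
class NormalizeAll(Transform):
    def setups(self, items):
        # store dataset statistics during setup
        self.mean = sum(items)/len(items)
        self.std = (sum((i-self.mean)**2 for i in items)/len(items))**0.5
    def encodes(self, x): return (x - self.mean) / self.std
    def decodes(self, x): return x * self.std + self.mean

tfm = NormalizeAll()
tfm.setup([1,2,3,4,5])
y = tfm(2)
y, tfm.decode(y)
```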
![Logo](https://github.com/martinchristen/pyRT/blob/master/docs/img/pyRT_64.png?raw=true)

# Tutorial: An introduction to pyRT ("Pirate") using the Jupyter Notebook

FHNW - University of Applied Sciences and Arts Northwestern Switzerland

Martin Christen, [email protected], @MartinChristen

PyRT (pronounced pirate) is a raytracer/image generator for Python 3.5. This project is mainly done with the following in mind:

* 2D Graphics in the Jupyter Notebook
* Ray Tracing in the Jupyter Notebook
* Teaching ray tracing
* Exploring ray tracing concepts for geo data using Python.
* Rendering geo data, including large point clouds.
* Implementing new algorithms for rendering large 3D city models.
* Creating 3D-Maps from OpenStreetMap data
* Server-side rendering / cloud based rendering

pyRT is in development. Everything is (pre-)alpha!

    pip install https://github.com/martinchristen/pyRT/archive/master.zip

pyRT **doesn't have any dependencies**, but it is recommended to install the following modules:

    pip install pillow --upgrade
    pip install numpy --upgrade
    pip install moviepy --upgrade
    pip install jupyter --upgrade

Basically, it is up to you how to display images; pyRT only creates lists of RGB values. The easiest way to get started is using the Jupyter notebook.

Source code is available on GitHub: https://github.com/martinchristen/pyRT

The project is also on Twitter: @PythonRayTracer

# Part 1: Using pyRT to Draw

## 1.1 Drawing Points

pyRT also contains some functionality for creating images using standard functions to draw points, lines, circles, and rectangles. This has nothing to do with ray tracing, but it is fun and an easy way to get started with "Image Generators". When using the Jupyter notebook, an image can be displayed using:

    pyrt.utils.display(imagedata, w, h)

```
from pyrt.renderer import RGBImage
from pyrt.math import Vec2, Vec3
from pyrt.utils import display

import numpy as np
import random

w = 320
h = 240
image = RGBImage(w, h)

for i in range(5000):
    position = Vec2(random.randint(0, w - 1), random.randint(0, h - 1))
    color = Vec3(random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1))
    image.drawPoint(position, color)

display(image.data, w, h)
```
### Saving Image to File
```
from PIL import Image as PImage

im = PImage.new("RGB", (w, h))
im.putdata(image.data)
im.save("rendering.png")
```
## 1.2 Drawing Lines
```
image = RGBImage(w, h)
for i in range(50):
    image.drawLine(Vec2(random.randint(0, w - 1), random.randint(0, h - 1)),
                   Vec2(random.randint(0, w - 1), random.randint(0, h - 1)),
                   Vec3(random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1)))

display(image.data, w, h)
```
## 1.3 Drawing Circles

A more elegant way to display images in the Jupyter notebook is to create a base64-encoded PNG and display it using the HTML img tag. This can be done with the BytesIO and base64 modules.
```
from pyrt.utils import display

image = RGBImage(w, h)
for i in range(100):
    image.drawCircle(Vec2(random.randint(0, w - 1), random.randint(0, h - 1)),
                     random.randint(3, 100),
                     Vec3(random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1)))

display(image.data,w,h)
```
## 1.4 Drawing Rectangles

As a last example of how to "draw" directly, we use the "drawRectangle" method to (surprise) draw some rectangles.
```
image = RGBImage(w, h)
for i in range(100):
    image.drawRectangle(Vec2(random.randint(0, w - 1), random.randint(0, h - 1)),
                        random.randint(1, w / 2),
                        random.randint(1, h / 2),
                        Vec3(random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1)))

display(image.data,w,h)
```
## 1.5 Koch Snowflake
```
import math

def snowflake(image, lev, x1, y1, x5, y5, color):
    if lev == 0:
        image.drawLine(Vec2(x1, y1), Vec2(x5, y5), color)
    else:
        deltaX = x5 - x1
        deltaY = y5 - y1
        x2 = int(x1 + deltaX / 3.)
        y2 = int(y1 + deltaY / 3.)
        x3 = int(0.5 * (x1 + x5) + math.sqrt(3.) * (y1 - y5) / 6.)
        y3 = int(0.5 * (y1 + y5) + math.sqrt(3.) * (x5 - x1) / 6.)
        x4 = int(x1 + 2. * deltaX / 3.)
        y4 = int(y1 + 2. * deltaY / 3.)
        snowflake(image, lev - 1, x1, y1, x2, y2, color)
        snowflake(image, lev - 1, x2, y2, x3, y3, color)
        snowflake(image, lev - 1, x3, y3, x4, y4, color)
        snowflake(image, lev - 1, x4, y4, x5, y5, color)

image = RGBImage(256, 256)
level = 3
snowflake(image, level, 0, 127, 255, 127, Vec3(1, 0, 0))
display(image.data,256,256)
```
# Part 2: An introduction to 3D Graphics

This part shows how we can draw in 3D. It introduces the submodules camera and geometry.

## 2.1 Drawing Perspective Triangles
```
from pyrt.renderer import RGBImage
from pyrt.math import Vec2, Vec3
from pyrt.camera import PerspectiveCamera
from pyrt.geometry import Triangle, Vertex
```
Let's create a 320x240 image again:
```
w = 320
h = 240
image = RGBImage(w, h)
```
Now we create a camera: the image plane has the same size as our output image (320x240) and we define a field of view of 60 degrees.
```
camera = PerspectiveCamera(w,h,60)
```
This camera has its origin at (0,0,0) and points along the z-axis. This is not really what we want, so we set where it is positioned and what it looks at:

    camera.setView(position, lookat, upvector)

* position: camera position
* lookat: the point it looks at
* upvector: for the orientation of the camera

```
camera.setView(Vec3(0,-10,0), Vec3(0,0,0), Vec3(0,0,1))
```
We can access the projection matrix and the view matrix using `camera.projection` and `camera.view`. These are 4x4 matrices.
```
print("Projection:")
print(camera.projection)
print("View:")
print(camera.view)
```
The view-projection matrix can be created by multiplying the two matrices. Please note that multiplications are done from right to left. So we create the view-projection matrix using:
```
vp = camera.projection * camera.view
print(vp)
```
Now we create a triangle. A triangle consists of 3 vertices, and a Vertex can have attributes like position, color, normal, etc. For this demo we only care about positions.
```
t = Triangle(Vertex(position=(-5, 1, 0)),
             Vertex(position=(0, 1, 5)),
             Vertex(position=(5, 1, 0)))
```
Now we multiply every vertex position of the triangle (t.a.position, t.b.position, t.c.position) with the view-projection matrix. This results in normalized device coordinates (NDC) in the range (-1,-1,-1) to (1,1,1).
```
at = vp * t.a.position
bt = vp * t.b.position
ct = vp * t.c.position
```
The NDC are now transformed to image coordinates. The division by z is a perspective transformation, using each vertex's own z coordinate.
```
a_screenpos = Vec2(int(w * 0.5*(at.x + 1.) / at.z), int(h * 0.5*(at.y + 1.) / at.z))
b_screenpos = Vec2(int(w * 0.5*(bt.x + 1.) / bt.z), int(h * 0.5*(bt.y + 1.) / bt.z))
c_screenpos = Vec2(int(w * 0.5*(ct.x + 1.) / ct.z), int(h * 0.5*(ct.y + 1.) / ct.z))
```
And now we display the triangle by drawing the edges:
```
color = Vec3(1,1,1)
image.drawLine(a_screenpos, c_screenpos, color)
image.drawLine(c_screenpos, b_screenpos, color)
image.drawLine(b_screenpos, a_screenpos, color)

display(image.data,w,h)
```
# Part 3: Ray Tracing

With this knowledge we know how 3D graphics basically work. Now we use ray tracing to create the same triangle as in the last example. First, we import the required (sub-)modules:
```
from pyrt.math import *
from pyrt.scene import *
from pyrt.geometry import Triangle, Vertex
from pyrt.camera import PerspectiveCamera
from pyrt.renderer import SimpleRT
```
Then we create our camera:
```
w = 320
h = 240
camera = PerspectiveCamera(w, h, 60)
camera.setView(Vec3(0,-10,0), Vec3(0,0,0), Vec3(0,0,1))
```
The next step is to create a scene. A scene consists of all objects you want to display. We just add a triangle to the scene.
```
scene = Scene()
scene.add(Triangle(Vertex(position=(-5, 1, 0)),
                   Vertex(position=(0, 1, 5)),
                   Vertex(position=(5, 1, 0))))
```
Now the scene has to know which camera we use for rendering, so we just set the camera:
```
scene.setCamera(camera)
```
Now we specify the raytracer. At the moment there is only one (reference) implementation of a raytracer, called **SimpleRT**.
```
engine = SimpleRT()
```
Now we render the scene and display the result:
```
imgdata = engine.render(scene)
display(imgdata,w,h)
```
We create a new scene and this time we add some colors to the vertices:
```
scene = Scene()
scene.add(Triangle(Vertex(position=(-5, 1, 0), color=(1,0,0)),
                   Vertex(position=(0, 1, 5), color=(0,1,0)),
                   Vertex(position=(5, 1, 0), color=(0,0,1))))

scene.setCamera(camera)
```
and render again...
```
imgdata = engine.render(scene)
display(imgdata,w,h)
```
We can also create a scene with 2 triangles and render it:
```
scene = Scene()
scene.add(Triangle(Vertex(position=(-5, 1, 0), color=(1,1,1)),
                   Vertex(position=(0, 1, 5), color=(0,1,1)),
                   Vertex(position=(5, 1, 0), color=(1,1,1))))
scene.add(Triangle(Vertex(position=(5, 1, 0), color=(1,1,1)),
                   Vertex(position=(0, 1, -5), color=(1,1,0)),
                   Vertex(position=(-5, 1, 0), color=(1,1,1))))

scene.setCamera(camera)

imgdata = engine.render(scene)
display(imgdata,w,h)
```
Instead of triangles we can also use spheres. Let's also look at materials. One material type is "PhongMaterial", where you can define the material of the object by specifying its color, its shininess, its reflectivity, etc.
```
from pyrt.geometry import Sphere
from pyrt.material import PhongMaterial

scene = Scene()
scene.add(Sphere(center=Vec3(0.,0.,0.), radius=3., material=PhongMaterial(color=Vec3(1.,0.,0.))))
scene.setCamera(camera)

imgdata = engine.render(scene)
display(imgdata,w,h)
```
However, without light this object doesn't really look like a 3D object, so let's also add a point light. This light type is somewhat like a light bulb.
```
from pyrt.light import PointLight

scene = Scene()
scene.addLight(PointLight(Vec3(-1,-8,1)))
scene.add(Sphere(center=Vec3(0.,0.,0.), radius=3., material=PhongMaterial(color=Vec3(1.,0.,0.))))
scene.setCamera(camera)

imgdata = engine.render(scene)
display(imgdata,w,h)
```
Now we're ready to create a larger scene. Let's create 4 spheres on top of a plane created using two triangles. Every piece should have a different material, so let's create materials first.
```
floormaterial = PhongMaterial(color=Vec3(0.5,0.5,0.5))
sphere0material = PhongMaterial(color=Vec3(1.,0.,0.), reflectivity=0.5)
sphere1material = PhongMaterial(color=Vec3(0.,1.,0.), reflectivity=0.5)
sphere2material = PhongMaterial(color=Vec3(0.,0.,1.), reflectivity=0.5)
sphere3material = PhongMaterial(color=Vec3(1.,1.,0.), reflectivity=0.5)
```
Let's create another view from higher above, looking at (0,0,0):
```
camera = PerspectiveCamera(w, h, 45)
camera.setView(Vec3(0.,-10.,10.), Vec3(0.,0.,0.), Vec3(0.,0.,1.))
```
Now we create and add a light and geometries:
```
# Create a scene
scene = Scene()

# Add a light to the scene
scene.addLight(PointLight(Vec3(0,0,15)))

# Add "floor"
A = Vertex(position=(-5.0, -5.0, 0.0))
B = Vertex(position=( 5.0, -5.0, 0.0))
C = Vertex(position=( 5.0,  5.0, 0.0))
D = Vertex(position=(-5.0,  5.0, 0.0))
scene.add(Triangle(A,B,C, material=floormaterial))
scene.add(Triangle(A,C,D, material=floormaterial))

# Add 4 spheres
scene.add(Sphere(center=Vec3(-2.5,-2.5,1.75), radius=1.75, material=sphere0material))
scene.add(Sphere(center=Vec3( 2.5,-2.5,1.75), radius=1.75, material=sphere1material))
scene.add(Sphere(center=Vec3( 2.5, 2.5,1.75), radius=1.75, material=sphere2material))
scene.add(Sphere(center=Vec3(-2.5, 2.5,1.75), radius=1.75, material=sphere3material))

# Set the camera
scene.setCamera(camera)
```
Now we are ready to render as usual:
```
imgdata = engine.render(scene)
display(imgdata,w,h)
```
Now we tell the renderer to support shadows:
```
engine = SimpleRT(shadow=True)

imgdata = engine.render(scene)
display(imgdata,w,h)
```
And we can enable multiple iterations for rays, so a ray doesn't stop at the first object it hits if that material is reflective:
```
engine = SimpleRT(shadow=True, iterations=3)

imgdata = engine.render(scene)
display(imgdata,w,h)
```
# Outlook

The Master's thesis of Markus Fehr adds GPU support, with speeds of ca. 89'000'000 rays/sec on an NVIDIA 1080 GPU. His thesis will finish in January 2017.

Markus Fehr, "Beleuchtungsmodelle für 3D Stadtmodelle für GPU optimiertes Rendering in der Cloud" ("Illumination models for 3D city models for GPU-optimized rendering in the cloud"), Master Thesis 2017

<img src="img/Berlin_AO.PNG">
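If you want to keep a rendering, the data returned by `engine.render` is passed to `display(imgdata, w, h)` in exactly the same way as `RGBImage.data` in Part 1, so it can presumably be written to disk the same way. This is only a sketch and assumes the render output uses that same flat RGB list format:
```
from PIL import Image as PImage

im = PImage.new("RGB", (w, h))
im.putdata(imgdata)            # output of engine.render(scene)
im.save("raytraced_spheres.png")
```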
# SPD Replicate the design of [Smaldino et al. (2013)](https://www.journals.uchicago.edu/doi/10.1086/669615). Add per-agent memory. Remember defectors. Vary memory size. Add local communication: shout to nearby neighbors. Vary range & size. ## Imports ``` # model from mesa_fork import Model, Agent from mesa_fork.time import RandomActivation from mesa_fork.space import SingleGrid from mesa_fork.datacollection import DataCollector from enum import Enum # visualization import numpy as np import matplotlib.pyplot as plt plt.rc('axes', labelsize=8) %matplotlib inline plt.style.use('seaborn') import seaborn as sns sns.set_theme(style="ticks") from my_plot import my_plot_export # parameter sweep from mesa_fork.batchrunner import BatchRunnerMP ``` ## Setup model ``` class Memory: """ Fixed-size FIFO (first-in-first-out) memory. Used by agents to remeber the most recent defectors. """ def __init__(self, size): """ Args: size: memory size """ self.size = size self.memory = [] def add(self, item): # remove duplicates self.memory[:] = list(filter( lambda x: x != item, self.memory)) self.memory.insert(0, item) # truncate to size while len(self.memory) > self.size: self.memory.pop(-1) def contains(self, item): try: self.memory.index(item) return True except ValueError: return False def get(self): return self.memory.copy() class Action(Enum): COOPERATE = 1 DEFECT = 2 class MyAgent(Agent): def __init__(self, model, energy, max_energy, memory_size, gossip_range, gossip_size, cooperator=False): """ Agent behaviour Args: model: reference to the model containing this agent energy: starting energy level max_energy: maximal energy limit memory_size: size of agent memory gossip_range: maximal range at which this agent can gossip gossip_size: maximal amount of information which can be gossip in a single round """ super().__init__(model.next_id(), model) self.memory = Memory(memory_size) self.energy = energy self.max_energy = max_energy self.cooperator = cooperator self.gossip_range = gossip_range self.gossip_size = gossip_size self.played = True self.newborn = True def play(self): """ Return game action based on agent type. All agents use either always-cooperate or always-defect strategy. """ if self.cooperator: return Action.COOPERATE else: return Action.DEFECT def gossip(self): """ Return gossip about remebered defectors """ gossip = self.memory.get() # truncate to size while len(gossip) > self.gossip_size: gossip.pop(-1) return gossip def will_play_with(self, opponent): """ Return True if willing to play with the opponent Otherwise False """ # can only play once a turn if self.played: return False # check own memory if self.memory.contains(opponent): return False # get gossip gossip_knowledge = set() gossipers = self.model.grid.get_neighbors(self.pos, moore=True, radius=self.gossip_range) for gossiper in gossipers: gossip_knowledge.update(gossiper.gossip()) # check gossip about opponent if opponent in gossip_knowledge: return False # opponent does not have bad reputation return True def step(self): """ Agent behaviour in a single timestep. 
""" # don't step if created this turn if self.newborn: return # get eligible opponents neighbors = self.model.grid.get_neighbors(self.pos, moore=True) # discard neighbors who have already played a game in this step opponents = list(filter( lambda a: self.will_play_with(a.unique_id) and a.will_play_with(self.unique_id), neighbors)) # find opponent if opponents: opponent = self.random.choice(opponents) # play pd game if not self.played: a = self.play() b = opponent.play() R, T, S, P = self.model.R, self.model.T, self.model.S, self.model.P if a == Action.COOPERATE and b == Action.COOPERATE: self.energy += R opponent.energy += R elif a == Action.COOPERATE and b == Action.DEFECT: self.energy += S opponent.energy += T # remember betrayal self.memory.add(opponent.unique_id) elif a == Action.DEFECT and b == Action.COOPERATE: self.energy += T opponent.energy += S # remember betrayal opponent.memory.add(self.unique_id) elif a == Action.DEFECT and b == Action.DEFECT: self.energy += P opponent.energy += P # remember betrayals self.memory.add(opponent.unique_id) opponent.memory.add(self.unique_id) self.energy = min(self.energy, self.max_energy) opponent.energy = min(opponent.energy, opponent.max_energy) self.played = True opponent.played = True # reproduce max_population = (self.model.grid.width * self.model.grid.height) / 2 if ( self.cooperator and self.model.last_cooperator_count < max_population) or \ (not self.cooperator and self.model.last_defector_count < max_population): neighborhood = self.model.grid.get_neighborhood(self.pos, moore=True) unoccupied = list(filter( lambda c: self.model.grid.is_cell_empty(c), neighborhood)) if self.energy >= (self.model.energy_to_reproduce * 2) and unoccupied: cell = self.random.choice(unoccupied) offspring = MyAgent(self.model, energy=self.model.energy_to_reproduce, max_energy=self.max_energy, cooperator=self.cooperator, memory_size=self.memory.size, gossip_size=self.gossip_size, gossip_range=self.gossip_range) self.model.grid.position_agent(offspring, cell[0], cell[1]) self.model.schedule.add(offspring) # update values for DataCollector self.model.agent_count += 1 if self.cooperator: self.model.cooperator_count += 1 else: self.model.defector_count += 1 self.energy -= self.model.energy_to_reproduce elif not self.played: # attempt movement neighborhood = self.model.grid.get_neighborhood(self.pos, moore=True) unoccupied = list(filter( lambda c: self.model.grid.is_cell_empty(c), neighborhood)) if unoccupied: cell = self.random.choice(unoccupied) self.model.grid.move_agent(self, cell) # energy deduction (cost of living) self.energy -= self.model.living_cost if self.energy <= 0: # die self.model.grid.remove_agent(self) self.model.schedule.remove(self) return # update values for DataCollector self.model.agent_count += 1 if self.cooperator: self.model.cooperator_count += 1 else: self.model.defector_count += 1 class SPDModel(Model): def __init__(self, R=3, T=5, S=-1, P=0, starting_agent_count=10, starting_energies=range(1,51), max_energy=150, energy_to_reproduce=50, living_cost=1, memory_size=0, gossip_range=0, gossip_size=0, grid_size=10, wrap=True): """ Smaldino's spatial prisonner's dilemma model extended with limited memory Args: R, T, S, P: PD payoffs starting_agent_count: starting number of agents starting_energies: list of possible starting energies for agents (picked at random) max_energy: maximal energy an agent can hold energy_to_reproduce: energy required to reproduce living_cost: energy deducted at the end of each step memory_size: agent memory size 
            gossip_range: neighborhood range for gossip
            gossip_size: amount of info to gossip
            grid_size: side length of the square grid to use
            wrap: whether to wrap the grid (torus bounds)
        """
        super().__init__()
        self.schedule = RandomActivation(self)
        self.grid = SingleGrid(grid_size, grid_size, torus=wrap)
        self.R = R
        self.T = T
        self.S = S
        self.P = P
        self.energy_to_reproduce = energy_to_reproduce
        self.living_cost = living_cost

        # Setup agents
        self.cooperator_count = 0
        self.defector_count = 0
        for i in range(starting_agent_count):
            energy = self.random.choice(starting_energies)
            cooperator = i % 2 == 0
            if cooperator:
                self.cooperator_count += 1
            else:
                self.defector_count += 1
            agent = MyAgent(self, energy, max_energy=max_energy,
                            cooperator=cooperator, memory_size=memory_size,
                            gossip_range=gossip_range, gossip_size=gossip_size)
            cell = self.random.choice(list(self.grid.empties))
            self.grid.position_agent(agent, cell[0], cell[1])
            self.schedule.add(agent)
        self.agent_count = starting_agent_count

        # Init model
        self.running = True
        self.datacollector = DataCollector(
            {
                "agent_count": "agent_count",
                "cooperator_count": "cooperator_count",
                "defector_count": "defector_count",
            },
        )
        self.datacollector.collect(self)

    def step(self):
        # setup for step
        self.last_cooperator_count = self.cooperator_count
        self.last_defector_count = self.defector_count
        self.agent_count = 0
        self.cooperator_count = 0
        self.defector_count = 0
        for a in self.schedule.agents:
            a.played = False
            a.newborn = False
        # step
        self.schedule.step()
        self.datacollector.collect(self)
        # stop the model if no agents are alive
        if self.agent_count == 0:
            self.running = False
```

## Run model

```
spd = SPDModel(R=3, T=5, S=-1.5, P=0,
               starting_agent_count=64, starting_energies=range(1,50),
               max_energy=150, energy_to_reproduce=50, living_cost=0.5,
               memory_size=1, gossip_size=1, gossip_range=3,
               grid_size=20, wrap=True)

i = 0
while spd.running and i < 500:
    spd.step()
    i += 1
```

### Check results

```
max_population = (spd.grid.width * spd.grid.height) / 2

results = spd.datacollector.get_model_vars_dataframe()
results['cooperator_saturation'] = (results['cooperator_count'] / max_population)
results['defector_saturation'] = (results['defector_count'] / max_population)

fig, ax = plt.subplots(1, 1)
sns.lineplot(data=results[['cooperator_saturation', 'defector_saturation']], ax=ax)
# ax.set_xlim(0, 1000)
ax.set_ylim(-0.1, 1.1)
ax.set_xlabel('step')
ax.set_ylabel('agent type saturation')

my_plot_export(fig, [ax], 'saturation&step-memory1+gossip1+range3')
```

### Render visualization

```
from matplotlib import cm

c = cm.get_cmap('RdYlGn', 3)
print(c(0.0))
print(c(0.5))
print(c(1.0))

from matplotlib.colors import ListedColormap

def value(cell):
    if cell is None:
        return 50
    elif isinstance(cell, Agent):
        if cell.cooperator:
            return 100
        else:
            return 0
    else:
        raise Exception("Unidentified cell: {}".format(cell))

data = np.array([[value(c) for c in row] for row in spd.grid.grid])

colors = [
    # [0.6470588235294118, 0.0, 0.14901960784313725, 1.0],
    [1.0, 1.0, 0.7490196078431373, 1.0],
    [0.0, 0.40784313725490196, 0.21568627450980393, 1.0],
]

plt.imshow(data, cmap=ListedColormap(colors))
plt.axis('off')

from os import path

data_large = np.zeros(np.array(data.shape) * 10)
for j in range(data.shape[0]):
    for k in range(data.shape[1]):
        data_large[j * 10: (j+1) * 10, k * 10: (k+1) * 10] = data[j, k]

plt.imsave(path.join('plots', 'spatial-memory1+gossip1+range3-C.pdf'),
           data_large, cmap=ListedColormap(colors), format='pdf')
```

## Parameter sweep

```
variable_parameters = {
    "living_cost": np.linspace(0.0, 3.5,
num=10), } fixed_parameters = { "gossip_size": 1, "memory_size": 1, "gossip_range": 3, "R": 3, "T": 5, "S": -1.5, "P": 0, "starting_agent_count": 64, "starting_energies": range(1,50), "max_energy": 150, "energy_to_reproduce": 50, "grid_size": 20, "wrap": True, } iterations = 30 max_steps = 1000 param_run = BatchRunnerMP(SPDModel, nr_processes=None, # detect automatically variable_parameters=variable_parameters, fixed_parameters=fixed_parameters, iterations=iterations, max_steps=max_steps, model_reporters={ "agent_count": lambda m: m.agent_count, "cooperator_count": lambda m: m.cooperator_count, "defector_count": lambda m: m.defector_count, }) param_run.run_all() max_population = (fixed_parameters["grid_size"] ** 2) / 2 run_data = param_run.get_model_vars_dataframe() run_data['cooperator_saturation'] = (run_data['cooperator_count'] / max_population) run_data['defector_saturation'] = (run_data['defector_count'] / max_population) run_data = run_data.dropna() # run_data.head() PROPS = { 'boxprops':{'facecolor':'lightgrey', 'edgecolor':'black'}, 'medianprops':{'color':'black'}, 'whiskerprops':{'color':'black'}, 'capprops':{'color':'black'} } fig, axs = plt.subplots(1, 2, figsize=(16, 3)) plt.suptitle("Agent type saturation after {} steps".format(max_steps), fontsize=16) sns.boxplot(x="living_cost", y="cooperator_saturation", data=run_data, ax=axs[0], showfliers=False, **PROPS) sns.boxplot(x="living_cost", y="defector_saturation", data=run_data, ax=axs[1], showfliers=False, **PROPS) for ax in axs: ax.set_xlabel("Cost of living") ax.set_ylim(-0.1, 1.1) labels = [item.get_text() for item in ax.get_xticklabels()] ax.set_xticklabels([str(round(float(label), 2)) for label in labels]) axs[0].set_ylabel("Cooperator saturation") axs[1].set_ylabel("Defector saturation") name = 'saturation&living_cost-memory{}_gossip{}_range{}_{}steps'.format( fixed_parameters["memory_size"], fixed_parameters["gossip_size"], fixed_parameters["gossip_range"], max_steps) my_plot_export(fig, axs, name) my_plot_export(fig, axs, '{}_large'.format(name), fontsize=16, width=16, height=3) ```
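The refusal and gossip logic above both hinge on the FIFO semantics of `Memory`: a re-added defector is promoted to the front rather than duplicated, and the oldest entry is dropped once the size limit is reached. A quick sanity check of that class in isolation can be useful. This is a minimal sketch, assuming the `Memory` cell above has been executed; the integer ids are arbitrary stand-ins for agent `unique_id`s:

```
# quick sanity check of the Memory semantics used by the agents above
m = Memory(size=2)
m.add(1)
m.add(2)
m.add(1)      # re-adding promotes id 1 to the front, no duplicate entry
assert m.get() == [1, 2]
m.add(3)      # exceeding the size drops the oldest entry (id 2)
assert m.get() == [3, 1]
assert m.contains(1) and not m.contains(2)
print(m.get())
```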
``` # Uncomment and run this cell if Pandas library is not already installed # ! pip install --upgrade pip # ! pip install pandas ``` ### Import Libraries ``` import pandas as pd ``` ### Load data from S3 ``` # define CloudFront domain name (to access S3) and data cloudfront = 'https://d3cu2src083cxg.cloudfront.net' data_key = 'rfmo_query_result.csv' data_location = '{}/{}'.format(cloudfront, data_key) # load select columns df_rfmo = pd.read_csv(data_location, usecols = ['year', 'name_rfmo', 'name_comm_group', 'name_fishing_entity', 'name_sector_type', 'catch_sum', 'real_value']) # rename columns as needed df_rfmo.rename(columns = {"name_rfmo": "rfmo", "name_comm_group": "commercial_group", "name_fishing_entity": "fishing_entity", "name_sector_type": "sector_type"}, inplace = True) # print shape (rows, columns) of dataframe df_rfmo.shape # sample data df_rfmo.head(3) # dataframe information df_rfmo.info() ``` ### Data analysis ``` # Function definitions for simple data analysis scenarios def average_catch_and_value_by_year(start_year = 2010, end_year = 2015): return df_rfmo[(df_rfmo['year'] >= start_year) & (df_rfmo['year'] <= end_year)].groupby('year', as_index = False).agg({'catch_sum':'mean', 'real_value':'mean'}).round(2).copy() def average_catch_and_value_by_region(): return df_rfmo.groupby('fishing_entity', as_index = False)[['catch_sum', 'real_value']].mean().copy() def average_catch_and_value_by_year_and_region(start_year = 2010, end_year = 2015, fishing_entity = 'Canada'): return df_rfmo[(df_rfmo['year'] >= start_year) & (df_rfmo['year'] <= end_year) & (df_rfmo['fishing_entity'] == fishing_entity)].groupby('year', as_index = False).agg({'catch_sum':'mean', 'real_value':'mean'}).round(2).copy() def catch_and_value_by_commercial_groups(): return df_rfmo.groupby('commercial_group', as_index = False)[['catch_sum', 'real_value']].sum().copy() def catch_and_value_by_commercial_groups_and_year(start_year = 2010, end_year = 2015): return df_rfmo[(df_rfmo['year'] >= start_year) & (df_rfmo['year'] <= end_year)].groupby(['year', 'commercial_group'], as_index = False)[['catch_sum', 'real_value']].sum().copy() ``` ### Average catch sum & real value by year (parameters: start year, end year) ``` # Aggregate the data by Year, and display the result dataframe df_table1 = average_catch_and_value_by_year(2005, 2010) df_table1 ``` ### Average catch sum & real value by fishing entity or region ``` # Aggregate the data by Fishing entity, and display the result dataframe df_table2 = average_catch_and_value_by_region() df_table2.head(10) # display only first 10 records ``` ### Average catch sum & real value by year and fishing entity (parameters: start year, end year, region) ``` # Aggregate the data by Year, filter the data to a Region, and display the result dataframe df_table3 = average_catch_and_value_by_year_and_region(2010, 2015, 'Iceland') df_table3 ``` ### Total catch sum & real value of all commercial groups ``` # Aggregate the data by Commercial group, and display the result dataframe df_table4 = catch_and_value_by_commercial_groups() df_table4 ``` ### Total catch sum & real value of all commercial groups by year (parameters: start year, end year) ``` # Aggregate the data by Year and Commercial group, and display the result dataframe df_table5 = catch_and_value_by_commercial_groups_and_year(2010, 2011) df_table5 ```
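The five helper functions above all follow the same filter-by-year-then-groupby pattern; if more scenarios are added, a single parameterised helper keeps them consistent. Below is a rough sketch of that idea; the `summarise` name and its defaults are illustrative, not part of the original analysis:

```
# generic filter-then-aggregate helper mirroring the pattern of the functions above
def summarise(data, group_cols, start_year=None, end_year=None, agg='mean',
              value_cols=('catch_sum', 'real_value')):
    out = data
    if start_year is not None:
        out = out[out['year'] >= start_year]
    if end_year is not None:
        out = out[out['year'] <= end_year]
    return (out.groupby(list(group_cols), as_index=False)[list(value_cols)]
               .agg(agg)
               .round(2))

# e.g. average catch sum & real value by year for Iceland, 2010-2015
# summarise(df_rfmo[df_rfmo['fishing_entity'] == 'Iceland'], ['year'], 2010, 2015, 'mean')
```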
## Dogs v Cats super-charged!

```
# Put these at the top of every notebook, to get automatic reloading and inline plotting
%reload_ext autoreload
%autoreload 2
%matplotlib inline

# This file contains all the main external libs we'll use
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *

PATH = "../nik/data/dogscats/"
sz=299
arch=resnext50
bs=28

tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=bs, num_workers=4)
learn = ConvLearner.pretrained(arch, data, precompute=True, ps=0.5)

learn.fit(1e-2, 1)
learn.precompute=False
learn.fit(1e-2, 2, cycle_len=1)
learn.unfreeze()
lr=np.array([1e-4,1e-3,1e-2])
learn.fit(lr, 3, cycle_len=1)
learn.save('224_all_50')
learn.load('224_all_50')

log_preds,y = learn.TTA()
probs = np.mean(np.exp(log_preds),0)
accuracy(probs,y)
```

## Analyzing results

```
preds = np.argmax(probs, axis=1)
probs = probs[:,1]

from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y, preds)
plot_confusion_matrix(cm, data.classes)

def rand_by_mask(mask):
    return np.random.choice(np.where(mask)[0], 4, replace=False)

def rand_by_correct(is_correct):
    return rand_by_mask((preds == data.val_y)==is_correct)

def plot_val_with_title(idxs, title):
    imgs = np.stack([data.val_ds[x][0] for x in idxs])
    title_probs = [probs[x] for x in idxs]
    print(title)
    return plots(data.val_ds.denorm(imgs), rows=1, titles=title_probs)

def plots(ims, figsize=(12,6), rows=1, titles=None):
    f = plt.figure(figsize=figsize)
    for i in range(len(ims)):
        sp = f.add_subplot(rows, len(ims)//rows, i+1)
        sp.axis('Off')
        if titles is not None:
            sp.set_title(titles[i], fontsize=16)
        plt.imshow(ims[i])

def load_img_id(ds, idx):
    return np.array(PIL.Image.open(PATH+ds.fnames[idx]))

def plot_val_with_title(idxs, title):
    imgs = [load_img_id(data.val_ds,x) for x in idxs]
    title_probs = [probs[x] for x in idxs]
    print(title)
    return plots(imgs, rows=1, titles=title_probs, figsize=(16,8))

def most_by_mask(mask, mult):
    idxs = np.where(mask)[0]
    return idxs[np.argsort(mult * probs[idxs])[:4]]

def most_by_correct(y, is_correct):
    mult = -1 if (y==1)==is_correct else 1
    # note: '&' binds more tightly than '==', so both comparisons need their own parentheses
    return most_by_mask(((preds == data.val_y)==is_correct) & (data.val_y == y), mult)

plot_val_with_title(most_by_correct(0, False), "Most incorrect cats")
plot_val_with_title(most_by_correct(1, False), "Most incorrect dogs")
```
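Besides the most-incorrect images, it can also be informative to look at the predictions the model was least sure about, i.e. probabilities closest to 0.5. A small follow-on sketch, reusing the `probs` array and the plotting helper defined above:

```
# images the classifier was least confident about (probability closest to 0.5)
most_uncertain = np.argsort(np.abs(probs - 0.5))[:4]
plot_val_with_title(most_uncertain, "Most uncertain predictions")
```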
# Transaction Amt Dec ``` import pandas as pd import numpy as np import matplotlib.pylab as plt %matplotlib inline train = pd.read_parquet('../../data/train_FE004.parquet') test = pd.read_parquet('../../data/test_FE004.parquet') tt = pd.concat([train, test], axis=0, sort=False) pd.set_option('max_columns', 500) tt.head() tt.groupby('TransactionAmt').nunique() train['P_isproton']=(train['P_emaildomain']=='protonmail.com') train['R_isproton']=(train['R_emaildomain']=='protonmail.com') test['P_isproton']=(test['P_emaildomain']=='protonmail.com') test['R_isproton']=(test['R_emaildomain']=='protonmail.com') a = np.zeros(train.shape[0]) train["lastest_browser"] = a a = np.zeros(test.shape[0]) test["lastest_browser"] = a def setbrowser(df): df.loc[df["id_31"]=="samsung browser 7.0",'lastest_browser']=1 df.loc[df["id_31"]=="opera 53.0",'lastest_browser']=1 df.loc[df["id_31"]=="mobile safari 10.0",'lastest_browser']=1 df.loc[df["id_31"]=="google search application 49.0",'lastest_browser']=1 df.loc[df["id_31"]=="firefox 60.0",'lastest_browser']=1 df.loc[df["id_31"]=="edge 17.0",'lastest_browser']=1 df.loc[df["id_31"]=="chrome 69.0",'lastest_browser']=1 df.loc[df["id_31"]=="chrome 67.0 for android",'lastest_browser']=1 df.loc[df["id_31"]=="chrome 63.0 for android",'lastest_browser']=1 df.loc[df["id_31"]=="chrome 63.0 for ios",'lastest_browser']=1 df.loc[df["id_31"]=="chrome 64.0",'lastest_browser']=1 df.loc[df["id_31"]=="chrome 64.0 for android",'lastest_browser']=1 df.loc[df["id_31"]=="chrome 64.0 for ios",'lastest_browser']=1 df.loc[df["id_31"]=="chrome 65.0",'lastest_browser']=1 df.loc[df["id_31"]=="chrome 65.0 for android",'lastest_browser']=1 df.loc[df["id_31"]=="chrome 65.0 for ios",'lastest_browser']=1 df.loc[df["id_31"]=="chrome 66.0",'lastest_browser']=1 df.loc[df["id_31"]=="chrome 66.0 for android",'lastest_browser']=1 df.loc[df["id_31"]=="chrome 66.0 for ios",'lastest_browser']=1 return df train=setbrowser(train) test=setbrowser(test) train['card1_count_full'] = train['card1'].map(pd.concat([train['card1'], test['card1']], ignore_index=True).value_counts(dropna=False)) test['card1_count_full'] = test['card1'].map(pd.concat([train['card1'], test['card1']], ignore_index=True).value_counts(dropna=False)) train['card2_count_full'] = train['card2'].map(pd.concat([train['card2'], test['card2']], ignore_index=True).value_counts(dropna=False)) test['card2_count_full'] = test['card2'].map(pd.concat([train['card2'], test['card2']], ignore_index=True).value_counts(dropna=False)) train['card3_count_full'] = train['card3'].map(pd.concat([train['card3'], test['card3']], ignore_index=True).value_counts(dropna=False)) test['card3_count_full'] = test['card3'].map(pd.concat([train['card3'], test['card3']], ignore_index=True).value_counts(dropna=False)) train['card4_count_full'] = train['card4'].map(pd.concat([train['card4'], test['card4']], ignore_index=True).value_counts(dropna=False)) test['card4_count_full'] = test['card4'].map(pd.concat([train['card4'], test['card4']], ignore_index=True).value_counts(dropna=False)) train['card5_count_full'] = train['card5'].map(pd.concat([train['card5'], test['card5']], ignore_index=True).value_counts(dropna=False)) test['card5_count_full'] = test['card5'].map(pd.concat([train['card5'], test['card5']], ignore_index=True).value_counts(dropna=False)) train['card6_count_full'] = train['card6'].map(pd.concat([train['card6'], test['card6']], ignore_index=True).value_counts(dropna=False)) test['card6_count_full'] = test['card6'].map(pd.concat([train['card6'], test['card6']], 
ignore_index=True).value_counts(dropna=False)) train['addr1_count_full'] = train['addr1'].map(pd.concat([train['addr1'], test['addr1']], ignore_index=True).value_counts(dropna=False)) test['addr1_count_full'] = test['addr1'].map(pd.concat([train['addr1'], test['addr1']], ignore_index=True).value_counts(dropna=False)) train['addr2_count_full'] = train['addr2'].map(pd.concat([train['addr2'], test['addr2']], ignore_index=True).value_counts(dropna=False)) test['addr2_count_full'] = test['addr2'].map(pd.concat([train['addr2'], test['addr2']], ignore_index=True).value_counts(dropna=False)) train['TransactionAmt_to_mean_card1'] = train['TransactionAmt'] / train.groupby(['card1'])['TransactionAmt'].transform('mean') train['TransactionAmt_to_mean_card4'] = train['TransactionAmt'] / train.groupby(['card4'])['TransactionAmt'].transform('mean') train['TransactionAmt_to_std_card1'] = train['TransactionAmt'] / train.groupby(['card1'])['TransactionAmt'].transform('std') train['TransactionAmt_to_std_card4'] = train['TransactionAmt'] / train.groupby(['card4'])['TransactionAmt'].transform('std') test['TransactionAmt_to_mean_card1'] = test['TransactionAmt'] / test.groupby(['card1'])['TransactionAmt'].transform('mean') test['TransactionAmt_to_mean_card4'] = test['TransactionAmt'] / test.groupby(['card4'])['TransactionAmt'].transform('mean') test['TransactionAmt_to_std_card1'] = test['TransactionAmt'] / test.groupby(['card1'])['TransactionAmt'].transform('std') test['TransactionAmt_to_std_card4'] = test['TransactionAmt'] / test.groupby(['card4'])['TransactionAmt'].transform('std') train['id_02_to_mean_card1'] = train['id_02'] / train.groupby(['card1'])['id_02'].transform('mean') train['id_02_to_mean_card4'] = train['id_02'] / train.groupby(['card4'])['id_02'].transform('mean') train['id_02_to_std_card1'] = train['id_02'] / train.groupby(['card1'])['id_02'].transform('std') train['id_02_to_std_card4'] = train['id_02'] / train.groupby(['card4'])['id_02'].transform('std') test['id_02_to_mean_card1'] = test['id_02'] / test.groupby(['card1'])['id_02'].transform('mean') test['id_02_to_mean_card4'] = test['id_02'] / test.groupby(['card4'])['id_02'].transform('mean') test['id_02_to_std_card1'] = test['id_02'] / test.groupby(['card1'])['id_02'].transform('std') test['id_02_to_std_card4'] = test['id_02'] / test.groupby(['card4'])['id_02'].transform('std') train['D15_to_mean_card1'] = train['D15'] / train.groupby(['card1'])['D15'].transform('mean') train['D15_to_mean_card4'] = train['D15'] / train.groupby(['card4'])['D15'].transform('mean') train['D15_to_std_card1'] = train['D15'] / train.groupby(['card1'])['D15'].transform('std') train['D15_to_std_card4'] = train['D15'] / train.groupby(['card4'])['D15'].transform('std') test['D15_to_mean_card1'] = test['D15'] / test.groupby(['card1'])['D15'].transform('mean') test['D15_to_mean_card4'] = test['D15'] / test.groupby(['card4'])['D15'].transform('mean') test['D15_to_std_card1'] = test['D15'] / test.groupby(['card1'])['D15'].transform('std') test['D15_to_std_card4'] = test['D15'] / test.groupby(['card4'])['D15'].transform('std') train['D15_to_mean_addr1'] = train['D15'] / train.groupby(['addr1'])['D15'].transform('mean') train['D15_to_mean_card4'] = train['D15'] / train.groupby(['card4'])['D15'].transform('mean') train['D15_to_std_addr1'] = train['D15'] / train.groupby(['addr1'])['D15'].transform('std') train['D15_to_std_card4'] = train['D15'] / train.groupby(['card4'])['D15'].transform('std') test['D15_to_mean_addr1'] = test['D15'] / 
test.groupby(['addr1'])['D15'].transform('mean') test['D15_to_mean_card4'] = test['D15'] / test.groupby(['card4'])['D15'].transform('mean') test['D15_to_std_addr1'] = test['D15'] / test.groupby(['addr1'])['D15'].transform('std') test['D15_to_std_card4'] = test['D15'] / test.groupby(['card4'])['D15'].transform('std') train['Transaction_day_of_week'] = np.floor((train['TransactionDT'] / (3600 * 24) - 1) % 7) test['Transaction_day_of_week'] = np.floor((test['TransactionDT'] / (3600 * 24) - 1) % 7) train['Transaction_hour_of_day'] = np.floor(train['TransactionDT'] / 3600) % 24 test['Transaction_hour_of_day'] = np.floor(test['TransactionDT'] / 3600) % 24 train['TransactionAmt_decimal'] = ((train['TransactionAmt'] - train['TransactionAmt'].astype(int)) * 1000).astype(int) test['TransactionAmt_decimal'] = ((test['TransactionAmt'] - test['TransactionAmt'].astype(int)) * 1000).astype(int) from sklearn import preprocessing for feature in ['id_02__id_20', 'id_02__D8', 'D11__DeviceInfo', 'DeviceInfo__P_emaildomain', 'P_emaildomain__C2', 'card2__dist1', 'card1__card5', 'card2__id_20', 'card5__P_emaildomain', 'addr1__card1']: f1, f2 = feature.split('__') train[feature] = train[f1].astype(str) + '_' + train[f2].astype(str) test[feature] = test[f1].astype(str) + '_' + test[f2].astype(str) le =preprocessing.LabelEncoder() le.fit(list(train[feature].astype(str).values) + list(test[feature].astype(str).values)) train[feature] = le.transform(list(train[feature].astype(str).values)) test[feature] = le.transform(list(test[feature].astype(str).values)) for feature in ['id_34', 'id_36']: # Count encoded for both train and test train[feature + '_count_full'] = train[feature].map(pd.concat([train[feature], test[feature]], ignore_index=True).value_counts(dropna=False)) test[feature + '_count_full'] = test[feature].map(pd.concat([train[feature], test[feature]], ignore_index=True).value_counts(dropna=False)) for feature in ['id_01', 'id_31', 'id_33', 'id_35', 'id_36']: # Count encoded separately for train and test train[feature + '_count_dist'] = train[feature].map(train[feature].value_counts(dropna=False)) test[feature + '_count_dist'] = test[feature].map(test[feature].value_counts(dropna=False)) train.to_parquet('../../data/train_FE005.parquet') test.to_parquet('../../data/test_FE005.parquet') ```
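The card and address count encodings above repeat the same pattern once per column; when more columns need the same treatment it can be expressed once as a loop. A sketch under the same assumptions as the cells above (the `add_count_full` helper name is illustrative):

```
# same full-dataset count encoding as the cells above, written as a reusable loop
def add_count_full(train_df, test_df, columns):
    for col in columns:
        counts = pd.concat([train_df[col], test_df[col]],
                           ignore_index=True).value_counts(dropna=False)
        train_df[col + '_count_full'] = train_df[col].map(counts)
        test_df[col + '_count_full'] = test_df[col].map(counts)

# equivalent to the card1..card6 / addr1..addr2 cells above:
# add_count_full(train, test, ['card1', 'card2', 'card3', 'card4', 'card5', 'card6', 'addr1', 'addr2'])
```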
```
# without choosing values; work on 19 features

# Importing the libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv("dataset.csv")
df.head()

# Drop customerID because it is unnecessary
df.drop('customerID', axis=1, inplace=True)
df.shape
df.describe()
df.isna().sum()
```

## Fill NaN

```
# fill null values with the mean value of that feature
df["tenure"].fillna(df["tenure"].mean(), inplace=True)
# fill null values with the mode (the value repeated more often than any other) of that feature
df["SeniorCitizen"].fillna(df["SeniorCitizen"].mode()[0], inplace=True)
df.isna().sum()

corr=df.corr()
plt.figure(figsize=(14,6))
sns.heatmap(corr,annot=True)
```

### Standardization

```
from sklearn.preprocessing import StandardScaler
sc=StandardScaler()
df['tenure']=sc.fit_transform(df['tenure'].values.reshape(-1,1))
df['MonthlyCharges']=sc.fit_transform(df['MonthlyCharges'].values.reshape(-1,1))
df['TotalCharges']=sc.fit_transform(df['TotalCharges'].values.reshape(-1,1))
```

### Label encoding

```
df['Partner'].dtype

if df['Partner'].dtype == 'O':
    print('yes')
else:
    print('no')

from sklearn.preprocessing import LabelEncoder
le = {}
le_name_mapping = {}
for i in df.columns:
    if df[i].dtype == 'O':
        le[i] = LabelEncoder()
        df[i] = le[i].fit_transform(df[i])
        le_name_mapping[i] = dict(zip(le[i].classes_, le[i].transform(le[i].classes_)))
        print(i,":-",le_name_mapping[i])

corr=df.corr()
plt.figure(figsize=(35,24))
sns.heatmap(corr,annot=True)
corr

X = df.drop('Churn', axis = 1)
y = df['Churn']
y.value_counts()
```

* The data is biased toward label 0. An optional undersampling step was noted here but not executed: new_df = pd.DataFrame() new_df=df[df['Churn'] == 0 ].iloc[:1869].copy() new_df new_df2 = pd.DataFrame() new_df2=df[df['Churn'] == 1 ].copy() new_df=new_df.append(new_df2) #df = df1.append(df2,ignore_index=True) new_df.shape

```
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
print(y_train.shape)
y_train.value_counts()
```

* the ratio of label 0 = 73.696 % from y train

```
print(y_test.shape)
y_test.value_counts()
```

* the ratio of label 0 = 72.989 % from y test

## LDA

```
# Applying LDA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components = 8)
X_train = lda.fit_transform(X_train, y_train)
X_test = lda.transform(X_test)
```

## Logistic Regression

```
# Importing the libraries
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Fitting Logistic Regression to the Training set
log_reg = LogisticRegression()
log_reg.fit(X_train,y_train)

# Predicting the Test set results
y_pred = log_reg.predict(X_test)

# accuracy
print('Accuracy for train= ',round(log_reg.score(X_train,y_train),4) *100, '%')
print('Accuracy for test = ',round(accuracy_score(y_test,y_pred),4) *100, '%')

# Making the Confusion Matrix will contain the correct and incorrect prediction on the dataset.
from sklearn.metrics import confusion_matrix cm_log_reg = confusion_matrix(y_test, y_pred) print(cm_log_reg) from sklearn.metrics import classification_report print(classification_report(y_test, y_pred)) ``` ## KNN ``` from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier(n_neighbors=5) knn.fit(X_train,y_train) knn_pred=knn.predict(X_test) # Accuracy print('Accuracy for train = ',round(knn.score(X_train,y_train),4) *100, '%\n') print('Accuracy for test = ',round(accuracy_score(y_test,knn_pred),4) *100, '%\n') # Making the Confusion Matrix will contain the correct and incorrect prediction on the dataset. from sklearn.metrics import confusion_matrix cm_knn = confusion_matrix(y_test, knn_pred) print('Confusion Matrix :- \n',cm_knn) import sklearn.metrics as metrics score=[] for k in range(1,100): knn=KNeighborsClassifier(n_neighbors=k,weights='uniform') knn.fit(X_train,y_train) predKNN=knn.predict(X_test) accuracy=metrics.accuracy_score(predKNN,y_test) score.append(accuracy*100) print ('k = ',k,'-> accuracy : ',accuracy) print(score.index(max(score))+1,' : ',round(max(score),2),'%') knn = KNeighborsClassifier(n_neighbors=97) knn.fit(X_train,y_train) knn_pred=knn.predict(X_test) # Accuracy print('Accuracy for train = ',round(knn.score(X_train,y_train),4) *100, '%\n') print('Accuracy for test = ',round(accuracy_score(y_test,knn_pred),4) *100, '%\n') # Making the Confusion Matrix will contain the correct and incorrect prediction on the dataset. from sklearn.metrics import confusion_matrix cm_knn = confusion_matrix(y_test, knn_pred) print('Confusion Matrix :- \n',cm_knn) from sklearn.metrics import classification_report print(classification_report(y_test, knn_pred)) train_accuracy=np.empty(len(range(1,100))) test_accuracy=np.empty(len(range(1,100))) for i, k in enumerate(range(1,100)): # Setup a k-NN Classifier with k neighbors: knn knn = KNeighborsClassifier(k) # Fit the classifier to the training data knn.fit(X_train, y_train) #Compute accuracy on the training set train_accuracy[i] = knn.score(X_train,y_train) #Compute accuracy on the testing set test_accuracy[i] = knn.score(X_test, y_test) # Generate plot plt.title('k-NN: Varying Number of Neighbors') plt.plot(range(1,100), test_accuracy, label = 'Testing Accuracy') plt.plot(range(1,100), train_accuracy, label = 'Training Accuracy') plt.legend() plt.xlabel('Number of Neighbors') plt.ylabel('Accuracy') plt.show() ``` ## SVM ``` from sklearn.svm import SVC svm_rbf=SVC(kernel='rbf').fit(X_train,y_train) svm_rbf_pred=svm_rbf.predict(X_test) # Accuracy print('Accuracy for train = ',round(svm_rbf.score(X_train, y_train),2) *100, '%\n') print('Accuracy for test = ',round(accuracy_score(y_test,svm_rbf_pred),4) *100, '% \n') # Making the Confusion Matrix will contain the correct and incorrect prediction on the dataset. cm_svm_rbf = confusion_matrix(y_test, svm_rbf_pred) print('Confusion Matrix :- \n',cm_svm_rbf) from sklearn.metrics import classification_report print(classification_report(y_test, svm_rbf_pred)) svm_linear=SVC(kernel='linear').fit(X_train,y_train) svm_pred=svm_linear.predict(X_test) # Accuracy print('Accuracy for train = ',round(svm_linear.score(X_train, y_train),3) *100, '%\n') print('Accuracy for test = ',round(accuracy_score(y_test,svm_pred),4) *100, '% \n') # Making the Confusion Matrix will contain the correct and incorrect prediction on the dataset. 
cm_svm_lin = confusion_matrix(y_test, svm_pred) print('Confusion Matrix :- \n',cm_svm_lin) from sklearn.metrics import classification_report print(classification_report(y_test, svm_pred)) svm_poly=SVC(kernel='poly').fit(X_train,y_train) svm_polr_pred=svm_poly.predict(X_test) # Accuracy print('Accuracy for train = ',round(svm_poly.score(X_train, y_train),3) *100, '%\n') print('Accuracy for test = ',round(accuracy_score(y_test,svm_polr_pred),4) *100, '% \n') # Making the Confusion Matrix will contain the correct and incorrect prediction on the dataset. cm_svm_polr = confusion_matrix(y_test, svm_polr_pred) print('Confusion Matrix :- \n',cm_svm_polr) from sklearn.metrics import classification_report print(classification_report(y_test, svm_polr_pred)) ``` ## Naive bayes ``` from sklearn.naive_bayes import GaussianNB nb=GaussianNB().fit(X_train,y_train) nb_pred=nb.predict(X_test) # Accuracy print('Accuracy for train = ',round(nb.score(X_train, y_train),3) *100, '%\n') print('Accuracy for test = ',round(accuracy_score(y_test,nb_pred),4) *100, '%\n') # Making the Confusion Matrix will contain the correct and incorrect prediction on the dataset. cm_nb = confusion_matrix(y_test, nb_pred) print('Confusion Matrix :- \n',cm_nb) from sklearn.metrics import classification_report print(classification_report(y_test, nb_pred)) ``` ## Decision Tree ``` # Fitting Decision Tree Classification to the Training set from sklearn.tree import DecisionTreeClassifier dt = DecisionTreeClassifier(criterion = 'entropy', random_state = 0) dt.fit(X_train, y_train) # Predicting the Test set results y_pred_dt = dt.predict(X_test) # Accuracy print('Accuracy for train = ',round(dt.score(X_train, y_train),3) *100, '%\n') print('Accuracy for test = ',round(accuracy_score(y_test,y_pred_dt),3) *100, '%\n') # Making the Confusion Matrix from sklearn.metrics import confusion_matrix cm_dt = confusion_matrix(y_test, y_pred_dt) print('Confusion Matrix :- \n',cm_dt) from sklearn.metrics import classification_report print(classification_report(y_test, y_pred_dt)) ``` ## Random Forest ``` from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import accuracy_score,confusion_matrix rfc=RandomForestClassifier(n_estimators=10,random_state=45,criterion='gini').fit(X_train,y_train) rfc_pred=rfc.predict(X_test) accuracy_score(y_test,rfc_pred) # Accuracy print('Accuracy for train = ',round(rfc.score(X_train, y_train),3) *100, '%\n') print('Accuracy for test = ',round(accuracy_score(y_test,rfc_pred),3) *100, '%\n') # Making the Confusion Matrix from sklearn.metrics import confusion_matrix cm_rfc = confusion_matrix(y_test, rfc_pred) print('Confusion Matrix :- \n',cm_rfc) from sklearn.metrics import classification_report print(classification_report(y_test, rfc_pred)) ```
github_jupyter
``` from __future__ import division, print_function import numpy as np from collections import OrderedDict import logging from IPython.display import display %matplotlib inline import matplotlib import matplotlib.pyplot as plt from astropy.io import fits import astropy.wcs from astropy import coordinates import astropy.units as apu from astropy import table import astropyp from astropyp.wrappers.astromatic import ldac from astropyp.phot import stack import bd_search alogger = logging.getLogger('astropyp') alogger.setLevel(logging.INFO) idx_connect = 'sqlite:////media/data-beta/users/fmooleka/decam/decam.db' ref_path = '/media/data-beta/users/fmooleka/decam/catalogs/ref/' # SExtractor 'extract' detection parameters conv_filter = np.load('/media/data-beta/users/fmooleka/2016decam/5x5gauss.npy') sex_params = { 'extract': { 'thresh': 40, #'err':, 'minarea': 3, # default 'conv': conv_filter, #'deblend_nthresh': 32, #default 'deblend_cont': 0.001, #'clean': True, #default #'clean_param': 1 #default }, 'kron_k': 2.5, 'kron_min_radius': 3.5, 'filter': conv_filter, #'thresh': 1.5 # *bkg.globalrms } obj='F100' refname = '2MASS' #refname = 'UCAC4' fullref = ldac.get_table_from_ldac(ref_path+'{0}-{1}.fits'.format(obj, refname)) import warnings from astropy.utils.exceptions import AstropyWarning warnings.simplefilter('ignore', category=AstropyWarning) def get_exp_files(expnum, night, filtr, idx_connect): sql = 'select * from decam_obs where expnum={0} and filter like "{1}%" and dtcaldat="{2}"'.format( expnum, filtr, night) exp_info = astropyp.db_utils.index.query(sql, idx_connect) img_filename = exp_info[exp_info['PRODTYPE']=='image'][0]['filename'] img = fits.open(img_filename) dqmask_filename = exp_info[exp_info['PRODTYPE']=='dqmask'][0]['filename'] dqmask = fits.open(dqmask_filename) return img, dqmask min_flux = 1000 min_amplitude = 1000 good_amplitude = 50 calibrate_amplitude = 200 frame = 1 explist = [442433, 442434, 442435] aper_radius = 8 ccds = [] for expnum in explist: #img, dqmask = get_exp_files(expnum, "2015-05-26", "i", idx_connect) img, dqmask = get_exp_files(expnum, "2015-05-26", "z", idx_connect) header = img[frame].header wcs = astropy.wcs.WCS(header) img_data = img[frame].data dqmask_data = dqmask[frame].data ccd = astropyp.phot.phot.SingleImage(header, img_data, dqmask_data, wcs=wcs, gain=4., exptime=30, aper_radius=aper_radius) ccds.append(ccd) ccd_stack = stack.Stack(ccds, 1) ccd_stack.detect_sources(min_flux=min_flux, good_amplitude=good_amplitude, calibrate_amplitude=calibrate_amplitude, psf_amplitude=1000, sex_params=sex_params, subtract_bkg=True, windowed=False) ccd_stack.get_transforms() def stack_full_images(imgs, ref_index, tx_solutions, dqmasks = None, combine_method='mean', dqmask_min=0, bad_pix_val=1, buf=10, order=3): from scipy import interpolate from astropy.nddata import extract_array, overlap_slices from astropyp.astrometry import ImageSolution buf = float(buf) # Get the minimum size of the final stack by projecting all of # the images onto the reference frame img_x = np.arange(0, imgs[ref_index].shape[1], 1) img_y = np.arange(0, imgs[ref_index].shape[0], 1) xmin = img_x[0]-buf xmax = img_x[-1]+buf ymin = img_y[0]-buf ymax = img_y[-1]+buf for n in range(len(imgs)): if n!=ref_index: tx_x,tx_y = tx_solutions[n].transform_coords( x=[img_x[0], img_x[-1]], y=[img_y[0], img_y[-1]]) if tx_x[0]<xmin: xmin = tx_x[0] if tx_x[1]>xmax: xmax = tx_x[1] if tx_y[0]<ymin: ymin = tx_y[0] if tx_y[1]>ymax: ymax = tx_y[1] x_tx = OrderedDict([('Intercept', xmin), ('A_1_0', 1.0), 
('A_0_1', 0.0)]) y_tx = OrderedDict([('Intercept', ymin), ('B_1_0', 1.0), ('B_0_1', 0.0)]) # Modify the tx solutions to fit the coadd for n in range(len(imgs)): if n!= ref_index: new_x_tx = tx_solutions[n].x_tx.copy() new_y_tx = tx_solutions[n].y_tx.copy() new_x_tx['Intercept'] += xmin new_y_tx['Intercept'] += ymin tx_solutions[n] = ImageSolution(x_tx=new_x_tx, y_tx=new_y_tx, order=tx_solutions[n].order) else: tx_solutions[n] = ImageSolution(x_tx=x_tx, y_tx=y_tx, order=1) coadd_x = np.arange(0, xmax-xmin, 1) coadd_y = np.arange(0, ymax-ymin, 1) Xc, Yc = np.meshgrid(coadd_x, coadd_y) patches = [] # Reproject each image to the coadded image for n in range(len(imgs)): tx_x,tx_y = tx_solutions[n].transform_coords( x=Xc.flatten(),y=Yc.flatten()) tx_x = np.array(tx_x).reshape(Xc.shape) tx_y = np.array(tx_y).reshape(Yc.shape) img_x = np.arange(0, imgs[n].shape[1], 1) img_y = np.arange(0, imgs[n].shape[0], 1) # Create an interpolating function and reproject the image data_func = interpolate.RectBivariateSpline(img_y, img_x, imgs[n],kx=order,ky=order) continue patch = data_func(tx_y.flatten(), tx_x.flatten(), grid=False) patch = patch.reshape(Xc.shape) #Create the dqmask if dqmasks is not None: points = (img_y, img_x) values = dqmasks[n] dq_func = interpolate.RegularGridInterpolator( points, values, method='nearest', fill_value=bad_pix_val, bounds_error=False) dqmask = dq_func(zip(tx_y.flatten(),tx_x.flatten())) dqmask = dqmask.reshape(Xc.shape) dqmask = dqmask.astype(bool) else: dqmask = None # Apply the dqmask to the image patch[dqmask>dqmask_min] = np.nan patch[dqmask>dqmask_min] patch = np.ma.array(patch) patch.mask = np.isnan(patch) patches.append(patch) return None,None,None stack = np.ma.mean(patches, axis=0) dqmask = patches[0].mask for n in range(1,len(patches)): dqmask = np.bitwise_and(dqmask, patches[n].mask) return stack, dqmask, patches #imgs = [ccd.img for ccd in ccd_stack.ccds] #dqmasks = [ccd.dqmask for ccd in ccd_stack.ccds] imgs = [ccd.img[:200,:100] for ccd in ccd_stack.ccds] dqmasks = [ccd.dqmask[:200,:100] for ccd in ccd_stack.ccds] tx_solutions = [ccd_stack.tx_solutions[(1,0)], None, ccd_stack.tx_solutions[(1,2)]] stack, dqmask, patches = stack_full_images(imgs, 1, tx_solutions, dqmasks) imgs = [ccd.img for ccd in ccd_stack.ccds] dqmasks = [ccd.dqmask for ccd in ccd_stack.ccds] tx_solutions = [ccd_stack.tx_solutions[(1,0)], None, ccd_stack.tx_solutions[(1,2)]] %time stack, dqmask, patches = stack_full_images(imgs, 1, tx_solutions, dqmasks) imgs = [ccd.img[:1000,:500] for ccd in ccd_stack.ccds] dqmasks = [ccd.dqmask[:1000,:500] for ccd in ccd_stack.ccds] tx_solutions = [ccd_stack.tx_solutions[(1,0)], None, ccd_stack.tx_solutions[(1,2)]] %time stack, dqmask, patches = stack_full_images(imgs, 1, tx_solutions, dqmasks) imgs = [ccd.img for ccd in ccd_stack.ccds] dqmasks = [ccd.dqmask for ccd in ccd_stack.ccds] tx_solutions = [ccd_stack.tx_solutions[(1,0)], None, ccd_stack.tx_solutions[(1,2)]] %time stack, dqmask, patches=stack_full_images(imgs, 1, tx_solutions, dqmasks, order=5) imgs = [ccd.img for ccd in ccd_stack.ccds] dqmasks = [ccd.dqmask for ccd in ccd_stack.ccds] tx_solutions = [ccd_stack.tx_solutions[(1,0)], None, ccd_stack.tx_solutions[(1,2)]] %time stack, dqmask, patches=stack_full_images(imgs, 1, tx_solutions, dqmasks, order=3) stk = stack.filled(0) max_offset=3 ccd = astropyp.phot.phot.SingleImage( img=stk, dqmask=dqmask, gain=4., exptime=30, aper_radius=8) ccd.detect_sources(sex_params, subtract_bkg=True) ccd.select_psf_sources(min_flux, min_amplitude, 
edge_dist=aper_radius+max_offset) psf_array = ccd.create_psf() ccd.show_psf() good_idx = ccd.catalog.sources['peak']>calibrate_amplitude #good_idx = ccd.catalog.sources['peak']>good_amplitude good_idx = good_idx & (ccd.catalog.sources['pipeline_flags']==0) result = ccd.perform_psf_photometry(indices=good_idx) good_idx = ccd.catalog.sources['peak']>calibrate_amplitude good_idx = good_idx & (ccd.catalog.sources['pipeline_flags']==0) good_idx = good_idx & np.isfinite(ccd.catalog.sources['psf_mag']) good_sources = ccd.catalog.sources[good_idx] print('rms', np.sqrt(np.sum(good_sources['psf_mag_err']**2/len(good_sources)))) print('mean', np.mean(good_sources['psf_mag_err'])) print('median', np.median(good_sources['psf_mag_err'])) print('stddev', np.std(good_sources['psf_mag_err'])) bad_count = np.sum(good_sources['psf_mag_err']>.05) print('bad psf error: {0}, or {1}%'.format(bad_count, bad_count/len(good_sources)*100)) print('Better than 5%: {0} of {1}'.format(np.sum(good_sources['psf_mag_err']<=.05), len(good_sources))) print('Better than 2%: {0} of {1}'.format(np.sum(good_sources['psf_mag_err']<=.02), len(good_sources))) good_sources['aper_flux','psf_flux','peak','psf_mag_err'][good_sources['psf_mag_err']>.05] max_offset=3 ccd = astropyp.phot.phot.SingleImage( img=stk, dqmask=dqmask, gain=4., exptime=30, aper_radius=8) ccd.detect_sources(sex_params, subtract_bkg=True) ccd.select_psf_sources(min_flux, min_amplitude, edge_dist=aper_radius+max_offset) psf_array = ccd.create_psf() ccd.show_psf() #good_idx = ccd.catalog.sources['peak']>calibrate_amplitude good_idx = ccd.catalog.sources['peak']>good_amplitude good_idx = good_idx & (ccd.catalog.sources['pipeline_flags']==0) result = ccd.perform_psf_photometry(indices=good_idx) #good_idx = ccd.catalog.sources['peak']>calibrate_amplitude good_idx = ccd.catalog.sources['peak']>good_amplitude good_idx = good_idx & (ccd.catalog.sources['pipeline_flags']==0) good_idx = good_idx & np.isfinite(ccd.catalog.sources['psf_mag']) good_sources = ccd.catalog.sources[good_idx] print('rms', np.sqrt(np.sum(good_sources['psf_mag_err']**2/len(good_sources)))) print('mean', np.mean(good_sources['psf_mag_err'])) print('median', np.median(good_sources['psf_mag_err'])) print('stddev', np.std(good_sources['psf_mag_err'])) bad_count = np.sum(good_sources['psf_mag_err']>.05) print('bad psf error: {0}, or {1}%'.format(bad_count, bad_count/len(good_sources)*100)) print('Better than 5%: {0} of {1}'.format(np.sum(good_sources['psf_mag_err']<=.05), len(good_sources))) print('Better than 2%: {0} of {1}'.format(np.sum(good_sources['psf_mag_err']<=.02), len(good_sources))) good_sources['aper_flux','psf_flux','peak','psf_mag_err'][good_sources['psf_mag_err']>.05] ```
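As a quick visual sanity check on the coadd (not part of the original notebook), the stacked image and the combined data-quality mask can be displayed next to each other. This sketch assumes the `stk` array and `dqmask` produced in the cells above are still defined.

```
import numpy as np
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))

# Clip the display stretch to the 5th-99th percentile so bright sources
# do not wash out the background.
vmin, vmax = np.percentile(stk, [5, 99])
ax1.imshow(stk, origin='lower', cmap='gray', vmin=vmin, vmax=vmax)
ax1.set_title('Coadded image')

ax2.imshow(dqmask, origin='lower', cmap='gray')
ax2.set_title('Combined dqmask')

plt.tight_layout()
plt.show()
```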
``` %matplotlib inline import numpy as np import scipy.stats as stats import seaborn as sns import matplotlib.pyplot as plt import pandas as pd import random sns.set(style="whitegrid") ``` # Three or More Variables What happens if there are three or more variables? Given a 2d flat surface, there isn't a whole lot that can be done. Here are a few suggestions. ## Color It's incredibly difficult to compare three or more variables using a static medium. While it is possible to compare three variables using interactive plots, these are not without their drawbacks. The major drawback is that any particular view is just as deceiving as having had a single 3d view. This means that you can only experience them while you are moving the chart. Another approach is to use color, if one of the variables is categorical. You simply assign a color to each category. ``` random.seed(57483249) data = pd.DataFrame(np.concatenate([stats.norm.rvs( 90.0, 9.0, 100), stats.norm.rvs(120.0, 12.0, 100)])) data.columns = ["X"] data["Y"] = data["X"].apply(lambda x: 5.0 + 0.4 * x + stats.norm.rvs( 0.0, 8.0, 1)[0]) data["Z"] = ["a"] * 100 + ["b"] * 100 data.head() figure = plt.figure(figsize=(10, 6)) colors = data["Z"].apply(lambda x: "DarkRed" if x == "a" else "CornflowerBlue") axes = figure.add_subplot(1, 1, 1) axes.scatter( data[ "X"], data["Y"], marker="o", color=colors) axes.set_title("X v. Y") axes.set_xlabel( "X") axes.set_ylabel( "Y") plt.show() plt.close() ``` ## Subsetting As you begin to explore the data in the context of your question or problem, there may arise subsets of data that strike your interest. You may only be interested in repeat customers or women or men or people with a BMI over 25. You can take your entire data set, subset it to various groups of interest and repeat the process of EDA all over again. Do the relationships still hold? ## Projections When we look a scatterplot of a variable like height versus weight, we are looking at the variables in their measured dimensions on the standard Cartesian plane. This may not, however, be the best space for viewing the variation in the data. Principal Component Analysis (PCA) first projects a data set into its dimension of highest variance and successively finds orthogonal dimensions that explain the remaining variance. The results are thus a "change of basis" for the data (if you remember what a "change of basis" is from linear algebra, that might help). It accomplishes this by eigen-decomposition of the correlation matrix after mean centering and normalizing of the data or singular value decomposition of the data matrix. When you're done, you have N linear combinations of your variables (the principal components) where the weights on each principle component are the *loadings* for each variable and the tranformed data is called the *scores*. The downside to PCA is that the new *basis* is a linear combination of variables and not easy to explain or understand. PCA can only be used with numerical data. If there are only a few categorical variables, color might help or you may need to rely on small multiples. Additionally, PCA is often overused. You almost never need it for modeling until you reach upwards of 100s or 1000s of variables. Additionally, the principal components are difficult to explain. We will talk more about PCA in a later chapter. Another projection is MDS (Multi-Dimensional Scaling). If you have a similarity (or distance) metric for your observations, you can use MDS to project the entire dataset onto a flat plane. 
Unfortunately, MDS is right up there with Word Clouds in terms of true usefulness. ## Lattice Plot One plot you will sometimes see in EDA is the Lattice plot (it goes by a variety of other names as well: Matrix plot, scatterplot matrix, pairs plot, etc.). The Lattice plot will automatically create a set of pairwise comparisons between all variables: ![Lattice Plot](../../resources/eda/lattice_plot_example.png) While the Lattice plot above is only for numerical variables, there are Lattice plots that handle categorical variables as well. Because the panels below the diagonal are mirror images of the panels above it, we often do something different with one of the two triangles. Here the library has added a LOWESS line; other libraries show the correlation coefficient instead. So why not use them? First, they don't scale very well. The example above has 8 variables and that is probably already too many. Second, there's a lot of "gee whiz" going on here. It *looks* impressive but it violates one of the goals of EDA as presented herein: a *methodical* exploration of your data. The method of EDA described herein asks you to examine each variable individually, making notes about the variable, your domain knowledge, and what you think you'll find. You then look at the variable and make notes about what you did find. If someone goes to read your analysis, it should all fit, more or less, on one page/screen so that they don't have to scroll back and forth (very much). You then complete the process for pairwise comparisons, guided by correlation analysis, domain knowledge and your problem/question. In contrast, how would you do the same thing with the Lattice plot? Do you describe all the expected pairwise comparisons? If you see something interesting, you make a note of it. If you don't make a note of it, did you not see something or did you not look? As someone reads your notes, do they have to keep scrolling back further and further to see what you saw? At most, you might consider a Lattice plot at the *end* of your pairwise exploration to summarize the main relationships you found, but it should not be the start of your EDA.
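To make the color, projection, and lattice-plot ideas above concrete, here is a small sketch that is not part of the original chapter. It reuses the `data` frame and `colors` series built earlier, and it assumes scikit-learn is available, which is an extra dependency this chapter does not otherwise use.

```
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Project the numeric columns onto their principal components (the scores)
# and color the points by the categorical Z, as in the scatterplot above.
scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(data[["X", "Y"]]))

figure = plt.figure(figsize=(10, 6))
axes = figure.add_subplot(1, 1, 1)
axes.scatter(scores[:, 0], scores[:, 1], marker="o", color=colors)
axes.set_title("PC1 v. PC2")
axes.set_xlabel("PC1")
axes.set_ylabel("PC2")
plt.show()
plt.close()

# A basic lattice (pairwise) plot of the same frame, colored by Z.
sns.pairplot(data, hue="Z")
plt.show()
```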
# Search Terms This project starts with curated collections of terms, including ERP terms, and potential associations, such as cognitive and disease terms. Automated literature collection then collects information from papers using those terms, using [LISC](https://lisc-tools.github.io/). Current analysis takes two forms: - `Words` analyses: analyses text data from articles that discuss ERP related research - This approach collects text and metadata from papers, and builds data driven profiles for ERP components - `Count` analyses: searches for co-occurences of terms, between ERPs and associated terms - This approach looks for patterns based on how commonly terms occur together This notebook introduces the terms that are used in the project. ``` from collections import Counter # Import Base LISC object to load and check search terms from lisc.objects.base import Base from lisc.utils.io import load_txt_file import seaborn as sns sns.set_context('talk') # Import custom project code import sys sys.path.append('../code') from plts import plot_latencies # Set the location of the terms term_dir = '../terms/' # Load a test object to check the terms erps = Base() ``` ## ERP Terms ``` # Load erps and labels terms from file erps.add_terms('erps.txt', directory=term_dir) erps.add_labels('erp_labels.txt', directory=term_dir) # Check the number of ERP terms print('Number of ERP terms: {}'.format(erps.n_terms)) # Add exclusion words erps.add_terms('erps_exclude.txt', term_type='exclusions', directory=term_dir) # Check the exclusion terms used erps.check_terms('exclusions') ``` ## Latencies ``` # Load canonical latency information labels = load_txt_file('erp_labels.txt', term_dir, split_elements=False) latencies = load_txt_file('latencies.txt', term_dir, split_elements=False) latency_dict = {label : latency.split(', ') for label, latency in zip(labels, latencies)} # Extract the labelled polarities and latencies for each ERP polarities = [el[0] for el in latency_dict.values()] latencies = [int(el[1]) for el in latency_dict.values()] # Check the count of polarities polarity_counts = Counter(polarities) print(polarity_counts) # Plot the ERP latencies plot_latencies(polarities, latencies) print('Typical ERP latency:') for label, lat in zip(labels, latencies): print(' {:s}\t\t{:4d}'.format(label, lat)) ``` ## Cognitive Terms ``` # Load cognitive terms from file cogs = Base() cogs.add_terms('cognitive.txt', directory=term_dir) # Check the number of ERP terms print('Number of cognitive terms: {}'.format(cogs.n_terms)) # Check the cognitive terms used cogs.check_terms() ``` ## Disease Terms ``` # Load the disease terms from file disease = Base() disease.add_terms('disorders.txt', directory=term_dir) # Check the number of ERP terms print('Number of disease terms: {}'.format(disease.n_terms)) # Check the disease terms disease.check_terms() ```
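These term collections feed directly into the co-occurrence (`Counts`) analyses described at the top of this notebook. The cell below is only a sketch of that next step: the `Counts` object and its `add_terms`/`run_collection` interface are taken from the LISC tutorials and should be checked against the documentation of the LISC version in use, and the collection call itself is left commented out because it requires network access to PubMed.

```
from lisc import Counts

# Pair every ERP term (dimension A) with every cognitive term (dimension B).
erp_cog_counts = Counts()
erp_cog_counts.add_terms(erps.terms, dim='A')
erp_cog_counts.add_terms(cogs.terms, dim='B')

# Running the collection queries PubMed for co-occurrences of each term pair.
# erp_cog_counts.run_collection(db='pubmed', verbose=True)
```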
# Parts of Speech Tagging Practice The purpose of this practical session is to experiment with Part-of-Speech tagging, using the tools provided by NLTK. We will make use of the contents of the [Chapter 5](http://www.nltk.org/book/ch05.html) of the [Natural Language Processing with Python --- Analyzing Text with the Natural Language Toolkit](http://www.nltk.org/book). As experimental dataset, we will use the [Brown Corpus](http://en.wikipedia.org/wiki/Brown_Corpus). The Brown Corpus defines a tagset (specific collection of part-of-speech labels) that has been reused in many other annotated resources in English. The [universal tagset](http://universaldependencies.org/u/pos/) includes 17 tags: Tag | Meaning | Examples ----|------------|---------- ADJ | adjective | new, good, high, special, big, local ADV | adverb | really, already, still, early, now CONJ| conjunction| and, or, but, if, while, although DET | determiner | the, a, some, most, every, no X | other, foreign words | dolce, ersatz, esprit, quo, maitre NOUN | noun | year, home, costs, time, education PROPN| proper noun | Alison, Africa, April, Washington NUM | numeral | twenty-four, fourth, 1991, 14:24 PRON | pronoun | he, their, her, its, my, I, us ADP | adposition, preposition | on, of, at, with, by, into, under AUX | auxiliary verb | has (done), is (doing), will (do), should (do), must (do), can (do) INTJ | interjection | ah, bang, ha, whee, hmpf, oops VERB | verb | is, has, get, do, make, see, run PART | particle | possessive marker 's, negation 'not' SCONJ | subordinating conjunction: complementizer, adverbial clause introducer | I believe 'that' he will come, if, while SYM | symbol | $, (C), +, *, /, =, :), [email protected] Note that the decision on how to tag a word, without more information is ambiguous for multiple reasons: - The same string can be understood as a `noun` or a `verb` (e.g, **book**). - Some POS tags have a systematically ambiguous definition: a present participle can be used in progressive verb usages (I am going:VERB), but it can also be used in an adjectival position modifying a noun: (A striking:ADJ comparison). In other words, it is unclear in the definition itself of the tag whether the tag refers to a syntactic function or to a morphological property of the word. ## 0. Working on the Brown Corpus with NLTK NLTK contains a collection of tagged corpora, arranged as convenient Python objects. We will use the Brown corpus in this experiment. The `tagged_sents` version of the corpus is a list of sentences. Each sentence is a list of pairs (`tuples`) `(word, tag)`. Similarly, one can access the corpus as a flat list of tagged words. ``` import nltk nltk.download('brown') from nltk.corpus import brown brown_news_tagged = brown.tagged_sents(categories='news', tagset='universal') brown_news_words = brown.tagged_words(categories='news', tagset='universal') ``` ### Measuring success: Accuracy, Training Dataset, Test Dataset Assume we develop a tagger. How do we know how successful it is? Can we trust the decisions the tagger makes? In order to evaluate the tagger, we are going to split the dataset into training and testing: ``` brown_train = brown_news_tagged[100:] brown_test = brown_news_tagged[:100] from nltk.tag import untag test_sent = untag(brown_test[0]) print("Tagged: ", brown_test[0]) print() print("Untagged: ", test_sent) ``` ## 1. Baseline Tagger: Default Tag In the absence of any knowledge, the most basic tagging approach is to assign the same tag to all the words. 
It can be done with the `DefaultTagger` class, which takes a tag and assign it to all the words. ``` # A default tagger assigns the same tag to all words from nltk import DefaultTagger default_tagger = DefaultTagger('default_tag') default_tagger.tag('This is a test'.split()) ``` ### Exercise 1.1 Using the `DefaultTagger`, try different tags (see the available options in the table at the beginning of the notebook). **Which one is offering the best performance? Why?** To measure success, in this task, we will measure accuracy. The tagger object in NLTK includes a method called `evaluate` to measure the accuracy of a tagger on a given test set (our `brown_test` object). Let's try different tags: ``` # List of tags to try tags = ['ADJ', 'ADV', 'NOUN', 'DET', 'VERB', 'PRON'] # Create and evaluate a DefaultTagger for each of the tags for t in tags: default_tagger = DefaultTagger(t) print(default_tagger.tag(test_sent)) print('Accuracy for the tag %s: %4.1f%%' % (t, 100.0 * default_tagger.evaluate(brown_test))) print() print() ``` As you can see, the `DefaultTagger` is giving the same tag to all the words. Since 'NOUN' is the most frequent universal tag in the Brown corpus, it is the one that offers the best performance. ## 2. Sources of Knowledge to Improve Tagging Accuracy Intuitively, the sources of knowledge that can help us decide what is the tag of a word include: - A dictionary that lists the possible parts of speech for each word - The context of the word in a sentence (neighboring words) - The morphological form of the word (suffixes, prefixes) ### 2.1 Lookup Tagger: Using Dictionary Knowledge Assume we have a dictionary that lists the possible tags for each word in English. Could we use this information to perform better tagging? The intuition is that we would only assign to a word a tag that it can have in the dictionary. For example, if `box` can only be a `Verb` or a `Noun`, when we have to tag an instance of the word `box`, we only choose between 2 options - and not between 17 options. There are 3 issues we must address to turn this into working code: - Where do we get the dictionary? - How do we choose between the various tags associated to a word in the dictionary? (For example, how do we choose between `VERB` and `NOUN` for `box`). - What do we do for words that do not appear in the dictionary? The simple solutions we will test are the following - note that for each question, there exist other strategies that we will investigate later: - Where do we get the dictionary?: we will learn it from a sample dataset. - How do we choose between the various tags associated to a word in the dictionary?: we will choose the most likely tag as observed in the sample dataset. - What do we do for words that do not appear in the dictionary?: we will pass unknown words to a backoff tagger (tag all unknown words as `NOUN`). The `nltk.UnigramTagger` implements this overall strategy. It must be trained on a dataset, from which it builds a model of "unigrams". The following code shows how it is used: ### Exercise 2.1.1 Use the `UnigramTagger` class and the `brown_train` object to create a unigram tagger. **Which tag is selecting to annotate each word?** **What's happening with unknown words?** ``` # Prepare training and test datasets from nltk import UnigramTagger # Train the unigram model unigram_tagger = UnigramTagger(brown_train) # Test it on a single sentence unigram_tagger.tag(untag(brown_test[0])) ``` As you can see in the results, the tagger is assigning the most common tag to each word. 
(e.g., `took` = `VERB`) Note that the unigram tagger leaves some words tagged as `None` -- these are **unknown words**, words that were not observed in the training dataset. We will try to solve that in the following exercises. ### Exercise 2.1.2 Making use of the `evaluate` method measure how successful is this tagger. **Are we improving the performance of the tagger?** **Do your find the new performance sufficient enough for a NLP system?** ``` print('Unigram tagger accuracy: %4.1f%%' % ( 100.0 * unigram_tagger.evaluate(brown_test))) ``` 88.9% is quite an improvement on the 31% of the default tagger. And this is without any backoff and without using morphological clues. Is 88.9% a good level of accuracy? In fact it is not. It is accuracy per word. It means that on average, in every sentence of about 20 words, we will accumulate 2 errors. 2 errors in each sentence is a very high error rate. It makes it difficult to run another task on the output of such a tagger. Think how difficult the life of a parser would be if 2 words in every sentence are wrongly tagged. The problem is known as the **pipeline effect** -- when language processing tools are chained in a pipeline, error rates accumulate from module to module. ### Exercise 2.1.3 If we analyze the tagger annotation, we will see that it assigns `None` to unknown words. As explained in class, a good way to improve this is to tag unknowns words as `NOUN` (the most common tag). This is known as a backoff tagger (i.e., a second tagger that applies where the original one cannot identify the tag for a word) NLTK provides a simple way to implement this backoff tagging. All the constructors for the Tagger classes (e.g., `UnigramTagger`) have a parameter `backoff` where you can set the backoff tagger that will apply. In this case, our backoff tagger will be the `DefaultTagger` that annotates `NOUN`, which we developed in the exercise below . **Using the `DefaultTagger` and `UnigramTagger` classes, create a tagger that assigns the most common tag to each word and, for unknown words, assigns a backoff tag of `NOUN`.** **What's the accuracy of this tagger? Do we improved our performance?** ``` nn_tagger = DefaultTagger('NOUN') ut2 = UnigramTagger(brown_train, backoff=nn_tagger) print('Unigram tagger with backoff accuracy: %4.1f%%' % ( 100.0 * ut2.evaluate(brown_test))) ``` Adding a simple backoff (with accuracy of 31%) improved accuracy from 88.9% to 94.5%. The error rate went down from 11.1% (100-88.9) to 5.5%. In other words, out of the words not tagged by the original model (with no backoff), 41.4% were corrected by the backoff. One lesson to learn from this is that the **distribution of unknown words is significantly different from the distribution of all the words in the corpus**. ### 2.2 Using Morphological Clues As mentioned above, another knowledge source to perform tagging is to look at the letter structure of the words. We will look at 2 different methods to use this knowledge. First, we will use nltk.RegexpTagger to recognize specific regular expressions in words. 
``` from nltk import RegexpTagger regexp_tagger = RegexpTagger( [(r'^-?[0-9]+(.[0-9]+)?$', 'NUM'), # cardinal numbers (r'(The|the|A|a|An|an)$', 'DET'), # articles (r'.*able$', 'ADJ'), # adjectives (r'.*ness$', 'NOUN'), # nouns formed from adjectives (r'.*ly$', 'ADV'), # adverbs (r'.*s$', 'NOUN'), # plural nouns (r'.*ing$', 'VERB'), # gerunds (r'.*ed$', 'VERB'), # past tense verbs (r'.*', 'NOUN') # nouns (default) ]) print('Regexp accuracy %4.1f%%' % (100.0 * regexp_tagger.evaluate(brown_test))) ``` The regular expressions are tested in order. If one matches, it decides the tag. Else it tries the next tag. The question we face when we see such a "rule-based" tagger are: - How do we find the most successful regular expressions? - In which order should we try the regular expressions? A typical answer to such questions is: - let's learn these parameters from a training corpus. The `nltk.AffixTagger` is a trainable tagger that attempts to learn word patterns. It only looks at the last letters in the words in the training corpus, and counts how often a word suffix can predict the word tag. In other words, we only learn rules of the form ('.*xyz' , POS). This is how the affix tagger is used: ``` from nltk import AffixTagger affix_tagger = AffixTagger(brown_train, backoff=nn_tagger) print('Affix tagger accuracy: %4.1f%%' % (100.0 * affix_tagger.evaluate(brown_test))) ``` Should we be disappointed that the "data-based approach" performs worse than the hand-written rules (42% vs. 48%)? Not necessarily: note that our hand-written rules include cases that the AffixTagger cannot learn - we match cardinal numbers and suffixes with more than 3 letters. Let us see whether the combination of the 2 taggers helps: ### Exercise 2.2.1 **Using the `AffixTagger` class, creates a tagger that learns from word patterns and that uses the previous `RegexpTagger` as backoff.** **Evaluate and analyze the performance of the model** ``` at2 = AffixTagger(brown_train, backoff=regexp_tagger) print("Affix tagger with regexp backoff accuracy: %4.1f%%" % (100.0 * at2.evaluate(brown_test))) ``` This is not bad - the machine learning in AffixTagger helped us reduce the error from 52% to 47% (10% error reduction). ### Exercise 2.2.2 In the previous exercise we created an `AffixTagger` that is able to learn the annotation from the word patterns. Perhaps, we could apply this tagger to annotate the unknown words (i.e., to use it as a backoff tagger). In the previous section, we used a NOUN-default tagger for that. How much does this tagger help the `UnigramTagger` if we use it as a backoff instead of the NOUN-default tagger? **Use the `AffixTagger` that we created below as a backoff tagger for the `UnigramTagger` in the previous section** **Are we improving our tagger?** ``` ut3 = UnigramTagger(brown_train, backoff=at2) print('Unigram with affix backoff accuracy: %4.1f%%' % (100.0 * ut3.evaluate(brown_test))) ``` The error reduction is from 88.9% to 95.4% -- better that the 94.5% obtained with the NOUN backoff. ### 2.3 Looking at the Context At this point, we have combined 2 major sources of information: dictionary and morphology and obtained about 95.4% accuracy. The last source of knowledge we want to exploit the context of the word to be tagged: **the words that appear around the word to be tagged**. 
The intuition is that if we have to decide between `book` as a verb or a noun, the word/s preceding `book` can give us strong cues: for example, if it is an article (`the` or `a`) then we would be sure that `book` is a noun; if it is `to`, then we would be sure it is a verb. How can we turn this intuition into working code? The easiest way to detect predictive contexts is to construct a list of contexts - and for each context, keep track of the distribution of tags that follow it. Luckily for us, this procedure is already implemented into the `NgramTagger`, which takes a parameter a number setting the length of the context. As usual, if the tagger cannot make a decision (because the observed context was never seen at training time), the decision is delegated to a backoff tagger. ``` # Where we stand before looking at the context ut3.evaluate(brown_test) ``` ### Exercise 2.3.1 **Use the `NgramTagger` to create a context-based tagger. For the cases that this tagger cannot annotate anything, use the previous `UnigramTagger` as backoff.** **Try different context sizes (you can set that as a parameter when you create the `NgramTagger`) and analyze how it affects to the final performance of the model.** ``` from nltk import NgramTagger ct2 = NgramTagger(2, brown_train, backoff=ut3) ct2.evaluate(brown_test) ct3 = NgramTagger(3, brown_train, backoff=ut3) ct3.evaluate(brown_test) ``` We find on our dataset that looking at a context of 2 tags in the past improves the accuracy from 95.4% to 96.1% -- this is 18% error reduction. If we try to use a larger context of 3 tags, we get less improvement (from 95.4% to 95.8%). The main problem we face is that of **data sparseness**: there are not enough samples of each context for the context to help. We will return to this issue in the next lectures. If we take even larger context sizes, the sparseness will be larger and, consequently, the performance of the tagger smaller. ``` ct5 = NgramTagger(5, brown_train, backoff=ut3) ct5.evaluate(brown_test) ``` ## Summary This practice introduced tools to tag parts of speech in free text. The key point of the approach we investigated is that it is **data-driven**: - We first define possible knowledge sources that can help us solve the task. Specifically, we investigated * dictionary, * morphological * context as possible sources. - We tested simple machine learning methods: data is acquired by inspecting a training dataset, then evaluated by testing on a test dataset. - We investigated one method to combine several systems into a combined system: backoff models. # Additional Materials: Practical Tagging In this practice we have played with the development of new Taggers. You can refer back to this Notebook if and when you need to create your own Taggers. Nevertheless, most of the time the Taggers already included in the different libraries will do the trick for you. In particular, NLTK provides you a way to tag your dataset with just a couple of lines of code by using the `pos_tag` function. The first thing we need to do is to tokenize the sentence to be tagged. To that end, we can make use of the `word_tokenize` function in NLTK. ``` text = nltk.word_tokenize("And now for something completely different") text ``` Then we should feed our tokenized text to the pos tagging function ``` nltk.pos_tag(text) ``` If we have more than one sentence to parse, we can make use of some of the Sentence Tokenizers that nltk provides (e.g. 
`sent_tokenize`) to split the text in sentences, and the the word tokenizer to split each sentence in words. ``` sentences = nltk.sent_tokenize("And now for something completely different. This is just another sentence") print("Sentences:", sentences) text = [nltk.word_tokenize(sentence) for sentence in sentences] print("Text:", text) print() for tagging in [nltk.pos_tag(t) for t in text]: print("Tagging:",tagging) ``` Let's see a full example with a proper corpus. NLTK provides many corpora that can be used for research or for the training of our NLP system. To find a comprehensive list of all the corpus and how to use them, please refer to the [2nd Chapter of the NLTK book](https://www.nltk.org/book/ch02.html). We will use the corpus `state_union` including the texts of the State of the Union addresses since 1945. Let's load one of these speeches. ``` from nltk.corpus import state_union text = state_union.raw("1946-Truman.txt") text[:1000] ``` We now define a function `tag_corpus` that takes care of the tagging process. First, it splits the text in sentences with the `sent_tokenize` function. Then, it iterates over these sentences, tokenize them with the `word_tokenize` function and apply the `pos_tag` function to the tokens. ``` def tag_corpus(corpus_text): try: for sentence in nltk.sent_tokenize(corpus_text)[:5]: # We just process 5 sentences for the sake of simplicity words = nltk.word_tokenize(sentence) tagged = nltk.pos_tag(words) print("Sentence:", sentence, "\nTagging:", tagged) print() except Exception as e: print(str(e)) tag_corpus(text) ``` Same function in a more *pythonic* way ``` def pythonized_tag_corpus(corpus_text): try: [print("Sentence:", sentence, "\nTagging:", nltk.pos_tag(nltk.word_tokenize(sentence)), "\n") for sentence in nltk.sent_tokenize(corpus_text)[:5]] except Exception as e: print(str(e)) pythonized_tag_corpus(text) ```
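Once a tagger reaches roughly 96% accuracy, it is often more informative to see *which* tags it still confuses than to look at the aggregate number. The cell below is a small addition to the practice (not part of the original notebook); it assumes the bigram tagger `ct2` and the `brown_test` sentences defined earlier are still available.

```
from nltk.metrics import ConfusionMatrix
from nltk.tag import untag

# Gold-standard tags versus the tags predicted by the bigram tagger.
gold = [tag for sent in brown_test for (word, tag) in sent]
predicted = [tag
             for sent in ct2.tag_sents([untag(sent) for sent in brown_test])
             for (word, tag) in sent]

cm = ConfusionMatrix(gold, predicted)
print(cm.pretty_format(sort_by_count=True, show_percents=True, truncate=9))
```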
0.396652
0.95803
<!--NOTEBOOK_HEADER--> *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks); content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*

<!--NAVIGATION--> < [RosettaCarbohydrates: Trees, Selectors and Movers](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/13.01-Glycan-Trees-Selectors-and-Movers.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [RNA in PyRosetta](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/14.00-RNA-Basics.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/13.02-Glycan-Modeling-and-Design.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>

# RosettaCarbohydrates: Modeling and Design

Keywords: carbohydrate, glycan, glucose, mannose, sugar, design, prediction

## Overview

Here, you will learn how to model glycans and design optimal glycosylation positions in a protein. We will be using the RosettaCarbohydrate framework to build and model glycans. The `GlycanModeler`, which is our main method for modeling glycans, will be published in 2020.

We will be using some custom glycan options to load pdbs. First, one needs the `-include_sugars` option, which will tell Rosetta to load sugars and add the sugar_bb energy term to a default scorefunction. This scoreterm is like rama for the sugar dihedrals that connect each sugar residue.

    -include_sugars

When loading structures from the PDB that include glycans, we use these options. This includes an option to write out the structures in pdb format instead of the Rosetta format (which is actually better). Again, this is included in the config/flags files you will be using.

    -maintain_links
    -auto_detect_glycan_connections
    -alternate_3_letter_codes pdb_sugar
    -write_glycan_pdb_codes

More information on working with glycans can be found at this page: [Working With Glycans](https://www.rosettacommons.org/docs/latest/application_documentation/carbohydrates/WorkingWithGlycans)

## Algorithm

The `GlycanModeler` essentially builds glycans from the root (the first residue of the tree) out to the leaves, in a way that simulates a tree growing. It uses the notion of a 'layer', where the layer is defined as the number of residues to the glycan root (with the glycan root being layer 0). Within modeling, all glycan residues other than the ones being optimized are 'virtualized'. In Rosetta, the term 'Virtual' means that these residues are present, but not scored. (It should be noted that it is now possible to turn any residues Virtual and back to Real using two movers: `ConvertVirtualToRealMover` and `ConvertRealToVirtualMover`.)

Within the modeling application, sampling of glycan DOFs is done through the `GlycanSampler`. The sampler attempts to sample the large number of DOFs available to a glycan tree. The GlycanSampler is a `WeightedRandomSampler`, which is a container of highly specific sampling strategies, where each strategy is weighted by a particular probability. At each apply, the mover selects one of these samplers using the probability set to it. This is the same way the SnugDock algorithm for antibody modeling works. Sampling is always scaled with the number of glycan residues that you are modeling, so run-time will increase proportionally as well.
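To make the `WeightedRandomSampler` idea more concrete, here is a small, self-contained Python sketch of the concept. It is an illustration only (not PyRosetta API), and the strategy names and weights are invented for the example:

```
import random

# Invented strategies and weights, purely for illustration
strategies = {
    "glycan_conformer_move": 0.40,
    "sugar_bb_move":         0.30,
    "small_random_torsion":  0.20,
    "shear_move":            0.10,
}

def pick_strategy():
    """Pick one strategy per 'apply', with probability proportional to its weight."""
    names = list(strategies)
    weights = [strategies[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

# Over many applies, each strategy is chosen roughly in proportion to its weight
counts = {name: 0 for name in strategies}
for _ in range(10000):
    counts[pick_strategy()] += 1
print(counts)
```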
If you are modeling a huge viral particle with lots of glycans, one can use quench mode, which will optimize each glycan individually. Typically for these cases, multiple rounds of glycan modeling are desired.

### GlycanSampler Major components

Some of these components were covered in the previous tutorial.

1. __Glycan Conformers__

   These conformers have been generated through an in-depth bioinformatic analysis of the PDB using adaptive kernel density estimates and are unique for each linkage type, including glycan residues connected to ASN residues. A conformer is a specific conformation of all of the backbone dihedrals of a particular glycan linkage; essentially, these are glycan 'fragments' for a particular type of linkage.

2. __SugarBB Sampling__

   This sampling is done by turning the `sugar_bb` energy term into a set of probabilities using the -log(e) function. This allows us to sample on the QM-derived torsional potentials during modeling.

3. __Random Sampling and Shear Moves__

   We sample random torsions at +/- 15, +/- 45, and +/- 90 degrees, each at decreasing probabilities, with a 4:2:1 ratio of small, medium, and large moves. Shear sampling is done where torsions are set for two residues in order to reduce downstream effects and allow 'flipping' of the glycan torsions.

4. __Minimization__

   We minimize sugar residues by randomly selecting a residue from what is set to model, and selecting all residues out to the tree that are not virtualized. This reduces computational time that would otherwise restrict the total number of glycan residues we could model at once.

5. __Packing__

   Of the residues set to optimize, we choose a random residue and pack that residue and all residues out to the tree that are not virtualized. We pack the sugar residues (OH and constituents) and any neighboring protein sidechains. TaskOperations may be set to allow design of protein residues during this. We do packing this way to once again reduce total computational time.

```
# Notebook setup
import sys
if 'google.colab' in sys.modules:
    !pip install pyrosettacolabsetup
    import pyrosettacolabsetup
    pyrosettacolabsetup.mount_pyrosetta_install()
    print ("Notebook is set for PyRosetta use in Colab. Have fun!")
```

**Make sure you are in the directory with the pdb files:**

`cd google_drive/My\ Drive/student-notebooks/`

# General Setup and Inputs

You will be using a few different inputs. We will be designing in glycosylation spots in order to block antibody binding at a highly curved epitope, and we will be loading a human structure from the PDB that has internal glycans.

## Notes for Tutorial Shortening

Typically, the value of `-glycan_sampler_rounds` is set to 25 (which is usually enough) and nstruct is about 5-10k per input structure. You may increase glycan_sampler_rounds to 100 and then decrease output to 1-2500 nstruct in order to have the same level of sampling, which will result in very good models as well. Since this is de novo modeling of glycans, more nstruct is almost always better. For some tutorials, we may decrease this value below our optimal value in order to shorten the length of the tutorial.

## General Notes

We will use a flags file for all common options in this tutorial. Note that instead of passing this flag on init, you can instead put it into your working directory or a particular place in your home directory and rename it common.
See this page for more info on using Rosetta with custom config files: <https://www.rosettacommons.org/docs/latest/rosetta_basics/running-rosetta-with-options#common-options-and-default-user-configuration>

All tutorials have generated output in `output_files`, along with their approximate time to finish on a single (core i7) processor.

```
#Python
from pyrosetta import *
from pyrosetta.rosetta import *
from pyrosetta.teaching import *

init('@inputs/glycans/common_glycans @inputs/glycans/pdb_flags @inputs/glycans/map_flags')
```

# Tutorial

GlycanModeling is done through the RosettaScripts interface. Each tutorial has you copying a base XML and adding/modifying specific components to achieve a goal. ALL of these movers are available as components in PyRosetta - however, setup is much more difficult and time consuming. So, for now, we will rely on the RS interface.

## Tutorial A: Epitope Blocking, De-novo Glycan Modeling

Here, we will start with the antigen known as Bee Hyaluronidase, from PDB ID 2J88. The PDB file has an antibody bound to it at a HIGHLY immunogenic site. We would like to block this in order to begin to use this enzyme for therapy, as Hyaluronidase can be effective in breaking down sugars in the extracellular matrix, allowing certain larger drugs to get to regions of interest. The antibody is renumbered into the AHo numbering scheme that we used in the RAbD tutorial, and it has been relaxed with constraints into the Rosetta energy function.

We will be designing in at least one optimal glycan at the most immunogenic site. Note that a protocol called SugarCoat is in development that will scan regions of interest for potential ideal glycosylation; however, one can certainly do this manually as we do below.

### A1. Designing in a Glycosylation Site: `CreateGlycanSequonMover` and `CreateSequenceMotifMover`

A sugar glycosylation site is known as a `Sequon`. The glycan sequon is made up of three protein residues which are recognized by the GlycosylTransferase Enzyme during translation in the ER. This enzyme adds the root of the nascent glycan onto a protein. In this case, we use the sequon for ASN glycosylation. The sequon is as follows: `N[^P][S/T]`. The `[^P]` notation means that any residue other than P can be there. The `[S/T]` notation means that either S or T is recognized. This notation can be used to directly create Motifs in proteins using the `CreateSequenceMotifMover` and associated `SequenceMotifTaskOperation`. Documentation for these is available here:

<https://www.rosettacommons.org/docs/wiki/scripting_documentation/RosettaScripts/xsd/mover_CreateSequenceMotifMover_type>

<https://www.rosettacommons.org/docs/wiki/scripting_documentation/RosettaScripts/xsd/to_SequenceMotifTaskOperation_type>

The `CreateGlycanSequonMover` can also be used for glycosylation of amino acids other than ASN.

#### A1.1 Design using a typical sequon

Before we begin, take a look at the complex, either in PyMOL or using the PyMolMover as you have previously. The file is `inputs/glycans/2j88_complex.pdb`.

Where can we introduce a glycan to block binding? Where do you think the optimal glycan position would be for this particular antibody? Take a look at the xml. Is this the position we are targeting? Typically, we may want to allow some backbone movement in our sequon.
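As a plain-Python aside (not part of the Rosetta protocol itself), the sequon notation above corresponds directly to a regular expression, so you can scan any candidate protein sequence for potential N-glycosylation sites yourself. The short sequence below is made up purely for illustration:

```
import re

# N, followed by anything except proline, followed by S or T
SEQUON = re.compile(r"N[^P][ST]")

def find_sequons(sequence):
    """Return 0-based positions of the N residues that start an N-X-S/T sequon (X != P).

    Overlapping sequons are not reported, which is fine for a quick scan."""
    return [m.start() for m in SEQUON.finditer(sequence)]

toy_sequence = "MKTNASGLLPNVSQWNPTAA"  # hypothetical sequence, for illustration only
print(find_sequons(toy_sequence))       # [3, 10]; the NPT at position 15 is rejected because of the proline
```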
The full glycan scanning protocol can be found in an input file, simple_glycan_scanner_manual.xml, where we relax the motif residues with constraints, add the sequon, and then relax again, comparing the energy between them to get the full energetic contribution of the sequon to the structure. In order to reduce the run time in these tutorials, we will be removing this going forward. The XML syntax is below:

    <CreateGlycanSequonMover name="motif_creator" residue_selector="select"/>

Go ahead and run the xml (`inputs/glycans/tutA11.xml`) using what you have learned previously, or run the code below (about 15 seconds). Note that the xml uses `SimpleMetrics` to output a variety of metrics that are in pose.scores or output into a scorefile.

Here is the full XML. We will only use part of it in code.

```xml
<ROSETTASCRIPTS>
    <SCOREFXNS>
    </SCOREFXNS>
    <RESIDUE_SELECTORS>
        <Index name="select" resnums="%%start%%" />
        <Index name="motif" resnums="%%start%%-%%end%%" />
        <Neighborhood name="nbrhood" selector="motif"/>
        <Not name="others" selector="nbrhood" />
    </RESIDUE_SELECTORS>
    <SIMPLE_METRICS>
        <TimingProfileMetric name="timing"/>
        <TotalEnergyMetric name="total_energy_delta" use_native="1" custom_type="native_delta"/>
        <TotalEnergyMetric name="total_energy" use_native="0"/>
        <CompositeEnergyMetric name="composite_energy" use_native="1"/>
        <SelectedResiduesMetric name="selection" residue_selector="select"/>
        <SelectedResiduesMetric name="rosetta_sele" residue_selector="select" rosetta_numbering="1"/>
        <SelectedResiduesPyMOLMetric name="pymol_selection" residue_selector="select" />
        <SelectedResiduesPyMOLMetric name="region" residue_selector="nbrhood" />
        <SequenceMetric name="sequence" residue_selector="motif" />
        <SasaMetric name="sasa" residue_selector="select" />
        <RMSDMetric name="rmsd" use_native="1" rmsd_type="rmsd_protein_bb_heavy"/>
    </SIMPLE_METRICS>
    <MOVERS>
        <CreateGlycanSequonMover name="motif_creator" residue_selector="select" pack_neighbors="1"/>
        <RunSimpleMetrics name="selections" metrics="rosetta_sele,pymol_selection,sequence" />
        <RunSimpleMetrics name="pre_metrics" metrics="sasa,total_energy" prefix="pre_" />
        <RunSimpleMetrics name="post_sequon_metrics" metrics="total_energy_delta,sequence,sasa,timing,rmsd" prefix="post-sequon_" />
    </MOVERS>
    <PROTOCOLS>
        <Add mover="selections" />
        <Add mover="pre_metrics" />
        <Add mover="motif_creator"/>
        <Add mover="post_sequon_metrics"/>
    </PROTOCOLS>
</ROSETTASCRIPTS>
```

```
from rosetta.protocols.carbohydrates import *
from rosetta.core.select.residue_selector import *
from rosetta.core.simple_metrics.metrics import *
from rosetta.core.simple_metrics.composite_metrics import *
from rosetta.core.simple_metrics.per_residue_metrics import *

pose_complex = pose_from_pdb("inputs/glycans/2j88_complex.pdb")
pose_complex_original = pose_complex.clone()

pose_antigen = pose_from_pdb("inputs/glycans/2j88_antigen.pdb")
pose_antigen_original = pose_antigen.clone()

start = pose_antigen.pdb_info().pdb2pose("A", 143)
end = pose_antigen.pdb_info().pdb2pose("A", 145)

select = ResidueIndexSelector()
select.set_index(start)

motif = ResidueIndexSelector()
motif.set_index_range(start, end)

sequon_creator = CreateGlycanSequonMover(select)
sequon_creator.apply(pose_antigen)

seq_metric = SequenceMetric(motif)
print("original", seq_metric.calculate(pose_antigen_original))
print("designed", seq_metric.calculate(pose_antigen))
```

Did we design in the sequon? (`N[^P][S/T]`)

What do our scores look like?
```
score = get_score_function()
print("original", score.score(pose_antigen_original))
print("designed", score.score(pose_antigen))
```

Note that we are packing neighbors here, so the score comparison does not, by itself, show that we actually reduced the energy.

#### A1.2 Design using the `N[^P][T]` motif

This motif has been shown to have higher occupancy of the glycosylation site with glycans in the resulting protein. Glycosylation is not 100% in some cases at some positions for (currently) unknown reasons, but it has been shown that a THR at the +2 site results in higher occupancy. If we were creating a drug, we could use chromatography during protein isolation to choose peaks which include our glycan.

Here, we are using the [-] notation so as to not actually design the second position. We will use what is in the native protein here. Again, you may run the XML (`inputs/glycans/tutA12.xml`) within PyRosetta, replacing start and end with `start=143A end=145A`, or run the code below. The key XML line is here:

    <CreateSequenceMotifMover name="create_sequon" residue_selector="p1" motif="N[-]T"/>

```
from rosetta.protocols.calc_taskop_movers import *

general_motif_creator = CreateSequenceMotifMover(select)
general_motif_creator.set_motif("N[-]T")

pose_antigen_d1 = pose_antigen.clone()
general_motif_creator.apply(pose_antigen)
```

Was the sequon successfully designed?

```
print("original", seq_metric.calculate(pose_antigen_original))
print("designed1", seq_metric.calculate(pose_antigen_d1))
print("designed2", seq_metric.calculate(pose_antigen))

print("original", score.score(pose_antigen_original))
print("designed1", score.score(pose_antigen_d1))
print("designed2", score.score(pose_antigen))
```

### A2. Adding a man5 glycan: __SimpleGlycosylateMover__

Now, we will expand on our first tutorial by glycosylating afterward. We will use the common name for a man5 sugar, which is a high-mannose branching glycan of 7 sugar residues (and 5 mannoses). You can use a few common names to make glycosylation easier, or an IUPAC string, or a file that has the IUPAC string in the first name of the file. Common names include man5, man7, man9 and a few others. You can find these in database/chemical/carbohydrates/common_glycans

The IUPAC nomenclature of the man5 is as follows:

    a-D-Manp-(1->3)-[a-D-Manp-(1->3)-[a-D-Manp-(1->6)]-a-D-Manp-(1->6)]-b-D-Manp-(1->4)-b-D-GlcpNAc-(1->4)-b-D-GlcpNAc-

More information on IUPAC nomenclature of sugar trees is here: <http://www.chem.qmul.ac.uk/iupac/2carb>. There is also a very detailed README in the common glycan directory for your reference.

Note that within the `SimpleGlycosylateMover` you may also give multiple glycans using the `glycans` option, which will randomly choose a glycan tree to use for glycosylation from the list given. Glycosylation is not deterministic in the sense that you would always get a man5 at a particular position; it is influenced by a great deal of structural biology that is not yet fully determined. For now, since we are aiming to create a drug and will purify our result, using a man5 is sufficient.

Once again, either run the xml (inputs/glycans/tutA2.xml) as before or run the code below. This takes about 15 seconds.
XML syntax is as follows:

    <SimpleGlycosylateMover name="glycosylate" residue_selector="select" glycan="man5" />

```
glycosylate = SimpleGlycosylateMover()
glycosylate.set_glycosylation("man5")
glycosylate.set_residue_selector(select)
glycosylate.apply(pose_antigen)
```

Take a look at the structure using the PyMolMover or output it. What does the structure look like? The SimpleGlycosylateMover simply adds in glycans at ideal values for rings and backbones. Let's optimize this and model some glycans!

### A3. Modeling glycans: `GlycanResidueSelector` and the `GlycanModeler`

We will run the previous tutorials in a single rosetta script where we end with modeling the glycan residues. We use a very short run time and nstruct, so results will not be as clean as they would otherwise be, but this should give you an idea of how all this works. Typically, we would model different positions of potential glycosylations, but here, to save time, we will simply continue to build and model the glycan position we started with. Output files have been provided for you if you wish to use these.

We will not be giving the mover a residue selector, as it uses all glycans by default, but you can use the `GlycanResidueSelector` to choose specific trees or even glycan residues within those trees to model, as we have seen in the previous tutorial. This takes about 380 seconds to run. You can either use the XML (`inputs/glycans/tutA3.xml`) or the code below. Output 10 structures.

XML Syntax:

    <GlycanModeler name="model" layer_size="2" window_size="1" rounds="1" refine="false" />

```
# Here, we setup some defaults that will be set in master shortly.
# Note that the name of the GTM will change to simply GlycanModeler within the next week or two.
modeler = GlycanTreeModeler()
modeler.set_hybrid_protocol(True)
modeler.set_use_shear(True)
modeler.set_use_gaussian_sampling(True)
modeler.apply(pose_antigen)
```

Run this code for a total of 10 nstruct. Compare energies and take a look at the lowest one. Scores and pdbs are also available in `expected_outputs/glycans` with the prefix `tutA3`.

How does it look? Load the native (complexed structure) into PyMOL as well. Would this glycan block this particular antibody? Where else could we place a glycan?

## Tutorial B: Using Glycan Density

In this tutorial we will load a pdb directly into Rosetta with sugars already present. The config for this has been provided for you. We will be using the XML interface to PyRosetta exclusively for these tutorials; using the knowledge gained in the Density and Symmetry tutorials, all of this is available code-wise as well, but the XML interface greatly simplifies the task. The glycan tree that we will be working with is 5 residues long.

I use coot to look at density maps. Density maps were generated by downloading the cif file from the PDB and using PHENIX maps and default `maps.params`. This command was used to generate them. This is covered in the Working-With-Density tutorial:

    phenix.maps inputs/glycans/4do4.pdb inputs/glycans/4do4-sf.cif

The density map generated is too large to be distributed with the rest of the tutorial, so I have uploaded it to Google Drive for you to download.

<https://drive.google.com/open?id=1h569jpwLxyHu7iHLG8eu2Q9_B-Q9e_C9>

Please download and place it in your working directory, if it's not already there. The tutorial will use a density map that is small enough to fit into the repo, but you are welcome to use 4do4 instead with the downloaded map.

### B1. Calculating Density Fit
Although a structure may be solved at high resolution, not all solved residues may fit the density well. A structure from the PDB is still a model, after all, informed through experimentation. This is especially true of glycan residues, which are fairly mobile. Crystal contacts of neighboring proteins help to reduce the movement of glycans and may help to induce a state that can be solved more easily given high-resolution density.

In this tutorial, we will be using Rosetta to determine how well a residue fits into the given density. There are methods to do this in the coot program, but we want to be able to do this for any structure in a streamlined way - especially if we need to calculate RMSDs on only well-fitting glycan residues. The methods we will be employing in Rosetta are based on Frank Dimaio's work with Rosetta density.

To do this, we will once again be employing the SimpleMetric system. In this case, we use the `PerResidueDensityFitMetric`, which is a PerResidueRealMetric. This type of SimpleMetric calculates a particular value for each residue given a residue selector. Very useful here. We will also be employing the DensityFitResidueSelector, which uses the metric. Since this is a fairly slow metric, we will use the in-built functionality for reusing our calculated values from the metric, which are stored in the pose. We will then use the SelectedResidueCountMetric to determine how many residues have a great fit. In later tutorials, we will be using the RMSDMetric with this selector in order to calculate RMSD on well-fitting glycan residues.

- Residues with a fit higher than .8 are a great fit to the density.
- Residues between .6 and .8 are a good fit to the density.
- Residues below .4 are a BAD fit to the density.

The XML is `inputs/glycans/tutB1.xml`. The key XML lines are:

    <PerResidueDensityFitMetric name="fit_native" residue_selector="tree" output_as_pdb_nums="1" sliding_window_size="1" match_res="1"/>
    <DensityFitResidueSelector name="fits8" den_fit_metric="fit_native" cutoff=".8" use_cache="1" fail_on_missing_cache="1"/>
    <SelectedResidueCountMetric name="n_fits8" custom_type="fit8" residue_selector="fits8"/>

Run the xml and, while it is running, take a look at the XML (runtime is about 80 seconds). It is fairly complicated and we will be building on it during the rest of these tutorials. Note that we first define the density metric, and then we use it within the selector. At the bottom, we add these to our set of native_metrics. What other metrics are we using?
``` branch="200A" map_file="inputs/1jnd_2mFo-DFc_map.ccp4" symmdef="inputs/1jnd_crys.symm" xml = f''' <ROSETTASCRIPTS> <SCOREFXNS> </SCOREFXNS> NEEDED FOR CACHING density fit info <RESIDUE_SELECTORS> <Glycan name="tree" branch="{branch}" include_root="0" /> </RESIDUE_SELECTORS> <SIMPLE_METRICS> <PerResidueDensityFitMetric name="fit_native" residue_selector="tree" output_as_pdb_nums="1" sliding_window_size="1" match_res="1"/> </SIMPLE_METRICS> END Density Fit Setup <RESIDUE_SELECTORS> <Index name="root" resnums="{branch}" /> <GlycanLayerSelector name="first_layer" start="0" end="1"/> <And name="layer01" selectors="tree,first_layer" /> <DensityFitResidueSelector name="fits8" den_fit_metric="fit_native" cutoff=".8" use_cache="1" fail_on_missing_cache="1" prefix="native_"/> <DensityFitResidueSelector name="fits6" den_fit_metric="fit_native" cutoff=".6" use_cache="1" fail_on_missing_cache="1" prefix="native_"/> <DensityFitResidueSelector name="fits4" den_fit_metric="fit_native" cutoff=".4" use_cache="1" fail_on_missing_cache="1" prefix="native_"/> </RESIDUE_SELECTORS> <SIMPLE_METRICS> <TimingProfileMetric name="timing" /> <SelectedResidueCountMetric name="n_tree" custom_type="tree_size" residue_selector="tree"/> <SelectedResidueCountMetric name="n_fits8" custom_type="fit8" residue_selector="fits8"/> <SelectedResidueCountMetric name="n_fits6" custom_type="fit6" residue_selector="fits6"/> <SelectedResidueCountMetric name="n_fits4" custom_type="fit4" residue_selector="fits4"/> <SelectedResidueCountMetric name="n_layer01" custom_type="layer01" residue_selector="layer01"/> <PerResidueGlycanLayerMetric name="layers" residue_selector="tree" output_as_pdb_nums="1"/> <SelectedResiduesPyMOLMetric name="pymol_tree" residue_selector="tree" custom_type="glycans"/> <SelectedResiduesPyMOLMetric name="pymol_branch" residue_selector="root" custom_type="branch"/> <SelectedResiduesMetric name="pdb_glycans" residue_selector="tree" rosetta_numbering="0" custom_type="glycans"/> <SelectedResiduesMetric name="pdb_branch" residue_selector="root" rosetta_numbering="0" custom_type="branch"/> </SIMPLE_METRICS> <MOVERS> <SetupForSymmetry name="setup_symm" definition="{symmdef}"/> <LoadDensityMap name="loaddens" mapfile="{map_file}"/> <SetupForDensityScoring name="setupdens"/> <RunSimpleMetrics name="native_metrics" metrics="fit_native" prefix="native_" /> <RunSimpleMetrics name="selections" metrics="layers,pymol_tree,pymol_branch,pdb_glycans,pdb_branch" /> <RunSimpleMetrics name="counts" metrics="n_tree,n_layer01,n_fits6,n_fits8"/> <RunSimpleMetrics name="timings" metrics="timing" /> </MOVERS> <PROTOCOLS> <Add mover_name="loaddens"/> <Add mover_name="setupdens"/> <Add mover_name="selections"/> <Add mover_name="native_metrics" /> <Add mover_name="counts"/> <Add mover_name="timings"/> </PROTOCOLS> </ROSETTASCRIPTS> ''' pose = pose_from_pdb("inputs/1jnd_refined.pdb.gz") mover = pyrosetta.rosetta.protocols.rosetta_scripts.XmlObjects.create_from_string(xml).get_mover("ParsedProtocol") mover.apply(pose) ``` Ok, take a look at the (tutB1) scorefile or pose.scores. You can use the scorefile.py script to output as tabs if you would like. How many residues have great fit to density (hint, look for `fit6_selection_count` and `fit8_selection_count` data terms)? Are there any residues that fit poorly into the density? 
```
print(pose.scores)
print(pose.scores["fit6_selection_count"])
print(pose.scores["fit8_selection_count"])

import re

# Print the per-residue density fit values computed by the native_ metrics
for term in pose.scores:
    if re.search("native_res_density_fit", term):
        print(term.split("_")[-1], pose.scores[term])
```

### B2. Refinement into density

Here, we will be doing a short refinement protocol into the density, with its crystal symmetry. This is a short protocol, but it will work for our purposes. For a much longer (albeit very similar) refinement protocol of the glycan and whole protein, see Frenz et al (referenced in the Working With Glycans tutorial). The full protocol used in this paper is included in the input files as cryoem_glycan_refinement.xml. Take a look and see how it compares to what we are doing here.

As usual, output files are available. Runtime for all 10 structures is about 2 hours. The XML is `inputs/glycans/tutB2.xml`. Since this takes so long to run, we will skip running it here.

Take a look at the scorefile either using the scorefile.py script or through pandas: `expected_outputs/glycans/tutB2_score.sc`

Are the density fit scores higher? How different are the RMSDs of the glycan residues? Take a look at the structure of the lowest energy - how different does it look? Are any new contacts created? Were we able to improve the density fit for some of those residues?

### B3. De-novo building into density

In this tutorial, we will once again be loading our crystal structure with density and symmetry. However, we will be randomizing the bb torsions and building the glycan out from scratch. In reality, we would have some idea of what glycan we are building, and we would glycosylate the protein with the chemical motif we have figured out through means such as mass spec. We would then model the glycan to solve the crystal structure. With the new PackerPalette machinery in Rosetta and the ability to design glycans, we could actually build a protocol to sample chemical motifs of the glycans we are building out into the density; however, since this is a very, very large combinatorial problem, we should have some idea of what exists in the structure.

We will first rebuild the glycan tree using the density as a guide, and then refine it further using what we learned in the previous tutorial. Note that, like tutorial B2, this one takes a good long while (4 hours). The XML for this is `inputs/glycans/tutB3.xml`. Once again, we will not be running this here.

Take a look at the scorefile and PDBs (prefix tutB3_). How are our RMSDs? Were we able to do enough sampling to get close to the native structure? Are the energies acceptable? Are there parts of the glycan that are closer to native than others? Why might this be? Is an nstruct of 10 enough??

Thank you for doing this tutorial! I hope you learned a lot and are ready to work with these crazy carbohydrates! Cheers!

<!--NAVIGATION--> < [RosettaCarbohydrates: Trees, Selectors and Movers](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/13.01-Glycan-Trees-Selectors-and-Movers.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [RNA in PyRosetta](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/14.00-RNA-Basics.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/13.02-Glycan-Modeling-and-Design.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
# Recommendations on GCP with TensorFlow and WALS with Cloud Composer *** This lab is adapted from the original [solution](https://github.com/GoogleCloudPlatform/tensorflow-recommendation-wals) created by [lukmanr](https://github.com/GoogleCloudPlatform/tensorflow-recommendation-wals/commits?author=lukmanr) This project deploys a solution for a recommendation service on GCP, using the WALS algorithm in TensorFlow. Components include: - Recommendation model code, and scripts to train and tune the model on ML Engine - A REST endpoint using Google Cloud Endpoints for serving recommendations - An Airflow server managed by Cloud Composer for running scheduled model training ## Confirm Prerequisites ### Create a Cloud Composer Instance - Create a Cloud Composer [instance](https://console.cloud.google.com/composer/environments/create?project=) 1. Specify 'composer' for name 2. Choose a location 3. Keep the remaining settings at their defaults 4. Select Create This takes 15 - 20 minutes. Continue with the rest of the lab as you will be using Cloud Composer near the end. ``` %%bash pip install sh --upgrade pip # needed to execute shell scripts later ``` ### Setup environment variables <span style="color: blue">__Replace the below settings with your own.__</span> Note: you can leave AIRFLOW_BUCKET blank and come back to it after your Composer instance is created which automatically will create an Airflow bucket for you. <br><br> ### 1. Make a GCS bucket with the name recserve_[YOUR-PROJECT-ID]: ``` import os BUCKET = 'BUCKET' # REPLACE WITH A BUCKET NAME (PUT YOUR PROJECT ID AND WE CREATE THE BUCKET ITSELF NEXT) PROJECT = 'PROJECT' # REPLACE WITH YOUR PROJECT ID REGION = 'us-central1' # REPLACE WITH YOUR REGION e.g. us-central1 # do not change these os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = 'recserve_' + BUCKET os.environ['REGION'] = REGION %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION %%bash # create GCS bucket with recserve_PROJECT_NAME if not exists exists=$(gsutil ls -d | grep -w gs://${BUCKET}/) if [ -n "$exists" ]; then echo "Not creating recserve_bucket since it already exists." else echo "Creating recserve_bucket" gsutil mb -l ${REGION} gs://${BUCKET} fi ``` ### Setup Google App Engine permissions 1. In [IAM](https://console.cloud.google.com/iam-admin/iam?project=), __change permissions for "Compute Engine default service account" from Editor to Owner__. This is required so you can create and deploy App Engine versions from within Cloud Datalab. Note: the alternative is to run all app engine commands directly in Cloud Shell instead of from within Cloud Datalab.<br/><br/> 2. Create an App Engine instance if you have not already by uncommenting and running the below code ``` # %%bash # run app engine creation commands # gcloud app create --region ${REGION} # see: https://cloud.google.com/compute/docs/regions-zones/ # gcloud app update --no-split-health-checks ``` # Part One: Setup and Train the WALS Model ## Upload sample data to BigQuery This tutorial comes with a sample Google Analytics data set, containing page tracking events from the Austrian news site Kurier.at. The schema file '''ga_sessions_sample_schema.json''' is located in the folder data in the tutorial code, and the data file '''ga_sessions_sample.json.gz''' is located in a public Cloud Storage bucket associated with this tutorial. 
To upload this data set to BigQuery:

### Copy sample data files into our bucket

```
%%bash
gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/endtoend/data/ga_sessions_sample.json.gz gs://${BUCKET}/data/ga_sessions_sample.json.gz
gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/endtoend/data/recommendation_events.csv data/recommendation_events.csv
gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/endtoend/data/recommendation_events.csv gs://${BUCKET}/data/recommendation_events.csv
```

### 2. Create empty BigQuery dataset and load sample JSON data

Note: ingesting the 400K rows of sample data usually takes 5-7 minutes.

```
%%bash
# create BigQuery dataset if it doesn't already exist
exists=$(bq ls -d | grep -w GA360_test)
if [ -n "$exists" ]; then
   echo "Not creating GA360_test since it already exists."
else
   echo "Creating GA360_test dataset."
   bq --project_id=${PROJECT} mk GA360_test
fi

# create the schema and load our sample Google Analytics session data
bq load --source_format=NEWLINE_DELIMITED_JSON \
 GA360_test.ga_sessions_sample \
 gs://${BUCKET}/data/ga_sessions_sample.json.gz \
 data/ga_sessions_sample_schema.json # can't load schema files from GCS
```

## Install WALS model training package and model data

### 1. Create a distributable package. Copy the package up to the code folder in the bucket you created previously.

```
%%bash
cd wals_ml_engine

echo "creating distributable package"
python setup.py sdist

echo "copying ML package to bucket"
gsutil cp dist/wals_ml_engine-0.1.tar.gz gs://${BUCKET}/code/
```

### 2. Run the WALS model on the sample data set:

```
%%bash
# view the ML train local script before running
cat wals_ml_engine/mltrain.sh

%%bash
cd wals_ml_engine

# train locally with unoptimized hyperparams
./mltrain.sh local ../data/recommendation_events.csv --data-type web_views --use-optimized

# Options if we wanted to train on CMLE. We will do this with Cloud Composer later
# train on ML Engine with optimized hyperparams
# ./mltrain.sh train ../data/recommendation_events.csv --data-type web_views --use-optimized

# tune hyperparams on ML Engine:
# ./mltrain.sh tune ../data/recommendation_events.csv --data-type web_views
```

This will take a couple of minutes, and create a job directory under wals_ml_engine/jobs like "wals_ml_local_20180102_012345/model", containing the model files saved as numpy arrays.

### View the locally trained model directory

```
ls wals_ml_engine/jobs
```

### 3. Copy the model files from this directory to the model folder in the project bucket:

In the case of multiple models, take the most recent (tail -1)

```
%bash
export JOB_MODEL=$(find wals_ml_engine/jobs -name "model" | tail -1)
gsutil cp ${JOB_MODEL}/* gs://${BUCKET}/model/

echo "Recommendation model file numpy arrays in bucket:"
gsutil ls gs://${BUCKET}/model/
```

# Install the recserve endpoint

### 1. Prepare the deploy template for the Cloud Endpoint API:

```
%bash
cd scripts
cat prepare_deploy_api.sh

%%bash
printf "\nCopy and run the deploy script generated below:\n"
cd scripts
./prepare_deploy_api.sh         # Prepare config file for the API.
```

This will output something like:

```To deploy:  gcloud endpoints services deploy /var/folders/1m/r3slmhp92074pzdhhfjvnw0m00dhhl/T/tmp.n6QVl5hO.yaml```

### 2.
Run the endpoints deploy command output above: <span style="color: blue">Be sure to __replace the below [FILE_NAME]__ with the results from above before running.</span> ``` %%bash gcloud endpoints services deploy [REPLACE_WITH_TEMP_FILE_NAME.yaml] ``` ### 3. Prepare the deploy template for the App Engine App: ``` %%bash # view the app deployment script cat scripts/prepare_deploy_app.sh %%bash # prepare to deploy cd scripts ./prepare_deploy_app.sh ``` You can ignore the script output "ERROR: (gcloud.app.create) The project [...] already contains an App Engine application. You can deploy your application using gcloud app deploy." This is expected. The script will output something like: ```To deploy: gcloud -q app deploy app/app_template.yaml_deploy.yaml``` ### 4. Run the command above: ``` %%bash gcloud -q app deploy app/app_template.yaml_deploy.yaml ``` This will take 7 - 10 minutes to deploy the app. While you wait, consider starting on Part Two below and completing the Cloud Composer DAG file. ## Query the API for Article Recommendations Lastly, you are able to test the recommendation model API by submitting a query request. Note the example userId passed and numRecs desired as the URL parameters for the model input. ``` %%bash cd scripts ./query_api.sh # Query the API. ./generate_traffic.sh # Send traffic to the API. ``` If the call is successful, you will see the article IDs recommended for that specific user by the WALS ML model <br/> (Example: curl "https://qwiklabs-gcp-12345.appspot.com/recommendation?userId=5448543647176335931&numRecs=5" {"articles":["299824032","1701682","299935287","299959410","298157062"]} ) __Part One is done!__ You have successfully created the back-end architecture for serving your ML recommendation system. But we're not done yet, we still need to automatically retrain and redeploy our model once new data comes in. For that we will use [Cloud Composer](https://cloud.google.com/composer/) and [Apache Airflow](https://airflow.apache.org/).<br/><br/> *** # Part Two: Setup a scheduled workflow with Cloud Composer In this section you will complete a partially written training.py DAG file and copy it to the DAGS folder in your Composer instance. ## Copy your Airflow bucket name 1. Navigate to your Cloud Composer [instance](https://console.cloud.google.com/composer/environments?project=)<br/><br/> 2. Select __DAGs Folder__<br/><br/> 3. You will be taken to the Google Cloud Storage bucket that Cloud Composer has created automatically for your Airflow instance<br/><br/> 4. __Copy the bucket name__ into the variable below (example: us-central1-composer-08f6edeb-bucket) ``` AIRFLOW_BUCKET = 'us-central1-composer-21587538-bucket' # REPLACE WITH AIRFLOW BUCKET NAME os.environ['AIRFLOW_BUCKET'] = AIRFLOW_BUCKET ``` ## Complete the training.py DAG file Apache Airflow orchestrates tasks out to other services through a [DAG (Directed Acyclic Graph)](https://airflow.apache.org/concepts.html) file which specifies what services to call, what to do, and when to run these tasks. DAG files are written in python and are loaded automatically into Airflow once present in the Airflow/dags/ folder in your Cloud Composer bucket. Your task is to complete the partially written DAG file below which will enable the automatic retraining and redeployment of our WALS recommendation model. __Complete the #TODOs__ in the Airflow DAG file below and execute the code block to save the file ``` %%writefile airflow/dags/training.py # Copyright 2018 Google Inc. All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """DAG definition for recserv model training.""" import airflow from airflow import DAG # Reference for all available airflow operators: # https://github.com/apache/incubator-airflow/tree/master/airflow/contrib/operators from airflow.contrib.operators.bigquery_operator import BigQueryOperator from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator from airflow.hooks.base_hook import BaseHook # from airflow.contrib.operators.mlengine_operator import MLEngineTrainingOperator # above mlengine_operator currently doesnt support custom MasterType so we import our own plugins: # custom plugins from airflow.operators.app_engine_admin_plugin import AppEngineVersionOperator from airflow.operators.ml_engine_plugin import MLEngineTrainingOperator import datetime def _get_project_id(): """Get project ID from default GCP connection.""" extras = BaseHook.get_connection('google_cloud_default').extra_dejson key = 'extra__google_cloud_platform__project' if key in extras: project_id = extras[key] else: raise ('Must configure project_id in google_cloud_default ' 'connection from Airflow Console') return project_id PROJECT_ID = _get_project_id() # Data set constants, used in BigQuery tasks. You can change these # to conform to your data. # TODO: Specify your BigQuery dataset name and table name DATASET = '' TABLE_NAME = '' ARTICLE_CUSTOM_DIMENSION = '10' # TODO: Confirm bucket name and region # GCS bucket names and region, can also be changed. BUCKET = 'gs://recserve_' + PROJECT_ID REGION = 'us-east1' # The code package name comes from the model code in the wals_ml_engine # directory of the solution code base. PACKAGE_URI = BUCKET + '/code/wals_ml_engine-0.1.tar.gz' JOB_DIR = BUCKET + '/jobs' default_args = { 'owner': 'airflow', 'depends_on_past': False, 'start_date': airflow.utils.dates.days_ago(2), 'email': ['[email protected]'], 'email_on_failure': True, 'email_on_retry': False, 'retries': 5, 'retry_delay': datetime.timedelta(minutes=5) } # Default schedule interval using cronjob syntax - can be customized here # or in the Airflow console. 
# TODO: Specify a schedule interval in CRON syntax to run once a day at 2100 hours (9pm) # Reference: https://airflow.apache.org/scheduler.html schedule_interval = '' # example '00 XX 0 0 0' # TODO: Title your DAG to be recommendations_training_v1 dag = DAG('', default_args=default_args, schedule_interval=schedule_interval) dag.doc_md = __doc__ # # # Task Definition # # # BigQuery training data query bql=''' #legacySql SELECT fullVisitorId as clientId, ArticleID as contentId, (nextTime - hits.time) as timeOnPage, FROM( SELECT fullVisitorId, hits.time, MAX(IF(hits.customDimensions.index={0}, hits.customDimensions.value,NULL)) WITHIN hits AS ArticleID, LEAD(hits.time, 1) OVER (PARTITION BY fullVisitorId, visitNumber ORDER BY hits.time ASC) as nextTime FROM [{1}.{2}.{3}] WHERE hits.type = "PAGE" ) HAVING timeOnPage is not null and contentId is not null; ''' bql = bql.format(ARTICLE_CUSTOM_DIMENSION, PROJECT_ID, DATASET, TABLE_NAME) # TODO: Complete the BigQueryOperator task to truncate the table if it already exists before writing # Reference: https://airflow.apache.org/integration.html#bigqueryoperator t1 = BigQuerySomething( # correct the operator name task_id='bq_rec_training_data', bql=bql, destination_dataset_table='%s.recommendation_events' % DATASET, write_disposition='WRITE_T_______', # specify to truncate on writes dag=dag) # BigQuery training data export to GCS # TODO: Fill in the missing operator name for task #2 which # takes a BigQuery dataset and table as input and exports it to GCS as a CSV training_file = BUCKET + '/data/recommendation_events.csv' t2 = BigQueryToCloudSomethingSomething( # correct the name task_id='bq_export_op', source_project_dataset_table='%s.recommendation_events' % DATASET, destination_cloud_storage_uris=[training_file], export_format='CSV', dag=dag ) # ML Engine training job job_id = 'recserve_{0}'.format(datetime.datetime.now().strftime('%Y%m%d%H%M')) job_dir = BUCKET + '/jobs/' + job_id output_dir = BUCKET training_args = ['--job-dir', job_dir, '--train-files', training_file, '--output-dir', output_dir, '--data-type', 'web_views', '--use-optimized'] # TODO: Fill in the missing operator name for task #3 which will # start a new training job to Cloud ML Engine # Reference: https://airflow.apache.org/integration.html#cloud-ml-engine # https://cloud.google.com/ml-engine/docs/tensorflow/machine-types t3 = MLEngineSomethingSomething( # complete the name task_id='ml_engine_training_op', project_id=PROJECT_ID, job_id=job_id, package_uris=[PACKAGE_URI], training_python_module='trainer.task', training_args=training_args, region=REGION, scale_tier='CUSTOM', master_type='complex_model_m_gpu', dag=dag ) # App Engine deploy new version t4 = AppEngineVersionOperator( task_id='app_engine_deploy_version', project_id=PROJECT_ID, service_id='default', region=REGION, service_spec=None, dag=dag ) # TODO: Be sure to set_upstream dependencies for all tasks t2.set_upstream(t1) t3.set_upstream(t2) t4.set_upstream(t) # complete ``` ### Copy local Airflow DAG file and plugins into the DAGs folder ``` %bash gsutil cp airflow/dags/training.py gs://${AIRFLOW_BUCKET}/dags # overwrite if it exists gsutil cp -r airflow/plugins gs://${AIRFLOW_BUCKET} # copy custom plugins ``` 2. Navigate to your Cloud Composer [instance](https://console.cloud.google.com/composer/environments?project=)<br/><br/> 3. Trigger a __manual run__ of your DAG for testing<br/><br/> 3. 
Ensure your DAG runs successfully (all nodes outlined in dark green and the 'success' tag shows)

![Successful Airflow DAG run](./img/airflow_successful_run.jpg "Successful Airflow DAG run")

## Troubleshooting your DAG

DAG not executing successfully? Follow the steps below to troubleshoot. Click on the name of a DAG to view a run (ex: recommendations_training_v1)

1. Select a node in the DAG (red or yellow borders mean failed nodes)
2. Select View Log
3. Scroll to the bottom of the log to diagnose
4. Optional: Clear and immediately restart the DAG after diagnosing the issue

Tips:
- If bq_rec_training_data immediately fails without logs, your DAG file is missing key parts and is not compiling
- ml_engine_training_op will take 9 - 12 minutes to run. Monitor the training job in [ML Engine](https://console.cloud.google.com/mlengine/jobs?project=)
- Lastly, check the [solution endtoend.ipynb](../endtoend/endtoend.ipynb) to compare your lab answers

![Viewing Airflow logs](./img/airflow_viewing_logs.jpg "Viewing Airflow logs")

# Congratulations!

You have made it to the end of the end-to-end recommendation system lab. You have successfully set up an automated workflow to retrain and redeploy your recommendation model.

***

# Challenges

Looking to solidify your Cloud Composer skills even more? Complete the __optional challenges__ below. <br/><br/>

### Challenge 1
Use either the [BigQueryCheckOperator](https://airflow.apache.org/integration.html#bigquerycheckoperator) or the [BigQueryValueCheckOperator](https://airflow.apache.org/integration.html#bigqueryvaluecheckoperator) to create a new task in your DAG that ensures the SQL query for training data is returning valid results before it is passed to Cloud ML Engine for training (one possible starting point is sketched after this section). <br/><br/>
Hint: Check for COUNT() = 0 or other health check
<br/><br/><br/>

### Challenge 2
Create a Cloud Function to [automatically trigger](https://cloud.google.com/composer/docs/how-to/using/triggering-with-gcf) your DAG when a new recommendation_events.csv file is loaded into your Google Cloud Storage Bucket. <br/><br/>
Hint: Check the [composer_gcf_trigger.ipynb lab](../composer_gcf_trigger/composertriggered.ipynb) for inspiration
<br/><br/><br/>

### Challenge 3
Modify the BigQuery query in the DAG to only train on a portion of the data available in the dataset using a WHERE clause filtering on date. Next, parameterize the WHERE clause to be based on when the Airflow DAG is run.
<br/><br/>
Hint: Make use of prebuilt [Airflow macros](https://airflow.incubator.apache.org/_modules/airflow/macros.html) like the below:

_constants or can be dynamic based on Airflow macros_ <br/>
max_query_date = '2018-02-01' # {{ macros.ds_add(ds, -7) }} <br/>
min_query_date = '2018-01-01' # {{ macros.ds_add(ds, -1) }}

## Additional Resources

- Follow the latest [Airflow operators](https://github.com/apache/incubator-airflow/tree/master/airflow/contrib/operators) on github
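For Challenge 1, here is one possible starting point, written against the same Airflow 1.x `contrib` operators used in the DAG above. Treat it as a sketch: the operator's module path and the exact check query are assumptions you should verify against your Composer environment, and you would paste it into training.py alongside the existing tasks.

```
# Sketch for Challenge 1 (assumes the Airflow 1.x contrib layout used elsewhere
# in training.py; verify the import against your Composer image).
from airflow.contrib.operators.bigquery_check_operator import BigQueryCheckOperator

# Fails the DAG run if the freshly generated training table is empty,
# so we never ship an empty CSV to Cloud ML Engine.
bq_check_data_op = BigQueryCheckOperator(
    task_id='bq_check_training_data',
    sql='SELECT COUNT(*) FROM [{0}:{1}.recommendation_events]'.format(PROJECT_ID, DATASET),
    dag=dag)

# Run the check after the training-data query (t1) and before the GCS export (t2):
# bq_check_data_op.set_upstream(t1)
# t2.set_upstream(bq_check_data_op)
```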
github_jupyter
%%bash pip install sh --upgrade pip # needed to execute shell scripts later import os BUCKET = 'BUCKET' # REPLACE WITH A BUCKET NAME (PUT YOUR PROJECT ID AND WE CREATE THE BUCKET ITSELF NEXT) PROJECT = 'PROJECT' # REPLACE WITH YOUR PROJECT ID REGION = 'us-central1' # REPLACE WITH YOUR REGION e.g. us-central1 # do not change these os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = 'recserve_' + BUCKET os.environ['REGION'] = REGION %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION %%bash # create GCS bucket with recserve_PROJECT_NAME if not exists exists=$(gsutil ls -d | grep -w gs://${BUCKET}/) if [ -n "$exists" ]; then echo "Not creating recserve_bucket since it already exists." else echo "Creating recserve_bucket" gsutil mb -l ${REGION} gs://${BUCKET} fi # %%bash # run app engine creation commands # gcloud app create --region ${REGION} # see: https://cloud.google.com/compute/docs/regions-zones/ # gcloud app update --no-split-health-checks %%bash gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/endtoend/data/ga_sessions_sample.json.gz gs://${BUCKET}/data/ga_sessions_sample.json.gz gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/endtoend/data/recommendation_events.csv data/recommendation_events.csv gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/endtoend/data/recommendation_events.csv gs://${BUCKET}/data/recommendation_events.csv %%bash # create BigQuery dataset if it doesn't already exist exists=$(bq ls -d | grep -w GA360_test) if [ -n "$exists" ]; then echo "Not creating GA360_test since it already exists." else echo "Creating GA360_test dataset." bq --project_id=${PROJECT} mk GA360_test fi # create the schema and load our sample Google Analytics session data bq load --source_format=NEWLINE_DELIMITED_JSON \ GA360_test.ga_sessions_sample \ gs://${BUCKET}/data/ga_sessions_sample.json.gz \ data/ga_sessions_sample_schema.json # can't load schema files from GCS %%bash cd wals_ml_engine echo "creating distributable package" python setup.py sdist echo "copying ML package to bucket" gsutil cp dist/wals_ml_engine-0.1.tar.gz gs://${BUCKET}/code/ %%bash # view the ML train local script before running cat wals_ml_engine/mltrain.sh %%bash cd wals_ml_engine # train locally with unoptimized hyperparams ./mltrain.sh local ../data/recommendation_events.csv --data-type web_views --use-optimized # Options if we wanted to train on CMLE. We will do this with Cloud Composer later # train on ML Engine with optimized hyperparams # ./mltrain.sh train ../data/recommendation_events.csv --data-type web_views --use-optimized # tune hyperparams on ML Engine: # ./mltrain.sh tune ../data/recommendation_events.csv --data-type web_views ls wals_ml_engine/jobs %bash export JOB_MODEL=$(find wals_ml_engine/jobs -name "model" | tail -1) gsutil cp ${JOB_MODEL}/* gs://${BUCKET}/model/ echo "Recommendation model file numpy arrays in bucket:" gsutil ls gs://${BUCKET}/model/ %bash cd scripts cat prepare_deploy_api.sh %%bash printf "\nCopy and run the deploy script generated below:\n" cd scripts ./prepare_deploy_api.sh # Prepare config file for the API. ### 2. Run the endpoints deploy command output above: <span style="color: blue">Be sure to __replace the below [FILE_NAME]__ with the results from above before running.</span> ### 3. Prepare the deploy template for the App Engine App: You can ignore the script output "ERROR: (gcloud.app.create) The project [...] 
already contains an App Engine application. You can deploy your application using gcloud app deploy." This is expected. The script will output something like: ### 4. Run the command above: This will take 7 - 10 minutes to deploy the app. While you wait, consider starting on Part Two below and completing the Cloud Composer DAG file. ## Query the API for Article Recommendations Lastly, you are able to test the recommendation model API by submitting a query request. Note the example userId passed and numRecs desired as the URL parameters for the model input. If the call is successful, you will see the article IDs recommended for that specific user by the WALS ML model <br/> (Example: curl "https://qwiklabs-gcp-12345.appspot.com/recommendation?userId=5448543647176335931&numRecs=5" {"articles":["299824032","1701682","299935287","299959410","298157062"]} ) __Part One is done!__ You have successfully created the back-end architecture for serving your ML recommendation system. But we're not done yet, we still need to automatically retrain and redeploy our model once new data comes in. For that we will use [Cloud Composer](https://cloud.google.com/composer/) and [Apache Airflow](https://airflow.apache.org/).<br/><br/> *** # Part Two: Setup a scheduled workflow with Cloud Composer In this section you will complete a partially written training.py DAG file and copy it to the DAGS folder in your Composer instance. ## Copy your Airflow bucket name 1. Navigate to your Cloud Composer [instance](https://console.cloud.google.com/composer/environments?project=)<br/><br/> 2. Select __DAGs Folder__<br/><br/> 3. You will be taken to the Google Cloud Storage bucket that Cloud Composer has created automatically for your Airflow instance<br/><br/> 4. __Copy the bucket name__ into the variable below (example: us-central1-composer-08f6edeb-bucket) ## Complete the training.py DAG file Apache Airflow orchestrates tasks out to other services through a [DAG (Directed Acyclic Graph)](https://airflow.apache.org/concepts.html) file which specifies what services to call, what to do, and when to run these tasks. DAG files are written in python and are loaded automatically into Airflow once present in the Airflow/dags/ folder in your Cloud Composer bucket. Your task is to complete the partially written DAG file below which will enable the automatic retraining and redeployment of our WALS recommendation model. __Complete the #TODOs__ in the Airflow DAG file below and execute the code block to save the file ### Copy local Airflow DAG file and plugins into the DAGs folder
0.444083
0.972571
# Randomized Benchmarking

## Introduction

One of the main challenges in building a quantum information processor is the non-scalability of completely characterizing the noise affecting a quantum system via process tomography. In addition, process tomography is sensitive to noise in the pre- and post-rotation gates plus the measurements (SPAM errors). Gateset tomography can take these errors into account, but the scaling is even worse. A complete characterization of the noise is useful because it allows for the determination of good error-correction schemes, and thus the possibility of reliable transmission of quantum information.

Since complete process tomography is infeasible for large systems, there is growing interest in scalable methods for partially characterizing the noise affecting a quantum system. A scalable (in the number $n$ of qubits comprising the system) and robust algorithm for benchmarking the full set of Clifford gates by a single parameter using randomization techniques was presented in [1]. The concept of using randomization methods for benchmarking quantum gates is commonly called **Randomized Benchmarking (RB)**.

### References

1. Easwar Magesan, J. M. Gambetta, and Joseph Emerson, *Robust randomized benchmarking of quantum processes*, https://arxiv.org/pdf/1009.3639
2. Easwar Magesan, Jay M. Gambetta, and Joseph Emerson, *Characterizing Quantum Gates via Randomized Benchmarking*, https://arxiv.org/pdf/1109.6887
3. A. D. Córcoles, Jay M. Gambetta, Jerry M. Chow, John A. Smolin, Matthew Ware, J. D. Strand, B. L. T. Plourde, and M. Steffen, *Process verification of two-qubit quantum gates by randomized benchmarking*, https://arxiv.org/pdf/1210.7011
4. Jay M. Gambetta, A. D. Córcoles, S. T. Merkel, B. R. Johnson, John A. Smolin, Jerry M. Chow, Colm A. Ryan, Chad Rigetti, S. Poletto, Thomas A. Ohki, Mark B. Ketchen, and M. Steffen, *Characterization of addressability by simultaneous randomized benchmarking*, https://arxiv.org/pdf/1204.6308
5. David C. McKay, Sarah Sheldon, John A. Smolin, Jerry M. Chow, and Jay M. Gambetta, *Three Qubit Randomized Benchmarking*, https://arxiv.org/pdf/1712.06550

## The Randomized Benchmarking Protocol

An RB protocol (see [1,2]) consists of the following steps. (We first import the relevant Qiskit classes for the demonstration.)

```
#Import general libraries (needed for functions)
import numpy as np
import matplotlib.pyplot as plt
from IPython import display

#Import the RB Functions
import qiskit.ignis.verification.randomized_benchmarking as rb

#Import Qiskit classes
import qiskit
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error
```

### Step 1: Generate RB sequences

The RB sequences consist of random Clifford elements chosen uniformly from the Clifford group on $n$ qubits, including a computed reversal element that should return the qubits to the initial state.

More precisely, for each length $m$, we choose $K_m$ RB sequences. Each such sequence contains $m$ random elements $C_{i_j}$ chosen uniformly from the Clifford group on $n$ qubits, and the $(m+1)$-st element is defined as follows: $C_{i_{m+1}} = (C_{i_1}\cdot ... \cdot C_{i_m})^{-1}$. It can be found efficiently by the Gottesman-Knill theorem.

For example, we generate below several sequences of 2-qubit Clifford circuits.
``` #Generate RB circuits (2Q RB) #number of qubits nQ=2 rb_opts = {} #Number of Cliffords in the sequence rb_opts['length_vector'] = [1, 10, 20, 50, 75, 100, 125, 150, 175, 200] #Number of seeds (random sequences) rb_opts['nseeds'] = 5 #Default pattern rb_opts['rb_pattern'] = [[0,1]] rb_circs, xdata = rb.randomized_benchmarking_seq(**rb_opts) ``` As an example, we print the circuit corresponding to the first RB sequence ``` print(rb_circs[0][0]) ``` One can verify that the Unitary representing each RB circuit should be the identity (with a global phase). We simulate this using Aer unitary simulator. ``` # Create a new circuit without the measurement qregs = rb_circs[0][-1].qregs cregs = rb_circs[0][-1].cregs qc = qiskit.QuantumCircuit(*qregs, *cregs) for i in rb_circs[0][-1][0:-nQ]: qc.data.append(i) # The Unitary is an identity (with a global phase) backend = qiskit.Aer.get_backend('unitary_simulator') basis_gates = ['u1','u2','u3','cx'] # use U,CX for now job = qiskit.execute(qc, backend=backend, basis_gates=basis_gates) print(np.around(job.result().get_unitary(),3)) ``` ### Step 2: Execute the RB sequences (with some noise) We can execute the RB sequences either using Qiskit Aer Simulator (with some noise model) or using IBMQ provider, and obtain a list of results. By assumption each operation $C_{i_j}$ is allowed to have some error, represented by $\Lambda_{i_j,j}$, and each sequence can be modeled by the operation: $$\textit{S}_{\textbf{i}_\textbf{m}} = \bigcirc_{j=1}^{m+1} (\Lambda_{i_j,j} \circ C_{i_j})$$ where ${\textbf{i}_\textbf{m}} = (i_1,...,i_m)$ and $i_{m+1}$ is uniquely determined by ${\textbf{i}_\textbf{m}}$. ``` # Run on a noisy simulator noise_model = NoiseModel() # Depolarizing_error dp = 0.005 noise_model.add_all_qubit_quantum_error(depolarizing_error(dp, 1), ['u1', 'u2', 'u3']) noise_model.add_all_qubit_quantum_error(depolarizing_error(2*dp, 2), 'cx') backend = qiskit.Aer.get_backend('qasm_simulator') ``` ### Step 3: Get statistics about the survival probabilities For each of the $K_m$ sequences the survival probability $Tr[E_\psi \textit{S}_{\textbf{i}_\textbf{m}}(\rho_\psi)]$ is measured. Here $\rho_\psi$ is the initial state taking into account preparation errors and $E_\psi$ is the POVM element that takes into account measurement errors. In the ideal (noise-free) case $\rho_\psi = E_\psi = | \psi {\rangle} {\langle} \psi |$. In practice one can measure the probability to go back to the exact initial state, i.e. all the qubits in the ground state $ {|} 00...0 {\rangle}$ or just the probability for one of the qubits to return back to the ground state. Measuring the qubits independently can be more convenient if a correlated measurement scheme is not possible. Both measurements will fit to the same decay parameter according to the properties of the *twirl*. ### Step 4: Find the averaged sequence fidelity Average over the $K_m$ random realizations of the sequence to find the averaged sequence **fidelity**, $$F_{seq}(m,|\psi{\rangle}) = Tr[E_\psi \textit{S}_{K_m}(\rho_\psi)]$$ where $$\textit{S}_{K_m} = \frac{1}{K_m} \sum_{\textbf{i}_\textbf{m}} \textit{S}_{\textbf{i}_\textbf{m}}$$ is the average sequence operation. ### Step 5: Fit the results Repeat Steps 1 through 4 for different values of $m$ and fit the results for the averaged sequence fidelity to the model: $$ \textit{F}_{seq}^{(0)} \big(m,{|}\psi {\rangle} \big) = A_0 \alpha^m +B_0$$ where $A_0$ and $B_0$ absorb state preparation and measurement errors as well as an edge effect from the error on the final gate. 
$\alpha$ determines the average error-rate $r$, which is also called **Error per Clifford (EPC)** according to the relation
$$ r = 1-\alpha-\frac{1-\alpha}{2^n} = \frac{2^n-1}{2^n}(1-\alpha)$$
(where $n=nQ$ is the number of qubits).

As an example, we calculate the average sequence fidelity for each of the RB sequences, fit the results to the exponential curve, and compute the parameters $\alpha$ and EPC.

```
# Create the RB fitter
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx']
shots = 200
qobj_list = []
rb_fit = rb.RBFitter(None, xdata, rb_opts['rb_pattern'])
for rb_seed,rb_circ_seed in enumerate(rb_circs):
    print('Compiling seed %d'%rb_seed)
    new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
    qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
    print('Simulating seed %d'%rb_seed)
    job = backend.run(qobj, noise_model=noise_model, backend_options={'max_parallel_experiments': 0})
    qobj_list.append(qobj)
    # Add data to the fitter
    rb_fit.add_data(job.result())
    print('After seed %d, alpha: %f, EPC: %f'%(rb_seed,rb_fit.fit[0]['params'][1], rb_fit.fit[0]['epc']))
```

### Plot the results

```
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)

# Plot the essence by calling plot_rb_data
rb_fit.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)

# Add title and label
ax.set_title('%d Qubit RB'%(nQ), fontsize=18)

plt.show()
```

### The intuition behind RB

The depolarizing quantum channel has a parameter $\alpha$, and works like this: with probability $\alpha$, the state remains the same as before; with probability $1-\alpha$, the state becomes the totally mixed state, namely:
$$\rho_f = \alpha \rho_i + \frac{1-\alpha}{2^n} \mathbf{I}$$

Suppose that we have a sequence of $m$ gates, not necessarily Clifford gates, where the error channel of the gates is a depolarizing channel with parameter $\alpha$ (same $\alpha$ for all the gates). Then with probability $\alpha^m$ the state is correct at the end of the sequence, and with probability $1-\alpha^m$ it becomes the totally mixed state, therefore:
$$\rho_f^m = \alpha^m \rho_i + \frac{1-\alpha^m}{2^n} \mathbf{I}$$

Now suppose that in addition we start with the ground state; that the entire sequence amounts to the identity; and that we measure the state at the end of the sequence with the standard basis. We derive that the probability of success at the end of the sequence is:
$$\alpha^m + \frac{1-\alpha^m}{2^n} = \frac{2^n-1}{2^n}\alpha^m + \frac{1}{2^n} = A_0\alpha^m + B_0$$

It follows that the probability of success, aka fidelity, decays exponentially with the sequence length, with exponent $\alpha$.

The last statement is not necessarily true when the channel is other than the depolarizing channel. However, it turns out that if the gates are uniformly-randomized Clifford gates, then the noise of each gate behaves on average as if it was the depolarizing channel, with some parameter that can be computed from the channel, and we obtain the exponential decay of the fidelity.

Formally, taking an average over a finite group $G$ (like the Clifford group) of a quantum channel $\bar \Lambda$ is also called a *twirl*:
$$ W_G(\bar \Lambda) = \frac{1}{|G|} \sum_{U \in G} U^{\dagger} \circ \bar \Lambda \circ U$$

Twirling over the entire unitary group yields exactly the same result as the Clifford group. The Clifford group is a *2-design* of the unitary group.

## Simultaneous Randomized Benchmarking

RB is designed to address fidelities in multiqubit systems in two ways.
For one, RB over the full $n$-qubit space can be performed by constructing sequences from the $n$-qubit Clifford group. Additionally, the $n$-qubit space can be subdivided into sets of qubits $\{n_i\}$ and $n_i$-qubit RB performed in each subset simultaneously [4]. Both methods give metrics of fidelity in the $n$-qubit space. For example, it is common to perform 2Q RB on the subset of two-qubits defining a CNOT gate while the other qubits are quiescent. As explained in [4], this RB data will not necessarily decay exponentially because the other qubit subspaces are not twirled. Subsets are more rigorously characterized by simultaneous RB, which also measures some level of crosstalk error since all qubits are active. An example of simultaneous RB (1Q RB and 2Q RB) can be found in: https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/ignis/randomized_benchmarking.ipynb ## Predicted Gate Fidelity If we know the errors on the underlying gates (the gateset) we can predict the fidelity. First we need to count the number of these gates per Clifford. Then, the two qubit Clifford gate error function gives the error per 2Q Clifford. It assumes that the error in the underlying gates is depolarizing. This function is derived in the supplement to [5]. ``` #Count the number of single and 2Q gates in the 2Q Cliffords gates_per_cliff = rb.rb_utils.gates_per_clifford(qobj_list, xdata[0],basis_gates, rb_opts['rb_pattern'][0]) for i in range(len(basis_gates)): print("Number of %s gates per Clifford: %f"%(basis_gates[i], np.mean([gates_per_cliff[0][i],gates_per_cliff[1][i]]))) # Prepare lists of the number of qubits and the errors ngates = np.zeros(7) ngates[0:3] = gates_per_cliff[0][0:3] ngates[3:6] = gates_per_cliff[1][0:3] ngates[6] = gates_per_cliff[0][3] gate_qubits = np.array([0, 0, 0, 1, 1, 1, -1], dtype=int) gate_errs = np.zeros(len(gate_qubits)) gate_errs[[1, 4]] = dp/2 #convert from depolarizing error to epg (1Q) gate_errs[[2, 5]] = 2*dp/2 #convert from depolarizing error to epg (1Q) gate_errs[6] = dp*3/4 #convert from depolarizing error to epg (2Q) #Calculate the predicted epc pred_epc = rb.rb_utils.twoQ_clifford_error(ngates,gate_qubits,gate_errs) print("Predicted 2Q Error per Clifford: %e"%pred_epc) ```
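As a quick cross-check of the EPC relation quoted in Step 5, the fitted decay parameter $\alpha$ can also be converted into an error per Clifford by hand. The short sketch below reuses the `rb_fit` object and the `nQ` variable from the cells above; the attribute names follow the calls already used in this notebook.

```
# Recompute EPC from the fitted decay parameter alpha using
# r = (2^n - 1)/2^n * (1 - alpha), and compare with the fitter's own value.
alpha = rb_fit.fit[0]['params'][1]   # fitted decay parameter (as printed per seed above)
epc_by_hand = (2**nQ - 1) / 2**nQ * (1 - alpha)

print('alpha: %f' % alpha)
print('EPC from the formula: %e' % epc_by_hand)
print('EPC reported by the fitter: %e' % rb_fit.fit[0]['epc'])
```

The two EPC numbers should agree, since the fitter applies the same relation internally.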
github_jupyter
#Import general libraries (needed for functions) import numpy as np import matplotlib.pyplot as plt from IPython import display #Import the RB Functions import qiskit.ignis.verification.randomized_benchmarking as rb #Import Qiskit classes import qiskit from qiskit.providers.aer.noise import NoiseModel from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error #Generate RB circuits (2Q RB) #number of qubits nQ=2 rb_opts = {} #Number of Cliffords in the sequence rb_opts['length_vector'] = [1, 10, 20, 50, 75, 100, 125, 150, 175, 200] #Number of seeds (random sequences) rb_opts['nseeds'] = 5 #Default pattern rb_opts['rb_pattern'] = [[0,1]] rb_circs, xdata = rb.randomized_benchmarking_seq(**rb_opts) print(rb_circs[0][0]) # Create a new circuit without the measurement qregs = rb_circs[0][-1].qregs cregs = rb_circs[0][-1].cregs qc = qiskit.QuantumCircuit(*qregs, *cregs) for i in rb_circs[0][-1][0:-nQ]: qc.data.append(i) # The Unitary is an identity (with a global phase) backend = qiskit.Aer.get_backend('unitary_simulator') basis_gates = ['u1','u2','u3','cx'] # use U,CX for now job = qiskit.execute(qc, backend=backend, basis_gates=basis_gates) print(np.around(job.result().get_unitary(),3)) # Run on a noisy simulator noise_model = NoiseModel() # Depolarizing_error dp = 0.005 noise_model.add_all_qubit_quantum_error(depolarizing_error(dp, 1), ['u1', 'u2', 'u3']) noise_model.add_all_qubit_quantum_error(depolarizing_error(2*dp, 2), 'cx') backend = qiskit.Aer.get_backend('qasm_simulator') # Create the RB fitter backend = qiskit.Aer.get_backend('qasm_simulator') basis_gates = ['u1','u2','u3','cx'] shots = 200 qobj_list = [] rb_fit = rb.RBFitter(None, xdata, rb_opts['rb_pattern']) for rb_seed,rb_circ_seed in enumerate(rb_circs): print('Compiling seed %d'%rb_seed) new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates) qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots) print('Simulating seed %d'%rb_seed) job = backend.run(qobj, noise_model=noise_model, backend_options={'max_parallel_experiments': 0}) qobj_list.append(qobj) # Add data to the fitter rb_fit.add_data(job.result()) print('After seed %d, alpha: %f, EPC: %f'%(rb_seed,rb_fit.fit[0]['params'][1], rb_fit.fit[0]['epc'])) plt.figure(figsize=(8, 6)) ax = plt.subplot(1, 1, 1) # Plot the essence by calling plot_rb_data rb_fit.plot_rb_data(0, ax=ax, add_label=True, show_plt=False) # Add title and label ax.set_title('%d Qubit RB'%(nQ), fontsize=18) plt.show() #Count the number of single and 2Q gates in the 2Q Cliffords gates_per_cliff = rb.rb_utils.gates_per_clifford(qobj_list, xdata[0],basis_gates, rb_opts['rb_pattern'][0]) for i in range(len(basis_gates)): print("Number of %s gates per Clifford: %f"%(basis_gates[i], np.mean([gates_per_cliff[0][i],gates_per_cliff[1][i]]))) # Prepare lists of the number of qubits and the errors ngates = np.zeros(7) ngates[0:3] = gates_per_cliff[0][0:3] ngates[3:6] = gates_per_cliff[1][0:3] ngates[6] = gates_per_cliff[0][3] gate_qubits = np.array([0, 0, 0, 1, 1, 1, -1], dtype=int) gate_errs = np.zeros(len(gate_qubits)) gate_errs[[1, 4]] = dp/2 #convert from depolarizing error to epg (1Q) gate_errs[[2, 5]] = 2*dp/2 #convert from depolarizing error to epg (1Q) gate_errs[6] = dp*3/4 #convert from depolarizing error to epg (2Q) #Calculate the predicted epc pred_epc = rb.rb_utils.twoQ_clifford_error(ngates,gate_qubits,gate_errs) print("Predicted 2Q Error per Clifford: %e"%pred_epc)
0.579162
0.989781
# Notebook initialization

```
%matplotlib inline

import random

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import librosa

from IPython.display import Image, display, Audio

def load_features(directory):
    au_features = pd.read_csv('{}/{}/audio_features.csv'.format('../data/output/features',directory), index_col=0)
    im_features = pd.read_csv('{}/{}/image_features.csv'.format('../data/output/features',directory), index_col=0)

    # Drop redundant columns
    im_features = im_features.drop(['label'], axis=1)

    # Merge audio and image features
    features = pd.concat([au_features, im_features], axis=1)

    # Only look at clips less than 300s long
    features = features[features.length < 300]

    return features
```

# Import data

```
features = load_features('train')
features.head()
```

## Clean data

```
print(len(features[features.isnull().any(axis=1)]))
features[features.isnull().any(axis=1)].head()

# just drop the remaining rows with NaN values
features = features.dropna()
```

# Prepare data

```
f = features
features_1 = f[f.label == 1]

from sklearn import preprocessing

# See if we can distinguish voice mail clips from the others
# Features to use
columns = ['length', 'ring_count', 'last_ring_to_end', 'percent_silence', 'white_proportion']
X_train_all = features_1[columns]
#features_1 = features_1[['length', 'last_ring_to_end_length', 'white_proportion']]

scaler = preprocessing.StandardScaler().fit(X_train_all)
X_train_all_scaled = scaler.transform(X_train_all)
```

# Clustering

```
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=2)
labels = kmeans.fit_predict(X_train_all_scaled)

# Look at cluster sizes
unique, counts = np.unique(labels, return_counts=True)
print(counts)
```

# Randomly check images from each cluster

```
# Look at images by cluster to see if they seem to make sense
images = features_1['image_file']

clusters = [[] for _ in range(max(labels)+1)]
for label, img in zip(labels, images):
    clusters[label].append(img)

# Cluster 0 random selection
for img in random.sample(clusters[0], 10):
    display(Image(filename=img, width=320))

# Cluster 1 random selection
for img in random.sample(clusters[1], 10):
    display(Image(filename=img, width=320))

# A third cluster only exists if n_clusters above is raised beyond 2
if len(clusters) > 2:
    for img in clusters[2]:
        display(Image(filename=img, width=320))
```

# Dimensionality reduction and plot

## TSNE

```
from sklearn.manifold import TSNE

# http://alexanderfabisch.github.io/t-sne-in-scikit-learn.html
# http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html
X_tsne = TSNE(n_components=2, verbose=2).fit_transform(X_train_all_scaled)

# Plot tsne results
plt.scatter(X_tsne[:,0], X_tsne[:,1], c=labels)
```

## PCA

```
from sklearn.decomposition import PCA

pca = PCA(n_components=2)
X_pca = pca.fit_transform(X_train_all_scaled)

# Plot PCA results
plt.scatter(X_pca[:,0], X_pca[:,1], c=labels)
```
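The choice of `n_clusters=2` above is somewhat arbitrary. A lightweight way to sanity-check it is to compare silhouette scores for a few values of k; the sketch below reuses `X_train_all_scaled` from the cells above (the `silhouette_score` call is standard scikit-learn, not something this notebook originally used).

```
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Compare a few cluster counts on the scaled feature matrix.
for k in range(2, 6):
    candidate_labels = KMeans(n_clusters=k, random_state=0).fit_predict(X_train_all_scaled)
    score = silhouette_score(X_train_all_scaled, candidate_labels)
    print('k=%d  silhouette=%.3f' % (k, score))
```

A higher silhouette score suggests better-separated clusters; if k=2 does not stand out, the cluster inspection above may be worth repeating with a different k.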
github_jupyter
%matplotlib inline import random import numpy as np import pandas as pd import matplotlib.pyplot as plt import librosa from IPython.display import Image, display, Audio def load_features(directory): au_features = pd.read_csv('{}/{}/audio_features.csv'.format('../data/output/features',directory), index_col=0) im_features = pd.read_csv('{}/{}/image_features.csv'.format('../data/output/features',directory), index_col=0) # Drop redundant columns im_features = im_features.drop(['label'], axis=1) # Merge audio and image features features = pd.concat([au_features, im_features], axis=1) # Only look at clips less than 300s long features = features[features.length < 300] return features features = load_features('train') features.head() print(len(features[features.isnull().any(axis=1)])) features[features.isnull().any(axis=1)].head() # just drop the remaning rows with nan values features = features.dropna() f = features features_1 = f[f.label == 1] from sklearn import preprocessing # See if we can distinguish voice mail clips from the others # Features to use columns = ['length', 'ring_count', 'last_ring_to_end', 'percent_silence', 'white_proportion'] X_train_all = features_1[columns] #features_1 = features_1[['length', 'last_ring_to_end_length', 'white_proportion']] scaler = preprocessing.StandardScaler().fit(X_train_all) X_train_all_scaled = scaler.transform(X_train_all) from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=2) labels = kmeans.fit_predict(X_train_all_scaled) # Look at cluster sizes unique, counts = np.unique(labels, return_counts=True) print(counts) # Look at images by cluster to see if they seem to make sense images = features_1['image_file'] clusters = [[] for _ in range(max(labels)+1)] for label, img in zip(labels, images): clusters[label].append(img) # Cluster 1 random selection for img in random.sample(clusters[0], 10): display(Image(filename=img, width=320)) # Cluster 2 random selection for img in random.sample(clusters[1], 10): display(Image(filename=img, width=320)) # Cluster 2 random selection for img in clusters[2]: display(Image(filename=img, width=320)) from sklearn.manifold import TSNE # http://alexanderfabisch.github.io/t-sne-in-scikit-learn.html # http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html X_tsne = TSNE(n_components=2, verbose=2).fit_transform(X_train_all_scaled) # Plot tsne results plt.scatter(X_tsne[:,0], X_tsne[:,1], c=labels) from sklearn.decomposition import PCA pca = PCA(n_components=2) X_pca = PCA(n_components=2).fit_transform(X_train_all_scaled) # Plot PCA results plt.scatter(X_pca[:,0], X_pca[:,1], c=labels)
0.719482
0.82029
# 1.- Inspect 2MASS output Catalogues ``` import numpy as np import warnings import os, glob, getpass, sys from astropy.table import Table, join, vstack, hstack, Column, MaskedColumn, unique from astropy.utils.exceptions import AstropyWarning from astropy import units as u user = getpass.getuser() sys.path.append('../') from extra_codes import sample_initial as samp_ini ``` ### READ Original & 2MASS-CROSSED CATALOGUES ``` # Read Original & 2MASS Crossed catalogues ============= warnings.filterwarnings('ignore', category=AstropyWarning, append=True) cat_0, cat_0_t = samp_ini.read_cats(inp_cat='2008_wilking_simbad.xml', inp_cat_2mass='2008_wilking_simbad_2mass.vot') cat_1, cat_1_t = samp_ini.read_cats(inp_cat='cat_erickson_1.vot', inp_cat_2mass='cat_erickson_1_tmasss.vot') cat_2, cat_2_t = samp_ini.read_cats(inp_cat='cat_erickson_2.vot', inp_cat_2mass='cat_erickson_2_tmasss.vot') cat_3, cat_3_t = samp_ini.read_cats(inp_cat='cat_erickson_3.vot', inp_cat_2mass='cat_erickson_3_tmasss.vot') cat_4, cat_4_t = samp_ini.read_cats(inp_cat='2015_dunham_OPH_YSO.vot', inp_cat_2mass='2015_dunham_OPH_YSO_tmasss.vot') # Rename cats for later ================================ cat_2008 = cat_0_t cat_2011 = vstack([cat_1_t, cat_2_t, cat_3_t]) cat_2015 = cat_4_t ``` # 2.- Construct Gaia Dr2 Input Table ``` # Construct Initial Sample ============================= cat_ini = vstack([cat_2008, cat_2011, cat_2015]) cat_ini = unique(cat_ini, keys='_2MASS') cat_ini = cat_ini['_2MASS', 'Jmag'] text = f'TOTAL SOURCES AFTER REMOVING DUPLICATES: {len(cat_ini)}' print('=' * len(text)) print(text) print('=' * len(text)) # Include Source Label ================================= cat_ini['ref_1'] = ['N'] * len(cat_ini) cat_ini['ref_2'] = ['N'] * len(cat_ini) cat_ini['ref_3'] = ['N'] * len(cat_ini) # Write labels ========================================= for i in range(len(cat_ini)): tmass = cat_ini['_2MASS'][i] if tmass in cat_2008['_2MASS']: cat_ini['ref_1'][i] = 'Y' # Wilking 2008 if tmass in cat_2011['_2MASS']: cat_ini['ref_2'][i] = 'Y' # Erickson 2011 if tmass in cat_2015['_2MASS']: cat_ini['ref_3'][i] = 'Y' # Dunham 2015 cat_ini['refs'] = [cat_ini['ref_1'][i] + cat_ini['ref_2'][i] + cat_ini['ref_3'][i] for i in range(len(cat_ini))] set(cat_ini['refs']) # Rewrite Reference label ============================== # To check all permutations: set(cat_ini['refs']) cat_ini['refs_2'] = ['1, 2, 3'] * len(cat_ini) for i in range(len(cat_ini)): label = cat_ini['refs'][i] if label == 'NNY': label_2 = '3' if label == 'NYN': label_2 = '2' if label == 'NYY': label_2 = '2, 3' if label == 'YNN': label_2 = '1' if label == 'YNY': label_2 = '1, 3' if label == 'YYN': label_2 = '1, 2' if label == 'YYY': label_2 = '1, 2, 3' cat_ini['refs_2'][i] = label_2 # Write & Remove unwanted cols ========================= cat_ini = cat_ini['_2MASS', 'Jmag', 'refs_2'] cat_ini.rename_column('_2MASS', 'col2mass') # For Gaia Archive cat_ini.rename_column('Jmag', 'jmag') # For Gaia Archive cat_ini.write('sample_ini.vot', format = 'votable', overwrite = True) cat_ini[75:80] # Sanity Check: There are no duplicate 2MASS IDs ======= len(cat_ini), len(unique(cat_ini, 'col2mass')) ```
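The chained `if` statements that translate the Y/N flags into a reference string can also be written as a single dictionary lookup, which makes the eight possible flag combinations explicit. This is only a possible refactor of the cell above, not something the original notebook does:

```
# Map each Y/N flag combination directly to its reference string.
# 1 = Wilking 2008, 2 = Erickson 2011, 3 = Dunham 2015 (as in the cell above).
ref_labels = {
    'NNY': '3',     'NYN': '2',     'NYY': '2, 3',
    'YNN': '1',     'YNY': '1, 3',  'YYN': '1, 2',
    'YYY': '1, 2, 3',
}

cat_ini['refs_2'] = [ref_labels[flags] for flags in cat_ini['refs']]
```

Because every source comes from at least one of the three catalogues, the 'NNN' combination never occurs and is deliberately left out of the mapping.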
github_jupyter
import numpy as np import warnings import os, glob, getpass, sys from astropy.table import Table, join, vstack, hstack, Column, MaskedColumn, unique from astropy.utils.exceptions import AstropyWarning from astropy import units as u user = getpass.getuser() sys.path.append('../') from extra_codes import sample_initial as samp_ini # Read Original & 2MASS Crossed catalogues ============= warnings.filterwarnings('ignore', category=AstropyWarning, append=True) cat_0, cat_0_t = samp_ini.read_cats(inp_cat='2008_wilking_simbad.xml', inp_cat_2mass='2008_wilking_simbad_2mass.vot') cat_1, cat_1_t = samp_ini.read_cats(inp_cat='cat_erickson_1.vot', inp_cat_2mass='cat_erickson_1_tmasss.vot') cat_2, cat_2_t = samp_ini.read_cats(inp_cat='cat_erickson_2.vot', inp_cat_2mass='cat_erickson_2_tmasss.vot') cat_3, cat_3_t = samp_ini.read_cats(inp_cat='cat_erickson_3.vot', inp_cat_2mass='cat_erickson_3_tmasss.vot') cat_4, cat_4_t = samp_ini.read_cats(inp_cat='2015_dunham_OPH_YSO.vot', inp_cat_2mass='2015_dunham_OPH_YSO_tmasss.vot') # Rename cats for later ================================ cat_2008 = cat_0_t cat_2011 = vstack([cat_1_t, cat_2_t, cat_3_t]) cat_2015 = cat_4_t # Construct Initial Sample ============================= cat_ini = vstack([cat_2008, cat_2011, cat_2015]) cat_ini = unique(cat_ini, keys='_2MASS') cat_ini = cat_ini['_2MASS', 'Jmag'] text = f'TOTAL SOURCES AFTER REMOVING DUPLICATES: {len(cat_ini)}' print('=' * len(text)) print(text) print('=' * len(text)) # Include Source Label ================================= cat_ini['ref_1'] = ['N'] * len(cat_ini) cat_ini['ref_2'] = ['N'] * len(cat_ini) cat_ini['ref_3'] = ['N'] * len(cat_ini) # Write labels ========================================= for i in range(len(cat_ini)): tmass = cat_ini['_2MASS'][i] if tmass in cat_2008['_2MASS']: cat_ini['ref_1'][i] = 'Y' # Wilking 2008 if tmass in cat_2011['_2MASS']: cat_ini['ref_2'][i] = 'Y' # Erickson 2011 if tmass in cat_2015['_2MASS']: cat_ini['ref_3'][i] = 'Y' # Dunham 2015 cat_ini['refs'] = [cat_ini['ref_1'][i] + cat_ini['ref_2'][i] + cat_ini['ref_3'][i] for i in range(len(cat_ini))] set(cat_ini['refs']) # Rewrite Reference label ============================== # To check all permutations: set(cat_ini['refs']) cat_ini['refs_2'] = ['1, 2, 3'] * len(cat_ini) for i in range(len(cat_ini)): label = cat_ini['refs'][i] if label == 'NNY': label_2 = '3' if label == 'NYN': label_2 = '2' if label == 'NYY': label_2 = '2, 3' if label == 'YNN': label_2 = '1' if label == 'YNY': label_2 = '1, 3' if label == 'YYN': label_2 = '1, 2' if label == 'YYY': label_2 = '1, 2, 3' cat_ini['refs_2'][i] = label_2 # Write & Remove unwanted cols ========================= cat_ini = cat_ini['_2MASS', 'Jmag', 'refs_2'] cat_ini.rename_column('_2MASS', 'col2mass') # For Gaia Archive cat_ini.rename_column('Jmag', 'jmag') # For Gaia Archive cat_ini.write('sample_ini.vot', format = 'votable', overwrite = True) cat_ini[75:80] # Sanity Check: There are no duplicate 2MASS IDs ======= len(cat_ini), len(unique(cat_ini, 'col2mass'))
0.208904
0.757256
# Lab: Feature Analysis Using TensorFlow Data Validation and Facets

**Learning Objectives:**

1. Use TFRecords to load record-oriented binary format data
2. Use TFDV to generate statistics and Facets to visualize the data
3. Use the TFDV widget to answer questions
4. Analyze label distribution for subset groups

## Introduction

Bias can manifest in any part of a typical machine learning pipeline, from an unrepresentative dataset, to learned model representations, to the way in which the results are presented to the user. Errors that result from this bias can disproportionately impact some users more than others.

[TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) (TFDV) is one tool you can use to analyze your data and find potential problems, such as missing values and data imbalances, that can lead to fairness disparities. The TFDV tool analyzes training and serving data to compute descriptive statistics, infer a schema, and detect data anomalies. [Facets Overview](https://pair-code.github.io/facets/) provides a succinct visualization of these statistics for easy browsing. Both TFDV and Facets are tools that are part of the [Fairness Indicators](https://www.tensorflow.org/tfx/fairness_indicators).

In this notebook, we use TFDV to compute descriptive statistics that provide a quick overview of the data in terms of the features that are present and the shapes of their value distributions. We use Facets Overview to visualize these statistics using the Civil Comments dataset.

Each learning objective will correspond to a __#TODO__ in the [student lab notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/gwendolyn-dev/courses/machine_learning/deepdive2/ml_on_gc/03_adv_tfdv_facets.ipynb) -- try to complete that notebook first before reviewing this solution notebook.

## Set up environment variables and load necessary libraries

We will start by importing the necessary dependencies for the libraries we'll be using in this exercise. First, run the cell below to install Fairness Indicators.

**NOTE:** You can ignore the "pip" being invoked by an old script wrapper, as it will not affect the lab's functionality.

```
!pip3 install fairness-indicators --user
```

<strong>Restart the kernel</strong> after you do a pip3 install (click on the <strong>Restart the kernel</strong> button above).

Next, import all the dependencies we'll use in this exercise, which include Fairness Indicators, TensorFlow Data Validation (tfdv), and the What-If tool (WIT) Facets Overview.

```
# %tensorflow_version 2.x
import sys, os
import warnings
warnings.filterwarnings('ignore')
#os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # Ignore deprecation warnings
import tempfile
import apache_beam as beam
import numpy as np
import pandas as pd
from datetime import datetime

import tensorflow_hub as hub
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_data_validation as tfdv
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
from tensorflow_model_analysis.addons.fairness.view import widget_view
from fairness_indicators.examples import util

from witwidget.notebook.visualization import WitConfigBuilder
from witwidget.notebook.visualization import WitWidget

print(tf.version.VERSION)
print(tf)  # This statement shows us which TensorFlow module is currently loaded.
```
``` ### About the Civil Comments dataset Click below to learn more about the Civil Comments dataset, and how we've preprocessed it for this exercise. The Civil Comments dataset comprises approximately 2 million public comments that were submitted to the Civil Comments platform. [Jigsaw](https://jigsaw.google.com/) sponsored the effort to compile and annotate these comments for ongoing [research](https://arxiv.org/abs/1903.04561); they've also hosted competitions on [Kaggle](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) to help classify toxic comments as well as minimize unintended model bias. #### Features Within the Civil Comments data, a subset of comments are tagged with a variety of identity attributes pertaining to gender, sexual orientation, religion, race, and ethnicity. Each identity annotation column contains a value that represents the percentage of annotators who categorized a comment as containing references to that identity. Multiple identities may be present in a comment. **NOTE:** These identity attributes are intended *for evaluation purposes only*, to assess how well a classifier trained solely on the comment text performs on different tag sets. To collect these identity labels, each comment was reviewed by up to 10 annotators, who were asked to indicate all identities that were mentioned in the comment. For example, annotators were posed the question: "What genders are mentioned in the comment?", and asked to choose all of the following categories that were applicable. * Male * Female * Transgender * Other gender * No gender mentioned **NOTE:** *We recognize the limitations of the categories used in the original dataset, and acknowledge that these terms do not encompass the full range of vocabulary used in describing gender.* Jigsaw used these ratings to generate an aggregate score for each identity attribute representing the percentage of raters who said the identity was mentioned in the comment. For example, if 10 annotators reviewed a comment, and 6 said that the comment mentioned the identity "female" and 0 said that the comment mentioned the identity "male," the comment would receive a `female` score of `0.6` and a `male` score of `0.0`. **NOTE:** For the purposes of annotation, a comment was considered to "mention" gender if it contained a comment about gender issues (e.g., a discussion about feminism, wage gap between men and women, transgender rights, etc.), gendered language, or gendered insults. Use of "he," "she," or gendered names (e.g., Donald, Margaret) did not require a gender label. #### Label Each comment was rated by up to 10 annotators for toxicity, who each classified it with one of the following ratings. * Very Toxic * Toxic * Hard to Say * Not Toxic Again, Jigsaw used these ratings to generate an aggregate toxicity "score" for each comment (ranging from `0.0` to `1.0`) to serve as the [label](https://developers.google.com/machine-learning/glossary?utm_source=Colab&utm_medium=fi-colab&utm_campaign=fi-practicum&utm_content=glossary&utm_term=label#label), representing the fraction of annotators who labeled the comment either "Very Toxic" or "Toxic." For example, if 10 annotators rated a comment, and 3 of them labeled it "Very Toxic" and 5 of them labeled it "Toxic", the comment would receive a toxicity score of `0.8`. 
**NOTE:** For more information on the Civil Comments labeling schema, see the [Data](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data) section of the Jigsaw Unintended Bias in Toxicity Classification Kaggle competition.

### Preprocessing the data

For the purposes of this exercise, we converted toxicity and identity columns to booleans in order to work with our neural net and metrics calculations. In the preprocessed dataset, we considered any value ≥ 0.5 as True (i.e., a comment is considered toxic if 50% or more of crowd raters labeled it as toxic).

For identity labels, the threshold 0.5 was chosen and the identities were grouped together by their categories. For example, if one comment has `{ male: 0.3, female: 1.0, transgender: 0.0, heterosexual: 0.8, homosexual_gay_or_lesbian: 1.0 }`, after processing, the data will be `{ gender: [female], sexual_orientation: [heterosexual, homosexual_gay_or_lesbian] }`.

**NOTE:** Missing identity fields were converted to False.

### Use TFRecords to load record-oriented binary format data

-------------------------------------------------------------------------------------------------------

The [TFRecord format](https://www.tensorflow.org/tutorials/load_data/tfrecord) is a simple [Protobuf](https://developers.google.com/protocol-buffers)-based format for storing a sequence of binary records. It lets you and your machine learning models handle arbitrarily large datasets over the network because it:

1. Splits up large files into 100-200MB chunks
2. Stores the results as serialized binary messages for faster ingestion

If you already have a dataset in TFRecord format, you can use the tf.keras.utils functions for accessing the data (as you will below!). If you want to practice creating your own TFRecord datasets you can do so outside of this lab by [viewing the documentation here](https://www.tensorflow.org/tutorials/load_data/tfrecord).

#### TODO 1: Use the tf.keras utility functions to download and import our datasets

Run the following cell to download and import the training and validation preprocessed datasets.

```
download_original_data = False #@param {type:"boolean"}

# TODO 1
if download_original_data:
    train_tf_file = tf.keras.utils.get_file('train_tf.tfrecord',
                                            'https://storage.googleapis.com/civil_comments_dataset/train_tf.tfrecord')
    validate_tf_file = tf.keras.utils.get_file('validate_tf.tfrecord',
                                               'https://storage.googleapis.com/civil_comments_dataset/validate_tf.tfrecord')

    # The identity terms list will be grouped together by their categories
    # (see 'IDENTITY_COLUMNS') on threshold 0.5. Only the identity term column,
    # text column and label column will be kept after processing.
    train_tf_file = util.convert_comments_data(train_tf_file)
    validate_tf_file = util.convert_comments_data(validate_tf_file)

# TODO 1a
else:
    train_tf_file = tf.keras.utils.get_file('train_tf_processed.tfrecord',
                                            'https://storage.googleapis.com/civil_comments_dataset/train_tf_processed.tfrecord')
    validate_tf_file = tf.keras.utils.get_file('validate_tf_processed.tfrecord',
                                               'https://storage.googleapis.com/civil_comments_dataset/validate_tf_processed.tfrecord')
```

### Use TFDV to generate statistics and Facets to visualize the data

TensorFlow Data Validation supports data stored in a TFRecord file or as CSV input, with extensibility for other common formats. You can find the available data decoders [here](https://github.com/tensorflow/data-validation/tree/master/tensorflow_data_validation/coders).
In addition, TFDV provides the [tfdv.generate_statistics_from_dataframe](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_dataframe) utility function for users with in-memory data represented as a pandas DataFrame.

Beyond computing a default set of data statistics, TFDV can also compute statistics for semantic domains (e.g., images, text). To enable computation of semantic domain statistics, pass a tfdv.StatsOptions object with enable_semantic_domain_stats set to True to tfdv.generate_statistics_from_tfrecord. Before we train the model, let's do a quick audit of our training data using [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started), so we can better understand our data distribution.

#### TODO 2: Use TFDV to get quick statistics on your dataset

The following cell may take 2–3 minutes to run. **NOTE:** Please ignore the deprecation warnings.

```
# TODO 2
# The computation of statistics using TFDV. The returned value is a DatasetFeatureStatisticsList protocol buffer.
stats = tfdv.generate_statistics_from_tfrecord(data_location=train_tf_file)

# TODO 2a
# A visualization of the statistics using Facets Overview.
tfdv.visualize_statistics(stats)
```

### TODO 3: Use the TensorFlow Data Validation widget above to answer the following questions.

#### **1. How many total examples are in the training dataset?**

#### Solution

**There are 1.08 million total examples in the training dataset.**

The count column tells us how many examples there are for a given feature. Each feature (`sexual_orientation`, `comment_text`, `gender`, etc.) has 1.08 million examples. The missing column tells us what percentage of examples are missing that feature.

![Screenshot of first row of Categorical Features table in the TFDV widget, with 1.08 million count of examples and 0% missing examples highlighted](https://developers.google.com/machine-learning/practica/fairness-indicators/colab-images/tfdv_screenshot_exercise1.png)

Each feature is missing from 0% of examples, so we know that the per-feature example count of 1.08 million is also the total number of examples in the dataset.

#### **2. How many unique values are there for the `gender` feature? What are they, and what are the frequencies of each of these values?**

**NOTE #1:** `gender` and the other identity features (`sexual_orientation`, `religion`, `disability`, and `race`) are included in this dataset for evaluation purposes only, so we can assess model performance on different identity slices. The only feature we will use for model training is `comment_text`.

**NOTE #2:** *We recognize the limitations of the categories used in the original dataset, and acknowledge that these terms do not encompass the full range of vocabulary used in describing gender.*

#### Solution

The **unique** column of the **Categorical Features** table tells us that there are 4 unique values for the `gender` feature.

To view the 4 values and their frequencies, we can click on the **SHOW RAW DATA** button:

!["gender" row of the "Categorical Data" table in the TFDV widget, with raw data highlighted.](https://developers.google.com/machine-learning/practica/fairness-indicators/colab-images/tfdv_screenshot_exercise2.png)

The raw data table shows that there are 32,208 examples with a gender value of `female`, 26,758 examples with a value of `male`, 1,551 examples with a value of `transgender`, and 4 examples with a value of `other gender`.
**NOTE:** As described [earlier](#scrollTo=J3R2QWkru1WN), a `gender` feature can contain zero or more of these 4 values, depending on the content of the comment. For example, a comment containing the text "I am a transgender man" will have both `transgender` and `male` as `gender` values, whereas a comment that does not reference gender at all will have an empty/false `gender` value.

#### **3. What percentage of total examples are labeled toxic? Overall, is this a class-balanced dataset (relatively even split of examples between positive and negative classes) or a class-imbalanced dataset (majority of examples are in one class)?**

**NOTE:** In this dataset, a `toxicity` value of `0` signifies "not toxic," and a `toxicity` value of `1` signifies "toxic."

#### Solution

**7.98 percent of examples are toxic.**

Under **Numeric Features**, we can see the distribution of values for the `toxicity` feature. 92.02% of examples have a value of 0 (which signifies "non-toxic"), so 7.98% of examples are toxic.

![Screenshot of the "toxicity" row in the Numeric Features table in the TFDV widget, highlighting the "zeros" column showing that 92.01% of examples have a toxicity value of 0.](https://developers.google.com/machine-learning/practica/fairness-indicators/colab-images/tfdv_screenshot_exercise3.png)

This is a [**class-imbalanced dataset**](https://developers.google.com/machine-learning/glossary?utm_source=Colab&utm_medium=fi-colab&utm_campaign=fi-practicum&utm_content=glossary&utm_term=class-imbalanced-dataset#class-imbalanced-dataset), as the overwhelming majority of examples (over 90%) are classified as nontoxic.

Notice that there is one numeric feature (the `toxicity` label) and six categorical features.

### Analyze label distribution for subset groups

Run the following code to analyze label distribution for the subset of examples that contain a `gender` value.

**NOTE:** *The cell should run in just a few minutes.*

```
#@title Calculate label distribution for gender-related examples
raw_dataset = tf.data.TFRecordDataset(train_tf_file)

toxic_gender_examples = 0
nontoxic_gender_examples = 0

# TODO 4
# There are 1,082,924 examples in the dataset
for raw_record in raw_dataset.take(1082924):
    example = tf.train.Example()
    example.ParseFromString(raw_record.numpy())
    if str(example.features.feature["gender"].bytes_list.value) != "[]":
        if str(example.features.feature["toxicity"].float_list.value) == "[1.0]":
            toxic_gender_examples += 1
        else:
            nontoxic_gender_examples += 1

# TODO 4a
print("Toxic Gender Examples: %s" % toxic_gender_examples)
print("Nontoxic Gender Examples: %s" % nontoxic_gender_examples)
```

#### **What percentage of `gender` examples are labeled toxic? Compare this percentage to the percentage of total examples that are labeled toxic from #3 above. What, if any, fairness concerns can you identify based on this comparison?**

#### Solution

One possible solution: there are 7,189 gender-related examples that are labeled toxic, which represent 14.7% of all gender-related examples.

The percentage of gender-related examples that are toxic (14.7%) is nearly double the percentage of toxic examples overall (7.98%). In other words, in our dataset, gender-related comments are almost two times more likely than comments overall to be labeled as toxic.

This skew suggests that a model trained on this dataset might learn a correlation between gender-related content and toxicity.
This raises fairness considerations, as the model might be more likely to classify nontoxic comments as toxic if they contain gender terminology, which could lead to [disparate impact](https://developers.google.com/machine-learning/glossary?utm_source=Colab&utm_medium=fi-colab&utm_campaign=fi-practicum&utm_content=glossary&utm_term=disparate-impact#disparate-impact) for gender subgroups. Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
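Beyond visualizing statistics, TFDV can also infer a schema from the training statistics and check another data split against it. The optional sketch below builds on the `stats` object from TODO 2 and the `validate_tf_file` path downloaded earlier; this follow-up is a suggested extra exploration, not part of the lab's TODOs.

```
# Infer a schema from the training statistics computed in TODO 2...
schema = tfdv.infer_schema(statistics=stats)
tfdv.display_schema(schema)

# ...then compute statistics for the validation split and look for anomalies
# (missing features, unexpected values) relative to that schema.
validate_stats = tfdv.generate_statistics_from_tfrecord(data_location=validate_tf_file)
anomalies = tfdv.validate_statistics(statistics=validate_stats, schema=schema)
tfdv.display_anomalies(anomalies)
```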
github_jupyter
!pip3 install fairness-indicators --user # %tensorflow_version 2.x import sys, os import warnings warnings.filterwarnings('ignore') #os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # Ignore deprecation warnings import tempfile import apache_beam as beam import numpy as np import pandas as pd from datetime import datetime import tensorflow_hub as hub import tensorflow as tf import tensorflow_model_analysis as tfma import tensorflow_data_validation as tfdv from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators from tensorflow_model_analysis.addons.fairness.view import widget_view from fairness_indicators.examples import util import warnings warnings.filterwarnings("ignore") from witwidget.notebook.visualization import WitConfigBuilder from witwidget.notebook.visualization import WitWidget print(tf.version.VERSION) print(tf) # This statement shows us what version of Python we are currently running. download_original_data = False #@param {type:"boolean"} # TODO 1 if download_original_data: train_tf_file = tf.keras.utils.get_file('train_tf.tfrecord', 'https://storage.googleapis.com/civil_comments_dataset/train_tf.tfrecord') validate_tf_file = tf.keras.utils.get_file('validate_tf.tfrecord', 'https://storage.googleapis.com/civil_comments_dataset/validate_tf.tfrecord') # The identity terms list will be grouped together by their categories # (see 'IDENTITY_COLUMNS') on threshould 0.5. Only the identity term column, # text column and label column will be kept after processing. train_tf_file = util.convert_comments_data(train_tf_file) validate_tf_file = util.convert_comments_data(validate_tf_file) # TODO 1a else: train_tf_file = tf.keras.utils.get_file('train_tf_processed.tfrecord', 'https://storage.googleapis.com/civil_comments_dataset/train_tf_processed.tfrecord') validate_tf_file = tf.keras.utils.get_file('validate_tf_processed.tfrecord', 'https://storage.googleapis.com/civil_comments_dataset/validate_tf_processed.tfrecord') # TODO 2 # The computation of statistics using TFDV. The returned value is a DatasetFeatureStatisticsList protocol buffer. stats = tfdv.generate_statistics_from_tfrecord(data_location=train_tf_file) # TODO 2Aa # A visualization of the statistics using Facets Overview. tfdv.visualize_statistics(stats) #@title Calculate label distribution for gender-related examples raw_dataset = tf.data.TFRecordDataset(train_tf_file) toxic_gender_examples = 0 nontoxic_gender_examples = 0 # TODO 4 # There are 1,082,924 examples in the dataset for raw_record in raw_dataset.take(1082924): example = tf.train.Example() example.ParseFromString(raw_record.numpy()) if str(example.features.feature["gender"].bytes_list.value) != "[]": if str(example.features.feature["toxicity"].float_list.value) == "[1.0]": toxic_gender_examples += 1 else: nontoxic_gender_examples += 1 # TODO 4a print("Toxic Gender Examples: %s" % toxic_gender_examples) print("Nontoxic Gender Examples: %s" % nontoxic_gender_examples)
0.281307
0.991195
``` %%writefile spark_analysis.py import matplotlib matplotlib.use('agg') import argparse parser = argparse.ArgumentParser() parser.add_argument("--bucket", help="bucket for input and output") args = parser.parse_args() BUCKET = args.bucket ``` ## Migrating from Spark to BigQuery via Dataproc -- Part 1 * [Part 1](01_spark.ipynb): The original Spark code, now running on Dataproc (lift-and-shift). * [Part 2](02_gcs.ipynb): Replace HDFS by Google Cloud Storage. This enables job-specific-clusters. (cloud-native) * [Part 3](03_automate.ipynb): Automate everything, so that we can run in a job-specific cluster. (cloud-optimized) * [Part 4](04_bigquery.ipynb): Load CSV into BigQuery, use BigQuery. (modernize) * [Part 5](05_functions.ipynb): Using Cloud Functions, launch analysis every time there is a new file in the bucket. (serverless) ### Reading in data The data are CSV files. In Spark, these can be read using textFile and splitting rows on commas. ``` %%writefile -a spark_analysis.py from pyspark.sql import SparkSession, SQLContext, Row gcs_bucket='qwiklabs-gcp-00-bd3453440227' spark = SparkSession.builder.appName("kdd").getOrCreate() sc = spark.sparkContext data_file = "gs://"+gcs_bucket+"//kddcup.data_10_percent.gz" raw_rdd = sc.textFile(data_file).cache() raw_rdd.take(5) %%writefile -a spark_analysis.py csv_rdd = raw_rdd.map(lambda row: row.split(",")) parsed_rdd = csv_rdd.map(lambda r: Row( duration=int(r[0]), protocol_type=r[1], service=r[2], flag=r[3], src_bytes=int(r[4]), dst_bytes=int(r[5]), wrong_fragment=int(r[7]), urgent=int(r[8]), hot=int(r[9]), num_failed_logins=int(r[10]), num_compromised=int(r[12]), su_attempted=r[14], num_root=int(r[15]), num_file_creations=int(r[16]), label=r[-1] ) ) parsed_rdd.take(5) ``` ### Spark analysis One way to analyze data in Spark is to call methods on a dataframe. ``` %%writefile -a spark_analysis.py sqlContext = SQLContext(sc) df = sqlContext.createDataFrame(parsed_rdd) connections_by_protocol = df.groupBy('protocol_type').count().orderBy('count', ascending=False) connections_by_protocol.show() ``` Another way is to use Spark SQL ``` %%writefile -a spark_analysis.py df.registerTempTable("connections") attack_stats = sqlContext.sql(""" SELECT protocol_type, CASE label WHEN 'normal.' 
THEN 'no attack' ELSE 'attack' END AS state, COUNT(*) as total_freq, ROUND(AVG(src_bytes), 2) as mean_src_bytes, ROUND(AVG(dst_bytes), 2) as mean_dst_bytes, ROUND(AVG(duration), 2) as mean_duration, SUM(num_failed_logins) as total_failed_logins, SUM(num_compromised) as total_compromised, SUM(num_file_creations) as total_file_creations, SUM(su_attempted) as total_root_attempts, SUM(num_root) as total_root_acceses FROM connections GROUP BY protocol_type, state ORDER BY 3 DESC """) attack_stats.show() %%writefile -a spark_analysis.py # %matplotlib inline ax = attack_stats.toPandas().plot.bar(x='protocol_type', subplots=True, figsize=(10,25)) %%writefile -a spark_analysis.py ax[0].get_figure().savefig('report.png'); %%writefile -a spark_analysis.py import google.cloud.storage as gcs bucket = gcs.Client().get_bucket(BUCKET) for blob in bucket.list_blobs(prefix='sparktodp/'): blob.delete() bucket.blob('sparktodp/report.png').upload_from_filename('report.png') %%writefile -a spark_analysis.py connections_by_protocol.write.format("csv").mode("overwrite").save( "gs://{}/sparktodp/connections_by_protocol".format(BUCKET)) BUCKET_list = !gcloud info --format='value(config.project)' BUCKET=BUCKET_list[0] print('Writing to {}'.format(BUCKET)) !/opt/conda/miniconda3/bin/python spark_analysis.py --bucket=$BUCKET !gsutil ls gs://$BUCKET/sparktodp/** !gsutil cp spark_analysis.py gs://$BUCKET/sparktodp/spark_analysis.py ``` Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
``` import numpy as np import pandas as pd from sklearn.model_selection import train_test_split df = pd.read_csv("usedcar.csv") df.head() df.isna().sum() df.dtypes df['price'] = df['price'].str.replace(',', '') df['mileage'] = df['mileage'].str.replace(',', '') df['model'] = df['model'].str.replace('c', '') df['price'] = df['price'].astype(int) df['mileage'] = df['mileage'].astype(int) df['model'] = df['model'].astype(int) df.dtypes x = df[['model','zip','mileage']] y = df['price'] x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0) from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier(n_neighbors = 1) knn.fit(x_train, y_train) knn.score(x_test, y_test) car_prediction = knn.predict([[21411, 11010, 40000]]) car_prediction[0] from sklearn.linear_model import LinearRegression lin_reg=LinearRegression() lin_reg.fit(x,y) LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False) lin_reg.score(x,y) lin_reg.predict([[21411, 11010, 40000]])[0] from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.preprocessing import LabelBinarizer from nltk.corpus import stopwords from nltk.stem.porter import PorterStemmer from wordcloud import WordCloud,STOPWORDS from nltk.stem import WordNetLemmatizer from nltk.tokenize import word_tokenize,sent_tokenize from nltk.tokenize.toktok import ToktokTokenizer from nltk.stem import LancasterStemmer,WordNetLemmatizer from nltk import pos_tag from nltk.corpus import wordnet import string df.drop(columns=['model', 'zip','mileage']) stop = set(stopwords.words('english')) punctuation = list(string.punctuation) stop.update(punctuation) def get_simple_pos(tag): if tag.startswith('J'): return wordnet.ADJ elif tag.startswith('V'): return wordnet.VERB elif tag.startswith('N'): return wordnet.NOUN elif tag.startswith('R'): return wordnet.ADV else: return wordnet.NOUN lemmatizer = WordNetLemmatizer() def lemmatize_words(text): final_text = [] for i in text.split(): if i.strip().lower() not in stop: pos = pos_tag([i.strip()]) word = lemmatizer.lemmatize(i.strip(),get_simple_pos(pos[0][1])) final_text.append(word.lower()) return " ".join(final_text) df.text = df.text.apply(lemmatize_words) df.head() df.text from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression,SGDClassifier x_train,x_test,y_train,y_test = train_test_split(df.text,df.price,test_size = 0.2 , random_state = 0) cv=CountVectorizer(min_df=0,max_df=1,binary=False,ngram_range=(1,3)) #transformed train reviews cv_train_reviews=cv.fit_transform(x_train) #transformed test reviews cv_test_reviews=cv.transform(x_test) tv=TfidfVectorizer(min_df=0,max_df=1,use_idf=True,ngram_range=(1,3)) #transformed train reviews tv_train_reviews=tv.fit_transform(x_train) #transformed test reviews tv_test_reviews=tv.transform(x_test) lr=LogisticRegression(penalty='l2',max_iter=500,C=1,random_state=0) #Fitting the model for Bag of words lr_bow=lr.fit(cv_train_reviews,y_train) print(lr_bow) #Fitting the model for tfidf features lr_tfidf=lr.fit(tv_train_reviews,y_train) print(lr_tfidf) good = x_train[y_train[y_train > 30000].index] bad = x_train[y_train[y_train < 5000].index] x_train.shape,good.shape,bad.shape import matplotlib.pyplot as plt plt.figure(figsize = (20,20)) # Text Reviews with Poor Ratings wc = WordCloud(min_font_size = 3, 
max_words = 3000 , width = 1600 , height = 800).generate(" ".join(bad)) plt.imshow(wc,interpolation = 'bilinear') plt.figure(figsize = (20,20)) # # Text Reviews with Good Ratings wc = WordCloud(min_font_size = 3, max_words = 3000 , width = 1600 , height = 800).generate(" ".join(good)) plt.imshow(wc,interpolation = 'bilinear') ```
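One design note on the modelling above: `price` is a continuous target, so a nearest-neighbour regressor is usually a better match than `KNeighborsClassifier`, which treats every distinct price as its own class and whose `.score()` reports accuracy rather than R². A hedged sketch of that alternative, assuming the earlier numeric split on `['model', 'zip', 'mileage']` is still in scope (the later cells rebind `x_train` to the text column, so re-run that split first):

```
from sklearn.neighbors import KNeighborsRegressor

# Average the prices of the 5 nearest listings instead of copying one neighbour's "class".
knn_reg = KNeighborsRegressor(n_neighbors=5)
knn_reg.fit(x_train, y_train)

print(knn_reg.score(x_test, y_test))                 # R^2 on the held-out split
print(knn_reg.predict([[21411, 11010, 40000]])[0])   # price estimate for the same example car
```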
Lane detection === In this Notebook we will learn how to detect the lanes on a road. This is an adaptation of the code present in 01_lane_detection.py, processing a single image. For a complete description, please read the content of the book. Import == We start importing what we need and initializing a few variables. ``` %matplotlib inline import cv2 import os import numpy as np from copy import deepcopy import sys sys.path.append('../') from utils import set_save_files, save_dir, ensure_dir, get_save_files import matplotlib from matplotlib.pyplot import imshow, figure, plot, clf perspective_correction = None perspective_correction_inv = None perspective_trapezoid = None warp_size = None orig_size = None left_fit_avg = None right_fit_avg = None MIN_DETECTIONS = 8 MAX_DETECTIONS = 10 ``` Perspective == The first step is computing the values of the variables used for the perspective correction. We can use these variables to "Warp" the area of the road that we are analyzing, to obtain a "bird's-eye view" ``` # pt1, pt2, ptr3, and pt4 and four points defining a trapezoid used for the perspective correction def compute_perspective(width, height, pt1, pt2, pt3, pt4): global perspective_trapezoid, perspective_dest perspective_trapezoid = [(pt1[0], pt1[1]), (pt2[0], pt2[1]), (pt3[0], pt3[1]), (pt4[0], pt4[1])] src = np.float32([pt1, pt2, pt3, pt4]) # widest side on the trapezoid x1 = pt1[0] x2 = pt4[0] # height of the trapezoid y1 = pt1[1] y2 = pt2[1] h = y1 - y2 # The destination is a rectangle with the height of the trapezoid and the width of the widest side dst = np.float32([[x1, h], [x1, 0], [x2, 0], [x2, h]]) perspective_dest = [(x1, y1), (x1, y2), (x2, y2), (x2, y1)] global perspective_correction, perspective_correction_inv global warp_size, orig_size perspective_correction = cv2.getPerspectiveTransform(src, dst) perspective_correction_inv = cv2.getPerspectiveTransform(dst, src) warp_size = (width, h) orig_size = (width, height) def warp(img, filename): img_persp = img.copy() cv2.line(img_persp, perspective_dest[0], perspective_dest[1], (255, 255, 255), 3) cv2.line(img_persp, perspective_dest[1], perspective_dest[2], (255, 255, 255), 3) cv2.line(img_persp, perspective_dest[2], perspective_dest[3], (255, 255, 255), 3) cv2.line(img_persp, perspective_dest[3], perspective_dest[0], (255, 255, 255), 3) cv2.line(img_persp, perspective_trapezoid[0], perspective_trapezoid[1], (0, 192, 0), 3) cv2.line(img_persp, perspective_trapezoid[1], perspective_trapezoid[2], (0, 192, 0), 3) cv2.line(img_persp, perspective_trapezoid[2], perspective_trapezoid[3], (0, 192, 0), 3) cv2.line(img_persp, perspective_trapezoid[3], perspective_trapezoid[0], (0, 192, 0), 3) return img_persp, cv2.warpPerspective(img, perspective_correction, warp_size, flags=cv2.INTER_LANCZOS4) compute_perspective(1024, 600, [160, 425], [484, 310], [546, 310], [877, 425]) filename = "sd2.jpg" img_bgr = cv2.imread("test_images_sd/" + filename) imshow(cv2.cvtColor(img_bgr, cv2.COLOR_RGB2BGR)) figure() img_persp, img_warped = warp(img_bgr, filename) imshow(cv2.cvtColor(img_persp, cv2.COLOR_RGB2BGR)) figure() imshow(cv2.cvtColor(img_warped, cv2.COLOR_RGB2BGR)) ``` Edge detection == We will now apply Scharr, to perform an edge detection on the image of the road. 
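Before running the next cell, it may help to see what the Scharr operator actually is: a 3×3 derivative kernel with weights ±3 and ±10. The sketch below builds roughly the same x-derivative by hand with `cv2.filter2D`; it is an illustration of the kernel, not a drop-in replacement (border handling and scaling may differ slightly).

```
import cv2
import numpy as np

# Scharr kernel for the x-derivative (strong response on vertical edges such as lane lines).
scharr_x = np.array([[ -3, 0,  3],
                     [-10, 0, 10],
                     [ -3, 0,  3]], dtype=np.float64)

def scharr_by_hand(channel):
    # Roughly what cv2.Scharr(channel, cv2.CV_64F, 1, 0) computes in edge_detection below.
    return cv2.filter2D(channel.astype(np.float64), cv2.CV_64F, scharr_x)
```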
``` def edge_detection(channel, filename): edge_x = cv2.Scharr(channel, cv2.CV_64F, 1, 0) # Edge detection using the Scharr operator edge_x = np.absolute(edge_x) return save_dir(np.uint8(255 * edge_x / np.max(edge_x)), "edge_", filename) img_hls = cv2.cvtColor(img_warped, cv2.COLOR_BGR2HLS).astype(np.float) img_edge = edge_detection(img_warped[:, :, 1], filename) imshow(cv2.cvtColor(img_edge, cv2.COLOR_RGB2BGR)) ``` Thresholding == The next step is applying a threshold, to reduce the noise; to optimize the result, the threshold changes gradually. ``` def threshold(channel_threshold, channel_edge, filename): # Gradient threshold binary = np.zeros_like(channel_edge) height = binary.shape[0] threshold_up = 15 threshold_down = 60 threshold_delta = threshold_down - threshold_up for y in range(height): binary_line = binary[y, :] edge_line = channel_edge[y, :] threshold_line = threshold_up + threshold_delta * y / height binary_line[edge_line >= threshold_line] = 255 binary[(channel_threshold >= 140) & (channel_threshold <= 255)] = 255 binary_threshold = np.zeros_like(channel_threshold) binary_threshold[(channel_threshold >= 140) & (channel_threshold <= 255)] = 255 return binary, binary_threshold (img_binary_combined, img_binary_solo) = threshold(img_hls[:, :, 1], img_edge, filename) imshow(img_binary_combined) ``` Histogram == Now we will copute the histogram, to detect the position of the lines. ``` def histogram(img): partial_img = img[img.shape[0] * 2 // 3:, :] # Select the bottom part (one third of the image) hist = np.sum(partial_img, axis=0) plot(hist) return hist hist = histogram(img_binary_combined) ``` Lanes == We will now detect the lanes using the histogram and the slide window technique. ``` class HistLanes: def __init__(self, x_left, x_right, left_confidence, right_confidence): self.x_left = x_left self.x_right = x_right self.left_confidence = left_confidence self.right_confidence = right_confidence def lanes_full_histogram(histogram): size = len(histogram) max_index_left = np.argmax(histogram[0:size // 2]) max_index_right = np.argmax(histogram[size // 2:]) + size // 2 return HistLanes(max_index_left, max_index_right, histogram[max_index_left], histogram[max_index_right]) def moving_average(prev_average, new_value, beta): return beta * prev_average + (1 - beta) * new_value if prev_average is not None else new_value # Single lane line class Line: lane_indexes = None # pixel positions x = None y = None # Fit a second order polynomial to each fit = None # Plotting parameters fitx = None # Histogram hist_x = None # Data collected during the sliding windows phase class SlideWindow: left = Line() right = Line() hist = None left_avg = None right_avg = None ploty = None def __init__(self, hist, left_lane_indexes, right_lane_indexes, non_zero_x, non_zero_y): self.left.lane_indexes = np.concatenate(left_lane_indexes) self.right.lane_indexes = np.concatenate(right_lane_indexes) self.left.hist_x = hist.x_left self.right.hist_x = hist.x_right # Extract left and right positions self.left.x = non_zero_x[self.left.lane_indexes] self.left.y = non_zero_y[self.left.lane_indexes] self.right.x = non_zero_x[self.right.lane_indexes] self.right.y = non_zero_y[self.right.lane_indexes] def plot_lines(self, img, color_left=(0, 255, 255), color_right=(0, 255, 255)): left = [] right = [] for i in range(0, len(self.ploty)): left.append((self.left.fitx[i], self.ploty[i])) right.append((self.right.fitx[i], self.ploty[i])) cv2.polylines(img, np.int32([left]), False, color_left) cv2.polylines(img, np.int32([right]), 
False, color_right) return img def fit_slide_window(binary_warped, hist, left_lane_indexes, right_lane_indexes, non_zero_x, non_zero_y): sw = SlideWindow(hist, left_lane_indexes, right_lane_indexes, non_zero_x, non_zero_y) # y coordinates sw.ploty = np.array([float(x) for x in range(binary_warped.shape[0])]) if len(sw.left.y) == 0: return False, sw # Fit a second order polynomial to approximate the points left_fit = np.polynomial.polynomial.polyfit(sw.left.y, sw.left.x, 2) right_fit = np.polynomial.polynomial.polyfit(sw.right.y, sw.right.x, 2) global left_fit_avg, right_fit_avg left_fit_avg = moving_average(left_fit_avg, left_fit, 0.92) right_fit_avg = moving_average(right_fit_avg, right_fit, 0.92) # Generate list of x and y values, using the terms of the polynomial # x = Ay^2 + By + C; sw.left.fitx = left_fit_avg[2] * sw.ploty ** 2 + left_fit_avg[1] * sw.ploty + left_fit_avg[0] sw.right.fitx = right_fit_avg[2] * sw.ploty ** 2 + right_fit_avg[1] * sw.ploty + right_fit_avg[0] return True, sw def slide_window(img, binary_warped, hist, num_windows): img_height = binary_warped.shape[0] window_height = np.int(img_height / num_windows) # Indices (e.g. coordinates) of the pixels that are not zero non_zero = binary_warped.nonzero() non_zero_y = np.array(non_zero[0]) non_zero_x = np.array(non_zero[1]) # Current positions, to be updated while sliding the window; we start with the ones identified with the histogram left_x = hist.x_left right_x = hist.x_right margin = 80 # Set minimum number of pixels found to recenter window min_pixels = 25 left_lane_indexes = [] right_lane_indexes = [] out_img = img.copy() for idx_window in range(num_windows): # X range where we expect the left lane to land win_x_left_min = left_x - margin win_x_left_max = left_x + margin # X range where we expect the right lane to land win_x_right_min = right_x - margin win_x_right_max = right_x + margin # Y range that we are analyzing win_y_top = img_height - idx_window * window_height win_y_bottom = win_y_top - window_height # Show the windows cv2.rectangle(out_img, (win_x_left_min, win_y_bottom), (win_x_left_max, win_y_top), (255, 255, 255), 2) cv2.rectangle(out_img, (win_x_right_min, win_y_bottom), (win_x_right_max, win_y_top), (255, 255, 255), 2) # Non zero pixels in x and y non_zero_left = ((non_zero_y >= win_y_bottom) & (non_zero_y < win_y_top) & (non_zero_x >= win_x_left_min) & ( non_zero_x < win_x_left_max)).nonzero()[0] non_zero_right = ((non_zero_y >= win_y_bottom) & (non_zero_y < win_y_top) & (non_zero_x >= win_x_right_min) & ( non_zero_x < win_x_right_max)).nonzero()[0] left_lane_indexes.append(non_zero_left) right_lane_indexes.append(non_zero_right) # If you found > min_pixels pixels, recenter next window on the mean position if len(non_zero_left) > min_pixels: left_x = np.int(np.mean(non_zero_x[non_zero_left])) if len(non_zero_right) > min_pixels: right_x = np.int(np.mean(non_zero_x[non_zero_right])) valid, sw = fit_slide_window(binary_warped, hist, left_lane_indexes, right_lane_indexes, non_zero_x, non_zero_y) if valid : out_img[non_zero_y[sw.left.lane_indexes], non_zero_x[sw.left.lane_indexes]] = [0, 255, 192] out_img[non_zero_y[sw.right.lane_indexes], non_zero_x[sw.right.lane_indexes]] = [0, 255, 192] img_plot = sw.plot_lines(out_img) imshow(img_plot) return valid, sw lanes = lanes_full_histogram(hist) ret, sw = slide_window(img_warped, img_binary_combined, lanes, 15) ``` Lanes! == Finally, we are ready to show the lanes on the original image. 
``` def show_lanes(sw, img_warped, img_orig): img = img_warped.copy() if sw.left: fitx_points_warped = np.float32([np.transpose(np.vstack([sw.left.fitx, sw.ploty]))]) fitx_points = cv2.perspectiveTransform(fitx_points_warped, perspective_correction_inv) left_line_warped = np.int_(fitx_points_warped[0]) left_line = np.int_(fitx_points[0]) n = len(left_line) for i in range(n - 1): cv2.line(img_orig, (left_line[i][0], left_line[i][1]), (left_line[i + 1][0], left_line[i + 1][1]), (0, 255, 0), 5) cv2.line(img, (left_line_warped[i][0], left_line_warped[i][1]), (left_line_warped[i + 1][0], left_line_warped[i + 1][1]), (0, 255, 0), 5) if sw.right: fitx_points_warped = np.float32([np.transpose(np.vstack([sw.right.fitx, sw.ploty]))]) fitx_points = cv2.perspectiveTransform(fitx_points_warped, perspective_correction_inv) right_line_warped = np.int_(fitx_points_warped[0]) right_line = np.int_(fitx_points[0]) for i in range(len(right_line) - 1): cv2.line(img_orig, (right_line[i][0], right_line[i][1]), (right_line[i + 1][0], right_line[i + 1][1]), (0, 0, 255), 5) cv2.line(img, (right_line_warped[i][0], right_line_warped[i][1]), (right_line_warped[i + 1][0], right_line_warped[i + 1][1]), (0, 0, 255), 5) return img, img_orig left_lanes = deepcopy(sw.left) right_lanes = deepcopy(sw.right) img_lane, img_lane_orig = show_lanes(sw, img_warped, img_bgr) imshow(cv2.cvtColor(img_lane, cv2.COLOR_RGB2BGR)) figure() imshow(cv2.cvtColor(img_lane_orig, cv2.COLOR_RGB2BGR)) ```
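A note on the polynomial fit inside `fit_slide_window` above: `np.polynomial.polynomial.polyfit` returns coefficients ordered from the lowest degree up, i.e. `[C, B, A]` for `x = A*y**2 + B*y + C`, which is why the code reads the quadratic term from index 2. A small self-contained check (the sample points are made up for illustration):

```
import numpy as np

# Made-up points lying exactly on x = 2*y**2 + 3*y + 1.
y = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x = 2 * y**2 + 3 * y + 1

coeffs = np.polynomial.polynomial.polyfit(y, x, 2)
print(coeffs)  # approximately [1., 3., 2.]  ->  [C, B, A]

# Evaluating the fit the same way the notebook does:
x_fit = coeffs[2] * y**2 + coeffs[1] * y + coeffs[0]

# Equivalent, using numpy's own evaluator (which expects the same low-to-high ordering):
x_fit_alt = np.polynomial.polynomial.polyval(y, coeffs)
```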
``` from solver.jps import JPS from solver.jpsplus import JPSPlus from solver.astar import AStar from solver.base import findPathBase from solver.pruning.base import NoPruning from solver.pruning.bbox import BBoxPruning from utils.distance import diagonalDistance from container.open import OpenList from container.closed import ClosedList from evaluation.test import simpleTest from graph.node import Node from graph.grid import GridMap from utils.visualisation import drawResult from evaluation.movingai import MovingAIDataset import pandas as pd easy = MovingAIDataset( 'data/maps/lak307d.map', 'data/scen/lak307d.map.scen' ) hard = MovingAIDataset( 'data/maps/ost002d.map', 'data/scen/ost002d.map.scen' ) result_easy = { algo.__name__: { prune.__name__: pd.DataFrame(easy.test(algo, diagonalDistance, findPathBase, prune)) for prune in (NoPruning, BBoxPruning) } for algo in (AStar, JPS, JPSPlus) } result_hard = { algo.__name__: { prune.__name__: pd.DataFrame(hard.test(algo, diagonalDistance, findPathBase, prune)) for prune in (NoPruning, BBoxPruning) } for algo in (AStar, JPS, JPSPlus) } from matplotlib.colors import Normalize import matplotlib.cm as cm import matplotlib.pyplot as plt cmap = cm.autumn norm = Normalize(vmin=0, vmax=3) fig, axes = plt.subplots(1, 2, figsize=(8,6)) for ax, data, name in zip(axes, (result_easy, result_hard), ('easy', 'hard')): for i, algo in enumerate(data): for j, pr in enumerate(data[algo]): stats = data[algo][pr].describe()['time'] ax.errorbar( [i+j*0.25], [stats['mean']], [stats['std']], fmt='ok', lw=20, ecolor=cmap(norm(i)), ) ax.errorbar( [i+j*0.25], [stats['mean']], [[stats['mean'] - stats['min']], [stats['max'] - stats['mean']]], fmt='.k', ecolor=None, lw=1, ) ax.set_title(name) ax.set_ylim(0) ax.set_ylabel('sec') ax.set_xticks([i for i in range(len(data))]); ax.set_xticklabels([algo for algo in data]); fig, axes = plt.subplots(2, 2, figsize=(12,14)) for ax, data, name in zip(axes, (result_easy, result_hard), ('easy', 'hard')): for algo in data: for pr in data[algo]: means = data[algo][pr].groupby('complex').mean() for i, datatype in enumerate(('time', 'created')): ax[i].plot( means[datatype], label=f'{algo}+{pr}' ) for i, datatype in enumerate(('time', 'created')): ax[i].set_ylabel(f'mean {datatype}') ax[i].set_xlabel('complexity') ax[i].grid(True) ax[i].legend(); ax[i].set_title(name) height = 15 width = 30 mapstr = ''' . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . # # . . . . . . . . . . . . . . . . # # . . . . . . . . . . # # . . . . . . . . # # . . . . . . # # . . . . . . . . . . # # . . . . . . . . # # . . . . . . # # # # # . . . . . . . # # . . . . . . . . # # . . . . . . # # # # # . . . . . . . # # . . . . . . . . # # . . . . . . . . . . . . . . . . . . # # . . . . . . . . # # . . . . . . . . . . . . . . . . . . # # . . . . . . . . # # . . . . . . . . . . . . . . . . . . # # . . . . . . . . # # . . . . . . . . . . . . . . . . . . # # . . . . . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . . . . . . 
''' iStart = 7 jStart = 1 iGoal = 13 jGoal = 28 startNode = Node(iStart, jStart) goalNode = Node(iGoal, jGoal) grid = GridMap().readFromString(mapstr, width, height) ``` ### BBoxPruning ``` prune = BBoxPruning() solver = AStar(diagonalDistance, prune) solver.doPreprocess(grid) simpleTest(solver, findPathBase, grid, startNode, goalNode, OpenList, ClosedList, visualise=True) solver = JPS(diagonalDistance, prune) solver.doPreprocess(grid) simpleTest(solver, findPathBase, grid, startNode, goalNode, OpenList, ClosedList) solver = JPSPlus(diagonalDistance, prune) solver.doPreprocess(grid) simpleTest(solver, findPathBase, grid, startNode, goalNode, OpenList, ClosedList) ``` ### NoPruning ``` prune = NoPruning() solver = AStar(diagonalDistance, prune) solver.doPreprocess(grid) simpleTest(solver, findPathBase, grid, startNode, goalNode, OpenList, ClosedList) solver = JPS(diagonalDistance, prune) solver.doPreprocess(grid) simpleTest(solver, findPathBase, grid, startNode, goalNode, OpenList, ClosedList) solver = JPSPlus(diagonalDistance, prune) solver.doPreprocess(grid) simpleTest(solver, findPathBase, grid, startNode, goalNode, OpenList, ClosedList) height = 15 width = 30 mapstr = ''' . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . # # . . . . . . . . . . . . . . . . # . . . . . . . . . . . # # . . . . . . . . # # . . . . . . # . . . . . . . . . . . # # . . . . . . . . # # . . . . . . # . # # # . . . . . . . # # . . . . . . . . # # . . . . . . # . # . # . . . . . . . # # . . . . . . . . # # . . . . . . # . . . # . . . . . . . # # . . . . . . . . # # . . . . . . # # # # . . . . . . . . # # . . . . . . . . # # . . . . . . . . . . . . . . . . . . # # . . . . . . . . # # . . . . . . . . . . . . . . . . . . # # . . . . . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . . . . . . ''' iStart = 1 jStart = 1 iGoal = 6 jGoal = 24 startNode = Node(iStart, jStart) goalNode = Node(iGoal, jGoal) grid = GridMap().readFromString(mapstr, width, height) ``` ### BBox Pruning ``` prune = BBoxPruning() solver = AStar(diagonalDistance, prune) solver.doPreprocess(grid) simpleTest(solver, findPathBase, grid, startNode, goalNode, OpenList, ClosedList) solver = JPS(diagonalDistance, prune) solver.doPreprocess(grid) simpleTest(solver, findPathBase, grid, startNode, goalNode, OpenList, ClosedList) solver = JPSPlus(diagonalDistance, prune) solver.doPreprocess(grid) simpleTest(solver, findPathBase, grid, startNode, goalNode, OpenList, ClosedList) height = 15 width = 30 mapstr = ''' . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . # # . . . . . . . . . . . . . . . . # . . . . . . . . . . . # # . . . . . . . . # # . . . . . . # . . . . . . . . . . . # # . . . . . . . . # # . . . . . . # . # # # . . . . . . . # # . . . . . . . . # # . . . . . . # . # . # . . . . . . . # # . . . . . . . . # # . . . . . . # . . . # . . . . . . . # # . . . . . . . . # # . . # # # # # # # # . . . . . . . . # # . . . . . . . . # # . . # . . . . . . . . . . . . . . . # # . . . . . . . . # # . . # . . . . . # # # # # . . . . . # # . . . . . . . . # # . . # . . . . . # . . . . . . . . . . . . . . . . . . . # # . # # . . . . . 
# . . . . . . . . . . . . . . . . . . . # # . . . # . # . # . . . . . . . . . . . . . . . . . . . . # # . . . . . . . . . . . . . . . ''' iStart = 1 jStart = 1 iGoal = 6 jGoal = 24 startNode = Node(iStart, jStart) goalNode = Node(iGoal, jGoal) grid = GridMap().readFromString(mapstr, width, height) prune = BBoxPruning() solver = AStar(diagonalDistance, prune) solver.doPreprocess(grid) simpleTest(solver, findPathBase, grid, startNode, goalNode, OpenList, ClosedList) solver = JPS(diagonalDistance, prune) solver.doPreprocess(grid) simpleTest(solver, findPathBase, grid, startNode, goalNode, OpenList, ClosedList) solver = JPSPlus(diagonalDistance, prune) solver.doPreprocess(grid) simpleTest(solver, findPathBase, grid, startNode, goalNode, OpenList, ClosedList) ```
# Regression Week 3: Assessing Fit (polynomial regression) In this notebook you will compare different regression models in order to assess which model fits best. We will be using polynomial regression as a means to examine this topic. In particular you will: * Write a function to take an SArray and a degree and return an SFrame where each column is the SArray to a polynomial value up to the total degree e.g. degree = 3 then column 1 is the SArray column 2 is the SArray squared and column 3 is the SArray cubed * Use matplotlib to visualize polynomial regressions * Use matplotlib to visualize the same polynomial degree on different subsets of the data * Use a validation set to select a polynomial degree * Assess the final fit using test data We will continue to use the House data from previous notebooks. # Fire up graphlab create ``` import graphlab ``` Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree. The easiest way to apply a power to an SArray is to use the .apply() and lambda x: functions. For example to take the example array and compute the third power we can do as follows: (note running this cell the first time may take longer than expected since it loads graphlab) ``` tmp = graphlab.SArray([1., 2., 3.]) tmp_cubed = tmp.apply(lambda x: x**3) print tmp print tmp_cubed ``` We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself). ``` ex_sframe = graphlab.SFrame() ex_sframe['power_1'] = tmp print ex_sframe ``` # Polynomial_sframe function Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree: ``` def polynomial_sframe(feature, degree): # assume that degree >= 1 # initialize the SFrame: poly_sframe = graphlab.SFrame() # and set poly_sframe['power_1'] equal to the passed feature poly_sframe['power_1'] = feature # first check if degree > 1 if degree > 1: # then loop over the remaining degrees: # range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree for power in range(2, degree+1): # first we'll give the column a name: name = 'power_' + str(power) # then assign poly_sframe[name] to the appropriate power of feature poly_sframe[name] = graphlab.SArray(feature).apply(lambda x: x**power) return poly_sframe ``` To test your function consider the smaller tmp variable and what you would expect the outcome of the following call: ``` print polynomial_sframe(tmp, 3) ``` # Visualizing polynomial regression Let's use matplotlib to visualize what a polynomial regression looks like on some real data. ``` sales = graphlab.SFrame('../week2-multiple-regression/kc_house_data.gl/') ``` As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices. ``` sales = sales.sort(['sqft_living', 'price']) ``` Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like. 
``` poly1_data = polynomial_sframe(sales['sqft_living'], 1) poly1_data['price'] = sales['price'] # add price to the data since it's the target ``` NOTE: for all the models in this notebook use validation_set = None to ensure that all results are consistent across users. ``` model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = ['power_1'], validation_set = None) #let's take a look at the weights before we plot model1.get("coefficients") import matplotlib.pyplot as plt %matplotlib inline plt.plot(poly1_data['power_1'],poly1_data['price'],'.', poly1_data['power_1'], model1.predict(poly1_data),'-') ``` Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'. We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial? ``` poly2_data = polynomial_sframe(sales['sqft_living'], 2) my_features = poly2_data.column_names() # get the name of the features poly2_data['price'] = sales['price'] # add price to the data since it's the target model2 = graphlab.linear_regression.create(poly2_data, target = 'price', features = my_features, validation_set = None) model2.get("coefficients") plt.plot(poly2_data['power_1'],poly2_data['price'],'.', poly2_data['power_1'], model2.predict(poly2_data),'-') ``` The resulting model looks like half a parabola. Try on your own to see what the cubic looks like: ``` poly4_data = polynomial_sframe(sales['sqft_living'], 4) my_features4 = poly4_data.column_names() # get the name of the features poly4_data['price'] = sales['price'] # add price to the data since it's the target model4 = graphlab.linear_regression.create(poly4_data, target = 'price', features = my_features4, validation_set = None) model4.get("coefficients") plt.plot(poly4_data['power_1'],poly4_data['price'],'.', poly4_data['power_1'], model4.predict(poly4_data),'-') ``` Now try a 15th degree polynomial: ``` poly15_data = polynomial_sframe(sales['sqft_living'], 15) my_features15 = poly15_data.column_names() # get the name of the features poly15_data['price'] = sales['price'] # add price to the data since it's the target model15 = graphlab.linear_regression.create(poly15_data, target = 'price', features = my_features15, validation_set = None) model15.get("coefficients") plt.plot(poly15_data['power_1'],poly15_data['price'],'.', poly15_data['power_1'], model15.predict(poly15_data),'-') ``` What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look. # Changing the data and re-learning We're going to split the sales data into four subsets of roughly equal size. Then you will estimate a 15th degree polynomial model on all four subsets of the data. Print the coefficients (you should use .print_rows(num_rows = 16) to view all of them) and plot the resulting fit (as we did above). The quiz will ask you some questions about these results. To split the sales data into four subsets, we perform the following steps: * First split sales into 2 subsets with `.random_split(0.5, seed=0)`. * Next split the resulting subsets into 2 more subsets each. Use `.random_split(0.5, seed=0)`. 
We set `seed=0` in these steps so that different users get consistent results. You should end up with 4 subsets (`set_1`, `set_2`, `set_3`, `set_4`) of approximately equal size.

```
tmp_set1, tmp_set2 = sales.random_split(0.5, seed=0)
set_1, set_2 = tmp_set1.random_split(0.5, seed=0)
set_3, set_4 = tmp_set2.random_split(0.5, seed=0)
```

Fit a 15th degree polynomial on set_1, set_2, set_3, and set_4 using sqft_living to predict prices. Print the coefficients and make a plot of the resulting model.

```
poly151_data = polynomial_sframe(set_1['sqft_living'], 15)
my_features151 = poly151_data.column_names() # get the name of the features
poly151_data['price'] = set_1['price'] # add price to the data since it's the target
model151 = graphlab.linear_regression.create(poly151_data, target = 'price', features = my_features151, validation_set = None)
model151.get("coefficients").print_rows(num_rows=16, num_columns=4)
plt.plot(poly151_data['power_1'], poly151_data['price'], '.',
         poly151_data['power_1'], model151.predict(poly151_data), '-')

poly152_data = polynomial_sframe(set_2['sqft_living'], 15)
my_features152 = poly152_data.column_names() # get the name of the features
poly152_data['price'] = set_2['price'] # add price to the data since it's the target
model152 = graphlab.linear_regression.create(poly152_data, target = 'price', features = my_features152, validation_set = None, verbose = False)
model152.get("coefficients").print_rows(num_rows=16, num_columns=4)
plt.plot(poly152_data['power_1'], poly152_data['price'], '.',
         poly152_data['power_1'], model152.predict(poly152_data), '-')

poly153_data = polynomial_sframe(set_3['sqft_living'], 15)
my_features153 = poly153_data.column_names() # get the name of the features
poly153_data['price'] = set_3['price'] # add price to the data since it's the target
model153 = graphlab.linear_regression.create(poly153_data, target = 'price', features = my_features153, validation_set = None)
model153.get("coefficients").print_rows(num_rows=16, num_columns=4)
plt.plot(poly153_data['power_1'], poly153_data['price'], '.',
         poly153_data['power_1'], model153.predict(poly153_data), '-')

poly154_data = polynomial_sframe(set_4['sqft_living'], 15)
my_features154 = poly154_data.column_names() # get the name of the features
poly154_data['price'] = set_4['price'] # add price to the data since it's the target
model154 = graphlab.linear_regression.create(poly154_data, target = 'price', features = my_features154, validation_set = None)
model154.get("coefficients").print_rows(num_rows=16, num_columns=4)
plt.plot(poly154_data['power_1'], poly154_data['price'], '.',
         poly154_data['power_1'], model154.predict(poly154_data), '-')
```

Some questions you will be asked on your quiz:

**Quiz Question: Is the sign (positive or negative) for power_15 the same in all four models?**

**Quiz Question: (True/False) the plotted fitted lines look the same in all four plots**

# Selecting a Polynomial Degree

Whenever we have a "magic" parameter like the degree of the polynomial, there is one well-known way to select these parameters: a validation set. (We will explore another approach in week 4.)

We split the sales dataset 3-way into training set, test set, and validation set as follows:

* Split our sales data into 2 sets: `training_and_validation` and `testing`. Use `random_split(0.9, seed=1)`.
* Further split our training data into two sets: `training` and `validation`. Use `random_split(0.5, seed=1)`.

Again, we set `seed=1` to obtain consistent results for different users.
```
training_and_validation, testing = sales.random_split(0.9, seed=1)
training, validation = training_and_validation.random_split(0.5, seed=1)
```

Next you should write a loop that does the following:

* For degree in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] (to get this in python type range(1, 15+1))
    * Build an SFrame of polynomial data of train_data['sqft_living'] at the current degree
        * hint: my_features = poly_data.column_names() gives you a list e.g. ['power_1', 'power_2', 'power_3'] which you might find useful for graphlab.linear_regression.create( features = my_features)
    * Add train_data['price'] to the polynomial SFrame
    * Learn a polynomial regression model of sqft vs price with that degree on TRAIN data
    * Compute the RSS on VALIDATION data for that degree (here you will want to use .predict()); you will need to make a polynomial SFrame using the validation data.
* Report which degree had the lowest RSS on validation data (remember python indexes from 0)

(Note you can turn off the print out of linear_regression.create() with verbose = False)

```
def optimize_model():
    for i in range(1,16):
        print("@@@@@@@@@ Start index = " + str(i))
        poly_validate_data = polynomial_sframe(validation['sqft_living'], i)
        poly_validate_data['price'] = validation['price']

        poly_data = polynomial_sframe(training['sqft_living'], i)
        my_features = poly_data.column_names() # get the name of the features
        poly_data['price'] = training['price'] # add price to the data since it's the target
        model = graphlab.linear_regression.create(poly_data, target = 'price', features = my_features, validation_set = None, verbose=False)
        print("@@@@@@@@@ Evaluating result index = " + str(i))
        print(model.get('coefficients'))
        print(model.evaluate(poly_validate_data))

optimize_model()
```

**Quiz Question: Which degree (1, 2, …, 15) had the lowest RSS on Validation data?**

Now that you have chosen the degree of your polynomial using validation data, compute the RSS of this model on TEST data. Report the RSS on your quiz.

From the output above, we get the minimum RMSE (and hence the minimum RSS) when degree = 6.

**Quiz Question: what is the RSS on TEST data for the model with the degree selected from Validation data?**

```
poly_test_data = polynomial_sframe(testing['sqft_living'], 6)
poly_test_data['price'] = testing['price']

poly_data = polynomial_sframe(training['sqft_living'], 6)
my_features = poly_data.column_names() # get the name of the features
poly_data['price'] = training['price'] # add price to the data since it's the target
model = graphlab.linear_regression.create(poly_data, target = 'price', features = my_features, validation_set=None, verbose=False)
print(model.get('coefficients'))
print(model.evaluate(poly_test_data))

RMSE = 237952.22629191662
RSS = RMSE * RMSE * len(testing)
print(RSS)
```
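Since the loop above reports RMSE via `model.evaluate()` rather than the RSS itself, a small helper like the sketch below can compute the residual sum of squares directly from the model's predictions. This is only a sketch; the name `get_residual_sum_of_squares` is ours and is not defined elsewhere in this notebook.

```
def get_residual_sum_of_squares(model, data, outcome):
    # Hypothetical helper (not part of the notebook above):
    # RSS is the sum over all observations of (actual - predicted)^2,
    # which equals RMSE^2 * n, matching the computation above.
    predictions = model.predict(data)
    residuals = outcome - predictions
    return (residuals * residuals).sum()

# Example usage with the degree-6 model fit on the test data above:
# print(get_residual_sum_of_squares(model, poly_test_data, poly_test_data['price']))
```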
# Classical Logic Gates with Quantum Circuits

```
from qiskit import *
from qiskit.tools.visualization import plot_histogram
import numpy as np
```

Using the NOT gate (expressed as `x` in Qiskit), the CNOT gate (expressed as `cx` in Qiskit) and the Toffoli gate (expressed as `ccx` in Qiskit), create functions to implement the XOR, AND, NAND and OR gates. An implementation of the NOT gate is provided as an example.

## NOT gate

This function takes a binary string input (`'0'` or `'1'`) and returns the opposite binary output.

```
def NOT(input):
    q = QuantumRegister(1) # a qubit in which to encode and manipulate the input
    c = ClassicalRegister(1) # a bit to store the output
    qc = QuantumCircuit(q, c) # this is where the quantum program goes

    # We encode '0' as the qubit state |0⟩, and '1' as |1⟩
    # Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
    # For an input of '1', we do an x to rotate the |0⟩ to |1⟩
    if input=='1':
        qc.x( q[0] )

    # Now we've encoded the input, we can do a NOT on it using x
    qc.x( q[0] )

    # Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
    qc.measure( q[0], c[0] )

    # We'll run the program on a simulator
    backend = Aer.get_backend('qasm_simulator')
    # Since the output will be deterministic, we can use just a single shot to get it
    job = execute(qc,backend,shots=1)
    output = next(iter(job.result().get_counts()))

    return output
```

## XOR gate

Takes two binary strings as input and gives one as output. The output is `'0'` when the inputs are equal and `'1'` otherwise.

```
def XOR(input1,input2):
    q = QuantumRegister(2) # two qubits in which to encode and manipulate the input
    c = ClassicalRegister(1) # a bit to store the output
    qc = QuantumCircuit(q, c) # this is where the quantum program goes

    # YOUR QUANTUM PROGRAM GOES HERE

    qc.measure(q[1],c[0]) # YOU CAN CHANGE THIS IF YOU WANT TO

    # We'll run the program on a simulator
    backend = Aer.get_backend('qasm_simulator')
    # Since the output will be deterministic, we can use just a single shot to get it
    job = execute(qc,backend,shots=1,memory=True)
    output = job.result().get_memory()[0]

    return output
```

## AND gate

Takes two binary strings as input and gives one as output. The output is `'1'` only when both the inputs are `'1'`.

```
def AND(input1,input2):
    q = QuantumRegister(3) # two qubits in which to encode the input, and one for the output
    c = ClassicalRegister(1) # a bit to store the output
    qc = QuantumCircuit(q, c) # this is where the quantum program goes

    # YOUR QUANTUM PROGRAM GOES HERE

    qc.measure(q[2],c[0]) # YOU CAN CHANGE THIS IF YOU WANT TO

    # We'll run the program on a simulator
    backend = Aer.get_backend('qasm_simulator')
    # Since the output will be deterministic, we can use just a single shot to get it
    job = execute(qc,backend,shots=1,memory=True)
    output = job.result().get_memory()[0]

    return output
```

## NAND gate

Takes two binary strings as input and gives one as output. The output is `'0'` only when both the inputs are `'1'`.
```
def NAND(input1,input2):
    q = QuantumRegister(3) # two qubits in which to encode the input, and one for the output
    c = ClassicalRegister(1) # a bit to store the output
    qc = QuantumCircuit(q, c) # this is where the quantum program goes

    # YOUR QUANTUM PROGRAM GOES HERE

    qc.measure(q[2],c[0]) # YOU CAN CHANGE THIS IF YOU WANT TO

    # We'll run the program on a simulator
    backend = Aer.get_backend('qasm_simulator')
    # Since the output will be deterministic, we can use just a single shot to get it
    job = execute(qc,backend,shots=1,memory=True)
    output = job.result().get_memory()[0]

    return output
```

## OR gate

Takes two binary strings as input and gives one as output. The output is `'1'` if either input is `'1'`.

```
def OR(input1,input2):
    q = QuantumRegister(3) # two qubits in which to encode the input, and one for the output
    c = ClassicalRegister(1) # a bit to store the output
    qc = QuantumCircuit(q, c) # this is where the quantum program goes

    # YOUR QUANTUM PROGRAM GOES HERE

    qc.measure(q[2],c[0]) # YOU CAN CHANGE THIS IF YOU WANT TO

    # We'll run the program on a simulator
    backend = Aer.get_backend('qasm_simulator')
    # Since the output will be deterministic, we can use just a single shot to get it
    job = execute(qc,backend,shots=1,memory=True)
    output = job.result().get_memory()[0]

    return output
```

## Tests

The following code runs the functions above for all possible inputs, so that you can check whether they work.

```
print('\nResults for the NOT gate')
for input in ['0','1']:
    print('    Input',input,'gives output',NOT(input))

print('\nResults for the XOR gate')
for input1 in ['0','1']:
    for input2 in ['0','1']:
        print('    Inputs',input1,input2,'give output',XOR(input1,input2))

print('\nResults for the AND gate')
for input1 in ['0','1']:
    for input2 in ['0','1']:
        print('    Inputs',input1,input2,'give output',AND(input1,input2))

print('\nResults for the NAND gate')
for input1 in ['0','1']:
    for input2 in ['0','1']:
        print('    Inputs',input1,input2,'give output',NAND(input1,input2))

print('\nResults for the OR gate')
for input1 in ['0','1']:
    for input2 in ['0','1']:
        print('    Inputs',input1,input2,'give output',OR(input1,input2))
```
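If you want a concrete starting point, the sketch below shows one common way to realize two of these gates with the primitives named above (a CNOT for XOR, a Toffoli for AND). It is only an illustration of the idea, not necessarily the intended solution to the exercise, and the helper names `encode_inputs`, `xor_circuit` and `and_circuit` are ours.

```
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit

def encode_inputs(qc, q, input1, input2):
    # Hypothetical helper: load the two classical bits onto the first two qubits.
    if input1 == '1':
        qc.x(q[0])
    if input2 == '1':
        qc.x(q[1])

def xor_circuit(input1, input2):
    q = QuantumRegister(2)
    c = ClassicalRegister(1)
    qc = QuantumCircuit(q, c)
    encode_inputs(qc, q, input1, input2)
    qc.cx(q[0], q[1])        # q[1] now holds input1 XOR input2
    qc.measure(q[1], c[0])
    return qc

def and_circuit(input1, input2):
    q = QuantumRegister(3)
    c = ClassicalRegister(1)
    qc = QuantumCircuit(q, c)
    encode_inputs(qc, q, input1, input2)
    qc.ccx(q[0], q[1], q[2]) # q[2] now holds input1 AND input2
    qc.measure(q[2], c[0])
    return qc
```

These circuits would be executed exactly as in the NOT example above (a single shot on the qasm simulator). NAND can be built as AND followed by an `x` on the output qubit, and OR can be built from NOTs and an AND via De Morgan's law.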
<img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">

# Tools for Monitoring Backends and Jobs

In this section, we will learn how to monitor the status of jobs submitted to devices and simulators (collectively called backends), as well as discover how to easily query backend details and view the collective state of all the backends available to you.

## Loading the Monitoring Tools

First, let us load the default qiskit routines, and register our IBMQ credentials.

```
from qiskit import *
IBMQ.load_accounts()
```

Functions for monitoring jobs and backends are here:

```
from qiskit.tools.monitor import job_monitor, backend_monitor, backend_overview
```

## Tracking Job Status

Jobs submitted to the IBM Q network can often take a long time to process, e.g. jobs with many circuits and/or shots, or they may have to wait in the queue behind other users. In situations such as these, it is beneficial to have a way of monitoring the progress of a job, or of several jobs at once. As of Qiskit `0.6+` it is possible to monitor the status of a job in a Jupyter notebook, and also in a Python script (version `0.7+`). Let's see how to make use of these tools.

### Monitoring the status of a single job

Let's build a simple Bell circuit, submit it to a device, and then monitor its status.

```
q = QuantumRegister(2)
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)

qc.h(q[0])
qc.cx(q[0], q[1])
qc.measure(q, c);
```

Let's grab the least busy backend:

```
from qiskit.providers.ibmq import least_busy

backend = least_busy(IBMQ.backends(filters=lambda x: x.configuration().n_qubits <= 5 and
                                   not x.configuration().simulator and
                                   x.status().operational==True))
backend.name()
```

Monitor the job using `job_monitor` in blocking mode (i.e. using the same thread as the Python interpreter):

```
job1 = execute(qc, backend)
job_monitor(job1)
```

Monitor the job using `job_monitor` in async mode (Jupyter notebooks only). The job will be monitored in a separate thread, allowing you to continue to work in the notebook.

```
job2 = execute(qc, backend)
job_monitor(job2, monitor_async=True)
```

### Monitoring many jobs simultaneously

Here we will monitor many jobs sent to the device. It is easiest if the jobs are stored in a list, to make retrieval simpler.

```
num_jobs = 5
my_jobs = []
for j in range(num_jobs):
    my_jobs.append(execute(qc, backend))
    job_monitor(my_jobs[j], monitor_async=True)
```

### Changing the interval of status updating

By default, the job status is checked every two seconds. However, the user is free to change this using the `interval` keyword argument of `job_monitor`.

```
job3 = execute(qc, backend)
job_monitor(job3, interval=5)
```

## Backend Details

So far we have been executing our jobs on a backend, but we have not explored the backends in any detail. For example, we have found the least busy backend, but we do not know whether this is the best backend with respect to gate errors, topology, etc. It is possible to get detailed information for a single backend by calling `backend_monitor`:

```
backend_monitor(backend)
```

Or, if we are interested in a higher-level view of all the backends available to us, then we can use `backend_overview()`:

```
backend_overview()
```

There are also Jupyter magic equivalents that give more detailed information.
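For a sense of what `job_monitor` is doing behind the scenes, the sketch below polls a job's status by hand. It is only an illustration, assuming the job object exposes the standard `status()` method that returns a status value with a `.name` attribute; the helper name `poll_job` is ours, not part of Qiskit.

```
import time

def poll_job(job, interval=2):
    # Hypothetical helper (not part of Qiskit): repeatedly check a job's status,
    # which is roughly what job_monitor does with a nicer display.
    status = job.status()
    while status.name not in ('DONE', 'CANCELLED', 'ERROR'):
        print('Job status:', status.name)
        time.sleep(interval)
        status = job.status()
    print('Final job status:', status.name)

# Example usage (assuming job1 was created as above):
# poll_job(job1, interval=5)
```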
``` from fastai2.torch_basics import * from fastai2.basics import * from fastai2.data.all import * from fastai2.callback.all import * from fastai2.vision.all import * from fastai2_audio.core.all import * from fastai2_audio.augment.all import * import torchaudio URLs.SPEAKERS10 p10speakers = untar_data(URLs.SPEAKERS10, extract_func=tar_extract_at_filename) p10speakers x = AudioGetter("", recurse=True, folders=None) files_10 = x(p10speakers) #crop 2s from the signal and turn it to a MelSpectrogram with no augmentation cfg_voice = AudioConfig.Voice() a2s = AudioToSpec.from_cfg(cfg_voice) # https://pytorch.org/audio/_modules/torchaudio/backend/utils.html torchaudio.get_audio_backend(), dir(torchaudio)#, torchaudio.backend.utils.list_audio_backends() #torchaudio.initialize_sox() auds = DataBlock(blocks=(AudioBlock.from_folder(p10speakers, crop_signal_to=2000), CategoryBlock), get_items=get_audio_files, splitter=RandomSplitter(), item_tfms = a2s, get_y=lambda x: str(x).split('/')[-1][:5]) cats = [y for _,y in auds.datasets(p10speakers)] #verify categories are being correctly assigned test_eq(min(cats).item(), 0) test_eq(max(cats).item(), 9) auds.summary(p10speakers) dbunch = auds.dataloaders(p10speakers, bs=64) ``` # In the end it was that I have a lot of conflicts on my conda env ``` name: fastai2 channels: - pytorch - fastai - rapidsai - nvidia - conda-forge - defaults dependencies: - _libgcc_mutex=0.1=conda_forge - _openmp_mutex=4.5=0_gnu - abseil-cpp=20200225.2=he1b5a44_0 - aiohttp=3.6.2=py37h516909a_0 - altair=4.1.0=py_1 - aplus=0.11.0=py_1 - appdirs=1.4.3=py_1 - arrow-cpp=0.15.0=py37h090bef1_2 - astroid=2.4.0=py37_0 - astropy=4.0.1.post1=py37h8f50634_0 - async-timeout=3.0.1=py_1000 - attrs=19.3.0=py_0 - aws-sdk-cpp=1.7.164=hc831370_1 - backcall=0.1.0=py37_0 - binutils_impl_linux-64=2.31.1=h6176602_1 - binutils_linux-64=2.31.1=h6176602_9 - blas=1.0=mkl - bleach=3.1.4=py_0 - blosc=1.19.0=he1b5a44_0 - bokeh=1.4.0=py37hc8dfbb8_1 - boost=1.70.0=py37h9de70de_1 - boost-cpp=1.70.0=h8e57a91_2 - boto3=1.14.7=pyh9f0ad1d_0 - botocore=1.17.7=pyh9f0ad1d_0 - bqplot=0.12.12=pyh9f0ad1d_0 - branca=0.3.1=py_0 - brotli=1.0.7=he6710b0_0 - brotlipy=0.7.0=py37h7b6447c_1000 - bzip2=1.0.8=h7b6447c_0 - c-ares=1.15.0=h7b6447c_1001 - ca-certificates=2020.6.24=0 - cachetools=4.1.0=py_1 - cairo=1.16.0=hcf35c78_1003 - certifi=2020.6.20=py37_0 - cffi=1.14.0=py37h2e261b9_0 - cfitsio=3.470=hb60a0a2_2 - chardet=3.0.4=py37_1003 - charls=2.1.0=he1b5a44_2 - click=7.1.2=pyh9f0ad1d_0 - click-plugins=1.1.1=py_0 - cligj=0.5.0=py_0 - colorcet=2.0.1=py_0 - cryptography=2.9.2=py37h1ba5d50_0 - cudatoolkit=10.1.243=h6bb024c_0 - cudf=0.14.0=py37_0 - cudnn=7.6.0=cuda10.1_0 - cugraph=0.14.0=py37_0 - cuml=0.14.0=cuda10.1_py37_0 - cupy=7.5.0=py37h0632833_0 - curl=7.68.0=hf8cf82a_0 - cusignal=0.14.0=py37_0 - cuspatial=0.14.0=py37_0 - cuxfilter=0.14.0=py37_0 - cycler=0.10.0=py37_0 - cymem=2.0.2=py37he1b5a44_0 - cython=0.29.20=py37he6710b0_0 - cython-blis=0.2.4=py37h516909a_1 - cytoolz=0.10.1=py37h516909a_0 - dask=2.19.0=py_0 - dask-core=2.19.0=py_0 - dask-cuda=0.14.0=py37_0 - dask-cudf=0.14.0=py37_0 - dask-xgboost=0.2.0.dev28=cuda10.1py37_0 - datashader=0.10.0=py_0 - datashape=0.5.4=py_1 - dbus=1.13.14=hb2f20db_0 - decorator=4.4.2=py_0 - defusedxml=0.6.0=py_0 - distributed=2.19.0=py37hc8dfbb8_0 - dlpack=0.2=he1b5a44_1 - docutils=0.15.2=py37_0 - double-conversion=3.1.5=he6710b0_1 - entrypoints=0.3=py37_0 - expat=2.2.6=he6710b0_0 - fastavro=0.23.4=py37h8f50634_0 - fastrlock=0.5=py37h3340039_0 - fiona=1.8.13=py37h900e953_0 - 
fontconfig=2.13.1=h86ecdb6_1001 - freetype=2.9.1=h8a8886c_1 - freexl=1.0.5=h14c3975_1002 - fribidi=1.0.5=h7b6447c_0 - fsspec=0.7.4=py_0 - future=0.18.2=py37hc8dfbb8_1 - gcc_impl_linux-64=7.3.0=habb00fd_1 - gcc_linux-64=7.3.0=h553295d_9 - gdal=3.0.2=py37hbb6b9fb_2 - geopandas=0.7.0=py_1 - geos=3.7.2=he1b5a44_2 - geotiff=1.5.1=hbd99317_7 - gettext=0.19.8.1=hc5be6a0_1002 - gflags=2.2.2=he6710b0_0 - giflib=5.1.7=h516909a_1 - glib=2.63.1=h5a9c865_0 - glog=0.4.0=he6710b0_0 - gmp=6.1.2=h6c8ec71_1 - graphite2=1.3.13=h23475e2_0 - graphviz=2.42.3=h0511662_0 - grpc-cpp=1.23.0=h18db393_0 - gst-plugins-base=1.14.5=h0935bb2_2 - gstreamer=1.14.5=h36ae1b5_2 - gxx_impl_linux-64=7.3.0=hdf63c60_1 - gxx_linux-64=7.3.0=h553295d_9 - h5py=2.10.0=nompi_py37h513d04c_102 - harfbuzz=2.4.0=h9f30f68_3 - hdf4=4.2.13=hf30be14_1003 - hdf5=1.10.5=nompi_h3c11f04_1104 - heapdict=1.0.1=py_0 - icu=64.2=he1b5a44_1 - idna=2.9=py_1 - imagecodecs-lite=2019.12.3=py37h03ebfcd_1 - imageio=2.8.0=py_0 - importlib-metadata=1.6.1=py37hc8dfbb8_0 - importlib_metadata=1.5.0=py37_0 - intel-openmp=2020.0=166 - ipydatawidgets=4.0.1=py_0 - ipykernel=5.1.4=py37h39e3cac_0 - ipyleaflet=0.13.0=pyh9f0ad1d_0 - ipympl=0.5.6=pyh9f0ad1d_1 - ipyscales=0.5.0=pyh9f0ad1d_0 - ipython=7.13.0=py37h5ca1d4c_0 - ipython_genutils=0.2.0=py37_0 - ipyvolume=0.6.0a6=pyh9f0ad1d_0 - ipyvue=1.3.2=pyh9f0ad1d_0 - ipyvuetify=1.4.0=pyh8c360ce_1 - ipywebrtc=0.5.0=py37_0 - ipywidgets=7.5.1=py_0 - isort=4.3.21=py37_0 - jedi=0.17.0=py37_0 - jinja2=2.11.2=py_0 - jmespath=0.10.0=pyh9f0ad1d_0 - joblib=0.14.1=py_0 - jpeg=9d=h516909a_0 - json-c=0.13.1=hbfbb72e_1002 - json5=0.9.0=py_0 - jsonschema=3.2.0=py37_0 - jupyter=1.0.0=py37_7 - jupyter-server-proxy=1.5.0=py_0 - jupyter_client=6.1.3=py_0 - jupyter_console=6.1.0=py_0 - jupyter_core=4.6.3=py37_0 - jupyterlab=2.1.4=py_0 - jupyterlab_server=1.1.3=py_0 - jxrlib=1.1=h516909a_2 - kealib=1.4.13=hec59c27_0 - kiwisolver=1.2.0=py37hfd86e86_0 - krb5=1.16.4=h2fd8d38_0 - lazy-object-proxy=1.4.3=py37h7b6447c_0 - lcms2=2.11=hbd6801e_0 - ld_impl_linux-64=2.33.1=h53a641e_7 - libaec=1.0.4=he1b5a44_1 - libcudf=0.14.0=cuda10.1_0 - libcugraph=0.14.0=cuda10.1_0 - libcuml=0.14.0=cuda10.1_0 - libcumlprims=0.14.1=cuda10.1_0 - libcurl=7.68.0=hda55be3_0 - libcuspatial=0.14.0=cuda10.1_0 - libdap4=3.20.4=hd3bb157_0 - libedit=3.1.20181209=hc058e9b_0 - libevent=2.1.10=h72c5cf5_0 - libffi=3.2.1=hd88cf55_4 - libgcc-ng=9.2.0=h24d8f2e_2 - libgdal=3.0.2=hc7cfd23_2 - libgfortran-ng=7.3.0=hdf63c60_0 - libgomp=9.2.0=h24d8f2e_2 - libgpuarray=0.7.6=h14c3975_0 - libhwloc=2.1.0=h3c4fd83_0 - libiconv=1.15=h516909a_1006 - libkml=1.3.0=h4fcabce_1010 - libllvm8=8.0.1=hc9558a2_0 - libnetcdf=4.7.1=nompi_h94020b1_102 - libnvstrings=0.14.0=cuda10.1_0 - libpng=1.6.37=hbc83047_0 - libpq=11.5=hd9ab2ff_2 - libprotobuf=3.8.0=h8b12597_0 - librmm=0.14.0=cuda10.1_0 - libsodium=1.0.16=h1bed415_0 - libspatialindex=1.9.3=he1b5a44_3 - libspatialite=4.3.0a=h4f6d029_1032 - libssh2=1.9.0=hab1572f_2 - libstdcxx-ng=9.1.0=hdf63c60_0 - libtiff=4.1.0=hfc65ed5_0 - libtool=2.4.6=h14c3975_1002 - libuuid=2.32.1=h14c3975_1000 - libuv=1.34.0=h516909a_0 - libwebp=1.0.2=hf4e8a37_4 - libxcb=1.13=h1bed415_1 - libxgboost=1.1.0dev.rapidsai0.14=cuda10.1_0 - libxml2=2.9.10=hee79883_0 - libzopfli=1.0.3=he1b5a44_0 - llvmlite=0.32.1=py37h5202443_0 - locket=0.2.0=py_2 - lz4-c=1.8.3=he1b5a44_1001 - mako=1.1.2=py_0 - markdown=3.2.2=py_0 - markupsafe=1.1.1=py37h7b6447c_0 - matplotlib=3.2.2=0 - matplotlib-base=3.2.2=py37h30547a4_0 - mccabe=0.6.1=py37_1 - mistune=0.8.4=py37h7b6447c_0 - mkl=2020.0=166 - 
mkl-service=2.3.0=py37he904b0f_0 - mkl_fft=1.0.15=py37ha843d7b_0 - mkl_random=1.1.0=py37hd6b4f25_0 - msgpack-python=1.0.0=py37h99015e2_1 - multidict=4.7.5=py37h8f50634_1 - multipledispatch=0.6.0=py_0 - munch=2.5.0=py_0 - murmurhash=1.0.2=py37he6710b0_0 - nbconvert=5.6.1=py37_0 - nbformat=5.0.6=py_0 - nccl=2.5.7.1=h51cf6c1_0 - ncurses=6.2=he6710b0_1 - nest-asyncio=1.3.3=py_0 - networkx=2.4=py_1 - ninja=1.9.0=py37hfd86e86_0 - nodejs=13.13.0=hf5d1a2b_0 - notebook=6.0.3=py37_0 - numba=0.49.1=py37h0573a6f_0 - numpy=1.18.1=py37h4f9e942_0 - numpy-base=1.18.1=py37hde5b4d6_1 - nvstrings=0.14.0=py37_0 - olefile=0.46=py37_0 - onnx=1.6.0=py37he1b5a44_0 - openjpeg=2.3.1=h981e76c_3 - openssl=1.1.1g=h7b6447c_0 - pandas=0.25.3=py37hb3f55d8_0 - pandoc=2.2.3.2=0 - pandocfilters=1.4.2=py37_1 - panel=0.6.4=0 - pango=1.42.4=ha030887_1 - param=1.9.3=py_0 - parquet-cpp=1.5.1=2 - parso=0.7.0=py_0 - partd=1.1.0=py_0 - pcre=8.44=he1b5a44_0 - pexpect=4.8.0=py37_0 - pickleshare=0.7.5=py37_0 - pillow=7.1.2=py37hb39fc2d_0 - pip=20.0.2=py37_3 - pixman=0.38.0=h7b6447c_0 - plac=0.9.6=py37_0 - plotly=4.7.1=py_0 - poppler=0.67.0=h14e79db_8 - poppler-data=0.4.9=1 - postgresql=11.5=hc63931a_2 - preshed=2.0.1=py37he6710b0_0 - progressbar2=3.51.3=pyh9f0ad1d_0 - proj=6.2.1=hc80f0dc_0 - prometheus_client=0.7.1=py_0 - prompt-toolkit=3.0.4=py_0 - prompt_toolkit=3.0.4=0 - psutil=5.7.0=py37h7b6447c_0 - ptyprocess=0.6.0=py37_0 - py-xgboost=1.1.0dev.rapidsai0.14=cuda10.1py37_0 - pycparser=2.20=py_0 - pyct=0.4.6=py_0 - pyct-core=0.4.6=py_0 - pyee=7.0.2=pyh9f0ad1d_0 - pygments=2.6.1=py_0 - pygpu=0.7.6=py37h035aef0_0 - pylint=2.5.0=py37_1 - pynvml=8.0.4=py_0 - pyopengl=3.1.5=py_0 - pyopenssl=19.1.0=py37_0 - pyparsing=2.4.7=py_0 - pyppeteer=0.0.25=py_1 - pyproj=2.4.2.post1=py37h12732c1_0 - pyqt=5.9.2=py37h05f1152_2 - pyrsistent=0.16.0=py37h7b6447c_0 - pysocks=1.7.1=py37_0 - python=3.7.6=cpython_h8356626_6 - python-dateutil=2.8.1=py_0 - python-utils=2.4.0=py_0 - python_abi=3.7=1_cp37m - pythreejs=2.2.0=pyh8c360ce_0 - pytz=2020.1=py_0 - pyviz_comms=0.7.5=pyh9f0ad1d_0 - pywavelets=1.1.1=py37h03ebfcd_1 - pyyaml=5.3.1=py37h7b6447c_0 - pyzmq=18.1.1=py37he6710b0_0 - qt=5.9.7=h0c104cb_3 - qtconsole=4.7.3=py_0 - qtpy=1.9.0=py_0 - rapids=0.14.0=cuda10.1_py37_2 - rapids-xgboost=0.14.0=cuda10.1_py37_2 - re2=2020.04.01=he1b5a44_0 - readline=8.0=h7b6447c_0 - requests=2.23.0=py37_0 - retrying=1.3.3=py37_2 - rmm=0.14.0=py37_0 - rtree=0.9.4=py37h8526d28_1 - s3fs=0.2.2=py_0 - s3transfer=0.3.3=py37hc8dfbb8_1 - scikit-learn=0.22.1=py37hd81dba3_0 - scipy=1.4.1=py37h0b6359f_0 - seaborn=0.10.1=py_0 - send2trash=1.5.0=py37_0 - setuptools=46.1.3=py37_0 - shapely=1.6.4=py37hec07ddf_1006 - simpervisor=0.3=py_1 - sip=4.19.8=py37hf484d3e_0 - six=1.14.0=py37_0 - snappy=1.1.8=he1b5a44_2 - sortedcontainers=2.2.2=pyh9f0ad1d_0 - spacy=2.1.8=py37hc9558a2_0 - spdlog=1.6.1=hc9558a2_0 - sqlite=3.31.1=h62c20be_1 - srsly=0.1.0=py37he1b5a44_0 - tabulate=0.8.7=pyh9f0ad1d_0 - tbb=2018.0.5=h2d50403_0 - tblib=1.6.0=py_0 - terminado=0.8.3=py37_0 - testpath=0.4.4=py_0 - theano=1.0.4=py37hfd86e86_0 - thinc=7.0.8=py37hc9558a2_0 - thrift-cpp=0.12.0=hf3afdfd_1004 - tiledb=1.6.2=h7d710e0_2 - tk=8.6.10=hed695b0_0 - toml=0.10.0=py37h28b3542_0 - toolz=0.10.0=py_0 - torchaudio=0.5.0=py37 - tornado=6.0.4=py37h7b6447c_1 - tqdm=4.46.0=py_0 - traitlets=4.3.3=py37_0 - traittypes=0.2.1=py_1 - typed-ast=1.4.1=py37h7b6447c_0 - typing_extensions=3.7.4.2=py_0 - tzcode=2020a=h516909a_0 - ucx=1.8.0+gf6ec8d4=cuda10.1_20 - ucx-py=0.14.0+gf6ec8d4=py37_0 - uriparser=0.9.3=he6710b0_1 - vaex=1.0.0b7=py_0 - 
vaex-astro=0.7.0=pyh9f0ad1d_0 - vaex-core=2.0.3=py37h0da4684_0 - vaex-distributed=0.3.0=py_0 - vaex-hdf5=0.6.0=pyh9f0ad1d_0 - vaex-jupyter=0.5.2=pyh9f0ad1d_0 - vaex-ml=0.9.0=pyh9f0ad1d_0 - vaex-server=0.3.1=pyh9f0ad1d_0 - vaex-ui=0.3.0=py_0 - vaex-viz=0.4.0=pyh9f0ad1d_0 - vega_datasets=0.8.0=py_0 - wasabi=0.2.2=py_0 - wcwidth=0.1.9=py_0 - webencodings=0.5.1=py37_1 - websockets=8.1=py37h8f50634_1 - wheel=0.34.2=py37_0 - widgetsnbextension=3.5.1=py37_0 - wrapt=1.11.2=py37h7b6447c_0 - xarray=0.15.1=py_0 - xerces-c=3.2.2=h8412b87_1004 - xgboost=1.1.0dev.rapidsai0.14=cuda10.1py37_0 - xorg-kbproto=1.0.7=h14c3975_1002 - xorg-libice=1.0.10=h516909a_0 - xorg-libsm=1.2.3=h84519dc_1000 - xorg-libx11=1.6.9=h516909a_0 - xorg-libxext=1.3.4=h516909a_0 - xorg-libxpm=3.5.13=h516909a_0 - xorg-libxrender=0.9.10=h516909a_1002 - xorg-libxt=1.1.5=h516909a_1003 - xorg-renderproto=0.11.1=h14c3975_1002 - xorg-xextproto=7.3.0=h14c3975_1002 - xorg-xproto=7.0.31=h14c3975_1007 - xz=5.2.5=h7b6447c_0 - yaml=0.1.7=had09818_2 - yarl=1.3.0=py37h516909a_1000 - zeromq=4.3.1=he6710b0_3 - zict=2.0.0=py_0 - zipp=3.1.0=py_0 - zlib=1.2.11=h7b6447c_3 - zstd=1.4.3=h3b9ef0a_0 - pip: - absl-py==0.9.0 - adamp==0.2.0 - albumentations==0.4.5 - apipkg==1.5 - astor==0.8.1 - attrdict==2.0.1 - audioread==2.1.8 - axial-positional-embedding==0.2.1 - blessings==1.7 - category-encoders==2.2.2 - clldutils==3.5.2 - cloudpickle==1.3.0 - colorama==0.4.3 - colorednoise==1.1.1 - colorlog==4.1.0 - configparser==5.0.0 - contrastive-learner==0.1.0 - csvw==1.7.0 - deepspeech==0.7.4 - dill==0.3.1.1 - docker-pycreds==0.4.0 - execnet==1.7.1 - fastai-xla-extensions==0.0.1 - fastprogress==0.2.4 - fastscript==0.1.5 - filelock==3.0.12 - fire==0.3.1 - flame-analyzer==0.1.5 - flask==1.1.2 - gast==0.2.2 - gitdb==4.0.5 - gitpython==3.1.2 - google-auth==1.14.2 - google-auth-oauthlib==0.4.1 - google-pasta==0.2.0 - gpustat==0.6.0 - gql==0.2.0 - graphql-core==1.1 - grpcio==1.28.1 - gym==0.17.2 - imagecodecs==2020.2.18 - imgaug==0.2.6 - ipyexperiments==0.1.18.dev0 - isodate==0.6.0 - itsdangerous==2.0.0a1 - kaggle==1.5.6 - keras==2.3.1 - keras-applications==1.0.8 - keras-preprocessing==1.1.2 - kornia==0.3.1 - lasagne==0.1 - librosa==0.7.2 - line-profiler==3.0.2 - linear-attention-transformer==0.11.0 - linformer==0.2.0 - local-attention==1.0.2 - more-itertools==8.4.0 - mpmath==1.1.0 - nbdev==0.2.20 - nlp==0.2.0 - nltk==3.5 - nose==1.3.7 - nvidia-ml-py3==7.352.0 - oauthlib==3.1.0 - opencv-python==4.2.0.34 - opt-einsum==3.3.0 - packaging==20.3 - pathtools==0.1.2 - patsy==0.5.1 - pluggy==0.13.1 - pprofile==2.0.5 - product-key-memory==0.1.10 - promise==2.3 - protobuf==3.11.3 - py==1.8.2 - py-heat==0.0.6 - py-heat-magic==0.0.2 - pyarrow==0.17.1 - pyasn1==0.4.8 - pyasn1-modules==0.2.8 - pybind11==2.5.0 - pydicom==1.4.2 - pyglet==1.5.0 - pytest==5.4.3 - pytest-forked==1.1.3 - pytest-xdist==1.32.0 - python-slugify==4.0.0 - pytorch-lightning==0.7.5 - regex==2020.5.14 - requests-oauthlib==1.3.0 - resampy==0.2.2 - retry==0.9.2 - rfc3986==1.4.0 - rouge-score==0.0.4 - rsa==4.0 - sacremoses==0.0.43 - scikit-image==0.17.2 - segments==2.1.3 - sentencepiece==0.1.91 - sentry-sdk==0.14.4 - seqeval==0.0.12 - shifterator==0.1.3 - shortuuid==1.0.1 - slimevolleygym==0.1.0 - smmap==3.0.4 - soundfile==0.10.3.post1 - statsmodels==0.11.1 - stylegan2-pytorch==0.18.3 - subprocess32==3.5.4 - sympy==1.6.1 - taichi==0.6.14 - tensorboard==1.15.0 - tensorboard-plugin-wit==1.6.0.post3 - tensorboardx==2.0 - tensorflow==1.15.0 - tensorflow-estimator==1.15.1 - termcolor==1.1.0 - text-unidecode==1.3 - 
tifffile==2020.5.11 - tokenizers==0.8.1rc1 - torch==1.6.0 - torchvision==0.7.0 - transformers==3.0.2 - unidecode==0.04.20 - uritemplate==3.0.1 - urllib3==1.24.3 - vector-quantize-pytorch==0.0.2 - wandb==0.8.36 - watchdog==0.10.2 - werkzeug==1.0.1 prefix: /home/tyoc213/miniconda3/envs/fastai2 ``` # The revisions ``` 2020-05-09 14:59:09 (rev 0) +_libgcc_mutex-0.1 (defaults/linux-64) +attrs-19.3.0 (defaults/noarch) +backcall-0.1.0 (defaults/linux-64) +blas-1.0 (defaults/linux-64) +bleach-3.1.4 (defaults/noarch) +ca-certificates-2020.1.1 (defaults/linux-64) +certifi-2020.4.5.1 (defaults/linux-64) +cffi-1.14.0 (defaults/linux-64) +chardet-3.0.4 (defaults/linux-64) +cryptography-2.9.2 (defaults/linux-64) +cudatoolkit-10.2.89 (defaults/linux-64) +cycler-0.10.0 (defaults/linux-64) +cymem-2.0.2 (fastai/linux-64) +cython-blis-0.2.4 (fastai/linux-64) +dbus-1.13.14 (defaults/linux-64) +decorator-4.4.2 (defaults/noarch) +defusedxml-0.6.0 (defaults/noarch) +entrypoints-0.3 (defaults/linux-64) +expat-2.2.6 (defaults/linux-64) +fastprogress-0.2.2 (fastai/noarch) +fontconfig-2.13.0 (defaults/linux-64) +freetype-2.9.1 (defaults/linux-64) +glib-2.63.1 (defaults/linux-64) +gmp-6.1.2 (defaults/linux-64) +gst-plugins-base-1.14.0 (defaults/linux-64) +gstreamer-1.14.0 (defaults/linux-64) +icu-58.2 (defaults/linux-64) +idna-2.9 (defaults/noarch) +importlib_metadata-1.5.0 (defaults/linux-64) +intel-openmp-2020.0 (defaults/linux-64) +ipykernel-5.1.4 (defaults/linux-64) +ipython-7.13.0 (defaults/linux-64) +ipython_genutils-0.2.0 (defaults/linux-64) +ipywidgets-7.5.1 (defaults/noarch) +jedi-0.17.0 (defaults/linux-64) +jinja2-2.11.2 (defaults/noarch) +joblib-0.14.1 (defaults/noarch) +jpeg-9b (defaults/linux-64) +jsonschema-3.2.0 (defaults/linux-64) +jupyter-1.0.0 (defaults/linux-64) +jupyter_client-6.1.3 (defaults/noarch) +jupyter_console-6.1.0 (defaults/noarch) +jupyter_core-4.6.3 (defaults/linux-64) +kiwisolver-1.2.0 (defaults/linux-64) +ld_impl_linux-64-2.33.1 (defaults/linux-64) +libedit-3.1.20181209 (defaults/linux-64) +libffi-3.2.1 (defaults/linux-64) +libgcc-ng-9.1.0 (defaults/linux-64) +libgfortran-ng-7.3.0 (defaults/linux-64) +libpng-1.6.37 (defaults/linux-64) +libsodium-1.0.16 (defaults/linux-64) +libstdcxx-ng-9.1.0 (defaults/linux-64) +libtiff-4.1.0 (defaults/linux-64) +libuuid-1.0.3 (defaults/linux-64) +libxcb-1.13 (defaults/linux-64) +libxml2-2.9.9 (defaults/linux-64) +markupsafe-1.1.1 (defaults/linux-64) +matplotlib-3.1.3 (defaults/linux-64) +matplotlib-base-3.1.3 (defaults/linux-64) +mistune-0.8.4 (defaults/linux-64) +mkl-2020.0 (defaults/linux-64) +mkl-service-2.3.0 (defaults/linux-64) +mkl_fft-1.0.15 (defaults/linux-64) +mkl_random-1.1.0 (defaults/linux-64) +murmurhash-1.0.2 (defaults/linux-64) +nbconvert-5.6.1 (defaults/linux-64) +nbformat-5.0.6 (defaults/noarch) +ncurses-6.2 (defaults/linux-64) +ninja-1.9.0 (defaults/linux-64) +notebook-6.0.3 (defaults/linux-64) +numpy-1.18.1 (defaults/linux-64) +numpy-base-1.18.1 (defaults/linux-64) +olefile-0.46 (defaults/linux-64) +openssl-1.1.1g (defaults/linux-64) +pandas-1.0.3 (defaults/linux-64) +pandoc-2.2.3.2 (defaults/linux-64) +pandocfilters-1.4.2 (defaults/linux-64) +parso-0.7.0 (defaults/noarch) +pcre-8.43 (defaults/linux-64) +pexpect-4.8.0 (defaults/linux-64) +pickleshare-0.7.5 (defaults/linux-64) +pillow-7.1.2 (defaults/linux-64) +pip-20.0.2 (defaults/linux-64) +plac-0.9.6 (defaults/linux-64) +preshed-2.0.1 (defaults/linux-64) +prometheus_client-0.7.1 (defaults/noarch) +prompt-toolkit-3.0.4 (defaults/noarch) +prompt_toolkit-3.0.4 (defaults/noarch) 
+ptyprocess-0.6.0 (defaults/linux-64) +pycparser-2.20 (defaults/noarch) +pygments-2.6.1 (defaults/noarch) +pyopenssl-19.1.0 (defaults/linux-64) +pyparsing-2.4.7 (defaults/noarch) +pyqt-5.9.2 (defaults/linux-64) +pyrsistent-0.16.0 (defaults/linux-64) +pysocks-1.7.1 (defaults/linux-64) +python-3.7.7 (defaults/linux-64) +python-dateutil-2.8.1 (defaults/noarch) +pytorch-1.5.0 (pytorch/linux-64) +pytz-2020.1 (defaults/noarch) +pyyaml-5.3.1 (defaults/linux-64) +pyzmq-18.1.1 (defaults/linux-64) +qt-5.9.7 (defaults/linux-64) +qtconsole-4.7.3 (defaults/noarch) +qtpy-1.9.0 (defaults/noarch) +readline-8.0 (defaults/linux-64) +requests-2.23.0 (defaults/linux-64) +scikit-learn-0.22.1 (defaults/linux-64) +scipy-1.4.1 (defaults/linux-64) +send2trash-1.5.0 (defaults/linux-64) +setuptools-46.1.3 (defaults/linux-64) +sip-4.19.8 (defaults/linux-64) +six-1.14.0 (defaults/linux-64) +spacy-2.1.8 (fastai/linux-64) +sqlite-3.31.1 (defaults/linux-64) +srsly-0.1.0 (fastai/linux-64) +terminado-0.8.3 (defaults/linux-64) +testpath-0.4.4 (defaults/noarch) +thinc-7.0.8 (fastai/linux-64) +tk-8.6.8 (defaults/linux-64) +torchvision-0.6.0 (pytorch/linux-64) +tornado-6.0.4 (defaults/linux-64) +tqdm-4.46.0 (defaults/noarch) +traitlets-4.3.3 (defaults/linux-64) +urllib3-1.25.8 (defaults/linux-64) +wasabi-0.2.2 (fastai/noarch) +wcwidth-0.1.9 (defaults/noarch) +webencodings-0.5.1 (defaults/linux-64) +wheel-0.34.2 (defaults/linux-64) +widgetsnbextension-3.5.1 (defaults/linux-64) +xz-5.2.5 (defaults/linux-64) +yaml-0.1.7 (defaults/linux-64) +zeromq-4.3.1 (defaults/linux-64) +zipp-3.1.0 (defaults/noarch) +zlib-1.2.11 (defaults/linux-64) +zstd-1.3.7 (defaults/linux-64) 2020-05-09 15:02:41 (rev 1) +arrow-cpp-0.15.1 (defaults/linux-64) +boost-cpp-1.71.0 (defaults/linux-64) +brotli-1.0.7 (defaults/linux-64) +bzip2-1.0.8 (defaults/linux-64) +c-ares-1.15.0 (defaults/linux-64) +double-conversion-3.1.5 (defaults/linux-64) +gflags-2.2.2 (defaults/linux-64) +glog-0.4.0 (defaults/linux-64) +grpc-cpp-1.26.0 (defaults/linux-64) +libboost-1.71.0 (defaults/linux-64) +libevent-2.1.8 (defaults/linux-64) +libprotobuf-3.11.2 (defaults/linux-64) +lz4-c-1.8.1.2 (defaults/linux-64) +pyarrow-0.15.1 (defaults/linux-64) +re2-2019.08.01 (defaults/linux-64) +snappy-1.1.7 (defaults/linux-64) +thrift-cpp-0.11.0 (defaults/linux-64) +uriparser-0.9.3 (defaults/linux-64) 2020-05-09 15:04:34 (rev 2) +astroid-2.4.0 (defaults/linux-64) +isort-4.3.21 (defaults/linux-64) +lazy-object-proxy-1.4.3 (defaults/linux-64) +mccabe-0.6.1 (defaults/linux-64) +pylint-2.5.0 (defaults/linux-64) +toml-0.10.0 (defaults/linux-64) +typed-ast-1.4.1 (defaults/linux-64) +wrapt-1.11.2 (defaults/linux-64) 2020-05-10 14:36:07 (rev 3) cudatoolkit {10.2.89 (defaults/linux-64) -> 10.1.243 (defaults/linux-64)} pytorch {1.5.0 (pytorch/linux-64) -> 1.5.0 (pytorch/linux-64)} torchvision {0.6.0 (pytorch/linux-64) -> 0.6.0 (pytorch/linux-64)} 2020-05-14 23:52:11 (rev 4) ca-certificates {2020.1.1 (defaults/linux-64) -> 2020.4.5.1 (conda-forge/linux-64)} certifi {2020.4.5.1 (defaults/linux-64) -> 2020.4.5.1 (conda-forge/linux-64)} openssl {1.1.1g (defaults/linux-64) -> 1.1.1g (conda-forge/linux-64)} +json5-0.9.0 (conda-forge/noarch) +jupyterlab-2.1.2 (conda-forge/noarch) +jupyterlab_server-1.1.3 (conda-forge/noarch) +python_abi-3.7 (conda-forge/linux-64) 2020-05-15 13:37:47 (rev 5) ca-certificates {2020.4.5.1 (conda-forge/linux-64) -> 2020.1.1 (defaults/linux-64)} certifi {2020.4.5.1 (conda-forge/linux-64) -> 2020.4.5.1 (defaults/linux-64)} openssl {1.1.1g (conda-forge/linux-64) -> 1.1.1g 
(defaults/linux-64)} 2020-05-22 18:57:33 (rev 6) +ipyexperiments-0.1.17 (stason/noarch) +nvidia-ml-py3-7.352.0 (fastai/noarch) +psutil-5.7.0 (defaults/linux-64) 2020-05-26 22:17:21 (rev 7) ca-certificates {2020.1.1 (defaults/linux-64) -> 2020.4.5.1 (conda-forge/linux-64)} certifi {2020.4.5.1 (defaults/linux-64) -> 2020.4.5.1 (conda-forge/linux-64)} openssl {1.1.1g (defaults/linux-64) -> 1.1.1g (conda-forge/linux-64)} +onnx-1.6.0 (conda-forge/linux-64) +protobuf-3.11.2 (conda-forge/linux-64) 2020-05-29 00:05:22 (rev 8) ca-certificates {2020.4.5.1 (conda-forge/linux-64) -> 2020.1.1 (defaults/linux-64)} certifi {2020.4.5.1 (conda-forge/linux-64) -> 2020.4.5.1 (defaults/linux-64)} openssl {1.1.1g (conda-forge/linux-64) -> 1.1.1g (defaults/linux-64)} +seaborn-0.10.1 (defaults/noarch) 2020-05-29 00:05:59 (rev 9) +plotly-4.7.1 (defaults/noarch) +retrying-1.3.3 (defaults/linux-64) 2020-06-03 16:13:53 (rev 10) ca-certificates {2020.1.1 (defaults/linux-64) -> 2020.4.5.1 (conda-forge/linux-64)} certifi {2020.4.5.1 (defaults/linux-64) -> 2020.4.5.1 (conda-forge/linux-64)} jupyterlab {2.1.2 (conda-forge/noarch) -> 2.1.4 (conda-forge/noarch)} openssl {1.1.1g (defaults/linux-64) -> 1.1.1g (conda-forge/linux-64)} 2020-06-07 23:35:15 (rev 11) ca-certificates {2020.4.5.1 (conda-forge/linux-64) -> 2020.1.1 (defaults/linux-64)} certifi {2020.4.5.1 (conda-forge/linux-64) -> 2020.4.5.1 (defaults/linux-64)} openssl {1.1.1g (conda-forge/linux-64) -> 1.1.1g (defaults/linux-64)} +binutils_impl_linux-64-2.31.1 (defaults/linux-64) +binutils_linux-64-2.31.1 (defaults/linux-64) +gcc_impl_linux-64-7.3.0 (defaults/linux-64) +gcc_linux-64-7.3.0 (defaults/linux-64) +gxx_impl_linux-64-7.3.0 (defaults/linux-64) +gxx_linux-64-7.3.0 (defaults/linux-64) +libgpuarray-0.7.6 (defaults/linux-64) +mako-1.1.2 (defaults/noarch) +pygpu-0.7.6 (defaults/linux-64) +theano-1.0.4 (defaults/linux-64) 2020-06-16 21:10:28 (rev 12) certifi {2020.4.5.1 (defaults/linux-64) -> 2020.4.5.2 (defaults/linux-64)} urllib3 {1.25.8 (defaults/linux-64) -> 1.25.9 (defaults/noarch)} -ipyexperiments-0.1.17 (stason/noarch) -pyarrow-0.15.1 (defaults/linux-64) +brotlipy-0.7.0 (defaults/linux-64) +cairo-1.14.12 (defaults/linux-64) +fribidi-1.0.5 (defaults/linux-64) +graphite2-1.3.13 (defaults/linux-64) +graphviz-2.40.1 (defaults/linux-64) +harfbuzz-1.8.8 (defaults/linux-64) +pango-1.42.4 (defaults/linux-64) +pixman-0.38.0 (defaults/linux-64) 2020-06-19 20:00:05 (rev 13) arrow-cpp {0.15.1 (defaults/linux-64) -> 0.17.1 (conda-forge/linux-64)} boost-cpp {1.71.0 (defaults/linux-64) -> 1.72.0 (conda-forge/linux-64)} ca-certificates {2020.1.1 (defaults/linux-64) -> 2020.4.5.2 (conda-forge/linux-64)} cairo {1.14.12 (defaults/linux-64) -> 1.16.0 (conda-forge/linux-64)} certifi {2020.4.5.2 (defaults/linux-64) -> 2020.4.5.2 (conda-forge/linux-64)} fontconfig {2.13.0 (defaults/linux-64) -> 2.13.1 (conda-forge/linux-64)} graphviz {2.40.1 (defaults/linux-64) -> 2.42.3 (conda-forge/linux-64)} grpc-cpp {1.26.0 (defaults/linux-64) -> 1.29.1 (conda-forge/linux-64)} gst-plugins-base {1.14.0 (defaults/linux-64) -> 1.14.5 (conda-forge/linux-64)} gstreamer {1.14.0 (defaults/linux-64) -> 1.14.5 (conda-forge/linux-64)} harfbuzz {1.8.8 (defaults/linux-64) -> 2.4.0 (conda-forge/linux-64)} icu {58.2 (defaults/linux-64) -> 64.2 (conda-forge/linux-64)} jpeg {9b (defaults/linux-64) -> 9d (conda-forge/linux-64)} libevent {2.1.8 (defaults/linux-64) -> 2.1.10 (conda-forge/linux-64)} libprotobuf {3.11.2 (defaults/linux-64) -> 3.12.3 (conda-forge/linux-64)} libtiff {4.1.0 (defaults/linux-64) -> 
4.1.0 (conda-forge/linux-64)} libuuid {1.0.3 (defaults/linux-64) -> 2.32.1 (conda-forge/linux-64)} lz4-c {1.8.1.2 (defaults/linux-64) -> 1.9.2 (conda-forge/linux-64)} matplotlib {3.1.3 (defaults/linux-64) -> 3.2.2 (conda-forge/linux-64)} matplotlib-base {3.1.3 (defaults/linux-64) -> 3.2.2 (conda-forge/linux-64)} openssl {1.1.1g (defaults/linux-64) -> 1.1.1g (conda-forge/linux-64)} pango {1.42.4 (defaults/linux-64) -> 1.42.4 (conda-forge/linux-64)} pcre {8.43 (defaults/linux-64) -> 8.44 (conda-forge/linux-64)} protobuf {3.11.2 (conda-forge/linux-64) -> 3.12.3 (conda-forge/linux-64)} qt {5.9.7 (defaults/linux-64) -> 5.9.7 (conda-forge/linux-64)} re2 {2019.08.01 (defaults/linux-64) -> 2020.06.01 (conda-forge/linux-64)} snappy {1.1.7 (defaults/linux-64) -> 1.1.8 (conda-forge/linux-64)} thrift-cpp {0.11.0 (defaults/linux-64) -> 0.13.0 (conda-forge/linux-64)} tk {8.6.8 (defaults/linux-64) -> 8.6.10 (conda-forge/linux-64)} zstd {1.3.7 (defaults/linux-64) -> 1.4.4 (conda-forge/linux-64)} -libboost-1.71.0 (defaults/linux-64) +abseil-cpp-20200225.2 (conda-forge/linux-64) +aplus-0.11.0 (conda-forge/noarch) +astropy-4.0.1.post1 (conda-forge/linux-64) +aws-sdk-cpp-1.7.164 (conda-forge/linux-64) +blosc-1.19.0 (conda-forge/linux-64) +bokeh-2.1.0 (conda-forge/linux-64) +boto3-1.14.7 (conda-forge/noarch) +botocore-1.17.7 (conda-forge/noarch) +bqplot-0.12.12 (conda-forge/noarch) +branca-0.3.1 (conda-forge/noarch) +cachetools-4.1.0 (conda-forge/noarch) +charls-2.1.0 (conda-forge/linux-64) +click-7.1.2 (conda-forge/noarch) +cloudpickle-1.4.1 (conda-forge/noarch) +curl-7.69.1 (conda-forge/linux-64) +cytoolz-0.10.1 (conda-forge/linux-64) +dask-2.19.0 (conda-forge/noarch) +dask-core-2.19.0 (conda-forge/noarch) +distributed-2.19.0 (conda-forge/linux-64) +docutils-0.15.2 (conda-forge/linux-64) +fsspec-0.7.4 (conda-forge/noarch) +future-0.18.2 (conda-forge/linux-64) +gettext-0.19.8.1 (conda-forge/linux-64) +giflib-5.2.1 (conda-forge/linux-64) +h5py-2.10.0 (conda-forge/linux-64) +hdf5-1.10.5 (conda-forge/linux-64) +heapdict-1.0.1 (conda-forge/noarch) +imagecodecs-2020.5.30 (conda-forge/linux-64) +imageio-2.8.0 (conda-forge/noarch) +ipydatawidgets-4.0.1 (conda-forge/noarch) +ipyleaflet-0.13.0 (conda-forge/noarch) +ipympl-0.5.6 (conda-forge/noarch) +ipyscales-0.5.0 (conda-forge/noarch) +ipyvolume-0.6.0a6 (conda-forge/noarch) +ipyvue-1.3.2 (conda-forge/noarch) +ipyvuetify-1.4.0 (conda-forge/noarch) +ipywebrtc-0.5.0 (conda-forge/linux-64) +jmespath-0.10.0 (conda-forge/noarch) +jxrlib-1.1 (conda-forge/linux-64) +krb5-1.17.1 (conda-forge/linux-64) +lcms2-2.11 (conda-forge/linux-64) +libaec-1.0.4 (conda-forge/linux-64) +libcurl-7.69.1 (conda-forge/linux-64) +libllvm8-8.0.1 (conda-forge/linux-64) +libssh2-1.9.0 (conda-forge/linux-64) +libtool-2.4.6 (conda-forge/linux-64) +libwebp-base-1.1.0 (conda-forge/linux-64) +libzopfli-1.0.3 (conda-forge/linux-64) +llvmlite-0.32.1 (conda-forge/linux-64) +locket-0.2.0 (conda-forge/noarch) +msgpack-python-1.0.0 (conda-forge/linux-64) +nest-asyncio-1.3.3 (conda-forge/noarch) +networkx-2.4 (conda-forge/noarch) +numba-0.49.1 (defaults/linux-64) +openjpeg-2.3.1 (conda-forge/linux-64) +packaging-20.4 (conda-forge/noarch) +parquet-cpp-1.5.1 (conda-forge/noarch) +partd-1.1.0 (conda-forge/noarch) +progressbar2-3.51.3 (conda-forge/noarch) +pyarrow-0.17.1 (conda-forge/linux-64) +python-utils-2.4.0 (conda-forge/noarch) +pythreejs-2.2.0 (conda-forge/noarch) +pywavelets-1.1.1 (conda-forge/linux-64) +s3fs-0.2.2 (conda-forge/noarch) +s3transfer-0.3.3 (conda-forge/linux-64) +scikit-image-0.17.2 
(conda-forge/linux-64) +sortedcontainers-2.2.2 (conda-forge/noarch) +tabulate-0.8.7 (conda-forge/noarch) +tbb-2020.1 (conda-forge/linux-64) +tblib-1.6.0 (conda-forge/noarch) +tifffile-2020.6.3 (conda-forge/noarch) +toolz-0.10.0 (conda-forge/noarch) +traittypes-0.2.1 (conda-forge/noarch) +typing_extensions-3.7.4.2 (conda-forge/noarch) +vaex-3.0.0 (conda-forge/noarch) +vaex-arrow-0.5.1 (conda-forge/noarch) +vaex-astro-0.7.0 (conda-forge/noarch) +vaex-core-2.0.3 (conda-forge/linux-64) +vaex-hdf5-0.6.0 (conda-forge/noarch) +vaex-jupyter-0.5.2 (conda-forge/noarch) +vaex-ml-0.9.0 (conda-forge/noarch) +vaex-server-0.3.1 (conda-forge/noarch) +vaex-viz-0.4.0 (conda-forge/noarch) +xarray-0.15.1 (conda-forge/noarch) +xorg-kbproto-1.0.7 (conda-forge/linux-64) +xorg-libice-1.0.10 (conda-forge/linux-64) +xorg-libsm-1.2.3 (conda-forge/linux-64) +xorg-libx11-1.6.9 (conda-forge/linux-64) +xorg-libxext-1.3.4 (conda-forge/linux-64) +xorg-libxpm-3.5.13 (conda-forge/linux-64) +xorg-libxrender-0.9.10 (conda-forge/linux-64) +xorg-libxt-1.1.5 (conda-forge/linux-64) +xorg-renderproto-0.11.1 (conda-forge/linux-64) +xorg-xextproto-7.3.0 (conda-forge/linux-64) +xorg-xproto-7.0.31 (conda-forge/linux-64) +zict-2.0.0 (conda-forge/noarch) 2020-06-19 22:08:48 (rev 14) _libgcc_mutex {0.1 (defaults/linux-64) -> 0.1 (conda-forge/linux-64)} arrow-cpp {0.17.1 (conda-forge/linux-64) -> 0.15.0 (conda-forge/linux-64)} bokeh {2.1.0 (conda-forge/linux-64) -> 1.4.0 (conda-forge/linux-64)} boost-cpp {1.72.0 (conda-forge/linux-64) -> 1.70.0 (conda-forge/linux-64)} cudatoolkit {10.1.243 (defaults/linux-64) -> 10.1.243 (nvidia/linux-64)} curl {7.69.1 (conda-forge/linux-64) -> 7.68.0 (conda-forge/linux-64)} giflib {5.2.1 (conda-forge/linux-64) -> 5.1.7 (conda-forge/linux-64)} grpc-cpp {1.29.1 (conda-forge/linux-64) -> 1.23.0 (conda-forge/linux-64)} krb5 {1.17.1 (conda-forge/linux-64) -> 1.16.4 (conda-forge/linux-64)} libcurl {7.69.1 (conda-forge/linux-64) -> 7.68.0 (conda-forge/linux-64)} libgcc-ng {9.1.0 (defaults/linux-64) -> 9.2.0 (conda-forge/linux-64)} libprotobuf {3.12.3 (conda-forge/linux-64) -> 3.8.0 (conda-forge/linux-64)} libtiff {4.1.0 (conda-forge/linux-64) -> 4.1.0 (conda-forge/linux-64)} libxml2 {2.9.9 (defaults/linux-64) -> 2.9.10 (conda-forge/linux-64)} lz4-c {1.9.2 (conda-forge/linux-64) -> 1.8.3 (conda-forge/linux-64)} pandas {1.0.3 (defaults/linux-64) -> 0.25.3 (conda-forge/linux-64)} protobuf {3.12.3 (conda-forge/linux-64) -> 3.8.0 (conda-forge/linux-64)} pyarrow {0.17.1 (conda-forge/linux-64) -> 0.15.0 (conda-forge/linux-64)} python {3.7.7 (defaults/linux-64) -> 3.7.6 (conda-forge/linux-64)} re2 {2020.06.01 (conda-forge/linux-64) -> 2020.04.01 (conda-forge/linux-64)} tbb {2020.1 (conda-forge/linux-64) -> 2018.0.5 (conda-forge/linux-64)} thrift-cpp {0.13.0 (conda-forge/linux-64) -> 0.12.0 (conda-forge/linux-64)} tifffile {2020.6.3 (conda-forge/noarch) -> 2020.6.3 (conda-forge/noarch)} vaex {3.0.0 (conda-forge/noarch) -> 1.0.0b7 (conda-forge/noarch)} zstd {1.4.4 (conda-forge/linux-64) -> 1.4.3 (conda-forge/linux-64)} -imagecodecs-2020.5.30 (conda-forge/linux-64) -libwebp-base-1.1.0 (conda-forge/linux-64) -vaex-arrow-0.5.1 (conda-forge/noarch) +_openmp_mutex-4.5 (conda-forge/linux-64) +aiohttp-3.6.2 (conda-forge/linux-64) +appdirs-1.4.3 (conda-forge/noarch) +async-timeout-3.0.1 (conda-forge/noarch) +boost-1.70.0 (conda-forge/linux-64) +cfitsio-3.470 (conda-forge/linux-64) +click-plugins-1.1.1 (conda-forge/noarch) +cligj-0.5.0 (conda-forge/noarch) +colorcet-2.0.1 (conda-forge/noarch) +cudf-0.14.0 (rapidsai/linux-64) 
+cudnn-7.6.0 (nvidia/linux-64) +cugraph-0.14.0 (rapidsai/linux-64) +cuml-0.14.0 (rapidsai/linux-64) +cupy-7.5.0 (conda-forge/linux-64) +cusignal-0.14.0 (rapidsai/noarch) +cuspatial-0.14.0 (rapidsai/linux-64) +cuxfilter-0.14.0 (rapidsai/linux-64) +dask-cuda-0.14.0 (rapidsai/linux-64) +dask-cudf-0.14.0 (rapidsai/linux-64) +dask-xgboost-0.2.0.dev28 (rapidsai/noarch) +datashader-0.10.0 (conda-forge/noarch) +datashape-0.5.4 (conda-forge/noarch) +dlpack-0.2 (conda-forge/linux-64) +fastavro-0.23.4 (conda-forge/linux-64) +fastrlock-0.5 (conda-forge/linux-64) +fiona-1.8.13 (conda-forge/linux-64) +freexl-1.0.5 (conda-forge/linux-64) +gdal-3.0.2 (conda-forge/linux-64) +geopandas-0.7.0 (conda-forge/noarch) +geos-3.7.2 (conda-forge/linux-64) +geotiff-1.5.1 (conda-forge/linux-64) +hdf4-4.2.13 (conda-forge/linux-64) +imagecodecs-lite-2019.12.3 (conda-forge/linux-64) +importlib-metadata-1.6.1 (conda-forge/linux-64) +json-c-0.13.1 (conda-forge/linux-64) +jupyter-server-proxy-1.5.0 (conda-forge/noarch) +kealib-1.4.13 (conda-forge/linux-64) +libcudf-0.14.0 (rapidsai/linux-64) +libcugraph-0.14.0 (rapidsai/linux-64) +libcuml-0.14.0 (rapidsai/linux-64) +libcumlprims-0.14.1 (nvidia/linux-64) +libcuspatial-0.14.0 (rapidsai/linux-64) +libdap4-3.20.4 (conda-forge/linux-64) +libgdal-3.0.2 (conda-forge/linux-64) +libgomp-9.2.0 (conda-forge/linux-64) +libhwloc-2.1.0 (conda-forge/linux-64) +libiconv-1.15 (conda-forge/linux-64) +libkml-1.3.0 (conda-forge/linux-64) +libnetcdf-4.7.1 (conda-forge/linux-64) +libnvstrings-0.14.0 (rapidsai/linux-64) +libpq-11.5 (conda-forge/linux-64) +librmm-0.14.0 (rapidsai/linux-64) +libspatialindex-1.9.3 (conda-forge/linux-64) +libspatialite-4.3.0a (conda-forge/linux-64) +libuv-1.34.0 (conda-forge/linux-64) +libwebp-1.0.2 (conda-forge/linux-64) +libxgboost-1.1.0dev.rapidsai0.14 (rapidsai/linux-64) +markdown-3.2.2 (conda-forge/noarch) +multidict-4.7.5 (conda-forge/linux-64) +multipledispatch-0.6.0 (conda-forge/noarch) +munch-2.5.0 (conda-forge/noarch) +nccl-2.5.7.1 (conda-forge/linux-64) +nodejs-13.13.0 (conda-forge/linux-64) +nvstrings-0.14.0 (rapidsai/linux-64) +panel-0.6.4 (conda-forge/noarch) +param-1.9.3 (conda-forge/noarch) +poppler-0.67.0 (conda-forge/linux-64) +poppler-data-0.4.9 (conda-forge/noarch) +postgresql-11.5 (conda-forge/linux-64) +proj-6.2.1 (conda-forge/linux-64) +py-xgboost-1.1.0dev.rapidsai0.14 (rapidsai/linux-64) +pyct-0.4.6 (conda-forge/noarch) +pyct-core-0.4.6 (conda-forge/noarch) +pyee-7.0.2 (conda-forge/noarch) +pynvml-8.0.4 (conda-forge/noarch) +pyopengl-3.1.5 (conda-forge/noarch) +pyppeteer-0.0.25 (conda-forge/noarch) +pyproj-2.4.2.post1 (conda-forge/linux-64) +pyviz_comms-0.7.5 (conda-forge/noarch) +rapids-0.14.0 (rapidsai/linux-64) +rapids-xgboost-0.14.0 (rapidsai/linux-64) +rmm-0.14.0 (rapidsai/linux-64) +rtree-0.9.4 (conda-forge/linux-64) +shapely-1.6.4 (conda-forge/linux-64) +simpervisor-0.3 (conda-forge/noarch) +spdlog-1.6.1 (conda-forge/linux-64) +tiledb-1.6.2 (conda-forge/linux-64) +tzcode-2020a (conda-forge/linux-64) +ucx-1.8.0+gf6ec8d4 (rapidsai/linux-64) +ucx-py-0.14.0+gf6ec8d4 (rapidsai/linux-64) +vaex-distributed-0.3.0 (conda-forge/noarch) +vaex-ui-0.3.0 (conda-forge/noarch) +websockets-8.1 (conda-forge/linux-64) +xerces-c-3.2.2 (conda-forge/linux-64) +xgboost-1.1.0dev.rapidsai0.14 (rapidsai/linux-64) +yarl-1.3.0 (conda-forge/linux-64) 2020-06-21 23:28:05 (rev 15) certifi {2020.4.5.2 (conda-forge/linux-64) -> 2020.4.5.2 (defaults/linux-64)} +torchaudio-0.5.0 (pytorch/linux-64) 2020-06-21 23:39:10 (rev 16) ca-certificates {2020.4.5.2 
(conda-forge/linux-64) -> 2020.1.1 (defaults/linux-64)} openssl {1.1.1g (conda-forge/linux-64) -> 1.1.1g (defaults/linux-64)} +cython-0.29.20 (defaults/linux-64) ```
github_jupyter
from fastai2.torch_basics import * from fastai2.basics import * from fastai2.data.all import * from fastai2.callback.all import * from fastai2.vision.all import * from fastai2_audio.core.all import * from fastai2_audio.augment.all import * import torchaudio URLs.SPEAKERS10 p10speakers = untar_data(URLs.SPEAKERS10, extract_func=tar_extract_at_filename) p10speakers x = AudioGetter("", recurse=True, folders=None) files_10 = x(p10speakers) #crop 2s from the signal and turn it to a MelSpectrogram with no augmentation cfg_voice = AudioConfig.Voice() a2s = AudioToSpec.from_cfg(cfg_voice) # https://pytorch.org/audio/_modules/torchaudio/backend/utils.html torchaudio.get_audio_backend(), dir(torchaudio)#, torchaudio.backend.utils.list_audio_backends() #torchaudio.initialize_sox() auds = DataBlock(blocks=(AudioBlock.from_folder(p10speakers, crop_signal_to=2000), CategoryBlock), get_items=get_audio_files, splitter=RandomSplitter(), item_tfms = a2s, get_y=lambda x: str(x).split('/')[-1][:5]) cats = [y for _,y in auds.datasets(p10speakers)] #verify categories are being correctly assigned test_eq(min(cats).item(), 0) test_eq(max(cats).item(), 9) auds.summary(p10speakers) dbunch = auds.dataloaders(p10speakers, bs=64) name: fastai2 channels: - pytorch - fastai - rapidsai - nvidia - conda-forge - defaults dependencies: - _libgcc_mutex=0.1=conda_forge - _openmp_mutex=4.5=0_gnu - abseil-cpp=20200225.2=he1b5a44_0 - aiohttp=3.6.2=py37h516909a_0 - altair=4.1.0=py_1 - aplus=0.11.0=py_1 - appdirs=1.4.3=py_1 - arrow-cpp=0.15.0=py37h090bef1_2 - astroid=2.4.0=py37_0 - astropy=4.0.1.post1=py37h8f50634_0 - async-timeout=3.0.1=py_1000 - attrs=19.3.0=py_0 - aws-sdk-cpp=1.7.164=hc831370_1 - backcall=0.1.0=py37_0 - binutils_impl_linux-64=2.31.1=h6176602_1 - binutils_linux-64=2.31.1=h6176602_9 - blas=1.0=mkl - bleach=3.1.4=py_0 - blosc=1.19.0=he1b5a44_0 - bokeh=1.4.0=py37hc8dfbb8_1 - boost=1.70.0=py37h9de70de_1 - boost-cpp=1.70.0=h8e57a91_2 - boto3=1.14.7=pyh9f0ad1d_0 - botocore=1.17.7=pyh9f0ad1d_0 - bqplot=0.12.12=pyh9f0ad1d_0 - branca=0.3.1=py_0 - brotli=1.0.7=he6710b0_0 - brotlipy=0.7.0=py37h7b6447c_1000 - bzip2=1.0.8=h7b6447c_0 - c-ares=1.15.0=h7b6447c_1001 - ca-certificates=2020.6.24=0 - cachetools=4.1.0=py_1 - cairo=1.16.0=hcf35c78_1003 - certifi=2020.6.20=py37_0 - cffi=1.14.0=py37h2e261b9_0 - cfitsio=3.470=hb60a0a2_2 - chardet=3.0.4=py37_1003 - charls=2.1.0=he1b5a44_2 - click=7.1.2=pyh9f0ad1d_0 - click-plugins=1.1.1=py_0 - cligj=0.5.0=py_0 - colorcet=2.0.1=py_0 - cryptography=2.9.2=py37h1ba5d50_0 - cudatoolkit=10.1.243=h6bb024c_0 - cudf=0.14.0=py37_0 - cudnn=7.6.0=cuda10.1_0 - cugraph=0.14.0=py37_0 - cuml=0.14.0=cuda10.1_py37_0 - cupy=7.5.0=py37h0632833_0 - curl=7.68.0=hf8cf82a_0 - cusignal=0.14.0=py37_0 - cuspatial=0.14.0=py37_0 - cuxfilter=0.14.0=py37_0 - cycler=0.10.0=py37_0 - cymem=2.0.2=py37he1b5a44_0 - cython=0.29.20=py37he6710b0_0 - cython-blis=0.2.4=py37h516909a_1 - cytoolz=0.10.1=py37h516909a_0 - dask=2.19.0=py_0 - dask-core=2.19.0=py_0 - dask-cuda=0.14.0=py37_0 - dask-cudf=0.14.0=py37_0 - dask-xgboost=0.2.0.dev28=cuda10.1py37_0 - datashader=0.10.0=py_0 - datashape=0.5.4=py_1 - dbus=1.13.14=hb2f20db_0 - decorator=4.4.2=py_0 - defusedxml=0.6.0=py_0 - distributed=2.19.0=py37hc8dfbb8_0 - dlpack=0.2=he1b5a44_1 - docutils=0.15.2=py37_0 - double-conversion=3.1.5=he6710b0_1 - entrypoints=0.3=py37_0 - expat=2.2.6=he6710b0_0 - fastavro=0.23.4=py37h8f50634_0 - fastrlock=0.5=py37h3340039_0 - fiona=1.8.13=py37h900e953_0 - fontconfig=2.13.1=h86ecdb6_1001 - freetype=2.9.1=h8a8886c_1 - freexl=1.0.5=h14c3975_1002 - 
fribidi=1.0.5=h7b6447c_0 - fsspec=0.7.4=py_0 - future=0.18.2=py37hc8dfbb8_1 - gcc_impl_linux-64=7.3.0=habb00fd_1 - gcc_linux-64=7.3.0=h553295d_9 - gdal=3.0.2=py37hbb6b9fb_2 - geopandas=0.7.0=py_1 - geos=3.7.2=he1b5a44_2 - geotiff=1.5.1=hbd99317_7 - gettext=0.19.8.1=hc5be6a0_1002 - gflags=2.2.2=he6710b0_0 - giflib=5.1.7=h516909a_1 - glib=2.63.1=h5a9c865_0 - glog=0.4.0=he6710b0_0 - gmp=6.1.2=h6c8ec71_1 - graphite2=1.3.13=h23475e2_0 - graphviz=2.42.3=h0511662_0 - grpc-cpp=1.23.0=h18db393_0 - gst-plugins-base=1.14.5=h0935bb2_2 - gstreamer=1.14.5=h36ae1b5_2 - gxx_impl_linux-64=7.3.0=hdf63c60_1 - gxx_linux-64=7.3.0=h553295d_9 - h5py=2.10.0=nompi_py37h513d04c_102 - harfbuzz=2.4.0=h9f30f68_3 - hdf4=4.2.13=hf30be14_1003 - hdf5=1.10.5=nompi_h3c11f04_1104 - heapdict=1.0.1=py_0 - icu=64.2=he1b5a44_1 - idna=2.9=py_1 - imagecodecs-lite=2019.12.3=py37h03ebfcd_1 - imageio=2.8.0=py_0 - importlib-metadata=1.6.1=py37hc8dfbb8_0 - importlib_metadata=1.5.0=py37_0 - intel-openmp=2020.0=166 - ipydatawidgets=4.0.1=py_0 - ipykernel=5.1.4=py37h39e3cac_0 - ipyleaflet=0.13.0=pyh9f0ad1d_0 - ipympl=0.5.6=pyh9f0ad1d_1 - ipyscales=0.5.0=pyh9f0ad1d_0 - ipython=7.13.0=py37h5ca1d4c_0 - ipython_genutils=0.2.0=py37_0 - ipyvolume=0.6.0a6=pyh9f0ad1d_0 - ipyvue=1.3.2=pyh9f0ad1d_0 - ipyvuetify=1.4.0=pyh8c360ce_1 - ipywebrtc=0.5.0=py37_0 - ipywidgets=7.5.1=py_0 - isort=4.3.21=py37_0 - jedi=0.17.0=py37_0 - jinja2=2.11.2=py_0 - jmespath=0.10.0=pyh9f0ad1d_0 - joblib=0.14.1=py_0 - jpeg=9d=h516909a_0 - json-c=0.13.1=hbfbb72e_1002 - json5=0.9.0=py_0 - jsonschema=3.2.0=py37_0 - jupyter=1.0.0=py37_7 - jupyter-server-proxy=1.5.0=py_0 - jupyter_client=6.1.3=py_0 - jupyter_console=6.1.0=py_0 - jupyter_core=4.6.3=py37_0 - jupyterlab=2.1.4=py_0 - jupyterlab_server=1.1.3=py_0 - jxrlib=1.1=h516909a_2 - kealib=1.4.13=hec59c27_0 - kiwisolver=1.2.0=py37hfd86e86_0 - krb5=1.16.4=h2fd8d38_0 - lazy-object-proxy=1.4.3=py37h7b6447c_0 - lcms2=2.11=hbd6801e_0 - ld_impl_linux-64=2.33.1=h53a641e_7 - libaec=1.0.4=he1b5a44_1 - libcudf=0.14.0=cuda10.1_0 - libcugraph=0.14.0=cuda10.1_0 - libcuml=0.14.0=cuda10.1_0 - libcumlprims=0.14.1=cuda10.1_0 - libcurl=7.68.0=hda55be3_0 - libcuspatial=0.14.0=cuda10.1_0 - libdap4=3.20.4=hd3bb157_0 - libedit=3.1.20181209=hc058e9b_0 - libevent=2.1.10=h72c5cf5_0 - libffi=3.2.1=hd88cf55_4 - libgcc-ng=9.2.0=h24d8f2e_2 - libgdal=3.0.2=hc7cfd23_2 - libgfortran-ng=7.3.0=hdf63c60_0 - libgomp=9.2.0=h24d8f2e_2 - libgpuarray=0.7.6=h14c3975_0 - libhwloc=2.1.0=h3c4fd83_0 - libiconv=1.15=h516909a_1006 - libkml=1.3.0=h4fcabce_1010 - libllvm8=8.0.1=hc9558a2_0 - libnetcdf=4.7.1=nompi_h94020b1_102 - libnvstrings=0.14.0=cuda10.1_0 - libpng=1.6.37=hbc83047_0 - libpq=11.5=hd9ab2ff_2 - libprotobuf=3.8.0=h8b12597_0 - librmm=0.14.0=cuda10.1_0 - libsodium=1.0.16=h1bed415_0 - libspatialindex=1.9.3=he1b5a44_3 - libspatialite=4.3.0a=h4f6d029_1032 - libssh2=1.9.0=hab1572f_2 - libstdcxx-ng=9.1.0=hdf63c60_0 - libtiff=4.1.0=hfc65ed5_0 - libtool=2.4.6=h14c3975_1002 - libuuid=2.32.1=h14c3975_1000 - libuv=1.34.0=h516909a_0 - libwebp=1.0.2=hf4e8a37_4 - libxcb=1.13=h1bed415_1 - libxgboost=1.1.0dev.rapidsai0.14=cuda10.1_0 - libxml2=2.9.10=hee79883_0 - libzopfli=1.0.3=he1b5a44_0 - llvmlite=0.32.1=py37h5202443_0 - locket=0.2.0=py_2 - lz4-c=1.8.3=he1b5a44_1001 - mako=1.1.2=py_0 - markdown=3.2.2=py_0 - markupsafe=1.1.1=py37h7b6447c_0 - matplotlib=3.2.2=0 - matplotlib-base=3.2.2=py37h30547a4_0 - mccabe=0.6.1=py37_1 - mistune=0.8.4=py37h7b6447c_0 - mkl=2020.0=166 - mkl-service=2.3.0=py37he904b0f_0 - mkl_fft=1.0.15=py37ha843d7b_0 - mkl_random=1.1.0=py37hd6b4f25_0 - 
msgpack-python=1.0.0=py37h99015e2_1 - multidict=4.7.5=py37h8f50634_1 - multipledispatch=0.6.0=py_0 - munch=2.5.0=py_0 - murmurhash=1.0.2=py37he6710b0_0 - nbconvert=5.6.1=py37_0 - nbformat=5.0.6=py_0 - nccl=2.5.7.1=h51cf6c1_0 - ncurses=6.2=he6710b0_1 - nest-asyncio=1.3.3=py_0 - networkx=2.4=py_1 - ninja=1.9.0=py37hfd86e86_0 - nodejs=13.13.0=hf5d1a2b_0 - notebook=6.0.3=py37_0 - numba=0.49.1=py37h0573a6f_0 - numpy=1.18.1=py37h4f9e942_0 - numpy-base=1.18.1=py37hde5b4d6_1 - nvstrings=0.14.0=py37_0 - olefile=0.46=py37_0 - onnx=1.6.0=py37he1b5a44_0 - openjpeg=2.3.1=h981e76c_3 - openssl=1.1.1g=h7b6447c_0 - pandas=0.25.3=py37hb3f55d8_0 - pandoc=2.2.3.2=0 - pandocfilters=1.4.2=py37_1 - panel=0.6.4=0 - pango=1.42.4=ha030887_1 - param=1.9.3=py_0 - parquet-cpp=1.5.1=2 - parso=0.7.0=py_0 - partd=1.1.0=py_0 - pcre=8.44=he1b5a44_0 - pexpect=4.8.0=py37_0 - pickleshare=0.7.5=py37_0 - pillow=7.1.2=py37hb39fc2d_0 - pip=20.0.2=py37_3 - pixman=0.38.0=h7b6447c_0 - plac=0.9.6=py37_0 - plotly=4.7.1=py_0 - poppler=0.67.0=h14e79db_8 - poppler-data=0.4.9=1 - postgresql=11.5=hc63931a_2 - preshed=2.0.1=py37he6710b0_0 - progressbar2=3.51.3=pyh9f0ad1d_0 - proj=6.2.1=hc80f0dc_0 - prometheus_client=0.7.1=py_0 - prompt-toolkit=3.0.4=py_0 - prompt_toolkit=3.0.4=0 - psutil=5.7.0=py37h7b6447c_0 - ptyprocess=0.6.0=py37_0 - py-xgboost=1.1.0dev.rapidsai0.14=cuda10.1py37_0 - pycparser=2.20=py_0 - pyct=0.4.6=py_0 - pyct-core=0.4.6=py_0 - pyee=7.0.2=pyh9f0ad1d_0 - pygments=2.6.1=py_0 - pygpu=0.7.6=py37h035aef0_0 - pylint=2.5.0=py37_1 - pynvml=8.0.4=py_0 - pyopengl=3.1.5=py_0 - pyopenssl=19.1.0=py37_0 - pyparsing=2.4.7=py_0 - pyppeteer=0.0.25=py_1 - pyproj=2.4.2.post1=py37h12732c1_0 - pyqt=5.9.2=py37h05f1152_2 - pyrsistent=0.16.0=py37h7b6447c_0 - pysocks=1.7.1=py37_0 - python=3.7.6=cpython_h8356626_6 - python-dateutil=2.8.1=py_0 - python-utils=2.4.0=py_0 - python_abi=3.7=1_cp37m - pythreejs=2.2.0=pyh8c360ce_0 - pytz=2020.1=py_0 - pyviz_comms=0.7.5=pyh9f0ad1d_0 - pywavelets=1.1.1=py37h03ebfcd_1 - pyyaml=5.3.1=py37h7b6447c_0 - pyzmq=18.1.1=py37he6710b0_0 - qt=5.9.7=h0c104cb_3 - qtconsole=4.7.3=py_0 - qtpy=1.9.0=py_0 - rapids=0.14.0=cuda10.1_py37_2 - rapids-xgboost=0.14.0=cuda10.1_py37_2 - re2=2020.04.01=he1b5a44_0 - readline=8.0=h7b6447c_0 - requests=2.23.0=py37_0 - retrying=1.3.3=py37_2 - rmm=0.14.0=py37_0 - rtree=0.9.4=py37h8526d28_1 - s3fs=0.2.2=py_0 - s3transfer=0.3.3=py37hc8dfbb8_1 - scikit-learn=0.22.1=py37hd81dba3_0 - scipy=1.4.1=py37h0b6359f_0 - seaborn=0.10.1=py_0 - send2trash=1.5.0=py37_0 - setuptools=46.1.3=py37_0 - shapely=1.6.4=py37hec07ddf_1006 - simpervisor=0.3=py_1 - sip=4.19.8=py37hf484d3e_0 - six=1.14.0=py37_0 - snappy=1.1.8=he1b5a44_2 - sortedcontainers=2.2.2=pyh9f0ad1d_0 - spacy=2.1.8=py37hc9558a2_0 - spdlog=1.6.1=hc9558a2_0 - sqlite=3.31.1=h62c20be_1 - srsly=0.1.0=py37he1b5a44_0 - tabulate=0.8.7=pyh9f0ad1d_0 - tbb=2018.0.5=h2d50403_0 - tblib=1.6.0=py_0 - terminado=0.8.3=py37_0 - testpath=0.4.4=py_0 - theano=1.0.4=py37hfd86e86_0 - thinc=7.0.8=py37hc9558a2_0 - thrift-cpp=0.12.0=hf3afdfd_1004 - tiledb=1.6.2=h7d710e0_2 - tk=8.6.10=hed695b0_0 - toml=0.10.0=py37h28b3542_0 - toolz=0.10.0=py_0 - torchaudio=0.5.0=py37 - tornado=6.0.4=py37h7b6447c_1 - tqdm=4.46.0=py_0 - traitlets=4.3.3=py37_0 - traittypes=0.2.1=py_1 - typed-ast=1.4.1=py37h7b6447c_0 - typing_extensions=3.7.4.2=py_0 - tzcode=2020a=h516909a_0 - ucx=1.8.0+gf6ec8d4=cuda10.1_20 - ucx-py=0.14.0+gf6ec8d4=py37_0 - uriparser=0.9.3=he6710b0_1 - vaex=1.0.0b7=py_0 - vaex-astro=0.7.0=pyh9f0ad1d_0 - vaex-core=2.0.3=py37h0da4684_0 - vaex-distributed=0.3.0=py_0 - 
vaex-hdf5=0.6.0=pyh9f0ad1d_0 - vaex-jupyter=0.5.2=pyh9f0ad1d_0 - vaex-ml=0.9.0=pyh9f0ad1d_0 - vaex-server=0.3.1=pyh9f0ad1d_0 - vaex-ui=0.3.0=py_0 - vaex-viz=0.4.0=pyh9f0ad1d_0 - vega_datasets=0.8.0=py_0 - wasabi=0.2.2=py_0 - wcwidth=0.1.9=py_0 - webencodings=0.5.1=py37_1 - websockets=8.1=py37h8f50634_1 - wheel=0.34.2=py37_0 - widgetsnbextension=3.5.1=py37_0 - wrapt=1.11.2=py37h7b6447c_0 - xarray=0.15.1=py_0 - xerces-c=3.2.2=h8412b87_1004 - xgboost=1.1.0dev.rapidsai0.14=cuda10.1py37_0 - xorg-kbproto=1.0.7=h14c3975_1002 - xorg-libice=1.0.10=h516909a_0 - xorg-libsm=1.2.3=h84519dc_1000 - xorg-libx11=1.6.9=h516909a_0 - xorg-libxext=1.3.4=h516909a_0 - xorg-libxpm=3.5.13=h516909a_0 - xorg-libxrender=0.9.10=h516909a_1002 - xorg-libxt=1.1.5=h516909a_1003 - xorg-renderproto=0.11.1=h14c3975_1002 - xorg-xextproto=7.3.0=h14c3975_1002 - xorg-xproto=7.0.31=h14c3975_1007 - xz=5.2.5=h7b6447c_0 - yaml=0.1.7=had09818_2 - yarl=1.3.0=py37h516909a_1000 - zeromq=4.3.1=he6710b0_3 - zict=2.0.0=py_0 - zipp=3.1.0=py_0 - zlib=1.2.11=h7b6447c_3 - zstd=1.4.3=h3b9ef0a_0 - pip: - absl-py==0.9.0 - adamp==0.2.0 - albumentations==0.4.5 - apipkg==1.5 - astor==0.8.1 - attrdict==2.0.1 - audioread==2.1.8 - axial-positional-embedding==0.2.1 - blessings==1.7 - category-encoders==2.2.2 - clldutils==3.5.2 - cloudpickle==1.3.0 - colorama==0.4.3 - colorednoise==1.1.1 - colorlog==4.1.0 - configparser==5.0.0 - contrastive-learner==0.1.0 - csvw==1.7.0 - deepspeech==0.7.4 - dill==0.3.1.1 - docker-pycreds==0.4.0 - execnet==1.7.1 - fastai-xla-extensions==0.0.1 - fastprogress==0.2.4 - fastscript==0.1.5 - filelock==3.0.12 - fire==0.3.1 - flame-analyzer==0.1.5 - flask==1.1.2 - gast==0.2.2 - gitdb==4.0.5 - gitpython==3.1.2 - google-auth==1.14.2 - google-auth-oauthlib==0.4.1 - google-pasta==0.2.0 - gpustat==0.6.0 - gql==0.2.0 - graphql-core==1.1 - grpcio==1.28.1 - gym==0.17.2 - imagecodecs==2020.2.18 - imgaug==0.2.6 - ipyexperiments==0.1.18.dev0 - isodate==0.6.0 - itsdangerous==2.0.0a1 - kaggle==1.5.6 - keras==2.3.1 - keras-applications==1.0.8 - keras-preprocessing==1.1.2 - kornia==0.3.1 - lasagne==0.1 - librosa==0.7.2 - line-profiler==3.0.2 - linear-attention-transformer==0.11.0 - linformer==0.2.0 - local-attention==1.0.2 - more-itertools==8.4.0 - mpmath==1.1.0 - nbdev==0.2.20 - nlp==0.2.0 - nltk==3.5 - nose==1.3.7 - nvidia-ml-py3==7.352.0 - oauthlib==3.1.0 - opencv-python==4.2.0.34 - opt-einsum==3.3.0 - packaging==20.3 - pathtools==0.1.2 - patsy==0.5.1 - pluggy==0.13.1 - pprofile==2.0.5 - product-key-memory==0.1.10 - promise==2.3 - protobuf==3.11.3 - py==1.8.2 - py-heat==0.0.6 - py-heat-magic==0.0.2 - pyarrow==0.17.1 - pyasn1==0.4.8 - pyasn1-modules==0.2.8 - pybind11==2.5.0 - pydicom==1.4.2 - pyglet==1.5.0 - pytest==5.4.3 - pytest-forked==1.1.3 - pytest-xdist==1.32.0 - python-slugify==4.0.0 - pytorch-lightning==0.7.5 - regex==2020.5.14 - requests-oauthlib==1.3.0 - resampy==0.2.2 - retry==0.9.2 - rfc3986==1.4.0 - rouge-score==0.0.4 - rsa==4.0 - sacremoses==0.0.43 - scikit-image==0.17.2 - segments==2.1.3 - sentencepiece==0.1.91 - sentry-sdk==0.14.4 - seqeval==0.0.12 - shifterator==0.1.3 - shortuuid==1.0.1 - slimevolleygym==0.1.0 - smmap==3.0.4 - soundfile==0.10.3.post1 - statsmodels==0.11.1 - stylegan2-pytorch==0.18.3 - subprocess32==3.5.4 - sympy==1.6.1 - taichi==0.6.14 - tensorboard==1.15.0 - tensorboard-plugin-wit==1.6.0.post3 - tensorboardx==2.0 - tensorflow==1.15.0 - tensorflow-estimator==1.15.1 - termcolor==1.1.0 - text-unidecode==1.3 - tifffile==2020.5.11 - tokenizers==0.8.1rc1 - torch==1.6.0 - torchvision==0.7.0 - transformers==3.0.2 - 
unidecode==0.04.20 - uritemplate==3.0.1 - urllib3==1.24.3 - vector-quantize-pytorch==0.0.2 - wandb==0.8.36 - watchdog==0.10.2 - werkzeug==1.0.1 prefix: /home/tyoc213/miniconda3/envs/fastai2 2020-05-09 14:59:09 (rev 0) +_libgcc_mutex-0.1 (defaults/linux-64) +attrs-19.3.0 (defaults/noarch) +backcall-0.1.0 (defaults/linux-64) +blas-1.0 (defaults/linux-64) +bleach-3.1.4 (defaults/noarch) +ca-certificates-2020.1.1 (defaults/linux-64) +certifi-2020.4.5.1 (defaults/linux-64) +cffi-1.14.0 (defaults/linux-64) +chardet-3.0.4 (defaults/linux-64) +cryptography-2.9.2 (defaults/linux-64) +cudatoolkit-10.2.89 (defaults/linux-64) +cycler-0.10.0 (defaults/linux-64) +cymem-2.0.2 (fastai/linux-64) +cython-blis-0.2.4 (fastai/linux-64) +dbus-1.13.14 (defaults/linux-64) +decorator-4.4.2 (defaults/noarch) +defusedxml-0.6.0 (defaults/noarch) +entrypoints-0.3 (defaults/linux-64) +expat-2.2.6 (defaults/linux-64) +fastprogress-0.2.2 (fastai/noarch) +fontconfig-2.13.0 (defaults/linux-64) +freetype-2.9.1 (defaults/linux-64) +glib-2.63.1 (defaults/linux-64) +gmp-6.1.2 (defaults/linux-64) +gst-plugins-base-1.14.0 (defaults/linux-64) +gstreamer-1.14.0 (defaults/linux-64) +icu-58.2 (defaults/linux-64) +idna-2.9 (defaults/noarch) +importlib_metadata-1.5.0 (defaults/linux-64) +intel-openmp-2020.0 (defaults/linux-64) +ipykernel-5.1.4 (defaults/linux-64) +ipython-7.13.0 (defaults/linux-64) +ipython_genutils-0.2.0 (defaults/linux-64) +ipywidgets-7.5.1 (defaults/noarch) +jedi-0.17.0 (defaults/linux-64) +jinja2-2.11.2 (defaults/noarch) +joblib-0.14.1 (defaults/noarch) +jpeg-9b (defaults/linux-64) +jsonschema-3.2.0 (defaults/linux-64) +jupyter-1.0.0 (defaults/linux-64) +jupyter_client-6.1.3 (defaults/noarch) +jupyter_console-6.1.0 (defaults/noarch) +jupyter_core-4.6.3 (defaults/linux-64) +kiwisolver-1.2.0 (defaults/linux-64) +ld_impl_linux-64-2.33.1 (defaults/linux-64) +libedit-3.1.20181209 (defaults/linux-64) +libffi-3.2.1 (defaults/linux-64) +libgcc-ng-9.1.0 (defaults/linux-64) +libgfortran-ng-7.3.0 (defaults/linux-64) +libpng-1.6.37 (defaults/linux-64) +libsodium-1.0.16 (defaults/linux-64) +libstdcxx-ng-9.1.0 (defaults/linux-64) +libtiff-4.1.0 (defaults/linux-64) +libuuid-1.0.3 (defaults/linux-64) +libxcb-1.13 (defaults/linux-64) +libxml2-2.9.9 (defaults/linux-64) +markupsafe-1.1.1 (defaults/linux-64) +matplotlib-3.1.3 (defaults/linux-64) +matplotlib-base-3.1.3 (defaults/linux-64) +mistune-0.8.4 (defaults/linux-64) +mkl-2020.0 (defaults/linux-64) +mkl-service-2.3.0 (defaults/linux-64) +mkl_fft-1.0.15 (defaults/linux-64) +mkl_random-1.1.0 (defaults/linux-64) +murmurhash-1.0.2 (defaults/linux-64) +nbconvert-5.6.1 (defaults/linux-64) +nbformat-5.0.6 (defaults/noarch) +ncurses-6.2 (defaults/linux-64) +ninja-1.9.0 (defaults/linux-64) +notebook-6.0.3 (defaults/linux-64) +numpy-1.18.1 (defaults/linux-64) +numpy-base-1.18.1 (defaults/linux-64) +olefile-0.46 (defaults/linux-64) +openssl-1.1.1g (defaults/linux-64) +pandas-1.0.3 (defaults/linux-64) +pandoc-2.2.3.2 (defaults/linux-64) +pandocfilters-1.4.2 (defaults/linux-64) +parso-0.7.0 (defaults/noarch) +pcre-8.43 (defaults/linux-64) +pexpect-4.8.0 (defaults/linux-64) +pickleshare-0.7.5 (defaults/linux-64) +pillow-7.1.2 (defaults/linux-64) +pip-20.0.2 (defaults/linux-64) +plac-0.9.6 (defaults/linux-64) +preshed-2.0.1 (defaults/linux-64) +prometheus_client-0.7.1 (defaults/noarch) +prompt-toolkit-3.0.4 (defaults/noarch) +prompt_toolkit-3.0.4 (defaults/noarch) +ptyprocess-0.6.0 (defaults/linux-64) +pycparser-2.20 (defaults/noarch) +pygments-2.6.1 (defaults/noarch) +pyopenssl-19.1.0 
(defaults/linux-64) +pyparsing-2.4.7 (defaults/noarch) +pyqt-5.9.2 (defaults/linux-64) +pyrsistent-0.16.0 (defaults/linux-64) +pysocks-1.7.1 (defaults/linux-64) +python-3.7.7 (defaults/linux-64) +python-dateutil-2.8.1 (defaults/noarch) +pytorch-1.5.0 (pytorch/linux-64) +pytz-2020.1 (defaults/noarch) +pyyaml-5.3.1 (defaults/linux-64) +pyzmq-18.1.1 (defaults/linux-64) +qt-5.9.7 (defaults/linux-64) +qtconsole-4.7.3 (defaults/noarch) +qtpy-1.9.0 (defaults/noarch) +readline-8.0 (defaults/linux-64) +requests-2.23.0 (defaults/linux-64) +scikit-learn-0.22.1 (defaults/linux-64) +scipy-1.4.1 (defaults/linux-64) +send2trash-1.5.0 (defaults/linux-64) +setuptools-46.1.3 (defaults/linux-64) +sip-4.19.8 (defaults/linux-64) +six-1.14.0 (defaults/linux-64) +spacy-2.1.8 (fastai/linux-64) +sqlite-3.31.1 (defaults/linux-64) +srsly-0.1.0 (fastai/linux-64) +terminado-0.8.3 (defaults/linux-64) +testpath-0.4.4 (defaults/noarch) +thinc-7.0.8 (fastai/linux-64) +tk-8.6.8 (defaults/linux-64) +torchvision-0.6.0 (pytorch/linux-64) +tornado-6.0.4 (defaults/linux-64) +tqdm-4.46.0 (defaults/noarch) +traitlets-4.3.3 (defaults/linux-64) +urllib3-1.25.8 (defaults/linux-64) +wasabi-0.2.2 (fastai/noarch) +wcwidth-0.1.9 (defaults/noarch) +webencodings-0.5.1 (defaults/linux-64) +wheel-0.34.2 (defaults/linux-64) +widgetsnbextension-3.5.1 (defaults/linux-64) +xz-5.2.5 (defaults/linux-64) +yaml-0.1.7 (defaults/linux-64) +zeromq-4.3.1 (defaults/linux-64) +zipp-3.1.0 (defaults/noarch) +zlib-1.2.11 (defaults/linux-64) +zstd-1.3.7 (defaults/linux-64) 2020-05-09 15:02:41 (rev 1) +arrow-cpp-0.15.1 (defaults/linux-64) +boost-cpp-1.71.0 (defaults/linux-64) +brotli-1.0.7 (defaults/linux-64) +bzip2-1.0.8 (defaults/linux-64) +c-ares-1.15.0 (defaults/linux-64) +double-conversion-3.1.5 (defaults/linux-64) +gflags-2.2.2 (defaults/linux-64) +glog-0.4.0 (defaults/linux-64) +grpc-cpp-1.26.0 (defaults/linux-64) +libboost-1.71.0 (defaults/linux-64) +libevent-2.1.8 (defaults/linux-64) +libprotobuf-3.11.2 (defaults/linux-64) +lz4-c-1.8.1.2 (defaults/linux-64) +pyarrow-0.15.1 (defaults/linux-64) +re2-2019.08.01 (defaults/linux-64) +snappy-1.1.7 (defaults/linux-64) +thrift-cpp-0.11.0 (defaults/linux-64) +uriparser-0.9.3 (defaults/linux-64) 2020-05-09 15:04:34 (rev 2) +astroid-2.4.0 (defaults/linux-64) +isort-4.3.21 (defaults/linux-64) +lazy-object-proxy-1.4.3 (defaults/linux-64) +mccabe-0.6.1 (defaults/linux-64) +pylint-2.5.0 (defaults/linux-64) +toml-0.10.0 (defaults/linux-64) +typed-ast-1.4.1 (defaults/linux-64) +wrapt-1.11.2 (defaults/linux-64) 2020-05-10 14:36:07 (rev 3) cudatoolkit {10.2.89 (defaults/linux-64) -> 10.1.243 (defaults/linux-64)} pytorch {1.5.0 (pytorch/linux-64) -> 1.5.0 (pytorch/linux-64)} torchvision {0.6.0 (pytorch/linux-64) -> 0.6.0 (pytorch/linux-64)} 2020-05-14 23:52:11 (rev 4) ca-certificates {2020.1.1 (defaults/linux-64) -> 2020.4.5.1 (conda-forge/linux-64)} certifi {2020.4.5.1 (defaults/linux-64) -> 2020.4.5.1 (conda-forge/linux-64)} openssl {1.1.1g (defaults/linux-64) -> 1.1.1g (conda-forge/linux-64)} +json5-0.9.0 (conda-forge/noarch) +jupyterlab-2.1.2 (conda-forge/noarch) +jupyterlab_server-1.1.3 (conda-forge/noarch) +python_abi-3.7 (conda-forge/linux-64) 2020-05-15 13:37:47 (rev 5) ca-certificates {2020.4.5.1 (conda-forge/linux-64) -> 2020.1.1 (defaults/linux-64)} certifi {2020.4.5.1 (conda-forge/linux-64) -> 2020.4.5.1 (defaults/linux-64)} openssl {1.1.1g (conda-forge/linux-64) -> 1.1.1g (defaults/linux-64)} 2020-05-22 18:57:33 (rev 6) +ipyexperiments-0.1.17 (stason/noarch) +nvidia-ml-py3-7.352.0 (fastai/noarch) 
+psutil-5.7.0 (defaults/linux-64) 2020-05-26 22:17:21 (rev 7) ca-certificates {2020.1.1 (defaults/linux-64) -> 2020.4.5.1 (conda-forge/linux-64)} certifi {2020.4.5.1 (defaults/linux-64) -> 2020.4.5.1 (conda-forge/linux-64)} openssl {1.1.1g (defaults/linux-64) -> 1.1.1g (conda-forge/linux-64)} +onnx-1.6.0 (conda-forge/linux-64) +protobuf-3.11.2 (conda-forge/linux-64) 2020-05-29 00:05:22 (rev 8) ca-certificates {2020.4.5.1 (conda-forge/linux-64) -> 2020.1.1 (defaults/linux-64)} certifi {2020.4.5.1 (conda-forge/linux-64) -> 2020.4.5.1 (defaults/linux-64)} openssl {1.1.1g (conda-forge/linux-64) -> 1.1.1g (defaults/linux-64)} +seaborn-0.10.1 (defaults/noarch) 2020-05-29 00:05:59 (rev 9) +plotly-4.7.1 (defaults/noarch) +retrying-1.3.3 (defaults/linux-64) 2020-06-03 16:13:53 (rev 10) ca-certificates {2020.1.1 (defaults/linux-64) -> 2020.4.5.1 (conda-forge/linux-64)} certifi {2020.4.5.1 (defaults/linux-64) -> 2020.4.5.1 (conda-forge/linux-64)} jupyterlab {2.1.2 (conda-forge/noarch) -> 2.1.4 (conda-forge/noarch)} openssl {1.1.1g (defaults/linux-64) -> 1.1.1g (conda-forge/linux-64)} 2020-06-07 23:35:15 (rev 11) ca-certificates {2020.4.5.1 (conda-forge/linux-64) -> 2020.1.1 (defaults/linux-64)} certifi {2020.4.5.1 (conda-forge/linux-64) -> 2020.4.5.1 (defaults/linux-64)} openssl {1.1.1g (conda-forge/linux-64) -> 1.1.1g (defaults/linux-64)} +binutils_impl_linux-64-2.31.1 (defaults/linux-64) +binutils_linux-64-2.31.1 (defaults/linux-64) +gcc_impl_linux-64-7.3.0 (defaults/linux-64) +gcc_linux-64-7.3.0 (defaults/linux-64) +gxx_impl_linux-64-7.3.0 (defaults/linux-64) +gxx_linux-64-7.3.0 (defaults/linux-64) +libgpuarray-0.7.6 (defaults/linux-64) +mako-1.1.2 (defaults/noarch) +pygpu-0.7.6 (defaults/linux-64) +theano-1.0.4 (defaults/linux-64) 2020-06-16 21:10:28 (rev 12) certifi {2020.4.5.1 (defaults/linux-64) -> 2020.4.5.2 (defaults/linux-64)} urllib3 {1.25.8 (defaults/linux-64) -> 1.25.9 (defaults/noarch)} -ipyexperiments-0.1.17 (stason/noarch) -pyarrow-0.15.1 (defaults/linux-64) +brotlipy-0.7.0 (defaults/linux-64) +cairo-1.14.12 (defaults/linux-64) +fribidi-1.0.5 (defaults/linux-64) +graphite2-1.3.13 (defaults/linux-64) +graphviz-2.40.1 (defaults/linux-64) +harfbuzz-1.8.8 (defaults/linux-64) +pango-1.42.4 (defaults/linux-64) +pixman-0.38.0 (defaults/linux-64) 2020-06-19 20:00:05 (rev 13) arrow-cpp {0.15.1 (defaults/linux-64) -> 0.17.1 (conda-forge/linux-64)} boost-cpp {1.71.0 (defaults/linux-64) -> 1.72.0 (conda-forge/linux-64)} ca-certificates {2020.1.1 (defaults/linux-64) -> 2020.4.5.2 (conda-forge/linux-64)} cairo {1.14.12 (defaults/linux-64) -> 1.16.0 (conda-forge/linux-64)} certifi {2020.4.5.2 (defaults/linux-64) -> 2020.4.5.2 (conda-forge/linux-64)} fontconfig {2.13.0 (defaults/linux-64) -> 2.13.1 (conda-forge/linux-64)} graphviz {2.40.1 (defaults/linux-64) -> 2.42.3 (conda-forge/linux-64)} grpc-cpp {1.26.0 (defaults/linux-64) -> 1.29.1 (conda-forge/linux-64)} gst-plugins-base {1.14.0 (defaults/linux-64) -> 1.14.5 (conda-forge/linux-64)} gstreamer {1.14.0 (defaults/linux-64) -> 1.14.5 (conda-forge/linux-64)} harfbuzz {1.8.8 (defaults/linux-64) -> 2.4.0 (conda-forge/linux-64)} icu {58.2 (defaults/linux-64) -> 64.2 (conda-forge/linux-64)} jpeg {9b (defaults/linux-64) -> 9d (conda-forge/linux-64)} libevent {2.1.8 (defaults/linux-64) -> 2.1.10 (conda-forge/linux-64)} libprotobuf {3.11.2 (defaults/linux-64) -> 3.12.3 (conda-forge/linux-64)} libtiff {4.1.0 (defaults/linux-64) -> 4.1.0 (conda-forge/linux-64)} libuuid {1.0.3 (defaults/linux-64) -> 2.32.1 (conda-forge/linux-64)} lz4-c {1.8.1.2 
(defaults/linux-64) -> 1.9.2 (conda-forge/linux-64)} matplotlib {3.1.3 (defaults/linux-64) -> 3.2.2 (conda-forge/linux-64)} matplotlib-base {3.1.3 (defaults/linux-64) -> 3.2.2 (conda-forge/linux-64)} openssl {1.1.1g (defaults/linux-64) -> 1.1.1g (conda-forge/linux-64)} pango {1.42.4 (defaults/linux-64) -> 1.42.4 (conda-forge/linux-64)} pcre {8.43 (defaults/linux-64) -> 8.44 (conda-forge/linux-64)} protobuf {3.11.2 (conda-forge/linux-64) -> 3.12.3 (conda-forge/linux-64)} qt {5.9.7 (defaults/linux-64) -> 5.9.7 (conda-forge/linux-64)} re2 {2019.08.01 (defaults/linux-64) -> 2020.06.01 (conda-forge/linux-64)} snappy {1.1.7 (defaults/linux-64) -> 1.1.8 (conda-forge/linux-64)} thrift-cpp {0.11.0 (defaults/linux-64) -> 0.13.0 (conda-forge/linux-64)} tk {8.6.8 (defaults/linux-64) -> 8.6.10 (conda-forge/linux-64)} zstd {1.3.7 (defaults/linux-64) -> 1.4.4 (conda-forge/linux-64)} -libboost-1.71.0 (defaults/linux-64) +abseil-cpp-20200225.2 (conda-forge/linux-64) +aplus-0.11.0 (conda-forge/noarch) +astropy-4.0.1.post1 (conda-forge/linux-64) +aws-sdk-cpp-1.7.164 (conda-forge/linux-64) +blosc-1.19.0 (conda-forge/linux-64) +bokeh-2.1.0 (conda-forge/linux-64) +boto3-1.14.7 (conda-forge/noarch) +botocore-1.17.7 (conda-forge/noarch) +bqplot-0.12.12 (conda-forge/noarch) +branca-0.3.1 (conda-forge/noarch) +cachetools-4.1.0 (conda-forge/noarch) +charls-2.1.0 (conda-forge/linux-64) +click-7.1.2 (conda-forge/noarch) +cloudpickle-1.4.1 (conda-forge/noarch) +curl-7.69.1 (conda-forge/linux-64) +cytoolz-0.10.1 (conda-forge/linux-64) +dask-2.19.0 (conda-forge/noarch) +dask-core-2.19.0 (conda-forge/noarch) +distributed-2.19.0 (conda-forge/linux-64) +docutils-0.15.2 (conda-forge/linux-64) +fsspec-0.7.4 (conda-forge/noarch) +future-0.18.2 (conda-forge/linux-64) +gettext-0.19.8.1 (conda-forge/linux-64) +giflib-5.2.1 (conda-forge/linux-64) +h5py-2.10.0 (conda-forge/linux-64) +hdf5-1.10.5 (conda-forge/linux-64) +heapdict-1.0.1 (conda-forge/noarch) +imagecodecs-2020.5.30 (conda-forge/linux-64) +imageio-2.8.0 (conda-forge/noarch) +ipydatawidgets-4.0.1 (conda-forge/noarch) +ipyleaflet-0.13.0 (conda-forge/noarch) +ipympl-0.5.6 (conda-forge/noarch) +ipyscales-0.5.0 (conda-forge/noarch) +ipyvolume-0.6.0a6 (conda-forge/noarch) +ipyvue-1.3.2 (conda-forge/noarch) +ipyvuetify-1.4.0 (conda-forge/noarch) +ipywebrtc-0.5.0 (conda-forge/linux-64) +jmespath-0.10.0 (conda-forge/noarch) +jxrlib-1.1 (conda-forge/linux-64) +krb5-1.17.1 (conda-forge/linux-64) +lcms2-2.11 (conda-forge/linux-64) +libaec-1.0.4 (conda-forge/linux-64) +libcurl-7.69.1 (conda-forge/linux-64) +libllvm8-8.0.1 (conda-forge/linux-64) +libssh2-1.9.0 (conda-forge/linux-64) +libtool-2.4.6 (conda-forge/linux-64) +libwebp-base-1.1.0 (conda-forge/linux-64) +libzopfli-1.0.3 (conda-forge/linux-64) +llvmlite-0.32.1 (conda-forge/linux-64) +locket-0.2.0 (conda-forge/noarch) +msgpack-python-1.0.0 (conda-forge/linux-64) +nest-asyncio-1.3.3 (conda-forge/noarch) +networkx-2.4 (conda-forge/noarch) +numba-0.49.1 (defaults/linux-64) +openjpeg-2.3.1 (conda-forge/linux-64) +packaging-20.4 (conda-forge/noarch) +parquet-cpp-1.5.1 (conda-forge/noarch) +partd-1.1.0 (conda-forge/noarch) +progressbar2-3.51.3 (conda-forge/noarch) +pyarrow-0.17.1 (conda-forge/linux-64) +python-utils-2.4.0 (conda-forge/noarch) +pythreejs-2.2.0 (conda-forge/noarch) +pywavelets-1.1.1 (conda-forge/linux-64) +s3fs-0.2.2 (conda-forge/noarch) +s3transfer-0.3.3 (conda-forge/linux-64) +scikit-image-0.17.2 (conda-forge/linux-64) +sortedcontainers-2.2.2 (conda-forge/noarch) +tabulate-0.8.7 (conda-forge/noarch) +tbb-2020.1 
(conda-forge/linux-64) +tblib-1.6.0 (conda-forge/noarch) +tifffile-2020.6.3 (conda-forge/noarch) +toolz-0.10.0 (conda-forge/noarch) +traittypes-0.2.1 (conda-forge/noarch) +typing_extensions-3.7.4.2 (conda-forge/noarch) +vaex-3.0.0 (conda-forge/noarch) +vaex-arrow-0.5.1 (conda-forge/noarch) +vaex-astro-0.7.0 (conda-forge/noarch) +vaex-core-2.0.3 (conda-forge/linux-64) +vaex-hdf5-0.6.0 (conda-forge/noarch) +vaex-jupyter-0.5.2 (conda-forge/noarch) +vaex-ml-0.9.0 (conda-forge/noarch) +vaex-server-0.3.1 (conda-forge/noarch) +vaex-viz-0.4.0 (conda-forge/noarch) +xarray-0.15.1 (conda-forge/noarch) +xorg-kbproto-1.0.7 (conda-forge/linux-64) +xorg-libice-1.0.10 (conda-forge/linux-64) +xorg-libsm-1.2.3 (conda-forge/linux-64) +xorg-libx11-1.6.9 (conda-forge/linux-64) +xorg-libxext-1.3.4 (conda-forge/linux-64) +xorg-libxpm-3.5.13 (conda-forge/linux-64) +xorg-libxrender-0.9.10 (conda-forge/linux-64) +xorg-libxt-1.1.5 (conda-forge/linux-64) +xorg-renderproto-0.11.1 (conda-forge/linux-64) +xorg-xextproto-7.3.0 (conda-forge/linux-64) +xorg-xproto-7.0.31 (conda-forge/linux-64) +zict-2.0.0 (conda-forge/noarch) 2020-06-19 22:08:48 (rev 14) _libgcc_mutex {0.1 (defaults/linux-64) -> 0.1 (conda-forge/linux-64)} arrow-cpp {0.17.1 (conda-forge/linux-64) -> 0.15.0 (conda-forge/linux-64)} bokeh {2.1.0 (conda-forge/linux-64) -> 1.4.0 (conda-forge/linux-64)} boost-cpp {1.72.0 (conda-forge/linux-64) -> 1.70.0 (conda-forge/linux-64)} cudatoolkit {10.1.243 (defaults/linux-64) -> 10.1.243 (nvidia/linux-64)} curl {7.69.1 (conda-forge/linux-64) -> 7.68.0 (conda-forge/linux-64)} giflib {5.2.1 (conda-forge/linux-64) -> 5.1.7 (conda-forge/linux-64)} grpc-cpp {1.29.1 (conda-forge/linux-64) -> 1.23.0 (conda-forge/linux-64)} krb5 {1.17.1 (conda-forge/linux-64) -> 1.16.4 (conda-forge/linux-64)} libcurl {7.69.1 (conda-forge/linux-64) -> 7.68.0 (conda-forge/linux-64)} libgcc-ng {9.1.0 (defaults/linux-64) -> 9.2.0 (conda-forge/linux-64)} libprotobuf {3.12.3 (conda-forge/linux-64) -> 3.8.0 (conda-forge/linux-64)} libtiff {4.1.0 (conda-forge/linux-64) -> 4.1.0 (conda-forge/linux-64)} libxml2 {2.9.9 (defaults/linux-64) -> 2.9.10 (conda-forge/linux-64)} lz4-c {1.9.2 (conda-forge/linux-64) -> 1.8.3 (conda-forge/linux-64)} pandas {1.0.3 (defaults/linux-64) -> 0.25.3 (conda-forge/linux-64)} protobuf {3.12.3 (conda-forge/linux-64) -> 3.8.0 (conda-forge/linux-64)} pyarrow {0.17.1 (conda-forge/linux-64) -> 0.15.0 (conda-forge/linux-64)} python {3.7.7 (defaults/linux-64) -> 3.7.6 (conda-forge/linux-64)} re2 {2020.06.01 (conda-forge/linux-64) -> 2020.04.01 (conda-forge/linux-64)} tbb {2020.1 (conda-forge/linux-64) -> 2018.0.5 (conda-forge/linux-64)} thrift-cpp {0.13.0 (conda-forge/linux-64) -> 0.12.0 (conda-forge/linux-64)} tifffile {2020.6.3 (conda-forge/noarch) -> 2020.6.3 (conda-forge/noarch)} vaex {3.0.0 (conda-forge/noarch) -> 1.0.0b7 (conda-forge/noarch)} zstd {1.4.4 (conda-forge/linux-64) -> 1.4.3 (conda-forge/linux-64)} -imagecodecs-2020.5.30 (conda-forge/linux-64) -libwebp-base-1.1.0 (conda-forge/linux-64) -vaex-arrow-0.5.1 (conda-forge/noarch) +_openmp_mutex-4.5 (conda-forge/linux-64) +aiohttp-3.6.2 (conda-forge/linux-64) +appdirs-1.4.3 (conda-forge/noarch) +async-timeout-3.0.1 (conda-forge/noarch) +boost-1.70.0 (conda-forge/linux-64) +cfitsio-3.470 (conda-forge/linux-64) +click-plugins-1.1.1 (conda-forge/noarch) +cligj-0.5.0 (conda-forge/noarch) +colorcet-2.0.1 (conda-forge/noarch) +cudf-0.14.0 (rapidsai/linux-64) +cudnn-7.6.0 (nvidia/linux-64) +cugraph-0.14.0 (rapidsai/linux-64) +cuml-0.14.0 (rapidsai/linux-64) +cupy-7.5.0 
(conda-forge/linux-64) +cusignal-0.14.0 (rapidsai/noarch) +cuspatial-0.14.0 (rapidsai/linux-64) +cuxfilter-0.14.0 (rapidsai/linux-64) +dask-cuda-0.14.0 (rapidsai/linux-64) +dask-cudf-0.14.0 (rapidsai/linux-64) +dask-xgboost-0.2.0.dev28 (rapidsai/noarch) +datashader-0.10.0 (conda-forge/noarch) +datashape-0.5.4 (conda-forge/noarch) +dlpack-0.2 (conda-forge/linux-64) +fastavro-0.23.4 (conda-forge/linux-64) +fastrlock-0.5 (conda-forge/linux-64) +fiona-1.8.13 (conda-forge/linux-64) +freexl-1.0.5 (conda-forge/linux-64) +gdal-3.0.2 (conda-forge/linux-64) +geopandas-0.7.0 (conda-forge/noarch) +geos-3.7.2 (conda-forge/linux-64) +geotiff-1.5.1 (conda-forge/linux-64) +hdf4-4.2.13 (conda-forge/linux-64) +imagecodecs-lite-2019.12.3 (conda-forge/linux-64) +importlib-metadata-1.6.1 (conda-forge/linux-64) +json-c-0.13.1 (conda-forge/linux-64) +jupyter-server-proxy-1.5.0 (conda-forge/noarch) +kealib-1.4.13 (conda-forge/linux-64) +libcudf-0.14.0 (rapidsai/linux-64) +libcugraph-0.14.0 (rapidsai/linux-64) +libcuml-0.14.0 (rapidsai/linux-64) +libcumlprims-0.14.1 (nvidia/linux-64) +libcuspatial-0.14.0 (rapidsai/linux-64) +libdap4-3.20.4 (conda-forge/linux-64) +libgdal-3.0.2 (conda-forge/linux-64) +libgomp-9.2.0 (conda-forge/linux-64) +libhwloc-2.1.0 (conda-forge/linux-64) +libiconv-1.15 (conda-forge/linux-64) +libkml-1.3.0 (conda-forge/linux-64) +libnetcdf-4.7.1 (conda-forge/linux-64) +libnvstrings-0.14.0 (rapidsai/linux-64) +libpq-11.5 (conda-forge/linux-64) +librmm-0.14.0 (rapidsai/linux-64) +libspatialindex-1.9.3 (conda-forge/linux-64) +libspatialite-4.3.0a (conda-forge/linux-64) +libuv-1.34.0 (conda-forge/linux-64) +libwebp-1.0.2 (conda-forge/linux-64) +libxgboost-1.1.0dev.rapidsai0.14 (rapidsai/linux-64) +markdown-3.2.2 (conda-forge/noarch) +multidict-4.7.5 (conda-forge/linux-64) +multipledispatch-0.6.0 (conda-forge/noarch) +munch-2.5.0 (conda-forge/noarch) +nccl-2.5.7.1 (conda-forge/linux-64) +nodejs-13.13.0 (conda-forge/linux-64) +nvstrings-0.14.0 (rapidsai/linux-64) +panel-0.6.4 (conda-forge/noarch) +param-1.9.3 (conda-forge/noarch) +poppler-0.67.0 (conda-forge/linux-64) +poppler-data-0.4.9 (conda-forge/noarch) +postgresql-11.5 (conda-forge/linux-64) +proj-6.2.1 (conda-forge/linux-64) +py-xgboost-1.1.0dev.rapidsai0.14 (rapidsai/linux-64) +pyct-0.4.6 (conda-forge/noarch) +pyct-core-0.4.6 (conda-forge/noarch) +pyee-7.0.2 (conda-forge/noarch) +pynvml-8.0.4 (conda-forge/noarch) +pyopengl-3.1.5 (conda-forge/noarch) +pyppeteer-0.0.25 (conda-forge/noarch) +pyproj-2.4.2.post1 (conda-forge/linux-64) +pyviz_comms-0.7.5 (conda-forge/noarch) +rapids-0.14.0 (rapidsai/linux-64) +rapids-xgboost-0.14.0 (rapidsai/linux-64) +rmm-0.14.0 (rapidsai/linux-64) +rtree-0.9.4 (conda-forge/linux-64) +shapely-1.6.4 (conda-forge/linux-64) +simpervisor-0.3 (conda-forge/noarch) +spdlog-1.6.1 (conda-forge/linux-64) +tiledb-1.6.2 (conda-forge/linux-64) +tzcode-2020a (conda-forge/linux-64) +ucx-1.8.0+gf6ec8d4 (rapidsai/linux-64) +ucx-py-0.14.0+gf6ec8d4 (rapidsai/linux-64) +vaex-distributed-0.3.0 (conda-forge/noarch) +vaex-ui-0.3.0 (conda-forge/noarch) +websockets-8.1 (conda-forge/linux-64) +xerces-c-3.2.2 (conda-forge/linux-64) +xgboost-1.1.0dev.rapidsai0.14 (rapidsai/linux-64) +yarl-1.3.0 (conda-forge/linux-64) 2020-06-21 23:28:05 (rev 15) certifi {2020.4.5.2 (conda-forge/linux-64) -> 2020.4.5.2 (defaults/linux-64)} +torchaudio-0.5.0 (pytorch/linux-64) 2020-06-21 23:39:10 (rev 16) ca-certificates {2020.4.5.2 (conda-forge/linux-64) -> 2020.1.1 (defaults/linux-64)} openssl {1.1.1g (conda-forge/linux-64) -> 1.1.1g (defaults/linux-64)} 
+cython-0.29.20 (defaults/linux-64)
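# --- Optional follow-up (an addition, not part of the original notebook): a minimal sketch of
# --- sanity-checking the `dbunch` DataLoaders built above before training. Only `dbunch` comes
# --- from the code above; the expected batch shape and the `n_in=1` argument are assumptions.
xb, yb = dbunch.one_batch()
print(xb.shape, yb.shape)   # expect something like (64, 1, n_mels, time): single-channel mel spectrograms and 64 labels
dbunch.show_batch(max_n=4)  # draw a few spectrograms with their speaker labels
# Training could then follow the usual fastai2 pattern; how a single-channel first conv layer
# is requested (e.g. n_in=1) differs between fastai2 versions, so the line below is a hedged
# sketch rather than a verified call:
# learn = cnn_learner(dbunch, resnet18, metrics=accuracy, n_in=1); learn.fine_tune(3)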
0.552057
0.538498
<a href="https://colab.research.google.com/github/seyonechithrananda/bert-loves-chemistry/blob/master/HuggingFace_ZINC_ROBERTA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` !pip uninstall -y tensorflow !pip install transformers %%time from pathlib import Path from tokenizers import ByteLevelBPETokenizer tokenizer = ByteLevelBPETokenizer() tokenizer.train(files='/content/drive/My Drive/Project De Novo/100k_rndm_zinc_drugs_clean.txt', vocab_size=52_000, min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) ``` Lets make a new directory, BERT_loves_chemistry, to store our tokenize ``` !mkdir BERT_loves_chemistry tokenizer.save("BERT_loves_chemistry") from tokenizers.implementations import ByteLevelBPETokenizer from tokenizers.processors import BertProcessing tokenizer = ByteLevelBPETokenizer( "/content/drive/My Drive/Project De Novo/BERT_loves_chemistry/vocab.json", "/content/drive/My Drive/Project De Novo/BERT_loves_chemistry/merges.txt", ) tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), ) tokenizer.enable_truncation(max_length=512) #tokenize remdesivir SMILES to test the tokenizer! tokenizer.encode("CCC(CC)COC(=O)[C@H](C)N[P@](=O)(OC[C@H]1O[C@](C#N)([C@H](O)[C@@H]1O)C1=CC=C2N1N=CN=C2N)OC1=CC=CC=C1") tokenizer.encode("CCC(CC)COC(=O)[C@H](C)N[P@](=O)(OC[C@H]1O[C@](C#N)([C@H](O)[C@@H]1O)C1=CC=C2N1N=CN=C2N)OC1=CC=CC=C1").tokens ``` ## 3. Train a language model from scratch We will now train our language model using the [`run_language_modeling.py`](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) script from `transformers` (newly renamed from `run_lm_finetuning.py` as it now supports training from scratch more seamlessly). Just remember to leave `--model_name_or_path` to `None` to train from scratch vs. from an existing model or checkpoint. > We’ll train a BERT for chemistry model, with the help of our tokenizdr trained on the ZINC 250k dataset we used. As the model is BERT-like, we’ll train it on a task of *Masked language modeling*, i.e. the predict how to fill arbitrary tokens that we randomly mask in the dataset. This is taken care of by the example script. ``` !nvidia-smi #check GPU import torch torch.cuda.is_available() #checking if CUDA + Colab GPU works # Get the example scripts. 
!wget -c https://raw.githubusercontent.com/huggingface/transformers/master/examples/run_language_modeling.py import json config = { "architectures":[ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 6, "type_vocab_size": 1, "vocab_size": 52000 } with open("./BERT_loves_chemistry/config.json", 'w') as fp: json.dump(config, fp) tokenizer_config = { "max_len": 512 } with open("./BERT_loves_chemistry/tokenizer_config.json", 'w') as fp: json.dump(tokenizer_config, fp) cd /content/drive/My Drive/Project De Novo # run script cmd = """ python run_language_modeling.py --train_data_file ./100k_rndm_zinc_drugs_clean.txt --output_dir ./output_dir --model_type roberta --mlm --config_name ./BERT_loves_chemistry --tokenizer_name ./BERT_loves_chemistry --do_train --line_by_line --learning_rate 1e-4 --num_train_epochs 12 --save_total_limit 2 --save_steps 2000 --per_gpu_train_batch_size 16 --seed 42 """.replace("\n", " ") %%time !{cmd} ``` To visualize the training progress, we can use `!tensorboard dev upload --logdir ./path/to/runs` # Exporting the model To share the model with the NLP community (in which access to large language models trained on chemical data is sparse), you need to export it in the appropriate format. Let's prepare the path for the model's latest checkpoint and then run the following code: ``` from transformers import AutoModelWithLMHead, AutoTokenizer import os directory = "/path/to/your/model/checkpoint-30000" model = AutoModelWithLMHead.from_pretrained(directory) tokenizer = AutoTokenizer.from_pretrained(directory) out = "ChemBERTa-zinc-base-v1" os.makedirs(out, exist_ok=True) model.save_pretrained(out) tokenizer.save_pretrained(out) ``` From here, we'll upload the weights onto HuggingFace's system using the `transformers-cli` utility: ``` transformers-cli upload ./ChemBERTa-zinc-base-v1/ ```
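As a quick sanity check of the masked-language-modeling objective described above (this block is an addition, not part of the original notebook), the exported checkpoint can be loaded into a `fill-mask` pipeline from `transformers` and asked to complete a masked SMILES string. The SMILES fragment below is an arbitrary illustration, not a string taken from the training file.

```
from transformers import pipeline

# Load the exported ChemBERTa weights and tokenizer into a fill-mask pipeline
# (the path assumes the `out` directory created above).
fill_mask = pipeline("fill-mask", model="ChemBERTa-zinc-base-v1", tokenizer="ChemBERTa-zinc-base-v1")

# RoBERTa-style models use "<mask>" as the mask token, matching the special tokens defined earlier.
print(fill_mask("CCC(CC)COC(=O)<mask>C1=CC=CC=C1"))
```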
github_jupyter
!pip uninstall -y tensorflow !pip install transformers %%time from pathlib import Path from tokenizers import ByteLevelBPETokenizer tokenizer = ByteLevelBPETokenizer() tokenizer.train(files='/content/drive/My Drive/Project De Novo/100k_rndm_zinc_drugs_clean.txt', vocab_size=52_000, min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) !mkdir BERT_loves_chemistry tokenizer.save("BERT_loves_chemistry") from tokenizers.implementations import ByteLevelBPETokenizer from tokenizers.processors import BertProcessing tokenizer = ByteLevelBPETokenizer( "/content/drive/My Drive/Project De Novo/BERT_loves_chemistry/vocab.json", "/content/drive/My Drive/Project De Novo/BERT_loves_chemistry/merges.txt", ) tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), ) tokenizer.enable_truncation(max_length=512) #tokenize remdesivir SMILES to test the tokenizer! tokenizer.encode("CCC(CC)COC(=O)[C@H](C)N[P@](=O)(OC[C@H]1O[C@](C#N)([C@H](O)[C@@H]1O)C1=CC=C2N1N=CN=C2N)OC1=CC=CC=C1") tokenizer.encode("CCC(CC)COC(=O)[C@H](C)N[P@](=O)(OC[C@H]1O[C@](C#N)([C@H](O)[C@@H]1O)C1=CC=C2N1N=CN=C2N)OC1=CC=CC=C1").tokens !nvidia-smi #check GPU import torch torch.cuda.is_available() #checking if CUDA + Colab GPU works # Get the example scripts. !wget -c https://raw.githubusercontent.com/huggingface/transformers/master/examples/run_language_modeling.py import json config = { "architectures":[ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 6, "type_vocab_size": 1, "vocab_size": 52000 } with open("./BERT_loves_chemistry/config.json", 'w') as fp: json.dump(config, fp) tokenizer_config = { "max_len": 512 } with open("./BERT_loves_chemistry/tokenizer_config.json", 'w') as fp: json.dump(tokenizer_config, fp) cd /content/drive/My Drive/Project De Novo # run script cmd = """ python run_language_modeling.py --train_data_file ./100k_rndm_zinc_drugs_clean.txt --output_dir ./output_dir --model_type roberta --mlm --config_name ./BERT_loves_chemistry --tokenizer_name ./BERT_loves_chemistry --do_train --line_by_line --learning_rate 1e-4 --num_train_epochs 12 --save_total_limit 2 --save_steps 2000 --per_gpu_train_batch_size 16 --seed 42 """.replace("\n", " ") %%time !{cmd} from transformers import AutoModelWithLMHead, AutoTokenizer import os directory = "/path/to/your/model/checkpoint-30000" model = AutoModelWithLMHead.from_pretrained(directory) tokenizer = AutoTokenizer.from_pretrained(directory) out = "ChemBERTa-zinc-base-v1" os.makedirs(out, exist_ok=True) model.save_pretrained(out) tokenizer.save_pretrained(out) transformers-cli upload ./ChemBERTa-zinc-base-v1/
0.568416
0.83346
<a href="https://colab.research.google.com/github/jegun19/scratch_kNN/blob/main/scratch_kNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # In this notebook, a KNN model is implemented from scratch and tested using Iris dataset imported from scikit-learn. ``` # Import packages and the iris dataset from scikit-learn import numpy as np from sklearn import datasets iris = datasets.load_iris() iris_data = iris.data iris_labels = iris.target print(iris_data[1], iris_data[51], iris_data[101]) print(iris_labels[1], iris_labels[51], iris_labels[101]) # Visualize sepal length and width features import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap X = iris_data y = iris_labels cm_bright = ListedColormap(['#FF0000', '#00FF00', '#0000FF']) plt.figure(figsize=(8,6)) plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cm_bright, edgecolor='y') plt.title('Iris Data Visualization') plt.xlabel('Sepal length (cm)') plt.ylabel('Sepal width (cm)') plt.gcf().patches.extend([plt.Rectangle((0.7,0.72), (0.19), (0.15), fill = False, color='k', transform=plt.gcf().transFigure)]) plt.gcf().text(0.71, 0.83, 'Iris-setosa', color='#FF0000', size=15) plt.gcf().text(0.71, 0.79, 'Iris-versicolor', color='#00FF00', size=15) plt.gcf().text(0.71, 0.75, 'Iris-virginica', color='#0000FF', size=15) plt.show() # Visualize petal length and width features X = iris_data y = iris_labels cm_bright = ListedColormap(['#FF0000', '#00FF00', '#0000FF']) plt.figure(figsize=(8,6)) plt.scatter(X[:, 2], X[:, 3], c=y, cmap=cm_bright, edgecolor='y') plt.title('Iris Data Visualization') plt.xlabel('Petal length (cm)') plt.ylabel('Petal width (cm)') plt.gcf().patches.extend([plt.Rectangle((0.15,0.72), (0.19), (0.15), fill = False, color='k', transform=plt.gcf().transFigure)]) plt.gcf().text(0.16, 0.83, 'Iris-setosa', color='#FF0000', size=15) plt.gcf().text(0.16, 0.79, 'Iris-versicolor', color='#00FF00', size=15) plt.gcf().text(0.16, 0.75, 'Iris-virginica', color='#0000FF', size=15) plt.show() import numpy as np from sklearn.model_selection import train_test_split X_train, X_test, Y_train, Y_test = train_test_split(iris.data, iris.target, test_size=.33) print('X_train', X_train.shape, 'Y_train', Y_train.shape) print('X_test', X_test.shape, 'Y_test', Y_test.shape) # Count euclidean distance using np.linalg.norm (L2 norm) def distance(data1, data2): data1 = np.array(data1) data2 = np.array(data2) return np.linalg.norm(data1 - data2) # Test the distance function for 2 data points print(iris_data[1]) print(iris_data[51]) print(distance(iris_data[1], iris_data[51])) # Count euclidean distance manually import math def euclideanDistance(data1, data2, length): distance = 0 for x in range(length): distance += pow((data1[x] - data2[x]), 2) return math.sqrt(distance) length = len(iris_data[1]) # Test the distance function for 2 data points (reproduce the results for the previous norm function) print(iris_data[1]) print(iris_data[51]) print(euclideanDistance(iris_data[1], iris_data[51], length)) def get_neighbors(training_set, labels, test_instance, k, distance): distances = [] # Count the distance between the test instances and each data point in the training set for index in range(len(training_set)): dist = distance(test_instance, training_set[index]) distances.append((training_set[index], dist, labels[index])) # Sort the neighbors by its distance metrics distances.sort(key=lambda x: x[1]) # Select only the nearest k neighbors neighbors = distances[:k] return neighbors # 
Create voting function to vote the nearest neighbor from collections import Counter def vote(neighbors): class_counter = Counter() for neighbor in neighbors: class_counter[neighbor[2]] += 1 vote_result = class_counter.most_common(1)[0][0] return vote_result # Define the number of samples in the test data n_samples = len(X_test) # Define the number of nearest neighbor n_neighbor = 3 # Predict the label of each sample in test data def predict(n_samples): Y_pred = list() for i in range(n_samples): neighbors = get_neighbors(X_train, Y_train, X_test[i], n_neighbor, distance=distance) print("index: ", i, ", prediction: ", vote(neighbors), ", label: ", Y_test[i] ) Y_pred.append(vote(neighbors)) return Y_pred Y_pred = predict(n_samples) # Draw confusion matrix from the prediction result and the actual label from sklearn.metrics import confusion_matrix cm = confusion_matrix(Y_pred, Y_test) print(cm) plt.matshow(cm) plt.colorbar() plt.ylabel('True label') plt.xlabel('Predicted label') plt.show() ``` # Next, we will use KNN model imported from scikit-learn to compare our model with. ``` # Use KNN from scikit-learn to compare the prediction results from sklearn.neighbors import KNeighborsClassifier # Call the KNN model sk_knn = KNeighborsClassifier(n_neighbors=3) # Fit the model with training data sk_knn.fit(X_train, Y_train) # Predict the testing data sk_Y_pred = sk_knn.predict(X_test) # Draw confusion matrix of the prediction results using KNN from scikit-learn sk_cm = confusion_matrix(sk_Y_pred, Y_test) print(sk_cm) plt.matshow(sk_cm) plt.colorbar() plt.ylabel('True label') plt.xlabel('Predicted label') plt.show() ``` # By seeing the confusion matrix from both self-implemented model and the model imported from scikit-learn, we have successfully implemented a KNN model from scratch!
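To put a single number on the comparison above (an addition to the original notebook), we can compute the accuracy of both sets of predictions on the same test split with scikit-learn's `accuracy_score`; near-identical scores are expected, since both classifiers use k=3 and the same Euclidean distance.

```
from sklearn.metrics import accuracy_score

# Accuracy of the scratch implementation and the scikit-learn classifier
# on the same train/test split used above.
print('Scratch kNN accuracy:', accuracy_score(Y_test, Y_pred))
print('sklearn kNN accuracy:', accuracy_score(Y_test, sk_Y_pred))
```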
github_jupyter
# Import packages and the iris dataset from scikit-learn import numpy as np from sklearn import datasets iris = datasets.load_iris() iris_data = iris.data iris_labels = iris.target print(iris_data[1], iris_data[51], iris_data[101]) print(iris_labels[1], iris_labels[51], iris_labels[101]) # Visualize sepal length and width features import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap X = iris_data y = iris_labels cm_bright = ListedColormap(['#FF0000', '#00FF00', '#0000FF']) plt.figure(figsize=(8,6)) plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cm_bright, edgecolor='y') plt.title('Iris Data Visualization') plt.xlabel('Sepal length (cm)') plt.ylabel('Sepal width (cm)') plt.gcf().patches.extend([plt.Rectangle((0.7,0.72), (0.19), (0.15), fill = False, color='k', transform=plt.gcf().transFigure)]) plt.gcf().text(0.71, 0.83, 'Iris-setosa', color='#FF0000', size=15) plt.gcf().text(0.71, 0.79, 'Iris-versicolor', color='#00FF00', size=15) plt.gcf().text(0.71, 0.75, 'Iris-virginica', color='#0000FF', size=15) plt.show() # Visualize petal length and width features X = iris_data y = iris_labels cm_bright = ListedColormap(['#FF0000', '#00FF00', '#0000FF']) plt.figure(figsize=(8,6)) plt.scatter(X[:, 2], X[:, 3], c=y, cmap=cm_bright, edgecolor='y') plt.title('Iris Data Visualization') plt.xlabel('Petal length (cm)') plt.ylabel('Petal width (cm)') plt.gcf().patches.extend([plt.Rectangle((0.15,0.72), (0.19), (0.15), fill = False, color='k', transform=plt.gcf().transFigure)]) plt.gcf().text(0.16, 0.83, 'Iris-setosa', color='#FF0000', size=15) plt.gcf().text(0.16, 0.79, 'Iris-versicolor', color='#00FF00', size=15) plt.gcf().text(0.16, 0.75, 'Iris-virginica', color='#0000FF', size=15) plt.show() import numpy as np from sklearn.model_selection import train_test_split X_train, X_test, Y_train, Y_test = train_test_split(iris.data, iris.target, test_size=.33) print('X_train', X_train.shape, 'Y_train', Y_train.shape) print('X_test', X_test.shape, 'Y_test', Y_test.shape) # Count euclidean distance using np.linalg.norm (L2 norm) def distance(data1, data2): data1 = np.array(data1) data2 = np.array(data2) return np.linalg.norm(data1 - data2) # Test the distance function for 2 data points print(iris_data[1]) print(iris_data[51]) print(distance(iris_data[1], iris_data[51])) # Count euclidean distance manually import math def euclideanDistance(data1, data2, length): distance = 0 for x in range(length): distance += pow((data1[x] - data2[x]), 2) return math.sqrt(distance) length = len(iris_data[1]) # Test the distance function for 2 data points (reproduce the results for the previous norm function) print(iris_data[1]) print(iris_data[51]) print(euclideanDistance(iris_data[1], iris_data[51], length)) def get_neighbors(training_set, labels, test_instance, k, distance): distances = [] # Count the distance between the test instances and each data point in the training set for index in range(len(training_set)): dist = distance(test_instance, training_set[index]) distances.append((training_set[index], dist, labels[index])) # Sort the neighbors by its distance metrics distances.sort(key=lambda x: x[1]) # Select only the nearest k neighbors neighbors = distances[:k] return neighbors # Create voting function to vote the nearest neighbor from collections import Counter def vote(neighbors): class_counter = Counter() for neighbor in neighbors: class_counter[neighbor[2]] += 1 vote_result = class_counter.most_common(1)[0][0] return vote_result # Define the number of samples in the test data n_samples = len(X_test) # 
Define the number of nearest neighbor n_neighbor = 3 # Predict the label of each sample in test data def predict(n_samples): Y_pred = list() for i in range(n_samples): neighbors = get_neighbors(X_train, Y_train, X_test[i], n_neighbor, distance=distance) print("index: ", i, ", prediction: ", vote(neighbors), ", label: ", Y_test[i] ) Y_pred.append(vote(neighbors)) return Y_pred Y_pred = predict(n_samples) # Draw confusion matrix from the prediction result and the actual label from sklearn.metrics import confusion_matrix cm = confusion_matrix(Y_pred, Y_test) print(cm) plt.matshow(cm) plt.colorbar() plt.ylabel('True label') plt.xlabel('Predicted label') plt.show() # Use KNN from scikit-learn to compare the prediction results from sklearn.neighbors import KNeighborsClassifier # Call the KNN model sk_knn = KNeighborsClassifier(n_neighbors=3) # Fit the model with training data sk_knn.fit(X_train, Y_train) # Predict the testing data sk_Y_pred = sk_knn.predict(X_test) # Draw confusion matrix of the prediction results using KNN from scikit-learn sk_cm = confusion_matrix(sk_Y_pred, Y_test) print(sk_cm) plt.matshow(sk_cm) plt.colorbar() plt.ylabel('True label') plt.xlabel('Predicted label') plt.show()
0.738763
0.951323
# Random Forest Classification ### Based on a social media ads data set, the goal is to predict whether an individual can buy an SUV from their age and estimated salary. Here, 75% of the data is used for training and 25% for testing, and the random forest algorithm predicted the results with 91% accuracy. ## Importing the libraries ``` import numpy as np import matplotlib.pyplot as plt import pandas as pd ``` ## Importing the dataset ``` dataset = pd.read_csv('Social_Network_Ads.csv') X = dataset.iloc[:, :-1].values y = dataset.iloc[:, -1].values dataset.shape dataset.describe() ``` ## Splitting the dataset into the Training set and Test set ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0) print(X_train) print(y_train) print(X_test) print(y_test) ``` ## Feature Scaling ``` from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) print(X_train) print(X_test) ``` ## Training the Random Forest Classification model on the Training set ``` from sklearn.ensemble import RandomForestClassifier classifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0) classifier.fit(X_train, y_train) ``` ## Predicting a new result ``` print(classifier.predict(sc.transform([[30,87000]]))) ``` ## Predicting the Test set results ``` y_pred = classifier.predict(X_test) print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1)) ``` ## Making the Confusion Matrix ``` from sklearn.metrics import confusion_matrix, accuracy_score cm = confusion_matrix(y_test, y_pred) print(cm) accuracy_score(y_test, y_pred) ``` ## Visualising the Training set results ``` from matplotlib.colors import ListedColormap X_set, y_set = sc.inverse_transform(X_train), y_train X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25), np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25)) plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('red', 'green'))) plt.xlim(X1.min(), X1.max()) plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Random Forest Classification (Training set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() ``` ## Visualising the Test set results ``` from matplotlib.colors import ListedColormap X_set, y_set = sc.inverse_transform(X_test), y_test X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25), np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25)) plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('red', 'green'))) plt.xlim(X1.min(), X1.max()) plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Random Forest Classification (Test set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() ```
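The model above hard-codes `n_estimators = 10`. As a quick sanity check on that choice, the hedged sketch below runs 5-fold cross-validation over a small grid of forest sizes; it uses a synthetic two-feature dataset from `make_classification` so it runs standalone, but the scaled `X_train`/`y_train` from above could be substituted directly.

```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for the two scaled features (age, estimated salary) used above.
X_demo, y_demo = make_classification(n_samples=400, n_features=2, n_informative=2,
                                     n_redundant=0, random_state=0)

for n_trees in (10, 50, 100, 200):
    clf = RandomForestClassifier(n_estimators=n_trees, criterion='entropy', random_state=0)
    scores = cross_val_score(clf, X_demo, y_demo, cv=5)  # 5-fold accuracy
    print(f"n_estimators={n_trees:>3}: mean CV accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```

Accuracy typically flattens out well before a few hundred trees, so the main cost of a larger forest is training and prediction time rather than overfitting.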
github_jupyter
import numpy as np import matplotlib.pyplot as plt import pandas as pd dataset = pd.read_csv('Social_Network_Ads.csv') X = dataset.iloc[:, :-1].values y = dataset.iloc[:, -1].values dataset.shape dataset.describe() from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0) print(X_train) print(y_train) print(X_test) print(y_test) from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) print(X_train) print(X_test) from sklearn.ensemble import RandomForestClassifier classifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0) classifier.fit(X_train, y_train) print(classifier.predict(sc.transform([[30,87000]]))) y_pred = classifier.predict(X_test) print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1)) from sklearn.metrics import confusion_matrix, accuracy_score cm = confusion_matrix(y_test, y_pred) print(cm) accuracy_score(y_test, y_pred) from matplotlib.colors import ListedColormap X_set, y_set = sc.inverse_transform(X_train), y_train X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25), np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25)) plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('red', 'green'))) plt.xlim(X1.min(), X1.max()) plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Random Forest Classification (Training set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() from matplotlib.colors import ListedColormap X_set, y_set = sc.inverse_transform(X_test), y_test X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25), np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25)) plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('red', 'green'))) plt.xlim(X1.min(), X1.max()) plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Random Forest Classification (Test set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show()
0.629091
0.983036
## Introduction This notebook shows how to load and evaluate the MNIST and CIFAR-10 models synthesized and trained as described in the following paper: M.Sinn, M.Wistuba, B.Buesser, M.-I.Nicolae, M.N.Tran: **Evolutionary Search for Adversarially Robust Neural Network** *ICLR SafeML Workshop 2019 (arXiv link to the paper will be added shortly)*. The models were saved in `.h5` using Python 3.6, TensorFlow 1.11.0, Keras 2.2.4. ``` from keras.datasets import mnist, cifar10 from keras.models import load_model from keras.utils.np_utils import to_categorical import numpy as np from art import DATA_PATH from art.classifiers import KerasClassifier from art.attacks import ProjectedGradientDescent from art.utils import get_file ``` ## MNIST Three different MNIST models are available. Use the following URLs to access them: - `mnist_ratio=0.h5`: trained on 100% benign samples (https://www.dropbox.com/s/bv1xwjaf1ov4u7y/mnist_ratio%3D0.h5?dl=1) - `mnist_ratio=0.5.h5`: trained on 50% benign and 50% adversarial samples (https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1) - `mnist_ratio=1.h5`: trained on 100% adversarial samples (https://www.dropbox.com/s/oa2kowq7kgaxh1o/mnist_ratio%3D1.h5?dl=1) Load data: ``` (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32') / 255 X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32') / 255 y_train = to_categorical(y_train, 10) y_test = to_categorical(y_test, 10) ``` E.g. load the model trained on 50% benign and 50% adversarial samples: ``` path = get_file('mnist_ratio=0.5.h5',extract=False, path=DATA_PATH, url='https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1') model = load_model(path) classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,1]) ``` Assess accuracy on first `n` benign test samples: ``` n = 10000 y_pred = classifier.predict(X_test[:n]) accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1)) print("Accuracy on first %i benign test samples: %f" % (n, accuracy)) ``` Define adversarial attack: ``` attack = ProjectedGradientDescent(classifier, eps=0.3, eps_step=0.01, max_iter=40, targeted=False, random_init=True) ``` Assess accuracy on first `n` adversarial test samples: ``` n = 10 X_test_adv = attack.generate(X_test[:n], y=y_test[:n]) y_adv_pred = classifier.predict(X_test_adv) accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1)) print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy)) ``` ## CIFAR-10 Similarly to MNIST, three different CIFAR-10 models are available at the following URLs: - `cifar-10_ratio=0.h5`: trained on 100% benign samples (https://www.dropbox.com/s/hbvua7ynhvara12/cifar-10_ratio%3D0.h5?dl=1) - `cifar-10_ratio=0.5.h5`: trained on 50% benign and 50% adversarial samples (https://www.dropbox.com/s/96yv0r2gqzockmw/cifar-10_ratio%3D0.5.h5?dl=1) - `cifar-10_ratio=1.h5`: trained on 100% adversarial samples (https://www.dropbox.com/s/7btc2sq7syf68at/cifar-10_ratio%3D1.h5?dl=1) Load data: ``` (X_train, y_train), (X_test, y_test) = cifar10.load_data() X_train = X_train.reshape(X_train.shape[0], 32, 32, 3).astype('float32') X_test = X_test.reshape(X_test.shape[0], 32, 32, 3).astype('float32') y_train = to_categorical(y_train, 10) y_test = to_categorical(y_test, 10) ``` E.g. 
load the model trained on 50% benign and 50% adversarial samples: ``` path = get_file('cifar-10_ratio=0.5.h5',extract=False, path=DATA_PATH, url='https://www.dropbox.com/s/96yv0r2gqzockmw/cifar-10_ratio%3D0.5.h5?dl=1') model = load_model(path) classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,255]) ``` Assess accuracy on first `n` benign test samples: ``` n = 100 y_pred = classifier.predict(X_test[:n]) accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1)) print("Accuracy on first %i benign test samples: %f" % (n, accuracy)) ``` Define adversarial attack: ``` attack = ProjectedGradientDescent(classifier, eps=8, eps_step=2, max_iter=10, targeted=False, random_init=True) ``` Assess accuracy on first `n` adversarial test samples: ``` n = 100 X_test_adv = attack.generate(X_test[:n], y=y_test[:n]) y_adv_pred = classifier.predict(X_test_adv) accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1)) print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy)) ```
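A single attack strength gives only one point on the robustness curve. The sketch below sweeps the PGD budget `eps` and reports adversarial accuracy at each value, reusing the `classifier`, `X_test` and `y_test` in scope and the same constructor arguments shown above; the helper name `adv_accuracy` and the specific `eps` grid are illustrative, and the grid shown assumes the MNIST setting with pixel values in [0, 1] (for the CIFAR-10 model it would need to be on the 0-255 scale).

```
import numpy as np
from art.attacks import ProjectedGradientDescent

def adv_accuracy(classifier, x, y, eps):
    """Accuracy under a PGD attack with perturbation budget eps."""
    attack = ProjectedGradientDescent(classifier, eps=eps, eps_step=eps / 30,
                                      max_iter=40, targeted=False, random_init=True)
    x_adv = attack.generate(x, y=y)
    preds = classifier.predict(x_adv)
    return np.mean(np.argmax(preds, axis=1) == np.argmax(y, axis=1))

n = 10  # PGD is slow, so keep the sample count small
for eps in (0.05, 0.1, 0.2, 0.3):   # MNIST-scale budgets
    acc = adv_accuracy(classifier, X_test[:n], y_test[:n], eps)
    print(f"eps={eps:.2f}: adversarial accuracy = {acc:.2f}")
```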
github_jupyter
from keras.datasets import mnist, cifar10 from keras.models import load_model from keras.utils.np_utils import to_categorical import numpy as np from art import DATA_PATH from art.classifiers import KerasClassifier from art.attacks import ProjectedGradientDescent from art.utils import get_file (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32') / 255 X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32') / 255 y_train = to_categorical(y_train, 10) y_test = to_categorical(y_test, 10) path = get_file('mnist_ratio=0.5.h5',extract=False, path=DATA_PATH, url='https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1') model = load_model(path) classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,1]) n = 10000 y_pred = classifier.predict(X_test[:n]) accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1)) print("Accuracy on first %i benign test samples: %f" % (n, accuracy)) attack = ProjectedGradientDescent(classifier, eps=0.3, eps_step=0.01, max_iter=40, targeted=False, random_init=True) n = 10 X_test_adv = attack.generate(X_test[:n], y=y_test[:n]) y_adv_pred = classifier.predict(X_test_adv) accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1)) print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy)) (X_train, y_train), (X_test, y_test) = cifar10.load_data() X_train = X_train.reshape(X_train.shape[0], 32, 32, 3).astype('float32') X_test = X_test.reshape(X_test.shape[0], 32, 32, 3).astype('float32') y_train = to_categorical(y_train, 10) y_test = to_categorical(y_test, 10) path = get_file('cifar-10_ratio=0.5.h5',extract=False, path=DATA_PATH, url='https://www.dropbox.com/s/96yv0r2gqzockmw/cifar-10_ratio%3D0.5.h5?dl=1') model = load_model(path) classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,255]) n = 100 y_pred = classifier.predict(X_test[:n]) accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1)) print("Accuracy on first %i benign test samples: %f" % (n, accuracy)) attack = ProjectedGradientDescent(classifier, eps=8, eps_step=2, max_iter=10, targeted=False, random_init=True) n = 100 X_test_adv = attack.generate(X_test[:n], y=y_test[:n]) y_adv_pred = classifier.predict(X_test_adv) accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1)) print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy))
0.816845
0.970771
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn import preprocessing from sklearn.ensemble import IsolationForest from sklearn.metrics import f1_score import glob import os %matplotlib inline ``` ## Model ``` def conf_mat(pred,truth): res = [0,0,0,0] a = 0 for i in range(len(truth)): if truth[i] == 1: if truth[i] == pred[i]: a = 0 else: a = 2 else: if truth[i] == pred[i]: a = 1 else: a = 3 res[a] = res[a] + 1 print(res) return res def map_pred(pred): res = np.zeros(len(pred)) for i in range(len(pred)): if pred[i] == -1: res[i] = 1 return res def naive_classifier_mean(x_test,mu,std,u_term,l_term): print(mu - l_term * std,mu + u_term * std) predictions = np.where( (x_test > (mu + u_term * std)) | (x_test < (mu - l_term * std)), 1, 0) return predictions def naive_classifier_mean_2(x_test,mu,std,u_term,l_term): print(mu - l_term * std,mu + u_term * std) predictions = np.where( (x_test < (mu + u_term * std)) & (x_test > (mu - l_term * std)), 1, 0) return predictions ``` ## Multi-dataset Model ``` input_dir = './../../train/KPI/' summary = pd.DataFrame(columns=['KPI', 'TP', 'TN', 'FP', 'FN', 'PRECISION', 'RECALL', 'F1_SCORE']) for fname in os.listdir(input_dir): df = pd.read_csv(os.path.join(input_dir, fname), index_col='timestamp') kpi_name = df['KPI ID'].values[0] print(kpi_name) df = df.drop(['KPI ID'], axis=1) # Normalize Values normalized_df=(df-df.min())/(df.max()-df.min()) normalized_df = normalized_df.astype({'label': 'int64'}) # Split to Train and Test train_set, test_set= np.split(normalized_df, [int(.75 *len(normalized_df))]) # Format Train and Test X = np.array(train_set['value']).reshape(-1, 1) y = np.array(train_set['label']) x_test = np.array(test_set['value']).reshape(-1,1) y_test = np.array(test_set['label']) # Check Valid Train Dataset if len(np.unique(y)) > 1: # Train Model model = IsolationForest(n_estimators=100,contamination=float(0.005)) model.fit(df.value.values.reshape(-1, 1)) # Make Predictions predictions = map_pred(model.predict(df.value.values.reshape(-1, 1))) # Compute Confusion Matrix cf = conf_mat(predictions,df.label.values) # F1-Score prec = 0 rec = 0 f1 = 0 if (cf[0] + cf[2]) != 0: prec = cf[0] / (cf[0] + cf[2]) if (cf[0] + cf[3]) != 0: rec = cf[0] / (cf[0] + cf[3]) if (prec + rec) != 0: f1 = 2 * (prec * rec / (prec+rec)) # print(f1_score(predictions,y_test)) summary = summary.append({'KPI': kpi_name, 'TP': cf[0], 'TN': cf[1], 'FP': cf[2], 'FN': cf[3], 'PRECISION': prec, 'RECALL': rec, 'F1_SCORE': f1 }, ignore_index=True) else: summary = summary.append({'KPI': kpi_name, 'TP': None, 'TN': None, 'FP': None, 'FN': None, 'PRECISION': None, 'RECALL': None, 'F1_SCORE': None }, ignore_index=True) # summary.to_csv('DT_Result.csv') ``` ## Single Class ``` input_dir = './../../test/KPI/' fname = 'test_88cf3a776ba00e7c.csv' input_dir = './../../train/KPI/' fname = 'train_88cf3a776ba00e7c.csv' summary = pd.DataFrame(columns=['KPI', 'TP', 'TN', 'FP', 'FN', 'PRECISION', 'RECALL', 'F1_SCORE']) df = pd.read_csv(os.path.join(input_dir, fname), index_col='timestamp') kpi_name = df['KPI ID'].values[0] print(kpi_name) df = df.drop(['KPI ID'], axis=1) # Format Train and Test # X = np.array(train_set['value']).reshape(-1, 1) # x_test = np.array(test_set['value']).reshape(-1,1) model = IsolationForest(n_estimators=100,contamination=float(0.01)) model.fit(df.value.values.reshape(-1, 1)) # Make Predictions predictions = map_pred(model.predict(df.value.values.reshape(-1, 1))) # Compute Confusion Matrix # cf = conf_mat(predictions,df.label.values) 
f1_score(predictions,df.label.values) unique, counts = np.unique(predictions, return_counts=True) dict(zip(unique, counts)) df['pred'] = predictions df[df['pred'] == 1] ``` ## Plot ``` df.label = df.label * 0.1 df.head(1440*50).plot(kind='line',figsize=(12,8)) plt.plot(predictions*0.1,alpha=.5) # plt.plot(np.squeeze(df_pred.values.T)*1000) ```
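The `contamination` parameter is hard-coded above (0.005 for the loop over all KPIs, 0.01 for the single KPI), and it directly controls how many points the model flags as anomalies. Because the KPI CSVs are not bundled here, the sketch below shows the effect of that parameter on a synthetic KPI-like series with injected spikes; the variable names and the contamination grid are illustrative.

```
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import f1_score

rng = np.random.RandomState(0)
values = rng.normal(0.5, 0.05, size=2000)              # stand-in for a normalised KPI series
anomaly_idx = rng.choice(len(values), size=20, replace=False)
values[anomaly_idx] += 0.5                              # inject obvious spikes
labels = np.zeros(len(values), dtype=int)
labels[anomaly_idx] = 1

for contamination in (0.005, 0.01, 0.02, 0.05):
    model = IsolationForest(n_estimators=100, contamination=contamination, random_state=0)
    raw_pred = model.fit_predict(values.reshape(-1, 1))  # -1 = anomaly, 1 = normal
    pred = (raw_pred == -1).astype(int)                  # same mapping as map_pred above
    print(f"contamination={contamination:.3f}: F1 = {f1_score(labels, pred):.3f}")
```

Setting the contamination well above the true anomaly rate inflates false positives, while setting it far below misses anomalies, so it is worth tuning per KPI rather than fixing one value for every series.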
github_jupyter
import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn import preprocessing from sklearn.ensemble import IsolationForest from sklearn.metrics import f1_score import glob import os %matplotlib inline def conf_mat(pred,truth): res = [0,0,0,0] a = 0 for i in range(len(truth)): if truth[i] == 1: if truth[i] == pred[i]: a = 0 else: a = 2 else: if truth[i] == pred[i]: a = 1 else: a = 3 res[a] = res[a] + 1 print(res) return res def map_pred(pred): res = np.zeros(len(pred)) for i in range(len(pred)): if pred[i] == -1: res[i] = 1 return res def naive_classifier_mean(x_test,mu,std,u_term,l_term): print(mu - l_term * std,mu + u_term * std) predictions = np.where( (x_test > (mu + u_term * std)) | (x_test < (mu - l_term * std)), 1, 0) return predictions def naive_classifier_mean_2(x_test,mu,std,u_term,l_term): print(mu - l_term * std,mu + u_term * std) predictions = np.where( (x_test < (mu + u_term * std)) & (x_test > (mu - l_term * std)), 1, 0) return predictions input_dir = './../../train/KPI/' summary = pd.DataFrame(columns=['KPI', 'TP', 'TN', 'FP', 'FN', 'PRECISION', 'RECALL', 'F1_SCORE']) for fname in os.listdir(input_dir): df = pd.read_csv(os.path.join(input_dir, fname), index_col='timestamp') kpi_name = df['KPI ID'].values[0] print(kpi_name) df = df.drop(['KPI ID'], axis=1) # Normalize Values normalized_df=(df-df.min())/(df.max()-df.min()) normalized_df = normalized_df.astype({'label': 'int64'}) # Split to Train and Test train_set, test_set= np.split(normalized_df, [int(.75 *len(normalized_df))]) # Format Train and Test X = np.array(train_set['value']).reshape(-1, 1) y = np.array(train_set['label']) x_test = np.array(test_set['value']).reshape(-1,1) y_test = np.array(test_set['label']) # Check Valid Train Dataset if len(np.unique(y)) > 1: # Train Model model = IsolationForest(n_estimators=100,contamination=float(0.005)) model.fit(df.value.values.reshape(-1, 1)) # Make Predictions predictions = map_pred(model.predict(df.value.values.reshape(-1, 1))) # Compute Confusion Matrix cf = conf_mat(predictions,df.label.values) # F1-Score prec = 0 rec = 0 f1 = 0 if (cf[0] + cf[2]) != 0: prec = cf[0] / (cf[0] + cf[2]) if (cf[0] + cf[3]) != 0: rec = cf[0] / (cf[0] + cf[3]) if (prec + rec) != 0: f1 = 2 * (prec * rec / (prec+rec)) # print(f1_score(predictions,y_test)) summary = summary.append({'KPI': kpi_name, 'TP': cf[0], 'TN': cf[1], 'FP': cf[2], 'FN': cf[3], 'PRECISION': prec, 'RECALL': rec, 'F1_SCORE': f1 }, ignore_index=True) else: summary = summary.append({'KPI': kpi_name, 'TP': None, 'TN': None, 'FP': None, 'FN': None, 'PRECISION': None, 'RECALL': None, 'F1_SCORE': None }, ignore_index=True) # summary.to_csv('DT_Result.csv') input_dir = './../../test/KPI/' fname = 'test_88cf3a776ba00e7c.csv' input_dir = './../../train/KPI/' fname = 'train_88cf3a776ba00e7c.csv' summary = pd.DataFrame(columns=['KPI', 'TP', 'TN', 'FP', 'FN', 'PRECISION', 'RECALL', 'F1_SCORE']) df = pd.read_csv(os.path.join(input_dir, fname), index_col='timestamp') kpi_name = df['KPI ID'].values[0] print(kpi_name) df = df.drop(['KPI ID'], axis=1) # Format Train and Test # X = np.array(train_set['value']).reshape(-1, 1) # x_test = np.array(test_set['value']).reshape(-1,1) model = IsolationForest(n_estimators=100,contamination=float(0.01)) model.fit(df.value.values.reshape(-1, 1)) # Make Predictions predictions = map_pred(model.predict(df.value.values.reshape(-1, 1))) # Compute Confusion Matrix # cf = conf_mat(predictions,df.label.values) f1_score(predictions,df.label.values) unique, counts = 
np.unique(predictions, return_counts=True) dict(zip(unique, counts)) df['pred'] = predictions df[df['pred'] == 1] df.label = df.label * 0.1 df.head(1440*50).plot(kind='line',figsize=(12,8)) plt.plot(predictions*0.1,alpha=.5) # plt.plot(np.squeeze(df_pred.values.T)*1000)
0.41324
0.766796
<a href="https://colab.research.google.com/github/murthylab/sleap/blob/main/docs/notebooks/Interactive_and_realtime_inference.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Interactive and realtime inference For most workflows, using the [`sleap-track` CLI](https://sleap.ai/guides/cli.html#sleap-track) is probably the most convenient option, but if you're developing a custom application you can take advantage of SLEAP's inference API to use your trained models in your own custom scripts. In this notebook we will explore how to predict poses from raw images in pure Python, and do some basic benchmarking on a simulated realtime predictor that could be used to enable closed-loop experiments. ## 1. Setup SLEAP Run this cell first to install SLEAP. If you get a dependency error in subsequent cells, just click **Runtime** → **Restart runtime** to reload the packages. Don't forget to set **Runtime** → **Change runtime type** → **GPU** as the accelerator. ``` # This should take care of all the dependencies on colab: !pip uninstall -y opencv-python opencv-contrib-python && pip install sleap # But to do it locally, we'd recommend the conda package (available on Windows + Linux): # conda create -n sleap -c sleap -c conda-forge -c nvidia sleap ``` Import SLEAP to make sure it installed correctly and print out some information about the system: ``` import sleap sleap.disable_preallocation() # This initializes the GPU and prevents TensorFlow from filling the entire GPU memory sleap.versions() sleap.system_summary() ``` ## 2. Setup data Before we start, let's download a raw video and a set of trained top-down ID models that we'll use to build our application around. ``` !curl -L --output video.mp4 https://storage.googleapis.com/sleap-data/reference/flies13/190719_090330_wt_18159206_rig1.2%4015000-17560.mp4 !curl -L --output centroid_model.zip https://storage.googleapis.com/sleap-data/reference/flies13/centroid.fast.210504_182918.centroid.n%3D1800.zip !curl -L --output centered_instance_id_model.zip https://storage.googleapis.com/sleap-data/reference/flies13/td_id.fast.v2.210519_111253.multi_class_topdown.n%3D1800.zip !ls -lah ``` **Note:** These zip files just have the contents of standard SLEAP model folders that are generated during training. ## 3. Interactive inference SLEAP provides a high-level API for performing inference in the form of `Predictor` classes specific to each approach/model type. To create one from a set of trained models, we can use the high-level `sleap.load_model()` function: ``` predictor = sleap.load_model(["centroid_model.zip", "centered_instance_id_model.zip"], batch_size=16) ``` This function handles all the logic of loading trained models, reading the configurations used to train them, and constructs inference models that also include non-trainable operations like peak finding and instance grouping. Next, we'll load a video that we want to use for inference. SLEAP `Video` objects don't actually load the whole video into memory, they just provide a common numpy-like interface for reading from different file formats: ``` video = sleap.load_video("video.mp4") video.shape, video.dtype ``` Our predictor is pretty flexible. It can handle a variety of different input formats, all of which will return a `Labels` object that contains all of our predictions: ``` # Load frames to a numpy array. imgs = video[:100] print(f"imgs.shape: {imgs.shape}") # Predict on numpy array. 
predictions = predictor.predict(imgs) predictions # Predict on the entire video with parallelizable loading/preprocessing: predictions = predictor.predict(video) predictions ``` We can then inspect the results of our predictor: ``` # Visualize a frame. predictions[100].plot(scale=0.25) # Inspect the contents of a single frame. labeled_frame = predictions[100] labeled_frame.instances # Convert an instance to a numpy array: labeled_frame[0].numpy() ``` What if we don't want or need the inference results wrapped in the SLEAP structures? By using the low-level inference model, we can actually go directly from image to numpy arrays of our results: ``` imgs = video[:16] # batch of 16 images predictions = predictor.inference_model.predict(imgs, numpy=True) predictions for key, value in predictions.items(): print(f"'{key}': {value.shape} ({value.dtype})") ``` ## 4. Realtime performance Now that we know how to do inference with different types of outputs, let's try to use that to build a simulated "realtime" application with timing. First, we'll create a class that simulates a camera grabber API that provides a sequence of pre-loaded frames. ``` from time import perf_counter import numpy as np class SimulatedCamera: """Simulated camera class that serves frames from memory continuously. Attributes: frames: Numpy array with pre-loaded frames. frame_counter: Count of frames that have been grabbed. """ frames: np.ndarray frame_counter: int def __init__(self, frames): self.frames = frames self.frame_counter = 0 def grab_frame(self): idx = self.frame_counter % len(self.frames) self.frame_counter += 1 return self.frames[idx] ``` Then, we'll define a simply acquisition loop, in which we repeatedly grab a frame and perform inference to time how long it takes. ``` recording_duration = 100 # session length in frames # Pre-load images onto "camera" camera = SimulatedCamera(video[:512]) # Camera capture loop inference_times = [] frames_recorded = 0 while frames_recorded < recording_duration: # Get the next frame. frame = camera.grab_frame() frames_recorded += 1 # Get inference results for the frame and time how long it took. t0 = perf_counter() frame_predictions = predictor.inference_model.predict_on_batch(np.expand_dims(frame, axis=0)) dt = perf_counter() - t0 inference_times.append(dt) # Convert to milliseconds. inference_times = np.array(inference_times) * 1000 # Separate out first timing from the rest. The first inference call is much slower as it builds the compute graph. first_inference_time, inference_times = inference_times[0], inference_times[1:] print(f"First inference time: {first_inference_time:.1f} ms") print(f"Inference times: {inference_times.mean():.1f} +- {inference_times.std():.1f} ms") ``` After the first batch, our inference latencies go way down and we can see how they vary over time: ``` import matplotlib.pyplot as plt plt.figure(figsize=(10, 4), dpi=120, facecolor="w") plt.plot(inference_times, ".") plt.xlabel("Time (frames)") plt.ylabel("Inference latency (ms)") plt.grid(True); plt.figure(figsize=(6, 4), dpi=120, facecolor="w") plt.hist(inference_times, bins=30) plt.xlabel("Inference latency (ms)") plt.ylabel("PDF"); ```
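Mean and standard deviation can hide occasional slow frames, which is exactly what matters for closed-loop control. Assuming the `inference_times` array from the loop above is still in scope, the short sketch below summarises the latency distribution with percentiles and converts the median into an approximate upper bound on sustained frame rate.

```
import numpy as np

# Summarise the latency distribution collected above (values are already in milliseconds).
p50, p95, p99 = np.percentile(inference_times, [50, 95, 99])
print(f"median latency: {p50:.1f} ms | p95: {p95:.1f} ms | p99: {p99:.1f} ms")

# Rough upper bound on sustained frame rate if frames are processed back-to-back.
print(f"approximate max frame rate: {1000.0 / p50:.1f} fps")
```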
github_jupyter
# This should take care of all the dependencies on colab: !pip uninstall -y opencv-python opencv-contrib-python && pip install sleap # But to do it locally, we'd recommend the conda package (available on Windows + Linux): # conda create -n sleap -c sleap -c conda-forge -c nvidia sleap import sleap sleap.disable_preallocation() # This initializes the GPU and prevents TensorFlow from filling the entire GPU memory sleap.versions() sleap.system_summary() !curl -L --output video.mp4 https://storage.googleapis.com/sleap-data/reference/flies13/190719_090330_wt_18159206_rig1.2%4015000-17560.mp4 !curl -L --output centroid_model.zip https://storage.googleapis.com/sleap-data/reference/flies13/centroid.fast.210504_182918.centroid.n%3D1800.zip !curl -L --output centered_instance_id_model.zip https://storage.googleapis.com/sleap-data/reference/flies13/td_id.fast.v2.210519_111253.multi_class_topdown.n%3D1800.zip !ls -lah predictor = sleap.load_model(["centroid_model.zip", "centered_instance_id_model.zip"], batch_size=16) video = sleap.load_video("video.mp4") video.shape, video.dtype # Load frames to a numpy array. imgs = video[:100] print(f"imgs.shape: {imgs.shape}") # Predict on numpy array. predictions = predictor.predict(imgs) predictions # Predict on the entire video with parallelizable loading/preprocessing: predictions = predictor.predict(video) predictions # Visualize a frame. predictions[100].plot(scale=0.25) # Inspect the contents of a single frame. labeled_frame = predictions[100] labeled_frame.instances # Convert an instance to a numpy array: labeled_frame[0].numpy() imgs = video[:16] # batch of 16 images predictions = predictor.inference_model.predict(imgs, numpy=True) predictions for key, value in predictions.items(): print(f"'{key}': {value.shape} ({value.dtype})") from time import perf_counter import numpy as np class SimulatedCamera: """Simulated camera class that serves frames from memory continuously. Attributes: frames: Numpy array with pre-loaded frames. frame_counter: Count of frames that have been grabbed. """ frames: np.ndarray frame_counter: int def __init__(self, frames): self.frames = frames self.frame_counter = 0 def grab_frame(self): idx = self.frame_counter % len(self.frames) self.frame_counter += 1 return self.frames[idx] recording_duration = 100 # session length in frames # Pre-load images onto "camera" camera = SimulatedCamera(video[:512]) # Camera capture loop inference_times = [] frames_recorded = 0 while frames_recorded < recording_duration: # Get the next frame. frame = camera.grab_frame() frames_recorded += 1 # Get inference results for the frame and time how long it took. t0 = perf_counter() frame_predictions = predictor.inference_model.predict_on_batch(np.expand_dims(frame, axis=0)) dt = perf_counter() - t0 inference_times.append(dt) # Convert to milliseconds. inference_times = np.array(inference_times) * 1000 # Separate out first timing from the rest. The first inference call is much slower as it builds the compute graph. 
first_inference_time, inference_times = inference_times[0], inference_times[1:] print(f"First inference time: {first_inference_time:.1f} ms") print(f"Inference times: {inference_times.mean():.1f} +- {inference_times.std():.1f} ms") import matplotlib.pyplot as plt plt.figure(figsize=(10, 4), dpi=120, facecolor="w") plt.plot(inference_times, ".") plt.xlabel("Time (frames)") plt.ylabel("Inference latency (ms)") plt.grid(True); plt.figure(figsize=(6, 4), dpi=120, facecolor="w") plt.hist(inference_times, bins=30) plt.xlabel("Inference latency (ms)") plt.ylabel("PDF");
0.872646
0.962813
# Query 3.1 Import the file 'gold.csv' (you will find this in the intro section to download or in '/Data/gold.csv' if you are using the jupyter notebook), which contains the data of the last 2 years price action of Indian (MCX) gold standard. Explore the dataframe. You'd see 2 unique columns - 'Pred' and 'new'. One of the 2 columns is a linear combination of the OHLC prices with varying coefficients while the other is a polynomial function of the same inputs. Also, one of the 2 columns is partially filled. Using linear regression, find the coefficients of the inputs and using the same trained model, complete the entire column. Also, try to fit the other column as well using a new linear regression model. Check if the predictions are accurate. Mention which column is a linear function and which is polynomial. (Hint: Plotting a histogram & distplot helps in recognizing the discrepancies in prediction, if any.) ``` import pandas as pd gold_data = pd.read_csv('GOLD.csv') gold_data gold_data.set_index('Date',inplace=True) gold_data gold_without_nan = gold_data.dropna() gold_without_nan # imports import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score #dataset y = np.array(gold_without_nan["Pred"]) x = np.array(gold_without_nan["new"]) x = x.reshape(-1,1) y = y.reshape(-1,1) # scikit-learn implementation # Model initialization regression_model = LinearRegression() # Fit the data (train the model) regression_model.fit(x, y) # Predict y_predicted = regression_model.predict(x) # model evaluation rmse = mean_squared_error(y, y_predicted) r2 = r2_score(y, y_predicted) # printing values print('Slope:', regression_model.coef_) print('Intercept:', regression_model.intercept_) print('Root mean squared error: ', rmse) print('R2 score: ', r2) # plotting values # data points plt.scatter(x, y, s=10) plt.xlabel('x') plt.ylabel('y') # predicted values plt.plot(x, y_predicted, color='r') plt.show() pre_data = gold_data[:] pre_data_new = pre_data['new'] pre_data_new = pre_data_new.values.reshape(-1,1) na_data = (regression_model.predict(pre_data_new))#getting predicted values na_data_series = pd.Series(na_data.ravel()) sata = na_data_series.to_frame() gold_data['Pred'] = sata gold_data # imports import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score #dataset y = np.array(gold_data["new"]) x = np.array(gold_data["Pred"]) x = x.reshape(-1,1) y = y.reshape(-1,1) # scikit-learn implementation # Model initialization regression_model = LinearRegression() # Fit the data (train the model) regression_model.fit(x, y) # Predict y_predicted = regression_model.predict(x) # model evaluation rmse = mean_squared_error(y, y_predicted) r2 = r2_score(y, y_predicted) # printing values print('Slope:', regression_model.coef_) print('Intercept:', regression_model.intercept_) print('Root mean squared error: ', rmse) print('R2 score: ', r2) # plotting values # data points plt.scatter(x, y, s=10) plt.xlabel('x') plt.ylabel('y') # predicted values plt.plot(x, y_predicted, color='r') plt.show() import matplotlib.pyplot as plt plt.hist(gold_data['Pred']) plt.show() import seaborn as sns sns.distplot(gold_data['Pred']) plt.show() ``` # Query 3.2 Import the stock of your choosing AND the Nifty index. Using linear regression (OLS), calculate - The daily Beta value for the past 3 months. (Daily= Daily returns) The monthly Beta value.
(Monthly= Monthly returns) Refrain from using the (covariance(x,y)/variance(x)) formula. Attempt the question using regression. (Regression Reference) Were the Beta values more or less than 1? What if they were negative? Discuss. Include a brief write-up at the bottom of your Jupyter notebook with your inferences from the Beta values and regression results. ``` tcs_data = pd.read_csv('tcs_stock_data.csv') tcs_data['Date'] = pd.to_datetime(tcs_data['Date']) tcs_data = tcs_data.sort_values('Date') tcs_data.set_index('Date', inplace=True) tcs_data nifty_data = pd.read_csv('NIFTY50_Data.csv') nifty_data['Date'] = pd.to_datetime(nifty_data['Date']) nifty_data = nifty_data.sort_values('Date') nifty_data.set_index('Date', inplace=True) nifty_data fil_tcs = tcs_data[405:] fil_nifty = nifty_data[405:] return_tcs = fil_tcs['Close Price'].pct_change() return_nifty = fil_nifty['Close'].pct_change() plt.figure(figsize=(20,10)) return_tcs.plot() return_nifty.plot() plt.ylabel("Daily Return of TCS and NIFTY") plt.show() fil_tcs['pct_change'] = fil_tcs['Close Price'].pct_change() fil_nifty['pct_change'] = fil_nifty['Close'].pct_change() x = fil_tcs['pct_change'].dropna() y = fil_nifty['pct_change'].dropna() import pandas as pd import statsmodels.api as sm myModel = sm.OLS(y,x).fit() myModel.summary() import pandas as pd import statsmodels.api as sm ''' Download monthly prices of TCS and NIFTY 50 for Time period: 1-Jan-2014--12-Jan-2017 ''' tcs = pd.read_csv('TCS.NS.csv', parse_dates=True, index_col='Date',) nifty50 = pd.read_csv('^NSEI.csv', parse_dates=True, index_col='Date') # joining the closing prices of the two datasets monthly_prices = pd.concat([tcs['Close'], nifty50['Close']], axis=1) monthly_prices.columns = ['TCS', 'NIFTY50'] # check the head of the dataframe print(monthly_prices.head()) # calculate monthly returns monthly_returns = monthly_prices.pct_change(1) clean_monthly_returns = monthly_returns.dropna(axis=0) # drop first missing row print(clean_monthly_returns.head()) # split dependent and independent variable X = clean_monthly_returns['TCS'] y = clean_monthly_returns['NIFTY50'] # Add a constant to the independent value X1 = sm.add_constant(X) # make regression model model = sm.OLS(y, X1) # fit model and print results results = model.fit() print(results.summary()) # Daily beta value for the past 3 months for the stock # TCS is 0.1968 which is less than 1 and hence it is # less volatile than the benchmark # The monthly beta value for the stock TCS is 0.1327 # which is less than 1 and hence it is less volatile # than the benchmark ```
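As a sanity check on the regression-based definition of beta, the sketch below builds a synthetic daily-returns series with a known beta and recovers it with OLS. Note that in this convention the stock return is the dependent variable and the index return is the regressor, so beta is the fitted slope; all names and numbers here are illustrative.

```
import numpy as np
import statsmodels.api as sm

rng = np.random.RandomState(0)
market = rng.normal(0, 0.01, size=250)                     # one year of daily index returns
true_beta = 1.4
stock = 0.0002 + true_beta * market + rng.normal(0, 0.005, size=250)

# Beta is the slope from regressing the stock's returns on the index's returns.
X1 = sm.add_constant(market)
results = sm.OLS(stock, X1).fit()
print("estimated [alpha, beta]:", results.params)          # beta should come out close to 1.4
```

A beta below 1 (as reported for TCS above) means the stock tends to move less than the index, a beta above 1 means it amplifies index moves, and a negative beta means it tends to move in the opposite direction to the index.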
github_jupyter
import pandas as pd gold_data = pd.read_csv('GOLD.csv') gold_data gold_data.set_index('Date',inplace=True) gold_data gold_without_nan = gold_data.dropna() gold_without_nan # imports import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score #dataset y = np.array(gold_without_nan["Pred"]) x = np.array(gold_without_nan["new"]) x = x.reshape(-1,1) y = y.reshape(-1,1) # sckit-learn implementation # Model initialization regression_model = LinearRegression() # Fit the data(train the model) regression_model.fit(x, y) # Predict y_predicted = regression_model.predict(x) # model evaluation rmse = mean_squared_error(y, y_predicted) r2 = r2_score(y, y_predicted) # printing values print('Slope:' ,regression_model.coef_) print('Intercept:', regression_model.intercept_) print('Root mean squared error: ', rmse) print('R2 score: ', r2) # plotting values # data points plt.scatter(x, y, s=10) plt.xlabel('x') plt.ylabel('y') # predicted values plt.plot(x, y_predicted, color='r') plt.show() pre_data = gold_data[:] pre_data_new = pre_data['new'] pre_data_new = pre_data_new.values.reshape(-1,1) na_data = (regression_model.predict(pre_data_new))#getting predicted values na_data_series = pd.Series(na_data.ravel()) sata = na_data_series.to_frame() gold_data['Pred'] = sata gold_data # imports import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score #dataset y = np.array(gold_data["new"]) x = np.array(gold_data["Pred"]) x = x.reshape(-1,1) y = y.reshape(-1,1) # sckit-learn implementation # Model initialization regression_model = LinearRegression() # Fit the data(train the model) regression_model.fit(x, y) # Predict y_predicted = regression_model.predict(x) # model evaluation rmse = mean_squared_error(y, y_predicted) r2 = r2_score(y, y_predicted) # printing values print('Slope:' ,regression_model.coef_) print('Intercept:', regression_model.intercept_) print('Root mean squared error: ', rmse) print('R2 score: ', r2) # plotting values # data points plt.scatter(x, y, s=10) plt.xlabel('x') plt.ylabel('y') # predicted values plt.plot(x, y_predicted, color='r') plt.show() import matplotlib.pyplot as plt plt.hist(gold_data['Pred']) plt.show() import seaborn as sns sns.distplot(gold_data['Pred']) plt.show() tcs_data = pd.read_csv('tcs_stock_data.csv') tcs_data['Date'] = pd.to_datetime(tcs_data['Date']) tcs_data = tcs_data.sort_values('Date') tcs_data.set_index('Date', inplace=True) tcs_data nifty_data = pd.read_csv('NIFTY50_Data.csv') nifty_data['Date'] = pd.to_datetime(nifty_data['Date']) nifty_data = nifty_data.sort_values('Date') nifty_data.set_index('Date', inplace=True) nifty_data fil_tcs = tcs_data[405:] fil_nifty = nifty_data[405:] return_tcs = fil_tcs['Close Price'].pct_change() return_nifty = fil_nifty['Close'].pct_change() plt.figure(figsize=(20,10)) return_tcs.plot() return_nifty.plot() plt.ylabel("Daily Return of TCS and NIFTY") plt.show() fil_tcs['pct_change'] = fil_tcs['Close Price'].pct_change() fil_nifty['pct_change'] = fil_nifty['Close'].pct_change() x = fil_tcs['pct_change'].dropna() y = fil_nifty['pct_change'].dropna() import pandas as pd import statsmodels.api as sm myModel = sm.OLS(y,x).fit() myModel.summary() import pandas as pd import statsmodels.api as sm ''' Download monthly prices of TCS and NIFTY 50 for Time period: 1-Jan-2014--12-Jan-2017 ''' tcs = pd.read_csv('TCS.NS.csv', parse_dates=True, 
index_col='Date',) nifty50 = pd.read_csv('^NSEI.csv', parse_dates=True, index_col='Date') # joining the closing prices of the two datasets monthly_prices = pd.concat([tcs['Close'], nifty50['Close']], axis=1) monthly_prices.columns = ['TCS', 'NIFTY50'] # check the head of the dataframe print(monthly_prices.head()) # calculate monthly returns monthly_returns = monthly_prices.pct_change(1) clean_monthly_returns = monthly_returns.dropna(axis=0) # drop first missing row print(clean_monthly_returns.head()) # split dependent and independent variable X = clean_monthly_returns['TCS'] y = clean_monthly_returns['NIFTY50'] # Add a constant to the independent value X1 = sm.add_constant(X) # make regression model model = sm.OLS(y, X1) # fit model and print results results = model.fit() print(results.summary()) # Daily beta value for the past 3 motnhs for the stock #TCS is 0.1968 which is less than 1 and hence it is # less volatile than the benchmark # The monthly beta value for the stock TCS is 0.1327 # which is less than 1 and hence it is less volatile # than the benchmark
0.63409
0.909787
``` # If you run on colab uncomment the following line #!pip install git+https://github.com/clementchadebec/benchmark_VAE.git import torch import torchvision.datasets as datasets %load_ext autoreload %autoreload 2 mnist_trainset = datasets.MNIST(root='../../data', train=True, download=True, transform=None) train_dataset = mnist_trainset.data[:-10000].reshape(-1, 1, 28, 28) / 255. eval_dataset = mnist_trainset.data[-10000:].reshape(-1, 1, 28, 28) / 255. from pythae.models import HVAE, HVAEConfig from pythae.trainers import BaseTrainingConfig from pythae.pipelines.training import TrainingPipeline from pythae.models.nn.benchmarks.mnist import Encoder_VAE_MNIST, Decoder_AE_MNIST config = BaseTrainingConfig( output_dir='my_model', learning_rate=1e-3, batch_size=100, num_epochs=100, ) model_config = HVAEConfig( input_dim=(1, 28, 28), latent_dim=16, n_lf=1, eps_lf=0.001, beta_zero=0.3, ) model = HVAE( model_config=model_config, encoder=Encoder_VAE_MNIST(model_config), decoder=Decoder_AE_MNIST(model_config) ) pipeline = TrainingPipeline( training_config=config, model=model ) pipeline( train_data=train_dataset, eval_data=eval_dataset ) import os last_training = sorted(os.listdir('my_model'))[-1] trained_model = HVAE.load_from_folder(os.path.join('my_model', last_training, 'final_model')) from pythae.samplers import NormalSampler # create normal sampler normal_samper = NormalSampler( model=trained_model ) # sample gen_data = normal_samper.sample( num_samples=25 ) import matplotlib.pyplot as plt # show results with normal sampler fig, axes = plt.subplots(nrows=5, ncols=5, figsize=(10, 10)) for i in range(5): for j in range(5): axes[i][j].imshow(gen_data[i*5 +j].cpu().squeeze(0), cmap='gray') axes[i][j].axis('off') plt.tight_layout(pad=0.) from pythae.samplers import GaussianMixtureSampler, GaussianMixtureSamplerConfig # set up gmm sampler config gmm_sampler_config = GaussianMixtureSamplerConfig( n_components=10 ) # create gmm sampler gmm_sampler = GaussianMixtureSampler( sampler_config=gmm_sampler_config, model=trained_model ) # fit the sampler gmm_sampler.fit(train_dataset) # sample gen_data = gmm_sampler.sample( num_samples=25 ) # show results with gmm sampler fig, axes = plt.subplots(nrows=5, ncols=5, figsize=(10, 10)) for i in range(5): for j in range(5): axes[i][j].imshow(gen_data[i*5 +j].cpu().squeeze(0), cmap='gray') axes[i][j].axis('off') plt.tight_layout(pad=0.) ``` ## ... the other samplers work the same
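The two sampler cells above repeat the same plotting loop. A small helper makes it easier to compare samplers side by side using only the objects already created (`trained_model`, `normal_samper`, `gmm_sampler`); `show_samples` is an illustrative name, not part of the pythae API.

```
import matplotlib.pyplot as plt

def show_samples(samples, title, n_rows=5, n_cols=5):
    """Plot a grid of generated digits, factoring out the loop repeated above."""
    fig, axes = plt.subplots(nrows=n_rows, ncols=n_cols, figsize=(8, 8))
    for i in range(n_rows):
        for j in range(n_cols):
            axes[i][j].imshow(samples[i * n_cols + j].cpu().squeeze(0), cmap='gray')
            axes[i][j].axis('off')
    fig.suptitle(title)
    plt.tight_layout(pad=0.)

# Compare the two samplers fitted above on the same trained model.
show_samples(normal_samper.sample(num_samples=25), "NormalSampler")
show_samples(gmm_sampler.sample(num_samples=25), "GaussianMixtureSampler")
```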
github_jupyter
# If you run on colab uncomment the following line #!pip install git+https://github.com/clementchadebec/benchmark_VAE.git import torch import torchvision.datasets as datasets %load_ext autoreload %autoreload 2 mnist_trainset = datasets.MNIST(root='../../data', train=True, download=True, transform=None) train_dataset = mnist_trainset.data[:-10000].reshape(-1, 1, 28, 28) / 255. eval_dataset = mnist_trainset.data[-10000:].reshape(-1, 1, 28, 28) / 255. from pythae.models import HVAE, HVAEConfig from pythae.trainers import BaseTrainingConfig from pythae.pipelines.training import TrainingPipeline from pythae.models.nn.benchmarks.mnist import Encoder_VAE_MNIST, Decoder_AE_MNIST config = BaseTrainingConfig( output_dir='my_model', learning_rate=1e-3, batch_size=100, num_epochs=100, ) model_config = HVAEConfig( input_dim=(1, 28, 28), latent_dim=16, n_lf=1, eps_lf=0.001, beta_zero=0.3, ) model = HVAE( model_config=model_config, encoder=Encoder_VAE_MNIST(model_config), decoder=Decoder_AE_MNIST(model_config) ) pipeline = TrainingPipeline( training_config=config, model=model ) pipeline( train_data=train_dataset, eval_data=eval_dataset ) import os last_training = sorted(os.listdir('my_model'))[-1] trained_model = HVAE.load_from_folder(os.path.join('my_model', last_training, 'final_model')) from pythae.samplers import NormalSampler # create normal sampler normal_samper = NormalSampler( model=trained_model ) # sample gen_data = normal_samper.sample( num_samples=25 ) import matplotlib.pyplot as plt # show results with normal sampler fig, axes = plt.subplots(nrows=5, ncols=5, figsize=(10, 10)) for i in range(5): for j in range(5): axes[i][j].imshow(gen_data[i*5 +j].cpu().squeeze(0), cmap='gray') axes[i][j].axis('off') plt.tight_layout(pad=0.) from pythae.samplers import GaussianMixtureSampler, GaussianMixtureSamplerConfig # set up gmm sampler config gmm_sampler_config = GaussianMixtureSamplerConfig( n_components=10 ) # create gmm sampler gmm_sampler = GaussianMixtureSampler( sampler_config=gmm_sampler_config, model=trained_model ) # fit the sampler gmm_sampler.fit(train_dataset) # sample gen_data = gmm_sampler.sample( num_samples=25 ) # show results with gmm sampler fig, axes = plt.subplots(nrows=5, ncols=5, figsize=(10, 10)) for i in range(5): for j in range(5): axes[i][j].imshow(gen_data[i*5 +j].cpu().squeeze(0), cmap='gray') axes[i][j].axis('off') plt.tight_layout(pad=0.)
0.609873
0.621268
``` import os import numpy as np import pandas as pd import dash from dash import dcc, html from dash.dependencies import Input, Output import dash_bootstrap_components as dbc import plotly.express as px import plotly.graph_objects as go os.getcwd() # get data # identify NaN values as empty space, don't import NaN values to dataframe df = pd.read_csv("energeetika.csv", na_values='', keep_default_na=True) # here defo better way, remove unnecessary columns df = df.drop(df.columns[5:], axis=1) # remove column df = df.drop(df.columns[1], axis=1) df = df.drop(df.columns[2], axis=1) # clean up names df.columns = df.columns.str.strip().str.lower().str.replace(' ', '_').str.replace('(', '').str.replace(')', '') df # prepare data for plotting df['muu_heide'] = (df['koguheide_maakasutuseta_kt_co2_ekv'] - df['energeetika_kt_co2_ekv']) df['en_protsent'] = (df['energeetika_kt_co2_ekv'] / df['koguheide_maakasutuseta_kt_co2_ekv'] * 100) df['muu_protsent'] = (100 - df['en_protsent']) # create web application app = dash.Dash(__name__, external_stylesheets=[dbc.themes.FLATLY]) # define markdown texts md_top = ''' # Kliimaandmed Interaktiivne rakendus kliimaandmetes orienteerumiseks. ''' md_dropdown = ''' ### Vali graafik: ''' md_slider = ''' ### Vali ajavahemik: ''' md_footnote = ''' *Selle graafiku koostamisel pole võetud arvesse kasvuhoonegaaside heidet maakasutuse muutustest. ''' app.layout = dbc.Container([ # must declare bootstrap theme at top or none of this will work # first row for title and subtitle dbc.Row([ html.Div([ dcc.Markdown(children=md_top, style={'text-align': 'center'}) ]) ]), # second row for the graph and its interactive elements dbc.Row([ html.Div([ dcc.Markdown(children=md_dropdown), dcc.Dropdown( id='dropdown', options=[ {'label': 'suhteline heide', 'value': 'protsent'}, {'label': 'koguheide', 'value': 'kogu'}, ], value='protsent', style={"width": "60%"}), ]), html.Div(dcc.Graph(id='graph')), html.Div([ dcc.Markdown(children=md_slider), dcc.RangeSlider( id='slider', min=1990, max=2020, step=1, value=[1990,2020], allowCross=False, tooltip={"placement": "top", "always_visible": True}) ]) ]), # third row for the footnote dbc.Row([ html.Div(dcc.Markdown(children=md_footnote)) ]) ]) @app.callback( Output('graph', 'figure'), [Input(component_id='dropdown', component_property='value'), Input(component_id='slider', component_property='value')] ) def select_graph(plot_type,year_range): df_copy = df.copy() # make copy of dataframe df_copy = df_copy[df_copy['aasta'].between(year_range[0], year_range[1])] if plot_type == 'protsent': fig = px.bar(df_copy, # suhteline heide, graafik protsentides x='aasta', y=['en_protsent','muu_protsent'], barmode='stack', color_discrete_map={ 'en_protsent': '#440154', 'muu_protsent': '#5ec962'}, opacity=0.7) return fig elif plot_type == 'kogu': fig = px.bar(df_copy, # absoluutne heide, graafik kt CO2 ekv x='aasta', y=['energeetika_kt_co2_ekv','muu_heide'], barmode='stack', color_discrete_map={ 'energeetika_kt_co2_ekv': '#440154', 'muu_heide': '#5ec962'}, opacity=0.7) return fig app.run_server(debug=True, use_reloader=False) ```
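Because the callback is registered on a plain Python function, it can be exercised without launching the Dash server, which is handy for quick checks. The sketch below assumes the dataframe `df` and the `select_graph` callback defined above are in scope; the chosen year range is arbitrary.

```
# The callback is an ordinary Python function, so it can be called directly for a quick check
# without starting the server; 'protsent'/'kogu' and the year range mirror the widgets above.
fig_percent = select_graph('protsent', [1995, 2005])
fig_total = select_graph('kogu', [1995, 2005])

print(type(fig_percent))        # a plotly Figure object
print(len(fig_percent.data))    # two stacked bar traces, one per series
fig_percent.show()              # render the relative-emissions chart on its own
```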
github_jupyter
import os import numpy as np import pandas as pd import dash from dash import dcc, html from dash.dependencies import Input, Output import dash_bootstrap_components as dbc import plotly.express as px import plotly.graph_objects as go os.getcwd() # get data # identify NaN values as empty space, don't import NaN values to dataframe df = pd.read_csv("energeetika.csv", na_values='', keep_default_na=True) # here defo better way, remove unnecessary columns df = df.drop(df.columns[5:], axis=1) # remove column df = df.drop(df.columns[1], axis=1) df = df.drop(df.columns[2], axis=1) # clean up names df.columns = df.columns.str.strip().str.lower().str.replace(' ', '_').str.replace('(', '').str.replace(')', '') df # prepare data for plotting df['muu_heide'] = (df['koguheide_maakasutuseta_kt_co2_ekv'] - df['energeetika_kt_co2_ekv']) df['en_protsent'] = (df['energeetika_kt_co2_ekv'] / df['koguheide_maakasutuseta_kt_co2_ekv'] * 100) df['muu_protsent'] = (100 - df['en_protsent']) # create web application app = dash.Dash(__name__, external_stylesheets=[dbc.themes.FLATLY]) # define markdown texts md_top = ''' # Kliimaandmed Interaktiivne rakendus kliimaandmetes orienteerumiseks. ''' md_dropdown = ''' ### Vali graafik: ''' md_slider = ''' ### Vali ajavahemik: ''' md_footnote = ''' *Selle graafiku koostamisel pole võetud arvesse kasvuhoonegaaside heidet maakasutuse muutustest. ''' app.layout = dbc.Container([ # must declare bootstrap theme at top or none of this will work # first row for title and subtitle dbc.Row([ html.Div([ dcc.Markdown(children=md_top, style={'text-align': 'center'}) ]) ]), # second row for the graph and its interactive elements dbc.Row([ html.Div([ dcc.Markdown(children=md_dropdown), dcc.Dropdown( id='dropdown', options=[ {'label': 'suhteline heide', 'value': 'protsent'}, {'label': 'koguheide', 'value': 'kogu'}, ], value='protsent', style={"width": "60%"}), ]), html.Div(dcc.Graph(id='graph')), html.Div([ dcc.Markdown(children=md_slider), dcc.RangeSlider( id='slider', min=1990, max=2020, step=1, value=[1990,2020], allowCross=False, tooltip={"placement": "top", "always_visible": True}) ]) ]), # third row for the footnote dbc.Row([ html.Div(dcc.Markdown(children=md_footnote)) ]) ]) @app.callback( Output('graph', 'figure'), [Input(component_id='dropdown', component_property='value'), Input(component_id='slider', component_property='value')] ) def select_graph(plot_type,year_range): df_copy = df.copy() # make copy of dataframe df_copy = df_copy[df_copy['aasta'].between(year_range[0], year_range[1])] if plot_type == 'protsent': fig = px.bar(df_copy, # suhteline heide, graafik protsentides x='aasta', y=['en_protsent','muu_protsent'], barmode='stack', color_discrete_map={ 'en_protsent': '#440154', 'muu_protsent': '#5ec962'}, opacity=0.7) return fig elif plot_type == 'kogu': fig = px.bar(df_copy, # absoluutne heide, graafik kt CO2 ekv x='aasta', y=['energeetika_kt_co2_ekv','muu_heide'], barmode='stack', color_discrete_map={ 'energeetika_kt_co2_ekv': '#440154', 'muu_heide': '#5ec962'}, opacity=0.7) return fig app.run_server(debug=True, use_reloader=False)
0.314577
0.165121
# Comparison of Batch, Mini-Batch and Stochastic Gradient Descent This notebook displays an animation comparing Batch, Mini-Batch and Stochastic Gradient Descent (introduced in Chapter 4). Thanks to [Daniel Ingram](https://github.com/daniel-s-ingram) who contributed this notebook. ``` from __future__ import print_function, division, unicode_literals import numpy as np %matplotlib nbagg import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation m = 100 X = 2*np.random.rand(m, 1) X_b = np.c_[np.ones((m, 1)), X] y = 4 + 3*X + np.random.rand(m, 1) def batch_gradient_descent(): n_iterations = 1000 learning_rate = 0.05 thetas = np.random.randn(2, 1) thetas_path = [thetas] for i in range(n_iterations): gradients = 2*X_b.T.dot(X_b.dot(thetas) - y)/m thetas = thetas - learning_rate*gradients thetas_path.append(thetas) return thetas_path def stochastic_gradient_descent(): n_epochs = 50 t0, t1 = 5, 50 thetas = np.random.randn(2, 1) thetas_path = [thetas] for epoch in range(n_epochs): for i in range(m): random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2*xi.T.dot(xi.dot(thetas) - yi) eta = learning_schedule(epoch*m + i, t0, t1) thetas = thetas - eta*gradients thetas_path.append(thetas) return thetas_path def mini_batch_gradient_descent(): n_iterations = 50 minibatch_size = 20 t0, t1 = 200, 1000 thetas = np.random.randn(2, 1) thetas_path = [thetas] t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2*xi.T.dot(xi.dot(thetas) - yi)/minibatch_size eta = learning_schedule(t, t0, t1) thetas = thetas - eta*gradients thetas_path.append(thetas) return thetas_path def compute_mse(theta): return np.sum((np.dot(X_b, theta) - y)**2)/m def learning_schedule(t, t0, t1): return t0/(t+t1) theta0, theta1 = np.meshgrid(np.arange(0, 5, 0.1), np.arange(0, 5, 0.1)) r, c = theta0.shape cost_map = np.array([[0 for _ in range(c)] for _ in range(r)]) for i in range(r): for j in range(c): theta = np.array([theta0[i,j], theta1[i,j]]) cost_map[i,j] = compute_mse(theta) exact_solution = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) bgd_thetas = np.array(batch_gradient_descent()) sgd_thetas = np.array(stochastic_gradient_descent()) mbgd_thetas = np.array(mini_batch_gradient_descent()) bgd_len = len(bgd_thetas) sgd_len = len(sgd_thetas) mbgd_len = len(mbgd_thetas) n_iter = min(bgd_len, sgd_len, mbgd_len) fig = plt.figure(figsize=(10, 5)) data_ax = fig.add_subplot(121) cost_ax = fig.add_subplot(122) cost_ax.plot(exact_solution[0,0], exact_solution[1,0], 'y*') cost_img = cost_ax.pcolor(theta0, theta1, cost_map) fig.colorbar(cost_img) def animate(i): data_ax.cla() cost_ax.cla() data_ax.plot(X, y, 'k.') cost_ax.plot(exact_solution[0,0], exact_solution[1,0], 'y*') cost_ax.pcolor(theta0, theta1, cost_map) data_ax.plot(X, X_b.dot(bgd_thetas[i,:]), 'r-') cost_ax.plot(bgd_thetas[:i,0], bgd_thetas[:i,1], 'r--') data_ax.plot(X, X_b.dot(sgd_thetas[i,:]), 'g-') cost_ax.plot(sgd_thetas[:i,0], sgd_thetas[:i,1], 'g--') data_ax.plot(X, X_b.dot(mbgd_thetas[i,:]), 'b-') cost_ax.plot(mbgd_thetas[:i,0], mbgd_thetas[:i,1], 'b--') data_ax.set_xlim([0, 2]) data_ax.set_ylim([0, 15]) cost_ax.set_xlim([0, 5]) cost_ax.set_ylim([0, 5]) data_ax.set_xlabel(r'$x_1$') data_ax.set_ylabel(r'$y$', rotation=0) cost_ax.set_xlabel(r'$\theta_0$') 
cost_ax.set_ylabel(r'$\theta_1$') data_ax.legend(('Data', 'BGD', 'SGD', 'MBGD'), loc="upper left") cost_ax.legend(('Normal Equation', 'BGD', 'SGD', 'MBGD'), loc="upper left") animation = FuncAnimation(fig, animate, frames=n_iter) plt.show() ```
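Beyond the animation, it can be useful to compare where each optimiser actually ends up. Assuming the `*_thetas` arrays, `exact_solution` and `compute_mse` from the cells above are in scope, the sketch below prints the final parameter estimates and their MSE next to the closed-form normal-equation solution.

```
# Compare where each optimiser ends up against the closed-form solution.
final_estimates = {
    "Normal equation": exact_solution.ravel(),
    "Batch GD": bgd_thetas[-1].ravel(),
    "Stochastic GD": sgd_thetas[-1].ravel(),
    "Mini-batch GD": mbgd_thetas[-1].ravel(),
}

for name, theta in final_estimates.items():
    mse = compute_mse(theta.reshape(2, 1))
    print(f"{name:>15}: theta0 = {theta[0]:.3f}, theta1 = {theta[1]:.3f}, MSE = {mse:.4f}")
```

Batch gradient descent converges smoothly toward the normal-equation solution, while the stochastic and mini-batch variants end up close to it but keep bouncing around because of the noise in their gradient estimates.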
<a href="https://colab.research.google.com/github/kd303/DL-NLP-Readings/blob/master/session_09_bert/bart_training/bart_training_paraphrasing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
!pip install simpletransformers
```

# BART for Paraphrasing with Simple Transformers

Thanks to the original author, [Thilina Rajapakse](https://www.linkedin.com/in/t-rajapakse/), for producing the original work.

```
data_dir = '/content/drive/MyDrive/bart_data'

!wget https://storage.googleapis.com/paws/english/paws_wiki_labeled_final.tar.gz -P /content/drive/MyDrive/bart_data

!tar -xvf $data_dir/paws_wiki_labeled_final.tar.gz -C $data_dir

import warnings

import pandas as pd


def load_data(
    file_path, input_text_column, target_text_column, label_column, keep_label=1
):
    df = pd.read_csv(file_path, sep="\t", error_bad_lines=False)
    df = df.loc[df[label_column] == keep_label]
    df = df.rename(
        columns={input_text_column: "input_text", target_text_column: "target_text"}
    )
    df = df[["input_text", "target_text"]]
    df["prefix"] = "paraphrase"
    return df


def clean_unnecessary_spaces(out_string):
    if not isinstance(out_string, str):
        warnings.warn(f">>> {out_string} <<< is not a string.")
        out_string = str(out_string)
    out_string = (
        out_string.replace(" .", ".")
        .replace(" ?", "?")
        .replace(" !", "!")
        .replace(" ,", ",")
        .replace(" ' ", "'")
        .replace(" n't", "n't")
        .replace(" 'm", "'m")
        .replace(" 's", "'s")
        .replace(" 've", "'ve")
        .replace(" 're", "'re")
    )
    return out_string

import os
from datetime import datetime
import logging

import pandas as pd
from sklearn.model_selection import train_test_split

from simpletransformers.seq2seq import Seq2SeqModel, Seq2SeqArgs

logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.ERROR)

train_df = pd.read_csv("/content/drive/MyDrive/bart_data/final/train.tsv", sep="\t").astype(str)
eval_df = pd.read_csv("/content/drive/MyDrive/bart_data/final/dev.tsv", sep="\t").astype(str)

train_df = train_df.loc[train_df["label"] == "1"]
eval_df = eval_df.loc[eval_df["label"] == "1"]

train_df = train_df.rename(
    columns={"sentence1": "input_text", "sentence2": "target_text"}
)
eval_df = eval_df.rename(
    columns={"sentence1": "input_text", "sentence2": "target_text"}
)

train_df = train_df[["input_text", "target_text"]]
eval_df = eval_df[["input_text", "target_text"]]

train_df["prefix"] = "paraphrase"
eval_df["prefix"] = "paraphrase"

train_df = train_df[["prefix", "input_text", "target_text"]]
eval_df = eval_df[["prefix", "input_text", "target_text"]]

train_df = train_df.dropna()
eval_df = eval_df.dropna()

train_df["input_text"] = train_df["input_text"].apply(clean_unnecessary_spaces)
train_df["target_text"] = train_df["target_text"].apply(clean_unnecessary_spaces)

eval_df["input_text"] = eval_df["input_text"].apply(clean_unnecessary_spaces)
eval_df["target_text"] = eval_df["target_text"].apply(clean_unnecessary_spaces)

print(train_df)

train_df.shape

train_df_1 = train_df[10214: ]

train_df_1.shape

help(Seq2SeqArgs)

!tar -xvf /content/drive/MyDrive/bart_data/paws_wiki_labeled_final.tar.gz

model_args = Seq2SeqArgs()
model_args.do_sample = True
model_args.eval_batch_size = 16  ## 64
model_args.evaluate_during_training = True
model_args.evaluate_during_training_steps = 2500
model_args.evaluate_during_training_verbose = True
model_args.fp16 = False
model_args.learning_rate = 5e-5
model_args.max_length = 128
model_args.max_seq_length = 128
model_args.num_beams = None
model_args.num_return_sequences = 3
model_args.num_train_epochs = 2
model_args.overwrite_output_dir = True
model_args.reprocess_input_data = True
model_args.save_eval_checkpoints = False
model_args.save_steps = -1
model_args.top_k = 50
model_args.top_p = 0.95
model_args.train_batch_size = 16  ## 8
model_args.use_multiprocessing = False
model_args.wandb_project = "Paraphrasing with BART"

model = Seq2SeqModel(
    encoder_decoder_type="bart",
    encoder_decoder_name="facebook/bart-large",
    args=model_args,
)

model.train_model(train_df_1, eval_data=eval_df)

to_predict = [
    prefix + ": " + str(input_text)
    for prefix, input_text in zip(eval_df["prefix"].tolist(), eval_df["input_text"].tolist())
]
truth = eval_df["target_text"].tolist()

preds = model.predict(to_predict)

# Saving the predictions if needed
os.makedirs("/content/drive/MyDrive/bart_data/prediction", exist_ok=True)

len(preds)

with open(f"/content/drive/MyDrive/bart_data/prediction/predictions_{datetime.now()}.txt", "w") as f:
    for i, text in enumerate(eval_df["input_text"].tolist()):
        f.write(str(text) + "\n\n")

        f.write("Truth:\n")
        f.write(truth[i] + "\n\n")

        f.write("Prediction:\n")
        for pred in preds[i]:
            f.write(str(pred) + "\n")
        f.write(
            "________________________________________________________________________________\n"
        )

import torch
torch.cuda.empty_cache()

!nvidia-smi

to_predict = [
    prefix + ": " + str(input_text)
    for prefix, input_text in zip(eval_df["prefix"].tolist(), eval_df["input_text"].tolist())
]
truth = eval_df["target_text"].tolist()

preds = model.predict(to_predict)

# Saving the predictions if needed
os.makedirs("/content/drive/MyDrive/bart_data/prediction", exist_ok=True)

with open(f"/content/drive/MyDrive/bart_data/prediction/predictions_{datetime.now()}.txt", "w") as f:
    for i, text in enumerate(eval_df["input_text"].tolist()):
        f.write(str(text) + "\n\n")

        f.write("Truth:\n")
        f.write(truth[i] + "\n\n")

        f.write("Prediction:\n")
        for pred in preds[i]:
            f.write(str(pred) + "\n")

        # printing the first 5 predictions
        if i <= 4:
            print("Truth: ", truth[i])
            print("\nPrediction:\n")
            for pred in preds[i]:
                print(str(pred) + "\n")
            print(
                "________________________________________________________________________________\n")

        f.write(
            "________________________________________________________________________________\n"
        )

import logging

from simpletransformers.seq2seq import Seq2SeqModel

logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.ERROR)

model = Seq2SeqModel(
    encoder_decoder_type="bart", encoder_decoder_name="outputs"
)

while True:
    original = input("Enter text to paraphrase: ")
    to_predict = [original]

    preds = model.predict(to_predict)

    print("---------------------------------------------------------")
    print(original)
    print()
    print("Predictions >>>")
    for pred in preds[0]:
        print(pred)
    print("---------------------------------------------------------")
    print()
```
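The notebook defines a `load_data` helper but then prepares the PAWS DataFrames by hand. As an alternative, here is a minimal sketch — assuming the `data_dir`, `load_data` and `clean_unnecessary_spaces` definitions from the cells above, and the `sentence1`/`sentence2`/`label` columns used there — that builds equivalent train/eval frames with the helper. The `_alt` names are only used to avoid clobbering the frames created above.

```
# Alternative data preparation using the load_data helper defined above
train_df_alt = load_data(f"{data_dir}/final/train.tsv", "sentence1", "sentence2", "label")
eval_df_alt = load_data(f"{data_dir}/final/dev.tsv", "sentence1", "sentence2", "label")

# Apply the same whitespace cleanup used in the manual preparation
for df in (train_df_alt, eval_df_alt):
    df["input_text"] = df["input_text"].apply(clean_unnecessary_spaces)
    df["target_text"] = df["target_text"].apply(clean_unnecessary_spaces)

print(train_df_alt.shape, eval_df_alt.shape)
```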
# Box Plots

The following illustrates some options for the boxplot in statsmodels. These include `violinplot` and `beanplot`.

```
%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt

import statsmodels.api as sm
```

## Bean Plots

The following example is taken from the docstring of `beanplot`.

We use the American National Election Survey 1996 dataset, which has Party Identification of respondents as independent variable and (among other data) age as dependent variable.

```
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
          "Independent-Independent", "Independent-Republican",
          "Weak Republican", "Strong Republican"]
```

Group age by party ID, and create a violin plot with it:

```
plt.rcParams['figure.subplot.bottom'] = 0.23  # keep labels visible
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # make plot larger in notebook

age = [data.exog['age'][data.endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)

plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
           'label_fontsize':'small',
           'label_rotation':30}

sm.graphics.beanplot(age, ax=ax, labels=labels,
                     plot_opts=plot_opts)

ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
#plt.show()

def beanplot(data, plot_opts={}, jitter=False):
    """helper function to try out different plot options
    """
    fig = plt.figure()
    ax = fig.add_subplot(111)
    plot_opts_ = {'cutoff_val':5, 'cutoff_type':'abs',
                  'label_fontsize':'small',
                  'label_rotation':30}
    plot_opts_.update(plot_opts)
    sm.graphics.beanplot(data, ax=ax, labels=labels,
                         jitter=jitter, plot_opts=plot_opts_)
    ax.set_xlabel("Party identification of respondent.")
    ax.set_ylabel("Age")

fig = beanplot(age, jitter=True)

fig = beanplot(age, plot_opts={'violin_width': 0.5, 'violin_fc':'#66c2a5'})

fig = beanplot(age, plot_opts={'violin_fc':'#66c2a5'})

fig = beanplot(age, plot_opts={'bean_size': 0.2, 'violin_width': 0.75, 'violin_fc':'#66c2a5'})

fig = beanplot(age, jitter=True, plot_opts={'violin_fc':'#66c2a5'})

fig = beanplot(age, jitter=True, plot_opts={'violin_width': 0.5, 'violin_fc':'#66c2a5'})
```

## Advanced Box Plots

Based on the example script `example_enhanced_boxplots.py` (by Ralf Gommers).

```
from __future__ import print_function

import numpy as np
import matplotlib.pyplot as plt

import statsmodels.api as sm


# Necessary to make horizontal axis labels fit
plt.rcParams['figure.subplot.bottom'] = 0.23

data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
          "Independent-Independent", "Independent-Republican",
          "Weak Republican", "Strong Republican"]

# Group age by party ID.
age = [data.exog['age'][data.endog == id] for id in party_ID]

# Create a violin plot.
fig = plt.figure()
ax = fig.add_subplot(111)

sm.graphics.violinplot(age, ax=ax, labels=labels,
                       plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
                                  'label_fontsize':'small',
                                  'label_rotation':30})

ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")

# Create a bean plot.
fig2 = plt.figure()
ax = fig2.add_subplot(111)

sm.graphics.beanplot(age, ax=ax, labels=labels,
                     plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
                                'label_fontsize':'small',
                                'label_rotation':30})

ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")

# Create a jitter plot.
fig3 = plt.figure()
ax = fig3.add_subplot(111)

plot_opts={'cutoff_val':5, 'cutoff_type':'abs', 'label_fontsize':'small',
           'label_rotation':30, 'violin_fc':(0.8, 0.8, 0.8),
           'jitter_marker':'.', 'jitter_marker_size':3, 'bean_color':'#FF6F00',
           'bean_mean_color':'#009D91'}
sm.graphics.beanplot(age, ax=ax, labels=labels, jitter=True,
                     plot_opts=plot_opts)

ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")

# Create an asymmetrical jitter plot.
ix = data.exog['income'] < 16  # incomes < $30k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_lower_income = [age[endog == id] for id in party_ID]

ix = data.exog['income'] >= 20  # incomes > $50k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_higher_income = [age[endog == id] for id in party_ID]

fig = plt.figure()
ax = fig.add_subplot(111)

plot_opts['violin_fc'] = (0.5, 0.5, 0.5)
plot_opts['bean_show_mean'] = False
plot_opts['bean_show_median'] = False
plot_opts['bean_legend_text'] = 'Income < \$30k'
plot_opts['cutoff_val'] = 10
sm.graphics.beanplot(age_lower_income, ax=ax, labels=labels, side='left',
                     jitter=True, plot_opts=plot_opts)
plot_opts['violin_fc'] = (0.7, 0.7, 0.7)
plot_opts['bean_color'] = '#009D91'
plot_opts['bean_legend_text'] = 'Income > \$50k'
sm.graphics.beanplot(age_higher_income, ax=ax, labels=labels, side='right',
                     jitter=True, plot_opts=plot_opts)

ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")

# Show all plots.
#plt.show()
```
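For comparison with the statsmodels bean and violin plots, a plain matplotlib box plot of the same age-by-party groups can be drawn as below. This is a minimal sketch that assumes the `data`, `party_ID` and `labels` variables from the cells above; the groups are rebuilt locally because the last cell reassigns `age`.

```
# Plain matplotlib box plot of the same grouped ages, for comparison.
age_groups = [data.exog['age'][data.endog == id] for id in party_ID]

fig = plt.figure()
ax = fig.add_subplot(111)
ax.boxplot(age_groups)
ax.set_xticks(range(1, len(labels) + 1))
ax.set_xticklabels(labels, rotation=30, fontsize='small')
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
```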
# Homework 2 Visualize, describe, and model distributions Allen Downey [MIT License](https://en.wikipedia.org/wiki/MIT_License) ``` %matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set(style='white') from utils import decorate from thinkstats2 import Pmf, Cdf import thinkstats2 import thinkplot ``` Here are some of the functions from Chapter 5. ``` def MakeNormalModel(values, label=''): """Plots a CDF with a Normal model. values: sequence """ cdf = thinkstats2.Cdf(values, label=label) mean, var = thinkstats2.TrimmedMeanVar(values) std = np.sqrt(var) print('n, mean, std', len(values), mean, std) xmin = mean - 4 * std xmax = mean + 4 * std xs, ps = thinkstats2.RenderNormalCdf(mean, std, xmin, xmax) thinkplot.Plot(xs, ps, label='model', linewidth=4, color='0.8') thinkplot.Cdf(cdf) def MakeNormalPlot(values, label=''): """Generates a normal probability plot. values: sequence """ mean, var = thinkstats2.TrimmedMeanVar(values, p=0.01) std = np.sqrt(var) xs = [-5, 5] xs, ys = thinkstats2.FitLine(xs, mean, std) thinkplot.Plot(xs, ys, color='0.8', label='model') xs, ys = thinkstats2.NormalProbability(values) thinkplot.Plot(xs, ys, '+', alpha=0.3, label=label) ``` Read the GSS data again. ``` %time gss = pd.read_hdf('gss.hdf5', 'gss') gss.shape gss.head() ``` Most variables use special codes to indicate missing data. We have to be careful not to use these codes as numerical data; one way to manage that is to replace them with `NaN`, which Pandas recognizes as a missing value. ``` def replace_invalid(df): df.realinc.replace([0], np.nan, inplace=True) df.educ.replace([98,99], np.nan, inplace=True) # 89 means 89 or older df.age.replace([98, 99], np.nan, inplace=True) df.cohort.replace([9999], np.nan, inplace=True) df.adults.replace([9], np.nan, inplace=True) replace_invalid(gss) ``` ### Distribution of age Here's the CDF of ages. ``` cdf_age = Cdf(gss.age) thinkplot.cdf(cdf_age, label='age') decorate(title='Distribution of age', xlabel='Age (years)', ylabel='CDF') ``` **Exercise**: Each of the following cells shows the distribution of ages under various transforms, compared to various models. In each text cell, add a sentence or two that interprets the result. What can we say about the distribution of ages based on each figure? 1) Here's the CDF of ages compared to a normal distribution with the same mean and standard deviation. Interpretation: ``` MakeNormalModel(gss.age.dropna(), label='') decorate(title='Distribution of age', xlabel='Age (years)', ylabel='CDF') ``` 2) Here's a normal probability plot for the distribution of ages. Interpretation: ``` MakeNormalPlot(gss.age.dropna(), label='') decorate(title='Normal probability plot', xlabel='Standard normal sample', ylabel='Age (years)') ``` 3) Here's the complementary CDF on a log-y scale. Interpretation: ``` thinkplot.cdf(cdf_age, label='age', complement=True) decorate(title='Distribution of age', xlabel='Age (years)', ylabel='Complementary CDF, log scale', yscale='log') ``` 4) Here's the CDF of ages on a log-x scale. Interpretation: ``` thinkplot.cdf(cdf_age, label='age') decorate(title='Distribution of age', xlabel='Age (years)', ylabel='CDF', xscale='log') ``` 5) Here's the CDF of the logarithm of ages, compared to a normal model. Interpretation: ``` values = np.log10(gss.age.dropna()) MakeNormalModel(values, label='') decorate(title='Distribution of age', xlabel='Age (log10 years)', ylabel='CDF') ``` 6) Here's a normal probability plot for the logarithm of ages. 
Interpretation: ``` MakeNormalPlot(values, label='') decorate(title='Distribution of age', xlabel='Standard normal sample', ylabel='Age (log10 years)') ``` 7) Here's the complementary CDF on a log-log scale. Interpretation: ``` thinkplot.cdf(cdf_age, label='age', complement=True) decorate(title='Distribution of age', xlabel='Age (years)', ylabel='Complementary CDF, log scale', xscale='log', yscale='log') ``` 8) Here's a test to see whether ages are well-modeled by a Weibull distribution. Interpretation: ``` thinkplot.cdf(cdf_age, label='age', transform='Weibull') decorate(title='Distribution of age', xlabel='Age (years)', ylabel='log Complementary CDF, log scale', xscale='log', yscale='log') ``` ### Distribution of income Here's the CDF of `realinc`. ``` cdf_realinc = Cdf(gss.realinc) thinkplot.cdf(cdf_realinc, label='income') decorate(title='Distribution of income', xlabel='Income (1986 $)', ylabel='CDF') ``` **Exercise:** Use visualizations like the ones in the previous exercise to see whether there is an analytic model that describes the distribution of `gss.realinc` well. ``` # Solution goes here ``` 2) Here's a normal probability plot for the values. ``` # Solution goes here ``` 3) Here's the complementary CDF on a log-y scale. ``` # Solution goes here ``` 4) Here's the CDF on a log-x scale. ``` # Solution goes here ``` 5) Here's the CDF of the logarithm of the values, compared to a normal model. ``` # Solution goes here ``` 6) Here's a normal probability plot for the logarithm of the values. ``` # Solution goes here ``` 7) Here's the complementary CDF on a log-log scale. ``` # Solution goes here ``` 8) Here's a test to see whether the values are well-modeled by a Weibull distribution. Interpretation: ``` # Solution goes here ``` ## BRFSS ``` %time brfss = pd.read_hdf('brfss.hdf5', 'brfss') brfss.head() ``` Let's look at the distribution of height in the BRFSS dataset. Here's the CDF. ``` heights = brfss.HTM4 cdf_heights = Cdf(heights) thinkplot.Cdf(cdf_heights) decorate(xlabel='Height (cm)', ylabel='CDF') ``` To see whether a normal model describes this data well, we can use KDE to estimate the PDF. ``` from scipy.stats import gaussian_kde ``` Here's an example using the default bandwidth method. ``` kde = gaussian_kde(heights.dropna()) xs = np.linspace(heights.min(), heights.max()) ds = kde.evaluate(xs) ds /= ds.sum() plt.plot(xs, ds, label='KDE heights') decorate(xlabel='Height (cm)', ylabel='PDF') ``` It doesn't work very well; we can improve it by overriding the bandwidth with a constant. ``` kde = gaussian_kde(heights.dropna(), bw_method=0.3) ds = kde.evaluate(xs) ds /= ds.sum() plt.plot(xs, ds, label='KDE heights') decorate(xlabel='Height (cm)', ylabel='PDF') ``` Now we can generate a normal model with the same mean and standard deviation. ``` mean = heights.mean() std = heights.std() mean, std ``` Here's the model compared to the estimated PDF. ``` normal_pdf = thinkstats2.NormalPdf(mean, std) ps = normal_pdf.Density(xs) ps /= ps.sum() plt.plot(xs, ps, color='gray', label='Normal model') plt.plot(xs, ds, label='KDE heights') decorate(xlabel='Height (cm)', ylabel='PDF') ``` The data don't fit the model particularly well, possibly because the distribution of heights is a mixture of two distributions, for men and women. **Exercise:** Generate a similar figure for just women's heights and see if the normal model does any better. ``` # Solution goes here # Solution goes here ``` **Exercise:** Generate a similar figure for men's weights, `brfss.WTKG3`. 
How well does the normal model fit? ``` # Solution goes here # Solution goes here ``` **Exercise:** Try it one more time with the log of men's weights. How well does the normal model fit? What does that imply about the distribution of weight? ``` # Solution goes here # Solution goes here ``` ## Skewness Let's look at the skewness of the distribution of weights for men and women. ``` male = (brfss.SEX == 1) male_weights = brfss.loc[male, 'WTKG3'] female = (brfss.SEX == 2) female_weights = brfss.loc[female, 'WTKG3'] ``` As we've seen, these distributions are skewed to the right, so we expect the mean to be higher than the median. ``` male_weights.mean(), male_weights.median() ``` We can compute the moment-based sample skewness using Pandas or `thinkstats2`. The results are almost the same. ``` male_weights.skew(), thinkstats2.Skewness(male_weights.dropna()) ``` But moment-based sample skewness is a terrible statistic! A more robust alternative is Pearson's median skewness: ``` thinkstats2.PearsonMedianSkewness(male_weights.dropna()) ``` **Exercise:** Compute the same statistics for women. Which distribution is more skewed? ``` # Solution goes here # Solution goes here # Solution goes here ``` **Exercise:** Explore the GSS or BRFSS dataset and find something interesting!
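As one possible starting point for the skewness exercise above (a sketch, not an official solution; it assumes the `female_weights` series and the `thinkstats2` functions already used in this notebook), the same three comparisons can be computed for women:

```
# Mean vs. median, moment-based skewness, and Pearson's median skewness for women
weights = female_weights.dropna()
print(weights.mean(), weights.median())
print(weights.skew(), thinkstats2.Skewness(weights))
print(thinkstats2.PearsonMedianSkewness(weights))
```

Comparing these values with the men's results above indicates which of the two weight distributions is more skewed.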
# 코로나19는 주린이의 선택을 어떻게 바꿀까? ## - 주가에 영향을 미치는 코로나19와 기타 변수 분석 및 시각화 프로젝트 --- 데잇걸즈4 ITR4 TEAM5 - 김소이, 김해인, 박유빈, 여지영, 오수희 발표일: 2020. 10. 16.(금) / 최종 제출일: 2020. 10. 20.(화) **목차** - **서론** - **본론1. 원본데이터 전처리** - 1-1. 코로나19 데이터 - 1-2. 주가 데이터 - 1-3. 뉴스 데이터 - 1-4. 독립변수 / 종속변수 정리 - **본론2. EDA시각화 및 회귀분석** - 2-1. EDA 시각화 - 2-2. 회귀분석 - **결론 및 한계점** --- # 서론 ### 프로젝트 배경 ![title](img03.png) - 한국은 코로나19가 비교적 초기에 발생했음에도 불구하고, 확진자 및 사망자 수가 전 세계의 다른 국가에 비하면 매우 낮고, 완치율 또한 매우 높은 편에 속한다. 몇 차례의 집단 감염 사례가 발생한 이후에도 안정적인 회복세를 보이고 있기도 하다. 그러나 완전히 달라져버린 생활 양식과 엄격한 사회규율, 이로 인한 경제적 타격으로 많은 국민이 전에 겪지 못한 어려운 시기를 보내고 있다. 이런 시기에 주식에 투자한다는 것은 도박일까, 기회일까? 데잇걸즈 5조는 **특정 기업이나 산업군의 주가가 코로나19의 영향을 어느 정도로 받고 있는지, 코로나19 관련 요소로부터 주가를 예측할 수 있을지 궁금해졌다.** - 하지만 많은 주알못, 경알못들이 쉽게 간과할수 있는 사실이 있다. 어떤 단일한 요인에 의해서 주가의 등락이 결정되지는 않는다는 점이다. 이에, 5조는 **코로나19와 관련된 요소를 우선적으로 탐구하고, 만일 그 결과가 충분히 유의미하지 않을 경우 그 이유와 다른 요인을 함께 탐색**해보기로 하였다. ### 분석 목적 및 분석 방법 - 이 분석이 목적은 코로나19가 특정 한국 기업의 주가에 영향을 주는지 알아보는 것이다. 사실 방점은 코로나19보다 종속변수인 주가에 찍혀있는데, 주린이의 입장에서 전염병과 같은 큰 사회적 이슈가 있을 때 주가에 대해 어떻게 생각해보면 좋을지 알아보는 데에 의의가 있기 때문이다. 분석을 통해 현재와 같은 상황에서 주가를 볼 때, 코로나19 관련 변수를 얼마나 고려해야 하는지 생각해볼 수 있다. - 다중회귀분석을 통해 대표 2개 기업의 주가와 독립변수들의 관계를 자세히 살펴본다. 이후 2개 기업이 각각 속한 업종 분류 총 30개 기업의 분석 결과를 요약, 정리한다. ### 종속/독립변수 설정 **종속변수** 네이버 금융, 증권플러스 등 대표적인 주식 포털에서 업종 기준으로 삼는 WICS를 분류 기준으로, 소분류상 **양방향미디어와서비스**와 **호텔, 레스토랑, 레저** 두 개 업종에 속한 기업들의 주가를 대상으로 하였다. 흔히 코로나19로 인해 흥한 업종, 쇠한 업종이라 불리는 두 업종을 선택했다. - (연속형) **양뱡향미디어와서비스** 업종 10개 기업 - (연속형) **호텔, 레스토랑, 레저** 업종 20개 기업 **독립변수** - (연속형) **한국 코로나 신규 확진자 수** - (연속형) **한국 코로나 신규 사망자 수** - (연속형) **세계 코로나 신규 확진자 수** - (연속형) **한국은행 통화 유동성 관련 키워드 뉴스 수** - (연속형) **미 연방준비제도 통화 유동성 관련 키워드 뉴스 수** - (연속형) **나스닥 지수** ### 사용 데이터셋 <수집기간과 단위> - Datetime 라이브러리에서 해당 연도의 몇 주차인지 알려주는 함수를 사용해 '주차'를 기준으로 한다. - 주가는 주말 및 공휴일에는 존재하지 않고, 주가 관련 데이터는 합계나 평균을 내기 어렵다. 따라서 전문가의 조언에 따라 '해당 주차 금요일 주가'를 '해당 주차 주가'로 가정한다 - 그 외에 코로나와 뉴스 데이터는 '해당 주차 주간 합계'를 구해서 넣는다. - 대한민국 첫 확진자 발생일은 1월 19일(일)이다.그러나 확진자가 0명이었던 날을 추가하기 위해 1월 17일(금)을 시작 날짜로 한다. - 금요일을 기준으로 삼는 주식 데이터 사용의 용이성을 위해 올해(2020년) 금요일이 공휴일인 주차를 제외한 10월 9일(금)을 종료 날짜로 한다. - 따라서 데이터 수집 기간은 아래와 같다. - **2020년 1월 17일(금) ~ 2020년 10월 9일(금)** <코로나 관련 데이터> - (WHO 세계보건기구) 국가별 코로나19 확진자, 사망자 정보 - (WHO 세계보건기구) 국가별 코로나19 확진자, 사망자 정보 <주가 관련 데이터> - (네이버 금융, 한국거래소) 특정 기업 주가 - (한국거래소) KOSPI, KOSDAQ, NASDAQ - (WISEfn) WICS 업종 분류 지수 <뉴스 데이터> - (한국언론진흥재단 빅카인즈) 정부의 미래 투자 산업 관련 키워드 뉴스 데이터 - (한국언론진흥재단 빅카인즈) 정부, 한국은행의 통화 정책 관련 키워드 뉴스 데이터 - (한국언론진흥재단 빅카인즈) 미국 연방준비제도(FED) 통화 정책 관련 키워드 뉴스 데이터 ### 사용 라이브러리 ``` import pandas as pd import numpy as np import seaborn as sns from matplotlib import pyplot import matplotlib.pylab as pylab import matplotlib.patches as patches import matplotlib.pyplot as plt %matplotlib inline from IPython.display import Image from datetime import datetime import requests from bs4 import BeautifulSoup import FinanceDataReader as fdr import warnings warnings.filterwarnings(action='ignore') ``` --- # 본론1. 원본 데이터 전처리 ## 1-1. 코로나 데이터 --- - 원본데이터 출처: 세계보건기구(WHO) 링크 '1014-WHO-COVID-19-global-data.csv' (세계보건기구의 전세계 일일 확진자/사망자 추이) - 불필요한 컬럼은 제외하고 주차별 주간 신규확진자/사망자 합계를 구한 칼럼을 추가했다. 
**WD_covid** - 날짜 - 주차 - 세계 주간 확진자 / 세계 주간 사망자 **KR_covid** - 날짜 - 주차 - 한국 주간 확진자 / 한국 주간 사망자 **독립_KR_WD_covid_death** - 날짜 - 주차 - 한국 주간 확진자 / 한국 주간 사망자 / 세계 주간 확진자 ``` import pandas as pd from datetime import datetime # 원본데이터 covid_raw = pd.read_csv('1014-WHO-COVID-19-global-data.csv') covid_raw.head() ``` ### 1) 세계 '주간 확진자/사망자 수' ``` # 필요한 컬럼만 남기기 wd_covid = covid_raw[['Date_reported', ' New_cases', ' New_deaths']] # 컬럼명 변경하기 wd_covid = covid_raw.rename(columns={ 'Date_reported': 'Date', ' New_cases': 'WD_daily_covid', ' New_deaths': 'WD_daily_death'}) # 날짜 정보를 데이터 타입 Datetime 으로 변경하기 wd_covid["Date"] = pd.to_datetime(wd_covid["Date"]) # '주차' 데이터 생성 및 추가하기. # 이때 '주차'는 해당 연도의 몇 번째 주차인지를 의미 함. wd_covid["Week"] = wd_covid["Date"].dt.week wd_covid = wd_covid[['Date', 'Week', 'WD_daily_covid', 'WD_daily_death']] # '요일' 데이터 생성 및 추가하기. # 이때 '요일'은 '월:0 ~일:6' wd_covid["Day"] = wd_covid["Date"].dt.dayofweek wd_covid = wd_covid[['Date', 'Week', 'Day', 'WD_daily_covid', 'WD_daily_death']] # '주차' groupby 하고 '일일확진자' 합계로 '주간 확진자수' 구하기. world_w = wd_covid.groupby(["Week"])["WD_daily_covid"].sum() world_weekly = pd.DataFrame(world_w) world_weekly.columns = ["WD_covid"] # merge 를 통해 기존 데이터에 추가하기. wd_covid = wd_covid.merge(world_weekly, left_on="Week", right_on=world_weekly.index, how="left") world_wd = wd_covid.groupby(["Week"])["WD_daily_death"].sum() world_weekly_d = pd.DataFrame(world_wd) world_weekly_d.columns = ["WD_death"] wd_covid = wd_covid.merge(world_weekly_d, left_on="Week", right_on=world_weekly.index, how="left") # 최종 원하는 데이터는 '주간 확진자'와 '주간 사망자'. # 금요일 주가를 기준으로 하는 주가 데이터와 합칠 예정이므로 금요일만 남기기. wd_covid = wd_covid[wd_covid['Day'].isin([4])] # 원하는 컬럼만 남기기. wd_covid = wd_covid[['Date', 'Week', 'WD_covid', 'WD_death']] # 중복값 제거하기. wd_covid = wd_covid.drop_duplicates() # 0117 ~ 0925 의 데이터만 남기기. # 야매 코딩 - 그냥 인덱스 번호 일일이 확인하고 자름 wd_covid = wd_covid.drop([0, 7, 273, 280]) # 저장하기 wd_covid.to_excel("독립_WD_covid.xlsx", index=False, encoding="utf8") wd_covid = pd.read_excel("독립_WD_covid.xlsx") wd_covid.head() ``` ### 2) 한국 '주간 확진자/사망자 수' ``` # 원본에서 필요한 컬럼만 남기기 kr_covid = covid_raw[covid_raw[' Country_code'].isin(['KR'])] # 컬럼명 변경하기 kr_covid = kr_covid[['Date_reported', ' New_cases', ' New_deaths']] kr_covid = kr_covid.rename(columns={ 'Date_reported': 'Date', ' New_cases': 'KR_daily_covid', ' New_deaths': 'KR_daily_death'}) # 날짜 정보를 데이터 타입 Datetime 으로 변경하기 kr_covid['Date'] = pd.to_datetime(kr_covid['Date']) # '주차' 데이터 생성 및 추가하기. kr_covid['Week'] = kr_covid['Date'].dt.week kr_covid = kr_covid[['Date', 'Week', 'KR_daily_covid', 'KR_daily_death']] # '요일' 데이터 생성 및 추가하기 kr_covid["Day"] = kr_covid['Date'].dt.dayofweek kr_covid = kr_covid[['Date', 'Week', 'Day', 'KR_daily_covid', 'KR_daily_death']] # '주차' groupby 하고 '일일확진자' 합계로 '주간 확진자수' 구하기. korea_w = kr_covid.groupby(["Week"])["KR_daily_covid"].sum() korea_weekly = pd.DataFrame(korea_w) korea_weekly.columns = ["KR_covid"] # merge 를 통해 기존 데이터에 추가하기. kr_covid = kr_covid.merge(korea_weekly, left_on="Week", right_on=korea_weekly.index, how="left") # '주간 사망자수' korea_wd = kr_covid.groupby(["Week"])["KR_daily_death"].sum() korea_weekly_d = pd.DataFrame(korea_wd) korea_weekly_d.columns = ["KR_death"] korea_weekly_d.head() kr_covid = kr_covid.merge(korea_weekly_d, left_on="Week", right_on=korea_weekly_d.index, how="left") # 최종 원하는 데이터는 '주간 확진자'와 '주간 사망자'. # 금요일 주가를 기준으로 하는 주가 데이터와 합칠 예정이므로 금요일만 남기기. kr_covid = kr_covid[kr_covid['Day'].isin([4])] # 0117 ~ 0925 의 데이터만 남기기. # 야매 코딩 - 그냥 인덱스 번호 일일이 확인하고 자름 kr_covid = kr_covid.drop([0, 7, 273, 280]) # 필요한 컬럼만 남기기. 
kr_covid = kr_covid[['Date', 'Week', 'KR_covid', 'KR_death']] #저장하기 kr_covid.to_excel("독립_KR_covid.xlsx", index=False, encoding="utf8") kr_covid = pd.read_excel("독립_KR_covid.xlsx") kr_covid.head() ``` ## 1-2. 주가 데이터 --- ### 1) 개별 기업 주가 - 원본데이터 출처: 한국 거래소(기업코드), 네이버금융(주가정보), WICS(기업분류) - **양방향미디어와서비스**: 티사이언티픽, 키다리스튜디오, NAVER, 줌인터넷, 플리토, 퓨쳐스트림네트웍스, 카카오, 캐리소프트, 아프리카TV, THE E&M (10) - **호텔, 레스토랑, 레저**: 해마로푸드서비스, 용평리조트, 세중, 신세계푸드, MP그룹, GKL, 남화산업, 호텔신라, 시공테크, 서부T&D, 아난티, 롯데관광개발, 하나투어, 강원랜드, 이월드, 참좋은여행, 모두투어, 파라다이스, 디딤, 노랑풍선 (20) - 기업 주가 및 주가 관련 데이터는 시가, 상한가, 하한가, 종가, 수정종가 중 대표값으로 종가(Close)를 일괄적으로 사용했다. **기업명_close_weekly** * 30개 - 날짜 - 주차 - 입력한 기업의 주차별 금요일 종가(close) ``` import pandas as pd import requests from bs4 import BeautifulSoup # 기업 법인명을 넣으면, 주식 종목 코드를 찾아주는 함수 # 구글링 def get_code(df, name): code = df.query("name=='{}'".format(name))['code'].to_string(index=False) # 위와같이 code명을 가져오면 앞에 공백이 붙어있는 상황이 발생하여 앞뒤로 sript() 하여 공백 제거 code = code.strip() return code # excel 파일을 다운로드하는 동시에 pandas에 load하기 company_code_df = pd.read_html('http://kind.krx.co.kr/corpgeneral/corpList.do?method=download', header=0)[0] # data frame정리 company_code_df = company_code_df[['회사명', '종목코드']] # data frame title 변경 '회사명' = name, 종목코드 = 'code' company_code_df = company_code_df.rename(columns={'회사명': 'name', '종목코드': 'code'}) # 종목코드는 6자리로 구분되기때문에 0을 채워 6자리로 변경 company_code_df.code = company_code_df.code.map('{:06d}'.format) # [POINT 1] 데이터가 필요한 기업의 이름 입력하기. company_code = company_code_df[company_code_df['name'].isin(['호텔신라'])] company_code = company_code.iloc[0,1] company_code # 주식 종목 코드를 넣으면, 주식 정보를 찾아주는 함수 # 구글링 def get_price(company_code): # count=439에서 3000은 과거 439 영업일간의 데이터를 의미. 조절 가능 url = "https://fchart.stock.naver.com/sise.nhn?symbol={}&timeframe=day&count=184&requestType=0".format(company_code) get_result = requests.get(url) bs_obj = BeautifulSoup(get_result.content, "html.parser") # information inf = bs_obj.select('item') columns = ['Date', 'Open' ,'High', 'Low', 'Close', 'Volume'] df1_inf = pd.DataFrame([], columns = columns, index = range(len(inf))) for i in range(len(inf)): df1_inf.iloc[i] = str(inf[i]['data']).split('|') df1_inf.index = pd.to_datetime(df1_inf['Date']) return df1_inf.drop('Date', axis=1).astype(float) # 위에서 company_code 가 입력되도록 설정. 처음부터 돌려야함 주의. price = get_price(company_code) # 금요일 종가만 남기기 # 종가 데이터만 남기기 close = price[["Close"]] # 코딩하기 힘드니까 'Date' 를 인덱스에서 빼주기. close = close.reset_index() # 주차 데이터 생성 및 추가 close["Week"] = close["Date"].dt.week close = close[["Date", "Week", "Close"]] # '요일' 데이터 생성 및 추가 close["Day"] = close["Date"].dt.dayofweek close = close[["Date", "Week", "Day", "Close"]] # '주간 종가 %' 대신 '금요일 종가'를 '주간 종가'로 간주함. # 금요일 종가만 남기기 close = close[close["Day"].isin([4])] # 필요한 컬럼만 남기기. close = close[["Date", "Week", "Close"]] # [POINT 2] 기업명 컬럼에 입력한 기업명으로 바꾸기. close.columns = ["Date", "Week", "호텔신라"] # [POINT 3] 저장 파일명에서 입력한 기업명으로 바꿔서 저장하고 확인하기. close.to_excel("호텔신라_close_weekly.xlsx", index=False, encoding="utf8") 호텔신라 = pd.read_excel('호텔신라_close_weekly.xlsx') 호텔신라.head() ``` ### 2) KOSPI / KOSDAQ / NASDAQ - 원본데이터 출처: - **KOSPI** (KOrea Composite Stock Price Index) 또는 한국종합주가지수는 한국거래소의 유가증권시장에 상장된 회사들의 주식에 대한 총합인 시가총액의 기준시점(1980년 1월 4일)과 비교시점을 비교하여 나타낸 지표다. - **KOSDAQ** (KOrea Securities Dealers Automated Quotation)은 첨단 기술주 중심인 나스닥(NASDAQ) 시장을 본떠 만든 대한민국의 주식시장으로, 유가증권시장과는 규제 조치가 별개로 이루어지는 시장이다. - **NASDAQ** (Nasdaq Stock Market)은 미국의 장외 주식거래시장이다. 일반적으로 언급되는 나스닥은 '전미증권협회 주식시세 자동통보체계(National Association of Securities Dealers Automated Quotations)에 의한 지수 즉, 나스닥 지수의 줄임말이다. 
**KOSPI / KOSDAQ / 독립_NASDAQ** - 날짜 - 주차 - 해당 주차 금요일 종가 ``` # KOSPI, KOSDAQ, NASDAQ 지수를 불러와 전처리 !pip install -U finance-datareader !pip install html5lib import FinanceDataReader as fdr # 한국거래소 상장종목 전체를 확인할 수 있음 df_krx = fdr.StockListing('KRX') df_krx.head() # KOSPI, KOSDAQ, NASDAQ 종합 지수, 2010년 1월~현재 kospi = fdr.DataReader('KS11', '2020') kosdaq = fdr.DataReader('KQ11', '2020') nasdaq = fdr.DataReader('IXIC', '2020') # KOSPI 2010년 1월~현재 종가 그래프 kospi['Close'].plot() # KOSDAQ 2010년 1월~현재 종가 그래프 kosdaq['Close'].plot() # NASDAQ 2010년 1월~현재 종가 그래프 nasdaq['Close'].plot() # 날짜를 인덱스에서 컬럼으로 꺼내옴 kospi = kospi.reset_index() kosdaq = kosdaq.reset_index() nasdaq = nasdaq.reset_index() # 코스피에서 주차(Week) 정보와 요일(Day) 정보를 만들고, 매주 금요일 일자와 종가를 추출 kospi["Week"] = kospi["Date"].dt.week kospi["Day"] = kospi["Date"].dt.dayofweek kospi = kospi[["Date", "Week", "Day", "Close"]] kospi = kospi[kospi["Day"].isin([4])] kospi = kospi[["Date", "Week", "Day", "Close"]] #저장하기 kospi.to_excel('독립_KOSPI.xlsx', index=False, encoding="utf8") kospi = pd.read_excel('독립_KOSPI.xlsx') kospi.head() # 코스닥에서 주차(Week) 정보와 요일(Day) 정보를 만들고, 매주 금요일 일자와 종가를 추출 kosdaq["Week"] = kosdaq["Date"].dt.week kosdaq["Day"] = kosdaq["Date"].dt.dayofweek kosdaq = kosdaq[["Date", "Week", "Day", "Close"]] kosdaq = kosdaq[kosdaq["Day"].isin([4])] kosdaq = kosdaq[["Date", "Week", "Day", "Close"]] # 저장하기 kosdaq.to_excel('독립_KOSDAQ.xlsx', index=False, encoding="utf8") kosdaq = pd.read_excel('독립_KOSDAQ.xlsx') kosdaq.head() # 나스닥에서 주차(Week) 정보와 요일(Day) 정보를 만들고, 매주 금요일 일자와 종가를 추출 nasdaq["Week"] = nasdaq["Date"].dt.week nasdaq["Day"] = nasdaq["Date"].dt.dayofweek nasdaq = nasdaq[["Date", "Week", "Day", "Close"]] nasdaq = nasdaq[nasdaq["Day"].isin([4])] nasdaq = nasdaq[["Date", "Week", "Day", "Close"]] # 저장하기 nasdaq.to_excel('독립_NASDAQ.xlsx', index=False, encoding="utf8") nasdaq = pd.read_excel('독립_NASDAQ.xlsx') nasdaq.head() ``` ### 3) WICS 지수 - 원본데이터 출처: WICS 'WICS_index.csv' - **WICS** (WISE Industry Classification Standard) 업종 분류의 업종은 국제적으로 통용되는 분류 기준을 국내 실정에 맞게 재구성하여 크게 대분류, 중분류, 소분류, 세개 분류로 나누어진다. 분류 종목은 대부분 지수 편입 종목이 된다. ([자세히](http://www.wiseindex.com/konannas/files/WICS_Sector_Index_Methodology.pdf?20200831)) ``` # 원본데이터 wics = pd.read_csv('WICS_index.csv') # 문자열 일자 데이터를 날짜형 데이터로 변환 wics['일자'] = pd.to_datetime(wics['일자']) # ' 소재' 컬럼의 빈칸 조정 wics.columns = ['일자', '경기관련소비재', '에너지', '소재', '산업재', '필수소비재', '건강관리', '금융', 'IT', '커뮤니케이션서비스', '유틸리티'] # ','가 있던 숫자 값의 쉼표 제거 및 float으로 변환 wics['경기관련소비재']=wics['경기관련소비재'].replace(',','',regex=True).astype(float) wics['에너지']=wics['에너지'].replace(',','',regex=True).astype(float) wics['소재']=wics['소재'].replace(',','',regex=True).astype(float) wics['산업재']=wics['산업재'].replace(',','',regex=True).astype(float) wics['필수소비재']=wics['필수소비재'].replace(',','',regex=True).astype(float) wics['건강관리']=wics['건강관리'].replace(',','',regex=True).astype(float) wics['금융']=wics['금융'].replace(',','',regex=True).astype(float) wics['IT']=wics['IT'].replace(',','',regex=True).astype(float) wics['커뮤니케이션서비스']=wics['커뮤니케이션서비스'].replace(',','',regex=True).astype(float) wics['유틸리티']=wics['유틸리티'].replace(',','',regex=True).astype(float) # 요일과 주차 데이터 추출. wics["요일"] = wics["일자"].dt.dayofweek wics["주차"] = wics["일자"].dt.week # 주차와 요일 컬럼 추가 및 정렬. wics = wics.reindex(columns=['일자', '주차', '요일', '경기관련소비재', '에너지', '소재', '산업재', '필수소비재', '건강관리', '금융', 'IT', '커뮤니케이션서비스', '유틸리티']) # 오래된 일자 순으로 정렬하고 인덱스 리셋. wics = wics.sort_values(by='일자', ascending=True) wics = wics.reset_index(drop=True) # 금요일 데이터만 남기기. 
wics = wics[wics["요일"].isin([4])] wics = wics[['일자', '주차', '요일', '경기관련소비재', '에너지', '소재', '산업재', '필수소비재', '건강관리', '금융', 'IT', '커뮤니케이션서비스', '유틸리티']] # 저장하기 wics.to_excel('독립_WICS.xlsx', index=False, encoding="utf8") wics = pd.read_excel('독립_WICS.xlsx') wics.head() ``` 최종 데이터셋의 컬럼은 아래와 같다. **WICS 지수** - 날짜 - 주차 - 경기관련소비재, 에너지, 소재, 산업재, 필수소비재, 건강관리, 금융, IT, 커뮤니케이션서비스, 유틸리티 ## 1-3. 뉴스 데이터 --- - 주가와 주식 관련한 각종 지수는 그 어떤 영역보다도 복잡계에 해당한다. (그렇지 않았다면 모두가 투자로 부자 되었을 것) 도메인 전문가 인터뷰를 통해 주가에 영향을 줄 수 있는 대표적인 요소들을 추가로 확인하고 분석에 활용했다. - 원본데이터 출처: 빅카인즈(한국언론진흥재단에서 운영하는 뉴스 데이터 서비스 '한국은행.xlsx' '연준.xlsx' '바이오헬스.xlsx' - 3개의 데이터셋은 전부 키워드 검색 결과 나온 뉴스 기사를 일자별로 수집한 뒤, 주차별 주간 기사 수 합계를 구했다. 1) 한국은행의 통화 정책 관련 키워드 뉴스 수 2) 미국 연방준비위원회의 통화 정책 관련 키워드 뉴스 수 3) 정부의 미래 투자 산업 관련 키워드 뉴스 수 - 각각의 키워드는 아래와 같다. 1. 디지털 뉴딜, 그린 뉴딜, 바이오 헬스, 시스템 반도체, 소부장, 자율주행, 5G, 태양광 2. 한국은행, 돈 풀다, 광의통화, 시중 유동성, 통화승수, 돈맥경화, 통화 증가세, 본원통화 3. 연준, 돈 풀다, 광의통화, 시중 유동성, 통화승수, 돈맥경화, 통화 증가세, 본원통화 ### 1) 한국은행의 통화 정책 관련 뉴스 수 ``` import pandas as pd from datetime import datetime # 원본데이터 kr_bank = pd.read_excel('한국은행.xlsx') # 필요한 컬럼만 남기기. kr_bank = kr_bank[['일자']] # 컬럼명 변경하기 kr_bank.columns = ["Date"] # Datetime 으로 변경하기. kr_bank['Date'] = pd.to_datetime(kr_bank['Date'], format='%Y%m%d') # '주차' 생성 및 추가하기. kr_bank["Week"] = kr_bank["Date"].dt.week kr_bank = kr_bank[['Date','Week']] kr_bank = kr_bank.sort_values(by=['Date'], axis=0, ascending=True) # "주차"로 groupby 한 다음 일자 합계로 "주간 기사 수"를 구합니다. kr_bank_w = kr_bank.groupby(["Week"])["Date"].count() # 용이한 가공을 위해 데이터프레임으로 만들어줍니다. kr_bank_weekly = pd.DataFrame(kr_bank_w) kr_bank_weekly.columns = ["KR_bank"] kr_bank_weekly.head() kr_bank = kr_bank.merge(kr_bank_weekly, left_on="Week", right_on=kr_bank_weekly.index, how="left") # '요일' 생성 및 추가하기 kr_bank["Day"] = kr_bank["Date"].dt.dayofweek kr_bank = kr_bank[['Date','Week','Day', 'KR_bank']] # 최종 원하는 데이터는 '주간 확진자'와 '주간 사망자'. # 금요일 주가를 기준으로 하는 주가 데이터와 합칠 예정이므로 금요일만 남기기. kr_bank = kr_bank[kr_bank['Day'].isin([4])] # 중복값 제거하기. kr_bank = kr_bank.drop_duplicates() kr_bank = kr_bank.drop_duplicates("Week", keep="first") # 필요한 컬럼만 남기기. kr_bank = kr_bank[['Date', 'Week', 'KR_bank']] kr_bank.to_excel("독립_KR_bank.xlsx", index=False, encoding="utf8") kr_bank = pd.read_excel('독립_KR_bank.xlsx') kr_bank.head() kr_bank.plot(x='Week', y='KR_bank', figsize=(15,4)) ``` ### 2) 미국 연방준비위원회의 통화 정책 관련 뉴스 수 ``` import pandas as pd from datetime import datetime # 원본데이터 us_bank = pd.read_excel('연방.xlsx') # 필요한 컬럼만 남기기. us_bank = us_bank[['일자']] # 컬럼명 변경하기 us_bank.columns = ["Date"] # Datetime 으로 변경하기. us_bank['Date'] = pd.to_datetime(us_bank['Date'], format='%Y%m%d') # '주차' 생성 및 추가하기. us_bank["Week"] = us_bank["Date"].dt.week us_bank = us_bank[['Date','Week']] us_bank = us_bank.sort_values(by=['Date'], axis=0, ascending=True) # "주차"로 groupby 한 다음 일자 합계로 "주간 기사 수"를 구합니다. us_bank_w = us_bank.groupby(["Week"])["Date"].count() # 용이한 가공을 위해 데이터프레임으로 만들어줍니다. us_bank_weekly = pd.DataFrame(us_bank_w) us_bank_weekly.columns = ["US_bank"] us_bank_weekly.head() us_bank = us_bank.merge(us_bank_weekly, left_on="Week", right_on=us_bank_weekly.index, how="left") # '요일' 생성 및 추가하기 us_bank["Day"] = us_bank["Date"].dt.dayofweek us_bank = us_bank[['Date','Week','Day', 'US_bank']] # 최종 원하는 데이터는 '주간 확진자'와 '주간 사망자'. # 금요일 주가를 기준으로 하는 주가 데이터와 합칠 예정이므로 금요일만 남기기. us_bank = us_bank[us_bank['Day'].isin([4])] # 중복값 제거하기. us_bank = us_bank.drop_duplicates() us_bank = us_bank.drop_duplicates("Week", keep="first") # 필요한 컬럼만 남기기. 
us_bank = us_bank[['Date', 'Week', 'US_bank']] us_bank.to_excel("독립_US_bank.xlsx", index=False, encoding="utf8") us_bank = pd.read_excel('독립_US_bank.xlsx') us_bank.head() us_bank.plot(x='Week', y='US_bank', figsize=(15,4)) ``` ### 3) 정부의 미래 투자 산업 관련 뉴스 수 ``` import pandas as pd from datetime import datetime # 원본데이터 biohealth = pd.read_excel('바이오 헬스.xlsx') # 필요한 컬럼만 남기기 biohealth = biohealth[['일자']] # 컬럼명 변경하기 biohealth.columns = ["Date"] # Datetime 으로 데이터타입 변경하기 biohealth['Date'] = pd.to_datetime(biohealth['Date'], format='%Y%m%d') # '주차'컬럼 생성 및 추가하기 biohealth["Week"] = biohealth["Date"].dt.week biohealth = biohealth[['Date','Week']] biohealth = biohealth.sort_values(by=['Date'], axis=0, ascending=True) # "주차"로 groupby 한 다음 일자 합계로 "주간 기사 수" 구하기 biohealth = biohealth.groupby(["Week"])["Date"].count() # 용이한 가공을 위해 데이터프레임으로 생성 biohealth_weekly = pd.DataFrame(biohealth) biohealth_weekly.rename(columns={"Date":"바이오헬스"}, inplace=True) biohealth_weekly.rename(columns={"Week":"주차"}, inplace=True) # 인덱스'week'를 컬럼으로 변경 biohealth_weekly.reset_index(level=['Week'], inplace = True) biohealth_weekly.head() # 위와 같이 정제된 각 단어들을 merge한 최종 결과 아래와 같다. 언론언급 = pd.read_excel('언론 언급.xlsx') 언론언급.head() ``` ## 1-4. 독립변수 / 종속변수 정리 **종속변수** - (연속형) **양뱡향미디어와서비스** 업종 10개 기업 - (연속형) **호텔, 레스토랑, 레저** 업종 20개 기업 **독립변수** - (연속형) **한국 코로나 신규 확진자 수** - (연속형) **한국 코로나 신규 사망자 수** - (연속형) **세계 코로나 신규 확진자 수** - (연속형) **한국은행 통화 유동성 관련 키워드 뉴스 수** - (연속형) **미 연방준비제도 통화 유동성 관련 키워드 뉴스 수** - (연속형) **나스닥 지수** **순서** 1. 종속변수 각 10개, 20개 항목 merge 2. 독립변수 6개 항목 merge 3. 종속변수와 독립변수 merge ### 1) 종속변수 **1. 양방향미디어와서비스 10개 기업 주가 merge** ``` # 10개 데이터 불러와서 merge하기 생략하고 미리 만든 결과만 가져옴. # 양방향미디어와서비스.to_excel("양방향미디어와서비스.xlsx", index=False, encoding="utf8") 양방향미디어와서비스 = pd.read_excel("양방향미디어와서비스.xlsx") 양방향미디어와서비스.head() ``` **2. 호텔,레스토랑,레저 20개 기업 주가 merge** ``` # 20개 데이터 불러와서 merge하기 생략하고 결과만 가져옴. 
#호텔레저.to_excel("호텔레스토랑레저.xlsx", index=False, encoding="utf8") 호텔레스토랑레저 = pd.read_excel("호텔레스토랑레저.xlsx") 호텔레스토랑레저.head() ``` ### 2) 독립변수 **주간 한국 확진자 수 / 한국 사망자 수 / 세계 확진자 수 / 한은 기사 수 / 연준 기사 수 / 나스닥 지수** ``` import pandas as pd # 한국 코로나 kr_covid = pd.read_excel('독립_KR_covid.xlsx') # 세계 코로나 wd_covid = pd.read_excel('독립_WD_covid.xlsx') # 한국은행 kr_bank = pd.read_excel('독립_KR_bank.xlsx') # 연방준비은행 us_bank = pd.read_excel('독립_US_bank.xlsx') # 나스닥 nasdaq = pd.read_excel('독립_NASDAQ.xlsx') # KR_covid + WD_covid covid = kr_covid.merge(wd_covid, left_on="Week", right_on=wd_covid["Week"]) covid = covid[['Date_x', 'Week_x', 'KR_covid', 'KR_death', 'WD_covid' ]] covid.columns = ['Date', 'Week', 'KR_covid', 'KR_death', 'WD_covid' ] # KR_bank + US_bank bank = kr_bank.merge(us_bank, left_on="Week", right_on=us_bank["Week"]) bank = bank[['Date_x', 'Week_x', 'KR_bank','US_bank' ]] bank.columns = ['Date', 'Week', 'KR_bank', 'US_bank'] # covid + bank covid_bank = covid.merge(bank, left_on="Week", right_on=bank["Week"]) covid_bank = covid_bank[['Date_x', 'Week_x', 'KR_covid', 'KR_death', 'WD_covid', 'KR_bank', 'US_bank' ]] covid_bank.columns = ['Date', 'Week', 'KR_covid', 'KR_death', 'WD_covid', 'KR_bank', 'US_bank'] # covid_bank + nasdaq independant = covid_bank.merge(nasdaq, left_on="Week", right_on=nasdaq["Week"]) independant = independant[['Date_x', 'Week_x', 'KR_covid', 'KR_death', 'WD_covid', 'KR_bank', 'US_bank', 'Close' ]] independant.columns = ['Date', 'Week', 'KR_covid', 'KR_death', 'WD_covid', 'KR_bank', 'US_bank', 'Nasdaq'] # 저장하기 independant.to_excel("독립_covid_bank_nasdaq.xlsx", index=False, encoding="utf8") independant = pd.read_excel("독립_covid_bank_nasdaq.xlsx") independant.head() ``` ### 3) 종속변수 + 독립변수 **양방향미디어와서비스 + 독립변수** ``` IT미디어 = 양방향미디어와서비스 IT미디어_독립 = IT미디어.merge(independant, left_on="Week", right_on=independant["Week"]) # 필요한 컬럼만 가져오기 IT미디어_독립 = IT미디어_독립[['Date_x', 'Week_x', '카카오', 'NAVER', 'THE E&M', '아프리카TV', '줌인터넷', '캐리소프트', '키다리스튜디오', '티사이언티픽', '퓨쳐스트림네트웍스', '플리토', 'KR_covid', 'KR_death', 'WD_covid', 'KR_bank', 'US_bank', 'Nasdaq']] # 컬럼명 변경 IT미디어_독립.columns = ['Date', 'Week', '카카오', 'NAVER', 'THE EnM', '아프리카TV', '줌인터넷', '캐리소프트', '키다리스튜디오', '티사이언티픽', '퓨쳐스트림네트웍스', '플리토', 'KR_covid', 'KR_death', 'WD_covid', 'KR_bank', 'US_bank', 'Nasdaq'] # 저장하고 확인하기 IT미디어_독립.to_excel("양방향미디어와서비스_독립.xlsx", index=False, encoding="utf8") IT미디어_독립 = pd.read_excel("양방향미디어와서비스_독립.xlsx") IT미디어_독립.head() ``` **호텔레스토랑레저 + 독립변수** ``` 호텔레저 = 호텔레스토랑레저 호텔레저_독립 = 호텔레저.merge(independant, left_on="Week", right_on=independant["Week"]) # 필요한 컬럼만 가져오기. 호텔레저_독립 = 호텔레저_독립[['Date_x', 'Week_x', 'GKL', '강원랜드', '남화산업', '노랑풍선', '디딤', '롯데관광개발', '모두투어', '서부T&D', '세중', '시공테크', '신세계푸드', '아난티', '용평리조트', '이월드', '참좋은여행', '파라다이스', '하나투어', '해마로푸드서비스', '호텔신라', 'KR_covid', 'KR_death', 'WD_covid', 'KR_bank', 'US_bank', 'Nasdaq']] # 컬럼명 변경하기. 호텔레저_독립.columns = ['Date', 'Week', 'GKL', '강원랜드', '남화산업', '노랑풍선', '디딤', '롯데관광개발', '모두투어', '서부T&D', '세중', '시공테크', '신세계푸드', '아난티', '용평리조트', '이월드', '참좋은여행', '파라다이스', '하나투어', '해마로푸드서비스', '호텔신라', 'KR_covid', 'KR_death', 'WD_covid', 'KR_bank', 'US_bank', 'Nasdaq'] # 저장하고 확인하기 호텔레저_독립.to_excel("호텔레스토랑레저_독립.xlsx", index=False, encoding="utf8") 호텔레저_독립 = pd.read_excel("호텔레스토랑레저_독립.xlsx") 호텔레저_독립.head() ``` # 본론2. EDA 시각화 및 회귀분석 ## 2-1. 
EDA Visualization
---

```
# Load the visualization libraries
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.pylab as pylab
from matplotlib import pyplot
import seaborn as sns
import warnings
warnings.filterwarnings(action='ignore')

# Prepare Korean font support for the plots (draws a test string)
plt.rc('font', family='Gulim')
plt.text(0.3, 0.3, '한글', size=100)
# On macOS, use the following instead
# plt.rcParams['font.family'] = 'AppleGothic'
```

### Global COVID-19 status

```
# Load the weekly global COVID-19 case data
wd_covid = pd.read_excel("독립_WD_covid.xlsx")
wd_covid.head()

# Rename the columns
wd_covid.columns = ['날짜', '주차', '주간 확진자', '주간 사망자']

# Correlation between global weekly new cases and weekly deaths
# Considering multicollinearity, only the weekly new-case data are used in the analysis
wd_covid[['주간 확진자', '주간 사망자']].corr()

# Global weekly new cases and deaths over time
wd_covid.plot(x='주차', y='주간 확진자', figsize=(15,4))
wd_covid.plot(x='주차', y='주간 사망자', figsize=(15,4))

wld_total = wd_covid[['주차','주간 확진자','주간 사망자']]
# Convert the first column (week) to the index
wld_total = wld_total.set_index('주차')
wld_total.plot(figsize=(15,4))
```

### COVID-19 in Korea

```
# Load the weekly Korean COVID-19 case data
kr_covid = pd.read_excel('독립_KR_covid.xlsx')
kr_covid.head()

# Rename the columns
kr_covid.columns = ['날짜', '주차', '주간 확진자', '주간 사망자']

# Correlation between Korean weekly new cases and weekly new deaths
kr_covid[['주간 확진자','주간 사망자']].corr()

# Korean weekly new cases and deaths over time
kr_covid.plot(x='주차', y='주간 확진자', figsize=(15,4))
kr_covid.plot(x='주차', y='주간 사망자', figsize=(15,4))

# Korean weekly cases and deaths in a single plot
kr_total = kr_covid[['주차','주간 확진자','주간 사망자']]
kr_total.head()
# Convert the first column (week) to the index
kr_total = kr_total.set_index('주차')
kr_total.head()
kr_total.plot(figsize=(15,4))
```

### KOSPI, KOSDAQ, NASDAQ

```
kospi_eda = pd.read_excel("독립_KOSPI.xlsx")
kosdaq_eda = pd.read_excel("독립_KOSDAQ.xlsx")
nasdaq_eda = pd.read_excel("독립_NASDAQ.xlsx")

# Drop the week and weekday columns and set the date as the index
kospi_eda = kospi_eda.drop("Week", 1)
kospi_eda = kospi_eda.drop("Day", 1)
kospi_eda = kospi_eda.set_index('Date')

kosdaq_eda = kosdaq_eda.drop("Week", 1)
kosdaq_eda = kosdaq_eda.drop("Day", 1)
kosdaq_eda = kosdaq_eda.set_index('Date')

nasdaq_eda = nasdaq_eda.drop("Week", 1)
nasdaq_eda = nasdaq_eda.drop("Day", 1)
nasdaq_eda = nasdaq_eda.set_index('Date')

kospi_eda['Close'].plot(color='#ff0000')  # plot in red
pyplot.grid()    # add a background grid
pyplot.legend()  # add a legend
pyplot.title("2020년 KOSPI 추세")
pyplot.xlabel("날짜")
pyplot.ylabel("종가")
pyplot.show()

kosdaq_eda['Close'].plot(color='#ff0000')  # plot in red
pyplot.grid()    # add a background grid
pyplot.legend()  # add a legend
pyplot.title("2020년 KOSDAQ 추세")
pyplot.xlabel("날짜")
pyplot.ylabel("종가")
pyplot.show()

nasdaq_eda['Close'].plot(color='#ff0000')  # plot in red
pyplot.grid()    # add a background grid
pyplot.legend()  # add a legend
pyplot.title("2020년 NASDAQ 추세")
pyplot.xlabel("날짜")
pyplot.ylabel("종가")
pyplot.show()
```

### Trends of the 10 top-level WICS sector indices

```
WICS_eda = pd.read_excel("독립_WICS.xlsx")
WICS_eda.head()
# Drop the week and weekday columns and set the date as the index
WICS_eda = WICS_eda.drop("주차", 1)
WICS_eda = WICS_eda.drop("요일", 1)
WICS_eda = WICS_eda.set_index('일자')
WICS_eda.plot()
```

### Correlation heatmap of companies in the hotels, restaurants and leisure sector

```
import pandas as pd
import numpy as np
plt.rcParams['font.family'] = 'AppleGothic'  # prevent broken Korean labels on macOS

호텔레저_eda = pd.read_excel('호텔레스토랑레저.xlsx')
호텔레저_eda = 호텔레저_eda.drop('Week', 1)
호텔레저_eda.head()
호텔레저_eda.corr().head()
호텔레저_eda = 호텔레저_eda.corr()

# Set the figure size
fig, ax = plt.subplots(figsize=(20,20))

# Build a triangular mask (True on the upper triangle, False on the lower triangle)
mask = np.zeros_like(호텔레저_eda, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True

# Draw the heatmap
sns.heatmap(호텔레저_eda,
            cmap = 'RdYlBu_r',
            annot = True,             # show the actual correlation values
            mask=mask,                # hide the masked (upper-triangle) cells
            linewidths=.5,            # separate cells with thin lines
            cbar_kws={"shrink": .5},  # shrink the colorbar to half size
vmin = -1,vmax = 1 # 컬러바 범위 -1 ~ 1 ) plt.show() IT미디어_eda = pd.read_excel('양방향미디어와서비스.xlsx') IT미디어_eda = IT미디어_eda.drop('Week',1) IT미디어_eda.head() IT미디어_eda.corr().head() IT미디어_eda = IT미디어_eda.corr() # 그림 사이즈 지정 fig, ax = plt.subplots(figsize=(20,20) ) # 삼각형 마스크를 만든다(위 쪽 삼각형에 True, 아래 삼각형에 False) mask = np.zeros_like(IT미디어_eda, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # 히트맵을 그린다 sns.heatmap(IT미디어_eda, cmap = 'RdYlBu_r', annot = True, # 실제 값을 표시한다 mask=mask, # 표시하지 않을 마스크 부분을 지정한다 linewidths=.5, # 경계면 실선으로 구분하기 cbar_kws={"shrink": .5},# 컬러바 크기 절반으로 줄이기 vmin = -1,vmax = 1 # 컬러바 범위 -1 ~ 1 ) plt.show() ``` ## 2-2. 회귀분석 **30개 기업 회귀분석을 위한 코드** 반복문을 짜기는 했지만, 휴먼러닝이 약간 필요합니다. ``` import pandas as pd from statsmodels.formula.api import ols # 위에서 작업한, 분석하려는 기업목록과 독립변수가 합쳐져 있는 파일 불러오기 호텔레저_독립 = pd.read_excel('호텔레스토랑레저_독립.xlsx') IT미디어_독립 = pd.read_excel('양방향미디어와서비스_독립.xlsx') ``` ### 코드: 호텔,레스토랑, 레저 ``` # 분석할 기업목록 확인 호텔레저_list = ['GKL', 'MP그룹', '강원랜드', '남화산업', '노랑풍선', '디딤', '롯데관광개발', '모두투어', '서부TnD', '세중', '시공테크', '신세계푸드', '아난티', '용평리조트', '이월드', '참좋은여행', '파라다이스', '하나투어', '해마로푸드서비스', '호텔신라'] ``` **반복문(1)** **Kc = KR_covid, KR_death / Wc = WD_covid / C = Kc, Wc / N = Nasdaq / Kb = KR_bank / Ub = US_bank** ``` # 호텔레저_list 안에 분석하려는 기업명 입력하기. #KbUb 넣기 호텔레저_list = ['호텔신라'] for i in 호텔레저_list: Kc = i + 'Kc = ols(\"' + i + '~ KR_covid+KR_death\", data=호텔레저_독립).fit()' a = '#' + i + 'Kc.summary()' KcWc = i + 'KcWc = ols(\"' + i + ' ~ KR_covid+KR_death+WD_covid\", data=호텔레저_독립).fit()' b = '#' + i + 'KcWc.summary()' CN = i + 'CN = ols(\"' + i + '~ KR_covid+KR_death+WD_covid+Nasdaq\", data=호텔레저_독립).fit()' c = '#' + i + 'CN.summary()' CNKb = i + 'CNKb = ols(\"' + i + ' ~ KR_covid+KR_death+WD_covid+KR_bank+Nasdaq\", data=호텔레저_독립).fit()' d = '#' + i + 'CNKb.summary()' CNUb = i + 'CNUb = ols(\"' + i + ' ~ KR_covid+KR_death+WD_covid+US_bank+Nasdaq\", data=호텔레저_독립).fit()' e = '#' + i + 'CNUb.summary()' CNKbUb = i + 'CNKbUb = ols(\"' + i + ' ~ KR_covid+KR_death+WD_covid+KR_bank+US_bank+Nasdaq\", data=호텔레저_독립).fit()' f = '#' + i + 'CNKbUb.summary()' print(Kc) print(a) print(KcWc) print(b) print(CN) print(c) print(CNKb) print(d) print(CNUb) print(e) print(CNKbUb) print(f) # 위 결과 새로운 셀에 복붙하기. 호텔신라Kc = ols("호텔신라~ KR_covid+KR_death", data=호텔레저_독립).fit() #호텔신라Kc.summary() 호텔신라KcWc = ols("호텔신라 ~ KR_covid+KR_death+WD_covid", data=호텔레저_독립).fit() #호텔신라KcWc.summary() 호텔신라CN = ols("호텔신라~ KR_covid+KR_death+WD_covid+Nasdaq", data=호텔레저_독립).fit() #호텔신라CN.summary() 호텔신라CNKb = ols("호텔신라 ~ KR_covid+KR_death+WD_covid+KR_bank+Nasdaq", data=호텔레저_독립).fit() #호텔신라CNKb.summary() 호텔신라CNUb = ols("호텔신라 ~ KR_covid+KR_death+WD_covid+US_bank+Nasdaq", data=호텔레저_독립).fit() #호텔신라CNUb.summary() 호텔신라CNKbUb = ols("호텔신라 ~ KR_covid+KR_death+WD_covid+KR_bank+US_bank+Nasdaq", data=호텔레저_독립).fit() #호텔신라CNKbUb.summary() ``` **반복문(2)** **분석 결과 별 R제곱, 수정R제곱, AIC, BIC** ``` # 호텔레저_list 안에 원하는 기업명 입력하기. 
호텔레저_list = ['호텔신라'] for i in 호텔레저_list: result = i + '=pd.DataFrame({\"Kc\":[' + i + 'Kc.rsquared,' + i + 'Kc.rsquared_adj,' + i + 'Kc.aic,'+ i + 'Kc.bic],' + \ '\"KcWc\":[' + i + 'KcWc.rsquared,'+ i + 'KcWc.rsquared_adj,'+ i + 'KcWc.aic,'+ i + 'KcWc.bic],' + \ '\"CN\":[' + i + 'CN.rsquared,'+ i + 'CN.rsquared_adj,'+ i + 'CN.aic,'+ i + 'CN.bic],' + \ '\"CNKb\":[' + i + 'CNKb.rsquared,'+ i + 'CNKb.rsquared_adj,'+ i + 'CNKb.aic,'+ i + 'CNKb.bic],' + \ '\"CNUb\":[' + i + 'CNUb.rsquared,'+ i + 'CNUb.rsquared_adj,'+ i + 'CNUb.aic,'+ i + 'CNUb.bic],' + \ '\"CNKbUb\":[' + i + 'CNKbUb.rsquared,'+ i + 'CNKbUb.rsquared_adj,'+ i + 'CNKbUb.aic,'+ i + 'CNKbUb.bic]})' print(result) # 새로운 셀에 위 결과 복붙: R제곱, 수정 R제곱, AIC, BIC 데이터프레임화 호텔신라=pd.DataFrame({"Kc":[호텔신라Kc.rsquared,호텔신라Kc.rsquared_adj,호텔신라Kc.aic,호텔신라Kc.bic],"KcWc":[호텔신라KcWc.rsquared,호텔신라KcWc.rsquared_adj,호텔신라KcWc.aic,호텔신라KcWc.bic],"CN":[호텔신라CN.rsquared,호텔신라CN.rsquared_adj,호텔신라CN.aic,호텔신라CN.bic],"CNKb":[호텔신라CNKb.rsquared,호텔신라CNKb.rsquared_adj,호텔신라CNKb.aic,호텔신라CNKb.bic],"CNUb":[호텔신라CNUb.rsquared,호텔신라CNUb.rsquared_adj,호텔신라CNUb.aic,호텔신라CNUb.bic],"CNKbUb":[호텔신라CNKbUb.rsquared,호텔신라CNKbUb.rsquared_adj,호텔신라CNKbUb.aic,호텔신라CNKbUb.bic]}) # 결과 호텔신라 ``` ### 코드: 양방향미디어와서비스 ``` # 분석할 기업목록 확인 IT미디어_list = ['Date', 'Week', '카카오', 'NAVER', 'THE E&M', '아프리카TV', '줌인터넷', '캐리소프트', '키다리스튜디오', '티사이언티픽', '퓨쳐스트림네트웍스', '플리토', 'KR_covid', 'KR_death', 'WD_covid', 'KR_bank', 'US_bank', 'Nasdaq'] ``` **반복문(1)** **Kc = KR_covid, KR_death / Wc = WD_covid / C = Kc, Wc / N = Nasdaq / Kb = KR_bank / Ub = US_bank** ``` # 호텔레저_list 안에 분석하려는 기업명 입력하기. #KbUb 넣기 IT미디어_list = ['카카오'] for i in IT미디어_list: Kc = i + 'Kc = ols(\"' + i + '~ KR_covid+KR_death\", data=IT미디어_독립).fit()' a = '#' + i + 'Kc.summary()' KcWc = i + 'KcWc = ols(\"' + i + ' ~ KR_covid+KR_death+WD_covid\", data=IT미디어_독립).fit()' b = '#' + i + 'KcWc.summary()' CN = i + 'CN = ols(\"' + i + '~ KR_covid+KR_death+WD_covid+Nasdaq\", data=IT미디어_독립).fit()' c = '#' + i + 'CN.summary()' CNKb = i + 'CNKb = ols(\"' + i + ' ~ KR_covid+KR_death+WD_covid+KR_bank+Nasdaq\", data=IT미디어_독립).fit()' d = '#' + i + 'CNKb.summary()' CNUb = i + 'CNUb = ols(\"' + i + ' ~ KR_covid+KR_death+WD_covid+US_bank+Nasdaq\", data=IT미디어_독립).fit()' e = '#' + i + 'CNUb.summary()' CNKbUb = i + 'CNKbUb = ols(\"' + i + ' ~ KR_covid+KR_death+WD_covid+KR_bank+US_bank+Nasdaq\", data=IT미디어_독립).fit()' f = '#' + i + 'CNKbUb.summary()' print(Kc) print(a) print(KcWc) print(b) print(CN) print(c) print(CNKb) print(d) print(CNUb) print(e) print(CNKbUb) print(f) 카카오Kc = ols("카카오~ KR_covid+KR_death", data=IT미디어_독립).fit() #카카오Kc.summary() 카카오KcWc = ols("카카오 ~ KR_covid+KR_death+WD_covid", data=IT미디어_독립).fit() #카카오KcWc.summary() 카카오CN = ols("카카오~ KR_covid+KR_death+WD_covid+Nasdaq", data=IT미디어_독립).fit() #카카오CN.summary() 카카오CNKb = ols("카카오 ~ KR_covid+KR_death+WD_covid+KR_bank+Nasdaq", data=IT미디어_독립).fit() #카카오CNKb.summary() 카카오CNUb = ols("카카오 ~ KR_covid+KR_death+WD_covid+US_bank+Nasdaq", data=IT미디어_독립).fit() #카카오CNUb.summary() 카카오CNKbUb = ols("카카오 ~ KR_covid+KR_death+WD_covid+KR_bank+US_bank+Nasdaq", data=IT미디어_독립).fit() #카카오CNKbUb.summary() ``` **반복문(2)** **분석 결과 별 R제곱, 수정R제곱, AIC, BIC** ``` # IT미디어_list 안에 원하는 기업명 입력하기. 
IT미디어_list = ['카카오']

for i in IT미디어_list:
    result = i + '=pd.DataFrame({\"Kc\":[' + i + 'Kc.rsquared,' + i + 'Kc.rsquared_adj,' + i + 'Kc.aic,'+ i + 'Kc.bic],' + \
             '\"KcWc\":[' + i + 'KcWc.rsquared,'+ i + 'KcWc.rsquared_adj,'+ i + 'KcWc.aic,'+ i + 'KcWc.bic],' + \
             '\"CN\":[' + i + 'CN.rsquared,'+ i + 'CN.rsquared_adj,'+ i + 'CN.aic,'+ i + 'CN.bic],' + \
             '\"CNKb\":[' + i + 'CNKb.rsquared,'+ i + 'CNKb.rsquared_adj,'+ i + 'CNKb.aic,'+ i + 'CNKb.bic],' + \
             '\"CNUb\":[' + i + 'CNUb.rsquared,'+ i + 'CNUb.rsquared_adj,'+ i + 'CNUb.aic,'+ i + 'CNUb.bic],' + \
             '\"CNKbUb\":[' + i + 'CNKbUb.rsquared,'+ i + 'CNKbUb.rsquared_adj,'+ i + 'CNKbUb.aic,'+ i + 'CNKbUb.bic]})'
    print(result)

# Paste the printed line into a new cell: it builds a DataFrame of R-squared, adjusted R-squared, AIC and BIC
카카오=pd.DataFrame({"Kc":[카카오Kc.rsquared,카카오Kc.rsquared_adj,카카오Kc.aic,카카오Kc.bic],"KcWc":[카카오KcWc.rsquared,카카오KcWc.rsquared_adj,카카오KcWc.aic,카카오KcWc.bic],"CN":[카카오CN.rsquared,카카오CN.rsquared_adj,카카오CN.aic,카카오CN.bic],"CNKb":[카카오CNKb.rsquared,카카오CNKb.rsquared_adj,카카오CNKb.aic,카카오CNKb.bic],"CNUb":[카카오CNUb.rsquared,카카오CNUb.rsquared_adj,카카오CNUb.aic,카카오CNUb.bic],"CNKbUb":[카카오CNKbUb.rsquared,카카오CNKbUb.rsquared_adj,카카오CNKbUb.aic,카카오CNKbUb.bic]})

# Result
카카오
```

### Multiple regression analysis

### Example (1): Hotel Shilla (호텔신라)

Kc = KR_covid, KR_death / Wc = WD_covid / C = Kc, Wc / N = Nasdaq / Kb = KR_bank / Ub = US_bank

Korea corona (Kc): KR cases + KR deaths
Korea corona + World corona (KcWc): KR cases + KR deaths + global cases
Corona + Nasdaq (CN): KR cases + KR deaths + global cases + NASDAQ
Corona + Nasdaq + Korea bank (CNKb): KR cases + KR deaths + global cases + NASDAQ + Bank of Korea articles
Corona + Nasdaq + US bank (CNUb): KR cases + KR deaths + global cases + NASDAQ + US Fed articles
Corona + Nasdaq + Korea bank + US bank (CNKbUb): KR cases + KR deaths + global cases + NASDAQ + Bank of Korea articles + US Fed articles

```
호텔신라
```

Explanation:

```
호텔신라CNUb = ols("호텔신라 ~ KR_covid+KR_death+WD_covid+US_bank+Nasdaq", data=호텔레저_독립).fit()
호텔신라CNUb.summary()
```

- Of all the combinations tried, this is the multiple regression with the highest adjusted R²: Korean new COVID-19 cases + Korean new COVID-19 deaths + global new COVID-19 cases + the NASDAQ index + the number of US Fed articles as independent variables.
- After accounting for Korean new cases, Korean new deaths and the number of US Fed articles, the global new-case count (negative direction) and the NASDAQ index (positive direction) are statistically significantly associated with the stock price.

### Example (2): Kakao (카카오)

Korea corona (Kc): KR cases + KR deaths
Korea corona + World corona (KcWc): KR cases + KR deaths + global cases
Corona + Nasdaq (CN): KR cases + KR deaths + global cases + NASDAQ
Corona + Nasdaq + Korea bank (CNKb): KR cases + KR deaths + global cases + NASDAQ + Bank of Korea articles
Corona + Nasdaq + US bank (CNUb): KR cases + KR deaths + global cases + NASDAQ + US Fed articles
Corona + Nasdaq + Korea bank + US bank (CNKbUb): KR cases + KR deaths + global cases + NASDAQ + Bank of Korea articles + US Fed articles

```
카카오

카카오CNKbUb = ols("카카오 ~ KR_covid+KR_death+WD_covid+KR_bank+US_bank+Nasdaq", data=IT미디어_독립).fit()
카카오CNKbUb.summary()
```

- Of all the combinations tried, this is the multiple regression with the highest adjusted R²: Korean new COVID-19 cases + Korean new COVID-19 deaths + global new COVID-19 cases + the number of Bank of Korea articles + the number of US Fed articles + the NASDAQ index as independent variables.
- After accounting for Korean new cases, Korean new deaths, the number of Bank of Korea articles and the number of US Fed articles, the global new-case count (positive direction) and the NASDAQ index (positive direction) are statistically significantly associated with the stock price.
- The positive sign on the global new-case count is the opposite of Hotel Shilla: it implies that Kakao's stock price rises as the global COVID-19 case count increases.

### Summary

The regression results for all 30 companies are summarized in the table below.

![title](img05.png)

- Among the independent variables, the global case count (22 companies) and the NASDAQ index (21 companies) were most often significant at p-values of 0.05 or below after accounting for the other variables. The NASDAQ index shows a positive (+) association in every case, whereas the global case count shows both positive (+) and negative (-) associations, so it is hard to say that it has a consistent effect on stock prices.
- Whenever the number of US Fed articles is significant at a p-value of 0.05 or below after accounting for the other variables, the association is negative (-). This would mean that stock prices rise when the US Federal Reserve generates fewer reports about monetary liquidity.

# Conclusions and limitations

## Conclusions

- It is hard to find evidence that domestic or global COVID-19 case and death counts affect the stock prices of the Korean companies studied.
- However, the common belief that "Korean stock prices follow the NASDAQ" was confirmed in the data.
- Following expert opinion that "the more the Bank of Korea and the US Fed report that they are increasing the money supply, the higher market expectations become and the more stock prices rise", the Bank of Korea and US Fed article counts were used as COVID-19-derived variables; however, because the news counts could not be collected and aggregated precisely, no data were found to support this.

## Limitations

- The COVID-19 data were limited to domestic and global case and death counts. Including other meaningful COVID-19 metrics would make for a better analysis.
- Because the dependent variable was restricted to the stock prices of selected companies, the results are hard to generalize to any whole sector or to the overall trend of the Korean market.
- We also intended to analyze the stock prices of industries related to the keywords most frequently mentioned in the press about the industries the government plans to invest in, but this was not carried out because of limitations in crawling the articles and in deciding how to define and obtain stock price data for those industries.
- Because stock price and index data have no weekend values, one data point per week (the Friday closing price) was used, and the corresponding COVID-19 data were likewise aggregated (summed) by week, so the final dataset was not very large.

----

### "I'm a stock-market newbie, and I won't be swept away by COVID...!!!"
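As a follow-up to the code-generation loops in section 2-2, the same per-company comparison can be produced directly, without printing code and pasting it into a new cell. The sketch below is only an illustration and not part of the original analysis; it assumes the merged dataset 호텔레저_독립 and the Kc/KcWc/CN/CNKb/CNUb/CNKbUb variable combinations defined above.

```
# Sketch: fit all six model variants for one company and collect the fit statistics.
# Assumes 호텔레저_독립 (or IT미디어_독립) has already been built as in section 1-4.
import pandas as pd
from statsmodels.formula.api import ols

formulas = {
    "Kc":     "KR_covid + KR_death",
    "KcWc":   "KR_covid + KR_death + WD_covid",
    "CN":     "KR_covid + KR_death + WD_covid + Nasdaq",
    "CNKb":   "KR_covid + KR_death + WD_covid + KR_bank + Nasdaq",
    "CNUb":   "KR_covid + KR_death + WD_covid + US_bank + Nasdaq",
    "CNKbUb": "KR_covid + KR_death + WD_covid + KR_bank + US_bank + Nasdaq",
}

def compare_models(company, data):
    """Return a DataFrame of R-squared, adjusted R-squared, AIC and BIC per model."""
    rows = {}
    for name, rhs in formulas.items():
        fit = ols(f"{company} ~ {rhs}", data=data).fit()
        rows[name] = [fit.rsquared, fit.rsquared_adj, fit.aic, fit.bic]
    return pd.DataFrame(rows, index=["R2", "adj_R2", "AIC", "BIC"])

# Example: the same table the notebook builds by hand for 호텔신라
compare_models("호텔신라", 호텔레저_독립)
```

Looping over fitted model objects keeps the six fits per company in one place, so the statistics table does not have to be rebuilt manually for each of the 30 companies.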
# ORF recognition by MLP So far, no MLP has exceeded 50% accurcy on any ORF problem. Here, try a variety of things. RNA length 16, CDS length 8. No luck with 32 neurons or 64 neurons Instead of sigmoid, tried tanh and relu. Instead of 4 layers, tried 1. RNA length 12, CDS length 6. 2 layers of 32 neurons, sigmoid. Even 512 neurons, rectangular or triangular, didn't work. Move INPUT_SHAPE from compile() to first layer parameter. This works: All PC='AC'*, all NC='GT'*. 100% accurate on one epoch with 2 layers of 12 neurons. Nothing works! Now suspect the data preparation is incorrect. Try trivializing the problem by always adding ATG or TAG. ``` import time t = time.time() time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)) PC_SEQUENCES=32000 # how many protein-coding sequences NC_SEQUENCES=32000 # how many non-coding sequences PC_TESTS=1000 NC_TESTS=1000 RNA_LEN=32 # how long is each sequence CDS_LEN=16 # min CDS len to be coding ALPHABET=4 # how many different letters are possible INPUT_SHAPE_2D = (RNA_LEN,ALPHABET,1) # Conv2D needs 3D inputs INPUT_SHAPE = (None,RNA_LEN,ALPHABET) # MLP requires batch size None FILTERS = 16 # how many different patterns the model looks for CELLS = 16 NEURONS = 32 DROP_RATE = 0.4 WIDTH = 3 # how wide each pattern is, in bases STRIDE_2D = (1,1) # For Conv2D how far in each direction STRIDE = 1 # For Conv1D, how far between pattern matches, in bases EPOCHS=50 # how many times to train on all the data SPLITS=3 # SPLITS=3 means train on 2/3 and validate on 1/3 FOLDS=3 # train the model this many times (range 1 to SPLITS) import sys IN_COLAB = False try: from google.colab import drive IN_COLAB = True except: pass if IN_COLAB: print("On Google CoLab, mount cloud-local file, get our code from GitHub.") PATH='/content/drive/' #drive.mount(PATH,force_remount=True) # hardly ever need this #drive.mount(PATH) # Google will require login credentials DATAPATH=PATH+'My Drive/data/' # must end in "/" import requests r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py') with open('RNA_describe.py', 'w') as f: f.write(r.text) from RNA_describe import ORF_counter from RNA_describe import Random_Base_Oracle r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py') with open('RNA_prep.py', 'w') as f: f.write(r.text) from RNA_prep import prepare_inputs_len_x_alphabet else: print("CoLab not working. 
On my PC, use relative paths.") DATAPATH='data/' # must end in "/" sys.path.append("..") # append parent dir in order to use sibling dirs from SimTools.RNA_describe import ORF_counter,Random_Base_Oracle from SimTools.RNA_prep import prepare_inputs_len_x_alphabet MODELPATH="BestModel" # saved on cloud instance and lost after logout #MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login from os import listdir import csv from zipfile import ZipFile import numpy as np import pandas as pd from scipy import stats # mode from sklearn.preprocessing import StandardScaler from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score from keras.models import Sequential from keras.layers import Dense,Embedding,Dropout from keras.layers import Conv1D,Conv2D from keras.layers import GRU,LSTM from keras.layers import Flatten,TimeDistributed from keras.layers import MaxPooling1D,MaxPooling2D from keras.losses import BinaryCrossentropy # tf.keras.losses.BinaryCrossentropy import matplotlib.pyplot as plt from matplotlib import colors mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1 np.set_printoptions(precision=2) rbo=Random_Base_Oracle(RNA_LEN,True) pc_all,nc_all = rbo.get_partitioned_sequences(CDS_LEN,10) # just testing pc_all,nc_all = rbo.get_partitioned_sequences(CDS_LEN,PC_SEQUENCES+PC_TESTS) print("Use",len(pc_all),"PC seqs") print("Use",len(nc_all),"NC seqs") # Make the problem super easy! def trivialize_sequences(list_of_seq,option): num_seq = len(list_of_seq) for i in range(0,num_seq): seq = list_of_seq[i] if option==0: list_of_seq[i] = 'TTTTTT'+seq[6:] else: list_of_seq[i] = 'AAAAAA'+seq[6:] if False: print("Trivialize...") trivialize_sequences(pc_all,1) print("Trivial PC:",pc_all[:5]) print("Trivial PC:",pc_all[-5:]) trivialize_sequences(nc_all,0) print("Trivial NC:",nc_all[:5]) print("Trivial NC:",nc_all[-5:]) # Describe the sequences def describe_sequences(list_of_seq): oc = ORF_counter() num_seq = len(list_of_seq) rna_lens = np.zeros(num_seq) orf_lens = np.zeros(num_seq) for i in range(0,num_seq): rna_len = len(list_of_seq[i]) rna_lens[i] = rna_len oc.set_sequence(list_of_seq[i]) orf_len = oc.get_max_orf_len() orf_lens[i] = orf_len print ("Average RNA length:",rna_lens.mean()) print ("Average ORF length:",orf_lens.mean()) print("Simulated sequences prior to adjustment:") print("PC seqs") describe_sequences(pc_all) print("NC seqs") describe_sequences(nc_all) pc_train=pc_all[:PC_SEQUENCES] nc_train=nc_all[:NC_SEQUENCES] pc_test=pc_all[PC_SEQUENCES:] nc_test=nc_all[NC_SEQUENCES:] # Use code from our SimTools library. 
X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles print("Data ready.") print(len(X),"sequences total") print(len(X[0]),"bases/sequence") print(len(X[0][0]),"dimensions/base") #print(X[0]) print(type(X[0])) print(X[0].shape) def make_DNN(): print("make_DNN") print("input shape:",INPUT_SHAPE) dnn = Sequential() dnn.add(Flatten()) dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32, input_shape=INPUT_SHAPE )) dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32)) dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32)) dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32)) #dnn.add(Dropout(DROP_RATE)) dnn.add(Dense(1,activation="sigmoid",dtype=np.float32)) dnn.compile(optimizer='adam', loss=BinaryCrossentropy(from_logits=False), metrics=['accuracy']) # add to default metrics=loss dnn.build(input_shape=INPUT_SHAPE) #dnn.build() #ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE) #bc=tf.keras.losses.BinaryCrossentropy(from_logits=False) #model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"]) return dnn model = make_DNN() print(model.summary()) from keras.callbacks import ModelCheckpoint def do_cross_validation(X,y): cv_scores = [] fold=0 mycallbacks = [ModelCheckpoint( filepath=MODELPATH, save_best_only=True, monitor='val_accuracy', mode='max')] splitter = KFold(n_splits=SPLITS) # this does not shuffle for train_index,valid_index in splitter.split(X): if fold < FOLDS: fold += 1 X_train=X[train_index] # inputs for training y_train=y[train_index] # labels for training X_valid=X[valid_index] # inputs for validation y_valid=y[valid_index] # labels for validation print("MODEL") # Call constructor on each CV. Else, continually improves the same model. model = model = make_DNN() print("FIT") # model.fit() implements learning start_time=time.time() history=model.fit(X_train, y_train, epochs=EPOCHS, verbose=1, # ascii art while learning callbacks=mycallbacks, # called at end of each epoch validation_data=(X_valid,y_valid)) end_time=time.time() elapsed_time=(end_time-start_time) print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time)) # print(history.history.keys()) # all these keys will be shown in figure pd.DataFrame(history.history).plot(figsize=(8,5)) plt.grid(True) plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale plt.show() do_cross_validation(X,y) from keras.models import load_model X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET) best_model=load_model(MODELPATH) scores = best_model.evaluate(X, y, verbose=0) print("The best model parameters were saved during cross-validation.") print("Best was defined as maximum validation accuracy at end of any epoch.") print("Now re-load the best model and test it on previously unseen data.") print("Test on",len(pc_test),"PC seqs") print("Test on",len(nc_test),"NC seqs") print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100)) from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score ns_probs = [0 for _ in range(len(y))] bm_probs = best_model.predict(X) ns_auc = roc_auc_score(y, ns_probs) bm_auc = roc_auc_score(y, bm_probs) ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs) bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs) plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc) plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc) plt.title('ROC') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.legend() plt.show() print("%s: %.2f%%" %('AUC',bm_auc*100.0)) t = time.time() 
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)) ```
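The note above suspects the data preparation; one quick way to test that is to recompute ORF lengths with code that is independent of the SimTools helpers and compare against the PC/NC labels. The sketch below is such a cross-check, added here only as an illustration. It assumes the usual forward-strand ATG-to-in-frame-stop ORF definition, which may differ in detail (for example, whether the stop codon is counted) from ORF_counter's convention, and it reuses the pc_all, nc_all and CDS_LEN names from the cell above.

```
# Independent sanity check of the simulated PC/NC labels.
STOPS = {"TAA", "TAG", "TGA"}

def max_orf_len(seq):
    """Length in bases of the longest forward-strand ORF, including start and stop codons."""
    best = 0
    for start in range(len(seq) - 2):
        if seq[start:start+3] != "ATG":
            continue
        for end in range(start + 3, len(seq) - 2, 3):
            if seq[end:end+3] in STOPS:
                best = max(best, end + 3 - start)
                break
    return best

def check_labels(pc_seqs, nc_seqs, min_cds):
    bad_pc = sum(1 for s in pc_seqs if max_orf_len(s) < min_cds)
    bad_nc = sum(1 for s in nc_seqs if max_orf_len(s) >= min_cds)
    print("PC sequences without an ORF of at least", min_cds, "bases:", bad_pc, "/", len(pc_seqs))
    print("NC sequences that do contain one:", bad_nc, "/", len(nc_seqs))

check_labels(pc_all[:1000], nc_all[:1000], CDS_LEN)
```

If both counts are near zero, the labels are probably fine and the problem lies elsewhere (for example in the one-hot encoding or the model itself).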
github_jupyter
import time t = time.time() time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)) PC_SEQUENCES=32000 # how many protein-coding sequences NC_SEQUENCES=32000 # how many non-coding sequences PC_TESTS=1000 NC_TESTS=1000 RNA_LEN=32 # how long is each sequence CDS_LEN=16 # min CDS len to be coding ALPHABET=4 # how many different letters are possible INPUT_SHAPE_2D = (RNA_LEN,ALPHABET,1) # Conv2D needs 3D inputs INPUT_SHAPE = (None,RNA_LEN,ALPHABET) # MLP requires batch size None FILTERS = 16 # how many different patterns the model looks for CELLS = 16 NEURONS = 32 DROP_RATE = 0.4 WIDTH = 3 # how wide each pattern is, in bases STRIDE_2D = (1,1) # For Conv2D how far in each direction STRIDE = 1 # For Conv1D, how far between pattern matches, in bases EPOCHS=50 # how many times to train on all the data SPLITS=3 # SPLITS=3 means train on 2/3 and validate on 1/3 FOLDS=3 # train the model this many times (range 1 to SPLITS) import sys IN_COLAB = False try: from google.colab import drive IN_COLAB = True except: pass if IN_COLAB: print("On Google CoLab, mount cloud-local file, get our code from GitHub.") PATH='/content/drive/' #drive.mount(PATH,force_remount=True) # hardly ever need this #drive.mount(PATH) # Google will require login credentials DATAPATH=PATH+'My Drive/data/' # must end in "/" import requests r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py') with open('RNA_describe.py', 'w') as f: f.write(r.text) from RNA_describe import ORF_counter from RNA_describe import Random_Base_Oracle r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py') with open('RNA_prep.py', 'w') as f: f.write(r.text) from RNA_prep import prepare_inputs_len_x_alphabet else: print("CoLab not working. On my PC, use relative paths.") DATAPATH='data/' # must end in "/" sys.path.append("..") # append parent dir in order to use sibling dirs from SimTools.RNA_describe import ORF_counter,Random_Base_Oracle from SimTools.RNA_prep import prepare_inputs_len_x_alphabet MODELPATH="BestModel" # saved on cloud instance and lost after logout #MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login from os import listdir import csv from zipfile import ZipFile import numpy as np import pandas as pd from scipy import stats # mode from sklearn.preprocessing import StandardScaler from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score from keras.models import Sequential from keras.layers import Dense,Embedding,Dropout from keras.layers import Conv1D,Conv2D from keras.layers import GRU,LSTM from keras.layers import Flatten,TimeDistributed from keras.layers import MaxPooling1D,MaxPooling2D from keras.losses import BinaryCrossentropy # tf.keras.losses.BinaryCrossentropy import matplotlib.pyplot as plt from matplotlib import colors mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1 np.set_printoptions(precision=2) rbo=Random_Base_Oracle(RNA_LEN,True) pc_all,nc_all = rbo.get_partitioned_sequences(CDS_LEN,10) # just testing pc_all,nc_all = rbo.get_partitioned_sequences(CDS_LEN,PC_SEQUENCES+PC_TESTS) print("Use",len(pc_all),"PC seqs") print("Use",len(nc_all),"NC seqs") # Make the problem super easy! 
def trivialize_sequences(list_of_seq,option): num_seq = len(list_of_seq) for i in range(0,num_seq): seq = list_of_seq[i] if option==0: list_of_seq[i] = 'TTTTTT'+seq[6:] else: list_of_seq[i] = 'AAAAAA'+seq[6:] if False: print("Trivialize...") trivialize_sequences(pc_all,1) print("Trivial PC:",pc_all[:5]) print("Trivial PC:",pc_all[-5:]) trivialize_sequences(nc_all,0) print("Trivial NC:",nc_all[:5]) print("Trivial NC:",nc_all[-5:]) # Describe the sequences def describe_sequences(list_of_seq): oc = ORF_counter() num_seq = len(list_of_seq) rna_lens = np.zeros(num_seq) orf_lens = np.zeros(num_seq) for i in range(0,num_seq): rna_len = len(list_of_seq[i]) rna_lens[i] = rna_len oc.set_sequence(list_of_seq[i]) orf_len = oc.get_max_orf_len() orf_lens[i] = orf_len print ("Average RNA length:",rna_lens.mean()) print ("Average ORF length:",orf_lens.mean()) print("Simulated sequences prior to adjustment:") print("PC seqs") describe_sequences(pc_all) print("NC seqs") describe_sequences(nc_all) pc_train=pc_all[:PC_SEQUENCES] nc_train=nc_all[:NC_SEQUENCES] pc_test=pc_all[PC_SEQUENCES:] nc_test=nc_all[NC_SEQUENCES:] # Use code from our SimTools library. X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles print("Data ready.") print(len(X),"sequences total") print(len(X[0]),"bases/sequence") print(len(X[0][0]),"dimensions/base") #print(X[0]) print(type(X[0])) print(X[0].shape) def make_DNN(): print("make_DNN") print("input shape:",INPUT_SHAPE) dnn = Sequential() dnn.add(Flatten()) dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32, input_shape=INPUT_SHAPE )) dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32)) dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32)) dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32)) #dnn.add(Dropout(DROP_RATE)) dnn.add(Dense(1,activation="sigmoid",dtype=np.float32)) dnn.compile(optimizer='adam', loss=BinaryCrossentropy(from_logits=False), metrics=['accuracy']) # add to default metrics=loss dnn.build(input_shape=INPUT_SHAPE) #dnn.build() #ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE) #bc=tf.keras.losses.BinaryCrossentropy(from_logits=False) #model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"]) return dnn model = make_DNN() print(model.summary()) from keras.callbacks import ModelCheckpoint def do_cross_validation(X,y): cv_scores = [] fold=0 mycallbacks = [ModelCheckpoint( filepath=MODELPATH, save_best_only=True, monitor='val_accuracy', mode='max')] splitter = KFold(n_splits=SPLITS) # this does not shuffle for train_index,valid_index in splitter.split(X): if fold < FOLDS: fold += 1 X_train=X[train_index] # inputs for training y_train=y[train_index] # labels for training X_valid=X[valid_index] # inputs for validation y_valid=y[valid_index] # labels for validation print("MODEL") # Call constructor on each CV. Else, continually improves the same model. 
model = model = make_DNN() print("FIT") # model.fit() implements learning start_time=time.time() history=model.fit(X_train, y_train, epochs=EPOCHS, verbose=1, # ascii art while learning callbacks=mycallbacks, # called at end of each epoch validation_data=(X_valid,y_valid)) end_time=time.time() elapsed_time=(end_time-start_time) print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time)) # print(history.history.keys()) # all these keys will be shown in figure pd.DataFrame(history.history).plot(figsize=(8,5)) plt.grid(True) plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale plt.show() do_cross_validation(X,y) from keras.models import load_model X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET) best_model=load_model(MODELPATH) scores = best_model.evaluate(X, y, verbose=0) print("The best model parameters were saved during cross-validation.") print("Best was defined as maximum validation accuracy at end of any epoch.") print("Now re-load the best model and test it on previously unseen data.") print("Test on",len(pc_test),"PC seqs") print("Test on",len(nc_test),"NC seqs") print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100)) from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score ns_probs = [0 for _ in range(len(y))] bm_probs = best_model.predict(X) ns_auc = roc_auc_score(y, ns_probs) bm_auc = roc_auc_score(y, bm_probs) ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs) bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs) plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc) plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc) plt.title('ROC') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.legend() plt.show() print("%s: %.2f%%" %('AUC',bm_auc*100.0)) t = time.time() time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
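The shapes printed above suggest that `prepare_inputs_len_x_alphabet` (defined in the SimTools library, not shown here) one-hot encodes each length-32 sequence into a `(RNA_LEN, ALPHABET)` matrix and shuffles the labeled examples. The sketch below is only a guess at that behavior, written to make the input format explicit; the helper name `one_hot_and_label`, the base ordering, and the assumption that protein-coding sequences are labeled 1 are illustrative assumptions, not taken from the source.

```
import numpy as np

BASE_TO_INDEX = {'A': 0, 'C': 1, 'G': 2, 'T': 3}  # assumed base ordering, not confirmed by the source

def one_hot_and_label(pc_seqs, nc_seqs, alphabet=4, seed=42):
    """Hypothetical stand-in for prepare_inputs_len_x_alphabet:
    one-hot encode each sequence to shape (len, alphabet), label PC=1 and NC=0, then shuffle."""
    X, y = [], []
    for label, seqs in ((1, pc_seqs), (0, nc_seqs)):
        for seq in seqs:
            mat = np.zeros((len(seq), alphabet), dtype=np.float32)
            for i, base in enumerate(seq):
                mat[i, BASE_TO_INDEX[base]] = 1.0  # one 1 per position, rest zeros
            X.append(mat)
            y.append(label)
    X, y = np.array(X), np.array(y)
    order = np.random.default_rng(seed).permutation(len(y))  # shuffle, as the real helper appears to do
    return X[order], y[order]

# Tiny usage example with made-up sequences:
# X_demo, y_demo = one_hot_and_label(['ATGCCC'], ['TTTAAA'])
```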
<pre> </pre> ## Start your mySQL Server from a terminal (if it isn't already running) <code>sudo docker start course-mysql</code> <pre> </pre> Don't forget that, if you use sqlMagic, you need to execute the connection lines in your Notebook! <pre> %load_ext sql %config SqlMagic.autocommit=False %sql mysql+pymysql://root:[email protected]:3306/mysql </pre> ## Create a new Python3 Jupyter Notebook in your Exam Answers folder Commit and push this Notebook to GitHub when you are finished. You must submit your answers to GitHub by 1800h Sept 14. ``` %load_ext sql %config SqlMagic.autocommit=False %sql mysql+pymysql://root:[email protected]:3306/mysql ``` ## Data Files Germplasm.tsv and LocusGene.tsv contain the datasets we need for the exam. Our objective is to create a database to contain the data in these files, insert the data into the database, then query the database in a variety of ways. ## Problem 1: Controls Write a Python script that proves that the lines of data in Germplasm.tsv, and LocusGene are in the same sequence, based on the AGI Locus Code (ATxGxxxxxx). (hint: This will help you decide how to load the data into the database). ### Answer: We are going to compare if lines of data in Germplasm.tsv and LocusGene.tsv are in the same sequence based on the AGI Locus Code: * The first character is "A" or "a" * The second character is "T" or "t" * The third character is the chromosome number (between 1 and 5) * The fourth character is "G" or "g" * The remaining characters are a set of 5 digits (between 0 and 9). For doing this, we use a Regular Expressions to prove if they have the same sequence and define a function to simplify our job. ``` import re def validar_datos (nombre_fichero): with open (nombre_fichero) as fichero: for linea in fichero: x = linea.split('\t') print(x[0]) text = x[0] matchObj1 = re.search(r'^[aA][tT][0-5][gG]\d{5}$', text) if matchObj1: print("Match 'AT*G*****': ", matchObj1.group()) else: print("No Match!") fichero.close() validar_datos("LocusGene.tsv") validar_datos("Germplasm.tsv") ``` ## Problem 2: Design and create the database. * It should have two tables - one for each of the two data files. * The two tables should be linked in a 1:1 relationship * you may use either sqlMagic or pymysql to build the database ### Answer: ``` %sql show databases %sql CREATE DATABASE germplasm %sql SHOW databases %sql USE germplasm %sql CREATE TABLE if not exists germplasm(id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, locus VARCHAR(12) NOT NULL, germplasm VARCHAR(255) NOT NULL, phenotype TEXT NOT NULL, pubmed INT NOT NULL); %sql DESCRIBE germplasm %sql USE germplasm #%sql CREATE TABLE if not exists locusgene(id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, locus VARCHAR(9) NOT NULL, gen VARCHAR(8) NOT NULL, proteinlength INT NOT NULL); %sql DESCRIBE locusgene %sql SELECT germplasm.germplasm, germplasm.phenotype, germplasm.pubmed, locusgene.gen, locusgene.proteinlength, locusgene.locus \ FROM germplasm \ INNER JOIN locusgene ON germplasm.locus=locusgene.locus; # I think here I can use "locus" as a key because both tables have this column. ``` ## Problem 3: Fill the database Using pymysql, create a Python script that reads the data from these files, and fills the database. There are a variety of strategies to accomplish this. I will give all strategies equal credit - do whichever one you are most confident with. 
### Answer: ``` def rellenar_datos(nombre_fichero): with open (nombre_fichero) as fichero: next(fichero) for linea in fichero: x = linea.replace("\n", "").split("\t") a = x[0] b = x[1] c = x[2] d = int(x[3]) %sql INSERT INTO germplasm (locus, germplasm, phenotype, pubmed) VALUES (:a, :b, :c, :d); print(rellenar_datos("Germplasm.tsv")) def rellenar_datos2(nombre_fichero): with open (nombre_fichero) as fichero: next(fichero) for linea in fichero: x = linea.split("\t") a = x[0] b = x[1] c = int(x[2]) %sql INSERT INTO locusgene (locus, gen, proteinlength) VALUES (:a, :b, :c); print(rellenar_datos2("LocusGene.tsv")) ``` ## Problem 4: Create reports, written to a file 1. Create a report that shows the full, joined, content of the two database tables (including a header line) 2. Create a joined report that only includes the Genes SKOR and MAA3 3. Create a report that counts the number of entries for each Chromosome (AT1Gxxxxxx to AT5Gxxxxxxx) 4. Create a report that shows the average protein length for the genes on each Chromosome (AT1Gxxxxxx to AT5Gxxxxxxx) When creating reports 2 and 3, remember the "Don't Repeat Yourself" rule! All reports should be written to **the same file**. You may name the file anything you wish. ``` # Create a report that shows the full, joined, content of the two database tables (including a header line) %sql SELECT * FROM germplasm INNER JOIN locusgene ON \ germplasm.locus = locusgene.locus; # Create a joined report that only includes the Genes SKOR and MAA3 %sql SELECT * FROM germplasm INNER JOIN locusgene ON \ germplasm.locus = locusgene.locus WHERE locusgene.gen in ("SKOR", "MAA3"); #Create a report that counts the number of entries for each Chromosome (AT1Gxxxxxx to AT5Gxxxxxxx) for i in range(1, 6): x = "AT" + str(i) + "%" result = %sql select count(*) from germplasm inner join locusgene on germplasm.locus = locusgene.locus where germplasm.locus like :x; print("Cromosoma: ", i, result) #Create a report that shows the average proteinlength for the genes on each Chromosome (AT1Gxxxxxx to AT5Gxxxxxxx) %sql SELECT AVG(locusgene.proteinlength) FROM locusgene # this is the avarage of all chromosome for i in range(1, 6): x = "AT" + str(i) + "%" result = %sql select AVG(locusgene.proteinlength) from germplasm inner join locusgene on germplasm.locus = locusgene.locus where germplasm.locus like :x; print("Cromosoma: ", i, result) ``` <pre> </pre> ## Don't forget to commit and push your answers before you leave! It was wonderful to have you in my class! I hope to see you again soon! Good luck with your careers!! Mark
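One gap in the Problem 4 answer above: the reports are displayed in the notebook but never written to a single file, as the exam requires. A minimal sketch of one way to close that gap is shown below. It reuses the connection settings from the `%sql` line above (root/root on 127.0.0.1:3306, database `germplasm`); the output file name `exam_reports.txt` and the helper name `append_report` are arbitrary choices, and only the first two reports are shown.

```
import pymysql

# Assumed connection details, matching the %sql connection string used above.
conn = pymysql.connect(host='127.0.0.1', port=3306, user='root',
                       password='root', database='germplasm')

def append_report(cursor, title, query, outfile='exam_reports.txt'):
    """Run one query and append its title, header line, and rows to the shared report file."""
    cursor.execute(query)
    with open(outfile, 'a') as f:
        f.write(title + '\n')
        f.write('\t'.join(col[0] for col in cursor.description) + '\n')
        for row in cursor.fetchall():
            f.write('\t'.join(str(field) for field in row) + '\n')
        f.write('\n')

with conn.cursor() as cur:
    append_report(cur, 'Report 1: full join',
                  'SELECT * FROM germplasm INNER JOIN locusgene '
                  'ON germplasm.locus = locusgene.locus')
    append_report(cur, 'Report 2: SKOR and MAA3',
                  'SELECT * FROM germplasm INNER JOIN locusgene '
                  'ON germplasm.locus = locusgene.locus '
                  'WHERE locusgene.gen IN ("SKOR", "MAA3")')
conn.close()
```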
* Please set the environment variable `DB_URI` to point to the database
* Please set the environment variable `DATAYES_TOKEN` as the login credential for DataYes (通联数据)

```
%matplotlib inline
import os
from matplotlib import pyplot as plt
import uqer
import numpy as np
import pandas as pd
from uqer import DataAPI as api
from alphamind.api import *
from alphamind.data.neutralize import neutralize
plt.style.use('ggplot')

_ = uqer.Client(token=os.environ['DATAYES_TOKEN'])

ref_date = '2017-06-23'
factor = 'EPS'
engine = SqlEngine(os.environ['DB_URI'])
universe = Universe('zz800')
```

# Algorithm Description
--------------------------

Our guess at the formula ``neutralize`` uses to compute the residual $\bar{Res}$:

$$\bar{Res}_{i,k} = \bar{f}_{i,k} - \sum_j \beta_{j,k} \times \bar{Ex}_{i, j, k}$$

where $k$ is the industry classification, $i$ is the $i$-th stock within that industry, and $j$ is the $j$-th risk factor. $\bar{f}$ is the factor series and $\bar{Ex}$ is the risk-exposure matrix. The coefficients $\beta_{j,k}$ are determined by OLS.

In the sections below, we compare three ways of performing the ``neutralize`` step:

* **UQER Neutralize**: compute the factor residuals with the UQER (优矿) SDK.
* **Alpha-Mind Neutralize**: compute the factor residuals with alpha-mind, which can be installed from:

```
https://github.com/wegamekinglc/alpha-mind
```

* **Direct Weighted Least Square Fit Implementation**: compute the factor residuals directly with a weighted least-squares fit (the code below uses `statsmodels`).

# Raw Data
---------------------------

```
codes = engine.fetch_codes(ref_date, universe)
factor_data = engine.fetch_factor(ref_date, factor, codes)
risk_cov, risk_expousre = engine.fetch_risk_model(ref_date, codes)
total_data = pd.merge(factor_data, risk_expousre, on=['code']).dropna()
total_data['ticker'] = total_data.code.apply(lambda x: '{0:06}'.format(x))
total_data.set_index('ticker', inplace=True)
len(total_data)
```

# UQER Neutralize
-----------------------

```
%%timeit
neutralized_factor_uqer = uqer.neutralize(total_data[factor], target_date=ref_date.replace('-', ''), industry_type='short')
neutralized_factor_uqer = uqer.neutralize(total_data[factor], target_date=ref_date.replace('-', ''), industry_type='short').sort_index()
df = pd.DataFrame(neutralized_factor_uqer, columns=['uqer'])
df.head(10)
len(neutralized_factor_uqer)
risk_exposure_uqer = uqer.DataAPI.RMExposureDayGet(tradeDate=ref_date.replace('-', '')).set_index('ticker')
targeted_secs = risk_exposure_uqer.loc[neutralized_factor_uqer.index]
style_exposure = neutralized_factor_uqer.values @ targeted_secs[risk_styles].values
industry_exposure = neutralized_factor_uqer.values @ targeted_secs[industry_styles].values
exposure = pd.Series(np.concatenate([style_exposure, industry_exposure]), index=risk_styles+industry_styles)
exposure
```

# Alpha-Mind Neutralize
--------------------------

```
x = targeted_secs[risk_styles + industry_styles].values
y = total_data[factor].values
%%timeit
neutralized_factor_alphamind = neutralize(x, y, weights=np.ones(len(y)))
neutralized_factor_alphamind = neutralize(x, y, weights=np.ones(len(y)))
alphamind_series = pd.Series(neutralized_factor_alphamind.flatten(), index=total_data.index)
df['alpha-mind'] = alphamind_series
df.head()
len(alphamind_series)
style_exposure = targeted_secs[risk_styles].values.T @ neutralized_factor_alphamind
industry_exposure = targeted_secs[industry_styles].values.T @ neutralized_factor_alphamind
exposure = pd.Series(np.concatenate([style_exposure[:, 0], industry_exposure[:, 0]]), index=risk_styles+industry_styles)
exposure
```

# The Ticker Missing in UQER but Still in Alpha-Mind
-----------------------------------

```
missed_codes = [c for c in alphamind_series.index if c not in neutralized_factor_uqer.index]
total_data.loc[missed_codes]
```

# Direct Weighted Least Square Fit Implementation
------------------------

```
import statsmodels.api as sm

mod = sm.WLS(y, x, weights=np.ones(len(y))).fit()
lg_series = pd.Series(mod.resid, index=total_data.index)
df['ols'] = lg_series
```

# Comparison
------------------

```
df['uqer - ols'] = df['uqer'] - df['ols']
df['alphamind - ols'] = df['alpha-mind'] - df['ols']
df[['uqer - ols', 'alphamind - ols']].plot(figsize=(14, 7), ylim=(-1e-4, 1e-4))
df.head()
```
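To make the residual formula in the Algorithm Description section concrete, here is a tiny self-contained sketch of the same idea in plain NumPy: fit $\beta$ by least squares against an exposure matrix and keep the residual. The data are random stand-ins, and this is not how `uqer.neutralize` or alpha-mind are implemented internally; it only illustrates the computation the notebook is comparing.

```
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_factors = 800, 10
Ex = rng.normal(size=(n_stocks, n_factors))                             # risk-exposure matrix (stand-in data)
f = Ex @ rng.normal(size=n_factors) + 0.1 * rng.normal(size=n_stocks)   # raw factor values (stand-in data)

beta, *_ = np.linalg.lstsq(Ex, f, rcond=None)   # OLS coefficients beta_j
res = f - Ex @ beta                             # residual: the "neutralized" factor

print(np.abs(Ex.T @ res).max())  # the residual is (numerically) orthogonal to the exposures
```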
# Representing and manipulating real numbers

## Rational numbers

To compute an approximate value of a rational number in Python, we use the ```/``` operator. For example, ```1/3``` displays $0.333$.

Compute an approximate value of $\dfrac{1}{3}$, $\dfrac{2}{7}$, $\dfrac{5}{4}$, $\dfrac{37}{100}$.

```
# Shift + Enter to run
```

Double-click **here** to see the answer
<!--
1/3
0.3333333333333333
2/7
0.2857142857142857
5/4
1.25
37/100
0.37
-->

Of course, $\dfrac{5}{4}$ and $\dfrac{37}{100}$ are also decimal numbers.

## Real numbers

A **function** in Python can take one or several **arguments**, and their number can even be variable. For example, the function below plots one or more real numbers, as many as the user wishes. To indicate that the number of arguments is variable, the argument is preceded by a \*. (That argument is then of type **tuple**, i.e. a pair, a triple, etc.)

```
# Shift + Enter to run
import matplotlib.pyplot as plt # import the "matplotlib" plotting package under the name "plt"

def DroiteNumérique(*x):
    print(x)

DroiteNumérique(2,3,4)
```

Our final function is given as follows:

```
# Shift + Enter to run
import matplotlib.pyplot as plt # import the "matplotlib" plotting package under the name "plt"

def DroiteNumérique(*x):
    plt.plot(x, [0]*len(x), 'ro', label='chosen reals') # x-coordinates are the chosen reals, y-coordinates are zero, style = red dots, legend
    plt.plot(x, [0]*len(x), label='number line') # draw the line between the smallest and the largest value of x
    plt.title("Representing real numbers") # title
    plt.legend() # display the legend
    plt.show() # display the plot
```

For example, run:

```
# Shift + Enter to run
DroiteNumérique(-3,4,7)
```

Place the following reals on the number line: $-5$, $2/3$, $4$, $10$.

```
# Type your command then Shift + Enter to run
```

Double-click **here** to see the answer
<!--
DroiteNumérique(-5,2/3,4,10)
-->

## Intervals. Distance between real numbers

### Intervals

Consider the following function:

```
# Shift + Enter to run
def Test(x,a,b):
    if x>a and x<b:
        print(x, "belongs to the interval ", "]", a, ",", b, "[")
```

What happens if we run the following commands?

1. ```Test(2,1,3)```
2. ```Test(1,2,3)```

```
# Type your commands below then Shift + Enter to run
# Answer to question 1. :
# Answer to question 2. :
```

Double-click **here** to see the answer
<!--
1. it displays: 2 belongs to the interval ]1,3[
2. nothing is displayed
-->

Propose an algorithm that improves the previous one by displaying "*x does not belong to the interval \]a,b\[*" when the condition (x>a and x<b) is not satisfied. (You may use the ```else``` clause.)

```
# Type your algorithm below then Shift + Enter to run
```

Double-click **here** to see the answer
<!--
```
def Test(x,a,b):
    if x>a and x<b:
        print(x, "belongs to the interval ", "]", a, ",", b, "[")
    else:
        print(x, "does not belong to the interval ", "]", a, ",", b, "[")
```
-->

### Distance between two reals

The ```abs()``` function computes the **absolute value** of a real number, so ```abs(x-a)``` gives the distance $|x-a|$ between $x$ and $a$. For example, ```abs(3-5)``` displays 2.

Compute the distance between $1$ and $0$; $-10$ and $5$; $-13$ and $-4$.

```
# Type your commands below then Shift + Enter to run
abs(3-5)
```

Double-click **here** to see the answer
<!--
abs(1-0) gives 1
abs(-10-5) gives 15
abs(-13-(-4)) gives 17
-->

Below, we rebuild a function, which we will call ```distance```, that computes the distance between $x$ and $a$ and gives exactly the same results as ```abs(x-a)```. Test your commands below.

```
# Shift + Enter to run
def distance(x,a):
    if x<a:
        return a-x
    else:
        return x-a

# Type your commands below, reusing the reals used above, then Shift + Enter to run
```

### Bracketing by decimals

To find the largest integer $k$ smaller than the real number $x$ (called the **integer part of $x$**), we can use the following function:

```
# Shift + Enter to run
def PartieEntière(x):
    if x>=0:
        return int(x)
    else:
        return int(x)-1
```

Test the function on the reals $1.2$; $4.999$; $7$; $-1.01$ and $-1.9$.

```
# Type your commands below then Shift + Enter to run
# Remember to use "." rather than "," for decimals
```

Double-click **here** to see the answer
<!--
PartieEntière(1.2)
1
PartieEntière(4.999)
4
PartieEntière(7)
7
PartieEntière(-1.01)
-2
PartieEntière(-1.9)
-2
-->

To bracket a real number $x$ between two decimals, to within $10^{-n}$, we can use for example the function below:

```
# Shift + Enter to run
def encadrement(x,n):
    k=PartieEntière(x)
    while k/10**n<x:
        k=k+1
    print(x, " is between ", (k-1)/10**n, " and ", k/10**n)
    return ((k-1)/10**n, x, k/10**n)
```

Using this function, bracket $1.5354487$ between two decimals to within $10^{-2}$:

```
# Type your command then Shift + Enter to run
```

Double-click **here** to see the answer
<!--
```
encadrement(1.5354487,2) displays "1.5354487 is between 1.53 and 1.54"
```
-->

What is the type of the object returned by the ```encadrement``` function? (Use the ```type``` function.)

```
# Type your command then Shift + Enter
```

Double-click **here** to see the answer
<!--
type(encadrement(1.5354487,2))
it is a tuple
-->

Use the ```DroiteNumérique``` and ```encadrement``` functions judiciously to plot these three real numbers.

```
# Type your command then Shift + Enter to run
```

Double-click **here** to see the answer
<!--
```
DroiteNumérique(*encadrement(1.5354487,2))
```
-->
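As a quick cross-check (not part of the original exercise), Python's standard library already provides the same floor behaviour as ```PartieEntière```, so the hand-written version can be validated against `math.floor`:

```
import math

def PartieEntière(x):
    # same definition as above
    if x >= 0:
        return int(x)
    else:
        return int(x) - 1

# the floor of a negative non-integer rounds towards minus infinity in both cases
for value in (1.2, 4.999, 7, -1.01, -1.9):
    assert PartieEntière(value) == math.floor(value)
print("PartieEntière agrees with math.floor on the test values")
```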
```
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
```

# Loading the multiple datasets

```
data_path = './data'
file_list = os.listdir(data_path)
for file in file_list:
    print('*'*10, file, '*'*10)
    file_data = pd.read_csv(os.path.join(data_path, file))
    print('{} has {} rows and {} columns'.format(file, file_data.shape[0], file_data.shape[1]))
    print(file_data.head())
    del file_data # Avoid memory problems
```

We have $20216100$ entries in our train data vs $41697600$ entries in our test data. The *weather_train* data is paired with *train* through the variables *site\_id* and *timestamp*; likewise, those datasets are paired with *building_metadata* through *site_id*, *building_id*, and *year_built*.

# Study of building_metadata

```
file_path = './data/building_metadata.csv'
building_metadata = pd.read_csv(file_path)
building_metadata.head()
building_metadata.describe()
for column in building_metadata.columns:
    print('{} has {} different values and has {} Na values.'.format(column, len(building_metadata[column].unique()), np.sum(building_metadata[column].isna())))
```

As we suspected from the `describe` output, *year_built* and *floor_count* have 774 and 1094 NaN values respectively.

## site_id

```
building_metadata.site_id.value_counts()
```

## primary_use

```
building_metadata.primary_use.value_counts()
```

## square_feet

```
building_metadata.square_feet.hist(bins = 100)
plt.show()
```

This looks like an exponential distribution.

```
square_feet_log = np.log(building_metadata.square_feet)
square_feet_log.hist(bins = 100)
plt.show()
```

The log-transformed distribution looks much better.

## year_built

```
building_metadata.year_built.hist(bins = building_metadata.year_built.unique().shape[0])
plt.show()
```

#### NaNs in year_built

We need to impute the NaN values of the variable *year_built*, for example using the mean of each site, but first we need to know how many buildings in our dataset are missing the year (a sketch of one possible imputation appears at the end of this notebook).

```
building_metadata['year_na'] = building_metadata.year_built.isna()
building_metadata['ones'] = 1
building_metadata[['site_id', 'year_na', 'ones']].groupby('site_id').sum()
```

There are several sites where none of the buildings have a value in *year_built*.

## floor_count

```
building_metadata.floor_count.hist(bins = building_metadata.floor_count.unique().shape[0])
plt.show()
```

## square_feet per floor

```
floor_count = building_metadata.floor_count
square_feet = building_metadata.square_feet
square_feet_per_floor = square_feet/floor_count
square_feet_per_floor_log = np.log(square_feet_per_floor)
square_feet_per_floor.hist(bins = 100)
plt.show()
square_feet_per_floor_log.hist(bins = 100)
plt.show()
```

# Weather Train dataset study

# Train dataset study

```
file_path = './data/train.csv'
train = pd.read_csv(file_path)
train.head()
for i in train.meter.unique():
    data = np.log(train.loc[train.meter == i, 'meter_reading']+1)
    data.hist(bins = 100)
    plt.title(i)
    plt.show()
print('Approximately, {:.2f}% of the meter_reading values are 0'.format(100*np.sum(train.meter_reading == 0)/len(train.meter_reading)))
```

This is a problem, because those zeros are noise for our future model. We will need to remove them, since the goal is to predict how much energy each building uses (perhaps those zeros were measured while the buildings were closed).

# Joining train.csv and weather_train.csv

```
building_metadata.year_built.unique().shape[0]
```
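The note above about imputing `year_built` with the mean of each site, together with the observation that some sites have no `year_built` values at all, suggests a two-stage fill. A minimal sketch of one possible strategy (not necessarily the one the author settled on; the new column name `year_built_filled` is arbitrary):

```
# Impute year_built with the per-site mean, falling back to the overall mean
# for sites where every building is missing the year.
site_mean = building_metadata.groupby('site_id')['year_built'].transform('mean')
building_metadata['year_built_filled'] = (
    building_metadata['year_built']
    .fillna(site_mean)
    .fillna(building_metadata['year_built'].mean())
)
print(building_metadata['year_built_filled'].isna().sum())  # should print 0
```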
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com) https://github.com/rasbt/python-machine-learning-book [MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt) # Python Machine Learning - Code Examples # Chapter 10 - Predicting Continuous Target Variables with Regression Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s). ``` %load_ext watermark %watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn,seaborn ``` *The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* <br> <br> ### Overview - [Introducing a simple linear regression model](#Introducing-a-simple-linear-regression-model) - [Exploring the Housing Dataset](#Exploring-the-Housing-Dataset) - [Visualizing the important characteristics of a dataset](#Visualizing-the-important-characteristics-of-a-dataset) - [Implementing an ordinary least squares linear regression model](#Implementing-an-ordinary-least-squares-linear-regression-model) - [Solving regression for regression parameters with gradient descent](#Solving-regression-for-regression-parameters-with-gradient-descent) - [Estimating the coefficient of a regression model via scikit-learn](#Estimating-the-coefficient-of-a-regression-model-via-scikit-learn) - [Fitting a robust regression model using RANSAC](#Fitting-a-robust-regression-model-using-RANSAC) - [Evaluating the performance of linear regression models](#Evaluating-the-performance-of-linear-regression-models) - [Using regularized methods for regression](#Using-regularized-methods-for-regression) - [Turning a linear regression model into a curve - polynomial regression](#Turning-a-linear-regression-model-into-a-curve---polynomial-regression) - [Modeling nonlinear relationships in the Housing Dataset](#Modeling-nonlinear-relationships-in-the-Housing-Dataset) - [Dealing with nonlinear relationships using random forests](#Dealing-with-nonlinear-relationships-using-random-forests) - [Decision tree regression](#Decision-tree-regression) - [Random forest regression](#Random-forest-regression) - [Summary](#Summary) <br> <br> ``` from IPython.display import Image %matplotlib inline ``` # Introducing a simple linear regression model ``` Image(filename='./images/10_01.png', width=500) ``` <br> <br> # Exploring the Housing dataset Source: [https://archive.ics.uci.edu/ml/datasets/Housing](https://archive.ics.uci.edu/ml/datasets/Housing) Attributes: <pre> 1. CRIM per capita crime rate by town 2. ZN proportion of residential land zoned for lots over 25,000 sq.ft. 3. INDUS proportion of non-retail business acres per town 4. CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) 5. NOX nitric oxides concentration (parts per 10 million) 6. RM average number of rooms per dwelling 7. AGE proportion of owner-occupied units built prior to 1940 8. DIS weighted distances to five Boston employment centres 9. RAD index of accessibility to radial highways 10. TAX full-value property-tax rate per $10,000 11. PTRATIO pupil-teacher ratio by town 12. B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town 13. LSTAT % lower status of the population 14. 
MEDV Median value of owner-occupied homes in $1000's </pre> ``` import pandas as pd df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/' 'housing/housing.data', header=None, sep='\s+') df.columns = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV'] df.head() ``` <hr> ### Note: If the link to the Housing dataset provided above does not work for you, you can find a local copy in this repository at [./../datasets/housing/housing.data](./../datasets/housing/housing.data). Or you could fetch it via ``` df = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/housing/housing.data', header=None, sep='\s+') df.columns = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV'] df.head() ``` <br> <br> ## Visualizing the important characteristics of a dataset ``` import matplotlib.pyplot as plt import seaborn as sns sns.set(style='whitegrid', context='notebook') cols = ['LSTAT', 'INDUS', 'NOX', 'RM', 'MEDV'] sns.pairplot(df[cols], size=2.5) plt.tight_layout() # plt.savefig('./figures/scatter.png', dpi=300) plt.show() import numpy as np cm = np.corrcoef(df[cols].values.T) sns.set(font_scale=1.5) hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 15}, yticklabels=cols, xticklabels=cols) # plt.tight_layout() # plt.savefig('./figures/corr_mat.png', dpi=300) plt.show() sns.reset_orig() %matplotlib inline ``` <br> <br> # Implementing an ordinary least squares linear regression model ... ## Solving regression for regression parameters with gradient descent ``` class LinearRegressionGD(object): def __init__(self, eta=0.001, n_iter=20): self.eta = eta self.n_iter = n_iter def fit(self, X, y): self.w_ = np.zeros(1 + X.shape[1]) self.cost_ = [] for i in range(self.n_iter): output = self.net_input(X) errors = (y - output) self.w_[1:] += self.eta * X.T.dot(errors) self.w_[0] += self.eta * errors.sum() cost = (errors**2).sum() / 2.0 self.cost_.append(cost) return self def net_input(self, X): return np.dot(X, self.w_[1:]) + self.w_[0] def predict(self, X): return self.net_input(X) X = df[['RM']].values y = df['MEDV'].values from sklearn.preprocessing import StandardScaler sc_x = StandardScaler() sc_y = StandardScaler() X_std = sc_x.fit_transform(X) y_std = sc_y.fit_transform(y) lr = LinearRegressionGD() lr.fit(X_std, y_std) plt.plot(range(1, lr.n_iter+1), lr.cost_) plt.ylabel('SSE') plt.xlabel('Epoch') plt.tight_layout() # plt.savefig('./figures/cost.png', dpi=300) plt.show() def lin_regplot(X, y, model): plt.scatter(X, y, c='lightblue') plt.plot(X, model.predict(X), color='red', linewidth=2) return lin_regplot(X_std, y_std, lr) plt.xlabel('Average number of rooms [RM] (standardized)') plt.ylabel('Price in $1000\'s [MEDV] (standardized)') plt.tight_layout() # plt.savefig('./figures/gradient_fit.png', dpi=300) plt.show() print('Slope: %.3f' % lr.w_[1]) print('Intercept: %.3f' % lr.w_[0]) num_rooms_std = sc_x.transform([5.0]) price_std = lr.predict(num_rooms_std) print("Price in $1000's: %.3f" % sc_y.inverse_transform(price_std)) ``` <br> <br> ## Estimating the coefficient of a regression model via scikit-learn ``` from sklearn.linear_model import LinearRegression slr = LinearRegression() slr.fit(X, y) y_pred = slr.predict(X) print('Slope: %.3f' % slr.coef_[0]) print('Intercept: %.3f' % slr.intercept_) lin_regplot(X, y, slr) plt.xlabel('Average number of rooms [RM]') plt.ylabel('Price in $1000\'s [MEDV]') 
plt.tight_layout() # plt.savefig('./figures/scikit_lr_fit.png', dpi=300) plt.show() ``` **Normal Equations** alternative: ``` # adding a column vector of "ones" Xb = np.hstack((np.ones((X.shape[0], 1)), X)) w = np.zeros(X.shape[1]) z = np.linalg.inv(np.dot(Xb.T, Xb)) w = np.dot(z, np.dot(Xb.T, y)) print('Slope: %.3f' % w[1]) print('Intercept: %.3f' % w[0]) ``` <br> <br> # Fitting a robust regression model using RANSAC ``` from sklearn.linear_model import RANSACRegressor ransac = RANSACRegressor(LinearRegression(), max_trials=100, min_samples=50, residual_metric=lambda x: np.sum(np.abs(x), axis=1), residual_threshold=5.0, random_state=0) ransac.fit(X, y) inlier_mask = ransac.inlier_mask_ outlier_mask = np.logical_not(inlier_mask) line_X = np.arange(3, 10, 1) line_y_ransac = ransac.predict(line_X[:, np.newaxis]) plt.scatter(X[inlier_mask], y[inlier_mask], c='blue', marker='o', label='Inliers') plt.scatter(X[outlier_mask], y[outlier_mask], c='lightgreen', marker='s', label='Outliers') plt.plot(line_X, line_y_ransac, color='red') plt.xlabel('Average number of rooms [RM]') plt.ylabel('Price in $1000\'s [MEDV]') plt.legend(loc='upper left') plt.tight_layout() # plt.savefig('./figures/ransac_fit.png', dpi=300) plt.show() print('Slope: %.3f' % ransac.estimator_.coef_[0]) print('Intercept: %.3f' % ransac.estimator_.intercept_) ``` <br> <br> # Evaluating the performance of linear regression models ``` from sklearn.cross_validation import train_test_split X = df.iloc[:, :-1].values y = df['MEDV'].values X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3, random_state=0) slr = LinearRegression() slr.fit(X_train, y_train) y_train_pred = slr.predict(X_train) y_test_pred = slr.predict(X_test) plt.scatter(y_train_pred, y_train_pred - y_train, c='blue', marker='o', label='Training data') plt.scatter(y_test_pred, y_test_pred - y_test, c='lightgreen', marker='s', label='Test data') plt.xlabel('Predicted values') plt.ylabel('Residuals') plt.legend(loc='upper left') plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red') plt.xlim([-10, 50]) plt.tight_layout() # plt.savefig('./figures/slr_residuals.png', dpi=300) plt.show() from sklearn.metrics import r2_score from sklearn.metrics import mean_squared_error print('MSE train: %.3f, test: %.3f' % ( mean_squared_error(y_train, y_train_pred), mean_squared_error(y_test, y_test_pred))) print('R^2 train: %.3f, test: %.3f' % ( r2_score(y_train, y_train_pred), r2_score(y_test, y_test_pred))) ``` <br> <br> # Using regularized methods for regression ``` from sklearn.linear_model import Lasso lasso = Lasso(alpha=0.1) lasso.fit(X_train, y_train) y_train_pred = lasso.predict(X_train) y_test_pred = lasso.predict(X_test) print(lasso.coef_) print('MSE train: %.3f, test: %.3f' % ( mean_squared_error(y_train, y_train_pred), mean_squared_error(y_test, y_test_pred))) print('R^2 train: %.3f, test: %.3f' % ( r2_score(y_train, y_train_pred), r2_score(y_test, y_test_pred))) ``` <br> <br> # Turning a linear regression model into a curve - polynomial regression ``` X = np.array([258.0, 270.0, 294.0, 320.0, 342.0, 368.0, 396.0, 446.0, 480.0, 586.0])[:, np.newaxis] y = np.array([236.4, 234.4, 252.8, 298.6, 314.2, 342.2, 360.8, 368.0, 391.2, 390.8]) from sklearn.preprocessing import PolynomialFeatures lr = LinearRegression() pr = LinearRegression() quadratic = PolynomialFeatures(degree=2) X_quad = quadratic.fit_transform(X) # fit linear features lr.fit(X, y) X_fit = np.arange(250, 600, 10)[:, np.newaxis] y_lin_fit = lr.predict(X_fit) # fit quadratic features 
pr.fit(X_quad, y) y_quad_fit = pr.predict(quadratic.fit_transform(X_fit)) # plot results plt.scatter(X, y, label='training points') plt.plot(X_fit, y_lin_fit, label='linear fit', linestyle='--') plt.plot(X_fit, y_quad_fit, label='quadratic fit') plt.legend(loc='upper left') plt.tight_layout() # plt.savefig('./figures/poly_example.png', dpi=300) plt.show() y_lin_pred = lr.predict(X) y_quad_pred = pr.predict(X_quad) print('Training MSE linear: %.3f, quadratic: %.3f' % ( mean_squared_error(y, y_lin_pred), mean_squared_error(y, y_quad_pred))) print('Training R^2 linear: %.3f, quadratic: %.3f' % ( r2_score(y, y_lin_pred), r2_score(y, y_quad_pred))) ``` <br> <br> ## Modeling nonlinear relationships in the Housing Dataset ``` X = df[['LSTAT']].values y = df['MEDV'].values regr = LinearRegression() # create quadratic features quadratic = PolynomialFeatures(degree=2) cubic = PolynomialFeatures(degree=3) X_quad = quadratic.fit_transform(X) X_cubic = cubic.fit_transform(X) # fit features X_fit = np.arange(X.min(), X.max(), 1)[:, np.newaxis] regr = regr.fit(X, y) y_lin_fit = regr.predict(X_fit) linear_r2 = r2_score(y, regr.predict(X)) regr = regr.fit(X_quad, y) y_quad_fit = regr.predict(quadratic.fit_transform(X_fit)) quadratic_r2 = r2_score(y, regr.predict(X_quad)) regr = regr.fit(X_cubic, y) y_cubic_fit = regr.predict(cubic.fit_transform(X_fit)) cubic_r2 = r2_score(y, regr.predict(X_cubic)) # plot results plt.scatter(X, y, label='training points', color='lightgray') plt.plot(X_fit, y_lin_fit, label='linear (d=1), $R^2=%.2f$' % linear_r2, color='blue', lw=2, linestyle=':') plt.plot(X_fit, y_quad_fit, label='quadratic (d=2), $R^2=%.2f$' % quadratic_r2, color='red', lw=2, linestyle='-') plt.plot(X_fit, y_cubic_fit, label='cubic (d=3), $R^2=%.2f$' % cubic_r2, color='green', lw=2, linestyle='--') plt.xlabel('% lower status of the population [LSTAT]') plt.ylabel('Price in $1000\'s [MEDV]') plt.legend(loc='upper right') plt.tight_layout() # plt.savefig('./figures/polyhouse_example.png', dpi=300) plt.show() ``` Transforming the dataset: ``` X = df[['LSTAT']].values y = df['MEDV'].values # transform features X_log = np.log(X) y_sqrt = np.sqrt(y) # fit features X_fit = np.arange(X_log.min()-1, X_log.max()+1, 1)[:, np.newaxis] regr = regr.fit(X_log, y_sqrt) y_lin_fit = regr.predict(X_fit) linear_r2 = r2_score(y_sqrt, regr.predict(X_log)) # plot results plt.scatter(X_log, y_sqrt, label='training points', color='lightgray') plt.plot(X_fit, y_lin_fit, label='linear (d=1), $R^2=%.2f$' % linear_r2, color='blue', lw=2) plt.xlabel('log(% lower status of the population [LSTAT])') plt.ylabel('$\sqrt{Price \; in \; \$1000\'s [MEDV]}$') plt.legend(loc='lower left') plt.tight_layout() # plt.savefig('./figures/transform_example.png', dpi=300) plt.show() ``` <br> <br> # Dealing with nonlinear relationships using random forests ... 
## Decision tree regression ``` from sklearn.tree import DecisionTreeRegressor X = df[['LSTAT']].values y = df['MEDV'].values tree = DecisionTreeRegressor(max_depth=3) tree.fit(X, y) sort_idx = X.flatten().argsort() lin_regplot(X[sort_idx], y[sort_idx], tree) plt.xlabel('% lower status of the population [LSTAT]') plt.ylabel('Price in $1000\'s [MEDV]') # plt.savefig('./figures/tree_regression.png', dpi=300) plt.show() ``` <br> <br> ## Random forest regression ``` X = df.iloc[:, :-1].values y = df['MEDV'].values X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.4, random_state=1) from sklearn.ensemble import RandomForestRegressor forest = RandomForestRegressor(n_estimators=1000, criterion='mse', random_state=1, n_jobs=-1) forest.fit(X_train, y_train) y_train_pred = forest.predict(X_train) y_test_pred = forest.predict(X_test) print('MSE train: %.3f, test: %.3f' % ( mean_squared_error(y_train, y_train_pred), mean_squared_error(y_test, y_test_pred))) print('R^2 train: %.3f, test: %.3f' % ( r2_score(y_train, y_train_pred), r2_score(y_test, y_test_pred))) plt.scatter(y_train_pred, y_train_pred - y_train, c='black', marker='o', s=35, alpha=0.5, label='Training data') plt.scatter(y_test_pred, y_test_pred - y_test, c='lightgreen', marker='s', s=35, alpha=0.7, label='Test data') plt.xlabel('Predicted values') plt.ylabel('Residuals') plt.legend(loc='upper left') plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red') plt.xlim([-10, 50]) plt.tight_layout() # plt.savefig('./figures/slr_residuals.png', dpi=300) plt.show() ``` <br> <br> # Summary ...
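A hedged compatibility note: this chapter targets the scikit-learn release current at the book's publication. On newer scikit-learn versions, `sklearn.cross_validation` has moved to `sklearn.model_selection`, `StandardScaler.fit_transform` requires a 2D array, and `RANSACRegressor` no longer accepts `residual_metric`. If you hit those errors, a sketch of the adjustments for the standardization and splitting steps (assuming `df` is the Housing DataFrame loaded earlier) looks like this:

```
# Possible adjustments for newer scikit-learn versions; not needed for the
# release this book was written against.
import numpy as np
from sklearn.model_selection import train_test_split  # replaces sklearn.cross_validation
from sklearn.preprocessing import StandardScaler

X = df[['RM']].values
y = df['MEDV'].values

sc_x = StandardScaler()
sc_y = StandardScaler()
X_std = sc_x.fit_transform(X)
# Newer StandardScaler releases require 2D input, so reshape y before scaling
# and flatten the result afterwards.
y_std = sc_y.fit_transform(y[:, np.newaxis]).flatten()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
```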
# 単回帰分析と重回帰分析 本章では、基礎的な機械学習手法として代表的な**単回帰分析**と**重回帰分析**の仕組みを、数式を用いて説明します。 また次章では、本章で紹介した数式を Python によるプログラミングで実装する例も紹介します。本章と次章を通じて、数学とプログラミングの結びつきを体験して理解することができます。 本チュートリアルの主題であるディープラーニングの前に、単回帰分析と重回帰分析を紹介することには 2 つの理由があります。 1 つ目は、単回帰分析と重回帰分析の数学がニューラルネットワーク含めたディープラーニングの数学の基礎となるためです。 2 つ目は、単回帰分析のアルゴリズムを通して微分、重回帰分析のアルゴリズムを通して線形代数に関する理解を深めることができるためです。 機械学習手法は、**教師あり学習 (supervised learning)** 、**教師なし学習 (unsupervised learning)** 、**強化学習 (reinforcement learning)** に大別され、単回帰分析は教師あり学習に含まれます。 本チュートリアルで扱う多くの手法が教師あり学習です。 教師あり学習の中でも典型的な問題設定は 2 つに大別されます。 与えられた入力変数から、$10$ や $0.1$ といった実数値を予測する **回帰 (regression)** と、「赤ワイン」、「白ワイン」といったカテゴリを予測する**分類 (classification)** の 2 つです。 単回帰分析は回帰を行うための手法であり、1 つの入力変数から 1 つの出力変数を予測します。 それに対し、重回帰分析は、複数の入力変数から 1 つの出力変数を予測します。 この両手法は教師あり学習であるため、訓練の際には、入力変数 $x$ と目的変数 $t$ がペアで準備されている必要があります。 回帰分析を行うアルゴリズムでは、以下の 3 ステップを順番に考えていきます。 - Step 1 : モデルを決める - Step 2 : 目的関数を決める - Step 3 : 最適なパラメータを求める ## 単回帰分析 ### 問題設定(単回帰分析) 単回帰分析では、 1 つの入力変数から 1 つの出力変数を予測します。 今回は身近な例として、部屋の広さ $x$ から家賃 $y$ を予測する問題を考えてみます。 ### Step 1:モデルを決める(単回帰分析) まずはじめに、入力変数 $x$ と出力変数 $y$ との関係をどのように定式化するかを決定します。 この定式化したものを **モデル** もしくは **数理モデル** と呼びます。 単回帰分析におけるモデルを具体的に考えていきましょう。 例えば、家賃と部屋の広さの組で表されるデータを 3 つ集め、「家賃」を $y$ 軸に、「部屋の広さ」を $x$ 軸にとってそれらをプロットしたとき、次のようになっていたとします。 ![部屋の広さと家賃](images/04/04_01.png) この場合、部屋が広くなるほど、家賃が高くなるという関係が予想されます。 また、この 2 変数間の関係性は直線によって表現を行うことができそうだと考えられます。 そこで、2 つのパラメータ $w$ と $b$ によって特徴づけられる直線の方程式 $$ y = wx + b $$ によって、部屋の広さと家賃の関係を表すことを考えます。 ここで、$w$ は **重み (weight)** 、$b$ は **バイアス (bias)** の頭文字を採用しています。 ![部屋の広さと家賃の関係](images/04/04_02.png) 単回帰分析では、このようにモデルとして直線 $y = wx + b$ を用います。 そして、2 つのパラメータ $w$ と $b$ を、直線がデータによくフィットするように調整します。 パラメータで特徴づけられたモデルを用いる場合、与えられた **データセット** に適合するように最適なパラメータを求めることが目標となります。 今回はデータセットとして部屋の広さ $x$ と家賃 $t$ の組からなるデータの集合を用います。 全部で $N$ 個のデータがあり、$n$ 番目のデータが $(x_n, t_n)$ と表されるとき、データセットは $$ \begin{align} \mathcal{D} &= \{(x_1, t_1), (x_2, t_2), \dots, (x_N, t_N)\} \\ &= \{(x_n, t_n)\}_{n=1}^{N} \end{align} $$ と表すことができます([注釈1](#note1))。 これを用いて、新しい $x$ を入力すると、それに対応する $t$ を予測するモデルを訓練します。 ### 前処理 次のステップに進む前に、データの**前処理 (preprocessing)** をひとつ紹介します。 データの **中心化 (centering)** は、データの平均値が 0 になるように全てのデータを平行移動する処理を言います。 下図は、データ集合 $(x_n, y_n) \ (n=1,\dots,11)$ を、平均が $(0, 0)$ になるように平行移動する例です。 ![データの中心化](images/04/04_03.png) 中心化によるデータ前処理の利点の一つとして、調整すべきパラメータを削減できることが挙げられます。中心化処理を行うことで切片を考慮する必要がなくなるため、データ間の関係性を表現する直線の方程式を、 $y_c = wx_c$ のように、簡潔に表現可能となります。 ![中心化によるパラメータ削減](images/04/04_04.png) データセット内の入力変数と目的変数の平均をそれぞれ $\bar{x}$, $\bar{t}$ としたとき、中心化後の入力変数と目的変数は、 $$ \begin{aligned} x_{c} &= x - \bar{x} \\ t_{c} &= t - \bar{t} \end{aligned} $$ となります。 以降は記述を簡単にするため、$_c$ という添え字を省略し、事前に中心化を行っている前提でデータを扱います。 また、そのデータにフィットさせたいモデルは、 $$ y = wx $$ と、こちらも添え字 $_c$ を省略して説明を行います。 ### Step 2:目的関数を決める(単回帰分析) ここでの目標は、部屋の広さと家賃の関係を直線の方程式によってモデル化することです。 このために、予め収集されたいくつかのデータセットを使って、モデルが部屋の広さの値から予測した家賃(予測値)と、その部屋の広さに対応する実際の家賃(目標値)の差が小さくなるように、モデルのパラメータを決定します。 今回は、目的関数として [こちらの章](https://tutorials.chainer.org/ja/src/03_Basic_Math_for_Machine_Learning_ja.html#note2) ですでに紹介した予測値と目標値との**二乗和誤差 (sum-of-squares error)** を用います。 二乗和誤差が $0$ のとき、またその時のみ予測値 $y$ は目標値 $t$ と完全に一致($t = y$)しています。 $n$ 個目のデータの部屋の広さ $x_n$ が与えられたときのモデルの予測値 $y_n$ と、対応する目標値 $t_n$ との間の二乗誤差は、 $$ (t_{n} - y_{n})^{2} $$ となります。これを全データに渡って合計したものが以下の二乗和誤差です。 $$ \begin{aligned} L &= (t_1 - y_1)^2 + (t_2 - y_2)^2 + \cdots + (t_N - y_N)^2 \\ &= \sum_{n=1}^N (t_n - y_n )^2 \\ \end{aligned} $$ 今回用いるモデルは $$ y_{n} = wx_{n} $$ であるため、目的関数は $$ L = \sum_{n=1}^N (t_n - wx_n)^2 $$ と書くこともできます。 ### Step 3:最適なパラメータを求める(単回帰分析) この目的関数を最小化するようなパラメータを求めます。 ここで、目的関数は差の二乗和であり、常に正の値または 
### Step 3: Find the optimal parameters (simple regression)

We now find the parameter that minimizes this objective function. The objective is a sum of squared differences, so it only takes values greater than or equal to $0$ and, as a function of the parameter, is a downward-convex quadratic. (In general, even with the optimal parameters the model usually cannot represent all of the data perfectly, so the value of the objective does not reach $0$.)

Differentiation is useful for finding the point where the objective function is smallest. Differentiation gives the slope of the tangent to a function; for a convex (downward-convex) function, the minimum is attained at the point where this slope is $0$ (for a concave function, the maximum). Since our objective function is a quadratic in the weight $w$, its value is minimized where the slope of the tangent with respect to $w$ is $0$, as in the figure below.

![The parameter that minimizes the objective function](images/04/04_05.png)

Let us now differentiate the objective function $L$ with respect to the parameter $w$. Basic rules and properties of differentiation were introduced in [this chapter](https://tutorials.chainer.org/ja/src/04_Basics_of_Differential_ja.html).

$$
\frac{\partial}{\partial w} L = \frac{\partial}{\partial w} \sum_{n=1}^N (t_n - wx_n)^2
$$

By the **linearity** of differentiation, the derivative of a sum is the sum of the derivatives, so this can be rewritten as

$$
\frac{\partial}{\partial w} L = \sum_{n=1}^N \frac{\partial}{\partial w} (t_n - wx_n)^2
$$

Focusing on each term inside the summation,

$$
\frac{\partial}{\partial w} (t_n - wx_n)^2
$$

is a **composite** of the functions $t_n - wx_n$ and $(\cdot)^2$. Setting $u_n = t_n - wx_n$ and $f(u_n) = u_n^2$ and applying the chain rule,

$$
\begin{aligned}
\frac{\partial}{\partial w}(t_n - wx_n)^2
&= \frac{\partial}{\partial w} f(u_n) \\
&= \frac{\partial u_n}{\partial w}\frac{\partial f(u_n)}{\partial u_n} \\
&= -x_n (2 u_n) \\
&= -2x_n(t_n - wx_n)
\end{aligned}
$$

Substituting this back into the expression for $\partial L / \partial w$,

$$
\begin{aligned}
\frac{\partial}{\partial w} L
&= \sum_{n=1}^N \frac{\partial}{\partial w} (t_n - wx_n)^2 \\
&= -\sum_{n=1}^N 2x_n(t_n - wx_n)
\end{aligned}
$$

The value of $w$ at which this derivative equals $0$ is the parameter that minimizes the objective. Setting $\frac{\partial}{\partial w} L = 0$ and solving for $w$:

$$
\begin{aligned}
\frac{\partial}{\partial w} L &= 0 \\
-2 \sum_{n=1}^N x_n (t_n - wx_n) &= 0 \\
-2 \sum_{n=1}^N x_n t_n + 2 \sum_{n=1}^N wx^2_n &= 0 \\
-2 \sum_{n=1}^N x_n t_n + 2 w \sum_{n=1}^N x^2_n &= 0 \\
w \sum_{n=1}^N x^2_n &= \sum_{n=1}^N x_n t_n \\
\end{aligned}
$$

Therefore,

$$
w = \frac{\sum_{n=1}^N x_n t_n}{\sum_{n=1}^N x^2_n}
$$

We call this the optimal $w$. Note that this value is determined solely by the given dataset $\mathcal{D} = \{(x_n, t_n)\}_{n=1}^{N}$.

### Numerical example

Let us compute the parameter $w$ with the numbers from the example. First, to center the data, we compute the means:

$$
\begin{aligned}
\bar{x} &= \frac{1}{3} (20 + 40 + 60) = 40 \\
\bar{t} &= \frac{1}{3} (60000 + 115000 + 155000) = 110000
\end{aligned}
$$

Centering every variable with these means gives

$$
\begin{aligned}
x_{1} &= 20 - 40 = -20 \\
x_{2} &= 40 - 40 = 0 \\
x_{3} &= 60 - 40 = 20 \\
t_{1} &= 60000 - 110000 = -50000 \\
t_{2} &= 115000 - 110000 = 5000 \\
t_{3} &= 155000 - 110000 = 45000
\end{aligned}
$$

Using these centered values, the optimal parameter $w$ is

$$
\begin{aligned}
w
&= \frac{\sum_{n=1}^N x_n t_n}{\sum_{n=1}^N x_n^2} \\
&= \frac{x_1 t_1 + x_2 t_2 + x_3 t_3}{x_1^2 + x_2^2 + x_3^2} \\
&= \frac{-20 \times (-50000) + 0 \times 5000 + 20 \times 45000}{(-20)^2 + 0^2 + 20^2} \\
&= 2375
\end{aligned}
$$

So the rent rises by 2,375 yen for every additional 1 m$^{2}$ of floor area. The figure below plots the line $y = 2375 x$ determined by this $w$ together with the three training points.

![Predicted and target rents (after centering)](images/04/04_08.png)

The $y$ value of a point on this line is the prediction of the trained model for the corresponding $x$. Note that the $x$ axis takes negative values here because the data have been centered.

Let us use the trained model to predict the rent for a new sample $x_{q}$, for example a room of 50 m$^2$:

$$
\begin{aligned}
y_c &= wx_c \\
y_q - \bar{t} &= w(x_q - \bar{x}) \\
\Rightarrow y_q &= w(x_q - \bar{x}) + \bar{t} \\
&= 2375 (50 - 40) + 110000 \\
&= 133750
\end{aligned}
$$

Thus the predicted rent for a 50 m$^{2}$ room is 133,750 yen. As shown above, the centering applied when fitting the model must be undone when making predictions.
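The numerical example above can be reproduced in a few lines. This is only an illustrative sketch of the formulas just derived (NumPy is my assumption here; the tutorial's own implementation follows in the next chapter):

```
import numpy as np

# Data from the numerical example: floor area (m^2) and rent (yen).
x = np.array([20.0, 40.0, 60.0])
t = np.array([60000.0, 115000.0, 155000.0])

# Centering.
x_c = x - x.mean()
t_c = t - t.mean()

# Optimal weight of the centered model y = w * x.
w = np.sum(x_c * t_c) / np.sum(x_c ** 2)
print(w)  # 2375.0

# Prediction for a new room of 50 m^2 (undo the centering).
x_q = 50.0
y_q = w * (x_q - x.mean()) + t.mean()
print(y_q)  # 133750.0
```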
## Multiple linear regression

### Problem setting (multiple regression)

As with simple regression, we use rent prediction as the running example. Unlike the simple case, we now take into account not only the floor area but also input variables such as the distance to the nearest station and the local crime rate. Writing the floor area as $x_{1}$, the distance to the station as $x_{2}$, …, and the crime rate as $x_{M}$, we consider handling $M$ input variables.

### Step 1: Choose a model (multiple regression)

Simple regression used the equation of a line,

$$
y = wx + b
$$

as its model. Multiple regression defines a model of a similar form,

$$
y = w_{1}x_{1} + w_{2}x_{2} + \cdots + w_{M}x_{M} + b
$$

In simple regression we worked in a two-dimensional plane and looked for the line that best fit the data lying in that plane; now we look for the linear function (a hyperplane) that best fits data lying in an $M$-dimensional input space. Written with the summation symbol, the multiple regression model is

$$
y = \sum_{m=1}^{M} w_{m} x_{m} + b
$$

Let us reconsider how to handle the bias $b$. In simple regression we removed it by centering the data as a preprocessing step. In multiple regression there are $M$ weights $w_{1}, w_{2}, \dots, w_{M}$ and one bias $b$, i.e. $M + 1$ parameters in total, and we want a tidy formulation that covers all of them. Setting $x_0 = 1$ and $w_0 = b$, we can absorb $b$ into the summation:

$$
\begin{aligned}
y
&= w_{1}x_{1} + w_{2}x_{2} + \cdots + w_{M}x_{M} + b \\
&= w_{1}x_{1} + w_{2}x_{2} + \cdots + w_{M}x_{M} + w_{0}x_{0} \\
&= w_{0}x_{0} + w_{1}x_{1} + \cdots + w_{M}x_{M} \\
&= \sum_{m=0}^M w_{m} x_{m}
\end{aligned}
$$

(Note that the summation now starts at $m=0$ rather than $m=1$.)

Using what we learned in linear algebra, we can tidy the expression further. Rewriting it as an inner product of vectors,

$$
\begin{aligned}
y
&= w_{0}x_{0} + w_{1}x_{1} + \cdots + w_{M}x_{M} \\
&=
\begin{bmatrix}
w_{0} & w_{1} & \cdots & w_{M}
\end{bmatrix}
\begin{bmatrix}
x_{0} \\ x_{1} \\ \vdots \\ x_{M}
\end{bmatrix} \\
&= {\bf w}^{\rm T}{\bf x}
\end{aligned}
$$

gives a very compact form. As mentioned above, this model has $M + 1$ parameters, collected in the $(M + 1)$-dimensional vector ${\bf w}$. Multiple regression finds the optimal value of every element of ${\bf w}$.

### Step 2: Choose an objective function (multiple regression)

Compared with simple regression the number of input variables has grown, but the target is still the rent, so we use the same objective function:

$$
L = (t_1 - y_1)^2 + (t_2 - y_2)^2 + \cdots + (t_N - y_N)^2
$$

Rewritten with an inner product of vectors, this objective becomes

$$
\begin{aligned}
L
&= (t_1 - y_1)^2 + (t_2 - y_2)^2 + \cdots + (t_N - y_N)^2 \\
&=
\begin{bmatrix}
t_1 - y_1 & t_2 - y_2 & \cdots & t_N - y_N
\end{bmatrix}
\begin{bmatrix}
t_1 - y_1 \\ t_2 - y_2 \\ \vdots \\ t_N - y_N
\end{bmatrix} \\
&= ({\bf t} - {\bf y})^{\rm T}({\bf t} - {\bf y})
\end{aligned}
$$

Because the inner product is commutative, ${\bf w}^{\rm T}{\bf x}$ can also be written ${\bf x}^{\rm T}{\bf w}$. Using this, the model equation $y = {\bf w}^{\rm T}{\bf x}$, applied to all $N$ data points, becomes

$$
\begin{aligned}
{\bf y} =
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix} =
\begin{bmatrix} {\bf x}_1^{\rm T}{\bf w} \\ {\bf x}_2^{\rm T}{\bf w} \\ \vdots \\ {\bf x}_N^{\rm T}{\bf w} \end{bmatrix} =
\begin{bmatrix} {\bf x}_1^{\rm T} \\ {\bf x}_2^{\rm T} \\ \vdots \\ {\bf x}_N^{\rm T} \end{bmatrix}
{\bf w}
\end{aligned}
$$

Expanding ${\bf x}_n^{\rm T} = \bigl[ x_{n0},\ x_{n1},\ x_{n2},\ \dots,\ x_{nM} \bigr]$ $(n = 1, \dots, N)$ gives

$$
\begin{aligned}
{\bf y} &=
\begin{bmatrix}
x_{10} & x_{11} & x_{12} & \cdots & x_{1M} \\
x_{20} & x_{21} & x_{22} & \cdots & x_{2M} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{N0} & x_{N1} & x_{N2} & \cdots & x_{NM}
\end{bmatrix}
\begin{bmatrix}
w_{0} \\ w_{1} \\ w_{2} \\ \vdots \\ w_{M}
\end{bmatrix} \\
&= {\bf X}{\bf w}
\end{aligned}
$$

Here the $N \times (M + 1)$ matrix ${\bf X}$ has one data point per row and one input variable per column. Such a matrix is called a **design matrix**. Each column corresponds to one kind of input variable, for example the floor area or the distance to the station.

To show how each row represents a data point, here is a concrete example. If the three input variables are floor area $= 50{\rm m}^2$, distance to the station $= 600 {\rm m}$, and crime rate $= 2\%$, then $M = 3$, and if this is the $n$-th data point, ${\bf x}_n^{\rm T}$ is

$$
{\bf x}_n^{\rm T} =
\begin{bmatrix}
1 & 50 & 600 & 0.02
\end{bmatrix}
$$

The leading $1$ is there because we set $x_0 = 1$ in Step 1.
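Building the design matrix is just a matter of prepending a constant column. A minimal sketch (my addition; the first row is the example from the text, the other rows are made-up values purely for illustration):

```
import numpy as np

# Raw inputs: one row per apartment, columns = (floor area, station distance, crime rate).
X_raw = np.array([
    [50.0, 600.0, 0.02],   # the example data point from the text
    [30.0, 1200.0, 0.08],  # made-up rows, for illustration only
    [70.0, 300.0, 0.01],
])

# Prepend the constant column x_0 = 1 to obtain the design matrix X, shape N x (M + 1).
X = np.hstack([np.ones((X_raw.shape[0], 1)), X_raw])
print(X.shape)  # (3, 4)
print(X[0])     # first row: the bias entry 1 followed by the three raw inputs
```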
### Step 3: Optimize the parameters (multiple regression)

We now find the parameter vector ${\bf w}$ that minimizes the objective function $L$. As in simple regression, we differentiate the objective with respect to the parameters, set the result to 0, and solve for ${\bf w}$.

First, rewrite the prediction ${\bf y}$ appearing in the objective in terms of the parameters ${\bf w}$:

$$
\begin{aligned}
L &= ({\bf t} - {\bf y})^{\rm T} ({\bf t} - {\bf y}) \\
&= ({\bf t} - {\bf X}{\bf w})^{\rm T} ({\bf t} - {\bf X}{\bf w}) \\
&= \{ {\bf t}^{\rm T} - ({\bf X}{\bf w})^{\rm T} \} ({\bf t} - {\bf X}{\bf w}) \\
&= ({\bf t}^{\rm T} - {\bf w}^{\rm T}{\bf X}^{\rm T}) ({\bf t} - {\bf X}{\bf w})
\end{aligned}
$$

where we used the transpose identity $({\bf A}{\bf B})^{\rm T} = {\bf B}^{\rm T}{\bf A}^{\rm T}$. Expanding with the distributive law,

$$
L = {\bf t}^{\rm T}{\bf t} - {\bf t}^{\rm T}{\bf X}{\bf w} - {\bf w}^{\rm T}{\bf X}^{\rm T}{\bf t} + {\bf w}^{\rm T}{\bf X}^{\rm T}{\bf X}{\bf w}
$$

We want to take the partial derivative of this objective with respect to ${\bf w}$, but first we simplify the expression a little. A scalar is unchanged by transposition, e.g. $(1)^{\rm T} = 1$. Since ${\bf w} \in \mathbb{R}^{M+1}$ and ${\bf X} \in \mathbb{R}^{N \times (M+1)}$, we have ${\bf X}{\bf w} \in \mathbb{R}^{N}$, so its inner product with ${\bf t} \in \mathbb{R}^{N}$, namely ${\bf t}^{\rm T}{\bf X}{\bf w} \in \mathbb{R}$, is a scalar. Therefore

$$
({\bf t}^{\rm T}{\bf X}{\bf w})^{\rm T} = {\bf t}^{\rm T}{\bf X}{\bf w}
$$

Moreover, by the transpose identity $({\bf A}{\bf B}{\bf C})^{\rm T} = {\bf C}^{\rm T}{\bf B}^{\rm T}{\bf A}^{\rm T}$,

$$
({\bf t}^{\rm T}{\bf X}{\bf w})^{\rm T} = {\bf w}^{\rm T} {\bf X}^{\rm T} {\bf t}
$$

also holds. Putting these together,

$$({\bf t}^{\rm T}{\bf X}{\bf w})^{\rm T} = {\bf t}^{\rm T}{\bf X}{\bf w} = {\bf w}^{\rm T} {\bf X}^{\rm T} {\bf t}$$

Using this, the objective $L$ becomes

$$
L = {\bf t}^{\rm T}{\bf t} - 2{\bf t}^{\rm T}{\bf X}{\bf w}+ {\bf w}^{\rm T}{\bf X}^{\rm T}{\bf X}{\bf w}
$$

To make the differentiation with respect to ${\bf w}$ easier, we group the quantities that do not depend on ${\bf w}$:

$$
\begin{aligned}
L &= {\bf t}^{\rm T}{\bf t} - 2{\bf t}^{\rm T}{\bf X}{\bf w} + {\bf w}^{\rm T}{\bf X}^{\rm T}{\bf X}{\bf w} \\
&= {\bf t}^{\rm T}{\bf t} - 2({\bf X}^{\rm T}{\bf t})^{\rm T} {\bf w} + {\bf w}^{\rm T}{\bf X}^{\rm T}{\bf X}{\bf w} \\
&= c + {\bf b}^{\rm T}{\bf w} + {\bf w}^{\rm T}{\bf A}{\bf w}
\end{aligned}
$$

so the objective is now expressed as a quadratic form in ${\bf w}$, where

$$
\begin{align}
{\bf A} &= {\bf X}^{\rm T}{\bf X} \\
{\bf b} &= -2 {\bf X}^{\rm T}{\bf t} \\
c &= {\bf t}^{\rm T}{\bf t}
\end{align}
$$

Now let us think about how to find the ${\bf w}$ that minimizes this objective, which is a quadratic function of ${\bf w}$. To build intuition, substitute concrete numbers for everything except ${\bf w}$, for example

$$
{\bf w} =
\begin{bmatrix} w_1 \\ w_2 \end{bmatrix},
{\bf A} =
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix},
{\bf b} =
\begin{bmatrix} 1 \\ 2 \end{bmatrix},
c = 1
$$

Then the value of the objective is

$$
\begin{aligned}
L &= {\bf w}^{\rm T}{\bf A}{\bf w} +{\bf b}^{\rm T}{\bf w} + c \\
&=
\begin{bmatrix} w_1 & w_2 \end{bmatrix}
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\begin{bmatrix} w_1 \\ w_2 \end{bmatrix}
+
\begin{bmatrix} 1 & 2 \end{bmatrix}
\begin{bmatrix} w_1 \\ w_2 \end{bmatrix}
+ 1 \\
&=
\begin{bmatrix} w_1 & w_2 \end{bmatrix}
\begin{bmatrix} w_1 + 2w_2 \\ 3w_1 + 4w_2 \end{bmatrix}
+ w_1 + 2w_2 + 1 \\
&= w_1 (w_1 + 2w_2) + w_2 (3w_1 + 4w_2) + w_1 + 2w_2 + 1 \\
&= w_1^2 + 5 w_1 w_2 + 4 w_2^2 + w_1 + 2 w_2 + 1 \\
\end{aligned}
$$

Collecting terms in $w_1$ and in $w_2$,

$$
\begin{aligned}
L &= w_1^2 + (5 w_2 + 1) w_1 + (4 w_2^2 + 2 w_2 + 1) \\
&= 4 w_2^2 + (5 w_1 + 2) w_2 + (w_1^2 + w_1 + 1)
\end{aligned}
$$

so $L$ is a quadratic function of $w_1$ and also of $w_2$. Viewed as a quadratic in $w_1$ or in $w_2$, $L$ has the shape shown below.

![Shape of the objective function (2D)](images/04/04_06.png)

In the three-dimensional space whose axes are $w_1, w_2, L$, the objective looks like this:

![Shape of the objective function (3D)](images/04/04_07.png)

Here we illustrated only two parameters $w_1$ and $w_2$, but the same reasoning applies when the objective is characterized by any number of variables $w_0, w_1, w_2, \dots, w_M$, as long as it is quadratic in each parameter. That is, we just have to solve the system of $M + 1$ equations

$$
\begin{cases}
\frac {\partial }{\partial w_0}L = 0 \\
\frac {\partial }{\partial w_1}L = 0 \\
\ \ \ \ \ \vdots \\
\frac {\partial }{\partial w_M}L = 0 \\
\end{cases}
$$

Written as a derivative with respect to the vector, this becomes

$$
\begin{aligned}
\begin{bmatrix}
\frac {\partial}{\partial w_0} L \\
\frac {\partial}{\partial w_1} L \\
\vdots \\
\frac {\partial}{\partial w_M} L \\
\end{bmatrix}
&=
\begin{bmatrix}
0 \\ 0 \\ \vdots \\ 0 \\
\end{bmatrix} \\
\Rightarrow \frac {\partial}{\partial {\bf w}} L &= {\bf 0} \\
\end{aligned}
$$

To solve this for ${\bf w}$ we manipulate the equation as follows; if any step is unclear, revisit [this chapter](https://tutorials.chainer.org/ja/src/05_Basics_of_Linear_Algebra_ja.html). First, the left-hand side:

$$
\begin{aligned}
\frac{\partial}{\partial {\bf w}} L
&= \frac{\partial}{\partial {\bf w}} (c + {\bf b}^{\rm T}{\bf w} + {\bf w}^{\rm T}{\bf A}{\bf w}) \\
&=\frac{\partial}{\partial {\bf w}} (c) + \frac{\partial}{\partial {\bf w}} ({\bf b}^{\rm T}{\bf w}) + \frac{\partial}{\partial {\bf w}} ({\bf w}^{\rm T}{\bf A}{\bf w}) \\
&={\bf 0} + {\bf b} + ({\bf A} + {\bf A}^{\rm T}) {\bf w}
\end{aligned}
$$

Setting this to ${\bf 0}$ and substituting back ${\bf A}$ and ${\bf b}$,

$$
\begin{aligned}
-2{\bf X}^{\rm T}{\bf t} + \{ {\bf X}^{\rm T}{\bf X} + ({\bf X}^{\rm T}{\bf X})^{\rm T} \} {\bf w} &= {\bf 0} \\
-2{\bf X}^{\rm T}{\bf t} + 2{\bf X}^{\rm T}{\bf X}{\bf w} &= {\bf 0} \\
{\bf X}^{\rm T}{\bf X}{\bf w}& = {\bf X}^{\rm T}{\bf t} \\
\end{aligned}
$$

**Assuming that ${\bf X}^{\rm T}{\bf X}$ has an inverse**, multiplying both sides from the left by $({\bf X}^{\rm T}{\bf X})^{-1}$ gives

$$
\begin{aligned}
({\bf X}^{\rm T}{\bf X})^{-1} {\bf X}^{\rm T}{\bf X} {\bf w} &= ({\bf X}^{\rm T}{\bf X})^{-1} {\bf X}^{\rm T}{\bf t} \\
{\bf I}{\bf w} &= ({\bf X}^{\rm T}{\bf X})^{-1} {\bf X}^{\rm T}{\bf t} \\
{\bf w} &= ({\bf X}^{\rm T}{\bf X})^{-1}{\bf X}^{\rm T}{\bf t}
\end{aligned}
$$

The last line in particular is called the **normal equation**. It computes the optimal parameters ${\bf w}$ from the design matrix ${\bf X}$ of the given data and the vector ${\bf t}$ of their targets. ${\bf I}$ denotes the identity matrix.

A common pitfall when solving for ${\bf w}$ is the following invalid manipulation:

$$
\begin{aligned}
{\bf X}^{\rm T}{\bf X}{\bf w} &= {\bf X}^{\rm T}{\bf t} \\
({\bf X}^{\rm T})^{-1} {\bf X}^{\rm T}{\bf X}{\bf w} &= ({\bf X}^{\rm T})^{-1} {\bf X}^{\rm T}{\bf t} \\
{\bf X}{\bf w} &= {\bf t} \\
{\bf X}^{-1}{\bf X}{\bf w} &= {\bf X}^{-1}{\bf t} \\
{\bf w} &= {\bf X}^{-1}{\bf t}
\end{aligned}
$$

This does not hold in general; it hinges on whether ${\bf X}^{-1}$ exists. When the sample size $N$ and the number of independent variables $M + 1$ are not equal, ${\bf X} \in \mathbb{R}^{N \times (M + 1)}$ is **not square** and therefore has no inverse ${\bf X}^{-1}$, so the second step above is not allowed (we omit the more precise conditions for invertibility here). In contrast, ${\bf X}^{\rm T}{\bf X}$ is an $(M + 1) \times (M + 1)$ matrix, which is always square regardless of the sample size $N$, and that is why the derivation above works with it.

To predict the target $y_q$ for a new input ${\bf x}_q = [x_1, \dots, x_M]^{\rm T}$ (with $x_0 = 1$ prepended, as in training, so that ${\bf x}_q$ matches the $M + 1$ elements of ${\bf w}$), we use the parameters ${\bf w}$ determined by training and compute

$$
y_q = {\bf w}^{\rm T}{\bf x}_q
$$
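As a hedged sketch of the normal equation in code (my addition, with NumPy assumed and all numbers made up for illustration; the tutorial's own implementation comes in the next chapter), note that solving the linear system $({\bf X}^{\rm T}{\bf X}){\bf w} = {\bf X}^{\rm T}{\bf t}$ is numerically preferable to forming the inverse explicitly:

```
import numpy as np

# Made-up data: N = 4 apartments, M = 3 raw input variables.
X_raw = np.array([
    [50.0, 600.0, 0.02],
    [30.0, 1200.0, 0.08],
    [70.0, 300.0, 0.01],
    [45.0, 800.0, 0.04],
])
t = np.array([133750.0, 90000.0, 180000.0, 120000.0])  # illustrative target rents

# Design matrix with the bias column x_0 = 1.
X = np.hstack([np.ones((X_raw.shape[0], 1)), X_raw])

# Normal equation: solve (X^T X) w = X^T t instead of inverting X^T X explicitly.
w = np.linalg.solve(X.T @ X, X.T @ t)

# Prediction for a new apartment (bias entry included, as in training).
x_q = np.array([1.0, 60.0, 500.0, 0.02])
y_q = w @ x_q
print(w, y_q)
```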
<hr />

<div class="alert alert-info">

**Note 1**

Each item in a dataset is sometimes called a data point (datum). Concretely, these are the individual pairs such as $(x_1, t_1)$ in the dataset $\mathcal{D}$ used in the explanation above.

[▲ Back to top](#ref_note1)

</div>
```
def get_model_result(file):
    # Load the three annotation sheets (negation, experiencer, temporality) from the output workbook.
    import pandas as pd
    negation_df = pd.read_excel("../Fast_Context_French/conf/"+file, sheet_name='Negation')
    experiencer_df = pd.read_excel("../Fast_Context_French/conf/"+file, sheet_name='Experiencer')
    temporality_df = pd.read_excel("../Fast_Context_French/conf/"+file, sheet_name='Temporality')
    import sklearn.metrics as skm

    # Negation: gold standard vs. FastContext prediction.
    negation_df["gold"] = negation_df["Gold Value"] == "Negated"
    negation_df["pred"] = negation_df["FastContext Value"] != "Not negated"
    precision_neg_true = skm.precision_score(negation_df["gold"], negation_df["pred"], pos_label=True)
    recall_neg_true = skm.recall_score(negation_df["gold"], negation_df["pred"], pos_label=True)
    f1_neg_true = skm.f1_score(negation_df["gold"], negation_df["pred"], pos_label=True)
    precision_neg_false = skm.precision_score(negation_df["gold"], negation_df["pred"], pos_label=False)
    recall_neg_false = skm.recall_score(negation_df["gold"], negation_df["pred"], pos_label=False)
    f1_neg_false = skm.f1_score(negation_df["gold"], negation_df["pred"], pos_label=False)

    # Experiencer: patient vs. not patient.
    experiencer_df["gold"] = experiencer_df["Gold Value"] == "Patient"
    experiencer_df["pred"] = experiencer_df["FastContext Value"] == "patient"
    precision_exp_true = skm.precision_score(experiencer_df["gold"], experiencer_df["pred"], pos_label=True)
    recall_exp_true = skm.recall_score(experiencer_df["gold"], experiencer_df["pred"], pos_label=True)
    f1_exp_true = skm.f1_score(experiencer_df["gold"], experiencer_df["pred"], pos_label=True)
    precision_exp_false = skm.precision_score(experiencer_df["gold"], experiencer_df["pred"], pos_label=False)
    recall_exp_false = skm.recall_score(experiencer_df["gold"], experiencer_df["pred"], pos_label=False)
    f1_exp_false = skm.f1_score(experiencer_df["gold"], experiencer_df["pred"], pos_label=False)

    # Temporality: historical vs. not historical.
    temporality_df["gold"] = temporality_df["Gold Value"] == "Historical"
    temporality_df["pred"] = temporality_df["FastContext Value"] != "Not historical"
    precision_his_true = skm.precision_score(temporality_df["gold"], temporality_df["pred"], pos_label=True)
    recall_his_true = skm.recall_score(temporality_df["gold"], temporality_df["pred"], pos_label=True)
    f1_his_true = skm.f1_score(temporality_df["gold"], temporality_df["pred"], pos_label=True)
    precision_his_false = skm.precision_score(temporality_df["gold"], temporality_df["pred"], pos_label=False)
    recall_his_false = skm.recall_score(temporality_df["gold"], temporality_df["pred"], pos_label=False)
    f1_his_false = skm.f1_score(temporality_df["gold"], temporality_df["pred"], pos_label=False)

    # Collect all scores in a single summary table.
    data = {'Precision': [precision_neg_true, precision_neg_false, precision_exp_true, precision_exp_false, precision_his_true, precision_his_false],
            'Recall': [recall_neg_true, recall_neg_false, recall_exp_true, recall_exp_false, recall_his_true, recall_his_false],
            'F1': [f1_neg_true, f1_neg_false, f1_exp_true, f1_exp_false, f1_his_true, f1_his_false]
            }
    model_outcome = pd.DataFrame(data, index=["Negated", "Not Negated", "Patient", "Not Patient", "Historical", "Not Historical"])
    return model_outcome

get_model_result("HEGP_dataset_output.xlsx")

get_model_result("CepiDC_dataset_output.xlsx")

# rules_id=[146,195,197,329,330,331,332,335,336,340,341,345,378,379,380,394,407,408,411,414,421,422,423,424,425,426,462,464,506,507,527,579,581,583,637,675,677]
# rules_list=(open("../Fast_Context_French/conf/"+"context_amine.txt", "r")
#             .read()
#             .split("\n"))
# fired_rules=[rules_list[rule_id-1] for rule_id in rules_id]
# fired_rules.sort()
# outfile=open("../Fast_Context_French/conf/"+"context_mehdi.txt", "w")
# outfile.write("\n".join(fired_rules))
# outfile.close()
```
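Since `get_model_result` returns a plain pandas DataFrame, one possible follow-up (my own sketch, not part of the original notebook) is to lay the two corpora side by side for a quick comparison:

```
import pandas as pd

# Hypothetical comparison of the two evaluation sites, reusing the function defined above.
hegp = get_model_result("HEGP_dataset_output.xlsx")
cepidc = get_model_result("CepiDC_dataset_output.xlsx")
comparison = pd.concat({"HEGP": hegp, "CepiDC": cepidc}, axis=1)
print(comparison.round(3))
```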
## Dependencies

```
import warnings, json, re, math

from melanoma_utility_scripts import *
from kaggle_datasets import KaggleDatasets
from sklearn.model_selection import KFold, RandomizedSearchCV, GridSearchCV, cross_val_score, cross_validate
from xgboost import XGBClassifier

SEED = 42
seed_everything(SEED)
warnings.filterwarnings("ignore")
```

# Model parameters

```
config = {
    "N_FOLDS": 5,
    "N_USED_FOLDS": 5,
    "DATASET_PATH": 'melanoma-512x512'
}

with open('config.json', 'w') as json_file:
    json.dump(json.loads(json.dumps(config)), json_file)

config
```

# Load data

```
database_base_path = '/kaggle/input/siim-isic-melanoma-classification/'
train = pd.read_csv(f"/kaggle/input/{config['DATASET_PATH']}/train.csv")
test = pd.read_csv(database_base_path + 'test.csv')

print('Train samples: %d' % len(train))
display(train.head())
print(f'Test samples: {len(test)}')
display(test.head())
```

# Missing values

```
# age_approx (mean)
train['age_approx'].fillna(train['age_approx'].mean(), inplace=True)
test['age_approx'].fillna(train['age_approx'].mean(), inplace=True)

# anatom_site_general_challenge (NaN)
train['anatom_site_general_challenge'].fillna('NaN', inplace=True)
test['anatom_site_general_challenge'].fillna('NaN', inplace=True)

# sex (mode)
train['sex'].fillna(train['sex'].mode()[0], inplace=True)
test['sex'].fillna(train['sex'].mode()[0], inplace=True)
```

# Feature engineering

```
### One-hot encoding
train = pd.concat([train, pd.get_dummies(train['sex'], prefix='sex_ohe', drop_first=True)], axis=1)
test = pd.concat([test, pd.get_dummies(test['sex'], prefix='sex_ohe', drop_first=True)], axis=1)

train = pd.concat([train, pd.get_dummies(train['anatom_site_general_challenge'], prefix='anatom_ohe')], axis=1)
test = pd.concat([test, pd.get_dummies(test['anatom_site_general_challenge'], prefix='anatom_ohe')], axis=1)

### Mean encoding
# Sex
train['sex_mean'] = train['sex'].map(train.groupby(['sex']).target.mean())
test['sex_mean'] = test['sex'].map(train.groupby(['sex']).target.mean())

# Anatomical site
train['anatom_mean'] = train['anatom_site_general_challenge'].map(train.groupby(['anatom_site_general_challenge']).target.mean())
test['anatom_mean'] = test['anatom_site_general_challenge'].map(train.groupby(['anatom_site_general_challenge']).target.mean())

print('Train set')
display(train.head())
print('Test set')
display(test.head())
```

# Model

```
features = ['age_approx', 'sex_mean', 'anatom_mean']
ohe_features = [col for col in train.columns if 'ohe' in col]
features += ohe_features
print(features)

params = {'n_estimators': 750,
          'min_child_weight': 0.81,
          'learning_rate': 0.025,
          'max_depth': 2,
          'subsample': 0.80,
          'colsample_bytree': 0.42,
          'gamma': 0.10,
          'random_state': SEED,
          'n_jobs': -1}
```

# Training

```
skf = KFold(n_splits=config['N_USED_FOLDS'], shuffle=True, random_state=SEED)

test['target'] = 0
model_list = []

for fold, (idxT, idxV) in enumerate(skf.split(np.arange(15))):
    print(f'\nFOLD: {fold+1}')
    print(f'TRAIN: {idxT} VALID: {idxV}')

    train[f'fold_{fold+1}'] = train.apply(lambda x: 'validation' if x['tfrecord'] in idxT else 'train', axis=1)

    x_train = train[train['tfrecord'].isin(idxT)]
    y_train = x_train['target']
    x_valid = train[~train['tfrecord'].isin(idxT)]
    y_valid = x_valid['target']

    model = XGBClassifier(**params)
    model.fit(x_train[features], y_train,
              eval_set=[(x_valid[features], y_valid)],
              eval_metric='auc',
              early_stopping_rounds=10,
              verbose=0)
    model_list.append(model)

    # Evaluation
    preds = model.predict_proba(train[features])[:, 1]
    train[f'pred_fold_{fold+1}'] = preds

    # Inference
    preds = model.predict_proba(test[features])[:, 1]
    test[f'pred_fold_{fold+1}'] = preds
    test['target'] += preds / config['N_USED_FOLDS']
```

# Feature importance

```
for n_fold, model in enumerate(model_list):
    print(f'Fold: {n_fold + 1}')
    feature_importance = model.get_booster().get_score(importance_type='weight')
    keys = list(feature_importance.keys())
    values = list(feature_importance.values())

    importance = pd.DataFrame(data=values, index=keys, columns=['score']).sort_values(by='score', ascending=False)
    plt.figure(figsize=(16, 8))
    sns.barplot(x=importance.score.iloc[:20], y=importance.index[:20], orient='h', palette='Reds_r')
    plt.show()
```

# Model evaluation

```
display(evaluate_model(train, config['N_USED_FOLDS']).style.applymap(color_map))
display(evaluate_model_Subset(train, config['N_USED_FOLDS']).style.applymap(color_map))
```

# Adversarial Validation

```
### Adversarial set
adv_train = train.copy()
adv_test = test.copy()

adv_train['dataset'] = 1
adv_test['dataset'] = 0

x_adv = pd.concat([adv_train, adv_test], axis=0)
y_adv = x_adv['dataset']

### Adversarial model
model_adv = XGBClassifier(**params)
model_adv.fit(x_adv[features], y_adv, eval_metric='auc', verbose=0)

### Preds
preds = model_adv.predict_proba(x_adv[features])[:, 1]

### Plot feature importance and ROC AUC curve
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))

# Feature importance
feature_importance = model_adv.get_booster().get_score(importance_type='weight')
keys = list(feature_importance.keys())
values = list(feature_importance.values())
importance = pd.DataFrame(data=values, index=keys, columns=['score']).sort_values(by='score', ascending=False)
ax1.set_title('Feature Importances')
sns.barplot(x=importance.score.iloc[:20], y=importance.index[:20], orient='h', palette='Reds_r', ax=ax1)

# Plot ROC AUC curve
fpr_train, tpr_train, _ = roc_curve(y_adv, preds)
roc_auc_train = auc(fpr_train, tpr_train)
ax2.set_title('ROC AUC curve')
ax2.plot(fpr_train, tpr_train, color='blue', label='Adversarial AUC = %0.2f' % roc_auc_train)
ax2.legend(loc = 'lower right')
ax2.plot([0, 1], [0, 1],'r--')
ax2.set_xlim([0, 1])
ax2.set_ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
```

# Visualize predictions

```
train['pred'] = 0
for n_fold in range(config['N_USED_FOLDS']):
    train['pred'] += train[f'pred_fold_{n_fold+1}'] / config['N_FOLDS']

print('Label/prediction distribution')
print(f"Train positive labels: {len(train[train['target'] > .5])}")
print(f"Train positive predictions: {len(train[train['pred'] > .5])}")
print(f"Train positive correct predictions: {len(train[(train['target'] > .5) & (train['pred'] > .5)])}")

print('Top 10 samples')
display(train[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis', 'target', 'pred'] + [c for c in train.columns if (c.startswith('pred_fold'))]].head(10))

print('Top 10 positive samples')
display(train[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis', 'target', 'pred'] + [c for c in train.columns if (c.startswith('pred_fold'))]].query('target == 1').head(10))

print('Top 10 predicted positive samples')
display(train[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis', 'target', 'pred'] + [c for c in train.columns if (c.startswith('pred_fold'))]].query('pred > .5').head(10))
```

# Visualize test predictions

```
print(f"Test predictions {len(test[test['target'] > .5])}|{len(test[test['target'] <= .5])}")

print('Top 10 samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target'] + [c for c in test.columns if (c.startswith('pred_fold'))]].head(10))

print('Top 10 positive samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target'] + [c for c in test.columns if (c.startswith('pred_fold'))]].query('target > .5').head(10))
```

# Test set predictions

```
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission['target'] = test['target']
display(submission.head(10))
display(submission.describe())
submission[['image_name', 'target']].to_csv('submission.csv', index=False)
```
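The evaluation above relies on `evaluate_model` from `melanoma_utility_scripts`, which is not shown in this notebook. As a rough, hypothetical stand-in (my own sketch, assuming scikit-learn's `roc_auc_score` and the `fold_*` / `pred_fold_*` columns created during training), a per-fold AUC on the rows marked `'validation'` could be computed like this:

```
from sklearn.metrics import roc_auc_score

# Hypothetical sketch, not the notebook's evaluate_model: AUC per fold on the
# rows flagged 'validation' in the fold assignment columns created above.
for n_fold in range(config['N_USED_FOLDS']):
    valid_mask = train[f'fold_{n_fold+1}'] == 'validation'
    auc_fold = roc_auc_score(train.loc[valid_mask, 'target'],
                             train.loc[valid_mask, f'pred_fold_{n_fold+1}'])
    print(f'Fold {n_fold+1} AUC: {auc_fold:.4f}')
```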
```
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils

bs = 16
```

## Layers

```
def conv1d(ni, nf, k=3, s=2, p=1):
    return nn.Conv1d(ni, nf, kernel_size=k, stride=s, padding=p)

def conv2d(ni, nf, k=3, s=2, p=1):
    return nn.Conv2d(ni, nf, kernel_size=k, stride=s, padding=p)

def upsample(ni, nf, k=3, s=2, p=1, op=0):
    return nn.ConvTranspose2d(ni, nf, kernel_size=k, stride=s, padding=p, output_padding=op)

class Flatten(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        x = x.view(x.size()[0], -1)
        return x

class ResnetBlock(nn.Module):
    def __init__(self, nf):
        super().__init__()
        self.conv1 = conv2d(nf, nf, s=1)
        self.batchnorm = nn.BatchNorm2d(nf)
        self.relu = nn.ReLU(True)
        self.conv2 = conv2d(nf, nf, s=1)

    def forward(self, x):
        out = self.conv1(x)
        out = self.batchnorm(out)
        out = self.relu(out)
        out = self.conv2(out)
        return x + out

def conv_res(ni, nf):
    return nn.Sequential(conv2d(ni, nf), ResnetBlock(nf))

def up_res(ni, nf):
    return nn.Sequential(upsample(ni, nf, op=1), ResnetBlock(nf))

def create_encoder(image_size=64, latent_dim=4):
    channels = [1, 4, 8, 16, 32]
    layers = []
    layers.append(conv_res(channels[0], channels[1]))  # (bs, 32, 32)
    layers.append(conv_res(channels[1], channels[2]))  # (bs, 16, 16)
    layers.append(conv_res(channels[2], channels[3]))  # (bs, 8, 8)
    layers.append(conv_res(channels[3], channels[4]))  # (bs, 4, 4)
    return nn.Sequential(*layers)

def create_decoder(image_size=64, latent_dim=4):
    channels = [32, 16, 8, 4, 1]
    layers = []
    # use upsampling
    layers.append(up_res(channels[0], channels[1]))  # (bs, 8, 8)
    layers.append(up_res(channels[1], channels[2]))  # (bs, 16, 16)
    layers.append(up_res(channels[2], channels[3]))  # (bs, 32, 32)
    layers.append(up_res(channels[3], channels[4]))  # (bs, 64, 64)
    return nn.Sequential(*layers)
```

## Playground

#### Conv1D

```
# conv 1
input = torch.randn(1, 32, 32)
output = conv1d(32, 16, k=1, s=2, p=0)(input)
print(output.shape)
```

#### Conv2D

```
# conv
input = torch.randn(bs, 1, 64, 64)
output = conv2d(1, 1, k=3, s=2)(input)
print(output.shape)

# conv
input = torch.randn(bs, 1, 64, 64)
output = conv2d(1, 1, k=3, s=1)(input)
print(output.shape)

# conv
input = torch.randn(bs, 1, 32, 32)
output = conv2d(1, 8, k=3, s=2)(input)
print(output.shape)

# conv
input = torch.randn(bs, 8, 16, 16)
output = conv2d(8, 16, k=3, s=2)(input)
print(output.shape)
```

#### ResNet

```
input = torch.randn(bs, 1, 64, 64)
output = ResnetBlock(1)(input)
print(output.shape)
```

#### Encoder

```
input = torch.randn(bs, 1, 64, 64)
output = create_encoder()(input)
print(output.shape)
```

#### Flatten

```
# conv 1
input = torch.randn(bs, 1, 32, 32)
output = Flatten()(input)
print(output.shape)
```

#### Upsample

```
input = torch.randn(bs, 64, 32, 32)
output = upsample(64, 32)(input, output_size=(64, 64))
print(output.shape)

input = torch.randn(bs, 64, 32, 32)
output = upsample(64, 32, op=1)(input)
print(output.shape)
```

#### Reshape

```
torch.randn(bs, 16).reshape((bs, 4, 4)).shape

input = torch.randn(bs, 16)
input.view(bs, 4, 4).shape
```

#### Decoder

```
input = torch.randn(bs, 32, 4, 4)
output = create_decoder()(input)
print(output.shape)
```
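The blocks above already compose into a full round trip. The following check is my addition (not part of the original notebook) and reuses `create_encoder` and `create_decoder` exactly as defined above to confirm that an image passes through the encoder and decoder back to its original shape:

```
# Chain the encoder and decoder into an autoencoder-style module and verify the shapes.
autoencoder = nn.Sequential(create_encoder(), create_decoder())

x = torch.randn(bs, 1, 64, 64)   # (batch, channels, height, width)
out = autoencoder(x)
print(out.shape)                 # expected: torch.Size([16, 1, 64, 64])
```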
# Development code snippets --- ## Imports ### Modules ``` %load_ext autoreload %autoreload 2 import torch import winsound import numpy as np import pandas as pd import matplotlib.pyplot as plt from deepplats import DeepPLF from deepplats.models import ARF, DenseARF, PLR, PLF from deepplats.models.utils import Scaler, TimeScaler from deepplats.utils import get_data ``` ### Data ``` df = get_data() ``` ## Model tests ### Data ``` y = df.value.to_numpy()[-500:] X = np.arange(y.size) yscaler = Scaler() y = yscaler.fit_transform(y) ``` ### In-sample ``` forecast_trend = 'rnn' forecast_resid = 'dense' horizon = 1 lags = 20 plf_kwargs = {} dar_kwargs = {}#{"size_hidden_layers": 32, "n_hidden_layers": 2, "dropout": 0.5} deepplf = DeepPLF( lags=lags, horizon=horizon, forecast_trend=forecast_trend, forecast_resid=forecast_resid, plf_kwargs=plf_kwargs, dar_kwargs=dar_kwargs) ``` ``` deepplf.fit(X, y, plr_epochs=5000, #plr_lam=.000, plr_lr=.001, plr_optim='Adam', plr_batch_size=1., plf_epochs=500, #plf_lam=0.0, plf_lr=.0001, plf_batch_size=1.0, dar_epochs=1000) # Zoom on the simple extrapolation. plt.plot(X[-20:], y[-20:]) plt.plot(X[-20:], deepplf.transform(X)[-20:]) plt.plot((X[lags-1:]+1)[-20:], deepplf.predict(X, y, model='trend')[-20:]) plt.plot((X+20)[-20:], deepplf.plf.extrapolate(torch.tensor([X]), horizon=20).detach().numpy().flatten(), c='red', ) plt.plot(X, y) plt.plot(X, deepplf.transform(X)) plt.plot(X[lags-1:]+1, deepplf.predict(X, y, model='trend')) plt.plot((X+20)[-20:], deepplf.plf.extrapolate(torch.tensor([X]), horizon=20).detach().numpy().flatten(), c='red', ) plt.vlines(deepplf.plr.breakpoints(), -2, 2, linewidth=.1, alpha=.5) plt.xlim(X.min()-20); # Either breaks=1.0 with lam=.001 OR breaks=.2 with lam=0 plt.style.use('default') # plt.style.use('ggplot') fig, ax = plt.subplots(figsize=(15, 10)) ax.scatter(X, y, marker='.', c='slateblue') transform = deepplf.transform(X) trend = deepplf.predict(X, y, model='trend') pred = deepplf.predict(X, y) ax.plot(X[-pred.size:], pred, c='orangered', label='full_pred') ax.plot(X[-pred.size:], trend, c='orangered', linestyle='--', label='trend_pred') ax.plot(X, transform, c='orangered', linestyle=':', label='trend_transform') plt.legend(prop={'size': 16}) extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) plt.savefig('../images/example_is.jpg', bbox_inches=extent) fig, ax = plt.subplots(figsize=(15, 10)) ax.scatter(X, y, marker='.', c='slateblue') transform = deepplf.transform(X) trend = deepplf.predict(X, y, model='trend') pred = deepplf.predict(X, y) ax.plot(X[-pred.size:], pred, c='orangered', label='full_pred') ax.plot(X[-pred.size:], trend, c='orangered', linestyle=':', label='trend_pred') ax.plot(X, transform, c='orangered', linestyle='--', label='trend_transform') plt.legend(prop={'size': 16}) # extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) # plt.savefig('../images/example.jpg', bbox_inches=extent) fig, ax = plt.subplots(figsize=(15, 10)) ax.scatter(X, y, marker='.', c='slateblue') trend = deepplf.predict(X, y, mod='trend') pred = deepplf.predict(X, y) ax.plot(X[-pred.size:], pred, c='orangered', label='full_pred') ax.plot(X[-pred.size:], trend, c='orangered', linestyle=':', label='trend_pred') plt.legend(prop={'size': 16}) # extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) # plt.savefig('../images/example.jpg', bbox_inches=extent) # plt.style.use('default') plt.style.use('ggplot') fig, ax = plt.subplots(figsize=(15, 10)) ax.scatter(X, y, marker='.', 
c='slateblue') trend = deepplf.predict(X, y, mod='trend') pred = deepplf.predict(X, y) ax.plot(X[-pred.size:], pred, c='orangered', label='full_pred') ax.plot(X[-pred.size:], trend, c='orangered', linestyle=':', label='trend_pred') plt.legend(prop={'size': 16}) # extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) # plt.savefig('../images/example.jpg', bbox_inches=extent) ``` ### Out-of-sample ``` # plt.style.use('default') plt.style.use('ggplot') lags = 20 horizon = 1 breaks = 1. forecast_trend = 'rnn' forecast_resid = 'simple' plf_kwargs = {} dar_kwargs = {"size_hidden_layers": 32, "n_hidden_layers": 2, "dropout": 0.5} train_size = 100 sections = 30 epochs=dict( plr_epochs=5000, plf_epochs=500, dar_epochs=1000, ) preds = [] for i in range(sections): deepplf = DeepPLF( lags=lags, horizon=horizon, breaks=breaks, forecast_trend=forecast_trend, forecast_resid=forecast_resid, plf_kwargs=plf_kwargs, dar_kwargs=dar_kwargs ) index_diff = i * horizon X_train = X[index_diff : train_size + index_diff] y_train = y[index_diff : train_size + index_diff] deepplf.fit(X_train, y_train, **epochs) X_test = X[index_diff : train_size + index_diff] y_test = y[index_diff : train_size + index_diff] pred = deepplf.predict(X_test, y_test) pred = pred[-horizon:] preds.extend(pred.tolist()[-1:]) winsound.Beep(250, 1000) preds = np.zeros(shape=(3, 30)) for i in range(30): print(i) # <<<<======================== deepplf = DeepPLF( lags=10, horizon=1, breaks=1.0, forecast_trend='rnn', forecast_resid='dense', dar_kwargs = {"size_hidden_layers": 10, "n_hidden_layers": 1, "dropout": 0.5} ) X_train = X[i : 100 + i] y_train = y[i : 100 + i] deepplf.fit(X_train, y_train, plr_epochs=5000, plf_epochs=500, dar_epochs=1000) preds[0][i] = deepplf.plf.extrapolate(torch.tensor(X_train)[None,]).detach().numpy()[0][0] preds[1][i] = deepplf.predict(X_train, y_train, model='trend')[-1] preds[2][i] = deepplf.predict(X_train, y_train, model='resid')[-1] winsound.Beep(250, 1000) # <<<<======================== plt.plot(y[100:130]) plt.plot(preds[1:].sum(0)) # rnn trend + shallow resid fig, ax = plt.subplots(figsize=(15, 10)) ax.plot(X[:30 * horizon], y[train_size : train_size + 30 * horizon], c='slateblue') ax.plot(preds[1:].sum(0), c='orangered', label='full_pred') ax.plot(preds[1], c='orangered', linestyle='--', alpha=.5, label='trend_pred') ax.plot(preds[0], c='orangered', linestyle=':', alpha=.25, label='trend_extrapolation') plt.legend(prop={'size': 16}) extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) plt.savefig('../images/example_oos.jpg', bbox_inches=extent) # rnn trend + deep resid plt.plot(y[train_size : train_size + sections * horizon], c='slateblue') plt.plot(preds[1:].sum(0)[:30], c='orangered') # rnn trend + deep resid plt.plot(y[train_size : train_size + sections * horizon], c='slateblue') plt.plot(preds, c='orangered') # rnn trend [ rnn(y_lag, trend_lag) = trend ] plt.plot(y[train_size : train_size + sections * horizon], c='slateblue') plt.plot(preds, c='orangered') # rnn trend [ f(trend) = y ] plt.plot(y[train_size : train_size + sections * horizon], c='slateblue') plt.plot(preds, c='orangered') # simple (extrapolated) trend plt.plot(y[train_size : train_size + sections * horizon], c='slateblue') plt.plot(preds, c='orangered') ``` ### Testing ``` from deepplats.models.utils import FlattenLSTM from tqdm import tqdm y = df.value.to_numpy()[-500:] X = np.arange(y.size) SEQ_LENGTH = 50 scaler = Scaler() scaler.fit(y) y_trans = scaler.transform(y) y_roll = 
DeepPLF._roll_arr(y_trans, SEQ_LENGTH) target = y_trans[SEQ_LENGTH:] y_roll = y_roll[:-1] trend = deepplf.plr(torch.tensor(X, dtype=torch.float)[:, None, None]).detach().numpy().flatten() trend_roll = DeepPLF._roll_arr(scaler.transform(trend), SEQ_LENGTH) trend_roll = trend_roll[:-1] X_roll = DeepPLF._roll_arr(X, SEQ_LENGTH) X_roll = X_roll[:-1] # X_roll.T, y_roll.T, trend_roll.T X_torch = torch.tensor(np.stack([trend_roll.T]).T, dtype=torch.float) y_torch = torch.tensor(target, dtype=torch.float)[:, None] plt.plot(X, y) plt.plot(trend) epochs = 100 batch_size = 100 N_FEATURES = 1 HIDDEN_SIZE = 2 NUM_LAYERS = 1 LAST_STEP = False LINEAR_INPUT_SIZE = HIDDEN_SIZE if LAST_STEP else HIDDEN_SIZE*SEQ_LENGTH model = torch.nn.Sequential(*[ torch.nn.LSTM(input_size=N_FEATURES, hidden_size=HIDDEN_SIZE, num_layers=NUM_LAYERS, batch_first=True), FlattenLSTM(last_step=LAST_STEP), torch.nn.Linear(LINEAR_INPUT_SIZE, 1) ]) optimizer = getattr(torch.optim, 'Adam')(model.parameters(), lr=.01) lossfunc = getattr(torch.nn, 'MSELoss')() # model.train() batches = int(max(np.ceil(len(X) / batch_size), 1)) for epoch in tqdm(range(epochs)): for batch in range(batches): X_batch = X_torch[batch * batch_size : (batch + 1) * batch_size] y_batch = y_torch[batch * batch_size : (batch + 1) * batch_size] y_hat = model(X_batch) # model(X_torch) optimizer.zero_grad() loss = lossfunc(y_batch, y_hat) # lossfunc(y_torch, y_hat) loss.backward() optimizer.step() # model.eval(); pred = model(X_torch).detach().numpy() view_limit = 0, 100 view_floor = view_limit[0] + SEQ_LENGTH view_ceil = view_limit[1] + SEQ_LENGTH print(np.abs(scaler.transform(y)[SEQ_LENGTH:] - pred.flatten()).mean()) plt.plot(X[view_floor:view_ceil], scaler.transform(y)[view_floor:view_ceil]) plt.plot(X[view_floor:view_ceil], pred[view_floor-SEQ_LENGTH:view_ceil-SEQ_LENGTH]) view_limit = 0, 100 view_floor = view_limit[0] + SEQ_LENGTH view_ceil = view_limit[1] + SEQ_LENGTH print(np.abs(scaler.transform(y)[SEQ_LENGTH:] - pred.flatten()).mean()) plt.plot(X[view_floor:view_ceil], scaler.transform(y)[view_floor:view_ceil]) plt.plot(X[view_floor:view_ceil], pred[view_floor-SEQ_LENGTH:view_ceil-SEQ_LENGTH]) # y only view_limit = 0, 100 view_floor = view_limit[0] + SEQ_LENGTH view_ceil = view_limit[1] + SEQ_LENGTH print(np.abs(scaler.transform(y)[SEQ_LENGTH:] - pred.flatten()).mean()) plt.plot(X[view_floor:view_ceil], scaler.transform(y)[view_floor:view_ceil]) plt.plot(X[view_floor:view_ceil], pred[view_floor-SEQ_LENGTH:view_ceil-SEQ_LENGTH]) # time only view_limit = 0, 100 view_floor = view_limit[0] + SEQ_LENGTH view_ceil = view_limit[1] + SEQ_LENGTH print(np.abs(scaler.transform(y)[SEQ_LENGTH:] - pred.flatten()).mean()) plt.plot(X[view_floor:view_ceil], scaler.transform(y)[view_floor:view_ceil]) plt.plot(X[view_floor:view_ceil], pred[view_floor-SEQ_LENGTH:view_ceil-SEQ_LENGTH]) # trend only view_limit = 0, 100 view_floor = view_limit[0] + SEQ_LENGTH view_ceil = view_limit[1] + SEQ_LENGTH print(np.abs(scaler.transform(y)[SEQ_LENGTH:] - pred.flatten()).mean()) plt.plot(X[view_floor:view_ceil], scaler.transform(y)[view_floor:view_ceil]) plt.plot(X[view_floor:view_ceil], pred[view_floor-SEQ_LENGTH:view_ceil-SEQ_LENGTH]) ```
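The testing cells above repeat the same "view window" plotting block several times with identical limits. A small helper is one way to avoid the copy-paste; this is a minimal sketch that assumes `X`, `y`, `pred`, `scaler` and `SEQ_LENGTH` are defined as in the cells above:

```
# Minimal sketch: wrap the repeated view-window block from the testing cells.
def plot_view(view_limit=(0, 100)):
    view_floor = view_limit[0] + SEQ_LENGTH
    view_ceil = view_limit[1] + SEQ_LENGTH
    # same error printout and plots as the repeated cells above
    print(np.abs(scaler.transform(y)[SEQ_LENGTH:] - pred.flatten()).mean())
    plt.plot(X[view_floor:view_ceil], scaler.transform(y)[view_floor:view_ceil])
    plt.plot(X[view_floor:view_ceil], pred[view_floor - SEQ_LENGTH:view_ceil - SEQ_LENGTH])

plot_view((0, 100))
```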
*This is part of Kaggle's [Learn Machine Learning](https://www.kaggle.com/learn/machine-learning) series.* # Selecting and Filtering Data Your dataset had too many variables to wrap your head around, or even to print out nicely. How can you pare down this overwhelming amount of data to something you can understand? To show you the techniques, we'll start by picking a few variables using our intuition. Later tutorials will show you statistical techniques to automatically prioritize variables. Before we can choose variables/columns, it is helpful to see a list of all columns in the dataset. That is done with the **columns** property of the DataFrame (the bottom line of code below). ``` import pandas as pd melbourne_file_path = './data/melb_data.csv' melbourne_data = pd.read_csv(melbourne_file_path) print(melbourne_data.columns) ``` There are many ways to select a subset of your data. We'll start with two main approaches: ## Selecting a Single Column You can pull out any variable (or column) with **dot-notation**. This single column is stored in a **Series**, which is broadly like a DataFrame with only a single column of data. Here's an example: ``` # store the series of prices separately as melbourne_price_data. melbourne_price_data = melbourne_data.Price # the head command returns the top few lines of data. melbourne_price_data.head() ``` ## Selecting Multiple Columns You can select multiple columns from a DataFrame by providing a list of column names inside brackets. Remember, each item in that list should be a string (with quotes). ``` columns_of_interest = ['Landsize', 'BuildingArea'] two_columns_of_data = melbourne_data[columns_of_interest] ``` We can verify that we got the columns we need with the **describe** command. ``` two_columns_of_data.describe() ``` # Your Turn In the notebook with your code: 1. Print a list of the columns 2. From the list of columns, find a name of the column with the sales prices of the homes. Use the dot notation to extract this to a variable (as you saw above to create `melbourne_price_data`.) 3. Use the `head` command to print out the top few lines of the variable you just created. 4. Pick any two variables and store them to a new DataFrame (as you saw above to create `two_columns_of_data`.) 5. Use the describe command with the DataFrame you just created to see summaries of those variables. <br> --- # Continue Now that you can specify what data you want for a model, you are ready to **[build your first model](https://www.kaggle.com/dansbecker/your-first-scikit-learn-model)** in the next step. ``` import pandas as pd data_train = pd.read_csv('./data/house-prices-advanced-regression-techniques/train.csv') data_train.columns price_data = data_train['SalePrice'] price_data.head() two_columns_of_data = data_train[['Electrical', 'Condition1']] two_columns_of_data.describe() ```
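As a quick sanity check, bracket selection with a single column name gives the same Series as the dot notation shown above, so the two spellings are interchangeable for simple column names (a minimal sketch using the `melbourne_data` frame loaded above):

```
# Illustrative check: both spellings select the same column.
price_via_brackets = melbourne_data['Price']
print(price_via_brackets.equals(melbourne_data.Price))  # expected: True
```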
# Acronym Extraction

(C) 2022 by [Damir Cavar](http://damir.cavar.me/)

This is an example of the use of the *[abbreviations](https://github.com/philgooch/abbreviation-extraction)* module to extract acronyms from documents. Install the module using:

    pip install abbreviations

For the *FileChooser* widget in this Jupyter notebook you might also need to install the *[ipyfilechooser](https://github.com/crahan/ipyfilechooser)*:

    pip install ipyfilechooser

The code below assumes that the text is encoded as UTF-8. If this is not the case for you, adapt the encoding specification in the *get_abbreviations* function below or convert your text to use the UTF-8 character encoding.

Run the following code to activate the *FileChooser* and select a folder with the target text files in it. The target text files can be in subfolders of arbitrary depth within this folder.

```
from ipyfilechooser import FileChooser
fc = FileChooser()
display(fc)
```

In the following code cell we will import the necessary modules *[abbreviations](https://github.com/philgooch/abbreviation-extraction)* and *os*, used in the functions below to process subfolders, find target text files, and extract all abbreviations from them.

```
from abbreviations import schwartz_hearst
import os
```

The following function reads the content of the text file given by *file_name*, runs the Schwartz-Hearst extractor on it (collecting both the most common and the first definition for each abbreviation), removes the terms listed in *filter_terms*, and returns a sorted list of abbreviation-definition pairs.

```
def get_abbreviations(file_name = "", filter_terms = []):
    # return an empty list on failure so the calling loop can iterate safely
    if not os.path.exists(file_name):
        return []
    print("Processing file:", file_name)
    try:
        ifp = open(file_name, mode='r', encoding='utf-8')
        text = ifp.read()
        ifp.close()
    except IOError:
        return []
    if not text:
        return []
    most_common_defs = schwartz_hearst.extract_abbreviation_definition_pairs(doc_text=text, most_common_definition=True)
    first_defs = schwartz_hearst.extract_abbreviation_definition_pairs(doc_text=text, first_definition=True)
    for x in filter_terms:
        if x in most_common_defs:
            del most_common_defs[x]
        if x in first_defs:
            del first_defs[x]
    res = {}
    for s in (first_defs, most_common_defs):
        for x in s:
            val = res.get(x, set())
            full_form = s[x].lower().title()
            val.add(full_form)
            res[x] = val
    abbreviations = list(res.items())
    abbreviations.sort()
    return abbreviations
```

We can define the folders to be skipped in this list. For example, the following specification would ignore the folders *_test* and *not_relevant* in the target path:

```
skip_folders = ["_test", "not_relevant"]
```

The following loop will walk through all subfolders, skipping the ones specified above, and process each text file ending in *.txt* in the folder. When calling *get_abbreviations*, the second parameter can be a list of strings that should be skipped in the output of the abbreviation extractor.

```
for root, dirs, files in os.walk(fc.selected_path):
    if os.path.basename(os.path.normpath(root)) in skip_folders:
        continue
    for file in files:
        if file.endswith(".txt"):
            [ print(f"{x[0]}: {', '.join(x[1])}") for x in get_abbreviations(os.path.join(root, file), ["if any", "The"]) ]
```

(C) 2022 by [Damir Cavar](http://damir.cavar.me/)
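To get a feel for what the Schwartz-Hearst extractor returns before pointing it at a whole folder, it can also be run on a short inline string. This is a minimal sketch using the same `doc_text` keyword as in *get_abbreviations* above; the sample sentence is only an illustration:

```
# Illustrative sketch: run the extractor directly on a short text snippet.
sample = "The World Health Organization (WHO) publishes health guidelines."
pairs = schwartz_hearst.extract_abbreviation_definition_pairs(doc_text=sample)
print(pairs)  # expected to map 'WHO' to 'World Health Organization'
```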
# Lab 5

## Data: _European Union lesbian, gay, bisexual and transgender survey (2012)_

Link to the data [here](https://www.kaggle.com/ruslankl/european-union-lgbt-survey-2012).

### Context

The FRA (Fundamental Rights Agency) ran an online survey to identify how lesbian, gay, bisexual and transgender (LGBT) people living in the European Union and Croatia experience the fulfilment of their fundamental rights. The evidence produced by the survey will support the development of more effective laws and policies to fight discrimination, violence and harassment, improving equal treatment across society.

The need for an EU-wide survey of this kind became evident after the publication in 2009 of FRA's first report on homophobia and discrimination on grounds of sexual orientation or gender identity, which highlighted the absence of comparable data. The European Commission asked FRA to collect comparable data across the EU on this topic. FRA organised the data collection as an online survey covering all EU Member States and Croatia. Respondents were people aged 18 or over who identify as lesbian, gay, bisexual or transgender, answering anonymously. The survey was made available online, from April to July 2012, in the 23 official EU languages (except Irish) plus Catalan, Croatian, Luxembourgish, Russian and Turkish. In total, 93,079 LGBT people completed the survey. FRA's in-house experts designed the survey, which was implemented by Gallup, one of the market leaders in large-scale surveys. In addition, civil society organisations such as ILGA-Europe (the European Region of the International Lesbian, Gay, Bisexual, Trans and Intersex Association) and Transgender Europe (TGEU) provided advice on how best to reach LGBT people.

You can find more information on the survey methodology in the [__EU LGBT survey technical report. Methodology, online survey, questionnaire and sample__](https://fra.europa.eu/sites/default/files/eu-lgbt-survey-technical-report_en.pdf).

### Content

The dataset consists of 5 .csv files representing 5 blocks of questions: daily life, discrimination, violence and harassment, rights awareness, and transgender-specific questions.

The schema of all tables is identical:

* `CountryCode` - name of the country
* `subset` - Lesbian, Gay, Bisexual women, Bisexual men or Transgender (for Transgender Specific Questions table the value is only Transgender)
* `question_code` - unique code ID for the question
* `question_label` - full question text
* `answer` - answer given
* `percentage`
* `notes` - [0]: small sample size; [1]: NA due to small sample size; [2]: missing value

In today's lab we will only use the daily-life questions, available in the file `LGBT_Survey_DailyLife.csv` inside the `data` folder.
```
import os

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

%matplotlib inline

daily_life_raw = pd.read_csv(os.path.join("..", "data", "LGBT_Survey_DailyLife.csv"))
daily_life_raw.head()
daily_life_raw.info()
daily_life_raw.describe(include="all").T
questions = (
    daily_life_raw.loc[: , ["question_code", "question_label"]]
    .drop_duplicates()
    .set_index("question_code")
    .squeeze()
)
for idx, value in questions.items():
    print(f"Question code {idx}:\n\n{value}\n\n")
```

### Data preprocessing

Did you notice that the `percentage` column is not numeric? That is because of the records with notes `[1]`, so we will remove them.

```
daily_life_raw.notes.unique()
daily_life = (
    daily_life_raw.query("notes != ' [1] '")
    .astype({"percentage": "int"})
    .drop(columns=["question_label", "notes"])
    .rename(columns={"CountryCode": "country"})
)
daily_life.head()
```

## Exercise 1 (1 pt)

What type of data (nominal, ordinal, discrete, continuous) does each column of the `daily_life` DataFrame correspond to? Recommendation: look at the unique values of each column.

```
daily_life.dtypes
daily_life["subset"].unique()
```

__Answer:__

* `country`: Nominal
* `subset`: Nominal
* `question_code`: Nominal
* `answer`: Nominal
* `percentage`: Continuous

## Exercise 2 (1 pt)

Create a new DataFrame `df1` that only contains records from Belgium, for the question with code `b1_b`, where the answer was _Very widespread_.

Now create a vertical bar chart with `matplotlib`'s `bar` function to show the percentage of answers for each group. The figure must be of size 10 x 6 and the bars green.

```
print(f"Question b1_b:\n\n{questions['b1_b']}")
df1 = daily_life[(daily_life["country"]=="Belgium")&(daily_life["question_code"]=="b1_b")&(daily_life["answer"]=="Very widespread")]
df1
x = df1["subset"].values
y = df1["percentage"].values

fig = plt.figure(figsize=(10, 6))
plt.bar(x,y,color="g")
plt.show()
```

## Exercise 3 (1 pt)

For the question with code `g5`, what is the average percentage for each answer value (note that the answers to this question are numeric)?

```
print(f"Question g5:\n\n{questions['g5']}")
```

Create a DataFrame called `df2` such that:

1. It only contains records for the question with code `g5`.
2. The `answer` column is cast to `int`.
3. It is grouped by country and answer, with the mean computed over the percentage column (use `agg`).
4. The index is reset.

```
df2 = (
    daily_life[daily_life["question_code"]=="g5"]
    .astype({"answer": "int"})
    .groupby(["country","answer"])
    .agg(prom_perc_country=("percentage", "mean"))
    .reset_index()
)
df2.head()
```

Create a DataFrame called `df2_mean` such that:

1. `df2` is grouped by answer and the mean of the percentage is computed.
2. The index is reset.

```
df2_mean = df2.groupby("answer").agg(prom_perc_answer=("prom_perc_country","mean")).reset_index()
df2_mean.head()
```

Now plot the following:

1. A figure with two columns, figure size 15 x 12, sharing the x and y axes. Use `plt.subplots`.
2. For the first _Axes_ (`ax1`), make a _scatter plot_ where the x axis holds the answer values of `df2` and the y axis the percentages of `df2`. Remember that these are per-country averages, so there will be more than 10 points in the plot.
3. For the second _Axes_ (`ax2`), make a horizontal bar chart where the x axis holds the answer values of `df2_mean` and the y axis the percentages of `df2_mean`.
```
x = df2["answer"]
y = df2["prom_perc_country"]

x_mean = df2_mean["answer"]
y_mean = df2_mean["prom_perc_answer"]

fig, (ax1, ax2) = plt.subplots(nrows=1,ncols=2,sharex=True,sharey=True,figsize=(15,12))
ax1.scatter(x,y)
ax1.grid(alpha=0.3)
ax2.barh(x_mean, y_mean)
ax2.grid(alpha=0.3)
fig.show()
```

## Exercise 4 (1 pt)

For the same question `g5`, how are the percentages distributed on average for each country - group combination?

We will use the heat map presented in class; to do so, the data needs some processing to build the elements it requires.

Create a DataFrame called `df3` such that:

1. It only contains records for the question with code `g5`.
2. The `answer` column is cast to `int`.
3. It is grouped by country and subset, and the mean is computed over the percentage column (use `agg`).
4. The index is reset.
5. It is pivoted so that the index holds the countries, the columns the groups, and the values the mean percentages.
6. Null values are filled with zero. Use `fillna`.

```
## Code from:
# https://matplotlib.org/3.1.1/gallery/images_contours_and_fields/image_annotated_heatmap.html#sphx-glr-gallery-images-contours-and-fields-image-annotated-heatmap-py
import numpy as np
import matplotlib
import matplotlib.pyplot as plt


def heatmap(data, row_labels, col_labels, ax=None, cbar_kw={}, cbarlabel="", **kwargs):
    """
    Create a heatmap from a numpy array and two lists of labels.

    Parameters
    ----------
    data
        A 2D numpy array of shape (N, M).
    row_labels
        A list or array of length N with the labels for the rows.
    col_labels
        A list or array of length M with the labels for the columns.
    ax
        A `matplotlib.axes.Axes` instance to which the heatmap is plotted.
        If not provided, use current axes or create a new one. Optional.
    cbar_kw
        A dictionary with arguments to `matplotlib.Figure.colorbar`. Optional.
    cbarlabel
        The label for the colorbar. Optional.
    **kwargs
        All other arguments are forwarded to `imshow`.
    """

    if not ax:
        ax = plt.gca()

    # Plot the heatmap
    im = ax.imshow(data, **kwargs)

    # Create colorbar
    cbar = ax.figure.colorbar(im, ax=ax, **cbar_kw)
    cbar.ax.set_ylabel(cbarlabel, rotation=-90, va="bottom")

    # We want to show all ticks...
    ax.set_xticks(np.arange(data.shape[1]))
    ax.set_yticks(np.arange(data.shape[0]))
    # ... and label them with the respective list entries.
    ax.set_xticklabels(col_labels)
    ax.set_yticklabels(row_labels)

    # Let the horizontal axes labeling appear on top.
    ax.tick_params(top=True, bottom=False, labeltop=True, labelbottom=False)

    # Rotate the tick labels and set their alignment.
    plt.setp(ax.get_xticklabels(), rotation=-30, ha="right", rotation_mode="anchor")

    # Turn spines off and create white grid.
    for edge, spine in ax.spines.items():
        spine.set_visible(False)

    ax.set_xticks(np.arange(data.shape[1]+1)-.5, minor=True)
    ax.set_yticks(np.arange(data.shape[0]+1)-.5, minor=True)
    ax.grid(which="minor", color="w", linestyle='-', linewidth=3)
    ax.tick_params(which="minor", bottom=False, left=False)

    return im, cbar


def annotate_heatmap(im, data=None, valfmt="{x:.2f}", textcolors=["black", "white"], threshold=None, **textkw):
    """
    A function to annotate a heatmap.

    Parameters
    ----------
    im
        The AxesImage to be labeled.
    data
        Data used to annotate. If None, the image's data is used. Optional.
    valfmt
        The format of the annotations inside the heatmap. This should either
        use the string format method, e.g. "$ {x:.2f}", or be a
        `matplotlib.ticker.Formatter`. Optional.
    textcolors
        A list or array of two color specifications. The first is used for
        values below a threshold, the second for those above. Optional.
    threshold
        Value in data units according to which the colors from textcolors are
        applied. If None (the default) uses the middle of the colormap as
        separation. Optional.
    **kwargs
        All other arguments are forwarded to each call to `text` used to create
        the text labels.
    """

    if not isinstance(data, (list, np.ndarray)):
        data = im.get_array()

    # Normalize the threshold to the images color range.
    if threshold is not None:
        threshold = im.norm(threshold)
    else:
        threshold = im.norm(data.max())/2.

    # Set default alignment to center, but allow it to be
    # overwritten by textkw.
    kw = dict(horizontalalignment="center", verticalalignment="center")
    kw.update(textkw)

    # Get the formatter in case a string is supplied
    if isinstance(valfmt, str):
        valfmt = matplotlib.ticker.StrMethodFormatter(valfmt)

    # Loop over the data and create a `Text` for each "pixel".
    # Change the text's color depending on the data.
    texts = []
    for i in range(data.shape[0]):
        for j in range(data.shape[1]):
            kw.update(color=textcolors[int(im.norm(data[i, j]) > threshold)])
            text = im.axes.text(j, i, valfmt(data[i, j], None), **kw)
            texts.append(text)

    return texts


df3 = (
    daily_life[daily_life["question_code"]=="g5"]
    .astype({"answer": "int"})
    .groupby(["country","subset"])
    .agg(prom_perc_subset=("percentage", "mean"))
    .reset_index()
    .pivot_table(index="country",columns="subset",values="prom_perc_subset")
    .fillna(0)
)
df3.head()
```

Finally, the ingredients for the heat map are:

```
countries = df3.index.tolist()
subsets = df3.columns.tolist()
answers = df3.values
```

The heat map must be built as follows:

* Figure size: 15 x 20
* cmap = "YlGn"
* cbarlabel = "Porcentaje promedio (%)"
* Precision of the annotations: float with two decimals.

```
fig, ax = plt.subplots(figsize=(15,20))

im, cbar = heatmap(answers,countries,subsets,ax=ax,cmap="YlGn",cbarlabel="Porcentaje promedio (%)")
texts = annotate_heatmap(im, valfmt="{x:.2f}")

fig.tight_layout()
plt.show()
```
<table class="ee-notebook-buttons" align="left"> <td><a target="_blank" href="https://github.com/giswqs/earthengine-py-documentation/tree/master/ImageCollection/06_compositing_and_mosaicking.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td> <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-documentation/blob/master/ImageCollection/06_compositing_and_mosaicking.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td> <td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-documentation/master?filepath=ImageCollection/06_compositing_and_mosaicking.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-documentation/blob/master/ImageCollection/06_compositing_and_mosaicking.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td> </table> ## Install Earth Engine API Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically install its dependencies, including earthengine-api and folium. ``` import subprocess try: import geehydro except ImportError: print('geehydro package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro']) # Import libraries import ee import folium import geehydro # Authenticate and initialize Earth Engine API try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ``` ## Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`. ``` Map = folium.Map(location=[40, -100], zoom_start=4) Map.setOptions('HYBRID') ``` ## Add Earth Engine Python script ``` # Add Earth Engine dataset image = ee.Image('USGS/SRTMGL1_003') # Set visualization parameters. vis_params = { 'min': 0, 'max': 4000, 'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']} # Print the elevation of Mount Everest. xy = ee.Geometry.Point([86.9250, 27.9881]) elev = image.sample(xy, 30).first().get('elevation').getInfo() print('Mount Everest elevation (m):', elev) # Add Earth Eninge layers to Map Map.addLayer(image, vis_params, 'DEM') Map.addLayer(xy, {'color': 'red'}, 'Mount Everest') # Center the map based on an Earth Eninge object or coordinates (longitude, latitude) # Map.centerObject(xy, 4) Map.setCenter(86.9250, 27.9881, 4) ``` ## Display Earth Engine data layers ``` Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True) Map ```
# PyQGIS: Expanding QGIS's functionality with Python.

# Day 2 – Processing and plugins

Yesterday, you learned the basics of PyQGIS: how to run code from the embedded Python console, how to use the central classes (such as QgsVectorLayer) and run operations on layer geometries and attributes. However, the strength of QGIS is its vast library of geospatial algorithms (both native and those added by plugins, such as GRASS) coupled with general data crunching tools.

In these practicals, you will learn how to run processing algorithms via PyQGIS, even chaining multiple operations together and running batch processes on multiple layers. You will create automated processes using the graphical Model Builder and expand on the models with Python. The point of these exercises is to learn how to use Python in QGIS to create reproducible and easily shareable tools for automated spatial analysis.

The final section is a brief look at QGIS plugin development – what are plugins, when to develop them, what are the basic parts of plugins and the underlying tools, like the QT framework. There's also a challenge related to modifying and adding to a simple graphical plugin.

### Preparations

Open QGIS and load in all the layers from *practical_data.gpkg*. Do this via the GUI or, if you want to do the same programmatically, use the script below. Adapted [from the PyQGIS Cookbook](https://docs.qgis.org/latest/en/docs/pyqgis_developer_cookbook/cheat_sheet.html#layers).

```
# EXAMPLE PATH: define the actual path on your system
gpkg_path = "C:/Users/tatu/pyqgis_practical/data/practical_data.gpkg" # windows

gpkg_layer = QgsVectorLayer(gpkg_path, "whole_gpkg", "ogr")

# returns a list of strings describing the sublayers
sub_strings = gpkg_layer.dataProvider().subLayers()
# EXAMPLE: 1!!::!!Paavo!!::!!3027!!::!!MultiPolygon!!::!!geom!!::!!
# !!::!! separates the values

for sub_string in sub_strings:
    layer_name = sub_string.split(gpkg_layer.dataProvider().sublayerSeparator())[1]
    uri = "{0}|layername={1}".format(gpkg_path, layer_name)
    # Create layer
    sub_vlayer = QgsVectorLayer(uri, layer_name, 'ogr')
    # Add layer to map
    if sub_vlayer.isValid():
        QgsProject.instance().addMapLayer(sub_vlayer)
    else:
        print("Can't add layer", layer_name)
```

Next, make sure you have the processing toolbox open in the GUI. If not, open it from the drop down menu _Processing > Toolbox_. If the toolbox is unavailable for some reason, [follow these instructions](https://docs.qgis.org/3.22/en/docs/training_manual/processing/set_up.html) to remedy.

### Data description

As a reminder, below are brief descriptions of the layers and relevant fields used in these practicals. The original datasets are available through Statistics Finland: here, the field names have been translated to English.
| Layer | Layer description | Fields | Field description | Source |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| **Helsinki_region_pop_squares** | Capital region population (2020) in 1 km x 1 km squares | | | [Statistics Finland](https://www.paikkatietohakemisto.fi/geonetwork/srv/eng/catalog.search#/metadata/a901d40a-8a6b-4678-814c-79d2e2ab130c), CC-BY 4.0 |
| | | pop | Population in 2020 | |
| | | men / women | Number of men or women | |
| | | ages x_y | Number of people in age group | |
| **NUTS2_FIN_POP** | Population (2020) in NUTS2 regions (suuralueet) | | | [Statistics Finland](https://www.paikkatietohakemisto.fi/geonetwork/srv/eng/catalog.search#/metadata/d136a588-7fc6-4865-8d7c-b50d1398e728), CC-BY 4.0 |
| | | nimi / name | Name of the region | |
| | | pop | Population in 2020 | |
| | | total_pop_perc | This region's share of total Finnish population | |
| | | male/female_perc | Share of men or women in this region | |
| **Paavo** | Socio-economic statistics on postal regions | | | [Statistics Finland](https://www.stat.fi/tup/paavo/index_en.html), CC-BY 4.0 |
| | | Too many to describe | [See here for full description](https://www.stat.fi/static/media/uploads/tup/paavo/paavo_kuvaus_en.pdf) | |

### Table of contents:

[**1. Processing algorithms**](#1.-Processing-algorithms)

[1.1. Basics of processing algorithms](#1.1.-Basics-of-processing-algorithms)

[Task 1](#TASK-1)

[1.2. Running processing algorithms with PyQGIS](#1.2.-Running-processing-algorithms-with-PyQGIS)

[1.2.1. Batch processing](#1.2.1.-Batch-processing)

[1.2.2. Chaining algorithms](#1.2.2.-Chaining-algorithms)

[Task 2](#TASK-2)

[1.3. Recreating simplify_and_trim as a processing script](#1.3.-Recreating-simplify_and_trim-as-a-processing-script)

[1.3.1 Graphical processing models](#1.3.1-Graphical-processing-models)

[1.3.2. Building the script base as a graphical model](#1.3.2.-Building-the-script-base-as-a-graphical-model)

[1.3.3. Understanding processing scripts](#1.3.3.-Understanding-processing-scripts)

[1.3.4. Creating a new processing script](#1.3.4.-Creating-a-new-processing-script)

[TASK 3: Finalizing the model](#TASK-3:-Finalizing-the-model)

[1.4. Wrapping up processing](#1.4.-Wrapping-up-processing)

[**Challenge X: Processing algorithm with the @alg decorator**](#Challenge-X:-Processing-algorithm-with-the-@alg-decorator)

[**2. A look at plugin development**](#2.-A-look-at-plugin-development)

[2.1. Plugins: what are they good for?](#2.1.-Plugins:-what-are-they-good-for?)

[2.2. QT framework: signals and slots](#2.2.-QT-framework:-signals-and-slots)

[2.3. Elements of a QGIS plugin](#2.3.-Elements-of-a-QGIS-plugin)

[**Challenge Y: Modifying a plugin**](#Challenge-Y:-Modifying-a-plugin)

## 1. Processing algorithms

## 1.1. Basics of processing algorithms

The processing framework is a collection of helpful functions for topics ranging from network analysis to raster terrain analysis. Both native and custom algorithms can be called programmatically, but doing so requires us to know a few things, namely the algorithm name and its parameters. The simplest way to access these is by letting QGIS create the initial code for us.

### TASK 1

- Using the GUI, run the _Simplify_ algorithm, under _Vector geometry_. The easiest way to find it is to use the search bar.
- Change _Tolerance_ to 5000, otherwise keep the default settings.
- Run the algorithm. It should add a new memory layer with simplified geometry to the project.
![Simplify algorithm in gui](images/simplify_algorithm_settings.JPG) After running the algorithm, click on the _Processing_ drop-down menu and select _History_. Alternatively, _History_ can be opened through this button on the processing toolbox ![History button](images/history_button.JPG) The opened window contains a list of previous processing runs (under ALGORITHM). Select the latest one. An algorithm call is displayed on the text box below the list and it looks something like this (formatted for clarity): ``` processing.run("native:simplifygeometries", {'INPUT':'C:/Users/tatu/pyqgis_practical/data/practical_data.gpkg|layername=NUTS2_FIN_pop', 'METHOD':0, 'TOLERANCE':5000, 'OUTPUT':'TEMPORARY_OUTPUT'}) ``` What we have here is a call to the processing framework. Specifically, we name the algorithm and pass a dictionary of algorithm parameters (if you need a refresher of Python dicts, [check this out](https://www.w3schools.com/python/python_dictionaries.asp)). Notice that the parameters are identical to the ones we defined graphically. Though you might wonder how one could know that _Distance: Douglas-Peucker_ maps to method 0 without first running the algorithm graphically. The processing framework has a function for describing algorithms: ``` >>> processing.algorithmHelp("native:simplifygeometries") Simplify (native:simplifygeometries) This algorithm simplifies the geometries in a line or polygon layer. --- METHOD: Simplification method Parameter type: QgsProcessingParameterEnum Available values: - 0: Distance (Douglas-Peucker) - 1: Snap to grid - 2: Area (Visvalingam) --- ``` There we have it. Use this function if you want to learn more about the algorithms and the parameters. However, to use the helper function, you need the algorithm's id string, which is different from its screen name (like _native:simplifygeometries_ vs. _Simplify_). Print all available algorithms [like this](https://docs.qgis.org/latest/en/docs/user_manual/processing/console.html#calling-algorithms-from-the-python-console). Because the hundreds of algorithms can be overwhelming, a simple filter is applied below: ``` >>> for alg in QgsApplication.processingRegistry().algorithms(): ... search_str = "simplify" ... if search_str in alg.displayName().lower(): ... print(alg.displayName(), "->", alg.id()) Simplify Network -> NetworkGT:Simplify Network Simplify -> native:simplifygeometries ``` The part before the colon (_NetworkGT_ and _native_) refers to the algorithm provider – in this case a plugin and the native processing library. ## 1.2. Running processing algorithms with PyQGIS If you ran the processing.run call before, you might've noticed that it simply returned a dictionary with a layer as the value – this is because an algorithm can have multiple outputs. The output objects are accessed by entering the key of the output: most often, that's 'OUTPUT'. ``` >>> input_layer_path = 'C:/Users/tatu/pyqgis_practical/data/practical_data.gpkg|layername=NUTS2_FIN_pop' >>> results = processing.run("native:simplifygeometries", ... {'INPUT':input_layer_path, ... 'METHOD':0, ... 'TOLERANCE':5000, ... 'OUTPUT':'TEMPORARY_OUTPUT'}) >>> print(results) {'OUTPUT': <QgsVectorLayer: 'Simplified' (memory)>} >>> simplified_layer = results['OUTPUT'] ``` The same can be achieved with one line of code by appending the output key to the end of the processing call. Like this: ``` >>> simplified_layer = processing.run("native:simplifygeometries", ... {'INPUT':input_layer_path, ... 'METHOD':0, ... 'TOLERANCE':5000, ... 
'OUTPUT':'TEMPORARY_OUTPUT'})['OUTPUT'] #<--- output fetched directly
```

Or, to skip the middle steps and load the layer directly into the current project, use _runAndLoadResults_:

```
>>> processing.runAndLoadResults("native:simplifygeometries",
        {'INPUT': input_layer_path,
         'METHOD':0,
         'TOLERANCE':5000,
         'OUTPUT':'TEMPORARY_OUTPUT'})
```

### 1.2.1. Batch processing

Running processing algorithms on multiple layers is straightforward with Python loops. The script below runs the simplification on all vector layers in a project:

```
proj_layers = QgsProject.instance().mapLayers()

for layer in proj_layers.values():
    # excluding other layer types
    if isinstance(layer, QgsVectorLayer):
        processing.runAndLoadResults("native:simplifygeometries",
            {'INPUT': layer,
             'METHOD':0,
             'TOLERANCE':5000,
             'OUTPUT':'TEMPORARY_OUTPUT'})
```

### 1.2.2. Chaining algorithms

The real power of these algorithms is unleashed when they're chained together to form an analysis pipeline. For example, you may remember how long and cumbersome the script we previously used to simplify the geometries and fields of our input layer was. With the processing framework, we can offload the heavy lifting to two processing algorithms (_Simplify_ and _Drop fields_). Since we're _dropping_ fields instead of _keeping_ them, we need to do a bit of Python magic first.

P.S. If we were using QGIS 3.18 or newer, we could use [_Retain fields_](https://qgis.org/en/site/forusers/visualchangelog318/#feature-add-retain-fields-algorithm). It's not available in the current LTR.

```
# defining input parameters
# path to the NUTS2 layer
input_layer_path = 'C:/Users/tatu/pyqgis_practical/data/practical_data.gpkg|layername=NUTS2_FIN_pop'
input_layer = QgsVectorLayer(input_layer_path, "input_layer", "ogr")
tolerance = 5000
# list of field names to keep
fields_to_keep = ['name', 'pop']

# get all fields
all_fields = input_layer.fields().names()

# BASICALLY: create a list containing all fields except those that are in the "keep" list
drop_fields = [field for field in all_fields if field not in fields_to_keep]

simplified_layer = processing.run("native:simplifygeometries",
    {'INPUT':input_layer,
     'METHOD':0,
     'TOLERANCE':tolerance,
     'OUTPUT':'TEMPORARY_OUTPUT'})['OUTPUT'] # NOTICE THAT THE LAYER IS IMMEDIATELY FETCHED FROM THE DICT

processing.runAndLoadResults("qgis:deletecolumn",
    {'INPUT':simplified_layer,
     'COLUMN':drop_fields,
     'OUTPUT':'TEMPORARY_OUTPUT'})
```

Notice that with _runAndLoadResults_, the layer will be named automatically according to the algorithm definitions (like _Remaining fields_).

### TASK 2

- Modify the script above by adding one more algorithm to the pipeline, namely _Add geometry attributes_.
- Find out the algorithm id of _Add geometry attributes_.
- Create the parameter dictionary as needed for the algorithm (HINT: if you're lost, run the algorithm through the GUI first).
- Add the algorithm call to the end of the script and modify the previous algorithm call accordingly.

## 1.3. Recreating simplify_and_trim as a processing script

Processing scripts like the ones above are already quite neat, but they do have some weaknesses. Using them requires some programming understanding, which hinders their usability if you want to share your tools with others. It's much more user friendly to select the input parameters graphically, as in the processing toolbox algorithms. We will create such a processing tool. One approach would be to write it in code from scratch, like the existing tools are.
However, in the interest of time, we will create the base of the script using the graphical _Model Designer_ in QGIS.

### 1.3.1 Graphical processing models

The Model Designer enables defining inputs and chaining processing algorithms graphically. The picture below shows an example pipeline for creating a population heatmap clipped to sea boundaries. Subpicture (1) shows the pipeline running through centroid creation, KDE, and clipping with a mask layer. The yellow rectangles are inputs given by the user. These are shown as options when running the script, as seen in subpicture (2). The process outputs a raster (subpicture (3)). If you want to play around with this model, find it in *scripts > heatmap_from_pop_grid.model3* in the practical materials.

![Graphic modeler example](images/graphic_model.png)

A great thing about the models is that they can be exported to Python code. We will use this function later on.

### 1.3.2. Building the script base as a graphical model

Now, let's once again recreate the layer simplification and trimming script, this time as an installable processing tool.

**Open the Model Designer window** from the left-most button under _Processing Toolbox > Create new Model_.

![Opening graphical model](images/opening_graphical_modeler.JPG)

A mostly empty window opens. The empty area in the middle is where we'll start building our script. On the left, there's a selection of inputs (1) and algorithms (2), which can be dragged to the builder. Other important functions are naming the model (3), exporting to a Python script (4) and running the model (5).

![Model builder introduction](images/model_builder_intro.png)

Start by dragging and dropping inputs. Remember which ones we defined previously, namely a vector layer and a list of fields:

```
input_layer_path = 'C:/Users/tatu/pyqgis_practical/data/practical_data.gpkg|layername=NUTS2_FIN_pop'
fields_to_keep = ['name', 'pop']
```

To match these, first drag in a _Vector layer_ input, then a _Vector field_ input. Below are the parameter definitions for both. Make sure to toggle _Accept multiple fields_ for the field input:

![Vector layer and field model inputs](images/model_builder_vector_input.png)

Next, click the Algorithms tab to make it active. The whole algorithm toolbox and a search function are available. Search for _Drop fields_ and drag it to the model. In the properties window that opens, change the input type to _Model input_ for both _Input layer_ and _Fields to drop_. Also write _Kept fields_ in the output box:

![Setting drop field algorithm properties](images/drop_field_parameters.png)

Now you should have something like this:

![Opening script template](images/drop_fields_model.JPG)

This is a fully functional model that gives the user an option to select a layer and fields in it, and then drops the selected fields. Hmm, but we want to keep the fields. To implement that, we need to apply the custom Python logic created before and create a new tool.

Press the button with the Python logo (_Export as script algorithm_). This opens a Script Editor window where the model is expressed as Python code. Let's first briefly explore the code to understand processing scripts.

### 1.3.3. Understanding processing scripts

The first thing you might notice are the imports. The Python console imports qgis.core automatically, but the same is not true of processing scripts. Following good coding practices, only the necessary methods are imported.
```
from qgis.core import QgsProcessing
from qgis.core import QgsProcessingAlgorithm
from qgis.core import QgsProcessingMultiStepFeedback
from qgis.core import QgsProcessingParameterVectorLayer
from qgis.core import QgsProcessingParameterField
from qgis.core import QgsProcessingParameterFeatureSink
import processing
```

Next, the script defines a **class** that inherits from the general [QgsProcessingAlgorithm](https://qgis.org/pyqgis/latest/core/QgsProcessingAlgorithm.html).

```
class Model(QgsProcessingAlgorithm):
```

Under this class, there are methods that both define metadata, such as the algorithm identifier and display name, and run the actual processing. For the interested, [here's a thorough description of all the mandatory methods](https://docs.qgis.org/latest/en/docs/user_manual/processing/scripts.html#extending-qgsprocessingalgorithm).

The script is currently generically named "model". **Change the name to _Keep fields_** under _name_, _displayName_, _createInstance_ and the class definition, as shown below:

![Script name changes](images/keep_fields_script_names.png)

The workhorse methods are _initAlgorithm_ and _processAlgorithm_. In the initiation step, the input **and output** [parameters](https://qgis.org/pyqgis/latest/core/QgsProcessingParameters.html) are defined. Notice that the settings are the same ones we defined graphically. By the way, the order in which the parameters are written here defines the order they're shown in the tool dialog. Therefore, if you want the user to first select the layer and then the fields, insert QgsProcessingParameterVectorLayer first.

```
def initAlgorithm(self, config=None):
    self.addParameter(QgsProcessingParameterVectorLayer('Inputlayer', 'Input layer', defaultValue=None))
    self.addParameter(QgsProcessingParameterField('Fieldstokeep', 'Fields to keep', type=QgsProcessingParameterField.Any, parentLayerParameterName='Inputlayer', allowMultiple=True, defaultValue=None))
    self.addParameter(QgsProcessingParameterFeatureSink('KeptFields', 'Kept fields', type=QgsProcessing.TypeVectorAnyGeometry, createByDefault=True, supportsAppend=True, defaultValue=None))
```

Currently, the processing step simply defines the parameters for the Drop fields algorithm, runs it and returns a result dictionary.

```
def processAlgorithm(self, parameters, context, model_feedback):
    # Use a multi-step feedback, so that individual child algorithm progress reports are adjusted for the
    # overall progress through the model
    feedback = QgsProcessingMultiStepFeedback(1, model_feedback)
    results = {}
    outputs = {}

    # Drop field(s)
    alg_params = {
        'COLUMN': parameters['Fieldstokeep'],
        'INPUT': parameters['Inputlayer'],
        'OUTPUT': parameters['KeptFields']
    }
    outputs['DropFields'] = processing.run('qgis:deletecolumn', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
    results['KeptFields'] = outputs['DropFields']['OUTPUT']
    return results
```

### 1.3.4. Creating a new processing script

Let's start inserting our own code. First, we need the vector layer and a list of fields to keep. The pre-made code calls the parameter dictionary, like:

```
'INPUT': parameters['Inputlayer']
```

But this returns a _string_ by default, not the objects we need.
QgsProcessingAlgorithm has [methods to return the actual objects](https://qgis.org/pyqgis/latest/core/QgsProcessingAlgorithm.html#qgis.core.QgsProcessingAlgorithm.parameterAsVectorLayer):

```
input_layer = self.parameterAsVectorLayer(parameters, "Inputlayer", context)
fields_to_keep = self.parameterAsFields(parameters, 'Fieldstokeep', context)
```

Next, insert the field selection code used previously:

```
all_fields = input_layer.fields().names()
drop_fields = [field for field in all_fields if field not in fields_to_keep]
```

Now that drop_fields contains the fields we want to delete, change the COLUMN parameter to drop_fields:

```
alg_params = {
    'COLUMN': drop_fields,
    'INPUT': parameters['Inputlayer'],
    'OUTPUT': parameters['KeptFields']
}
```

All in all, the script looks like this:

```
from qgis.core import QgsProcessing
from qgis.core import QgsProcessingAlgorithm
from qgis.core import QgsProcessingMultiStepFeedback
from qgis.core import QgsProcessingParameterVectorLayer
from qgis.core import QgsProcessingParameterField
from qgis.core import QgsProcessingParameterFeatureSink
import processing


class KeepFields(QgsProcessingAlgorithm):

    def initAlgorithm(self, config=None):
        self.addParameter(QgsProcessingParameterVectorLayer('Inputlayer', 'Input layer', defaultValue=None))
        self.addParameter(QgsProcessingParameterField('Fieldstokeep', 'Fields to keep', type=QgsProcessingParameterField.Any, parentLayerParameterName='Inputlayer', allowMultiple=True, defaultValue=None))
        self.addParameter(QgsProcessingParameterFeatureSink('KeptFields', 'Kept fields', type=QgsProcessing.TypeVectorAnyGeometry, createByDefault=True, supportsAppend=True, defaultValue=None))

    def processAlgorithm(self, parameters, context, model_feedback):
        # Use a multi-step feedback, so that individual child algorithm progress reports are adjusted for the
        # overall progress through the model
        feedback = QgsProcessingMultiStepFeedback(1, model_feedback)
        results = {}
        outputs = {}

        input_layer = self.parameterAsVectorLayer(parameters, "Inputlayer", context)
        fields_to_keep = self.parameterAsFields(parameters, 'Fieldstokeep', context)

        all_fields = input_layer.fields().names()
        drop_fields = [field for field in all_fields if field not in fields_to_keep]

        # Drop field(s)
        alg_params = {
            'COLUMN': drop_fields,
            'INPUT': parameters['Inputlayer'],
            'OUTPUT': parameters['KeptFields']
        }
        outputs['DropFields'] = processing.run('qgis:deletecolumn', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
        results['KeptFields'] = outputs['DropFields']['OUTPUT']
        return results

    def name(self):
        return 'keepFields'

    def displayName(self):
        return 'Keep fields'

    def group(self):
        return ''

    def groupId(self):
        return ''

    def createInstance(self):
        return KeepFields()
```

**Save your script** locally, for example as keep_fields.py. After saving, close the script window. From the Processing Toolbox's Python drop-down, **select _Add script to toolbox_ and add the script file**. This adds _Keep fields_ to the toolbox under _Scripts_. You may run the tool to test that it indeed works – either through the GUI or from the Python console (see the sketch after the task list below).

![Adding script to toolbox](images/add_script_to_toolbox.JPG)

### TASK 3: Finalizing the model

- Re-create the simplify-and-trim algorithm with the graphical modeler using _Simplify_ and _Keep fields_. It should look something like the pic below.
- NOTE! Use _Algorithm output_ as the input layer for the second algorithm.
- Simplification tolerance is simply a _Number_ input.
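If you want to test the new script from the Python console, the snippet below is a minimal sketch (not part of the original exercise) of the same simplify-and-trim chain, now using the installed _Keep fields_ script. It assumes the script registers under the `script` provider with the `keepFields` name defined above – if the call fails, check the exact id by filtering `QgsApplication.processingRegistry().algorithms()` as shown earlier.

```
# Sketch: chaining the native Simplify algorithm with the custom Keep fields script.
# ASSUMPTION: the script is registered as "script:keepFields"; verify the id in your toolbox.
input_layer_path = 'C:/Users/tatu/pyqgis_practical/data/practical_data.gpkg|layername=NUTS2_FIN_pop'

# simplify the geometries first and fetch the temporary output layer
simplified = processing.run("native:simplifygeometries",
    {'INPUT': input_layer_path,
     'METHOD': 0,
     'TOLERANCE': 5000,
     'OUTPUT': 'TEMPORARY_OUTPUT'})['OUTPUT']

# then keep only the wanted fields with the script added to the toolbox
processing.runAndLoadResults("script:keepFields",
    {'Inputlayer': simplified,
     'Fieldstokeep': ['name', 'pop'],
     'KeptFields': 'TEMPORARY_OUTPUT'})
```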
![Final model](images/simplify_and_trim_final.png)

You may save the finalized model and add it to the toolbox to use it anytime and in other processing scripts! Do this by clicking the save button.

## 1.4. Wrapping up processing

The take-home messages of this session were:

- Processing algorithms can be run, chained and expanded upon with PyQGIS to create efficient automated GIS processes.
- New processing algorithms can be created by using the model builder – these models can be expanded with Python.

While this tutorial used the graphical modeler to create a basis for the scripts, new processing scripts can be created purely in code as well. To do this, see the upper row of tools in the _Processing Toolbox_, click on the Python drop-down menu and select _Create new script from template_. [See the user manual for a tutorial](https://docs.qgis.org/latest/en/docs/user_manual/processing/console.html#creating-scripts-and-running-them-from-the-toolbox). Another example is given in Challenge X.

![Opening script template](images/opening_script_template.JPG)

## Challenge X: Processing algorithm with the @alg decorator

There's a way to create processing scripts that takes away a lot of the empty "boilerplate" code. It uses a Python trick called [decorators](https://www.programiz.com/python-programming/decorator): the inputs and outputs are defined by calls to @alg. See [here for a full explanation and an example](https://docs.qgis.org/3.22/en/docs/user_manual/processing/scripts.html#the-alg-decorator).

The code below defines an algorithm that:

- Gets a polygon input and a value field, such as population, from the same layer
- Creates as many points inside each feature as that feature's value in the value field indicates
- Scales the number of points by a scaling factor given as input (e.g. the population of a region is 1500 and the scaling factor is 10 --> 1500 / 10 = 150 points). It does this by creating a [_QgsProperty_](https://qgis.org/pyqgis/3.16/core/QgsProperty.html) from an expression.
- Outputs the point layer _and_ a count of how many features there are in the input layer

Such an algorithm could, for example, be used to visualize the population in an area.

### CHALLENGE X TASKS

1. There's an unused output called SUM_OF_FIELD. Use aggregation (check the first day for a refresher) to get the sum of values in the given value field: append the value to the results dictionary.
2. Add another algorithm of your choice after the points generation and use the points as inputs. The algorithm could for example be _Buffer_ or _Rasterize_. Return the outputs of both processes. [See here for an example](https://docs.qgis.org/3.22/en/docs/user_manual/processing/scripts.html#the-alg-decorator).
```
from qgis import processing
from qgis.processing import alg
from qgis.core import QgsProject, QgsProperty

# INPUTS ARE DEFINED HERE
@alg(name='polygonToAggregatedPoints', label='Polygons to aggregated points',
     group='examplescripts', group_label='Example scripts')
# 'INPUT' is the recommended name for the main input parameter
@alg.input(type=alg.SOURCE, name='INPUT', label='Input vector layer')
# 'OUTPUT' is the recommended name for the main output parameter
@alg.input(type=alg.VECTOR_LAYER_DEST, name='OUTPUT', label='Point output')
@alg.input(type=alg.FIELD, name='VALUE_FIELD', parentLayerParameterName="INPUT",
           label='Value field to scale on')
@alg.input(type=alg.NUMBER, name='SCALE_VALUE', label='Scale value', default=10)
@alg.output(type=alg.NUMBER, name='NUMBER_OF_FEATURES',
            label='Number of features processed')
#@alg.output(type=alg.NUMBER, name='SUM_OF_FIELD',
#            label='Sum of value field')
def bufferrasteralg(self, parameters, context, feedback, inputs):
    """
    Description of the algorithm.
    (If there is no comment here, you will get an error)
    """
    input_layer = self.parameterAsVectorLayer(parameters, 'INPUT', context)
    value_field = self.parameterAsString(parameters, 'VALUE_FIELD', context)
    numfeatures = input_layer.featureCount()
    scale_value = self.parameterAsDouble(parameters, 'SCALE_VALUE', context)

    points_expression = QgsProperty.fromExpression('round( "{0}" / {1} )'.format(value_field, str(scale_value)))

    if feedback.isCanceled():
        return {}

    points_generation = processing.run("native:randompointsinpolygons",
                                       {'INPUT':parameters['INPUT'],
                                        'POINTS_NUMBER': points_expression,
                                        'MIN_DISTANCE':0,
                                        'OUTPUT': parameters['OUTPUT']},
                                       context=context, feedback=feedback, is_child_algorithm=True)

    if feedback.isCanceled():
        return {}

    results = {'OUTPUT': points_generation['OUTPUT'],
               'NUMBER_OF_FEATURES': numfeatures}
    return results
```

# 2. A look at plugin development

This section serves as a brief introduction to plugins in QGIS. First, there's a discussion, through a few examples, of why plugins are made. Then the basic building blocks of a plugin are introduced. Throughout, there are links to various resources where you can read more on specific aspects of plugin development, if you're interested in engaging in it yourself. The section is followed by a challenge where you may modify and examine a simple pre-made plugin.

## 2.1. Plugins: what are they good for?

QGIS's plugins include both [core plugins](https://docs.qgis.org/3.22/en/docs/user_manual/plugins/plugins.html#core-and-external-plugins), maintained by the program developers themselves, and external plugins by independent developers. So, plugins extend the base program in some way. But what does this mean in practice?

A lot of the time, plugins integrate an external service's functionality into QGIS: for example by allowing flexible download of rasters from [GeoCubes Finland](https://github.com/geoportti/GeoCubes-Finland-QGIS-Plugin) or by [adding resources](https://qgis-contribution.github.io/QGIS-ResourceSharing/authoring/what-to-share.html) like processing scripts, styles and SVG images directly to QGIS from online repositories. Both of these plugins sport a custom graphical user interface:

![Examples of graphical plugins](images/graphical_plugin_examples.png)

Sometimes plugins add a neat new functionality, like [adding a globe visualization](https://github.com/GispoCoding/GlobeBuilder) to the current project. If such functions are deemed useful enough, they might be integrated natively into the program over time.
A custom user interface isn't a necessity for plugins. The [OpenLayers plugin](https://github.com/sourcepole/qgis-openlayers-plugin) has its main functionality (adding background maps) in a drop-down menu. [QNEAT3](https://root676.github.io/), a network analysis library, is a processing plugin: it adds a new provider and a bunch of scripts. The description of QNEAT3 mentions that some scripts require the matplotlib Python plotting library: since QGIS includes a Python installation (on Windows), external libraries can easily be pip installed.

As the examples above show, QGIS plugins can extend the base program in many forms. However, these materials focus on ones with custom GUIs.

When is it beneficial to engage in plugin development? Plugin development can be time-consuming and not worthwhile if you simply want to share a quick and dirty script with a co-worker. Models and processing scripts, too, are simpler than full GUI plugins, since they come with functionality to easily e.g. ask the user for input and output paths. Judging from the examples above, creating a GUI plugin might pay off when you have a service you want to bring to more people, or when you believe you can add a needed feature that isn't possible with the existing tools.

## 2.2. QT framework: signals and slots

QGIS is built on a development framework called [QT](https://www.qt.io/). Basically all the user interface elements (buttons, tables, windows etc.) are based on QT objects.

A central QT concept is signals and slots. These two are used to connect what the user does to actions in the program: a signal sends the information that something has been done, and a slot is the reaction. Signals can carry information related to the action made by the user or the state of the system, and then pass these to the slots, which are often Python functions. For a thorough discussion of these concepts, see [Nils Nolde's tutorial](https://gis-ops.com/qgis-3-plugin-tutorial-pyqt-signal-slot-explained/).

As an example of the signal/slot system in action, pressing _Zoom to layer_ ![Zoom to layer icon](images/zoom_to_layer.JPG) sends a signal that the button has been clicked, and QGIS reacts, presumably by calling ***iface.zoomToActiveLayer()*** behind the scenes. Developers of plugins with GUIs must similarly think in terms of what each element in their interface does and how the user can interact with it.

## 2.3. Elements of a QGIS plugin

QGIS plugins written in Python (which pretty much all _external_ plugins are) are stored in a directory, which should have a few required and a few recommended files. This list is from [The PyQGIS developer cookbook](https://docs.qgis.org/latest/en/docs/pyqgis_developer_cookbook/plugins/plugins.html#plugin-files):

```
PYTHON_PLUGINS_PATH/
  MyPlugin/
    __init__.py    --> *Initialization code: required*
    metadata.txt   --> *Info about the plugin, its author etc.: required*
    mainPlugin.py  --> *Actual plugin code: can have multiple .py files*
    resources.qrc  --> *Resources, like paths to image files: likely useful*
    resources.py   --> *Python compiled resources, likely useful*
    form.ui        --> *Graphical user interface in XML format: likely useful*
    form.py        --> *Python compiled version of GUI, likely useful*
```

_resources.qrc_ and _form.ui_ are both related to QT: the first is a list of paths that tells QT where things like icon image files are found, and the second gives instructions on how to lay out the UI. Both of these must be compiled, which encodes e.g. the image paths and images as binary data in .py files.
See [Ujaval Gandhi's tutorial, section 9, for more info on compiling the files](http://www.qgistutorials.com/it/docs/3/building_a_python_plugin.html#procedure).

What about the plugin path? That's the path from which QGIS loads the plugins when the program is opened. The folder is found under the currently active profile's path. A shortcut to this path is available from the QGIS interface: *Settings drop-down > User profiles > Open active profile folder*. From this folder, navigate to *Python > Plugins*. Here you can find all your installed plugins. Check out a few of them! You may even open their main plugin .py files to see how other programmers have created their plugins. The plugin path may for example look like this:

![Example plugins folder](images/plugins_folder.png)

## Challenge Y: Modifying a plugin

In the course materials folder under _data_, there's another [(zipped) folder called *cool_plugin*](https://github.com/csc-training/pyqgis/blob/main/data/cool_plugin.zip). Download the zipped folder and unzip it in your project folder. Within the folder are files for a functional, albeit simple, plugin. If you're interested in creating such a plugin from scratch, [here's an earlier tutorial](https://autogis-site.readthedocs.io/en/latest/lessons/PyQGIS/pyqgis.html#developing-the-plugin) by the author.

#### Challenge preparations

First copy the whole folder and paste it into the active profile's _plugins_ folder (see above on how to locate this folder). Then save your current project and restart QGIS. This should load the Cool Plugin and add it under the _Plugins > Cool plugin_ drop-down menu and as a default icon in the toolbar ![Default icon](images/default_plugin_icon.JPG)

Running the plugin opens this GUI:

![Cool plugin interface](images/cool_plugin_interface.JPG)

Try pressing the magic button! Yeah, doesn't do much. Your task is to add more functionality to the plugin. Start by modifying the code in *plugins > Cool_plugin > cool_plugin.py*. You can do the editing in any text editor – even plain Notepad on Windows will do.

The functions we're interested in are at the bottom of the script. There's _coolFunction_ and _run_. In _run_, the Magic Button is connected to the cool function – in other words, _coolFunction_ is defined as the slot for the button's clicked signal:

```
self.dlg.magicButton.clicked.connect(self.coolFunction)
```

Translated to human speak, this means:

```
Every time this button is pressed, run coolFunction
```

coolFunction simply throws a QT [messagebox](https://doc.qt.io/qt-5/qmessagebox.html) (the third parameter is the actual message).

```
def coolFunction(self):
    """Does something cool (but currently just throws a messagebox)"""
    QMessageBox.information(self.dlg, "Cool message", "You just clicked a button")
```

FINALLY: install a plugin called **Plugin reloader** from _Plugins > Manage and install plugins_. This allows reloading changes made to the code without restarting QGIS every time. Once Plugin reloader is installed, configure its settings so that it reloads _Cool plugin_.

![Cool plugin configuration](images/plugin_reloader_settings.JPG)

### CHALLENGE Y TASKS

1. Instead of giving out a static message, load in the currently active layer and write the layer's name in the messagebox. If there are no layers in the project, write "No layers available!".
   - Hint: A reference to the interface is saved as `self.iface`.
   - What type of object does `activeLayer()` return when there are no layers?
2. Add a new GUI object by modifying *cool_plugin_dialog_base.ui* in _QT Designer_.
   QT Designer is a graphical UI design program, which should be installed alongside QGIS. [See these instructions on adding GUI elements](https://autogis-site.readthedocs.io/en/latest/lessons/PyQGIS/pyqgis.html#adding-ui-elements).
   - Add another `Push button` with the text "Simplify".
   - Add a new function that runs geometry simplification on the active layer (if one is available).
   - Connect the button and the function.
   - Remember to import `processing`.
3. Add another GUI element. For example, add a new _QSpinBox_, which allows the user to input numerical values. Use the spinbox input as the tolerance for the simplification.
   - GUI elements can be accessed through `self.dlg.elementName`.
   - How do you access the current value of the spinbox? [Hint](https://doc.qt.io/qt-5/qspinbox.html#value-prop)

The final plugin can, for example, look like this:

![Cool plugin final view](images/cool_plugin_final.JPG)

## 3. PyQGIS resources

Below are some great resources for delving into PyQGIS:

### General resources

[PyQGIS Developer Cookbook](https://docs.qgis.org/latest/en/docs/pyqgis_developer_cookbook/index.html), by [QGIS contributors](https://docs.qgis.org/3.16/en/docs/user_manual/preamble/contributors.html)

[PyQGIS Documentation](https://qgis.org/pyqgis/latest/index.html), by [QGIS contributors](https://docs.qgis.org/3.16/en/docs/user_manual/preamble/contributors.html)

[Customizing QGIS with Python (course)](https://courses.spatialthoughts.com/pyqgis-in-a-day.html), by Ujaval Gandhi, shared under CC BY-NC 4.0

[PyQGIS 101: Introduction to QGIS Python programming for non-programmers](https://anitagraser.com/pyqgis-101-introduction-to-qgis-python-programming-for-non-programmers/), by Anita Graser

[The PyQGIS Programmer's Guide 3](http://locatepress.com/ppg3) (available for purchase), by Gary Sherman

### Plugin development

For starting out with plugin development, see these resources:

[Building a Python Plugin (QGIS3)](http://www.qgistutorials.com/it/docs/3/building_a_python_plugin.html), by Ujaval Gandhi

[QGIS 3 Plugin Tutorial – Plugin development reference guide](https://gis-ops.com/qgis-3-plugin-tutorial-plugin-development-reference-guide/), by Nils Nolde
![logo](https://docs.conan.io/en/latest/_static/conan_logo.png)

# Welcome to conan

Conan is a portable package manager, intended for C and C++ developers, but it is able to manage builds from source, dependencies, and precompiled binaries for any language.

For more information, check [conan.io](conan.io).

Install
=======

Conan can be installed on many operating systems. It has been extensively used and tested on Windows, Linux (different distros) and OSX, and is also actively used on FreeBSD and Solaris SunOS. There are also several additional operating systems on which it has been reported to work.

There are three ways to install conan:

1. The preferred and **strongly recommended way to install Conan** is from PyPI, the Python Package Index, using the ``pip`` command.
2. There are other available installers for different systems, which might come with a bundled python interpreter, so that you don't have to install python first. Please note that some of **these installers might have some limitations**, especially those created with pyinstaller (such as the Windows exe & Linux deb).
3. Running conan from sources.

Install with pip (recommended)
------------------------------

To install Conan using **pip**, you need a python 2.7 or 3.X distribution installed on your machine. Modern python distros come with pip pre-installed. However, if necessary you can install pip by following the instructions in the `pip docs`.

Install conan:

```
!pip install conan
```

Initial configuration
---------------------

Let's check if conan is correctly installed. In your console, run the following:

```
!conan
```

Getting started
===============

Let's start with an example using one of the most popular C++ libraries: [POCO](https://pocoproject.org/). For convenience purposes we'll use CMake. Keep in mind that Conan **works with any build system** and does not depend on CMake.

A Timer using POCO libraries
----------------------------

If your code is in a GitHub repository, you can simply clone the project instead of creating this folder, using the following command:

```
!git clone https://github.com/memsharded/example-poco-timer.git
```

Next, let's check the following source files inside this folder:

```
!pygmentize -g example-poco-timer/timer.cpp
```

Now let's read a conanfile.txt inside this folder with the following content:

```
!pygmentize -g example-poco-timer/conanfile.txt
```

In this example we will use CMake to build the project, which is why the ``cmake`` generator is specified. This generator will create a *conanbuildinfo.cmake* file that defines CMake variables such as include paths and library names, which can be used in our build. If you are not a CMake user, change the ``[generators]`` section of your *conanfile.txt* to ``gcc`` or the more generic ``txt`` to handle requirements with any build system.

Just include the generated file and use these variables inside our *CMakeLists.txt*:

```
!pygmentize -g example-poco-timer/CMakeLists.txt
```

Installing dependencies
-----------------------

Then create a build folder, for temporary build files, and install the requirements (pointing to the parent directory, as it is where the *conanfile.txt* is):

```
!conan install example-poco-timer
```

This `conan install` command will download the binary package required for your configuration (detected the first time you ran the command), **together with other (transitively required by Poco) libraries, like OpenSSL and Zlib**.
It will also create the *conanbuildinfo.cmake* file in the current directory, in which you can see the cmake-defined variables, and a *conaninfo.txt* where information about settings, requirements and options is saved.

It is very important to understand the installation process. When the `conan install` command is issued, it will use some settings, specified on the command line or taken from the defaults in the ``<userhome>/.conan/profiles/default`` file.

![install_flow](https://docs.conan.io/en/latest/_images/install_flow.png)

For a command like `conan install . -s os="Linux" -s compiler="gcc"`, the steps are:

- Check if the package recipe (for the ``Poco/1.8.0.1@pocoproject/stable`` package) exists in the local cache. If we are just starting, the cache will be empty.
- Look for the package recipe in the defined remotes. Conan comes with the `conan-center` Bintray remote by default (you can change that).
- If the recipe exists, the Conan client will fetch and store it in your local cache.
- With the package recipe and the input settings (Linux, gcc), the Conan client will check in the local cache whether the corresponding binary is there; if we are installing for the first time, it won't be.
- The Conan client will search for the corresponding binary package in the remote; if it exists, it will be fetched.
- The Conan client will then finish generating the requested files specified in ``generators``.

If the binary package necessary for some given settings doesn't exist, the Conan client will throw an error. It is possible to try to build the binary package from sources with the ``--build=missing`` command line argument to install. A detailed description of how a binary package is built from sources will be given in a later section.

Building the timer example
--------------------------

Now you are ready to build and run your project:

```
!cmake example-poco-timer
!cmake --build .
!bin/timer
```

Inspecting dependencies
-----------------------

The retrieved packages are installed to your local user cache (typically ``.conan/data``), and can be reused from there in other projects. This allows you to clean your current project and keep working even without a network connection. Search for packages in the local cache using:

```
!conan search
```

Inspect binary package details (for the different installed binaries of a given package recipe) using:

```
!conan search Poco/1.8.0.1@pocoproject/stable
```

There is also the option to generate a table for all binaries from a given recipe with the ``--table`` option, even in remotes:

```
!conan search Poco/1.8.0.1@pocoproject/stable --table=file.html -r=conan-center
from IPython.display import HTML
HTML(filename="./file.html")
```

Check the reference for more information on how to search in remotes, how to remove or clean packages from the local cache, and how to define a custom cache directory per user or per project.

Inspect your current project's dependencies with the ``info`` command, pointing it to the folder where the *conanfile.txt* is:

```
!conan info example-poco-timer
```

Generate a graph of your dependencies in dot or html formats:

```
!conan info example-poco-timer --graph=info.html
from IPython.display import HTML
HTML(filename="./info.html")
```

Searching packages
------------------

The packages that have been used are installed from the remote repository that is configured by default in the conan client, which is called "conan-center" and is hosted on Bintray.
You can search for existing packages there with:

```
!conan search "Poco*" -r=conan-center
```

There are other community repositories that can be configured and used.

Building with other configurations
----------------------------------

In this example we have built our project using the default configuration detected by conan; this configuration is known as the `default profile`.

The first time you run a command that requires a profile, such as `conan install`, your settings (compiler, architecture...) are detected automatically and stored as the default profile. You can change those settings by editing ``~/.conan/profiles/default`` or create new profiles with the desired configuration.

For example, if we have a gcc configuration for 32 bits stored in a profile called *gcc_x86*, we could issue the ``install`` command with it. However, the user can always override the default profile settings in the ``install`` command with the ``-s`` parameter. As an exercise, try building your timer project with a different configuration. For example, you could try building the 32 bits version:

```
!conan install example-poco-timer -s arch=x86
```

This will install a different package, using the ``-s arch=x86`` setting, instead of the default used previously, which in most cases will be ``x86_64``.

To use the 32 bits binaries, you will also have to change your project build:

- In Windows, change the CMake invocation accordingly to ``Visual Studio 14``.
- In Linux, you have to add the ``-m32`` flag to your ``CMakeLists.txt`` with ``SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -m32")``, and the same for ``CMAKE_C_FLAGS``, ``CMAKE_SHARED_LINKER_FLAGS`` and ``CMAKE_EXE_LINKER_FLAGS``. This can also be done more easily, automatically with Conan, as we'll see later.
- In Mac, you need to add the definition ``-DCMAKE_OSX_ARCHITECTURES=i386``.
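As a sketch of the profile-based approach mentioned above (not part of the original text), a hypothetical *gcc_x86* profile could be written to ``~/.conan/profiles/gcc_x86`` and then passed to ``conan install`` with the ``-pr`` option. The exact compiler and version values depend on your toolchain, so treat the contents below as an assumption to adapt:

```
# Hypothetical contents of ~/.conan/profiles/gcc_x86 (adjust to your system):
# [settings]
# os=Linux
# arch=x86
# compiler=gcc
# compiler.version=9
# compiler.libcxx=libstdc++11
# build_type=Release

# Install using that profile instead of overriding individual settings with -s
!conan install example-poco-timer -pr=gcc_x86
```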
## A Gentle (Pythonic) Introduction to JSON *Borrowed from https://github.com/umd-mith/aeri2015-twitter-workshop* Programming languages have lots of ways of representing data called *data structures*. Very often you want to flatten, or *serialize* these data structures so that they can be transmitted over the network or stored in a file for later. In the old days, you'd invent a file format of your own devising to do this and write the different bits of your data structure into the file format. But that's an expensive way of doing things and very error prone. Most modern languages have a way of automatically serialising data structures, and the one used by JavaScript took off in a big way, probably because it is very simple to understand and use, and is embedded in every Web browser. Conveniently in your case, it's also very similar to the way things are done natively in Python. This format is called JavaScript Object Notation or [JSON](https://en.wikipedia.org/wiki/JSON) for short. You really only need to know about two types of structure to understand JSON. In Python parlance these are [lists](https://docs.python.org/2/tutorial/datastructures.html#more-on-lists) and [dictionaries](https://docs.python.org/2/tutorial/datastructures.html#dictionaries), and they are both built in to the language itself. The same is true of other programming languages, like Ruby, Perl, PHP, Java and (of course) JavaScript. ### Lists A list is a container into which you can put other things, knowing that their order will be preserved. For example if I wrote: ``` numbers = [1, 2, 10] ``` I would create a list named `numbers` that contains the integers 1, 2 and 10 in that order. Python will never re-arrange a list, you can guarantee the order stays the same unless you manipulate it yourself. You can pretty much have anything you like in a list. For example this ``` things = [1, "dog", 4.5] ``` creates a list named `things` that contains the integer `1`, the string `dog` and the real number `4.5`. ### Dictionaries The next thing you need to know about is a dictionary. A dictionary makes an association between a *key* and a *value*. It's like a mini-database where you can put stuff in, give it a unique identifier, and then use that identifier later to retrieve it. For example, a dictionary called `ages` containing people's ages might look like: ``` ages = { "Anne" : 34, "Bob" : 29, "Alex" : 15 } ``` If you want you can make this a bit more readable by using multiple lines: ``` ages = { "Anne" : 34, "Bob" : 29, "Alex" : 15 } ``` This dictionary contains three entries, one with the key `Anne` and the value `34`, one with the key `Bob` and the value `29`, and a third with the key `Alex` and the value `15`. So you can print out Anne's age: ``` print(ages['Anne']) ``` ### Composing Lists and Dictionaries To make things a bit more interesting (and expressive) you can put lists and dictionaries inside one another. So you could have a list containing dictionaries, or a dictionary where each value is a list and so on. This turns out to be a fairly generic way of flattening complex data structures, and it's exactly what JSON is based on. JSON actually uses a notation that's very similar (I think perhaps even identical) to the way that Python displays lists and dictionaries, so if you're familiar with one you can read the other. So let's imagine I want to represent a *person*. I can create a dictionary with specific keys and values. Something like... 
``` person1 = { "name": "Anne", "age": 34, "shoesize": 6 } ``` and maybe a second person: ``` person2 = { "name": "Bob", "age": 29, "shoesize": 11 } ``` I could now put these two people into a list ``` people = [person1, person2] ``` ### JSON What might the list `people` look like if I wrote it out in long hand? It would contain two dictionaries, `person1` and `person2`, and each of those would have three keys called `name`, `age` and `shoesize` associated with their respective values. If I wrote the whole thing out longhand in Python notation, it would look like this: ```python [ { "name": "Anne", "age": 34, "shoesize": 6 }, { "name": "Bob", "age": 29, "shoesize": 11 } ]``` And that conveniently happens to be JSON notation too! That’s all JSON really is: a convenient way of describing data structures as combinations of (what in Python we would call) dictionaries and lists so they can be saved into files or transmitted over communications links (e.g. over the web or between applications). [lists]: https://docs.python.org/2/tutorial/datastructures.html#more-on-lists [dictionaries]: https://docs.python.org/2/tutorial/datastructures.html#dictionaries [JSON]: https://en.wikipedia.org/wiki/JSON ### Reading and Writing JSON Python also conveniently comes with a [json](https://docs.python.org/3/library/json.html) module, that makes it easy to read and write JSON. First you need to import it: ``` import json ``` And then you can *serialize* your Python data structures easily as JSON, using the `dumps` function: ``` person = {"name": "Anne", "age": 34, "shoesize": 6} json_text = json.dumps(person) print(json_text) ``` If you want you can choose to serialize it in a pretty format using linebreaks and indentation: ``` json_text = json.dumps(person, indent=2) print(json_text) ``` And once you have the text you can *parse* it back into Python again with the *loads* function: ``` person = json.loads(json_text) print(person) ``` Since you can serialize and parse JSON using strings you can also save them to files and read them back later, or send them over the network, which is what a great deal of [Web APIs](https://en.wikipedia.org/wiki/Web_API) do.
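To illustrate that last point, here's a small example (not from the original text) that writes the `people` list to a file with `json.dump` and reads it back with `json.load`; the `people.json` filename is just a placeholder:

```
import json

people = [
    {"name": "Anne", "age": 34, "shoesize": 6},
    {"name": "Bob", "age": 29, "shoesize": 11}
]

# serialize the list of dictionaries straight into a file
with open("people.json", "w") as f:
    json.dump(people, f, indent=2)

# ... and later, parse it back into Python data structures
with open("people.json") as f:
    people_again = json.load(f)

print(people_again[0]["name"])   # prints: Anne
```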
# TP5 ``` from sklearn.decomposition import PCA from sklearn.cluster import KMeans from sklearn import metrics import matplotlib %matplotlib inline import matplotlib.pyplot as plt import numpy as np from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV file_path="/content/drive/My Drive/ROB311/" train_data = np.genfromtxt(file_path+"optdigits.tra", delimiter=',') test_data=np.genfromtxt(file_path+"optdigits.tes",delimiter=",") print("train data shape",train_data.shape) print("test data shape",test_data.shape) # split the data into features and labels x_train=train_data[:,:-1] y_train=train_data[:,-1] x_test=test_data[:,:-1] y_test=test_data[:,-1] print("train features: {}, train label: {}".format(x_train.shape,y_train.shape)) print("test features: {}, test label: {}".format(x_test.shape,y_test.shape)) # create the pipeline pca = PCA(n_components=5) kmeans=KMeans(n_clusters=10) pipe = Pipeline(steps=[('pca', pca), ('kmeans', kmeans)]) # prediction pipe.fit(x_train) y_pred=pipe.predict(x_test) ``` # project the clusters to labels and evaluate with a classification metric ``` def get_class_name(nums): counts = np.bincount(nums) return np.argmax(counts) # evaluation # map each predicted cluster to the most frequent true label projected_pred=y_pred.copy() for i in range(10): pred=y_pred[y_test==i] c_index=get_class_name(pred) projected_pred[y_pred==c_index]=i metrics.accuracy_score(y_test,projected_pred) ``` # find the most appropriate parameter ``` # search for the most appropriate n_components component_range=[5,15,25,35,45,55,64] accuracy_score=[] for c in component_range: pca = PCA(n_components=c) kmeans=KMeans(n_clusters=10) pipe = Pipeline(steps=[('pca', pca), ('kmeans', kmeans)]) # prediction pipe.fit(x_train) y_pred=pipe.predict(x_test) # evaluation # map each predicted cluster to the most frequent true label projected_pred=y_pred.copy() for i in range(10): pred=y_pred[y_test==i] c_index=get_class_name(pred) projected_pred[y_pred==c_index]=i accuracy_score.append(metrics.accuracy_score(y_test,projected_pred)) accuracy_score # draw the accuracy score figure plt.figure(figsize=(8,6)) plt.plot(component_range,accuracy_score,"-o") plt.ylabel("accuracy",fontsize=18) plt.xlabel("n_components",fontsize=18) plt.title("Accuracy Score",fontsize=20) plt.show() print('init\t\tinertia\thomo\tcompl\tv-meas\tARI\tAMI\tsilhouette') print('%-9s\t%i\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f' % ("kmeans", kmeans.inertia_, metrics.homogeneity_score(y_test, y_pred), metrics.completeness_score(y_test, y_pred), metrics.v_measure_score(y_test, y_pred), metrics.adjusted_rand_score(y_test, y_pred), metrics.adjusted_mutual_info_score(y_test, y_pred), metrics.silhouette_score(y_test.reshape(-1,1), y_pred.reshape(-1,1), metric='euclidean', sample_size=300))) ``` # visualize the results ``` # Visualize the results on PCA-reduced data reduced_data = PCA(n_components=2).fit_transform(x_train) # choose the best params kmeans = KMeans(init='k-means++', n_clusters=10, n_init=10) kmeans.fit(reduced_data) # Step size of the mesh. Decrease to increase the quality of the VQ. h = .02 # point in the mesh [x_min, x_max]x[y_min, y_max]. # Plot the decision boundary. For that, we will assign a color to each x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1 y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Obtain labels for each point in mesh.
Use last trained model. Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.figure(1) plt.clf() plt.imshow(Z, interpolation='nearest', extent=(xx.min(), xx.max(), yy.min(), yy.max()), cmap=plt.cm.Paired, aspect='auto', origin='lower') plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2) # Plot the centroids as a white X centroids = kmeans.cluster_centers_ plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='w', zorder=10) plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n' 'Centroids are marked with white cross') plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) plt.xticks(()) plt.yticks(()) plt.show() ```
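The majority-vote projection used above can assign two different true labels to the same cluster. A commonly used, more robust alternative is to build the confusion matrix between true labels and cluster ids and solve the assignment with the Hungarian algorithm. The sketch below is not part of the original TP; the helper name `map_clusters_to_labels` is ours, and it assumes `scipy` is available alongside scikit-learn.

```
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import confusion_matrix, accuracy_score

def map_clusters_to_labels(y_true, y_cluster, n_classes=10):
    y_true = np.asarray(y_true).astype(int)
    y_cluster = np.asarray(y_cluster).astype(int)
    # Rows = true labels, columns = cluster ids
    cm = confusion_matrix(y_true, y_cluster, labels=np.arange(n_classes))
    # Hungarian algorithm: pick the one-to-one cluster/label assignment
    # that maximizes the total number of matched samples
    row_ind, col_ind = linear_sum_assignment(-cm)
    mapping = {cluster: label for label, cluster in zip(row_ind, col_ind)}
    return np.array([mapping[c] for c in y_cluster])

# Hypothetical usage with the arrays computed in the notebook above:
# projected = map_clusters_to_labels(y_test, y_pred)
# print(accuracy_score(y_test, projected))
```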
This project was intended to be done in PyCharm but due to technical difficulties and the need for fast training on a GPU, its development was moved to Google Colaboratory. The cell below asks you to mount your Google Drive in order to save the outputs as a downloadable file. ``` from google.colab import drive drive.mount('/content/gdrive') ``` In the cell below, all the packages and classes needed are imported. ``` import torch import torch.nn as net import torch.nn.functional as F import torch.optim as optim from torch.utils import data from torchvision import transforms as tf, datasets as ds from torch.optim.lr_scheduler import StepLR ``` In the cell below, the network class is implemented. This class extends torch.nn.Module, as all neural networks in PyTorch should inherit this class in order to function as desired. The overall information about the network comes in comments and details come in a pdf file. ``` class Brain(net.Module): #inherits nn.Module def __init__(self): super(Brain, self).__init__() #gets one image (acting as one feature map) as input, outputs 64 feature maps by using 3x3 convolution with stride 1. self.conv1 = net.Conv2d(1, 64, 3, 1) self.conv2 = net.Conv2d(64, 128, 3, 1) #gets the 64 feature maps as input, outputs 128 feature maps by using 3x3 convolution with stride 1. #Dropout technique with probability of each node's presence in training = 0.25 and 0.5, in order to get a more accurate network, #less prone to overfitting. self.dropout1 = net.Dropout2d(0.25) self.dropout2 = net.Dropout2d(0.5) #Two fully connected layers for classification. 18432 is the number of features per sample after the second convolution and the 2x2 max-pooling (128 x 12 x 12). self.fc1 = net.Linear(18432, 256) self.fc2 = net.Linear(256, 10) def forward(self, x): #Applying convolutions and activations one by one. x = self.conv1(x) #First convolution x = F.relu(x) #Activation function x = self.conv2(x) #Second convolution x = F.relu(x) #Activation function x = F.max_pool2d(x, 2) #Max pool with stride 2 and kernel size 2. x = self.dropout1(x) #Dropout to reduce overfitting x = x.view(x.size(0), -1) #Fully connected layers expect a flat (one-dimensional) input per sample, so we flatten the feature maps here. x = self.fc1(x) #Applying first one for classification x = F.relu(x) #Activation function for classification x = self.dropout2(x) #Dropout to reduce overfitting x = self.fc2(x) #Final classification, 10 outputs (as we have 10 digits) output = F.log_softmax(x, 1) #LogSoftmax over the class dimension; numerically more stable than softmax followed by log. return output ``` In the cell below, we load the dataset. This function downloads the dataset, transforms it to tensors, normalizes it (the normalization mean and std are the global mean and std of the MNIST dataset) and shuffles it. Train and test phases are also separated. Inputs: 1. root - where the data should be loaded 2. batch_size - size of each mini-batch for SGD. 3. phase - indicates whether loading the test or the training data 4. dataset_name - name of the dataset to be loaded. Default: MNIST 5. shuffle - whether shuffling the dataset or not. Used in order to avoid possible overfitting. Output: * Loaded data. Raises exceptions if the dataset is not defined for loading, or entering a phase other than train or test. ``` def load_data(root, batch_size, phase, dataset_name='MNIST', shuffle=True): if dataset_name == 'MNIST': if phase == 'train': transform = tf.Compose([tf.ToTensor(), tf.Normalize((0.1307,), (0.3081,))]) #Transform pipeline: convert to tensor, then normalize with the MNIST mean and std.
loaded_data = data.DataLoader(ds.MNIST(root=root, train=True, transform=transform, download=True), batch_size=batch_size, shuffle=shuffle) return loaded_data elif phase == 'test': transform = tf.Compose([tf.ToTensor(), tf.Normalize((0.1307,), (0.3081,))]) loaded_data = data.DataLoader(ds.MNIST(root=root, train=False, transform=transform, download=True), batch_size=batch_size, shuffle=shuffle) return loaded_data else: raise Exception('You can only train me, or test me based on my training!') else: raise Exception('Sorry, I have not received training on other datasets so far.') n_epochs = 50 #Epoch number. train_batch_size = 200 #Train batch size test_batch_size = 1000 #Test batch size learning_rate = 0.05 momentum = 0.6 log_interval = 10 #Intervals between printing the steps in training. random_seed = 1 #random seed for manual seed use_cuda = True #Using cuda GPU. Do not forget to set the environment to GPU (Runtime -> Change runtime type -> Hardware accelerator -> GPU) torch.manual_seed(random_seed) #Seeding RNG (Random Number Generator) for CPU and GPU as PyTorch does not guarantee reproducible results across its releases. train_batch = load_data('train/', train_batch_size, 'train') #Train data test_batch = load_data('test/', test_batch_size, 'test') #Test data ``` Cell below contains the losses and counts of training and testing, so as to use it for plotting graphs. Updated in each iteration. ``` train_correct = [] train_count = [] test_correct = [] test_count = [i * len(train_batch.dataset) for i in range(1, n_epochs + 1)] #One x-axis entry per epoch: test() is called 50 times, so we need exactly 50 points. ``` Train and test functions. Details in comments. ``` def train(epoch): correct_detection = 0 network.train() #this train comes from the super class for batch_index, (data, labels) in enumerate(train_batch): #enumerating train batch to use all the things it contains. if use_cuda and torch.cuda.is_available(): #Passing the data and labels to GPU if possible data = data.cuda() labels = labels.cuda() optimizer.zero_grad() #Clear the gradients accumulated in the previous step output = network(data) #Getting training output loss = F.nll_loss(output, labels) #NLL loss loss.backward() #Backpropagation: compute the gradients of the loss with respect to the parameters optimizer.step() #Updating weights if batch_index % log_interval == 0: #Print where we stand print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_index * len(data), len(train_batch.dataset), 100. * batch_index / len(train_batch), loss.item())) prediction = output.data.max(1, keepdim=True)[1] #The class with the highest log-probability is the prediction correct_detection = correct_detection + prediction.eq(labels.view_as(prediction)).sum().item() #Add the number of correct detections train_correct.append(correct_detection) train_count.append((batch_index * train_batch_size) + (epoch - 1) * (len(train_batch.dataset))) def test(): network.eval() #Switch to evaluation mode (disables dropout) for the testing phase.
correct_detection = 0 #counting correct detections test_loss = 0 #Average test loss after each epoch with torch.no_grad(): #Disable gradient tracking during evaluation to save memory and computation for batch_index, (data, labels) in enumerate(test_batch): if use_cuda and torch.cuda.is_available(): #Pass to GPU data = data.cuda() labels = labels.cuda() output = network(data) #Classify test_loss = test_loss + F.nll_loss(output, labels, reduction='sum').item() #Calculate sum of losses in each iteration prediction = output.data.max(1, keepdim=True)[1] #The class with the highest log-probability is the prediction correct_detection = correct_detection + prediction.eq(labels.view_as(prediction)).sum().item() #Add the number of correct detections test_loss = test_loss / len(test_batch.dataset) test_correct.append(correct_detection/len(test_batch.dataset)) print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct_detection, len(test_batch.dataset), 100. * correct_detection / len(test_batch.dataset))) network = Brain() #Create a new instance of the network optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum) #Set optimizer to do stochastic gradient descent scheduler = StepLR(optimizer, step_size = 1, gamma=0.7) #StepLR multiplies the learning rate by gamma=0.7 after every epoch if use_cuda and torch.cuda.is_available(): #Move the network to the GPU if available network.cuda() for i in range(1, n_epochs + 1): #Train/test loop train(i) test() scheduler.step() #Decay the learning rate for the next epoch torch.save(network.state_dict(), '/content/gdrive/My Drive/model.pth') #Save the model torch.save(optimizer.state_dict(), '/content/gdrive/My Drive/optimizer.pth') #Save the optimizer state print("Model's state_dict:") for param_tensor in network.state_dict(): print(param_tensor, "\t", network.state_dict()[param_tensor].size()) import matplotlib.pyplot as plt ``` Plotting the training and test accuracy curves ``` fig_train = plt.figure(figsize = (16,16)) plt.plot(train_count, train_correct, color = 'blue') plt.legend(['Train Accuracy'], loc='upper right') plt.xlabel('number of training examples seen', fontsize=18) plt.ylabel('training accuracy', fontsize=18) plt.savefig('/content/gdrive/My Drive/train_plot.jpg') fig_test = plt.figure(figsize = (16,16)) plt.plot(test_count, test_correct, color = 'red') plt.legend(['Test Accuracy'], loc='upper right') plt.xlabel('number of training examples seen', fontsize=18) plt.ylabel('test accuracy', fontsize=18) plt.savefig('/content/gdrive/My Drive/test_plot.jpg') ```
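Since the notebook saves `model.pth` to Google Drive, it may be useful to see how those weights can be loaded back for inference. This is only a minimal sketch: it assumes the cells above have been run, so that the `Brain` class and the `test_batch` loader exist, and that the checkpoint path is the one used above.

```
import torch

# Recreate the architecture and load the trained weights
model = Brain()
state_dict = torch.load('/content/gdrive/My Drive/model.pth', map_location='cpu')
model.load_state_dict(state_dict)
model.eval()                      # switch off dropout for deterministic predictions

with torch.no_grad():             # gradients are not needed at inference time
    images, labels = next(iter(test_batch))   # one batch of MNIST test images
    log_probs = model(images)
    predictions = log_probs.argmax(dim=1)
    print(predictions[:10])
    print(labels[:10])
```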
# Sklearn ## sklearn.datasets documentation: http://scikit-learn.org/stable/datasets/ ``` from sklearn import datasets %pylab inline ``` ### Generating datasets **Ways to generate data:** * make_classification * make_regression * make_circles * make_checkerboard * etc #### datasets.make_circles ``` circles = datasets.make_circles() print("features: {}".format(circles[0][:10])) print("target: {}".format(circles[1][:10])) from matplotlib.colors import ListedColormap colors = ListedColormap(['red', 'yellow']) pyplot.figure(figsize(8, 8)) pyplot.scatter(list(map(lambda x: x[0], circles[0])), list(map(lambda x: x[1], circles[0])), c = circles[1], cmap = colors) def plot_2d_dataset(data, colors): pyplot.figure(figsize(5, 5)) pyplot.scatter(list(map(lambda x: x[0], data[0])), list(map(lambda x: x[1], data[0])), c = data[1], cmap = colors) noisy_circles = datasets.make_circles(noise = 0.15) plot_2d_dataset(noisy_circles, colors) ``` #### datasets.make_classification ``` simple_classification_problem = datasets.make_classification(n_features = 2, n_informative = 1, n_redundant = 1, n_clusters_per_class = 1, random_state = 1 ) plot_2d_dataset(simple_classification_problem, colors) classification_problem = datasets.make_classification(n_features = 2, n_informative = 2, n_classes = 4, n_redundant = 0, n_clusters_per_class = 1, random_state = 1) colors = ListedColormap(['red', 'blue', 'green', 'yellow']) plot_2d_dataset(classification_problem, colors) ``` ### "Toy" datasets **Available datasets:** * load_iris * load_boston * load_diabetes * load_digits * load_linnerud * etc #### datasets.load_iris ``` iris = datasets.load_iris() iris iris.keys() print(iris.DESCR) print("feature names: {}".format(iris.feature_names)) print("target names: {names}".format(names = iris.target_names)) iris.data[:10] iris.target ``` ### Visualizing the dataset ``` from pandas import DataFrame iris_frame = DataFrame(iris.data) iris_frame.columns = iris.feature_names iris_frame['target'] = iris.target iris_frame.head() iris_frame.target = iris_frame.target.apply(lambda x : iris.target_names[x]) iris_frame.head() iris_frame[iris_frame.target == 'setosa'].hist('sepal length (cm)') pyplot.figure(figsize(20, 24)) plot_number = 0 for feature_name in iris['feature_names']: for target_name in iris['target_names']: plot_number += 1 pyplot.subplot(4, 3, plot_number) pyplot.hist(iris_frame[iris_frame.target == target_name][feature_name]) pyplot.title(target_name) pyplot.xlabel('cm') pyplot.ylabel(feature_name[:-4]) ``` ### Bonus: the seaborn library ``` import seaborn as sns sns.pairplot(iris_frame, hue = 'target') ?sns.set() sns.set() data = sns.load_dataset("iris") sns.pairplot(data, hue = "species") ``` #### **If you are interested in the seaborn library:** * installation: https://stanford.edu/~mwaskom/software/seaborn/installing.html * installation with Anaconda: https://anaconda.org/anaconda/seaborn * tutorial: https://stanford.edu/~mwaskom/software/seaborn/tutorial.html * examples: https://stanford.edu/~mwaskom/software/seaborn/examples/
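`make_regression` is listed among the data generators above but is not demonstrated in the notebook. Below is a minimal sketch (the parameter values are arbitrary illustrative choices); it imports `matplotlib.pyplot` explicitly instead of relying on `%pylab`.

```
from sklearn import datasets
import matplotlib.pyplot as plt

# One informative feature plus Gaussian noise on the target
X, y = datasets.make_regression(n_samples=100, n_features=1,
                                n_informative=1, noise=10.0,
                                random_state=1)

plt.figure(figsize=(5, 5))
plt.scatter(X[:, 0], y)
plt.xlabel('feature')
plt.ylabel('target')
plt.show()
```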
###### Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 parts of this notebook are from ([this Jupyter notebook](https://nbviewer.jupyter.org/github/heinerigel/coursera/blob/master/Notebooks4Coursera/W2/W2_P1.ipynb)) by Heiner Igel ([@heinerigel](https://github.com/heinerigel)) which is a supplemenatry material to his Coursera lecture [Computers, Waves, Simulations: A Practical Introduction to Numerical Methods using Python](https://www.coursera.org/learn/computers-waves-simulations), additional modifications by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi ``` # Execute this cell to load the notebook's style sheet, then ignore it from IPython.core.display import HTML css_file = '../style/custom.css' HTML(open(css_file, "r").read()) ``` # What is an "optimum" grid point distance? After the introduction to the finite-difference method in the last class, you might think: how do I choose the "optimum" spatial or temporal grid point distance $dx$ or $dt$ for a given FD-scheme in order to find the sweet spot between numerical accuracy of the solution and computation time? To achieve this goal, I introduce the concept of "gridpoints per wavelength" and demonstrate the accuracy for an under- and oversampled computation of the first derivative of the sine function. ``` # Import Libraries import numpy as np from math import * import matplotlib.pyplot as plt ``` We initialize a space-dependent sine function \begin{equation} f(x)= \sin (k x) \notag \end{equation} where the wavenumber k is \begin{equation} k = \dfrac{2 \pi}{\lambda} \notag \end{equation} and $\lambda$ is wavelength. ``` # Initial parameters xmax = 10.0 # maximum extension of physical domain (m) nx = 200 # number of gridpoints dx = xmax/(nx-1) # grid increment dx (m) x = np.linspace(0,xmax,nx) # space coordinates # Initialization of sine function l = 20*dx # wavelength k = 2*pi/l # wavenumber f = np.sin(k*x) # Define figure size plt.figure(figsize=(10,5)) # Plot sine function plt.plot(x, f) plt.title('Sine function') plt.xlabel('x, m') plt.ylabel('Amplitude') plt.xlim((0, xmax)) plt.grid() plt.show() ``` In the cell below we calculate the central finite-difference derivative of f(x) using two points \begin{equation} f^{\prime}(x)=\dfrac{f(x+dx)-f(x-dx)}{2dx}\notag \end{equation} and compare with the analytical derivative \begin{equation} f^{\prime}(x) = k \cos(k x)\notag \end{equation} ``` # First derivative with central difference operator # Initiation of numerical and analytical derivatives nder=np.zeros(nx) # numerical derivative ader=np.zeros(nx) # analytical derivative # Numerical derivative of the given function for i in range (1, nx-1): nder[i]=(f[i+1]-f[i-1])/(2*dx) # Analytical derivative of the given function ader= k * np.cos(k*x) # Exclude boundaries ader[0]=0. ader[nx-1]=0. 
# Error (rms) rms = np.sqrt(np.mean((nder-ader)**2)) # Define figure size plt.figure(figsize=(10,5)) # Plotting numerical & analytical solution and their difference plt.plot (x, nder,label="Numerical Derivative, 2-point central FD", lw=2, ls='-', color="blue") plt.plot (x, ader, label="Analytical Derivative", lw=2, ls="--",color="red") plt.plot (x, nder-ader, label="Difference", lw=2, ls=":") plt.title("First derivative, Err (rms) = %.6f " % (rms) ) plt.xlabel('x, m') plt.ylabel('Amplitude') plt.legend(loc='lower left') plt.grid() plt.show() ``` ### The concept of number of points per wavelength \begin{equation} n_\lambda = \dfrac{\lambda}{dx} \notag \end{equation} How does the error of the numerical derivative change with the number of points per wavelength? ``` # Define figure size plt.figure(figsize=(10,5)) # Plotting number of points per wavelength # ---------------------------------------- plt.plot (x, nder,label="Numerical Derivative, 2-point central FD", marker='o', color="blue") plt.title("First derivative, Error = %.6f, $n_\lambda$ = %.2f " % ( rms, l/dx) ) plt.xlabel('x, m') plt.ylabel('Amplitude') plt.legend(loc='lower left') plt.xlim((xmax/2-l,xmax/2+l)) plt.grid() plt.show() ``` ### Investigate the error as a function of grid points per wavelength Next, we investigate how the error of the FD solution changes as a function of grid points per wavelength ``` # Define a range of number of points per wavelength, e.g. [nmin=1, 2, 3, ... , nmax=15] # Loop over points, calculate corresponding wavelength and calculate error # Initialize vectors nmin=1 nmax=15 na = np.zeros(nmax-nmin+1) # Vector with number of points per wavelength err = np.zeros(nmax-nmin+1) # Vector with error j = -1 # array index # Loop through finite-difference derivative calculation for n in range (nmin,nmax+1): j = j+1 # array index na[j] = n # Initialize sin function l = na[j]*dx # wavelength k = 2*pi/l # wavenumber f = np.sin(k*x) # Numerical derivative of the sin function for i in range (1, nx-1): nder[i]=(f[i+1]-f[i-1])/(2*dx) # Analytical derivative of the sin function ader= k * np.cos(k*x) # Exclude boundaries ader[0]=0. ader[nx-1]=0. i0 = nx // 2 # midpoint index # Squared relative error (%) at the midpoint err[j] = (nder[i0]-ader[i0])**2/ader[i0]**2 * 100 # Define figure size plt.figure(figsize=(12,5)) # Plotting error as function of number of points per wavelength # ------------------------------------------------------------- plt.semilogy(na,err, lw=2, ls='-', marker='o', color="blue") plt.title('Error as a function of $n_\lambda$ ') plt.xlabel('n$_\lambda$') plt.ylabel('squared relative error, %') plt.grid() plt.show() ``` ## We learned * 2-point finite-difference approximations can provide estimates of the 1st derivative of a function * The accuracy depends on the "number of points per wavelength", i.e., how well we sample the original function * The more points we use, the more accurate the derivative approximation becomes
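As a cross-check on the hand-written loop above, NumPy ships a vectorized derivative, `np.gradient`, which (for uniform spacing) uses the same 2-point central difference in the interior and one-sided differences at the ends. A minimal sketch, reusing the same sine function as above:

```
import numpy as np

nx = 200
xmax = 10.0
dx = xmax / (nx - 1)
x = np.linspace(0, xmax, nx)
l = 20 * dx                      # wavelength
k = 2 * np.pi / l                # wavenumber
f = np.sin(k * x)

# Explicit 2-point central difference (interior points only)
nder_loop = np.zeros(nx)
for i in range(1, nx - 1):
    nder_loop[i] = (f[i + 1] - f[i - 1]) / (2 * dx)

nder_np = np.gradient(f, dx)     # vectorized finite differences
ader = k * np.cos(k * x)         # analytical derivative

# Interior points agree to machine precision; the remaining mismatch
# with the analytical derivative is the discretization error
print(np.max(np.abs(nder_loop[1:-1] - nder_np[1:-1])))
print(np.max(np.abs(nder_np[1:-1] - ader[1:-1])))
```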
# Introduction to NumPy The learning objectives of this section are: * Understand advantages of vectorized code using Numpy (over standard python ways) * Create NumPy arrays * Convert lists and tuples to numpy arrays * Create (initialise) arrays * Inspect the structure and content of arrays * Subset, slice, index and iterate through arrays * Compare computation times in NumPy and standard Python lists ### NumPy Basics NumPy is a library written for scientific computing and data analysis. It stands for numerical python. The most basic object in NumPy is the ```ndarray```, or simply an ```array```, which is an **n-dimensional, homogenous** array. By homogenous, we mean that all the elements in a numpy array have to be of the **same data type**, which is commonly numeric (float or integer). Let's see some examples of arrays. ``` # Import the numpy library # np is simply an alias, you may use any other alias, though np is quite standard import numpy as np # Creating a 1-D array using a list # np.array() takes in a list or a tuple as argument, and converts into an array array_1d = np.array([2, 4, 5, 6, 7, 9]) print(array_1d) print(type(array_1d)) # Creating a 2-D array using two lists array_2d = np.array([[2, 3, 4], [5, 8, 7]]) print(array_2d) ``` In NumPy, dimensions are called **axes**. In the 2-d array above, there are two axes, having two and three elements respectively. In Numpy terminology, for 2-D arrays: * ```axis = 0``` refers to the rows * ```axis = 1``` refers to the columns <img src="numpy_axes.jpg" style="width: 600px; height: 400px"> ### Advantages of NumPy What is the use of arrays over lists, specifically for data analysis? Putting crudely, it is **convenience and speed **:<br> 1. You can write **vectorised** code on numpy arrays, not on lists, which is **convenient to read and write, and concise**. 2. Numpy is **much faster** than the standard python ways to do computations. Vectorised code typically does not contain explicit looping and indexing etc. (all of this happens behind the scenes, in precompiled C-code), and thus it is much more concise. Let's see an example of convenience, we'll see one later for speed. Say you have two lists of numbers, and want to calculate the element-wise product. The standard python list way would need you to map a lambda function (or worse - write a ```for``` loop), whereas with NumPy, you simply multiply the arrays. ``` list_1 = [3, 6, 7, 5] list_2 = [4, 5, 1, 7] # the list way to do it: map a function to the two lists product_list = list(map(lambda x, y: x*y, list_1, list_2)) print(product_list) # The numpy array way to do it: simply multiply the two arrays array_1 = np.array(list_1) array_2 = np.array(list_2) array_3 = array_1*array_2 print(array_3) print(type(array_3)) ``` As you can see, the numpy way is clearly more concise. Even simple mathematical operations on lists require for loops, unlike with arrays. For example, to calculate the square of every number in a list: ``` # Square a list list_squared = [i**2 for i in list_1] # Square a numpy array array_squared = array_1**2 print(list_squared) print(array_squared) ``` This was with 1-D arrays. You'll often work with 2-D arrays (matrices), where the difference would be even greater. With lists, you'll have to store matrices as lists of lists and loop through them. With NumPy, you simply multiply the matrices. 
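To make the claim about matrices concrete, here is a small sketch of the same element-wise product in two dimensions, first with lists of lists and nested loops, then with NumPy arrays:

```
import numpy as np

a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
b = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]

# List-of-lists way: explicit nested loops over rows and columns
product_lists = [[a[i][j] * b[i][j] for j in range(3)] for i in range(3)]
print(product_lists)

# NumPy way: one vectorised expression
product_arrays = np.array(a) * np.array(b)
print(product_arrays)
```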
### Creating NumPy Arrays There are multiple ways to create numpy arrays, the most common ones being: * Convert lists or tuples to arrays using ```np.array()```, as done above * Initialise arrays of fixed size (when the size is known) ``` # Convert lists or tuples to arrays using np.array() # Note that np.array(2, 5, 6, 7) will throw an error - you need to pass a list or a tuple array_from_list = np.array([2, 5, 6, 7]) array_from_tuple = np.array((4, 5, 8, 9)) print(array_from_list) print(array_from_tuple) ``` The other common way is to initialise arrays. You do this when you know the size of the array beforehand. The following ways are commonly used: * ```np.ones()```: Create array of 1s * ```np.zeros()```: Create array of 0s * ```np.random.random()```: Create array of random numbers * ```np.arange()```: Create array with increments of a fixed step size * ```np.linspace()```: Create array of fixed length ``` # Tip: Use help to see the syntax when required help(np.ones) # Creating a 5 x 3 array of ones np.ones((5, 3)) # Notice that, by default, numpy creates data type = float64 # Can provide dtype explicitly using dtype np.ones((5, 3), dtype = int) # Creating array of zeros np.zeros(4, dtype = int) # Array of random numbers np.random.random([3, 4]) # np.arange() # np.arange() is the numpy equivalent of range() # Notice that 10 is included, 100 is not, as in standard python lists # From 10 to 100 with a step of 5 numbers = np.arange(10, 100, 5) print(numbers) # np.linspace() # Sometimes, you know the length of the array, not the step size # Array of length 25 between 15 and 18 np.linspace(15, 18, 25) ``` Apart from the methods mentioned above, there are a few more NumPy functions that you can use to create special NumPy arrays: - `np.full()`: Create a constant array of any number ‘n’ - `np.tile()`: Create a new array by repeating an existing array for a particular number of times - `np.eye()`: Create an identity matrix of any dimension - `np.random.randint()`: Create a random array of integers within a particular range ``` # Creating a 4 x 3 array of 7s using np.full() # The default data type here is int only np.full((4,3), 7) # Given an array, np.tile() creates a new array by repeating the given array for any number of times that you want # The default data type here is int only arr = ([0, 1, 2]) np.tile(arr, 3) # You can also create multidimensional arrays using np.tile() np.tile(arr, (3,2)) # Create a 3 x 3 identity matrix using np.eye() # The default data type here is float. So if we want integer values, we need to specify the dtype to be int np.eye(3, dtype = int) # Create a 4 x 4 random array of integers ranging from 0 to 9 np.random.randint(0, 10, (4,4)) ``` ### Inspect the Structure and Content of Arrays It is helpful to inspect the structure of numpy arrays, especially while working with large arrays. Some attributes of numpy arrays are: * ```shape```: Shape of array (n x m) * ```dtype```: data type (int, float etc.) * ```ndim```: Number of dimensions (or axes) * ```itemsize```: Memory used by each array element in bytes Let's say you are working with a moderately large array of size 1000 x 300. First, you would want to wrap your head around the basic shape and size of the array.
``` # Initialising a random 1000 x 300 array rand_array = np.random.random((1000, 300)) # Print the first row print(rand_array[1, ]) # Inspecting shape, dtype, ndim and itemsize print("Shape: {}".format(rand_array.shape)) print("dtype: {}".format(rand_array.dtype)) print("Dimensions: {}".format(rand_array.ndim)) print("Item size: {}".format(rand_array.itemsize)) ``` Reading 3-D arrays is not very obvious, because we can only print maximum two dimensions on paper, and thus they are printed according to a specific convention. Printing higher dimensional arrays follows the following conventions: * The last axis is printed from left to right * The second-to-last axis is printed from top to bottom * The other axes are also printed top-to-bottom, with each slice separated by another using an empty line Let's see some examples. ``` # Creating a 3-D array # reshape() simply reshapes a 1-D array array_3d = np.arange(24).reshape(2, 3, 4) print(array_3d) ``` * The last axis has 4 elements, and is printed from left to right. * The second last has 3, and is printed top to bottom * The other axis has 2, and is printed in the two separated blocks ### Subset, Slice, Index and Iterate through Arrays For **one-dimensional arrays**, indexing, slicing etc. is **similar to python lists** - indexing starts at 0. ``` # Indexing and slicing one dimensional arrays array_1d = np.arange(10) print(array_1d) # Third element print(array_1d[2]) # Specific elements # Notice that array[2, 5, 6] will throw an error, you need to provide the indices as a list print(array_1d[[2, 5, 6]]) # Slice third element onwards print(array_1d[2:]) # Slice first three elements print(array_1d[:3]) # Slice third to seventh elements print(array_1d[2:7]) # Subset starting 0 at increment of 2 print(array_1d[0::2]) # Iterations are also similar to lists for i in array_1d: print(i**2) ``` **Multidimensional arrays** are indexed using as many indices as the number of dimensions or axes. For instance, to index a 2-D array, you need two indices - ```array[x, y]```. Each axes has an index starting at 0. The following figure shows the axes and their indices for a 2-D array. <img src="2_d_array.png" style="width: 350px; height: 300px"> ``` # Creating a 2-D array array_2d = np.array([[2, 5, 7, 5], [4, 6, 8, 10], [10, 12, 15, 19]]) print(array_2d) # Third row second column print(array_2d[2, 1]) # Slicing the second row, and all columns # Notice that the resultant is itself a 1-D array print(array_2d[1, :]) print(type(array_2d[1, :])) # Slicing all rows and the third column print(array_2d[:, 2]) # Slicing all rows and the first three columns print(array_2d[:, :3]) ``` **Iterating on 2-D arrays** is done with respect to the first axis (which is row, the second axis is column). ``` # Iterating over 2-D arrays for row in array_2d: print(row) # Iterating over 3-D arrays: Done with respect to the first axis array_3d = np.arange(24).reshape(2, 3, 4) print(array_3d) # Prints the two blocks for row in array_3d: print(row) ``` ### Compare Computation Times in NumPy and Standard Python Lists We mentioned that the key advantages of numpy are convenience and speed of computation. You'll often work with extremely large datasets, and thus it is important point for you to understand how much computation time (and memory) you can save using numpy, compared to standard python lists. Let's compare the computation times of arrays and lists for a simple task of calculating the element-wise product of numbers. 
``` ## Comparing time taken for computation list_1 = [i for i in range(1000000)] list_2 = [j**2 for j in range(1000000)] # list multiplication import time # store start time, time after computation, and take the difference t0 = time.time() product_list = list(map(lambda x, y: x*y, list_1, list_2)) t1 = time.time() list_time = t1 - t0 print(t1-t0) # numpy array array_1 = np.array(list_1) array_2 = np.array(list_2) t0 = time.time() array_3 = array_1*array_2 t1 = time.time() numpy_time = t1 - t0 print(t1-t0) print("The ratio of time taken is {}".format(list_time/numpy_time)) ``` In this case, numpy is **an order of magnitude faster** than lists. This is with arrays of size in millions, but you may work on much larger arrays of sizes in order of billions. Then, the difference is even larger. Some reasons for such difference in speed are: * NumPy is written in C, which is basically being executed behind the scenes * NumPy arrays are more compact than lists, i.e. they take much lesser storage space than lists The following discussions demonstrate the differences in speeds of NumPy and standard python: 1. https://stackoverflow.com/questions/8385602/why-are-numpy-arrays-so-fast 2. https://stackoverflow.com/questions/993984/why-numpy-instead-of-python-lists
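The compactness argument can also be made concrete. The numbers below are only indicative (they depend on the Python and NumPy versions and on the platform), but the sketch shows the kind of comparison you can run yourself:

```
import sys
import numpy as np

n = 1000000
py_list = list(range(n))
np_array = np.arange(n, dtype=np.int64)

# A list stores pointers, and every element is a full Python int object
list_bytes = sys.getsizeof(py_list) + sum(sys.getsizeof(i) for i in py_list)
# An array stores one contiguous buffer of fixed-size integers
array_bytes = np_array.nbytes

print("list : ~{:.0f} MB".format(list_bytes / 1e6))
print("array: ~{:.0f} MB".format(array_bytes / 1e6))
```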