Success! It definitely looks like scattered moonlight is responsible for the bulk of the added sky brightness. But there's also a portion of the data where the moon was bright yet the sky was still dark. There's more to it than just phase. Now we turn to the task of fitting a model to this.

2) The Model

Turns out that the definitive reference for this was authored by a colleague of mine: Kevin Krisciunas at Texas A&M. His paper can be found at the ADS abstract service: http://adsabs.harvard.edu/abs/1991PASP..103.1033K

You can read the details (lots of empirical formulas, light-scattering theory, and unit conversions), but the short of it is that we get a predictive model of the sky brightness at the position of an astronomical object as a function of the following variables:

1. The lunar phase angle: $\alpha$
2. The angular separation between the object and the moon: $\rho$
3. The zenith angle of the object: $Z$
4. The zenith angle of the moon: $Z_m$
5. The extinction coefficient: $k_X$ (a measure of how much the atmosphere absorbs light)
6. The dark-sky (no moon) sky background at zenith (in mag/square-arc-sec): $m_{dark}$

The following diagram shows some of these variables: ![diagram showing variables](media/Embed.jpeg)

Actually, $\alpha$, $\rho$, $Z$, and $Z_m$ are all functions of the date of observation and the sky coordinates of the object, which we already have. That leaves $k_X$ and $m_{dark}$ as the only unknowns to be determined. Given these variables, the flux from the moon is given by an empirically-determined function that takes into account the fact that the moon is not a perfect sphere:

$$I^* = 10^{-0.4(3.84 + 0.026|\alpha | + 4\times 10^{-9}\alpha^4)}$$

This flux is then scattered by angle $\rho$ into our line of sight, contributing to the sky background. The fraction of light scattered into angle $\rho$ is given empirically by:

$$f(\rho) = 10^{5.36}\left[1.06 + \cos^2\rho\right] + 10^{6.15 - \rho/40}$$

This just tells us how quickly the sky brightness falls off as we look farther away from the moon. We can visualize this by making a 2D array of angles from the center of an image ($\rho$) and computing $f(\rho)$. The first part of the next cell uses numpy array functions to create a 2D "image" with the moon at center and each pixel representing a value of $\rho$ degrees from the center.
import numpy as np
import matplotlib.pyplot as plt   # needed for the plotting calls below

jj, ii = np.indices((1024, 1024))/1024                  # 2D index arrays scaled 0->1
rho = np.sqrt((ii - 0.5)**2 + (jj - 0.5)**2)*45.0       # 2D array of angles from center in degrees
f = 10**5.36*(1.06 + (np.cos(rho*np.pi/180)**2)) + np.power(10, 6.15 - rho/40)
plt.imshow(f, origin='lower', extent=(-22.5, 22.5, -22.5, 22.5))
plt.contour(f, origin='lower', extent=(-22.5, 22.5, -22.5, 22.5), colors='white', alpha=0.1)
plt.xlabel('X angular distance')
plt.ylabel('Y angular distance')
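A quick aside before interpreting the plot: the phase-angle factor $I^*$ deserves a feel of its own. The check below is my own addition, not part of the original notebook's flow; it simply evaluates the empirical formula at a few phase angles (recall that $\alpha$ enters this formula in degrees).

# Quick sanity check of the lunar brightness factor I*(alpha), alpha in degrees
for a in [0., 45., 90., 135.]:
    Istar = 10**(-0.4*(3.84 + 0.026*abs(a) + 4e-9*a**4))
    print(a, Istar)

The full moon ($\alpha = 0$) comes out roughly an order of magnitude brighter than the quarter moon ($\alpha \approx 90^\circ$), which matches the intuition from the data above.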
So there's less and less scattered light farther from the moon (at the center). But this scattered light is also attenuated (absorbed) by the atmosphere. This attenuation is parametrized by the *airmass* $X$, the relative amount of atmosphere the light has to penetrate (with $X=1$ at the zenith). Krisciunas & Schaefer (1991) present this formula for the airmass: $X(Z) = \left(1 - 0.96 \sin^2 Z\right)^{-1/2}$. We'll come back to this later. Suffice it to say for the moment that this is an approximation very close to the "infinite slab" model of the atmosphere.

Putting it all together, the surface brightness (in the interesting units of [nanoLamberts](https://en.wikipedia.org/wiki/Lambert_(unit))) from the moon will be:

$$ B_{moon} = f(\rho)I^*10^{-0.4 k_X X(Z_m)}\left[1 - 10^{-0.4k_X X(Z)}\right] $$

Let's visualize that first factor, which attenuates the light from the moon. I'll just set $I^*=1$ and $k_X=5$ to make the effect obvious. We'll define the airmass function for later use as well. Let's assume the moon is at a zenith angle of 22.5$^\circ$, so the bottom of the graph corresponds to $Z=45^\circ$ and the top is the zenith, $Z=0^\circ$.
def X(Z):
    '''Airmass as a function of zenith angle Z in radians'''
    return 1./np.sqrt(1 - 0.96*np.power(np.sin(Z), 2))

Z = (45 - jj*45)*np.pi/180.   # rescale jj (0->1) to Z (45->0) and convert to radians
plt.imshow(f*np.power(10, -0.4*5*X(Z)), origin='lower', extent=(-22.5, 22.5, 45, 0))
plt.contour(f*np.power(10, -0.4*5*X(Z)), origin='lower', extent=(-22.5, 22.5, 45, 0),
            colors='white', alpha=0.1)
plt.xlabel('X angular distance')
plt.ylabel('Zenith angle Z')
So as we get closer to the horizon, there's less moonlight, as it's been attenuated by the larger amount of atmosphere. Lastly, to convert these nanoLamberts into magnitudes per square arc-second, we need the dark (no moon) sky brightness at the zenith, $m_{dark}$, converted to nanoLamberts using this formula:

$$ B_{dark} = 34.08\exp (20.7233 - 0.92104 m_{dark})10^{-0.4 k_X (X(Z)-1)}X(Z) $$

where we have also corrected for attenuation by the atmosphere and for air-glow (which increases with airmass). The final model for the observed sky brightness $m_{sky}$ is:

$$ m_{sky} = m_{dark} - 2.5 \log_{10}\left(\frac{B_{moon} + B_{dark}}{B_{dark}}\right) $$

Whew! That's a lot of math. But that's all it is, and we can make a python function that will do it all for us.
def modelsky(alpha, rho, kx, Z, Zm, mdark):
    # alpha in degrees; rho, Z, Zm in radians
    Istar = np.power(10, -0.4*(3.84 + 0.026*np.absolute(alpha) + 4e-9*np.power(alpha, 4)))
    frho = np.power(10, 5.36)*(1.06 + np.power(np.cos(rho), 2)) + \
           np.power(10, 6.15 - rho*180./np.pi/40)
    Bmoon = frho*Istar*np.power(10, -0.4*kx*X(Zm))*(1 - np.power(10, -0.4*kx*X(Z)))
    Bdark = 34.08*np.exp(20.7233 - 0.92104*mdark)*np.power(10, -0.4*kx*(X(Z)-1))*X(Z)
    return mdark - 2.5*np.log10((Bmoon + Bdark)/Bdark)
Note that $\rho$, $Z$, and $Z_m$ should be entered in radians to work with `numpy` trig functions; the phase angle $\alpha$, however, enters the empirical formula for $I^*$ in degrees.

3) Data Munging

Now we just need the final ingredients: $\alpha$, $\rho$, $Z$, and $Z_m$, all of which can be computed using `astropy.coordinates`. The lunar phase angle $\alpha$ is defined as the angular separation between the Earth and the Sun as observed *from the moon*. Alas, `astropy` can't compute this directly (guess they never thought lunar astronauts would use the software). But since the Earth-moon distance is much less than the Earth-sun distance (i.e., $\gamma \sim 0$), this is close enough to 180 degrees minus the angular separation between the moon and the sun as observed on Earth (call it $\beta$, which we already computed). See the diagram below. ![Diagram showing Earth, moon, and sun](media/EarthMoonSun.jpg)
alpha = (180. - data['phase'])   # Note: these need to be in degrees
data['alpha'] = pd.Series(alpha, index=data.index)
Next, in order to compute zenith angles and azimuths, we need to tell the `astropy` functions where on Earth we are located, since these quantities depend on our local horizon. Luckily, Las Campanas Observatory (LCO) is in `astropy`'s database of locations. We'll also need to create locations on the sky for all our background observations.
from astropy.coordinates import EarthLocation, SkyCoord, AltAz
from astropy import units as u

lco = EarthLocation.of_site('lco')
fields = SkyCoord(data['RA']*u.degree, data['Decl']*u.degree)         # astropy often requires units
f_altaz = fields.transform_to(AltAz(obstime=times, location=lco))     # transform from RA/Dec to Alt/Az
m_altaz = moon.transform_to(AltAz(obstime=times, location=lco))
rho = moon.separation(fields).value*np.pi/180.0   # angular distance between moon and all fields, in radians
Z = (90. - f_altaz.alt.value)*np.pi/180.0         # remember: we need things in radians
Zm = (90. - m_altaz.alt.value)*np.pi/180.0
skyaz = f_altaz.az.value
data['rho'] = pd.Series(rho, index=data.index)
data['Z'] = pd.Series(Z, index=data.index)        # radians
data['Zm'] = pd.Series(Zm, index=data.index)
data['skyaz'] = pd.Series(skyaz, index=data.index)
I've added the variables to the pandas `DataFrame`, as that will help with plotting later. We can try plotting some of these variables against others to see how things look. Let's try a scatter plot of moon/sky separation vs. sky brightness, coloring the points according to lunar phase. I tried this with the pandas `scatter()` and it didn't look that great, so we'll do it with the `matplotlib` functions directly. With `matplotlib` we can also invert the y axis so that brighter is 'up'.
fig, axes = plt.subplots(1, 2, figsize=(15, 6))
sc = axes[0].scatter(data['rho'], data['magsky'], marker='.', c=data['alpha'], cmap='viridis_r')
axes[0].set_xlabel(r'$\rho$', fontsize=16)
axes[0].set_ylabel('Sky brightness (mag/sq-arc-sec)', fontsize=12)
axes[0].text(1.25, 0.5, "lunar phase", va='center', ha='right', rotation=90,
             transform=axes[0].transAxes, fontsize=12)
axes[0].invert_yaxis()
fig.colorbar(sc, ax=axes[0])
sc = axes[1].scatter(data['alpha'], data['magsky'], marker='.', c=data['rho'], cmap='viridis_r')
axes[1].set_xlabel('Lunar phase', fontsize=12)
axes[1].set_ylabel('Sky brightness (mag/sq-arc-sec)', fontsize=12)
axes[1].text(1.25, 0.5, r"$\rho$", va='center', ha='right', rotation=90,
             transform=axes[1].transAxes, fontsize=12)
axes[1].invert_yaxis()
ymin, ymax = axes[0].get_ylim()
fig.colorbar(sc, ax=axes[1])
There certainly seems to be a trend: the closer the moon is to full ($\alpha = 0$, yellow), the brighter the background, and the closer the moon is to the field (lower $\rho$), the higher the background. Looks good.

4) Fitting (Training) the Model

Let's try to fit this data with our model and solve for $m_{dark}$ and $k_X$, the only unknowns in the problem. For this we need to create a dummy function that we can use with `scipy`'s `leastsq` function. It needs to take a list of parameters (`p`) as its first argument, followed by any other arguments, and return the weighted difference between the model and the data. We don't have any weights (uncertainties), so it will just return the differences.
from scipy.optimize import leastsq

def func(p, alpha, rho, Z, Zm, magsky):
    mdark, kx = p
    return magsky - modelsky(alpha, rho, kx, Z, Zm, mdark)
We now run the least-squares function, which will find the parameters `p` that minimize the sum of squared residuals (i.e. $\chi^2$). `leastsq` takes as arguments the function we wrote above (`func`), an initial guess of the parameters, and a tuple of extra arguments needed by our function. It returns the best-fit parameters and a status code. We can print these out, but also use them in our `modelsky` function to get the prediction that we can compare to the observed data.
pars, stat = leastsq(func, [22, 0.2],
                     args=(data['alpha'], data['rho'], data['Z'], data['Zm'], data['magsky']))
print(pars)
# save the best-fit model and residuals
data['modelsky'] = pd.Series(modelsky(data['alpha'], data['rho'], pars[1],
                                      data['Z'], data['Zm'], pars[0]), index=data.index)
data['residuals'] = pd.Series(data['magsky'] - data['modelsky'], index=data.index)
Now that we have a model, we have a way to *predict* the sky brightness. So let's make the same two plots as above, but this time plotting the *model* brightnesses rather than the observed brightnesses, just to see if we get the same kinds of patterns/behaviours. This next cell is a copy of the earlier one, just changing `magsky` into `modelsky`.
fig, axes = plt.subplots(1, 2, figsize=(15, 6))
sc = axes[0].scatter(data['rho'], data['modelsky'], marker='.', c=data['alpha'], cmap='viridis_r')
axes[0].set_xlabel(r'$\rho$', fontsize=16)
axes[0].set_ylabel('Sky brightness (mag/sq-arc-sec)', fontsize=12)
axes[0].text(1.25, 0.5, "lunar phase", va='center', ha='right', rotation=90,
             transform=axes[0].transAxes, fontsize=12)
axes[0].invert_yaxis()
fig.colorbar(sc, ax=axes[0])
sc = axes[1].scatter(data['alpha'], data['modelsky'], marker='.', c=data['rho'], cmap='viridis_r')
axes[1].set_xlabel('Lunar phase', fontsize=12)
axes[1].set_ylabel('Sky brightness (mag/sq-arc-sec)', fontsize=12)
axes[1].text(1.25, 0.5, r"$\rho$", va='center', ha='right', rotation=90,
             transform=axes[1].transAxes, fontsize=12)
axes[1].invert_yaxis()
axes[0].set_ylim(ymin, ymax)
axes[1].set_ylim(ymin, ymax)
fig.colorbar(sc, ax=axes[1])
You will see that there are some patterns that are correctly predicted, but others that are not. In particular, there's a whole cloud of points with small $\alpha$ (bright moon) but faint observed backgrounds that is *not* predicted by the model. In other words, we observed some objects where the moon was relatively bright, yet the sky was relatively dark.

This is where I hit a bit of a wall in my investigation. It was not at all obvious where these points were coming from, because the data set was so large and we have so many variables at work. However, by luck this ended up being around the time that Shanon was playing around with [Bokeh](https://docs.bokeh.org/en/latest/index.html), and it turned out to be exactly what I needed to explore where things were not working correctly. Let's do that now.

5) Plotting Residuals

A good way to see where a model is failing is to plot the residuals (observed - model). Where the residuals are close to zero, the model is doing a good job, but where the residuals are large (positive or negative), the model is failing to capture something. A good diagnostic is to plot these residuals versus each of your variables and see where things go wrong. The great thing about Bokeh is that it gives a very powerful way to do this: linking graphs so that selecting points in one graph will select the corresponding points in all other graphs that share the same dataset. This is why we've been adding our variables to the pandas `DataFrame`, `data`: that's what Bokeh uses for plotting. In this code block we set up a Bokeh figure and plot 6 different "slices" through our multi-dimensional data. In the resulting plots, try selecting different regions of the upper-left panel (the residuals) to see if they correspond to interesting sets of parameters in the other panels.
from bokeh.plotting import figure
from bokeh.layouts import gridplot
from bokeh.io import show, output_notebook
from bokeh.models import ColumnDataSource
output_notebook()

source = ColumnDataSource(data)
TOOLS = ['box_select', 'lasso_select', 'reset', 'box_zoom', 'help']
vars = [('alpha', 'residuals'), ('alpha', 'rho'), ('alpha', 'Zm'),
        ('jd', 'alpha'), ('Z', 'Zm'), ('RA', 'Decl')]
plots = []
for var in vars:
    s = figure(tools=TOOLS, plot_width=300, plot_height=300)
    s.circle(*var, source=source, selection_color='red')
    s.xaxis.axis_label = var[0]
    s.yaxis.axis_label = var[1]
    plots.append(s)
#plots[0].line([17.8,22.3],[17.8,22.3], line_color='orangered')
p = gridplot([plots[0:3], plots[3:]])
show(p)
With a little data exploration, it's pretty obvious that the majority of the outlying points come from observations when the moon is relatively full but very low on (or even below) the horizon. The reason is that the airmass formula we implemented above has a problem with $Z_m > \pi/2$. To see this, we can simply plot `X(Z)` as a function of `Z`:
from matplotlib.pyplot import plot, xlabel, ylabel, ylim

Z = np.linspace(0, 3*np.pi/4, 100)   # make a range of zenith angles
plot(Z*180/np.pi, X(Z), '-')
xlabel('Zenith angle (degrees)')
ylabel('Airmass')
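The plot makes the problem visible: past $Z = 90^\circ$ the formula is being evaluated outside its domain of validity. One simple remedy, sketched below as an illustration rather than as the original analysis, is to exclude the observations with the moon at or below the horizon and refit:

# Sketch: mask out observations with the moon at or below the horizon
# (Zm >= pi/2) before refitting, since X(Z) is only meaningful above it.
good = data['Zm'] < np.pi/2
pars2, stat2 = leastsq(func, [22, 0.2],
                       args=(data['alpha'][good], data['rho'][good],
                             data['Z'][good], data['Zm'][good],
                             data['magsky'][good]))
print(pars2)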
Visual Data Analysis of Fraudulent Transactions
# initial imports
import pandas as pd
import datetime
import calendar
import plotly.express as px
import matplotlib.pyplot as plt
import hvplot.pandas
from sqlalchemy import create_engine
import psycopg2
%matplotlib inline

# create a connection to the database
engine = create_engine("postgresql://postgres:postgres@localhost:5432/fraud_detection")
Data Analysis Question 1

Use `hvPlot` to create a line plot showing a time series of the transactions across the whole year for **card holders 2 and 18**. In order to contrast the patterns of both card holders, create a line plot containing both lines. What difference do you observe between the consumption patterns? Could the difference be due to fraudulent transactions? Explain your rationale.
# loading data for card holders 2 and 18 from the database
query = """
        SELECT transaction.date, credit_card.id_card_holder, card_holder.name, credit_card.card,
               transaction.amount, merchant.merchant_name, merchant_category.merchant_category_name
        FROM card_holder
        LEFT JOIN credit_card ON credit_card.id_card_holder = card_holder.id
        LEFT JOIN transaction ON transaction.card = credit_card.card
        LEFT JOIN merchant ON merchant.id_merchant = transaction.id_merchant
        LEFT JOIN merchant_category ON merchant_category.id_merchant_category = merchant.id_merchant_category
        """
fraud_detection_df = pd.read_sql_query(query, engine)
fraud_detection_df.set_index("date", inplace=True)
fraud_detection_hourly_window = fraud_detection_df.between_time('07:00', '09:00')
fraud_detection_hourly_window.reset_index(inplace=True)
fraud_detection_hourly_window.set_index("id_card_holder", inplace=True)
card_holders_df = fraud_detection_hourly_window.loc[[2, 18]]
card_holders_df.head()

# plot for cardholder 2
first_card_holder = fraud_detection_hourly_window.loc[2]
first_card_holder_transactions = first_card_holder[["date", "amount"]]
first_card_holder_plot = first_card_holder_transactions.hvplot.line(
    x='date', y='amount', title="Cardholder id_2 transactions")
first_card_holder_plot

# calculate stats for cardholder 2
first_card_holder_mean = first_card_holder_transactions["amount"].mean()
first_card_holder_median = first_card_holder_transactions["amount"].median()
first_card_holder_max = first_card_holder_transactions["amount"].max()
first_card_holder_min = first_card_holder_transactions["amount"].min()
print(f" Mean value = ${first_card_holder_mean :.2f}")
print(f" Median Value = ${first_card_holder_median :.2f}")
print(f" Max Value = ${first_card_holder_max :.2f}")
print(f" Min Value = ${first_card_holder_min :.2f}")

# plot for cardholder 18
second_card_holder = fraud_detection_hourly_window.loc[18]
second_card_holder_transactions = second_card_holder[["date", "amount"]]
second_card_holder_plot = second_card_holder_transactions.hvplot.line(
    x='date', y='amount', title="Cardholder id_18 transactions")
second_card_holder_plot

# calculate stats for cardholder 18
second_card_holder_mean = second_card_holder_transactions["amount"].mean()
second_card_holder_median = second_card_holder_transactions["amount"].median()
second_card_holder_max = second_card_holder_transactions["amount"].max()
second_card_holder_min = second_card_holder_transactions["amount"].min()
print(f" Mean value = ${second_card_holder_mean :.2f}")
print(f" Median Value = ${second_card_holder_median :.2f}")
print(f" Max Value = ${second_card_holder_max :.2f}")
print(f" Min Value = ${second_card_holder_min :.2f}")

# combined plot for card holders 2 and 18
card_holder_transaction_comparison_plot = (first_card_holder_plot * second_card_holder_plot).opts(
    title="Transaction Comparison id_2 and id_18", show_legend=True)
card_holder_transaction_comparison_plot
# legend was added but not showing?
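On the question in the last comment above: one likely explanation (my note, not the original author's) is that holoviews builds an overlay legend from element labels, so giving each curve an explicit label via `relabel` before overlaying should make the legend appear:

# Possible fix for the missing legend: label each curve, then overlay.
labeled_comparison = (first_card_holder_plot.relabel('id_2')
                      * second_card_holder_plot.relabel('id_18')
                     ).opts(title="Transaction Comparison id_2 and id_18")
labeled_comparison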
Conclusions for Question 1
# Analysis of transactions between 7:00-9:00 for id_2 and id_18 shows that median and mean values
# are similar, so it appears no suspicious transactions are occurring. However, it is wise to
# confirm the validity of the small bar transactions (under $2) for id_18.
Data Analysis Question 2

Use `Plotly Express` to create a series of six box plots, one for each month, in order to identify how many outliers there could be per month for **card holder id 25**. By observing the consumption patterns, do you see any anomalies? Write your own conclusions about your insights.
# loading data of daily transactions from Jan to Jun 2018 for card holder 25
fraud_detection_df.reset_index(inplace=True)
fraud_detection_df.set_index("id_card_holder", inplace=True)
third_card_holder_df = fraud_detection_df.loc[25]
third_card_holder_df.reset_index(inplace=True)
third_card_holder_df.set_index("date", inplace=True)
third_card_holder_df.sort_index(ascending=True, inplace=True)
third_card_holder_suspicious_trans = third_card_holder_df.iloc[0:68]
third_card_holder_suspicious_trans.sort_index(inplace=True, ascending=True)
third_card_holder_suspicious_trans.head(10)

# change the numeric month to month names using the strftime formatter to create the date as a string
third_card_holder_suspicious_trans.reset_index(inplace=True)
third_card_holder_suspicious_trans.set_index("date", inplace=True)
third_card_holder_suspicious_trans.index = third_card_holder_suspicious_trans.index.strftime('%B')
third_card_holder_suspicious_trans.reset_index(inplace=True)
third_card_holder_suspicious_trans.set_index("index", inplace=True)
third_card_holder_suspicious_trans.head()

# creating the six box plots (note: done here with the pandas/matplotlib boxplot
# rather than Plotly Express, despite the question wording)
third_card_holder_suspicious_trans_plot = third_card_holder_suspicious_trans.boxplot(
    column="amount", by="index", figsize=(10, 5))
plt.title("Suspicious Transactions by Month Cardholder 25")
plt.ylabel("amount ($)")
plt.xlabel("month")
# need to find a way to sort months
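On the "sort months" note above: a common trick (sketched here as a suggestion, not part of the original solution) is to turn the month labels into an ordered categorical using the already-imported `calendar` module, so that groupby and boxplot follow calendar order instead of alphabetical order:

# Sketch: enforce calendar order (January..June) on the month index
month_order = list(calendar.month_name)[1:7]
third_card_holder_suspicious_trans.index = pd.CategoricalIndex(
    third_card_holder_suspicious_trans.index,
    categories=month_order, ordered=True)
third_card_holder_suspicious_trans.sort_index(inplace=True)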
Conclusions for Question 2
# Analysis of cardholder id_25's transactions shows that there were high-amount transactions
# between Jan-June (especially June, with 3 high amounts) that took place in pubs, bars,
# restaurants, and food trucks, suggesting misuse of the corporate credit card.

# identify small transactions of less than two dollars
fraud_detection_df = pd.read_sql_query(query, engine)
fraud_detection_df.set_index("date", inplace=True)
suspicious_small_transactions_df = fraud_detection_df[fraud_detection_df["amount"] < 2]
suspicious_small_transactions_df.head()
suspicious_small_transactions_df.groupby(["merchant_category_name"]).count()

# Analysis of the small-amount transactions shows that the riskiest places where card hacks
# can occur are restaurants, pubs, food trucks, bars and coffee shops.
Comparing and evaluating models
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("poster")
from PIL import Image

# Note: this is a legacy Python 2 / pre-0.18 scikit-learn notebook
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix

def cv_optimize(clf, parameters, X, y, n_jobs=1, n_folds=5, score_func=None):
    if score_func:
        gs = GridSearchCV(clf, param_grid=parameters, cv=n_folds, n_jobs=n_jobs, scoring=score_func)
    else:
        gs = GridSearchCV(clf, param_grid=parameters, n_jobs=n_jobs, cv=n_folds)
    gs.fit(X, y)
    print "BEST", gs.best_params_, gs.best_score_, gs.grid_scores_
    best = gs.best_estimator_
    return best

def do_classify(clf, parameters, indf, featurenames, targetname, target1val,
                mask=None, reuse_split=None, score_func=None, n_folds=5, n_jobs=1):
    subdf = indf[featurenames]
    X = subdf.values
    y = (indf[targetname].values == target1val)*1
    if mask is not None:
        print "using mask"
        Xtrain, Xtest, ytrain, ytest = X[mask], X[~mask], y[mask], y[~mask]
    if reuse_split is not None:
        print "using reuse split"
        Xtrain, Xtest, ytrain, ytest = (reuse_split['Xtrain'], reuse_split['Xtest'],
                                        reuse_split['ytrain'], reuse_split['ytest'])
    if parameters:
        clf = cv_optimize(clf, parameters, Xtrain, ytrain, n_jobs=n_jobs,
                          n_folds=n_folds, score_func=score_func)
    clf = clf.fit(Xtrain, ytrain)
    training_accuracy = clf.score(Xtrain, ytrain)
    test_accuracy = clf.score(Xtest, ytest)
    print "############# based on standard predict ################"
    print "Accuracy on training data: %0.2f" % (training_accuracy)
    print "Accuracy on test data:     %0.2f" % (test_accuracy)
    print confusion_matrix(ytest, clf.predict(Xtest))
    print "########################################################"
    return clf, Xtrain, ytrain, Xtest, ytest

from matplotlib.colors import ListedColormap
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])

def points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=True, colorscale=cmap_light,
                cdiscrete=cmap_bold, alpha=0.1, psize=10, zfunc=False):
    h = .02
    X = np.concatenate((Xtr, Xte))
    x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
    y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
    xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100), np.linspace(y_min, y_max, 100))
    #plt.figure(figsize=(10,6))
    if mesh:
        if zfunc:
            p0 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 0]
            p1 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
            Z = zfunc(p0, p1)
        else:
            Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
        Z = Z.reshape(xx.shape)
        plt.pcolormesh(xx, yy, Z, cmap=cmap_light, alpha=alpha, axes=ax)
    ax.scatter(Xtr[:, 0], Xtr[:, 1], c=ytr-1, cmap=cmap_bold, s=psize, alpha=alpha, edgecolor="k")
    # and testing points
    yact = clf.predict(Xte)
    ax.scatter(Xte[:, 0], Xte[:, 1], c=yte-1, cmap=cmap_bold, alpha=alpha, marker="s", s=psize+10)
    ax.set_xlim(xx.min(), xx.max())
    ax.set_ylim(yy.min(), yy.max())
    return ax, xx, yy

def points_plot_prob(ax, Xtr, Xte, ytr, yte, clf, colorscale=cmap_light,
                     cdiscrete=cmap_bold, ccolor=cm, psize=10, alpha=0.1):
    ax, xx, yy = points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=False, colorscale=colorscale,
                             cdiscrete=cdiscrete, psize=psize, alpha=alpha)
    Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
    Z = Z.reshape(xx.shape)
    plt.contourf(xx, yy, Z, cmap=ccolor, alpha=.2, axes=ax)
    cs2 = plt.contour(xx, yy, Z, cmap=ccolor, alpha=.6, axes=ax)
    plt.clabel(cs2, fmt='%2.1f', colors='k', fontsize=14, axes=ax)
    return ax
The churn example

This is a dataset from a telecom company, of their customers. Based on various features of these customers and their calling plans, we want to predict if a customer is likely to leave the company. This is expensive for the company, as a lost customer means lost monthly revenue!
# data set from yhathq: http://blog.yhathq.com/posts/predicting-customer-churn-with-sklearn.html
dfchurn = pd.read_csv("https://dl.dropboxusercontent.com/u/75194/churn.csv")
dfchurn.head()
Let's write some code to feature-select and clean our data first, of course.
dfchurn["Int'l Plan"] = dfchurn["Int'l Plan"]=='yes' dfchurn["VMail Plan"] = dfchurn["VMail Plan"]=='yes' colswewant_cont=[ u'Account Length', u'VMail Message', u'Day Mins', u'Day Calls', u'Day Charge', u'Eve Mins', u'Eve Calls', u'Eve Charge', u'Night Mins', u'Night Calls', u'Night Charge', u'Intl Mins', u'Intl Calls', u'Intl Charge', u'CustServ Calls'] colswewant_cat=[u"Int'l Plan", u'VMail Plan']
Asymmetry

First notice that our data set is highly asymmetric, with positives, or people who churned, only making up 14-15% of the samples.
ychurn = np.where(dfchurn['Churn?'] == 'True.', 1, 0)
100*ychurn.mean()
This means that a classifier which predicts that EVERY customer is a negative (does not churn) has an accuracy rate of 85-86%. But is accuracy the correct metric?

Remember the confusion matrix? We reproduce it here for convenience:

- the samples that are +ive and the classifier predicts as +ive are called True Positives (TP)
- the samples that are -ive and the classifier predicts (wrongly) as +ive are called False Positives (FP)
- the samples that are -ive and the classifier predicts as -ive are called True Negatives (TN)
- the samples that are +ive and the classifier predicts as -ive are called False Negatives (FN)

A classifier produces a confusion matrix which looks like this:

![hwimages](./images/confusionmatrix.png)

IMPORTANT NOTE: In sklearn, to obtain the confusion matrix in the form above, always have the observed `y` first, i.e., use it as `confusion_matrix(y_true, y_pred)`.

Consider two classifiers, A and B, as in the image below. Suppose they were trained on a balanced set. Let A make its mistakes only through false positives: non-churners (n) predicted to churn (Y), while B makes its mistakes only through false negatives: churners (p) predicted not to churn (N). Now consider what this looks like on an unbalanced set, where the ps (churners) are far less numerous than the ns (non-churners). It would seem that B makes far fewer misclassifications, judged by accuracy, than A, and would thus be a better classifier.

![m:abmodeldiag](./images/abmodeldiag.png)

However, is B really the best classifier for us? False negatives are people who churn but whom we predicted not to churn. These are very costly for us. So for us, classifier A might be better, even though, on the unbalanced set, it is way less accurate! (A toy numeric illustration follows below.)

Classifiers should be about the Business End: keeping costs down

Establishing Baseline Classifiers via profit or loss

Whenever you are comparing classifiers, you should always establish a baseline, one way or the other. In our churn dataset there are two obvious baselines: assume every customer won't churn, and assume all customers will churn.

The former baseline will, on our dataset, straight away give you an 85.5% accuracy. If you are planning on using accuracy, any classifier you write ought to beat this. The other baseline, from an accuracy perspective, is less interesting: it would only have a 14.5% correct rate.

But as we have seen, on such asymmetric data sets, accuracy is just not a good metric. So what should we use?

**A metric ought to hew to the business function that the classifier is intended for.**

In our case, we want to minimize the cost/maximize the profit for the telecom. But to do this we need to understand the business situation, so we write a **utility** (or, equivalently, **cost**) matrix associated with the 4 scenarios that the confusion matrix describes.

![cost matrix](images/costmatrix.png)

Remember that +ives or 1s are churners, and -ives or 0s are the ones that don't churn. Let's assume we make an offer with an administrative cost of \$3 and an offer cost of \$100, an incentive for the customer to stay with us. If a customer leaves us, we lose the customer lifetime value, some measure of the lost profit from that customer. Let's assume this is the average number of months a customer stays with the telecom times the net revenue from the customer per month. We'll assume 3 years and a \$30/month margin per lost customer, for roughly a \$1000 loss.
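Here is the promised toy illustration of the A-versus-B point (numbers invented for the example): on a test set of 1000 customers with 860 non-churners and 140 churners, let A misclassify 30% of the negatives (all false positives) and B misclassify 30% of the positives (all false negatives). B looks far more accurate, yet each of B's mistakes is a lost customer.

# Toy illustration: accuracy misleads on imbalanced data (invented numbers).
# Rows are actual (0, 1); columns are predicted (0, 1).
import numpy as np
conf_A = np.array([[602, 258],    # A: 258 false positives, 0 false negatives
                   [  0, 140]])
conf_B = np.array([[860,   0],    # B: 0 false positives, 42 false negatives
                   [ 42,  98]])
print "accuracy A: %.3f" % (np.trace(conf_A)/float(conf_A.sum()))   # 0.742
print "accuracy B: %.3f" % (np.trace(conf_B)/float(conf_B.sum()))   # 0.958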
admin_cost = 3
offer_cost = 100
clv = 1000   # customer lifetime value
- TN = people we predicted not to churn who won't churn. We associate no cost with this, as they continue being our customers.
- FP = people we predict to churn, who won't. Let's associate an `admin_cost + offer_cost` cost per customer with this, as we will spend some money on getting them not to churn, but we will lose this money.
- FN = people we predict won't churn, so we send them nothing. But they will churn. This is the big loss, the `clv`.
- TP = people who we predict will churn, and they will. These are the people we can do something with, so we make them an offer. Say a fraction f accept it. Our cost is `f * offer_cost + (1-f)*(clv + admin_cost)`.

This model can definitely be made more complex. Let's assume a conversion fraction of 0.5.
conv = 0.5
tnc = 0.
fpc = admin_cost + offer_cost
fnc = clv
tpc = conv*offer_cost + (1. - conv)*(clv + admin_cost)
cost = np.array([[tnc, fpc], [fnc, tpc]])
print cost
[[    0.    103. ]
 [ 1000.    551.5]]
We can compute the average cost (profit) per person using the following formula, which calculates the "expected value" of the per-customer loss/cost (profit):

\begin{eqnarray}
Cost &=& c(1P,1A) \times p(1P,1A) + c(1P,0A) \times p(1P,0A) + c(0P,1A) \times p(0P,1A) + c(0P,0A) \times p(0P,0A) \\
&=& \frac{TP \times c(1P,1A) + FP \times c(1P,0A) + FN \times c(0P,1A) + TN \times c(0P,0A)}{N}
\end{eqnarray}

where N is the total size of the test set, 1P means predictions for class 1 (positives), and 0A means actual values of the negative class in the test set. The first formula above just weighs the cost of each combination of observed and predicted with the out-of-sample probability of that combination occurring. The probabilities are "estimated" by the corresponding confusion matrix on the test set. (We'll provide a proof of this later in the course for the mathematically inclined, or just come bug Rahul at office hour if you can't wait!)

The cost can thus be found by multiplying the cost matrix by the confusion matrix elementwise, and dividing by the sum of the elements in the confusion matrix, i.e. the test set size. We implement this process of finding the average cost per person in the `average_cost` function below:
def average_cost(y, ypred, cost):
    c = confusion_matrix(y, ypred)
    score = np.sum(c*cost)/np.sum(c)
    return score
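As a quick check of `average_cost` with invented numbers: take 10 customers, 2 of whom churn, and predict that nobody churns. The only cost incurred is the `clv` for the 2 false negatives, so the average should be $2 \times 1000 / 10 = 200$ per person.

# Toy check of average_cost (invented data): 2 churners out of 10, predict all 0
ytoy = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
ypredtoy = np.zeros(10, dtype="int")
print average_cost(ytoy, ypredtoy, cost)   # 2*1000/10 = 200.0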
No customer churns and we send nothing

We haven't made any calculations yet! Let's fix that omission and create our training and test sets.
churntrain, churntest = train_test_split(xrange(dfchurn.shape[0]), train_size=0.6)
churnmask = np.ones(dfchurn.shape[0], dtype='int')
churnmask[churntrain] = 1
churnmask[churntest] = 0
churnmask = (churnmask == 1)
churnmask

testchurners = dfchurn['Churn?'][~churnmask].values == 'True.'
testsize = dfchurn[~churnmask].shape[0]
ypred_dste = np.zeros(testsize, dtype="int")
print confusion_matrix(testchurners, ypred_dste)
dsteval = average_cost(testchurners, ypred_dste, cost)
dsteval
Not doing anything costs us \$140 per customer.

All customers churn, we send everyone an offer
ypred_ste = np.ones(testsize, dtype="int")
print confusion_matrix(testchurners, ypred_ste)
steval = average_cost(testchurners, ypred_ste, cost)
steval
Making offers to everyone costs us even more, not surprisingly. The first baseline is the one to beat!

Naive Bayes Classifier

So let's try a classifier. Here we try one known as Gaussian Naive Bayes. We'll just use the default parameters, since the actual details are not of importance to us.
from sklearn.naive_bayes import GaussianNB

clfgnb = GaussianNB()
clfgnb, Xtrain, ytrain, Xtest, ytest = do_classify(clfgnb, None, dfchurn,
                                                   colswewant_cont + colswewant_cat,
                                                   'Churn?', "True.", mask=churnmask)
confusion_matrix(ytest, clfgnb.predict(Xtest))
average_cost(ytest, clfgnb.predict(Xtest), cost)
Ok! We did better! But is this the true value of our cost? To answer this question, we need to ask another: what exactly is `clf.predict` doing?

Changing the Prediction threshold, and the ROC Curve

Our dataset is a very lopsided data set, with 86% of samples being negative. We now know that in such a case, accuracy is not a very good measure of a classifier.

We have also noticed that, as is often the case in situations where one class dominates the other, the costs of the two kinds of misclassification differ: we saw above that FN are more costly in our case than FP. In the case of such asymmetric costs, the `sklearn` API function `predict` is useless, as it assumes a threshold probability of 0.5 for calling a sample +ive; that is, if a sample has a greater than 0.5 chance of being a 1, assume it is so. Clearly, when FN are more expensive than FP, you want to lower this threshold: you are OK with falsely classifying -ive examples as +ive. We play with this below by choosing a threshold `t` in the function `repredict`, which makes the classification at a threshold other than 0.5.

You can think about this very starkly from the perspective of the cancer doctor. Do you really want to set a threshold of 0.5 probability to predict whether a patient has cancer or not? The false-negative problem, i.e., the chance that you predict someone doesn't have cancer who does, is much higher for such a threshold. You could kill someone by telling them not to get a biopsy. Why not play it safe and assume a much lower threshold: for example, if the probability of 1 (cancer) is greater than 0.05, we'll call it a 1.

One caveat: we cannot repredict 1's and 0's this way for the linear SVM model `clfsvm`, as the SVM is what's called a "discriminative" classifier: it directly gives us a decision function, with no probabilistic explanation and no probabilities. (I lie: an SVM can be retrofitted with probabilities, see http://scikit-learn.org/stable/modules/svm.html#scores-probabilities, but these are expensive and not always well calibrated; calibration of probabilities will be covered later in our class.) What do we do? The SVM does give us a measure of how far each sample is from the "margin", and this is an ordered set of distances, just as the probabilities in a statistical classifier are. An ordering on the distances is just like an ordering on the probabilities: a sample far on the positive side of the line is an almost certain 1, just like a sample with a 0.99 probability of being a 1. For this reason, too, we turn to ROC curves below.
def repredict(est, t, xtest):
    probs = est.predict_proba(xtest)
    p0 = probs[:, 0]
    p1 = probs[:, 1]
    ypred = (p1 >= t)*1
    return ypred

average_cost(ytest, repredict(clfgnb, 0.3, Xtest), cost)
plt.hist(clfgnb.predict_proba(Xtest)[:, 1])
Aha! At a 0.3 threshold we save more money!

We see that in this situation, where we have asymmetric costs, we do need to change the threshold at which we make our positive and negative predictions. We need to change the threshold because we much dislike false negatives (same as in the cancer case). Thus we must accept many more false positives by setting such a low threshold; otherwise, we let too many people slip through our hands who would have stayed with our telecom company given an incentive. But how do we pick this threshold?

The ROC Curve

ROC curves are actually a set of classifiers, in which we move the threshold for classifying a sample as positive from 0 to 1. (In the standard scenario, where we use classifier accuracy, this threshold is implicitly set at 0.5.) We talked more about how to create a ROC curve in the accompanying lab to this one, so here we shall just repeat the ROC-curve-making code from there.
from sklearn.metrics import roc_curve, auc

def make_roc(name, clf, ytest, xtest, ax=None, labe=5, proba=True, skip=0):
    initial = False
    if not ax:
        ax = plt.gca()
        initial = True
    if proba:
        fpr, tpr, thresholds = roc_curve(ytest, clf.predict_proba(xtest)[:, 1])
    else:
        fpr, tpr, thresholds = roc_curve(ytest, clf.decision_function(xtest))
    roc_auc = auc(fpr, tpr)
    if skip:
        l = fpr.shape[0]
        ax.plot(fpr[0:l:skip], tpr[0:l:skip], '.-', alpha=0.3,
                label='ROC curve for %s (area = %0.2f)' % (name, roc_auc))
    else:
        ax.plot(fpr, tpr, '.-', alpha=0.3,
                label='ROC curve for %s (area = %0.2f)' % (name, roc_auc))
    label_kwargs = {}
    label_kwargs['bbox'] = dict(boxstyle='round,pad=0.3', alpha=0.2)
    for k in xrange(0, fpr.shape[0], labe):
        # from https://gist.github.com/podshumok/c1d1c9394335d86255b8
        threshold = str(np.round(thresholds[k], 2))
        ax.annotate(threshold, (fpr[k], tpr[k]), **label_kwargs)
    if initial:
        ax.plot([0, 1], [0, 1], 'k--')
        ax.set_xlim([0.0, 1.0])
        ax.set_ylim([0.0, 1.05])
        ax.set_xlabel('False Positive Rate')
        ax.set_ylabel('True Positive Rate')
        ax.set_title('ROC')
    ax.legend(loc="lower right")
    return ax

make_roc("gnb", clfgnb, ytest, Xtest, None, labe=50)
OK. Now that we have a ROC curve that shows us different thresholds, we need to figure out how to pick the appropriate threshold from it. But first, let us try another classifier.

Classifier Comparison: Decision Trees

Decision trees are very simple things we are all familiar with. If a problem is multi-dimensional, the tree goes dimension by dimension and makes cuts in the space to create a classifier.
from sklearn.tree import DecisionTreeClassifier

reuse_split = dict(Xtrain=Xtrain, Xtest=Xtest, ytrain=ytrain, ytest=ytest)
We train a simple decision tree classifier.
clfdt = DecisionTreeClassifier()
clfdt, Xtrain, ytrain, Xtest, ytest = do_classify(clfdt, {"max_depth": range(1, 10, 1)},
                                                  dfchurn, colswewant_cont + colswewant_cat,
                                                  'Churn?', "True.", reuse_split=reuse_split)
confusion_matrix(ytest, clfdt.predict(Xtest))
Compare!
ax = make_roc("gnb", clfgnb, ytest, Xtest, None, labe=60)
make_roc("dt", clfdt, ytest, Xtest, ax, labe=1)
How do we read which classifier is better from a ROC curve? The usual advice is to go to the north-west corner of the ROC curve, as that is closest to TPR=1, FPR=0. But that's not our setup here: we have this asymmetric data set. The other advice is to look at the classifier with the highest AUC. But as we can see in the image below, captured from a run of this lab, the AUC is the same, yet the classifiers seem to have very different performances in different parts of the graph.

![rocs](./images/churnrocs.png)

And then there is the question of figuring out what threshold to choose as well. To answer both of these, we are going to have to turn back to cost.

Reprediction again: Now with Cost or Risk

You can use the utility or risk matrix to provide a threshold to pick for our classifier. The key idea is that we want to minimize cost on our test set, so for each sample, simply pick the class which does that. Decision theory is the branch of statistics that speaks to this: it's the theory which tells us how to make a positive or negative prediction for a given sample.

Do you remember the log loss in logistic regression and the hinge loss in the SVM? The former, for example, gave us a bunch of probabilities which we needed to turn into decisions about what the samples are. In the latter, it's the values the decision function gives us. There is, then, a second cost or risk or loss involved in machine learning: the decision loss.

What do we mean by a "decision" exactly? We'll use the letter g here to indicate a decision, in both the regression and classification problems. In the classification problem, one example of a decision is the process used to choose the class of a sample, given the probability of being in that class. As another example, consider the cancer story from the previous chapter. The decision may be: ought we biopsy, or ought we not? By minimizing the estimation risk, we obtain a probability that the patient has cancer. We must mix these probabilities with "business knowledge" or "domain knowledge" to make a decision.

(As an aside, this is true in regression as well: there are really two losses there. The first one, the one equivalent to the log loss, is the one where we say that at each point the prediction for y is a gaussian; the samples of this gaussian come from the bootstrap we make on the original data set, with each replication leading to a new line and a distribution for the prediction at a point x. But usually in a regression we just quote the mean of this distribution at each point, the regression line E[y|x]. Why the mean? The mean comes from choosing a least-squares decision loss; if we chose an L1 loss, we'd be looking at the median.)

**The cost matrix we have been using above is exactly what goes into this decision loss!!**

Decision Theory Math

To understand this, let's follow through with a bit of math (you can safely skip this section if you are not interested). We simply weigh each combination's loss by the probability that that combination can happen:

$$ R_{g}(x) = \sum_y l(y,g(x)) \, p(y|x) $$

That is, we calculate the **average risk**, over all choices y, of making choice g for a given sample. Then, if we want to calculate the overall risk, given all the samples in our set, we calculate:

$$R(g) = \sum_x p(x) R_{g}(x)$$

It is sufficient to minimize the risk at each point or sample to minimize the overall risk, since $p(x)$ is always positive.

Consider the two-class classification case. Say we make a "decision g about which class" at a sample x. Then:

$$R_g(x) = l(1, g)p(1|x) + l(0, g)p(0|x).$$

For the "decision" $g=1$ we have:

$$R_1(x) = l(1,1)p(1|x) + l(0,1)p(0|x),$$

and for the "decision" $g=0$ we have:

$$R_0(x) = l(1,0)p(1|x) + l(0,0)p(0|x).$$

Now, we'd choose $1$ for the sample at $x$ if:

$$R_1(x) \lt R_0(x),$$

that is, if

$$ p(1|x)(l(1,1) - l(1,0)) \lt p(0|x)(l(0,0) - l(0,1)). $$

This gives us a ratio `r` between the probabilities to use in making a prediction (we assume this holds for all samples). So, to choose '1':

$$p(1|x) \gt r \, p(0|x), \quad \mathrm{where} \quad r=\frac{l(0,1) - l(0,0)}{l(1,0) - l(1,1)} =\frac{c_{FP} - c_{TN}}{c_{FN} - c_{TP}}.$$

This may also be written as:

$$p(1|x) \gt t = \frac{r}{1+r}.$$

If you assume that true positives and true negatives have no cost, and the cost of a false positive is equal to that of a false negative, then $r=1$ and the threshold is the usual intuitive $t=0.5$.
cost

def rat(cost):
    return (cost[0, 1] - cost[0, 0])/(cost[1, 0] - cost[1, 1])

def c_repredict(est, c, xtest):
    r = rat(c)
    print r
    t = r/(1. + r)
    print "t=", t
    probs = est.predict_proba(xtest)
    p0 = probs[:, 0]
    p1 = probs[:, 1]
    ypred = (p1 >= t)*1
    return ypred

average_cost(ytest, c_repredict(clfdt, cost, Xtest), cost)
0.229654403567
t= 0.18676337262
For reasons that will become clearer in a later lab, this value turns out to be only approximate, and we are better off using a ROC curve or a cost curve (below) to find the minimum cost. However, it will get us in the right ballpark of the threshold we need. Note that the threshold itself depends only on the costs and is independent of the classifier.
ts = np.arange(0.02, 1.0, 0.02)   # a grid of thresholds to scan (ts was not defined in the original cell)
plt.plot(ts, [average_cost(ytest, repredict(clfdt, t, Xtest), cost) for t in ts])
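Reading the minimum off that curve programmatically is a one-liner; this small addition of mine just picks the threshold on the grid with the lowest average cost:

# Pick the grid threshold that minimizes the average cost on the test set
costs = [average_cost(ytest, repredict(clfdt, t, Xtest), cost) for t in ts]
print "best threshold:", ts[np.argmin(costs)], "cost per person:", min(costs)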
Note that none of this can be done for classifiers that don't provide probabilities. So, once again, we turn to ROC curves to help us out.

Model selection from Cost and ROC

Notice that the ROC curve has a very interesting property: if you look at the confusion matrix, TPR is only calculated from the observed "1" row while FPR is calculated from the observed "0" row. This means that the ROC curve is independent of the class balance/imbalance on the test set, and thus works for all ratios of positive to negative samples. The balance picks a point on the curve, as you can read below. Let's rewrite the cost equation from before:

\begin{eqnarray}
Cost &=& c(1P,1A) \times p(1P,1A) + c(1P,0A) \times p(1P,0A) + c(0P,1A) \times p(0P,1A) + c(0P,0A) \times p(0P,0A) \\
&=& p(1A) \times \left ( c(1P,1A) \times p(1P | 1A) + c(0P,1A) \times p(0P | 1A) \right ) \\
&+& p(0A) \times \left ( c(1P,0A) \times p(1P | 0A) + c(0P,0A) \times p(0P | 0A) \right ) \\
&=& p(1A) \times \left ( c(1P,1A) \times TPR + c(0P,1A) \times (1 - TPR)\right ) \\
&+& p(0A) \times \left ( c(1P,0A) \times FPR + c(0P,0A) \times (1 - FPR) \right )
\end{eqnarray}

This can then be used to write TPR in terms of FPR, which, as you can see below, is a line if you fix the cost. So lines on the graph correspond to a fixed cost; of course they must intersect the ROC curve to be achievable by our classifier.

$$TPR = \frac{1}{p(1A)(c_{FN} - c_{TP})} \left ( p(1A) c_{FN} + p(0A) c_{TN} - Cost \right ) + r \frac{p(0A)}{p(1A)} \times FPR$$

There are three observations to be made from here:

1. The slope is the reprediction ratio $r$ multiplied by the negative-positive imbalance. In the purely asymmetric case, the ratio $r$ is the ratio of the false-positive cost to the false-negative cost. Thus, for the balanced case, low slopes penalize false negatives and correspond to low thresholds.
2. When imbalance is included, a much more middling slope is achieved, since a low $r$ usually comes with a high negative-positive imbalance. So we still usually end up finding a model somewhere in the northwest quadrant.
3. The line you want is a tangent line. Why? The tangent line has the highest intercept. Since the cost is subtracted, the highest intercept corresponds to the lowest cost! A diagram illustrates this for balanced classes:

![asyroc](images/asyroc.png)

So one can use the tangent-line method to find the classifier we ought to use, and multiple questions about ROC curves now get answered:

(1) For a balanced data set, with equal misclassification costs and no cost for true positives and true negatives, the slope is 1. Thus 45-degree lines are what we want, and hence closest to the north-west corner, as that's where a 45-degree line would be tangent.

(2) Classifiers which have some part of their ROC curve closer to the northwest corner than others have tangent lines with higher intercepts and thus lower cost.

(3) For any other case, find the line!
print rat(cost)
slope = rat(cost)*(np.mean(ytest == 0)/np.mean(ytest == 1))
slope

z1 = np.arange(0., 1., 0.02)

def plot_line(ax, intercept):
    plt.figure(figsize=(12, 12))
    ax = plt.gca()
    ax.set_xlim([0.0, 1.0])
    ax.set_ylim([0.0, 1.0])
    make_roc("gnb", clfgnb, ytest, Xtest, ax, labe=60)
    make_roc("dt", clfdt, ytest, Xtest, ax, labe=1)
    ax.plot(z1, slope*z1 + intercept, 'k-')

from IPython.html.widgets import interact, fixed
interact(plot_line, ax=fixed(ax), intercept=(0.0, 1.0, 0.02))
As you can see, our slope actually lands on the rising part of the curve, even with the imbalance (since the cost ratio isn't too small; an analyst should play around with the assumptions that went into the cost matrix!).

Cost curves

The proof is always in the pudding. So far we have used a method to calculate a rough threshold from the cost/utility matrix, and seen the ROC curve, which implements one classifier per threshold, to pick an appropriate model. But why not just plot the cost/profit (per person) per threshold on a ROC-like curve, to see which classifier maximizes profit/minimizes cost?

Just like in a ROC curve, we go down the list of samples sorted by score or probability. One by one we add an additional sample to our positive samples, noting down the attendant classifier's TPR, FPR, and threshold. In addition to what we do for the ROC curve, we now also note down the percentage of our list of samples predicted as positive. Remember we start from the most positive sample, where the percentage labelled as positive would be minuscule, like 0.1% or so, and the threshold something like 0.99 in probability. As we decrease the threshold, the percentage predicted to be positive clearly increases, until everything is predicted positive at a threshold of 0.

What we now do is, at each such additional sample/threshold (given to us by the `roc_curve` function from `sklearn`), calculate the expected profit per person and plot it against the percentage predicted positive at that threshold, producing a profit curve. Thus small percentages correspond to the samples most likely to be positive: a percentage of 8% means the top 8% of our samples ranked by likelihood of being positive. As in the ROC curve case, we use `sklearn`'s `roc_curve` function to return us a set of thresholds with TPRs and FPRs.
def percentage(tpr, fpr, priorp, priorn):
    perc = tpr*priorp + fpr*priorn
    return perc

def av_cost2(tpr, fpr, cost, priorp, priorn):
    profit = priorp*(cost[1][1]*tpr + cost[1][0]*(1. - tpr)) + \
             priorn*(cost[0][0]*(1. - fpr) + cost[0][1]*fpr)
    return profit

def plot_cost(name, clf, ytest, xtest, cost, ax=None, threshold=False, labe=200, proba=True):
    initial = False
    if not ax:
        ax = plt.gca()
        initial = True
    if proba:
        fpr, tpr, thresholds = roc_curve(ytest, clf.predict_proba(xtest)[:, 1])
    else:
        fpr, tpr, thresholds = roc_curve(ytest, clf.decision_function(xtest))
    priorp = np.mean(ytest)
    priorn = 1. - priorp
    ben = []
    percs = []
    for i, t in enumerate(thresholds):
        perc = percentage(tpr[i], fpr[i], priorp, priorn)
        ev = av_cost2(tpr[i], fpr[i], cost, priorp, priorn)
        ben.append(ev)
        percs.append(perc*100)
    ax.plot(percs, ben, '-', alpha=0.3, markersize=5, label='cost curve for %s' % name)
    if threshold:
        label_kwargs = {}
        label_kwargs['bbox'] = dict(boxstyle='round,pad=0.3', alpha=0.2)
        for k in xrange(0, fpr.shape[0], labe):
            # from https://gist.github.com/podshumok/c1d1c9394335d86255b8
            threshold = str(np.round(thresholds[k], 2))
            ax.annotate(threshold, (percs[k], ben[k]), **label_kwargs)
    ax.legend(loc="lower right")
    return ax

ax = plot_cost("gnb", clfgnb, ytest, Xtest, cost, threshold=True, labe=50)
plot_cost("dt", clfdt, ytest, Xtest, cost, ax, threshold=True, labe=2)
Note that the customers on the left of this graph are the ones most likely to churn (be positive). Thus, if you had a finite budget, you should be targeting them! Finding the best classifier has a real consequence: you save money!!!

![costcurves](./images/costcurves.png)
cost
import os
import json
import shutil
import urllib.request
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Please use the latest version of CmdStanPy
!pip install --upgrade cmdstanpy

# Install pre-built CmdStan binary
# (faster than compiling from source via install_cmdstan() function)
tgz_file = 'colab-cmdstan-2.23.0.tar.gz'
tgz_url = 'https://github.com/stan-dev/cmdstan/releases/download/v2.23.0/colab-cmdstan-2.23.0.tar.gz'
if not os.path.exists(tgz_file):
    urllib.request.urlretrieve(tgz_url, tgz_file)
    shutil.unpack_archive(tgz_file)

# Specify CmdStan location via environment variable
os.environ['CMDSTAN'] = './cmdstan-2.23.0'

# Check CmdStan path
from cmdstanpy import CmdStanModel, cmdstan_path
cmdstan_path()

!pip install arviz
import arviz as az
Question 16_1 - discoveries
stan_text = '''data {
  int N;
  int<lower=0> X[N];
}
parameters {
  real<lower=0> mu;
  real<lower=0> kappa;
}
model {
  X ~ neg_binomial_2(mu, kappa);
  mu ~ lognormal(2, 1);
  kappa ~ lognormal(2, 1);
}
generated quantities {
  int<lower=0> XSim[N];
  for (i in 1:N) {
    XSim[i] = neg_binomial_2_rng(mu, kappa);   // '=' replaces the deprecated '<-' assignment
  }
}'''

with open('stan_file.stan', 'w') as f:
    f.write(stan_text)
!cat stan_file.stan

stan_model = CmdStanModel(stan_file='stan_file.stan')

url = 'https://raw.githubusercontent.com/alexandrahotti/Solutions-to-A-Students-Guide-to-Bayesian-Statistics-by-Ben-Lambert/master/All_data/evaluation_discoveries.csv'
df = pd.read_csv(url, error_bad_lines=False)
data = {'X': df.discoveries.to_numpy(), 'N': df.shape[0]}

stan_posterior = stan_model.sample(data=data)
stan_posterior.diagnose()
stan_posterior.summary().round(decimals=3).iloc[1:4, :]
stan_sample = stan_posterior.get_drawset()

az_infdata_obj = az.from_cmdstanpy(
    posterior=stan_posterior,
    posterior_predictive="XSim",
    observed_data=data)
az_infdata_obj

az.plot_autocorr(az_infdata_obj)
az.plot_pair(az_infdata_obj)
az.plot_density(az_infdata_obj)
az.plot_trace(az_infdata_obj)

stan_sample.drop(columns=['lp__', 'accept_stat__', 'stepsize__', 'treedepth__',
                          'n_leapfrog__', 'divergent__', 'energy__', 'mu', 'kappa'],
                 inplace=True)
posterior_checks_max = np.amax(stan_sample, axis=1)
(posterior_checks_max >= 12).sum()/float(len(posterior_checks_max))

(df.discoveries - stan_sample['XSim.1']).dropna().plot()
plt.acorr((df.discoveries - stan_sample['XSim.1']).dropna())
_____no_output_____
Apache-2.0
Chapter_16_questions.ipynb
cormach/bayesian_stats_by_b_lambert
Merge intervals: Given an array of intervals where intervals[i] = [start_i, end_i], merge all overlapping intervals, and return an array of the non-overlapping intervals that cover all the intervals in the input. From LeetCode: https://leetcode.com/problems/merge-intervals/
from operator import itemgetter

def merge_Intervals(intervals):
    merged_intervals = []
    # sort by start so overlapping intervals become adjacent: O(n log n) overall
    sorted_intervals = sorted(intervals, key=itemgetter(0))
    for i in range(len(sorted_intervals)):
        if len(merged_intervals) == 0 or merged_intervals[-1][1] < sorted_intervals[i][0]:
            # no overlap with the last merged interval: start a new one
            merged_intervals.append(sorted_intervals[i])
        else:
            # overlap: extend the end of the last merged interval
            merged_intervals[-1][1] = max(merged_intervals[-1][1], sorted_intervals[i][1])
    return merged_intervals

merge_Intervals([[1, 4], [4, 5]])  # -> [[1, 5]]
_____no_output_____
Apache-2.0
Merge Intervals/Merge_Intervals.ipynb
LucasColas/Coding-Problems
Lists: A list is an ordered (not necessarily sorted) sequence of values.
#You create a new list using square brackets primes = [] print(primes) type(primes) #Create a list with some values primes = [2, 3, 5, 7, 11, 13, 17, 19] print(primes)
[2, 3, 5, 7, 11, 13, 17, 19]
MIT
08 - Lists.ipynb
dschenck/Python-crash-course
1. Operations
#Concatenate two lists: combine two lists to create a new list evens = [2, 4, 6, 8] odds = [1, 3, 5, 7, 9] numbers = evens + odds print(numbers) #Check whether a value is in a list print(5 in [1, 2, 3, 4, 5]) print(1 in primes) #Sequence repetition ripples = [1,2,3] * 3 print(ripples)
[1, 2, 3, 1, 2, 3, 1, 2, 3]
MIT
08 - Lists.ipynb
dschenck/Python-crash-course
2. Built-in function
print("Maximum:", max(primes)) print("Minimum:", min(primes)) print(len(primes), "items") print("Sum of items", sum(primes)) #But of course, the values of the list must support summation #This will not work print(sum(["David", "Celine", "Camille"])) #If your list is list of boolean values, you can use the any and all #All returns True is all the values are True #Any returns True if one is at least True x = [True, True, True] y = [True, False, True] print(any(x)) print(any(y)) print(all(x)) print(all(y))
True True True False
MIT
08 - Lists.ipynb
dschenck/Python-crash-course
3. Indexing
#You can access to a particular value of a list via its index #Note that in Python, the first element has index 0 #Hence the last element has index = len(x) - 1 print(primes[0]) print(primes[1]) print(primes[3]) #Negative indices allow you to go from the end of the list print(primes[-1]) print(primes[-2]) print(primes[-5]) #By definition therefore, the below is True print(primes[0] == primes[-len(primes)])
True
MIT
08 - Lists.ipynb
dschenck/Python-crash-course
4. Slicing
#Recall the previous list of prime numbers
primes = [2, 3, 5, 7, 11, 13, 17, 19]

#Slicing allows you to take a slice - or a chop - of the list
#and return a new list containing the elements of your slice
#The second value of your slice is excluded

#From the first (index 0) to the fourth (index 3) value
x = primes[0:4]
print(x)

#from the 4th (index 3) to the last element (index len(primes) - 1)
y = primes[3:len(primes)]
print(y)

#Slicing and indexing are not the same!
#Indexing allows you to access a value at a given position
#Slicing takes a piece of your list
print(primes[0])   #Prints the first element
print(primes[0:1]) #Prints a new list containing 1 element: the first

#You can use negative indices in your slices too!
print(primes[-5:-1])          #prints the fourth to the last (excluded)
print(primes[-len(primes):3]) #prints the first to the fourth (excluded)

#Omit an index in your slice, and you get sensible defaults
print(primes[:3]) #from the first to the fourth (excluded)
print(primes[2:]) #from the third to the last (included)
print(primes[:])  #from the first to the last (included)

#As above, it also works with negative numbers
print(primes[-4:]) #the last four elements
print(primes[:-5]) #from the first to the fifth-to-last (excluded)

#You can use a step!
print(primes[0:7:2]) #every second element from the first to the eighth (excluded)
print(primes[1:8:2]) #every second element from the second to the end

#Trick! Reverse the order of your list
print(primes[::-1]) #from the last to the first (included) using a negative step (-1)
[19, 17, 13, 11, 7, 5, 3, 2]
MIT
08 - Lists.ipynb
dschenck/Python-crash-course
5. Methods
#append to the end of the list
primes.append(23)
print(primes)

primes.append(25)
print(primes)

#remove the first instance of a given value
primes.remove(25)
print(primes)

#if the value doesn't exist... you get a ValueError!
primes.remove(99)
print(primes)

#delete a value at a position, and save it
deleted = primes.pop(1) #the second element
print(deleted)
print(primes)

#Insert a value at a given position
primes.insert(1, 4)
print(primes)

#Whoops - that should've been 3
#Not a problem, simply reassign the value at the index
primes[1] = 3
print(primes)

#Reverse the list
primes.reverse()
print(primes)

#Sort the list of primes
primes.sort()
print(primes)
[2, 3, 5, 7, 11, 13, 17, 19, 23, 23]
MIT
08 - Lists.ipynb
dschenck/Python-crash-course
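One closely related built-in is worth a short aside (not part of the original lesson): sorted() returns a new sorted list, while the list.sort() method used above sorts the list in place and returns None.

# sorted() leaves the original list untouched, unlike list.sort()
values = [3, 1, 2]
print(sorted(values))  # [1, 2, 3]
print(values)          # [3, 1, 2] - unchanged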
6. Iteration
i = 0 while i < len(primes): print("{} is a prime number".format(primes[i])) i += 1 #More pythonic way: use this syntax! for value in primes: print("{} is still a prime number".format(value))
2 is still a prime number 3 is still a prime number 5 is still a prime number 7 is still a prime number 11 is still a prime number 13 is still a prime number 17 is still a prime number 19 is still a prime number 23 is still a prime number 23 is still a prime number
MIT
08 - Lists.ipynb
dschenck/Python-crash-course
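A small addition to the iteration examples above (not in the original lesson): when you need the position as well as the value, enumerate() is the idiomatic tool.

#enumerate yields (index, value) pairs
for i, value in enumerate(primes):
    print("prime number {} is {}".format(i + 1, value))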
Parasite axis demo: This example demonstrates the use of parasite axes to plot multiple datasets onto one single plot. Notice how in this example, *par1* and *par2* are both obtained by calling ``twinx()``, which ties their x-limits with the host's x-axis. From there, each of those two axes behaves separately from the other: different datasets can be plotted, and the y-limits are adjusted separately. Note that this approach uses the `mpl_toolkits.axes_grid1.parasite_axes`' `~mpl_toolkits.axes_grid1.parasite_axes.host_subplot` and `mpl_toolkits.axisartist.axislines.Axes`. An alternative approach using the `~mpl_toolkits.axes_grid1.parasite_axes`'s `~.mpl_toolkits.axes_grid1.parasite_axes.HostAxes` and `~.mpl_toolkits.axes_grid1.parasite_axes.ParasiteAxes` is the :doc:`/gallery/axisartist/demo_parasite_axes` example. An alternative approach using the usual Matplotlib subplots is shown in the :doc:`/gallery/ticks_and_spines/multiple_yaxis_with_spines` example.
from mpl_toolkits.axes_grid1 import host_subplot
from mpl_toolkits import axisartist
import matplotlib.pyplot as plt
import mpld3  # needed below to export the figure to HTML

host = host_subplot(111, axes_class=axisartist.Axes)
plt.subplots_adjust(right=0.75)

par1 = host.twinx()
par2 = host.twinx()

par2.axis["right"] = par2.new_fixed_axis(loc="right", offset=(60, 0))

par1.axis["right"].toggle(all=True)
par2.axis["right"].toggle(all=True)

p1, = host.plot([0, 1, 2], [0, 1, 2], label="Density")
p2, = par1.plot([0, 1, 2], [0, 3, 2], label="Temperature")
p3, = par2.plot([0, 1, 2], [50, 30, 15], label="Velocity")

host.set_xlim(0, 2)
host.set_ylim(0, 2)
par1.set_ylim(0, 4)
par2.set_ylim(1, 65)

host.set_xlabel("Distance")
host.set_ylabel("Density")
par1.set_ylabel("Temperature")
par2.set_ylabel("Velocity")

host.legend()

host.axis["left"].label.set_color(p1.get_color())
par1.axis["right"].label.set_color(p2.get_color())
par2.axis["right"].label.set_color(p3.get_color())

fig = plt.gcf()  # grab the current figure; host_subplot does not return it
plt.show()

html_str = mpld3.fig_to_html(fig)
Html_file = open("demo1.html", "w")
Html_file.write(html_str)
Html_file.close()
_____no_output_____
MIT
scr/.ipynb_checkpoints/demo_parasite_axes2-checkpoint.ipynb
RivasCalduch/IndiceReferenciaMercadoHipotecario_Visualizacion
Neuromatch Academy: Week 1, Day 2, Tutorial 2 Modeling Practice: Model implementation and evaluation __Content creators:__ Marius 't Hart, Paul Schrater, Gunnar Blohm __Content reviewers:__ Norma Kuhn, Saeed Salehi, Madineh Sarvestani, Spiros Chavlis, Michael Waskom --- Tutorial objectives We are investigating a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks: **Framing the question** 1. finding a phenomenon and a question to ask about it 2. understanding the state of the art 3. determining the basic ingredients 4. formulating specific, mathematically defined hypotheses **Implementing the model** 5. selecting the toolkit 6. planning the model 7. implementing the model **Model testing** 8. completing the model 9. testing and evaluating the model **Publishing** 10. publishing models We did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook). Setup
import numpy as np import matplotlib.pyplot as plt from scipy import stats from scipy.stats import gamma from IPython.display import YouTubeVideo # @title Figure settings import ipywidgets as widgets %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # @title Helper functions def my_moving_window(x, window=3, FUN=np.mean): """ Calculates a moving estimate for a signal Args: x (numpy.ndarray): a vector array of size N window (int): size of the window, must be a positive integer FUN (function): the function to apply to the samples in the window Returns: (numpy.ndarray): a vector array of size N, containing the moving average of x, calculated with a window of size window There are smarter and faster solutions (e.g. using convolution) but this function shows what the output really means. This function skips NaNs, and should not be susceptible to edge effects: it will simply use all the available samples, which means that close to the edges of the signal or close to NaNs, the output will just be based on fewer samples. By default, this function will apply a mean to the samples in the window, but this can be changed to be a max/min/median or other function that returns a single numeric value based on a sequence of values. """ # if data is a matrix, apply filter to each row: if len(x.shape) == 2: output = np.zeros(x.shape) for rown in range(x.shape[0]): output[rown, :] = my_moving_window(x[rown, :], window=window, FUN=FUN) return output # make output array of the same size as x: output = np.zeros(x.size) # loop through the signal in x for samp_i in range(x.size): values = [] # loop through the window: for wind_i in range(int(1 - window), 1): if ((samp_i + wind_i) < 0) or (samp_i + wind_i) > (x.size - 1): # out of range continue # sample is in range and not nan, use it: if not(np.isnan(x[samp_i + wind_i])): values += [x[samp_i + wind_i]] # calculate the mean in the window for this point in the output: output[samp_i] = FUN(values) return output def my_plot_percepts(datasets=None, plotconditions=False): if isinstance(datasets, dict): # try to plot the datasets # they should be named... # 'expectations', 'judgments', 'predictions' plt.figure(figsize=(8, 8)) # set aspect ratio = 1? 
not really plt.ylabel('perceived self motion [m/s]') plt.xlabel('perceived world motion [m/s]') plt.title('perceived velocities') # loop through the entries in datasets # plot them in the appropriate way for k in datasets.keys(): if k == 'expectations': expect = datasets[k] plt.scatter(expect['world'], expect['self'], marker='*', color='xkcd:green', label='my expectations') elif k == 'judgments': judgments = datasets[k] for condition in np.unique(judgments[:, 0]): c_idx = np.where(judgments[:, 0] == condition)[0] cond_self_motion = judgments[c_idx[0], 1] cond_world_motion = judgments[c_idx[0], 2] if cond_world_motion == -1 and cond_self_motion == 0: c_label = 'world-motion condition judgments' elif cond_world_motion == 0 and cond_self_motion == 1: c_label = 'self-motion condition judgments' else: c_label = f"condition [{condition:d}] judgments" plt.scatter(judgments[c_idx, 3], judgments[c_idx, 4], label=c_label, alpha=0.2) elif k == 'predictions': predictions = datasets[k] for condition in np.unique(predictions[:, 0]): c_idx = np.where(predictions[:, 0] == condition)[0] cond_self_motion = predictions[c_idx[0], 1] cond_world_motion = predictions[c_idx[0], 2] if cond_world_motion == -1 and cond_self_motion == 0: c_label = 'predicted world-motion condition' elif cond_world_motion == 0 and cond_self_motion == 1: c_label = 'predicted self-motion condition' else: c_label = f"condition [{condition:d}] prediction" plt.scatter(predictions[c_idx, 4], predictions[c_idx, 3], marker='x', label=c_label) else: print("datasets keys should be 'hypothesis', \ 'judgments' and 'predictions'") if plotconditions: # this code is simplified but only works for the dataset we have: plt.scatter([1], [0], marker='<', facecolor='none', edgecolor='xkcd:black', linewidths=2, label='world-motion stimulus', s=80) plt.scatter([0], [1], marker='>', facecolor='none', edgecolor='xkcd:black', linewidths=2, label='self-motion stimulus', s=80) plt.legend(facecolor='xkcd:white') plt.show() else: if datasets is not None: print('datasets argument should be a dict') raise TypeError def my_plot_stimuli(t, a, v): plt.figure(figsize=(10, 6)) plt.plot(t, a, label='acceleration [$m/s^2$]') plt.plot(t, v, label='velocity [$m/s$]') plt.xlabel('time [s]') plt.ylabel('[motion]') plt.legend(facecolor='xkcd:white') plt.show() def my_plot_motion_signals(): dt = 1 / 10 a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0) t = np.arange(0, 10, dt) v = np.cumsum(a * dt) fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col', sharey='row', figsize=(14, 6)) fig.suptitle('Sensory ground truth') ax1.set_title('world-motion condition') ax1.plot(t, -v, label='visual [$m/s$]') ax1.plot(t, np.zeros(a.size), label='vestibular [$m/s^2$]') ax1.set_xlabel('time [s]') ax1.set_ylabel('motion') ax1.legend(facecolor='xkcd:white') ax2.set_title('self-motion condition') ax2.plot(t, -v, label='visual [$m/s$]') ax2.plot(t, a, label='vestibular [$m/s^2$]') ax2.set_xlabel('time [s]') ax2.set_ylabel('motion') ax2.legend(facecolor='xkcd:white') plt.show() def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False, addaverages=False, integrateVestibular=False, addGroundTruth=False): if addGroundTruth: dt = 1 / 10 a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0) t = np.arange(0, 10, dt) v = a wm_idx = np.where(judgments[:, 0] == 0) sm_idx = np.where(judgments[:, 0] == 1) opticflow = opticflow.transpose() wm_opticflow = np.squeeze(opticflow[:, wm_idx]) sm_opticflow = np.squeeze(opticflow[:, sm_idx]) if integrateVestibular: vestibular = np.cumsum(vestibular * .1, 
axis=1) if addGroundTruth: v = np.cumsum(a * dt) vestibular = vestibular.transpose() wm_vestibular = np.squeeze(vestibular[:, wm_idx]) sm_vestibular = np.squeeze(vestibular[:, sm_idx]) X = np.arange(0, 10, .1) fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row', figsize=(15, 10)) fig.suptitle('Sensory signals') my_axes[0][0].plot(X, wm_opticflow, color='xkcd:light red', alpha=0.1) my_axes[0][0].plot([0, 10], [0, 0], ':', color='xkcd:black') if addGroundTruth: my_axes[0][0].plot(t, -v, color='xkcd:red') if addaverages: my_axes[0][0].plot(X, np.average(wm_opticflow, axis=1), color='xkcd:red', alpha=1) my_axes[0][0].set_title('optic-flow in world-motion condition') my_axes[0][0].set_ylabel('velocity signal [$m/s$]') my_axes[0][1].plot(X, sm_opticflow, color='xkcd:azure', alpha=0.1) my_axes[0][1].plot([0, 10], [0, 0], ':', color='xkcd:black') if addGroundTruth: my_axes[0][1].plot(t, -v, color='xkcd:blue') if addaverages: my_axes[0][1].plot(X, np.average(sm_opticflow, axis=1), color='xkcd:blue', alpha=1) my_axes[0][1].set_title('optic-flow in self-motion condition') my_axes[1][0].plot(X, wm_vestibular, color='xkcd:light red', alpha=0.1) my_axes[1][0].plot([0, 10], [0, 0], ':', color='xkcd:black') if addaverages: my_axes[1][0].plot(X, np.average(wm_vestibular, axis=1), color='xkcd:red', alpha=1) my_axes[1][0].set_title('vestibular signal in world-motion condition') if addGroundTruth: my_axes[1][0].plot(t, np.zeros(100), color='xkcd:red') my_axes[1][0].set_xlabel('time [s]') if integrateVestibular: my_axes[1][0].set_ylabel('velocity signal [$m/s$]') else: my_axes[1][0].set_ylabel('acceleration signal [$m/s^2$]') my_axes[1][1].plot(X, sm_vestibular, color='xkcd:azure', alpha=0.1) my_axes[1][1].plot([0, 10], [0, 0], ':', color='xkcd:black') if addGroundTruth: my_axes[1][1].plot(t, v, color='xkcd:blue') if addaverages: my_axes[1][1].plot(X, np.average(sm_vestibular, axis=1), color='xkcd:blue', alpha=1) my_axes[1][1].set_title('vestibular signal in self-motion condition') my_axes[1][1].set_xlabel('time [s]') if returnaxes: return my_axes else: plt.show() def my_threshold_solution(selfmotion_vel_est, threshold): is_move = (selfmotion_vel_est > threshold) return is_move def my_moving_threshold(selfmotion_vel_est, thresholds): pselfmove_nomove = np.empty(thresholds.shape) pselfmove_move = np.empty(thresholds.shape) prop_correct = np.empty(thresholds.shape) pselfmove_nomove[:] = np.NaN pselfmove_move[:] = np.NaN prop_correct[:] = np.NaN for thr_i, threshold in enumerate(thresholds): # run my_threshold that the students will write: try: is_move = my_threshold(selfmotion_vel_est, threshold) except Exception: is_move = my_threshold_solution(selfmotion_vel_est, threshold) # store results: pselfmove_nomove[thr_i] = np.mean(is_move[0:100]) pselfmove_move[thr_i] = np.mean(is_move[100:200]) # calculate the proportion classified correctly: # (1-pselfmove_nomove) + () # Correct rejections: p_CR = (1 - pselfmove_nomove[thr_i]) # correct detections: p_D = pselfmove_move[thr_i] # this is corrected for proportion of trials in each condition: prop_correct[thr_i] = (p_CR + p_D) / 2 return [pselfmove_nomove, pselfmove_move, prop_correct] def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct): plt.figure(figsize=(12, 8)) plt.title('threshold effects') plt.plot([min(thresholds), max(thresholds)], [0, 0], ':', color='xkcd:black') plt.plot([min(thresholds), max(thresholds)], [0.5, 0.5], ':', color='xkcd:black') plt.plot([min(thresholds), max(thresholds)], [1, 1], ':', color='xkcd:black') 
plt.plot(thresholds, world_prop, label='world motion condition') plt.plot(thresholds, self_prop, label='self motion condition') plt.plot(thresholds, prop_correct, color='xkcd:purple', label='correct classification') plt.xlabel('threshold') plt.ylabel('proportion correct or classified as self motion') plt.legend(facecolor='xkcd:white') plt.show() def my_plot_predictions_data(judgments, predictions): # conditions = np.concatenate((np.abs(judgments[:, 1]), # np.abs(judgments[:, 2]))) # veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4])) # velpredict = np.concatenate((predictions[:, 3], predictions[:, 4])) # self: # conditions_self = np.abs(judgments[:, 1]) veljudgmnt_self = judgments[:, 3] velpredict_self = predictions[:, 3] # world: # conditions_world = np.abs(judgments[:, 2]) veljudgmnt_world = judgments[:, 4] velpredict_world = predictions[:, 4] fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row', figsize=(12, 5)) ax1.scatter(veljudgmnt_self, velpredict_self, alpha=0.2) ax1.plot([0, 1], [0, 1], ':', color='xkcd:black') ax1.set_title('self-motion judgments') ax1.set_xlabel('observed') ax1.set_ylabel('predicted') ax2.scatter(veljudgmnt_world, velpredict_world, alpha=0.2) ax2.plot([0, 1], [0, 1], ':', color='xkcd:black') ax2.set_title('world-motion judgments') ax2.set_xlabel('observed') ax2.set_ylabel('predicted') plt.show() # @title Data retrieval import os fname="W1D2_data.npz" if not os.path.exists(fname): !wget https://osf.io/c5xyf/download -O $fname filez = np.load(file=fname, allow_pickle=True) judgments = filez['judgments'] opticflow = filez['opticflow'] vestibular = filez['vestibular']
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
--- Section 6: Model planning
# @title Video 6: Planning from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id='BV1nC4y1h7yL', width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) video
Video available at https://youtube.com/watch?v=dRTOFFigxa0
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
**Goal:** Identify the key components of the model and how they work together. Our goal all along has been to model our perceptual estimates of sensory data. Now that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done, and in what order? Our model will have: * **inputs**: the values the system has available - these can be broken down into _data:_ the sensory signals, and _parameters:_ the threshold and the window sizes for filtering * **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial in m/s, just like the judgments participants made. * **model functions**: a set of functions that perform the hypothesized computations. We will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data. **Recap of what we've accomplished so far:** To model perceptual estimates from our sensory data, we need to 1. _integrate:_ to ensure sensory information is in appropriate units 2. _filter:_ to reduce noise and set the timescale 3. _threshold:_ to model detection. This will be done with these operations: 1. _integrate:_ `np.cumsum()` 2. _filter:_ `my_moving_window()` 3. _threshold:_ `if` with a comparison (`>` or `<`) and `else` **_Planning our model:_** We will now start putting all the pieces together. Normally you would sketch this yourself, but here is an overview of how the functions comprising the model are going to work: ![model functions purpose](https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D2_ModelingPractice/static/NMA-W1D2-fig05.png) Below is the main function with a detailed explanation of what the function is supposed to do, exactly what input is expected, and what output will be generated. The model is not complete, so it only returns nans (**n**ot-**a**-**n**umber) for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). The goal of this function is to define the top level of a simulation model which: * receives all input * loops through the cases * calls functions that compute predicted values for each case * outputs the predictions **Main model function**
def my_train_illusion_model(sensorydata, params):
    """
    Generate output predictions of perceived self-motion and perceived
    world-motion velocity based on input visual and vestibular signals.

    Args:
      sensorydata: (dict) dictionary with two named entries:
        opticflow: (numpy.ndarray of float) NxM array with N trials on rows
          and M visual signal samples in columns
        vestibular: (numpy.ndarray of float) NxM array with N trials on rows
          and M vestibular signal samples in columns
      params: (dict) dictionary with named entries:
        threshold: (float) vestibular threshold for credit assignment
        filterwindows: (list of int) determines the strength of filtering for
          the visual and vestibular signals, respectively
        integrate (bool): whether to integrate the vestibular signals, will
          be set to True if absent
        FUN (function): function used in the filter, will be set to np.mean
          if absent
        samplingrate (float): the number of samples per second in the
          sensory data, will be set to 10 if absent

    Returns:
      dict with two entries:
        selfmotion: (numpy.ndarray) vector array of length N, with
          predictions of perceived self motion
        worldmotion: (numpy.ndarray) vector array of length N, with
          predictions of perceived world motion
    """

    # sanitize input a little
    if not('FUN' in params.keys()):
        params['FUN'] = np.mean
    if not('integrate' in params.keys()):
        params['integrate'] = True
    if not('samplingrate' in params.keys()):
        params['samplingrate'] = 10

    # number of trials:
    ntrials = sensorydata['opticflow'].shape[0]

    # set up variables to collect output
    selfmotion = np.empty(ntrials)
    worldmotion = np.empty(ntrials)

    # loop through trials:
    for trialN in range(ntrials):
        # these are our sensory variables (inputs)
        vis = sensorydata['opticflow'][trialN, :]
        ves = sensorydata['vestibular'][trialN, :]

        # generate output predicted perception:
        selfmotion[trialN],\
        worldmotion[trialN] = my_perceived_motion(vis=vis, ves=ves,
                                                  params=params)

    return {'selfmotion': selfmotion, 'worldmotion': worldmotion}


# here is a mock version of my_perceived_motion(),
# so you can test my_train_illusion_model()
def my_perceived_motion(*args, **kwargs):
    return [np.nan, np.nan]


# let's look at the predictions we generated for two sample trials (0, 100)
# we should get a 1x2 vector of self-motion predictions and another
# for world-motion
sensorydata = {'opticflow': opticflow[[0, 100], :],
               'vestibular': vestibular[[0, 100], :]}
params = {'threshold': 0.33, 'filterwindows': [100, 50]}
my_train_illusion_model(sensorydata=sensorydata, params=params)
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
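Before filling in the templates below, here is a toy sketch (not part of the tutorial solution) of the three primitive operations listed above, applied to a made-up acceleration trace; the my_moving_window() helper and the 0.33 threshold come from the cells above, while the fake signal and its parameters are invented for illustration:

# toy demonstration: integrate -> filter -> threshold on a fake signal
dt = 1 / 10                                     # 10 samples per second, as in the data
fake_acc = np.random.randn(100) * 0.1 + 0.05    # noisy fake acceleration [m/s^2]
vel = np.cumsum(fake_acc * dt)                  # 1. integrate to velocity [m/s]
vel_smooth = my_moving_window(vel, window=50)   # 2. filter to reduce noise
estimate = vel_smooth[-1]                       # take the final value as the estimate
moving = estimate > 0.33                        # 3. threshold for detection
print(estimate, moving)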
We've also completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs. **Perceived motion function**
# Full perceived motion function def my_perceived_motion(vis, ves, params): """ Takes sensory data and parameters and returns predicted percepts Args: vis (numpy.ndarray) : 1xM array of optic flow velocity data ves (numpy.ndarray) : 1xM array of vestibular acceleration data params : (dict) dictionary with named entries: see my_train_illusion_model() for details Returns: [list of floats] : prediction for perceived self-motion based on vestibular data, and prediction for perceived world-motion based on perceived self-motion and visual data """ # estimate self motion based on only the vestibular data # pass on the parameters selfmotion = my_selfmotion(ves=ves, params=params) # estimate the world motion, based on the selfmotion and visual data # pass on the parameters as well worldmotion = my_worldmotion(vis=vis, selfmotion=selfmotion, params=params) return [selfmotion, worldmotion]
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
TD 6.1: Formulate purpose of the self motion function Now we plan out the purpose of one of the remaining functions. **Only name input arguments, write help text and comments, _no code_.** The goal of this exercise is to make writing the code (in Micro-tutorial 7) much easier. Based on our work before the break, you should now be able to answer these questions for each function: * what (sensory) data is necessary? * what parameters does the function need, if any? * which operations will be performed on the input? * what is the output? The number of arguments is correct. **Template calculate self motion** Name the _input arguments_, complete the _help text_, and add _comments_ in the function below to describe the inputs, the outputs, and operations using elements from the recap at the top of this notebook (or from micro-tutorials 3 and 4 in part 1), in order to plan out the function. Do not write any code.
def my_selfmotion(arg1, arg2): """ Short description of the function Args: argument 1: explain the format and content of the first argument argument 2: explain the format and content of the second argument Returns: what output does the function generate? Any further description? """ # what operations do we perform on the input? # use the elements from micro-tutorials 3, 4, and 5 # 1. # 2. # 3. # 4. # what output should this function produce? return output
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_90e4d753.py) **Template calculate world motion** We have drafted the help text and written comments in the function below that describe the inputs, the outputs, and the operations we use to estimate world motion, based on the recap above.
# World motion function
def my_worldmotion(vis, selfmotion, params):
    """
    Estimates world motion based on the visual signal, the estimate of
    self motion, and the model parameters

    Args:
      vis (numpy.ndarray): 1xM array with the optic flow signal
      selfmotion (float): estimate of self motion
      params (dict): dictionary with named entries:
        see my_train_illusion_model() for details

    Returns:
      (float): an estimate of world motion in m/s
    """

    # 1. running window function
    # 2. take final value
    # 3. subtract selfmotion from value

    # return final value
    return output
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
--- Section 7: Model implementation
# @title Video 7: Implementation from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id='BV18Z4y1u7yB', width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) video
Video available at https://youtube.com/watch?v=DMSIt7t-LO8
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
**Goal:** We write the components of the model in actual code. For the operations we picked, there are functions ready to use: * integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples) * filtering: `my_moving_window(data, window)` (window: int, default 3) * take the last `selfmotion` value as our estimate * threshold: `if (value > thr): ... else: ...` TD 7.1: Write code to estimate self motion Use the operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world motion below, which we've filled in for you! **Template finish self motion function**
# Self motion function
def my_selfmotion(ves, params):
    """
    Estimates self motion for one vestibular signal

    Args:
      ves (numpy.ndarray): 1xM array with a vestibular signal
      params (dict)      : dictionary with named entries:
        see my_train_illusion_model() for details

    Returns:
      (float)            : an estimate of self motion in m/s
    """
    # uncomment the code below and fill in with your code

    # 1. integrate vestibular signal
    # ves = np.cumsum(ves * (1 / params['samplingrate']))

    # 2. running window function to accumulate evidence:
    # selfmotion = ... YOUR CODE HERE

    # 3. take final value of self-motion vector as our estimate
    # selfmotion = ... YOUR CODE HERE

    # 4. compare to threshold. Hint: the threshold is stored in
    # params['threshold']
    # if selfmotion is higher than threshold: return value
    # if it's lower than threshold: return 0
    # if YOUR CODE HERE
    # selfmotion = YOUR CODE HERE

    # Comment this line out when your function is ready
    raise NotImplementedError("Student exercise: estimate my_selfmotion")

    return output
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_53312239.py) Interactive Demo: Unit testing Testing whether the functions you wrote do what they are supposed to do is important, and is known as 'unit testing'. Here we will simplify this for the `my_selfmotion()` function by letting you vary the threshold and window size with a slider, and seeing what the distribution of self-motion estimates looks like.
#@title #@markdown Make sure you execute this cell to enable the widget! def refresh(threshold=0, windowsize=100): params = {'samplingrate': 10, 'FUN': np.mean} params['filterwindows'] = [windowsize, 50] params['threshold'] = threshold selfmotion_estimates = np.empty(200) # get the estimates for each trial: for trial_number in range(200): ves = vestibular[trial_number, :] selfmotion_estimates[trial_number] = my_selfmotion(ves, params) plt.figure() plt.hist(selfmotion_estimates, bins=20) plt.xlabel('self-motion estimate') plt.ylabel('frequency') plt.show() _ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
**Estimate world motion** We have completed the `my_worldmotion()` function for you below.
# World motion function
def my_worldmotion(vis, selfmotion, params):
    """
    Estimates world motion based on the visual signal and the estimate
    of self motion

    Args:
      vis (numpy.ndarray): 1xM array with the optic flow signal
      selfmotion (float): estimate of self motion
      params (dict): dictionary with named entries:
        see my_train_illusion_model() for details

    Returns:
      (float): an estimate of world motion in m/s
    """

    # running average to smooth/accumulate sensory evidence
    visualmotion = my_moving_window(vis, window=params['filterwindows'][1],
                                    FUN=np.mean)

    # take final value
    visualmotion = visualmotion[-1]

    # compensate for self motion: the optic-flow signal encodes motion
    # relative to the observer (and is negative here), so adding the
    # self-motion estimate recovers the world motion
    worldmotion = visualmotion + selfmotion

    # return final value
    return worldmotion
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
--- Section 8: Model completion
# @title Video 8: Completion from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id='BV1YK411H7oW', width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) video
Video available at https://youtube.com/watch?v=EM-G8YYdrDg
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
**Goal:** Make sure the model can speak to the hypothesis. Eliminate all the parameters that do not speak to the hypothesis. Now that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more. To test this, we will run the model, store the output and plot the model's perceived self motion over perceived world motion, like we did with the actual perceptual judgments (it even uses the same plotting function). TD 8.1: See if the model produces illusions
# @markdown Run to plot model predictions of motion estimates # prepare to run the model again: data = {'opticflow': opticflow, 'vestibular': vestibular} params = {'threshold': 0.6, 'filterwindows': [100, 50], 'FUN': np.mean} modelpredictions = my_train_illusion_model(sensorydata=data, params=params) # process the data to allow plotting... predictions = np.zeros(judgments.shape) predictions[:, 0:3] = judgments[:, 0:3] predictions[:, 3] = modelpredictions['selfmotion'] predictions[:, 4] = modelpredictions['worldmotion'] * -1 my_plot_percepts(datasets={'predictions': predictions}, plotconditions=True)
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
**Questions:** * How does the distribution of data points compare to the plot in TD 1.2 or in TD 7.1? * Did you expect to see this? * Where do the model's predicted judgments for each of the two conditions fall? * How does this compare to the behavioral data? However, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two clusters of data points. This means the model can help us understand the phenomenon. --- Section 9: Model evaluation
# @title Video 9: Evaluation from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id='BV1uK411H7EK', width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) video
Video available at https://youtube.com/watch?v=bWLFyobm4Rk
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorials 1 and 4 help here. There are multiple ways to evaluate a model. Aside from the obvious fact that we want to get insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data. **Quantify model quality with $R^2$** Let's look at how well our model matches the actual judgment data.
# @markdown Run to plot predictions over data my_plot_predictions_data(judgments, predictions)
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't, or vice versa. We will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: "R-squared"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called the [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\rho$). Just run the chunk below:
# @markdown Run to calculate R^2 conditions = np.concatenate((np.abs(judgments[:, 1]), np.abs(judgments[:, 2]))) veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4])) velpredict = np.concatenate((predictions[:, 3], predictions[:, 4])) slope, intercept, r_value,\ p_value, std_err = stats.linregress(conditions, veljudgmnt) print(f"conditions -> judgments R^2: {r_value ** 2:0.3f}") slope, intercept, r_value,\ p_value, std_err = stats.linregress(veljudgmnt, velpredict) print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
conditions -> judgments R^2: 0.032 predictions -> judgments R^2: 0.256
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
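As a quick cross-check (a sketch, not part of the original tutorial): since $R^2$ is described above as the squared correlation coefficient, the second value printed should match the squared Pearson correlation from np.corrcoef, using the veljudgmnt and velpredict arrays defined in the previous cell:

# cross-check: R^2 should equal the squared Pearson correlation
r = np.corrcoef(veljudgmnt, velpredict)[0, 1]
print(f"np.corrcoef cross-check R^2: {r ** 2:0.3f}")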
These $R^2$s express how well the experimental conditions explain the participants' judgments and how well the model's predicted judgments explain the participants' judgments. You will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow! Perhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained better by the model's predictions than by the actual conditions. In other words: in a certain percentage of cases the model tends to have the same illusions as the participants. TD 9.1 Varying the threshold parameter to improve the model In the code below, see if you can find a better value for the threshold parameter, to reduce errors in the model's predictions. **Testing thresholds** Interactive Demo: optimizing the model
#@title #@markdown Make sure you execute this cell to enable the widget! data = {'opticflow': opticflow, 'vestibular': vestibular} def refresh(threshold=0, windowsize=100): # set parameters according to sliders: params = {'samplingrate': 10, 'FUN': np.mean} params['filterwindows'] = [windowsize, 50] params['threshold'] = threshold modelpredictions = my_train_illusion_model(sensorydata=data, params=params) predictions = np.zeros(judgments.shape) predictions[:, 0:3] = judgments[:, 0:3] predictions[:, 3] = modelpredictions['selfmotion'] predictions[:, 4] = modelpredictions['worldmotion'] * -1 # plot the predictions: my_plot_predictions_data(judgments, predictions) # calculate R2 veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4])) velpredict = np.concatenate((predictions[:, 3], predictions[:, 4])) slope, intercept, r_value,\ p_value, std_err = stats.linregress(veljudgmnt, velpredict) print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}") _ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
Varying the parameters this way allows you to increase the model's performance in predicting the actual data, as measured by $R^2$. This is called model fitting, and will be done better in the coming weeks. TD 9.2: Credit assignment of self motion When we look at the figure in **TD 8.1**, we can see that one cluster does seem very close to (1,0), just like in the actual data. The cluster of points at (1,0) is from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4). Let's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here. Exercise 1: function for credit assignment of self motion
def my_selfmotion(ves, params): """ Estimates self motion for one vestibular signal Args: ves (numpy.ndarray): 1xM array with a vestibular signal params (dict): dictionary with named entries: see my_train_illusion_model() for details Returns: (float): an estimate of self motion in m/s """ # integrate signal: ves = np.cumsum(ves * (1 / params['samplingrate'])) # use running window to accumulate evidence: selfmotion = my_moving_window(ves, window=params['filterwindows'][0], FUN=params['FUN']) # take the final value as our estimate: selfmotion = selfmotion[-1] # compare to threshold, set to 0 if lower and else... if selfmotion < params['threshold']: selfmotion = 0 ########################################################################### # Exercise: Complete credit assignment. Remove the next line to test your function else: selfmotion = ... #YOUR CODE HERE raise NotImplementedError("Modify with credit assignment") ########################################################################### return selfmotion # Use the updated function to run the model and plot the data # Uncomment below to test your function data = {'opticflow': opticflow, 'vestibular': vestibular} params = {'threshold': 0.33, 'filterwindows': [100, 50], 'FUN': np.mean} #modelpredictions = my_train_illusion_model(sensorydata=data, params=params) predictions = np.zeros(judgments.shape) predictions[:, 0:3] = judgments[:, 0:3] predictions[:, 3] = modelpredictions['selfmotion'] predictions[:, 4] = modelpredictions['worldmotion'] * -1 #my_plot_percepts(datasets={'predictions': predictions}, plotconditions=False)
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_51dce10c.py)*Example output:* That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved. Use the optimal values for the threshold and window size that you found previously. Interactive Demo: evaluating the model
#@title #@markdown Make sure you execute this cell to enable the widget! data = {'opticflow': opticflow, 'vestibular': vestibular} def refresh(threshold=0, windowsize=100): # set parameters according to sliders: params = {'samplingrate': 10, 'FUN': np.mean} params['filterwindows'] = [windowsize, 50] params['threshold'] = threshold modelpredictions = my_train_illusion_model(sensorydata=data, params=params) predictions = np.zeros(judgments.shape) predictions[:, 0:3] = judgments[:, 0:3] predictions[:, 3] = modelpredictions['selfmotion'] predictions[:, 4] = modelpredictions['worldmotion'] * -1 # plot the predictions: my_plot_predictions_data(judgments, predictions) # calculate R2 veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4])) velpredict = np.concatenate((predictions[:, 3], predictions[:, 4])) slope, intercept, r_value,\ p_value, std_err = stats.linregress(veljudgmnt, velpredict) print(f"predictions -> judgments R2: {r_value ** 2:0.3f}") _ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
While the model still predicts velocity judgments better than the conditions (i.e. the model predicts illusions in somewhat similar cases), the $R^2$ values are a little worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread. **Interpret the model's meaning** Here's what you should have learned from modeling the train illusion: 1. A noisy vestibular acceleration signal can give rise to illusory motion. 2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do. 3. Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis. We decided that for now we have learned enough, so it's time to write it up. --- Section 10: Model publication!
# @title Video 10: Publication from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id='BV1M5411e7AG', width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) video
Video available at https://youtube.com/watch?v=zm8x7oegN6Q
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
Exercise 6, answers Problem 1
from pyomo.environ import *

model = ConcreteModel()
#Three variables
model.x = Var([1, 2, 3])
#Objective function including powers and a logarithm
model.OBJ = Objective(expr = log(model.x[1]**2 + 1) + model.x[2]**4 + model.x[1]*model.x[3])
#Constraints
model.constr = Constraint(expr = model.x[1]**3 - model.x[2]**2 >= 1)
model.box1 = Constraint(expr = model.x[1] >= 0)
model.box2 = Constraint(expr = model.x[3] >= 0)

from pyomo.opt import SolverFactory #Import interfaces to solvers
opt = SolverFactory("ipopt") #Use ipopt

res = opt.solve(model, tee=True) #Solve the problem and print the output
print("Optimal solution is")
model.x.display()
print("Objective value at the optimal solution is")
model.OBJ.display()
****************************************************************************** This program contains Ipopt, a library for large-scale nonlinear optimization. Ipopt is released as open source code under the Eclipse Public License (EPL). For more information visit http://projects.coin-or.org/Ipopt ****************************************************************************** This is Ipopt version 3.12, running with linear solver mumps. NOTE: Other linear solvers might be more efficient (see Ipopt documentation). Number of nonzeros in equality constraint Jacobian...: 0 Number of nonzeros in inequality constraint Jacobian.: 4 Number of nonzeros in Lagrangian Hessian.............: 3 Total number of variables............................: 3 variables with only lower bounds: 0 variables with lower and upper bounds: 0 variables with only upper bounds: 0 Total number of equality constraints.................: 0 Total number of inequality constraints...............: 3 inequality constraints with only lower bounds: 3 inequality constraints with lower and upper bounds: 0 inequality constraints with only upper bounds: 0 iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls 0 0.0000000e+00 1.00e+00 5.00e-01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0 1 2.2129049e-06 1.00e+00 1.09e+02 -1.0 1.01e+00 - 1.00e+00 9.80e-03h 1 2 2.4328193e-06 1.00e+00 1.55e+05 -1.0 1.00e+00 - 1.39e-01 9.90e-05h 1 3 1.9533370e-02 9.97e-01 9.82e+04 -1.0 1.60e+05 - 6.81e-08 9.04e-07h 1 4 8.5283692e-01 0.00e+00 2.91e+06 -1.0 1.57e+01 - 1.18e-02 6.25e-02f 5 5 7.4508982e-01 0.00e+00 1.19e+07 -1.0 1.19e-01 8.0 3.17e-04 1.00e+00f 1 6 7.3522284e-01 0.00e+00 6.57e+05 -1.0 9.61e-03 7.5 7.73e-01 1.00e+00h 1 7 7.3514688e-01 0.00e+00 9.59e+02 -1.0 7.35e-05 7.0 1.00e+00 1.00e+00h 1 8 7.3514746e-01 0.00e+00 3.58e+00 -1.0 9.67e-07 6.6 1.00e+00 1.00e+00h 1 9 7.6476859e-01 0.00e+00 3.06e-02 -1.0 2.25e-02 - 1.00e+00 1.00e+00f 1 iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls 10 6.9848511e-01 0.00e+00 1.98e-03 -2.5 5.37e-02 - 1.00e+00 1.00e+00f 1 11 6.9344736e-01 0.00e+00 1.59e-05 -3.8 4.78e-03 - 1.00e+00 1.00e+00h 1 12 6.9315086e-01 0.00e+00 5.53e-08 -5.7 4.18e-04 - 1.00e+00 1.00e+00h 1 13 6.9314717e-01 0.00e+00 8.49e-12 -8.6 5.40e-06 - 1.00e+00 1.00e+00h 1 Number of Iterations....: 13 (scaled) (unscaled) Objective...............: 6.9314717223847255e-01 6.9314717223847255e-01 Dual infeasibility......: 8.4893203577962595e-12 8.4893203577962595e-12 Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00 Complementarity.........: 2.5092981987187852e-09 2.5092981987187852e-09 Overall NLP error.......: 2.5092981987187852e-09 2.5092981987187852e-09 Number of objective function evaluations = 20 Number of objective gradient evaluations = 14 Number of equality constraint evaluations = 0 Number of inequality constraint evaluations = 20 Number of equality constraint Jacobian evaluations = 0 Number of inequality constraint Jacobian evaluations = 14 Number of Lagrangian Hessian evaluations = 13 Total CPU secs in IPOPT (w/o function evaluations) = 0.004 Total CPU secs in NLP function evaluations = 0.000 EXIT: Optimal Solution Found. 
Ipopt 3.12: Optimal Solution Found Optimal solutions is x : Size=3, Index=x_index, Domain=Reals Key : Lower : Value : Upper : Fixed : Stale 1 : None : 0.999999999169 : None : False : False 2 : None : 0.0 : None : False : False 3 : None : -7.49070198136e-09 : None : False : False Objective value at the optimal solution is OBJ : Size=1, Index=None, Active=True Key : Active : Value None : True : 0.693147172238
CC-BY-3.0
Exercise 6, answers.ipynb
maeehart/TIES483
Problem 2 The set of Pareto optimal solutions is $\{(t,1-t):t\in[0,1]\}$. Let us denote the set of Pareto optimal solutions by $PS$ and show that $PS=\{(t,1-t):t\in[0,1]\}$. $PS\supset\{(t,1-t):t\in[0,1]\}$: Let's assume that there exists $t\in[0,1]$ such that $(t,1-t)$ is not Pareto optimal. Then there exists $x=(x_1,x_2)\in\mathbb R^2$ such that$$\left\{\begin{align}\|x-(1,0)\|^2<\|(t,1-t)-(1,0)\|^2,\text{ and}\\\|x-(0,1)\|^2\leq\|(t,1-t)-(0,1)\|^2\end{align}\right.$$or$$\left\{\begin{align}\|x-(1,0)\|^2\leq\|(t,1-t)-(1,0)\|^2,\text{ and}\\\|x-(0,1)\|^2<\|(t,1-t)-(0,1)\|^2.\end{align}\right.$$But in both cases$$\sqrt{2} = \|(1,0)-(0,1)\|\\\leq \|(1,0)-x\|+\|x-(0,1)\|\\< \|(t,1-t)-(1,0)\|+\|(t,1-t)-(0,1)\|\\= \|(1,0)-(0,1)\| =\sqrt{2},$$because the point $(t,1-t)$ lies on the straight line segment from $(1,0)$ to $(0,1)$. Thus, neither one of the requirements of non-Pareto optimality can hold, and the point is Pareto optimal. $PS\subset\{(t,1-t):t\in[0,1]\}$: Let's assume a Pareto optimal solution $x$ that is not of the form $(t,1-t)$ with $t\in[0,1]$. By the triangle inequality, $\|x-(1,0)\|+\|x-(0,1)\|>\sqrt 2$, since equality holds exactly for points on the segment. Choose the point $(t,1-t)$ on the segment with $\|(t,1-t)-(1,0)\|=\min(\|x-(1,0)\|,\sqrt 2)$; it is at least as close to $(1,0)$ as $x$ and strictly closer to $(0,1)$, so it dominates $x$, contradicting Pareto optimality. Hence every Pareto optimal solution lies on the segment. Problem 3 Ideal: To solve$$\min \|x-(1,0)\|^2\\\text{s.t. }x\in \mathbb R^2.$$The solution of this problem is naturally $x = (1,0)$ and the minimum is $0$. Minimizing the second objective gives $x=(0,1)$ and the minimum is again $0$. Thus, the ideal is $(0,0)$. Now, the problem has just two objectives, and thus we get the components of the nadir by optimizing$$\min f_1(x)\\\text{s.t. }f_2(x)\leq z^{ideal}_2$$and$$\min f_2(x)\\\text{s.t. }f_1(x)\leq z^{ideal}_1.$$The solution of each of these problems is Pareto optimal: it comes from the epsilon-constraint method with the other objective fixed at its minimum, so neither objective can be improved without worsening the other. Thus, the components of the nadir are at least the optimal values of the above optimization problems. On the other hand, the components of the nadir have to be at most the optimal values of the above optimization problems, because otherwise the corresponding solution would not be Pareto optimal. By solving these optimization problems, we get the nadir (2,2). Problem 4
def prob(x): return [(x[0]-1)**2+x[1]**2,x[0]**2+(x[1]-1)**2]
_____no_output_____
CC-BY-3.0
Exercise 6, answers.ipynb
maeehart/TIES483
Let's do this using Pyomo:
from pyomo.environ import * from pyomo.opt import SolverFactory #Import interfaces to solvers def weighting_method_pyomo(f,w): points = [] for wi in w: model = ConcreteModel() model.x = Var([0,1]) #weighted sum model.obj = Objective(expr = wi[0]*f(model.x)[0]+wi[1]*f(model.x)[1]) opt = SolverFactory("ipopt") #Use ipopt #Combination of expression and function res=opt.solve(model) #Solve the problem points.append([model.x[0].value,model.x[1].value]) #We should check for optimality... return points w = np.random.random((500,2)) #500 random weights repr = weighting_method_pyomo(prob,w)
_____no_output_____
CC-BY-3.0
Exercise 6, answers.ipynb
maeehart/TIES483
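Before plotting, here is a quick numerical cross-check (a sketch, not part of the original answers) of the ideal and nadir derived in Problem 3, using the parametrization of the Pareto set proven in Problem 2:

import numpy as np

f1 = lambda x: (x[0] - 1)**2 + x[1]**2
f2 = lambda x: x[0]**2 + (x[1] - 1)**2

# sweep the Pareto set {(t, 1-t) : t in [0, 1]} and record both objectives
t = np.linspace(0, 1, 1001)
F = np.array([[f1((ti, 1 - ti)), f2((ti, 1 - ti))] for ti in t])
print("ideal ~", F.min(axis=0))  # expect [0, 0]
print("nadir ~", F.max(axis=0))  # expect [2, 2]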
**Plot the solutions in the objective space**
import matplotlib.pyplot as plt f_repr_ws = [prob(repri) for repri in repr] fig = plt.figure() plt.scatter([z[0] for z in f_repr_ws],[z[1] for z in f_repr_ws]) plt.show()
_____no_output_____
CC-BY-3.0
Exercise 6, answers.ipynb
maeehart/TIES483
**Plot the solutions in the decision space**
import matplotlib.pyplot as plt fig = plt.figure() plt.scatter([x[0] for x in repr],[x[1] for x in repr]) plt.show()
_____no_output_____
CC-BY-3.0
Exercise 6, answers.ipynb
maeehart/TIES483
Bonus: Temperature Analysis I
import pandas as pd

# "tobs" is "temperature observations"
df = pd.read_csv('Resources/hawaii_measurements.csv')
df.head()

# Convert the date column format from string to datetime
df["date"] = pd.to_datetime(df['date'])
df.info()

# Set the date column as the DataFrame index
# (set_index moves the date column into the index, dropping the column)
df = df.set_index('date')
df.head()
_____no_output_____
ADSL
temp_analysis_bonus_1_starter.ipynb
georgiafbi/sqlalchemy-challenge
Compare June and December data across all years
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import scipy.stats as stats

# Filter data for desired months
june_df = df[df.index.month == 6]
dec_df = df[df.index.month == 12]

# Identify the average temperature for June
avg_temp_june = round(june_df.tobs.mean(), 1)
print(f"The average temperature in June from {june_df.index.year[0]} to {june_df.index.year[-1]} is {avg_temp_june} °F.")

# Identify the average temperature for December
avg_temp_dec = round(dec_df.tobs.mean(), 1)
print(f"The average temperature in December from {dec_df.index.year[0]} to {dec_df.index.year[-1]} is {avg_temp_dec} °F.")

# Create collections of temperature data
june_temps_df = pd.DataFrame(june_df.tobs).rename(columns={"tobs": "tobs_june"})
dec_temps_df = pd.DataFrame(dec_df.tobs).rename(columns={"tobs": "tobs_dec"})

# Plot the two collections, then run an unpaired t-test:
# June and December have different numbers of observations,
# so a paired t-test is not applicable here
def ttest_plots(dataset1, dataset2):
    # Scatter Plot of Data
    ds1_col = dataset1.columns[0]
    ds2_col = dataset2.columns[0]
    x1_range = dataset1.index
    x2_range = dataset2.index
    plt.subplot(2, 1, 1)
    plt.scatter(x1_range, dataset1[ds1_col], label=ds1_col, alpha=0.7)
    plt.scatter(x2_range, dataset2[ds2_col], label=ds2_col, alpha=0.7)
    plt.xlabel("Year")
    plt.ylabel("Temperature (°F)")
    plt.title(f"Scatter Plot of {ds1_col} vs {ds2_col} from {dataset1.index.year[0]} to {dataset1.index.year[-1]}")
    plt.legend()
    plt.tight_layout()
    plt.savefig("Scatter_Plot_June_and_December_Temps_Hawaii.png")
    plt.show()

    # Histogram Plot of Data
    plt.subplot(2, 1, 2)
    plt.hist(dataset1[ds1_col], 10, density=True, alpha=0.7, label=ds1_col)
    plt.hist(dataset2[ds2_col], 10, density=True, alpha=0.7, label=ds2_col)
    plt.axvline(dataset1[ds1_col].mean(), color='k', linestyle='dashed', linewidth=1)
    plt.axvline(dataset2[ds2_col].mean(), color='k', linestyle='dashed', linewidth=1)
    plt.legend()
    plt.xlabel("Temperature (°F)")
    plt.tight_layout()
    plt.savefig("Histogram_Plot_June_and_December_Temps_Hawaii.png")
    plt.show()
    return dataset1[ds1_col], dataset2[ds2_col]

temps_june, temps_dec = ttest_plots(june_temps_df, dec_temps_df)

# Note: Setting equal_var=False performs Welch's t-test, which does
# not assume equal population variance
print(stats.ttest_ind(temps_june, temps_dec, equal_var=False))
_____no_output_____
ADSL
temp_analysis_bonus_1_starter.ipynb
georgiafbi/sqlalchemy-challenge
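To make the Welch's-t-test note above concrete, here is a tiny self-contained sketch on synthetic data (the means and spreads are made up for illustration only):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
june_like = rng.normal(75, 3, size=200)  # hypothetical June-like temperatures
dec_like = rng.normal(71, 4, size=200)   # hypothetical December-like temperatures
t_stat, p_val = stats.ttest_ind(june_like, dec_like, equal_var=False)  # Welch's t-test
print(t_stat, p_val)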
Open-EDI Python Demo ----- Hello World: `import openedi`. If the import fails, check whether the Python version used by jupyter-notebook matches the one used to build the project.
import sys
module_dir = ["../lib/", "./lib/", "../build/edi/python/"]  # find from install_dir or build_dir
sys.path.extend(module_dir)

import openedi as edi
edi.ediPrint(edi.MessageType.kInfo, "Hello World.\n")
_____no_output_____
BSD-3-Clause
demo/hello-world.ipynb
lbz007/rectanglequery
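A quick way to act on the version note above; a minimal sketch for comparing the notebook's interpreter against the build environment:

import sys
print(sys.version)      # Python running this notebook kernel
print(sys.executable)   # interpreter path; compare with the one used to build openedi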
Create a database
db = edi.db.Database()
_____no_output_____
BSD-3-Clause
demo/hello-world.ipynb
lbz007/rectanglequery
Create a model and add the corresponding model terms
m0 = db.addModel("model0")
m0.setModelType(edi.ModelType.kCell)
mt0 = m0.addTerm("term0")
mt0.setSignalDirect(edi.SignalDirection.kInput)
mt1 = m0.addTerm("term1")
mt1.setSignalDirect(edi.SignalDirection.kOutput)
_____no_output_____
BSD-3-Clause
demo/hello-world.ipynb
lbz007/rectanglequery
Create a design
design = db.getDesign()
_____no_output_____
BSD-3-Clause
demo/hello-world.ipynb
lbz007/rectanglequery
Create two instances in the design
inst0 = design.addInst()
inst0.getAttr().setName("inst0")
p0 = edi.geo.Point2DInt(0, 1)
inst0.getAttr().setLoc(p0)
inst0.addModel(m0)

attr1 = edi.db.InstAttr()
attr1.setName("inst1")
p1 = edi.geo.Point2DInt(2, 3)
attr1.setLoc(p1)
inst1 = design.addInst(attr1)
inst1.addModel(m0)
_____no_output_____
BSD-3-Clause
demo/hello-world.ipynb
lbz007/rectanglequery
Create the corresponding instance terms and connect them to a net
net0 = design.addNet()
net0.getAttr().setName("net0")

inst_term0 = design.addInstTerm()
inst_term0.getAttr().setModelTerm(mt0)
inst_term0.setInst(inst0)
inst_term0.setNet(net0)
inst0.addInstTerm(inst_term0)
net0.addInstTerm(inst_term0)

inst_term1 = design.addInstTerm()
inst_term1.getAttr().setModelTerm(mt1)
inst_term1.setInst(inst1)
inst_term1.setNet(net0)
inst1.addInstTerm(inst_term1)
net0.addInstTerm(inst_term1)
_____no_output_____
BSD-3-Clause
demo/hello-world.ipynb
lbz007/rectanglequery
Write the database to a file
filename = "demo_db.txt"
edi.db.write(db, filename, 0)  # 0 means ascii mode, 1 means binary mode
_____no_output_____
BSD-3-Clause
demo/hello-world.ipynb
lbz007/rectanglequery
Read the database back from the file
db2 = edi.db.Database()
edi.db.read(db2, filename, 0)  # 0 means ascii mode, 1 means binary mode

print("We have %d models in db2." % (db2.numModels()))                             # = 1
print("We have %d insts in db2.design_." % (db2.getDesign().numInsts()))           # = 2
print("We have %d nets in db2.design_." % (db2.getDesign().numNets()))             # = 1
print("We have %d inst_terms in db2.design_." % (db2.getDesign().numInstTerms()))  # = 2
_____no_output_____
BSD-3-Clause
demo/hello-world.ipynb
lbz007/rectanglequery
Load datasets, split, normalize, etc.
datasets = np.load("datasets.npy")
labels = np.load("labels.npy")
datasets_val = np.load("datasets_val.npy")
labels_val = np.load("labels_val.npy")
datasets.shape

X_train, X_test, y_train, y_test = train_test_split(datasets, labels, test_size=0.05, random_state=4242)

# min-max normalization (other schemes can be tried as well)
def norm_dataset_minMax(dataset):
    for i in range(len(dataset)):
        d = dataset[i]
        d = (d - d.min()) / (d.max() - d.min())
        dataset[i] = d
    return dataset

# per-sample standardization (zero mean, unit std)
def norm_dataset_meanStd(dataset):
    for i in range(len(dataset)):
        d = dataset[i]
        d = (d - d.mean()) / d.std()
        dataset[i] = d
    return dataset

def print_statistics(dataset):
    print("min:{:.3f} max:{:.3f} mean:{:.3f} std:{:.3f}".format(dataset.min(), dataset.max(), dataset.mean(), dataset.std()))

X_train = norm_dataset_meanStd(X_train)
X_test = norm_dataset_meanStd(X_test)
X_val = norm_dataset_meanStd(datasets_val)

print_statistics(X_train)
print_statistics(X_test)
print_statistics(X_val)
min:-1.547 max:74.888 mean:-0.000 std:1.000 min:-1.376 max:62.494 mean:0.000 std:1.000 min:-1.411 max:49.713 mean:-0.000 std:1.000
MIT
train_test_nnets.ipynb
cemysf/BCI
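The per-sample loops above work, but the same standardization can be written as a single vectorized call; a sketch (assumes samples are stacked on axis 0):

def norm_dataset_meanStd_vec(dataset):
    # reduce over every axis except the sample axis
    axes = tuple(range(1, dataset.ndim))
    mean = dataset.mean(axis=axes, keepdims=True)
    std = dataset.std(axis=axes, keepdims=True)
    return (dataset - mean) / std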
- Make the channel dimension the last axis
X_train = np.swapaxes(X_train, -2, -1)
X_test = np.swapaxes(X_test, -2, -1)
X_val = np.swapaxes(X_val, -2, -1)
X_test.shape
_____no_output_____
MIT
train_test_nnets.ipynb
cemysf/BCI
- Convert labels to one-hot
def to_numericalLabel(x):
    if x == "left":
        return 0
    elif x == "none":
        return 1
    elif x == "right":
        return 2

y_train = [to_numericalLabel(l) for l in y_train]
y_test = [to_numericalLabel(l) for l in y_test]
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
y_train[:10]
y_test[:10]

y_val = [to_numericalLabel(l) for l in labels_val]
y_val = to_categorical(y_val)
_____no_output_____
MIT
train_test_nnets.ipynb
cemysf/BCI
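The if/elif mapping above can also be expressed as a dictionary lookup, which fails loudly on unexpected labels; a small sketch (example_labels is a hypothetical input, and to_categorical is the same Keras utility used above):

label_map = {"left": 0, "none": 1, "right": 2}
example_labels = ["left", "right", "none"]  # hypothetical input
y_example = to_categorical([label_map[l] for l in example_labels], num_classes=3)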
Try 1: simple conv net
- 2D conv and max poolings, with skip connections (add)
- Dense at the end for classification
input_img = Input(shape=(250, 60, 16))  ## 16 channels
learning_rate = 5e-4  ## 1e-3 is default for adam
reg_param = 1e-2

def net_model(input_img):
    # "Mish" is not a built-in Keras activation string, so this assumes a custom
    # Mish activation was registered earlier in the notebook.
    conv1 = Convolution2D(32, (3,3), activation="Mish", padding="same", kernel_regularizer=l2(reg_param))(input_img)  # add1
    pool1 = MaxPooling2D((2,2), padding="same")(conv1)

    conv2 = Convolution2D(32, (3,3), activation="Mish", padding="same", kernel_regularizer=l2(reg_param))(pool1)
    add2 = add([pool1, conv2])
    pool2 = MaxPooling2D((2,2), padding="same")(add2)

    conv3 = Convolution2D(32, (3,3), activation="Mish", padding="same", kernel_regularizer=l2(reg_param))(pool2)
    add3 = add([pool2, conv3])
    pool3 = MaxPooling2D((2,2), padding="same")(add3)

    conv4 = Convolution2D(32, (3,3), activation="Mish", padding="same", kernel_regularizer=l2(reg_param))(pool3)
    add4 = add([pool3, conv4])
    pool4 = MaxPooling2D((2,2), padding="same")(add4)

    conv5 = Convolution2D(32, (3,3), activation="Mish", padding="same", kernel_regularizer=l2(reg_param))(pool4)
    add5 = add([pool4, conv5])
    pool5 = MaxPooling2D((2,2), padding="same")(add5)  # note: pool5 is unused; Flatten is applied to add5 below

    flatten = Flatten()(add5)
    #dense1 = Dense(256, activation="Mish")(flatten)
    #dense2 = Dense(32, activation="Mish")(dense1)
    preds = Dense(3, activation="softmax")(flatten)
    return preds

nnet = Model(inputs=input_img, outputs=net_model(input_img))
nnet.summary()
nnet.compile(optimizer=Adam(lr=learning_rate), loss="categorical_crossentropy", metrics=["accuracy"])
nnet.fit(x=X_train, y=y_train, batch_size=32, epochs=50, validation_data=(X_test, y_test))
Train on 999 samples, validate on 53 samples Epoch 1/50 999/999 [==============================] - 17s 17ms/step - loss: 3.1340 - accuracy: 0.3534 - val_loss: 2.5861 - val_accuracy: 0.4340 Epoch 2/50 999/999 [==============================] - 17s 17ms/step - loss: 2.4655 - accuracy: 0.4875 - val_loss: 2.5292 - val_accuracy: 0.4906 Epoch 3/50 999/999 [==============================] - 19s 19ms/step - loss: 2.3267 - accuracy: 0.5666 - val_loss: 2.3891 - val_accuracy: 0.3962 Epoch 4/50 999/999 [==============================] - 21s 21ms/step - loss: 2.1721 - accuracy: 0.6186 - val_loss: 2.3609 - val_accuracy: 0.4151 Epoch 5/50 999/999 [==============================] - 21s 21ms/step - loss: 2.0605 - accuracy: 0.6817 - val_loss: 2.3059 - val_accuracy: 0.5472 Epoch 6/50 999/999 [==============================] - 21s 21ms/step - loss: 1.9565 - accuracy: 0.7247 - val_loss: 2.2523 - val_accuracy: 0.5094 Epoch 7/50 999/999 [==============================] - 21s 21ms/step - loss: 1.8642 - accuracy: 0.7477 - val_loss: 2.3795 - val_accuracy: 0.4906 Epoch 8/50 999/999 [==============================] - 22s 22ms/step - loss: 1.7659 - accuracy: 0.7828 - val_loss: 2.0648 - val_accuracy: 0.5094 Epoch 9/50 999/999 [==============================] - 21s 21ms/step - loss: 1.6996 - accuracy: 0.7918 - val_loss: 2.1412 - val_accuracy: 0.4151 Epoch 10/50 999/999 [==============================] - 21s 21ms/step - loss: 1.7006 - accuracy: 0.7698 - val_loss: 2.0887 - val_accuracy: 0.5283 Epoch 11/50 999/999 [==============================] - 21s 21ms/step - loss: 1.5257 - accuracy: 0.8819 - val_loss: 2.0092 - val_accuracy: 0.5849 Epoch 12/50 999/999 [==============================] - 21s 21ms/step - loss: 1.4590 - accuracy: 0.8839 - val_loss: 2.0219 - val_accuracy: 0.5283 Epoch 13/50 999/999 [==============================] - 21s 21ms/step - loss: 1.3878 - accuracy: 0.9099 - val_loss: 1.9607 - val_accuracy: 0.5472 Epoch 14/50 999/999 [==============================] - 21s 21ms/step - loss: 1.3103 - accuracy: 0.9349 - val_loss: 1.9491 - val_accuracy: 0.5849 Epoch 15/50 999/999 [==============================] - 21s 21ms/step - loss: 1.3654 - accuracy: 0.8659 - val_loss: 1.8709 - val_accuracy: 0.5472 Epoch 16/50 999/999 [==============================] - 21s 21ms/step - loss: 1.2550 - accuracy: 0.9279 - val_loss: 2.0366 - val_accuracy: 0.5660 Epoch 17/50 999/999 [==============================] - 20s 20ms/step - loss: 1.1963 - accuracy: 0.9429 - val_loss: 2.1585 - val_accuracy: 0.5849 Epoch 18/50 999/999 [==============================] - 21s 21ms/step - loss: 1.1255 - accuracy: 0.9750 - val_loss: 2.0433 - val_accuracy: 0.5472 Epoch 19/50 999/999 [==============================] - 21s 21ms/step - loss: 1.0958 - accuracy: 0.9710 - val_loss: 1.9887 - val_accuracy: 0.6226 Epoch 20/50 999/999 [==============================] - 21s 21ms/step - loss: 1.0298 - accuracy: 0.9890 - val_loss: 1.8445 - val_accuracy: 0.6226 Epoch 21/50 999/999 [==============================] - 21s 21ms/step - loss: 0.9840 - accuracy: 0.9910 - val_loss: 2.1092 - val_accuracy: 0.5094 Epoch 22/50 999/999 [==============================] - 21s 21ms/step - loss: 0.9516 - accuracy: 0.9940 - val_loss: 1.9623 - val_accuracy: 0.6038 Epoch 23/50 999/999 [==============================] - 21s 21ms/step - loss: 0.9439 - accuracy: 0.9790 - val_loss: 1.8351 - val_accuracy: 0.6038 Epoch 24/50 999/999 [==============================] - 21s 21ms/step - loss: 0.9123 - accuracy: 0.9910 - val_loss: 1.7453 - val_accuracy: 0.6981 Epoch 25/50 999/999 
[==============================] - 21s 21ms/step - loss: 0.8768 - accuracy: 0.9930 - val_loss: 1.8839 - val_accuracy: 0.6226 Epoch 26/50 999/999 [==============================] - 21s 21ms/step - loss: 0.8578 - accuracy: 0.9930 - val_loss: 2.0378 - val_accuracy: 0.5849 Epoch 27/50 999/999 [==============================] - 21s 21ms/step - loss: 0.8601 - accuracy: 0.9810 - val_loss: 2.1030 - val_accuracy: 0.4906 Epoch 28/50 999/999 [==============================] - 21s 21ms/step - loss: 0.9804 - accuracy: 0.9069 - val_loss: 1.9581 - val_accuracy: 0.6226 Epoch 29/50 999/999 [==============================] - 21s 21ms/step - loss: 0.8434 - accuracy: 0.9840 - val_loss: 1.6174 - val_accuracy: 0.6415 Epoch 30/50 999/999 [==============================] - 21s 21ms/step - loss: 0.7778 - accuracy: 0.9980 - val_loss: 1.6972 - val_accuracy: 0.6604 Epoch 31/50 999/999 [==============================] - 21s 21ms/step - loss: 0.7467 - accuracy: 0.9990 - val_loss: 1.6132 - val_accuracy: 0.6038 Epoch 32/50 999/999 [==============================] - 21s 21ms/step - loss: 0.7287 - accuracy: 1.0000 - val_loss: 1.8634 - val_accuracy: 0.6604 Epoch 33/50 999/999 [==============================] - 21s 21ms/step - loss: 0.7087 - accuracy: 1.0000 - val_loss: 1.7936 - val_accuracy: 0.6226 Epoch 34/50 999/999 [==============================] - 21s 21ms/step - loss: 0.6838 - accuracy: 1.0000 - val_loss: 1.6540 - val_accuracy: 0.6415 Epoch 35/50 999/999 [==============================] - 22s 22ms/step - loss: 0.6663 - accuracy: 0.9990 - val_loss: 1.8528 - val_accuracy: 0.6226 Epoch 36/50 999/999 [==============================] - 21s 21ms/step - loss: 0.6467 - accuracy: 0.9990 - val_loss: 1.6618 - val_accuracy: 0.6604 Epoch 37/50 999/999 [==============================] - 21s 21ms/step - loss: 0.6385 - accuracy: 0.9950 - val_loss: 1.5993 - val_accuracy: 0.6415 Epoch 38/50 999/999 [==============================] - 21s 21ms/step - loss: 0.7089 - accuracy: 0.9650 - val_loss: 1.5298 - val_accuracy: 0.6415 Epoch 39/50 999/999 [==============================] - 21s 21ms/step - loss: 0.6575 - accuracy: 0.9820 - val_loss: 1.8452 - val_accuracy: 0.5094 Epoch 40/50 999/999 [==============================] - 21s 21ms/step - loss: 0.6187 - accuracy: 0.9920 - val_loss: 1.5886 - val_accuracy: 0.6415 Epoch 41/50 999/999 [==============================] - 22s 22ms/step - loss: 0.5981 - accuracy: 0.9930 - val_loss: 1.6700 - val_accuracy: 0.5849 Epoch 42/50 999/999 [==============================] - 21s 21ms/step - loss: 0.5698 - accuracy: 1.0000 - val_loss: 1.6604 - val_accuracy: 0.6038 Epoch 43/50 999/999 [==============================] - 21s 21ms/step - loss: 0.5557 - accuracy: 0.9990 - val_loss: 1.7552 - val_accuracy: 0.5849 Epoch 44/50 999/999 [==============================] - 21s 21ms/step - loss: 0.5420 - accuracy: 0.9990 - val_loss: 1.7287 - val_accuracy: 0.6226 Epoch 45/50 999/999 [==============================] - 21s 21ms/step - loss: 0.5424 - accuracy: 0.9950 - val_loss: 2.1250 - val_accuracy: 0.5849 Epoch 46/50 999/999 [==============================] - 22s 22ms/step - loss: 0.5871 - accuracy: 0.9700 - val_loss: 1.7103 - val_accuracy: 0.5849 Epoch 47/50 999/999 [==============================] - 22s 22ms/step - loss: 0.5319 - accuracy: 0.9970 - val_loss: 1.5476 - val_accuracy: 0.5849 Epoch 48/50 999/999 [==============================] - 21s 21ms/step - loss: 0.5075 - accuracy: 0.9980 - val_loss: 1.4982 - val_accuracy: 0.6038 Epoch 49/50 999/999 [==============================] - 21s 21ms/step - loss: 0.4914 - 
accuracy: 0.9990 - val_loss: 1.7249 - val_accuracy: 0.6415 Epoch 50/50 999/999 [==============================] - 21s 21ms/step - loss: 0.4798 - accuracy: 1.0000 - val_loss: 1.8127 - val_accuracy: 0.5849
MIT
train_test_nnets.ipynb
cemysf/BCI
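Given how much val_loss oscillates in the log above, early stopping could be worth trying; a sketch with the standard Keras callback (an addition, not part of the original run; assumes a Keras version that supports restore_best_weights):

from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
nnet.fit(x=X_train, y=y_train, batch_size=32, epochs=50,
         validation_data=(X_test, y_test), callbacks=[early_stop])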
Check results
### https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score  # needed if not imported earlier

def plot_confusion_matrix(y_true, y_pred, classes,
                          normalize=False,
                          title=None,
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if not title:
        if normalize:
            title = 'Normalized confusion matrix'
        else:
            title = 'Confusion matrix, without normalization'

    # Compute confusion matrix
    cm = confusion_matrix(y_true, y_pred)
    # Only use the labels that appear in the data
    #classes = classes[unique_labels(y_true, y_pred)]
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')

    print(cm)

    fig, ax = plt.subplots()
    im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
    ax.figure.colorbar(im, ax=ax)
    # We want to show all ticks...
    ax.set(xticks=np.arange(cm.shape[1]),
           yticks=np.arange(cm.shape[0]),
           # ... and label them with the respective list entries
           xticklabels=classes, yticklabels=classes,
           title=title,
           ylabel='True label',
           xlabel='Predicted label')

    # Rotate the tick labels and set their alignment.
    plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
             rotation_mode="anchor")

    # Loop over data dimensions and create text annotations.
    fmt = '.4f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i in range(cm.shape[0]):
        for j in range(cm.shape[1]):
            ax.text(j, i, format(cm[i, j], fmt),
                    ha="center", va="center",
                    color="white" if cm[i, j] > thresh else "black")
    fig.tight_layout()
    return ax

val_preds = nnet.predict(X_val)
plot_confusion_matrix(y_val.argmax(axis=1), val_preds.argmax(axis=1), ["left", "none", "right"], normalize=True)

acc = accuracy_score(y_val.argmax(axis=1), val_preds.argmax(axis=1))
f1 = f1_score(y_val.argmax(axis=1), val_preds.argmax(axis=1), average="weighted")
print("accuracy:{:.4f} f1:{:.4f}".format(acc, f1))
accuracy:0.4894 f1:0.4805
MIT
train_test_nnets.ipynb
cemysf/BCI
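For what it's worth, recent scikit-learn versions ship a ready-made plot that replaces most of the helper above; a sketch assuming scikit-learn >= 1.0:

from sklearn.metrics import ConfusionMatrixDisplay

ConfusionMatrixDisplay.from_predictions(
    y_val.argmax(axis=1), val_preds.argmax(axis=1),
    display_labels=["left", "none", "right"], normalize='true')
plt.show()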
![](../img/dl_banner.jpg) Deep-Learning-Based Image Retrieval \[稀牛学院 x 网易云课程\] "Deep Learning Engineer (Hands-On)" course material by [@寒小阳](https://blog.csdn.net/han_xiaoyang). **Tip: if compute resources are limited, you are welcome to try [Google Colab](https://colab.research.google.com) for free (after getting around the network firewall); it provides a free K80 GPU, and you only need to upload the course notebook to run it.**
!rm -rf tiny* features
!wget http://cs231n.stanford.edu/tiny-imagenet-200.zip

import zipfile
zfile = zipfile.ZipFile('tiny-imagenet-200.zip', 'r')
zfile.extractall()
zfile.close()

!ls
!ls tiny-imagenet-200
!ls tiny-imagenet-200/train/n01443537/images | wc -l

# -*- coding: utf-8 -*-
import os
import random

# open a file to write the image names into
out = open("ImageName.txt", 'w')

# recursively walk the folder and, with a small probability, write each image name to the file
def gci(filepath):
    # iterate over all files under filepath, including subdirectories
    files = os.listdir(filepath)
    for fi in files:
        fi_d = os.path.join(filepath, fi)
        if os.path.isdir(fi_d):
            gci(fi_d)
        else:
            if random.random() <= 0.02 and fi_d.endswith(".JPEG"):
                out.write(os.path.join(fi_d) + "\n")

filepath = "tiny-imagenet-200"
gci(filepath)
out.close()

!ls
!head -5 ImageName.txt
tiny-imagenet-200/train/n02843684/images/n02843684_219.JPEG tiny-imagenet-200/train/n02843684/images/n02843684_66.JPEG tiny-imagenet-200/train/n02843684/images/n02843684_152.JPEG tiny-imagenet-200/train/n02843684/images/n02843684_479.JPEG tiny-imagenet-200/train/n02843684/images/n02843684_95.JPEG
Apache-2.0
2.cnn_based_image_retrievel.ipynb
DowsonLewis/MachineLearning-work
Image Feature Extraction \[稀牛学院 x 网易云课程\] "Deep Learning Engineer (Hands-On)" course material by [@寒小阳](https://blog.csdn.net/han_xiaoyang)
import numpy as np
from numpy import linalg as LA
import h5py

from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing import image
import keras.applications.inception_v3 as inception_v3
import keras.applications.vgg16 as vgg16
from keras.applications.vgg16 import VGG16

class InceptionNet:
    def __init__(self):
        # weights: 'imagenet'
        # pooling: 'max' or 'avg'
        # input_shape: (width, height, 3), width and height should be >= 48
        self.input_shape = (224, 224, 3)
        self.weight = 'imagenet'
        self.pooling = 'max'
        # build the pretrained model without the classifier head
        self.model = InceptionV3(weights='imagenet', include_top=False)
        self.model.predict(np.zeros((1, 224, 224, 3)))

    '''
    Use the inception_v3 model to extract features
    Output feature vector
    '''
    def extract_feat(self, img_path):
        img = image.load_img(img_path, target_size=(self.input_shape[0], self.input_shape[1]))
        img = image.img_to_array(img)
        img = np.expand_dims(img, axis=0)
        img = inception_v3.preprocess_input(img)
        feat = self.model.predict(img)
        return feat
        #norm_feat = feat[0]/LA.norm(feat[0])
        #return norm_feat

class VGGNet:
    def __init__(self):
        # weights: 'imagenet'
        # pooling: 'max' or 'avg'
        # input_shape: (width, height, 3), width and height should be >= 48
        self.input_shape = (224, 224, 3)
        self.weight = 'imagenet'
        self.pooling = 'max'
        self.model = VGG16(weights=self.weight,
                           input_shape=(self.input_shape[0], self.input_shape[1], self.input_shape[2]),
                           pooling=self.pooling,
                           include_top=False)
        self.model.predict(np.zeros((1, 224, 224, 3)))

    '''
    Use the vgg16 model to extract features
    Output feature vector
    '''
    def extract_feat(self, img_path):
        img = image.load_img(img_path, target_size=(self.input_shape[0], self.input_shape[1]))
        img = image.img_to_array(img)
        img = np.expand_dims(img, axis=0)
        img = vgg16.preprocess_input(img)
        feat = self.model.predict(img)
        return feat
Using TensorFlow backend.
Apache-2.0
2.cnn_based_image_retrievel.ipynb
DowsonLewis/MachineLearning-work
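The normalization that is commented out in extract_feat matters for retrieval: with unit-length features, Euclidean distance becomes a monotone function of cosine similarity. A sketch of re-enabling it (an assumption about the intended behavior; img_path is a hypothetical input):

feat = model.extract_feat(img_path)  # raw pooled feature
vec = feat.ravel()
norm_vec = vec / LA.norm(vec)        # unit-length feature for cosine-style matching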
Iterate over the images, extract image features, and store them \[稀牛学院 x 网易云课程\] "Deep Learning Engineer (Hands-On)" course material by [@寒小阳](https://blog.csdn.net/han_xiaoyang)
print("--------------------------------------------------") print(" 特征抽取开始 ") print("--------------------------------------------------") # 特征与文件名存储列表 feats = [] names = [] # 读取图片列表 img_list = open("ImageName.txt", 'r').readlines() img_list = [image.strip() for image in img_list] # 初始化模型 # model = InceptionNet() model = VGGNet() # 遍历与特征抽取 for i, img_path in enumerate(img_list): norm_feat = model.extract_feat(img_path) img_name = os.path.split(img_path)[1] feats.append(norm_feat) names.append(img_name) if i%50 == 0: print("抽取图片的特征,进度%d/%d" %((i+1), len(img_list))) # 特征转换成numpy array格式 feats = np.array(feats) print("--------------------------------------------------") print(" 把抽取的特征写入文件中 ") print("--------------------------------------------------") # 把特征写入文件 output = "features" h5f = h5py.File(output, 'w') h5f.create_dataset('dataset_1', data = feats) h5f.create_dataset('dataset_2', data = np.string_(names)) h5f.close() %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg from scipy import spatial def image_retrieval(input_img, max_res, feats): # 读取待检索图片与展示 queryImg = mpimg.imread(input_img) plt.title("Query Image") plt.imshow(queryImg) plt.grid(None) plt.show() # 初始化Inception模型 model = VGGNet() # 抽取特征,距离比对与排序 queryVec = model.extract_feat(input_img) queryVec = queryVec.reshape(1,-1) feats = feats.reshape(feats.shape[0],-1) scores = spatial.distance.cdist(queryVec, feats).ravel() rank_ID = np.argsort(scores) rank_score = scores[rank_ID] # 选取top max_res张最相似的图片展示 imlist = [img_list[index] for i,index in enumerate(rank_ID[0:max_res])] print("最接近的%d张图片为: " %max_res, imlist) for i,im in enumerate(imlist): image = mpimg.imread(im) plt.title("search output %d" %(i+1)) plt.imshow(image) plt.grid(None) plt.show() input_img = "tiny-imagenet-200/train/n02843684/images/n02843684_66.JPEG" max_res = 8 image_retrieval(input_img, max_res, feats)
_____no_output_____
Apache-2.0
2.cnn_based_image_retrievel.ipynb
DowsonLewis/MachineLearning-work
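cdist above defaults to Euclidean distance; switching the metric is a one-argument change if cosine distance is preferred, as in this small sketch:

scores = spatial.distance.cdist(queryVec, feats, metric='cosine').ravel()
rank_ID = np.argsort(scores)  # smallest cosine distance = most similar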
Speed up retrieval with an approximate nearest neighbor algorithm \[稀牛学院 x 网易云课程\] "Deep Learning Engineer (Hands-On)" course material by [@寒小阳](https://blog.csdn.net/han_xiaoyang)
feats.shape

!pip install nearpy

from nearpy import Engine
from nearpy.hashes import RandomBinaryProjections

DIMENSIONS = 512
PROJECTIONBITS = 16
ENGINE = Engine(DIMENSIONS,
                lshashes=[RandomBinaryProjections('rbp', PROJECTIONBITS, rand_seed=2611),
                          RandomBinaryProjections('rbp', PROJECTIONBITS, rand_seed=261),
                          RandomBinaryProjections('rbp', PROJECTIONBITS, rand_seed=26)])

for i, f in enumerate(feats.reshape(feats.shape[0], -1)):
    #print(i, f.shape)
    ENGINE.store_vector(f, i)

def image_retrieval_fast(input_img, max_res, ann):
    # note: the ann argument is unused; the global ENGINE is queried directly
    # load and display the query image
    queryImg = mpimg.imread(input_img)
    plt.title("Query Image")
    plt.imshow(queryImg)
    plt.grid(None)
    plt.show()

    # initialize the VGG model
    model = VGGNet()

    # extract features and use approximate nearest neighbors for fast retrieval
    queryVec = model.extract_feat(input_img)
    imlist = [img_list[int(k)] for v, k, d in ENGINE.neighbours(queryVec.ravel())[:max_res]]

    # show the top max_res most similar images
    print("the %d closest images are: " % max_res, imlist)
    for i, im in enumerate(imlist):
        img = mpimg.imread(im)
        plt.title("search output %d" % (i+1))
        plt.imshow(img)
        plt.grid(None)
        plt.show()

input_img = "tiny-imagenet-200/train/n02843684/images/n02843684_66.JPEG"
max_res = 8
image_retrieval_fast(input_img, max_res, feats)
_____no_output_____
Apache-2.0
2.cnn_based_image_retrievel.ipynb
DowsonLewis/MachineLearning-work
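To see the speed-up the LSH engine buys, a rough timing sketch comparing brute-force cdist with ENGINE.neighbours (numbers will vary by machine and dataset size):

import time

flat = feats.reshape(feats.shape[0], -1)
q = flat[0]

t0 = time.time()
_ = spatial.distance.cdist(q[None, :], flat)
print("brute force: %.4fs" % (time.time() - t0))

t0 = time.time()
_ = ENGINE.neighbours(q)
print("LSH engine:  %.4fs" % (time.time() - t0))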
Linear Regression for North American Pumpkins - Lesson 1

Import needed libraries
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model, model_selection
_____no_output_____
MIT
2-Regression/1-Tools/solution/notebook.ipynb
buseorak/ML-For-Beginners
Load the diabetes dataset, divided into `X` features and `y` target
X, y = datasets.load_diabetes(return_X_y=True)
print(X.shape)
print(X[0])
(442, 10) [ 0.03807591 0.05068012 0.06169621 0.02187235 -0.0442235 -0.03482076 -0.04340085 -0.00259226 0.01990842 -0.01764613]
MIT
2-Regression/1-Tools/solution/notebook.ipynb
buseorak/ML-For-Beginners
Select just one feature to target for this exercise
X = X[:, np.newaxis, 2]  # keep only feature 2 (BMI), reshaped to (n_samples, 1)
_____no_output_____
MIT
2-Regression/1-Tools/solution/notebook.ipynb
buseorak/ML-For-Beginners
Split the training and test data for both `X` and `y`
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.33)
_____no_output_____
MIT
2-Regression/1-Tools/solution/notebook.ipynb
buseorak/ML-For-Beginners
Select the model and fit it with the training data
model = linear_model.LinearRegression()
model.fit(X_train, y_train)
_____no_output_____
MIT
2-Regression/1-Tools/solution/notebook.ipynb
buseorak/ML-For-Beginners
Use test data to predict a line
y_pred = model.predict(X_test)
_____no_output_____
MIT
2-Regression/1-Tools/solution/notebook.ipynb
buseorak/ML-For-Beginners
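Before plotting, it can help to quantify the fit; a short sketch using scikit-learn's regression metrics:

from sklearn.metrics import mean_squared_error, r2_score

print("MSE: %.2f" % mean_squared_error(y_test, y_pred))
print("R^2: %.2f" % r2_score(y_test, y_pred))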
Display the results in a plot
plt.scatter(X_test, y_test, color='black')
plt.plot(X_test, y_pred, color='blue', linewidth=3)
plt.show()
_____no_output_____
MIT
2-Regression/1-Tools/solution/notebook.ipynb
buseorak/ML-For-Beginners
Flow Cytometry Data

Load AML data from 21 samples: 5 of them are healthy (H\*) and 16 of them are AML samples (SJ\*).
%%time
# load data into a dictionary of pandas data frames
PATH_DATA = '/extra/disij0/data/flow_cytometry/cytobank/levine_aml/CSV/'
#PATH = '/Users/disiji/Dropbox/current/flow_cytometry/acdc/data/'
user_ids = ['H1','H2','H3','H4','H5','SJ01','SJ02','SJ03','SJ04','SJ05','SJ06','SJ07','SJ08','SJ09','SJ10',
            'SJ11','SJ12','SJ13','SJ14','SJ15','SJ16']

data_dict = dict()
for id in user_ids:
    print id
    data_path = PATH_DATA + id
    allFiles = glob.glob(data_path + "/*fcsdim_42.csv")
    frame = pd.DataFrame()
    list_ = []
    for file_ in allFiles:
        df = pd.read_csv(file_, index_col=None, header=0)
        list_.append(df)
    data_dict[id] = pd.concat(list_)

markers = ['HLA-DR','CD19','CD34','CD45','CD47','CD44','CD117','CD123','CD38','CD11b',
           'CD7','CD15','CD3','CD64','CD33','CD41']
print markers

PATH_TABLE = '/home/disij/projects/acdc/data/AML_benchmark/'
table = pd.read_csv(PATH_TABLE + 'AML_table.csv', sep=',', header=0, index_col=0)
table = table.fillna(0)
table = table[markers]
print table.shape
print table

cell_type_name2idx = {x: i for i, x in enumerate(table.index)}
cell_type_idx2name = {i: x for i, x in enumerate(table.index)}
['HLA-DR', 'CD19', 'CD34', 'CD45', 'CD47', 'CD44', 'CD117', 'CD123', 'CD38', 'CD11b', 'CD7', 'CD15', 'CD3', 'CD64', 'CD33', 'CD41'] (14, 16) HLA-DR CD19 CD34 CD45 CD47 CD44 CD117 CD123 \ Basophils -1.0 -1 -1 0.0 0.0 0.0 0.0 1 CD4 T cells -1.0 -1 -1 0.0 0.0 0.0 0.0 -1 CD8 T cells -1.0 -1 -1 0.0 0.0 0.0 0.0 -1 CD16- NK cells -1.0 -1 -1 0.0 0.0 0.0 0.0 -1 CD16+ NK cells -1.0 -1 -1 0.0 0.0 0.0 0.0 -1 CD34+CD38+CD123- HSPCs 0.0 -1 1 -1.0 0.0 0.0 0.0 -1 CD34+CD38+CD123+ HSPCs 0.0 -1 1 -1.0 0.0 0.0 0.0 1 CD34+CD38lo HSCs 0.0 -1 1 -1.0 0.0 0.0 0.0 -1 Mature B cells 0.0 1 -1 0.0 0.0 0.0 0.0 -1 Plasma B cells -1.0 1 -1 0.0 0.0 0.0 0.0 -1 Pre B cells 1.0 1 -1 0.0 0.0 0.0 0.0 -1 Pro B cells 0.0 1 1 -1.0 0.0 0.0 0.0 -1 Monocytes 1.0 -1 -1 0.0 0.0 0.0 0.0 -1 pDCs 1.0 -1 -1 0.0 0.0 0.0 0.0 1 CD38 CD11b CD7 CD15 CD3 CD64 CD33 CD41 Basophils 0.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0 CD4 T cells 0.0 0.0 0.0 0.0 1 -1.0 0.0 0.0 CD8 T cells 0.0 0.0 1.0 0.0 1 -1.0 0.0 0.0 CD16- NK cells 0.0 0.0 1.0 0.0 -1 -1.0 0.0 0.0 CD16+ NK cells 0.0 0.0 1.0 0.0 -1 -1.0 0.0 0.0 CD34+CD38+CD123- HSPCs 1.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0 CD34+CD38+CD123+ HSPCs 1.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0 CD34+CD38lo HSCs -1.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0 Mature B cells 0.0 0.0 -1.0 0.0 -1 0.0 0.0 0.0 Plasma B cells 1.0 0.0 -1.0 0.0 -1 0.0 0.0 0.0 Pre B cells 1.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0 Pro B cells 1.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0 Monocytes 0.0 0.0 -1.0 0.0 -1 0.0 0.0 0.0 pDCs 0.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0
MIT
small_run/Flow_Cytometry_Mondrian_Processes-Random-Effects-Final_n_chain_5_n_sample_1000.ipynb
disiji/fc_mondrian
Now run MCMC to collect posterior samples...

Random effects model

Training models for healthy samples
f = lambda x: np.arcsinh((x - 1.) / 5.)
data = [data_dict[_].head(20000).applymap(f)[markers].values for _ in ['H1','H2','H3','H4','H5']]

# compute data range
data_ranges = np.array([[[data[_][:,d].min(), data[_][:,d].max()]
                         for d in range(len(markers))] for _ in range(len(data))])
theta_space = np.array([[data_ranges[:,d,0].min(), data_ranges[:,d,1].max()]
                        for d in range(len(markers))])
n_samples = len(data)

%%time
n_mcmc_chain = 5
n_mcmc_sample = 1000
mcmc_gaussin_std = 0.1
random_effect_gaussian_std = 0.5

pooled_data = np.concatenate(data)
num_cores = multiprocessing.cpu_count()
results = Parallel(n_jobs=num_cores)(delayed(mcmc_template)(i) for i in range(n_mcmc_chain))

accepts_template_mp_H = []
accepts_indiv_mp_lists_H = []
joint_logP_H = []
for _ in results:
    accepts_template_mp_H.append(_[0])
    accepts_indiv_mp_lists_H.append(_[1])
    joint_logP_H.append(_[2])

fig, axarr = plt.subplots(n_mcmc_chain / 3 + 1, 3, figsize=(15, 6 * 1))
for i in range(n_mcmc_chain):
    axarr[i/3, i%3].plot(joint_logP_H[i])
fig.suptitle("log joint likelihood")
plt.show()

population_size_H = [None for _ in range(n_samples)]
for id in range(n_samples):
    data_subset = data[id]
    burnt_samples = [i for _ in range(n_mcmc_chain) for i in
                     accepts_indiv_mp_lists_H[_][id][-2:]]
    population_size_H[id] = compute_cell_population(data_subset, burnt_samples,
                                                    table, cell_type_name2idx)

for id in range(n_samples):
    plt.plot(population_size_H[id], color='g')
plt.title('Healthy')
plt.show()
_____no_output_____
MIT
small_run/Flow_Cytometry_Mondrian_Processes-Random-Effects-Final_n_chain_5_n_sample_1000.ipynb
disiji/fc_mondrian
Training models for unhealthy samples
data = [data_dict[_].head(20000).applymap(f)[markers].values for _ in ['SJ01','SJ02',
        'SJ03','SJ04','SJ05','SJ06','SJ07','SJ08','SJ09','SJ10',
        'SJ11','SJ12','SJ13','SJ14','SJ15','SJ16']]

# compute data range
data_ranges = np.array([[[data[_][:,d].min(), data[_][:,d].max()]
                         for d in range(len(markers))] for _ in range(len(data))])
theta_space = np.array([[data_ranges[:,d,0].min(), data_ranges[:,d,1].max()]
                        for d in range(len(markers))])
n_samples = len(data)

%%time
pooled_data = np.concatenate(data)
results = Parallel(n_jobs=num_cores)(delayed(mcmc_template)(i) for i in range(n_mcmc_chain))

accepts_template_mp_SJ = []
accepts_indiv_mp_lists_SJ = []
joint_logP_SJ = []
for _ in results:
    accepts_template_mp_SJ.append(_[0])
    accepts_indiv_mp_lists_SJ.append(_[1])
    joint_logP_SJ.append(_[2])

fig, axarr = plt.subplots(n_mcmc_chain / 2, 3, figsize=(15, 6))
for i in range(n_mcmc_chain):
    axarr[i/3, i%3].plot(joint_logP_SJ[i])
fig.suptitle("log joint likelihood")
plt.show()

population_size_SJ = [None for _ in range(n_samples)]
for id in range(n_samples):
    data_subset = data[id]
    burnt_samples = [i for _ in range(n_mcmc_chain) for i in
                     accepts_indiv_mp_lists_SJ[_][id][-1:]]
    population_size_SJ[id] = compute_cell_population(data_subset, burnt_samples,
                                                     table, cell_type_name2idx)

for id in range(n_samples):
    plt.plot(population_size_SJ[id], color='r')
plt.title('AML')
plt.show()
_____no_output_____
MIT
small_run/Flow_Cytometry_Mondrian_Processes-Random-Effects-Final_n_chain_5_n_sample_1000.ipynb
disiji/fc_mondrian