Example The following is an example implementation of a multi-layer perceptron on the MNIST data set. First, initialize all the required libraries.
# %load mnist_mlp.py '''Trains a simple deep NN on the MNIST dataset. Gets to 98.40% test accuracy after 20 epochs (there is *a lot* of margin for parameter tuning). 2 seconds per epoch on a K520 GPU. ''' from __future__ import print_function import numpy as np np.random.seed(1337) # for reproducibility from keras.datasets import mnist from keras.models import Sequential from keras.layers.core import Dense, Dropout, Activation from keras.optimizers import RMSprop from keras.utils import np_utils
notebooks/getting_started_with_keras.ipynb
ninadhw/ninadhw.github.io
cc0-1.0
A simple function to display samples from the test dataset along with their predicted labels
def show_prediction_results(X_test,predicted_labels): for i,j in enumerate(random.sample(range(len(X_test)),10)): plt.subplot(5,2,i+1) plt.axis("off") plt.title("Predicted labels is "+str(np.argmax(predicted_labels[j]))) plt.imshow(X_test[j].reshape(28,28))
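The helper above relies on random and matplotlib.pyplot, which are not imported in this cell (numpy is already imported as np in the first cell). A minimal import cell, assumed here rather than taken from the original notebook, would be:

```python
# Assumed imports for show_prediction_results (not shown in the original cell)
import random                    # random.sample picks 10 test images at random
import matplotlib.pyplot as plt  # plt.subplot / plt.imshow render the digits
```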
notebooks/getting_started_with_keras.ipynb
ninadhw/ninadhw.github.io
cc0-1.0
Generating and structuring the dataset for training and testing. We will be using 28x28 images from the MNIST dataset: about 60000 for training and 10000 for testing. We will use a batch size of 128 to classify the 10 digits in the images. To keep the computation small, 20 epochs are used; these can be increased for more accuracy.
batch_size = 128 nb_classes = 10 nb_epoch = 20 # the data, shuffled and split between train and test sets (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.reshape(60000, 784) X_test = X_test.reshape(10000, 784) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_test /= 255 print(X_train.shape[0], 'train samples') print(X_test.shape[0], 'test samples') # convert class vectors to binary class matrices Y_train = np_utils.to_categorical(y_train, nb_classes) Y_test = np_utils.to_categorical(y_test, nb_classes)
notebooks/getting_started_with_keras.ipynb
ninadhw/ninadhw.github.io
cc0-1.0
Start building a Sequential model in Keras. We will use a 3-layer MLP model to fit the dataset.
model = Sequential() model.add(Dense(512, input_shape=(784,))) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(Dense(512)) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(Dense(10)) model.add(Activation('softmax')) model.summary()
notebooks/getting_started_with_keras.ipynb
ninadhw/ninadhw.github.io
cc0-1.0
Compiling the model configures it with performance parameters such as the loss function, metric, and optimizer.
model.compile(loss='categorical_crossentropy', optimizer=RMSprop(), metrics=['accuracy'])
notebooks/getting_started_with_keras.ipynb
ninadhw/ninadhw.github.io
cc0-1.0
<span style="color:red;font-weight:bold">fit function for model</span> fits the training data to the neural network configured above
history = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=0, validation_data=(X_test, Y_test)) # Let's save the model in local file to fetch at later point in time to skip computations # and directly start testing if need be model.save_weights('mnist_mlp.hdf5') with open('mnist_mlp.json', 'w') as f: f.write(model.to_json())
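Since the cell above writes both the architecture (mnist_mlp.json) and the weights (mnist_mlp.hdf5), a minimal sketch of restoring the model in a later session, assuming the same Keras API used above, might look like this:

```python
from keras.models import model_from_json

# Rebuild the architecture from the saved JSON, then load the trained weights
with open('mnist_mlp.json') as f:
    restored_model = model_from_json(f.read())
restored_model.load_weights('mnist_mlp.hdf5')

# Re-compile before evaluating or predicting again
restored_model.compile(loss='categorical_crossentropy',
                       optimizer=RMSprop(),
                       metrics=['accuracy'])
```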
notebooks/getting_started_with_keras.ipynb
ninadhw/ninadhw.github.io
cc0-1.0
<span style="color:red;font-weight:bold">predict function for model</span> predicts labels or values for the testing data provided
predicted_labels = model.predict(X_test,verbose=0) score = model.evaluate(X_test, Y_test, verbose=0) print('Test score:', score[0]) print('Test accuracy:', score[1]) # Let's visualize some results randomly picked from testdata set and predicted labels for them # show_prediction_results(X_test,predicted_labels)
notebooks/getting_started_with_keras.ipynb
ninadhw/ninadhw.github.io
cc0-1.0
Part 2 - Adding Latitude and Longitude
coordinates = pd.read_csv('http://cocl.us/Geospatial_data') coordinates.rename(columns={'Postal Code': 'PostalCode'}, inplace=True) final_result = pd.merge(toronto_data, coordinates, on='PostalCode') final_result
coursera/applied_data_science_capstone/Week 3 Applied Data Science Capstone.ipynb
mohanprasath/Course-Work
gpl-3.0
Part 3 - Clustering
import matplotlib.pyplot as plt lat_lons = [] lats = [] lons = [] for index, row in final_result.iterrows(): lat_lons.append([row['Longitude'], row['Latitude']]) lats.append(row['Latitude']) lons.append(row['Longitude']) plt.scatter(lons, lats) plt.xlabel("Longitude") plt.ylabel("Latitude") plt.title("Toronto Postal Codes Geo Location") plt.show()
coursera/applied_data_science_capstone/Week 3 Applied Data Science Capstone.ipynb
mohanprasath/Course-Work
gpl-3.0
The above plot shows the regions in Toronto. However, the clusters are not clearly visible through visual analysis alone; a proper clustering algorithm like k-Means is required for a good analysis. Please refer to the following code for more info.
# I have Referred some clustering examples from Kaggle # https://www.kaggle.com/xxing9703/kmean-clustering-of-latitude-and-longitude import folium toronto_latitude = 43.6532; toronto_longitude = -79.3832 map_toronto = folium.Map(location = [toronto_latitude, toronto_longitude], zoom_start = 10.7) # adding markers to map for lat, lng, borough, neighborhood in zip(final_result['Latitude'], final_result['Longitude'], final_result['Borough'], final_result['Neighborhood']): label = '{}, {}'.format(neighborhood, borough) label = folium.Popup(label, parse_html=True) folium.CircleMarker( [lat, lng], radius=5, popup=label, color='red', fill=True, fill_color='#110000', fill_opacity=0.7).add_to(map_toronto) map_toronto
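The folium map above only visualizes the postal-code locations. Since the text mentions k-Means, here is a minimal clustering sketch over the latitude/longitude pairs; it assumes scikit-learn is available, and the choice of 5 clusters is arbitrary rather than taken from the original notebook:

```python
from sklearn.cluster import KMeans

# Cluster the postal codes on their coordinates alone
coords = final_result[['Latitude', 'Longitude']].values
kmeans = KMeans(n_clusters=5, random_state=0).fit(coords)  # 5 is an arbitrary choice

# Attach the cluster labels and color the scatter plot by cluster
final_result['Cluster'] = kmeans.labels_
plt.scatter(final_result['Longitude'], final_result['Latitude'],
            c=final_result['Cluster'], cmap='tab10')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.title('Toronto Postal Codes, k-Means Clusters')
plt.show()
```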
coursera/applied_data_science_capstone/Week 3 Applied Data Science Capstone.ipynb
mohanprasath/Course-Work
gpl-3.0
Tune your hyperparameters What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy. Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value. Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set. Experiment: Your goal in this exercise is to get as good a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the test set we will award you one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, adding dropout, or adding features to the solver, etc.).
best_net = None # store the best model into this ################################################################################# # TODO: Tune hyperparameters using the validation set. Store your best trained # # model in best_net. # # # # To help debug your network, it may help to use visualizations similar to the # # ones we used above; these visualizations will have significant qualitative # # differences from the ones we saw above for the poorly tuned network. # # # # Tweaking hyperparameters by hand can be fun, but you might find it useful to # # write code to sweep through possible combinations of hyperparameters # # automatically like we did on the previous exercises. # learning_rates = [1e-4, 2e-4] regularization_strengths = [1,1e4] # results is dictionary mapping tuples of the form # (learning_rate, regularization_strength) to tuples of the form # (training_accuracy, validation_accuracy). The accuracy is simply the fraction # of data points that are correctly classified. results = {} best_val = -1 # The highest validation accuracy that we have seen so far. for learning_rate in learning_rates: for regularization_strength in regularization_strengths: net = TwoLayerNet(input_size,hidden_size,num_classes) net.train(X_train, y_train,X_val,y_val, learning_rate= learning_rate, reg=regularization_strength, num_iters=1500) y_train_predict = net.predict(X_train) y_val_predict = net.predict(X_val) accuracy_train = np.mean(y_train_predict == y_train) accuracy_validation = np.mean(y_val_predict == y_val) results[(learning_rate,regularization_strength)] = (accuracy_train,accuracy_validation) if accuracy_validation > best_val: best_val = accuracy_validation best_net = net ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out results. for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print 'lr %e reg %e train accuracy: %f val accuracy: %f' % ( lr, reg, train_accuracy, val_accuracy) print 'best validation accuracy achieved during cross-validation: %f' % best_val # visualize the weights of the best network show_net_weights(best_net)
solutions/vijendra/assignment1/two_layer_net.ipynb
machinelearningnanodegree/stanford-cs231
mit
็ทด็ฟ’ๅ•้กŒ (1) 2ๅ€‹ใฎใ‚ตใ‚คใ‚ณใƒญใ‚’ๆŒฏใฃใŸ็ตๆžœใ‚’ใ‚ทใƒฅใƒŸใƒฌใƒผใ‚ทใƒงใƒณใ—ใพใ™ใ€‚ๆฌกใฎไพ‹ใฎใ‚ˆใ†ใซใ€1ใ€œ6ใฎๆ•ดๆ•ฐใฎใƒšใ‚ขใ‚’ๅซใ‚€arrayใ‚’ไนฑๆ•ฐใง็”Ÿๆˆใ—ใฆใใ ใ•ใ„ใ€‚
from numpy.random import randint randint(1,7,2)
Solutions/03-Random Numbers-solution.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
(2) 2ๅ€‹ใฎใ‚ตใ‚คใ‚ณใƒญใ‚’ๆŒฏใฃใŸ็ตๆžœใ‚’10ๅ›žๅˆ†็”จๆ„ใ—ใพใ™ใ€‚ๆฌกใฎไพ‹ใฎใ‚ˆใ†ใซใ€1ใ€œ6ใฎๆ•ดๆ•ฐใฎใƒšใ‚ข๏ผˆใƒชใ‚นใƒˆ๏ผ‰ใ‚’10็ต„ๅซใ‚€arrayใ‚’็”Ÿๆˆใ—ใฆใ€ๅค‰ๆ•ฐ dice ใซไฟๅญ˜ใ—ใฆใใ ใ•ใ„ใ€‚
dice = randint(1,7,[10,2]) dice
Solutions/03-Random Numbers-solution.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
(3) For each of the results stored in the variable dice, compute the sum of the two dice, as in the following example. (Store the computed results in a list.)
[a+b for (a,b) in dice]
Solutions/03-Random Numbers-solution.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
(4) 2ๅ€‹ใฎใ‚ตใ‚คใ‚ณใƒญใฎ็›ฎใฎๅˆ่จˆใ‚’1000ๅ›žๅˆ†็”จๆ„ใ—ใฆใ€2ใ€œ12ใฎใใ‚Œใžใ‚Œใฎๅ›žๆ•ฐใ‚’ใƒ’ใ‚นใƒˆใ‚ฐใƒฉใƒ ใซ่กจ็คบใ—ใฆใใ ใ•ใ„ใ€‚ ใƒ’ใƒณใƒˆ๏ผšใ‚ชใƒ—ใ‚ทใƒงใƒณ bins=11, range=(1.5, 12.5) ใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ใจใใ‚Œใ„ใซๆใ‘ใพใ™ใ€‚
dice = randint(1,7,[1000,2]) sums = [a+b for (a,b) in dice] plt.hist(sums, bins=11, range=(1.5, 12.5))
Solutions/03-Random Numbers-solution.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
(5) For the 10 evenly spaced points data_x = np.linspace(0,1,10) in the range 0 โ‰ฆ x โ‰ฆ 1, create an array holding the values of sin(2ฯ€x) and store it in the variable data_y. Then create an array obtained by adding, to each value in data_y, random noise drawn from a normal distribution with standard deviation 0.3, store it in the variable data_t, and display (data_x, data_t) as a scatter plot.
from numpy.random import normal data_x = np.linspace(0,1,10) data_y = np.sin(2*np.pi*data_x) data_t = data_y + normal(loc=0, scale=0.3, size=len(data_y)) plt.scatter(data_x, data_t)
Solutions/03-Random Numbers-solution.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper. Customize your plot to follow Tufte's principles of visualization. Customize the box, grid, spines, and ticks to match the requirements of this data. Pick the number of bins for the histogram appropriately.
mass = data[:2] assert True # leave for grading
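The cell above is only a placeholder. A possible sketch of the requested histogram is shown below; it assumes data is a NumPy array with the planetary masses in its third column, which is a guess since the actual column layout is not shown in this excerpt:

```python
import matplotlib.pyplot as plt

mass = data[:, 2]  # hypothetical: masses assumed to sit in column 2 of `data`

fig, ax = plt.subplots()
ax.hist(mass, bins=30, color='0.4')  # bin count picked by eye, tune for the data

# Tufte-style cleanup: drop the box and extra ticks, keep only what carries data
for spine in ('top', 'right'):
    ax.spines[spine].set_visible(False)
ax.tick_params(top=False, right=False)
ax.set_xlabel('Planetary mass')
ax.set_ylabel('Count')
plt.show()
```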
assignments/assignment04/MatplotlibEx02.ipynb
AaronCWong/phys202-2015-work
mit
Next grab the simple variables out of the data we have (attaching correct units), and put them into a dictionary that we will hand the plotting function later:
# This is our container for the data data = dict() # Copy out to stage everything together. In an ideal world, this would happen on # the data reading side of things, but we're not there yet. data['longitude'] = data_arr['lon'] data['latitude'] = data_arr['lat'] data['air_temperature'] = data_arr['air_temperature'] * units.degC data['dew_point_temperature'] = data_arr['dew_point_temperature'] * units.degC data['air_pressure_at_sea_level'] = data_arr['slp'] * units('mbar')
v0.5/_downloads/Station_Plot_with_Layout.ipynb
metpy/MetPy
bsd-3-clause
The payoff
# Change the DPI of the resulting figure. Higher DPI drastically improves the # look of the text rendering plt.rcParams['savefig.dpi'] = 255 # Create the figure and an axes set to the projection fig = plt.figure(figsize=(20, 10)) ax = fig.add_subplot(1, 1, 1, projection=proj) # Add some various map elements to the plot to make it recognizable ax.add_feature(feat.LAND, zorder=-1) ax.add_feature(feat.OCEAN, zorder=-1) ax.add_feature(feat.LAKES, zorder=-1) ax.coastlines(resolution='110m', zorder=2, color='black') ax.add_feature(state_boundaries, edgecolor='black') ax.add_feature(feat.BORDERS, linewidth='2', edgecolor='black') # Set plot bounds ax.set_extent((-118, -73, 23, 50)) # # Here's the actual station plot # # Start the station plot by specifying the axes to draw on, as well as the # lon/lat of the stations (with transform). We also the fontsize to 12 pt. stationplot = StationPlot(ax, data['longitude'], data['latitude'], transform=ccrs.PlateCarree(), fontsize=12) # The layout knows where everything should go, and things are standardized using # the names of variables. So the layout pulls arrays out of `data` and plots them # using `stationplot`. simple_layout.plot(stationplot, data) plt.show()
v0.5/_downloads/Station_Plot_with_Layout.ipynb
metpy/MetPy
bsd-3-clause
or instead, a custom layout can be used:
# Just winds, temps, and dewpoint, with colors. Dewpoint and temp will be plotted # out to Farenheit tenths. Extra data will be ignored custom_layout = StationPlotLayout() custom_layout.add_barb('eastward_wind', 'northward_wind', units='knots') custom_layout.add_value('NW', 'air_temperature', fmt='.1f', units='degF', color='darkred') custom_layout.add_value('SW', 'dew_point_temperature', fmt='.1f', units='degF', color='darkgreen') # Also, we'll add a field that we don't have in our dataset. This will be ignored custom_layout.add_value('E', 'precipitation', fmt='0.2f', units='inch', color='blue') # Create the figure and an axes set to the projection fig = plt.figure(figsize=(20, 10)) ax = fig.add_subplot(1, 1, 1, projection=proj) # Add some various map elements to the plot to make it recognizable ax.add_feature(feat.LAND, zorder=-1) ax.add_feature(feat.OCEAN, zorder=-1) ax.add_feature(feat.LAKES, zorder=-1) ax.coastlines(resolution='110m', zorder=2, color='black') ax.add_feature(state_boundaries, edgecolor='black') ax.add_feature(feat.BORDERS, linewidth='2', edgecolor='black') # Set plot bounds ax.set_extent((-118, -73, 23, 50)) # # Here's the actual station plot # # Start the station plot by specifying the axes to draw on, as well as the # lon/lat of the stations (with transform). We also the fontsize to 12 pt. stationplot = StationPlot(ax, data['longitude'], data['latitude'], transform=ccrs.PlateCarree(), fontsize=12) # The layout knows where everything should go, and things are standardized using # the names of variables. So the layout pulls arrays out of `data` and plots them # using `stationplot`. custom_layout.plot(stationplot, data) plt.show()
v0.5/_downloads/Station_Plot_with_Layout.ipynb
metpy/MetPy
bsd-3-clause
Overview Pre-processes COMPAS dataset: Download the COMPAS dataset from: https://github.com/propublica/compas-analysis/blob/master/compas-scores-two-years.csv and save it in the ./group_agnostic_fairness/data/compas folder. Input: ./group_agnostic_fairness/data/compas/compas-scores-two-years.csv Outputs: train.csv, test.csv, mean_std.json, vocabulary.json, IPS_exampleweights_with_label.json, IPS_exampleweights_without_label.json
pd.options.display.float_format = '{:,.2f}'.format dataset_base_dir = './group_agnostic_fairness/data/compas/' dataset_file_name = 'compas-scores-two-years.csv'
group_agnostic_fairness/data_utils/CreateCompasDatasetFiles.ipynb
google-research/google-research
apache-2.0
Processing original dataset
file_path = os.path.join(dataset_base_dir,dataset_file_name) with open(file_path, "r") as file_name: temp_df = pd.read_csv(file_name) # Columns of interest columns = ['juv_fel_count', 'juv_misd_count', 'juv_other_count', 'priors_count', 'age', 'c_charge_degree', 'c_charge_desc', 'age_cat', 'sex', 'race', 'is_recid'] target_variable = 'is_recid' target_value = 'Yes' # Drop duplicates temp_df = temp_df[['id']+columns].drop_duplicates() df = temp_df[columns].copy() # Convert columns of type ``object`` to ``category`` df = pd.concat([ df.select_dtypes(include=[], exclude=['object']), df.select_dtypes(['object']).apply(pd.Series.astype, dtype='category') ], axis=1).reindex_axis(df.columns, axis=1) # Binarize target_variable df['is_recid'] = df.apply(lambda x: 'Yes' if x['is_recid']==1.0 else 'No', axis=1).astype('category') # Process protected-column values race_dict = {'African-American':'Black','Caucasian':'White'} df['race'] = df.apply(lambda x: race_dict[x['race']] if x['race'] in race_dict.keys() else 'Other', axis=1).astype('category') df.head()
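The overview lists train.csv and test.csv among the outputs, but the split itself is not part of this excerpt. A rough sketch of that step is shown below; the 80/20 ratio, the random seed, and the CSV options are assumptions, not the original pipeline's settings:

```python
# Hypothetical split: ratio, seed, and CSV options are placeholders
train_df = df.sample(frac=0.8, random_state=42)
test_df = df.drop(train_df.index)

train_df.to_csv(os.path.join(dataset_base_dir, 'train.csv'), index=False)
test_df.to_csv(os.path.join(dataset_base_dir, 'test.csv'), index=False)
```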
group_agnostic_fairness/data_utils/CreateCompasDatasetFiles.ipynb
google-research/google-research
apache-2.0
1. A Quick Introduction to Cython Cython is a compiler and a programming language used to generate C extension modules for Python. The Cython language is a Python/C creole which is essentially Python with some additional keywords for specifying static data types. It looks something like this: cython def cython_sum(int n): cdef float s = 0.0 cdef int i for i in range(n): s += i return s The Cython compiler transforms this code into a "flavor" of C specific to Python extension modules. This C code is then compiled into a binary file that can be imported and used just like a regular Python module -- the difference being that the functions you use from that module can potentially be much faster and more efficient than an equivalent pure Python implementation. Aside from writing Cython code for computations, Cython is commonly used for writing wrappers around existing C code so that the functions therein can be made available in an extension module as described above. We will use this technique to make the SymPy-generated C code accessible to Python for use in SciPy's odeint. Example As a quick demonstration of what Cython can offer, we'll walk through a simple example of generating numbers in the Fibonacci sequence. If you're not familiar with it already, the sequence is initialized with $F_0 = 0$ and $F_1 = 1$, then the remaining terms are defined recursively by: $$ F_i = F_{i-1} + F_{i-2} $$ Our objective is to write a function that computes the $n$-th Fibonacci number. Let's start by writing a simple iterative solution in pure Python.
def python_fib(n): a = 0.0 b = 1.0 for i in range(n): tmp = a a = a + b b = tmp return a [python_fib(i) for i in range(10)]
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Let's see how long it takes to compute the 100th Fibonacci number.
%timeit python_fib(100)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Now let's implement the same thing with Cython. Since Cython is essentially "Python with types," it is often fairly easy to make the move and see improvements in speed. It does come at the cost, however, of a separate compilation step. There are several ways to go about the compilation process, and in many cases, Cython's tooling makes it fairly simple. For example, Jupyter notebooks can make use of a %%cython magic command that will do all of the compilation in the background for us. To make use of it, we need to load the cython extension.
%load_ext cython
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Now we can write a Cython function. Note: the --annotate (or -a) flag of the %%cython magic command will produce an interactive annotated printout of the Cython code, allowing us to see the C code that is generated.
%%cython def cython_fib(int n): cdef double a = 0.0 cdef double b = 1.0 cdef double tmp for i in range(n): tmp = a a = a + b b = tmp return a %timeit cython_fib(100)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
To see a bit more about writing Cython and its potential performance benefits, see this Cython examples notebook. Even better, check out Kurt Smith's Cython tutorial which is happening at the same time as this tutorial. 2. Generating C Code with SymPy's codegen() Our main goal in using Cython is to wrap SymPy-generated C code into a Python extension module so that we can call the fast compiled numerical routines from Python. SymPy's codegen function takes code printing a step further: it wraps a snippet of code that numerically evaluates an expression with a function, and puts that function into the context of a file that is fully ready-to-compile code. Here we'll revisit the water radiolysis system, with the aim of numerically computing the right hand side of the system of ODEs and integrating using SciPy's odeint. Recall that this system looks like: $$ \begin{align} \frac{dy_0(t)}{dt} &= f_0\left(y_0,\,y_1,\,\dots,\,y_{13},\,t\right) \\ &\vdots \\ \frac{dy_{13}(t)}{dt} &= f_{13}\left(y_0,\,y_1,\,\dots,\,y_{13},\,t\right) \end{align} $$ where we are representing our state variables $y_0,\,y_1,\dots,y_{13}$ as a vector $\mathbf{y}(t)$ that we called states in our code, and the collection of functions on the right hand side $\mathbf{f}(\mathbf{y}(t))$ we called rhs_of_odes. Start by importing the system of ODEs and the matrix of state variables.
from scipy2017codegen.chem import load_large_ode rhs_of_odes, states = load_large_ode() rhs_of_odes[0]
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Now we'll use codegen (under sympy.utilities.codegen) to output C source and header files which can compute the right hand side (RHS) of the ODEs numerically, given the current values of our state variables. Here we'll import it and show the documentation:
from sympy.utilities.codegen import codegen #codegen?
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
We just have one expression we're interested in computing, and that is the matrix expression representing the derivatives of our state variables with respect to time: rhs_of_odes. What we want codegen to do is create a C function that takes in the current values of the state variables and gives us back each of the derivatives.
[(cf, cs), (hf, hs)] = codegen(('c_odes', rhs_of_odes), language='c')
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Note that we've just expanded the outputs into individual variables so we can access the generated code easily. codegen gives us back the .c filename and its source code in a tuple, and the .h filename and its source in another tuple. Let's print the source code.
print(cs)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
There are several things here worth noting: (1) the state variables are passed in individually; (2) the state variables in the function signature are out of order; (3) the output array is passed in as a pointer, like in our Fibonacci sequence example, but it has an auto-generated name. Let's address the first issue first. Similarly to what we did in the C printing exercises, let's use a MatrixSymbol to represent our state vector instead of a matrix of individual state variable symbols (i.e. y[0] instead of y0). First, create the MatrixSymbol object that is the same shape as our states matrix.
y = sym.MatrixSymbol('y', *states.shape)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Now we need to replace the use of y0, y1, etc. in our rhs_of_odes matrix with the elements of our new state vector (e.g. y[0], y[1], etc.). We saw how to do this already in the previous notebook. Start by forming a mapping from y0 -&gt; y[0, 0], y1 -&gt; y[1, 0], etc.
state_array_map = dict(zip(states, y)) state_array_map
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Now replace the symbols in rhs_of_odes according to the mapping. We'll call it rhs_of_odes_ind and use that from now on.
rhs_of_odes_ind = rhs_of_odes.xreplace(state_array_map) rhs_of_odes_ind[0]
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Exercise: use codegen again, but this time with rhs_of_odes_ind which makes use of a state vector rather than a container of symbols. Check out the resulting code. What is different about the function signature? python [(cf, cs), (hf, hs)] = codegen(???) Solution | | | | | | | | | | v
[(cf, cs), (hf, hs)] = codegen(('c_odes', rhs_of_odes_ind), language='c') print(cs)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
So by re-writing our expression in terms of a MatrixSymbol rather than individual symbols, the function signature of the generated code is cleaned up greatly. However, we still have the issue of the auto-generated output variable name. To fix this, we can form a matrix equation rather than an expression. The name given to the symbol on the left hand side of the equation will then be used for our output variable name. We'll start by defining a new MatrixSymbol that will represent the left hand side of our equation -- the derivatives of each state variable.
dY = sym.MatrixSymbol('dY', *y.shape)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Exercise: form an equation using sym.Eq to equate the two sides of our system of differential equations, then use this as the expression in codegen. Print out just the header source to see the function signature. What is the output argument called now? python ode_eq = sym.Eq(???) [(cf, cs), (hf, hs)] = codegen(???) print(???) Solution | | | | | | | | | | v
ode_eq = sym.Eq(dY, rhs_of_odes_ind) [(cf, cs), (hf, hs)] = codegen(('c_odes', ode_eq), language='c') print(hs)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Now we see that the c_odes function signature is nice and clean. We pass it a pointer to an array representing the current values of all of our state variables and a pointer to an array that we want to fill with the derivatives of each of those state variables. If you're not familiar with C and pointers, you just need to know that it is idiomatic in C to preallocate a block of memory representing an array, then pass the location of that memory (and usually the number of elements it can hold), rather than passing the array itself to/from a function. For our purposes, this is as complicated as pointers will get. Just so we can compile this code and use it, we'll re-use the codegen call above with to_files=True so the .c and .h files are actually written to the filesystem, rather than having their contents returned in a string.
codegen(('c_odes', ode_eq), language='c', to_files=True)
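As a quick illustration (not part of the original notebook), we can check that the two files landed in the working directory:

```python
import os

# codegen(..., to_files=True) should have written these next to the notebook
for fname in ('c_odes.c', 'c_odes.h'):
    print(fname, os.path.exists(fname))
```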
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
3. Wrapping the Generated Code with Cython Now we want to wrap the function that was generated c_odes with a Cython function so we can generate an extension module and call that function from Python. Wrapping a set of C functions involves writing a Cython script that specifies the Python interface to the C functions. This script must do two things: specify the function signatures as found in the C source implement the Python interface to the C functions by wrapping them The build system of Cython is able to take the Cython wrapper source code as well as the C library source code and compile/link things together into a Python extension module. We will write our wrapper code in a cell making use of the magic command %%cython_pyximport, which does a few things for us: writes the contents of the cell to a Cython source file (modname.pyx) looks for a modname.pyxbld file for instructions on how to build things builds everything into an extension module imports the extension module, making the functions declared there available in the notebook So, it works similarly to the %%cython magic command we saw at the very beginning, but things are a bit more complicated now because we have this external library c_odes that needs to be compiled as well. Note: The pyxbld file contains code similar to what would be found in the setup.py file of a package making use of Cython code for wrapping C libraries. In either case, all that's needed is to tell setuptools/Cython: the name of the extension module we want to make the location of the Cython and C source files to be built the location of headers needed during compilation -- both our C library's headers as well as NumPy's headers We will call our extension module cy_odes, so here we'll generate a cy_odes.pyxbld file to specify how to build the module.
%%writefile cy_odes.pyxbld import numpy # module name specified by `%%cython_pyximport` magic # | just `modname + ".pyx"` # | | def make_ext(modname, pyxfilename): from setuptools.extension import Extension return Extension(modname, sources=[pyxfilename, 'c_odes.c'], include_dirs=['.', numpy.get_include()])
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Now we can write our wrapper code. To write the wrapper, we first write the function signature as specified by the C library. Then, we create a wrapper function that makes use of the C implementation and returns the result. This wrapper function becomes the interface to the compiled code, and it does not need to be identical to the C function signature. In fact, we'll make our wrapper function compliant with the odeint interface (i.e. takes a 1-dimensional array of state variable values and the current time).
%%cython_pyximport cy_odes import numpy as np cimport numpy as cnp # cimport gives us access to NumPy's C API # here we just replicate the function signature from the header cdef extern from "c_odes.h": void c_odes(double *y, double *dY) # here is the "wrapper" signature that conforms to the odeint interface def cy_odes(cnp.ndarray[cnp.double_t, ndim=1] y, double t): # preallocate our output array cdef cnp.ndarray[cnp.double_t, ndim=1] dY = np.empty(y.size, dtype=np.double) # now call the C function c_odes(<double *> y.data, <double *> dY.data) # return the result return dY
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Exercise: use np.random.randn to generate random state variable values and evaluate the right-hand-side of our ODEs with those values. python random_vals = np.random.randn(???) ??? Solution | | | | | | | | | | v
random_vals = np.random.randn(14) cy_odes(random_vals, 0) # note: any time value will do
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Now we can use odeint to integrate the equations and plot the results to check that it worked. First we need to import odeint.
from scipy.integrate import odeint
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
A couple convenience functions are provided in the scipy2017codegen package which give some reasonable initial conditions for the system and plot the state trajectories, respectively. Start by grabbing some initial values for our state variables and time values.
from scipy2017codegen.chem import watrad_init, watrad_plot y_init, t_vals = watrad_init()
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Finally we can integrate the equations using our Cython-wrapped C function and plot the results.
y_vals = odeint(cy_odes, y_init, t_vals) watrad_plot(t_vals, y_vals)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
4. Generating and Compiling a C Extension Module Automatically As yet another layer of abstraction on top of codegen, SymPy provides an autowrap function that can automatically generate a Cython wrapper for the generated C code. This greatly simplifies the process of going from a SymPy expression to numerical computation, but as we'll see, we lose a bit of flexibility compared to manually creating the Cython wrapper. Let's start by importing the autowrap function and checking out its documentation.
from sympy.utilities.autowrap import autowrap #autowrap?
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
So autowrap takes in a SymPy expression and gives us back a binary callable which evaluates the expression numerically. Let's use the Equality formed earlier to generate a function we can call to evaluate the right hand side of our system of ODEs.
auto_odes = autowrap(ode_eq, backend='cython', tempdir='./autowraptmp')
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Exercise: use the main Jupyter notebook tab to head to the temporary directory autowrap just created. Take a look at some of the files it contains. Can you map everything we did manually to the files generated? Solution | | | | | | | | | | v autowrap generates quite a few files, but we'll explicitly list a few here: wrapped_code_#.c: the same thing codegen generated wrapper_module_#.pyx: the Cython wrapper code wrapper_module_#.c: the cythonized code setup.py: specification of the Extension for how to build the extension module Exercise: just like we did before, generate some random values for the state variables and use auto_odes to compute the derivatives. Did it work like before? Hint: take a look at wrapper_module_#.pyx to see the types of the arrays being passed in / created. python random_vals = np.random.randn(???) auto_odes(???) Solution | | | | | | | | | | v
random_vals = np.random.randn(14, 1) # need a 2-dimensional vector auto_odes(random_vals)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
One advantage to wrapping the generated C code manually is that we get fine control over how the function is used from Python. That is, in our hand-written Cython wrapper we were able to specify that from the Python side, the input to our wrapper function and its return value are both 1-dimensional ndarray objects. We were also able to add in the extra argument t for the current time, making the wrapper function fully compatible with odeint. However, autowrap just sees that we have a matrix equation where each side is a 2-dimensional array with shape (14, 1). The function returned then expects the input array to be 2-dimensional and it returns a 2-dimensional array. This won't work with odeint, so we can write a simple wrapper that massages the input and output and adds an extra argument for t.
def auto_odes_wrapper(y, t): dY = auto_odes(y[:, np.newaxis]) return dY.squeeze()
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Now a 1-dimensional input works.
random_vals = np.random.randn(14) auto_odes_wrapper(random_vals, 0)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Exercise: As we have seen previously, we can analytically evaluate the Jacobian of our system of ODEs, which can be helpful in numerical integration. Compute the Jacobian of rhs_of_odes_ind with respect to y, then use autowrap to generate a function that evaluates the Jacobian numerically. Finally, write a Python wrapper called auto_jac_wrapper to make it compatible with odeint. ```python compute jacobian of rhs_of_odes_ind with respect to y ??? generate a function that computes the jacobian auto_jac = autowrap(???) def auto_jac_wrapper(y, t): return ??? ``` Test your wrapper by passing in the random_vals array from above. The shape of the result should be (14, 14). Solution | | | | | | | | | | v
jac = rhs_of_odes_ind.jacobian(y) auto_jac = autowrap(jac, backend='cython', tempdir='./autowraptmp') def auto_jac_wrapper(y, t): return auto_jac(y[:, np.newaxis]) auto_jac_wrapper(random_vals, 2).shape
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Finally, we can use our two wrapped functions in odeint and compare to our manually-written Cython wrapper result.
y_vals = odeint(auto_odes_wrapper, y_init, t_vals, Dfun=auto_jac_wrapper) watrad_plot(t_vals, y_vals)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
5. Using a Custom Printer and an External Library with autowrap As of SymPy 1.1, autowrap accepts a custom CodeGen object, which is responsible for generating the code. The CodeGen object in turn accepts a custom CodePrinter object, meaning we can use these two points of flexibility to make use of customized code printing in an autowrapped function. The following example is somewhat contrived, but the concept in general is powerful. In our set of ODEs, there are quite a few instances of $y_i^2$, where $y_i$ is one of the 14 state variables. As an example, here's the equation for $\frac{dy_3(t)}{dt}$:
rhs_of_odes[3]
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
There is a library called fastapprox that provides computational routines for things like powers, exponentials, logarithms, and a few others. These routines provide limited precision with respect to something like math.h's equivalent functions, but they offer potentially faster computation. The fastapprox library provides a function called fastpow, with the signature fastpow(float x, float p). It follows the interface of pow from math.h. In the previous notebook, we saw how to turn instances of $x^3$ into x*x*x, which is potentially quicker than pow(x, 3). Here, let's just use fastpow instead. Exercise: implement a CustomPrinter class that inherits from C99CodePrinter and overrides the _print_Pow function to make use of fastpow. Test it by instantiating the custom printer and printing a SymPy expression $x^3$. Hint: it may be helpful to run C99CodePrinter._print_Pow?? to see how it works ```python from sympy.printing.ccode import C99CodePrinter class CustomPrinter(C99CodePrinter): def _print_Pow(self, expr): ??? printer = CustomPrinter() x = sym.symbols('x') print x**3 using the custom printer ??? ``` Solution | | | | | | | | | | v
from sympy.printing.ccode import C99CodePrinter class CustomPrinter(C99CodePrinter): def _print_Pow(self, expr): return "fastpow({}, {})".format(self._print(expr.base), self._print(expr.exp)) printer = CustomPrinter() x = sym.symbols('x') printer.doprint(x**3)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Now we can create a C99CodeGen object that will make use of this printer. This object will be passed in to autowrap with the code_gen keyword argument, and autowrap will use it in the code generation process.
from sympy.utilities.codegen import C99CodeGen gen = C99CodeGen(printer=printer)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
However, for our generated code to use the fastpow function, it needs to have a #include "fastpow.h" preprocessor statement at the top. The code gen object supports this by allowing us to append preprocessor statements to its preprocessor_statements attribute.
gen.preprocessor_statements.append('#include "fastpow.h"')
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
One final issue remains, and that is telling autowrap where to find the fastapprox library headers. These header files have just been downloaded from GitHub and placed in the scipy2017codegen package, so it should be installed with the conda environment. We can find it by looking for the package directory.
import os import scipy2017codegen package_dir = os.path.dirname(scipy2017codegen.__file__) fastapprox_dir = os.path.join(package_dir, 'fastapprox')
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Finally we're ready to call autowrap. We'll just use ode_eq, the Equality we created before, pass in the custom CodeGen object, and tell autowrap where the fastapprox headers are located.
auto_odes_fastpow = autowrap(ode_eq, code_gen=gen, backend='cython', include_dirs=[fastapprox_dir], tempdir='autowraptmp_custom')
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
If we navigate to the tmp directory created, we can view the wrapped_code_#.c to see our custom printing in action. As before, we need a wrapper function for use with odeint, but aside from that, everything should be in place.
def auto_odes_fastpow_wrapper(y, t): dY = auto_odes_fastpow(y[:, np.newaxis]) return dY.squeeze() y_vals, info = odeint(auto_odes_fastpow_wrapper, y_init, t_vals, full_output=True) watrad_plot(t_vals, y_vals)
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Exercise: generate an array of random state variable values, then use this array in the auto_odes_wrapper and auto_odes_fastpow_wrapper functions. Compare their outputs. Solution | | | | | | | | | | v
random_vals = np.random.randn(14) dY1 = auto_odes_wrapper(random_vals, 0) dY2 = auto_odes_fastpow_wrapper(random_vals, 0) dY1 - dY2
notebooks/08-cythonizing.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
If you want to use the CoNLL-03 corpus, you need to download it and unpack it in your Flair data and model folder. This folder should be in your home-directory and it is named .flair. Once you have downloaded the corpus, unpack it into a folder .flair/datasets/conll_03. If you do not want to use the CoNLL-03 corpus, but rather the free W-NUT 17 corpus, you can use the Flair command: WNUT_17() If you decide to download the CoNLL-03 corpus, adapt the following code. We load the W-NUT17 corpus and down-sample it to 10% of its size:
corpus: Corpus = WNUT_17().downsample(0.1) print(corpus)
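If you did download and unpack CoNLL-03 into .flair/datasets/conll_03 as described above, loading it would look roughly like the sketch below; the exact class location can vary with your Flair version, so treat this as an assumption to verify:

```python
# Sketch: swap in CoNLL-03 instead of W-NUT 17 (assumes the corpus files are
# already unpacked under ~/.flair/datasets/conll_03 as described above)
from flair.datasets import CONLL_03

corpus: Corpus = CONLL_03()
print(corpus)
```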
notebooks/Flair Training Sequence Labeling Models.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
Declare the tag type to be predicted:
tag_type = 'ner'
notebooks/Flair Training Sequence Labeling Models.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
Create the tag-dictionary for the tag-type:
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) print(tag_dictionary)
notebooks/Flair Training Sequence Labeling Models.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
Load the embeddings:
embedding_types: List[TokenEmbeddings] = [ WordEmbeddings('glove'), # comment in this line to use character embeddings # CharacterEmbeddings(), # comment in these lines to use flair embeddings # FlairEmbeddings('news-forward'), # FlairEmbeddings('news-backward'), ] embeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types)
notebooks/Flair Training Sequence Labeling Models.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
Load and initialize the sequence tagger:
from flair.models import SequenceTagger tagger: SequenceTagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type, use_crf=True)
notebooks/Flair Training Sequence Labeling Models.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
Load and initialize the trainer:
from flair.trainers import ModelTrainer trainer: ModelTrainer = ModelTrainer(tagger, corpus)
notebooks/Flair Training Sequence Labeling Models.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
If you have a GPU (otherwise maybe tweak the batch size, etc.), run the training with 150 epochs:
trainer.train('resources/taggers/example-ner', learning_rate=0.1, mini_batch_size=32, max_epochs=150)
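Without a GPU, a smaller run keeps training time manageable; the numbers below are only illustrative and not from the original tutorial:

```python
# Illustrative CPU-friendly settings: smaller batches and far fewer epochs
trainer.train('resources/taggers/example-ner',
              learning_rate=0.1,
              mini_batch_size=16,
              max_epochs=10)
```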
notebooks/Flair Training Sequence Labeling Models.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
Plot the training curves and results:
from flair.visual.training_curves import Plotter plotter = Plotter() plotter.plot_training_curves('resources/taggers/example-ner/loss.tsv') plotter.plot_weights('resources/taggers/example-ner/weights.txt')
notebooks/Flair Training Sequence Labeling Models.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
Use the model via the predict method:
from flair.data import Sentence model = SequenceTagger.load('resources/taggers/example-ner/final-model.pt') sentence = Sentence('John lives in the Empire State Building .') model.predict(sentence) print(sentence.to_tagged_string())
notebooks/Flair Training Sequence Labeling Models.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
Read in the file, see what we're working with We'll use the read() method to get the contents of the file.
# in a with block, open the HTML file with open('mountain-goats.html', 'r') as html_file: # .read() in the contents of a file -- it'll be a string html_code = html_file.read() # print the string to see what's there print(html_code)
completed/12. Web scraping (Part 2).ipynb
ireapps/cfj-2017
mit
Parse the table with BeautifulSoup Right now, Python isn't interpreting our table as data -- it's just a string. We need to use BeautifulSoup to parse that string into data objects that Python can understand. Once the string is parsed, we'll be working with a "tree" of data that we can navigate.
with open('mountain-goats.html', 'r') as html_file: html_code = html_file.read() # use the type() function to see what kind of object `html_code` is print(type(html_code)) # feed the file's contents (the string of HTML) to BeautifulSoup # will complain if you don't specify the parser soup = BeautifulSoup(html_code, 'html.parser') # use the type() function to see what kind of object `soup` is print(type(soup))
completed/12. Web scraping (Part 2).ipynb
ireapps/cfj-2017
mit
Decide how to target the table BeautifulSoup has several methods for targeting elements -- by position on the page, by attribute, etc. Right now we just want to find the correct table.
with open('mountain-goats.html', 'r') as html_file: html_code = html_file.read() soup = BeautifulSoup(html_code, 'html.parser') # by position on the page # find_all returns a list of matching elements, and we want the second ([1]) one # song_table = soup.find_all('table')[1] # by class name # => with `find`, you can pass a dictionary of element attributes to match on # song_table = soup.find('table', {'class': 'song-table'}) # by ID # song_table = soup.find('table', {'id': 'my-cool-table'}) # by style song_table = soup.find('table', {'style': 'width: 95%;'}) print(song_table)
completed/12. Web scraping (Part 2).ipynb
ireapps/cfj-2017
mit
Looping over the table rows Let's print a list of track numbers and song titles. Look at the structure of the table -- a table has rows represented by the tag tr, and within each row there are cells represented by td tags. The find_all() method returns a list. And we know how to iterate over lists: with a for loop. Let's do that.
with open('mountain-goats.html', 'r') as html_file: html_code = html_file.read() soup = BeautifulSoup(html_code, 'html.parser') song_table = soup.find('table', {'style': 'width: 95%;'}) # find the rows in the table # slice to skip the header row song_rows = song_table.find_all('tr')[1:] # loop over the rows for row in song_rows: # get the table cells in the row song = row.find_all('td') # assign them to variables track, title, duration, artist, album = song # use the .string attribute to get the text in the cell print(track.string, title.string)
completed/12. Web scraping (Part 2).ipynb
ireapps/cfj-2017
mit
Write data to file Let's put it all together and open a file to write the data to.
with open('mountain-goats.html', 'r') as html_file, open('mountain-goats.csv', 'w') as outfile: html_code = html_file.read() soup = BeautifulSoup(html_code, 'html.parser') song_table = soup.find('table', {'style': 'width: 95%;'}) song_rows = song_table.find_all('tr')[1:] # set up a writer object writer = csv.DictWriter(outfile, fieldnames=['track', 'title', 'duration', 'artist', 'album']) writer.writeheader() for row in song_rows: # get the table cells in the row song = row.find_all('td') # assign them to variables track, title, duration, artist, album = song # write out the dictionary to file writer.writerow({ 'track': track.string, 'title': title.string, 'duration': duration.string, 'artist': artist.string, 'album': album.string })
completed/12. Web scraping (Part 2).ipynb
ireapps/cfj-2017
mit
Toy model with two basins
plot_traj([],[],figsize=(6,5))
test/Clustering_test.ipynb
ZuckermanLab/NMpathAnalysis
gpl-3.0
Generating an MC trajectory (continuous ensemble)
mc_traj = mc_simulation2D(500000) my_ensemble = Ensemble([mc_traj])
test/Clustering_test.ipynb
ZuckermanLab/NMpathAnalysis
gpl-3.0
Discrete Ensemble and Transition Matrix The mapping function divides each dimension into 12 bins, for a total of 144 bins.
discrete_ens = DiscreteEnsemble.from_ensemble(my_ensemble, mapping_function2D) # Transition Matrix K = discrete_ens._mle_transition_matrix(N*N,prior_counts=1e-6)
test/Clustering_test.ipynb
ZuckermanLab/NMpathAnalysis
gpl-3.0
Agglomerative Clustering The points with the same color belong to the same cluster, only the clusters with size > 1 are shown.
t_min_list=[] t_max_list=[] t_AB_list=[] n_clusters = [135, 130, 125, 120, 115, 110, 105, 100, 95, 90, 85, 80, 75, 70] for n in n_clusters: big_clusters=[] big_clusters_index =[] clusters, t_min, t_max, clustered_tmatrix = kinetic_clustering_from_tmatrix(K, n, verbose=False) t_min_list.append(t_min) t_max_list.append(t_max) for i, cluster in enumerate(clusters): if len(cluster) > 1: big_clusters.append(cluster) big_clusters_index.append(i) n_big = len(big_clusters) if n_big > 1: tAB = markov_commute_time(clustered_tmatrix,[big_clusters_index[0]],[big_clusters_index[1]] ) else: tAB = 0.0 t_AB_list.append(tAB) discrete = [True for i in range(n_big)] print("{} Clusters, t_cut: {:.2f}tau, t_max: {:.2e}tau, tAB: {:.2f}tau".format(n, t_min, t_max, tAB)) plot_traj([ [big_clusters[i],[]] for i in range(n_big) ], discrete, std = 0.00002, alpha=0.3, justpoints=True, figsize=(3,3)) plt.plot(n_clusters, t_min_list, label="t_cut") plt.plot(n_clusters, t_AB_list, label="t_AB") plt.xlabel("Number of Clusters") plt.ylabel("t (tau)") #plt.text(110, 4000,"Clustering", fontsize=14) plt.axis([70,135,0,9000]) #plt.arrow(125, 3600, -30, 0,shape='left', lw=2, length_includes_head=True) plt.title("Commute times vs Number of Clusters") plt.legend() plt.show() m_ratio = [t_AB_list[i]/t_min_list[i] for i in range(len(t_min_list))] plt.plot(n_clusters, m_ratio, label="t_AB / t_cut", color="red") plt.xlabel("Number of Clusters") plt.ylabel("t_AB / t_cut") plt.axis([70,135,0,65]) plt.legend() plt.show() m_ratio2 = [t_max_list[i]/t_min_list[i] for i in range(len(t_min_list))] plt.plot(n_clusters, m_ratio2, label="t_max / t_cut", color="green") plt.xlabel("Number of Clusters") plt.ylabel("t_max / t_cut") #plt.axis([70,135,0,1000]) plt.legend() plt.show()
test/Clustering_test.ipynb
ZuckermanLab/NMpathAnalysis
gpl-3.0
Create a session. Note the api endpoint, lab-services.ovation.io for Ovation Service Lab.
s = session.connect(input('Email: '), api='https://lab-services.ovation.io')
examples/qc-activity-example.ipynb
physion/ovation-python
gpl-3.0
Create a Quality Check (QC) activity A QC activity determines the status of results for each Sample in a Workflow. Normally, QC activities are handled in the web application, but you can submit a new activity with the necessary information to complete the QC programmatically. First, we'll need a workflow and the label of the QC activity WorkflowActivity:
workflow_id = input('Workflow ID: ') qc_activity_label = input('QC activity label: ')
examples/qc-activity-example.ipynb
physion/ovation-python
gpl-3.0
Next, we'll get the WorkflowSampleResults for the batch. Each WorkflowSampleResult contains the parsed data for a single Sample within the batch. Each WorkflowSampleResult has a result_type that distinguishes each kind of data.
result_type = input('Result type: ') workflow_sample_results = s.get(s.path('workflow_sample_results'), params={'workflow_id': workflow_id, 'result_type': result_type}) workflow_sample_results
examples/qc-activity-example.ipynb
physion/ovation-python
gpl-3.0
Within each WorkflowSampleResult you should see a result object containing records for each assay. In most cases, the results parser created a record for each line in an uploaded tabular (csv or tab-delimited) file. When that record has an entry identifying the sample and an entry identifying the assay, the parser places that record into the WorkflowSampleResult for the corresponding Workflow Sample, result type, and assay. If more than one record matches this Sample > Result type > Assay, it will be appended to the records for that sample, result type, and assay. A QC activity updates the status of assays and entire Workflow Sample Results. Each assay may receive a status ("accepted", "rejected", or "repeat") indicating the QC outcome of that assay for a particular sample. In addition, the WorkflowSampleResult has a global status indicating the overall QC outcome for that sample and result type. Individual assay statuses may be used on repeat to determine which assays need to be repeated. The global status determines how the sample is routed following QC. In fact, there can be multiple routing options for each status (e.g. an "Accept and process for workflow A" and an "Accept and process for workflow B" option). Ovation internally uses a routing value to indicate (uniquely) which routing option to choose from the configuration. In many cases routing is the same as status (but not always). WorkflowSampleResult and assay statuses are set (overriding any existing status) by creating a QC activity, passing the updated status for each workflow sample result and contained assay(s). In this example, we'll randomly choose statuses for each of the workflow samples above:
import random WSR_STATUS = ["accepted", "rejected", "repeat"] ASSAY_STATUS = ["accepted", "rejected"] qc_results = [] for wsr in workflow_sample_results: assay_results = {} for assay_name, assay in wsr.result.items(): assay_results[assay_name] = {"status": random.choice(ASSAY_STATUS)} wsr_status = random.choice(WSR_STATUS) result = {'id': wsr.id, 'result_type': wsr.result_type, 'status': wsr_status, 'routing': wsr_status, 'result': assay_results} qc_results.append(result)
examples/qc-activity-example.ipynb
physion/ovation-python
gpl-3.0
The activity data we POST will look like this: {"workflow_sample_results": [{"id": WORKFLOW_SAMPLE_RESULT_ID, "result_type": RESULT_TYPE, "status":"accepted"|"rejected"|"repeat", "routing":"accepted", "result":{ASSAY:{"status":"accepted"|"rejected"}}}, ...]}}
qc = workflows.create_activity(s, workflow_id, qc_activity_label, activity={'workflow_sample_results': qc_results, 'custom_attributes': {} # Always an empty dictionary for QC activities })
examples/qc-activity-example.ipynb
physion/ovation-python
gpl-3.0
์ผ€๋ผ์Šค ๋ชจ๋ธ๋กœ ์ถ”์ •๊ธฐ ์ƒ์„ฑํ•˜๊ธฐ <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/tutorials/estimator/keras_model_to_estimator"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org์—์„œ ๋ณด๊ธฐ</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/estimator/keras_model_to_estimator.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">๊ตฌ๊ธ€ ์ฝ”๋žฉ(Colab)์—์„œ ์‹คํ–‰ํ•˜๊ธฐ</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/estimator/keras_model_to_estimator.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">๊นƒํ—ˆ๋ธŒ(GitHub) ์†Œ์Šค ๋ณด๊ธฐ</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/estimator/keras_model_to_estimator.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td> </table> ๊ฒฝ๊ณ : ์ถ”์ •๊ธฐ๋Š” ์ƒˆ ์ฝ”๋“œ์— ๊ถŒ์žฅ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. Estimator๋Š” v1.Session ์Šคํƒ€์ผ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ฝ”๋“œ๋Š” ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ž‘์„ฑํ•˜๊ธฐ๊ฐ€ ๋” ์–ด๋ ต๊ณ  ํŠนํžˆ TF 2 ์ฝ”๋“œ์™€ ๊ฒฐํ•ฉ๋  ๋•Œ ์˜ˆ๊ธฐ์น˜ ์•Š๊ฒŒ ์ž‘๋™ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—์Šคํ‹ฐ๋ฉ”์ดํ„ฐ๋Š” ํ˜ธํ™˜์„ฑ ๋ณด์žฅ ์ด ์ ์šฉ๋˜์ง€๋งŒ ๋ณด์•ˆ ์ทจ์•ฝ์  ์™ธ์—๋Š” ์ˆ˜์ • ์‚ฌํ•ญ์ด ์ œ๊ณต๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ž์„ธํ•œ ๋‚ด์šฉ์€ ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ ๊ฐ€์ด๋“œ ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๊ฐœ์š” TensorFlow Estimator๋Š” TensorFlow์—์„œ ์ง€์›๋˜๋ฉฐ ์‹ ๊ทœ ๋ฐ ๊ธฐ์กด tf.keras ๋ชจ๋ธ์—์„œ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž์Šต์„œ์—๋Š” ํ•ด๋‹น ํ”„๋กœ์„ธ์Šค์˜ ์™„์ „ํ•˜๊ณ  ์ตœ์†Œํ•œ์˜ ์˜ˆ๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ฃผ์˜: ์ผ€๋ผ์Šค ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•œ๋‹ค๋ฉด, ์ถ”์ •๋Ÿ‰์„ ๋ณ€ํ™˜ํ•˜์ง€ ์•Š๊ณ  tf.distribute strategies๊ณผ ํ•จ๊ป˜ ์ง์ ‘ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ, model_to_estimators๋Š” ๋” ์ด์ƒ ๊ถŒ์žฅ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์„ค์ •
import tensorflow as tf
import numpy as np
import tensorflow_datasets as tfds
site/ko/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
๊ฐ„๋‹จํ•œ ์ผ€๋ผ์Šค ๋ชจ๋ธ ๋งŒ๋“ค๊ธฐ ์ผ€๋ผ์Šค์—์„œ๋Š” ์—ฌ๋Ÿฌ ๊ฒน์˜ ์ธต์„ ์Œ“์•„ ๋ชจ๋ธ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ๋ชจ๋ธ์€ ์ธต์˜ ๊ทธ๋ž˜ํ”„๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ด ์ค‘ ๊ฐ€์žฅ ํ”ํ•œ ํ˜•ํƒœ๋Š” ์ ์ธตํ˜• ๊ตฌ์กฐ๋ฅผ ๊ฐ–๊ณ  ์žˆ๋Š” tf.keras.Sequential ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ๊ฐ„๋‹จํ•œ ์™„์ „ํžˆ ์—ฐ๊ฒฐ ๋„คํŠธ์›Œํฌ(๋‹ค์ธต ํผ์…‰ํŠธ๋ก )๋ฅผ ๋งŒ๋“ค์–ด๋ด…์‹œ๋‹ค:
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(3)
])
site/ko/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•œ ํ›„, ๋ชจ๋ธ ๊ตฌ์กฐ๋ฅผ ์š”์•ฝํ•ด ์ถœ๋ ฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer='adam')
model.summary()
site/ko/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Create an input function

Use the Datasets API to work with large datasets or to train on multiple devices. TensorFlow Estimators need to control when and how their input pipeline is built. To allow this, they require an "input function", or input_fn. The Estimator will call this function with no arguments. The input_fn must return a tf.data.Dataset object.
def input_fn():
    split = tfds.Split.TRAIN
    dataset = tfds.load('iris', split=split, as_supervised=True)
    dataset = dataset.map(lambda features, labels: ({'dense_input': features}, labels))
    dataset = dataset.batch(32).repeat()
    return dataset
site/ko/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
input_fn์ด ์ž˜ ๊ตฌํ˜„๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ด๋ด…๋‹ˆ๋‹ค.
for features_batch, labels_batch in input_fn().take(1):
    print(features_batch)
    print(labels_batch)
site/ko/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
tf.keras.model์„ ์ถ”์ •๊ธฐ๋กœ ๋ณ€ํ™˜ํ•˜๊ธฐ tf.keras.model์€ tf.keras.estimator.model_to_estimator ํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•ด tf.estimator.Estimator ๊ฐ์ฒด๋กœ ๋ณ€ํ™˜ํ•จ์œผ๋กœ์จ tf.estimator API๋ฅผ ํ†ตํ•ด ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
import tempfile

model_dir = tempfile.mkdtemp()
keras_estimator = tf.keras.estimator.model_to_estimator(
    keras_model=model, model_dir=model_dir)
site/ko/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Train the Estimator, and then evaluate it.
keras_estimator.train(input_fn=input_fn, steps=500)
eval_result = keras_estimator.evaluate(input_fn=input_fn, steps=10)
print('Eval result: {}'.format(eval_result))
site/ko/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
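As the warning at the top of this tutorial notes, converting to an Estimator is no longer recommended when you already have a Keras model. For comparison, here is a minimal sketch of training the same model directly with Keras, reusing the model and input_fn defined above (the step and epoch counts are arbitrary placeholders; the 'dense_input' dictionary key is assumed to match the model's input name, as it does for the conversion above):

```python
# Train and evaluate the Keras model directly, without converting to an Estimator
model.fit(input_fn(), steps_per_epoch=50, epochs=10)
loss = model.evaluate(input_fn(), steps=10)
print('Eval loss: {}'.format(loss))
```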
Setting up the notebook's environment Install AI Platform Pipelines client library For AI Platform Pipelines (Unified), which is in the Experimental stage, you need to download and install the AI Platform client library on top of the KFP and TFX SDKs that were installed as part of the initial environment setup.
AIP_CLIENT_WHEEL = 'aiplatform_pipelines_client-0.1.0.caip20201123-py3-none-any.whl'
AIP_CLIENT_WHEEL_GCS_LOCATION = f'gs://cloud-aiplatform-pipelines/releases/20201123/{AIP_CLIENT_WHEEL}'

!gsutil cp {AIP_CLIENT_WHEEL_GCS_LOCATION} {AIP_CLIENT_WHEEL}

%pip install {AIP_CLIENT_WHEEL}
retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/analytics-componentized-patterns
apache-2.0
Restart the kernel.
import IPython

app = IPython.Application.instance()
app.kernel.do_shutdown(True)
retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/analytics-componentized-patterns
apache-2.0
Import notebook dependencies
import logging

import tfx
import tensorflow as tf

from aiplatform.pipelines import client
from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner

print('TFX Version: ', tfx.__version__)
retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/analytics-componentized-patterns
apache-2.0
Configure GCP environment

If you're on AI Platform Notebooks, authenticate with Google Cloud before running the next section by running gcloud auth login in the Terminal window (which you can open via File > New in the menu). You only need to do this once per notebook instance.

Set the following constants to the values reflecting your environment:

- PROJECT_ID - your GCP project ID
- PROJECT_NUMBER - your GCP project number
- BUCKET_NAME - the name of the GCS bucket that will be used to host artifacts created by the pipeline
- PIPELINE_NAME_SUFFIX - a suffix appended to the standard pipeline name. You can change it to differentiate between pipelines from different users in a classroom environment
- API_KEY - a GCP API key
- VPC_NAME - the name of the GCP VPC to use for the index deployments
- REGION - a compute region. Don't change the default - us-central1 - while the ANN Service is in the Experimental stage
PROJECT_ID = 'jk-mlops-dev'       # <---CHANGE THIS
PROJECT_NUMBER = '895222332033'   # <---CHANGE THIS
API_KEY = 'AIzaSyBS_RiaK3liaVthTUD91XuPDKIbiwDFlV8'  # <---CHANGE THIS
USER = 'user'                     # <---CHANGE THIS
BUCKET_NAME = 'jk-ann-staging'    # <---CHANGE THIS
VPC_NAME = 'default'              # <---CHANGE THIS IF USING A DIFFERENT VPC

REGION = 'us-central1'

PIPELINE_NAME = "ann-pipeline-{}".format(USER)
PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(BUCKET_NAME, PIPELINE_NAME)

PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin

print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))
retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/analytics-componentized-patterns
apache-2.0
Defining custom components

In this section of the notebook you define a set of custom TFX components that encapsulate BQ, BQML, and ANN Service calls. The components are TFX Custom Python function components. Each component is created as a separate Python module. You also create a couple of helper modules that encapsulate Python functions and classes used across the custom components.

Remove files created in previous executions of the notebook
component_folder = 'bq_components'

if tf.io.gfile.exists(component_folder):
    print('Removing older file')
    tf.io.gfile.rmtree(component_folder)
print('Creating component folder')
tf.io.gfile.mkdir(component_folder)

%cd {component_folder}
retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/analytics-componentized-patterns
apache-2.0
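The modules below all follow the same TFX Python-function component pattern: a plain Python function decorated with @component, with inputs and outputs declared through Parameter, InputArtifact, and OutputArtifact annotations. A minimal sketch of that pattern (the component name and the property it sets are illustrative only, not used by the pipeline):

```python
# Minimal shape of a TFX Python-function component (illustrative only)
from tfx.dsl.component.experimental.decorators import component
from tfx.dsl.component.experimental.annotations import OutputArtifact, Parameter
from tfx.types.experimental.simple_artifacts import Dataset


@component
def example_component(
        some_setting: Parameter[str],
        output_data: OutputArtifact[Dataset]):
    # Do the work, then record where the result lives in the artifact's metadata
    output_data.set_string_custom_property('table_name', some_setting)
```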
Define custom types for ANN service artifacts This module defines a couple of custom TFX artifacts to track ANN Service indexes and index deployments.
%%writefile ann_types.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Custom types for managing ANN artifacts."""

from tfx.types import artifact


class ANNIndex(artifact.Artifact):
    TYPE_NAME = 'ANNIndex'


class DeployedANNIndex(artifact.Artifact):
    TYPE_NAME = 'DeployedANNIndex'
retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/analytics-componentized-patterns
apache-2.0
Create a wrapper around ANN Service REST API

This module provides a convenience wrapper around the ANN Service REST API. While in the Experimental stage, the ANN Service does not have an official Python client SDK, nor is it supported by the Google Discovery API.
%%writefile ann_service.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Helper classes encapsulating ANN Service REST API."""

import datetime
import logging
import json
import time

import google.auth
import google.auth.transport.requests  # explicit import so AuthorizedSession is available


class ANNClient(object):
    """Base ANN Service client."""

    def __init__(self, project_id, project_number, region):
        credentials, _ = google.auth.default()
        self.authed_session = google.auth.transport.requests.AuthorizedSession(credentials)
        self.ann_endpoint = f'{region}-aiplatform.googleapis.com'
        self.ann_parent = f'https://{self.ann_endpoint}/v1alpha1/projects/{project_id}/locations/{region}'
        self.project_id = project_id
        self.project_number = project_number
        self.region = region

    def wait_for_completion(self, operation_id, message, sleep_time):
        """Waits for a completion of a long running operation."""

        api_url = f'{self.ann_parent}/operations/{operation_id}'

        start_time = datetime.datetime.utcnow()
        while True:
            response = self.authed_session.get(api_url)
            if response.status_code != 200:
                raise RuntimeError(response.json())
            if 'done' in response.json().keys():
                logging.info('Operation completed!')
                break
            elapsed_time = datetime.datetime.utcnow() - start_time
            logging.info('{}. Elapsed time since start: {}.'.format(
                message, str(elapsed_time)))
            time.sleep(sleep_time)

        return response.json()['response']


class IndexClient(ANNClient):
    """Encapsulates a subset of control plane APIs that manage ANN indexes."""

    def __init__(self, project_id, project_number, region):
        super().__init__(project_id, project_number, region)

    def create_index(self, display_name, description, metadata):
        """Creates an ANN Index."""

        api_url = f'{self.ann_parent}/indexes'

        request_body = {
            'display_name': display_name,
            'description': description,
            'metadata': metadata
        }

        response = self.authed_session.post(api_url, data=json.dumps(request_body))
        if response.status_code != 200:
            raise RuntimeError(response.text)
        operation_id = response.json()['name'].split('/')[-1]

        return operation_id

    def list_indexes(self, display_name=None):
        """Lists all indexes with a given display name or all indexes
        if the display_name is not provided."""

        if display_name:
            api_url = f'{self.ann_parent}/indexes?filter=display_name="{display_name}"'
        else:
            api_url = f'{self.ann_parent}/indexes'

        response = self.authed_session.get(api_url).json()

        return response['indexes'] if response else []

    def delete_index(self, index_id):
        """Deletes an ANN index."""

        api_url = f'{self.ann_parent}/indexes/{index_id}'
        response = self.authed_session.delete(api_url)
        if response.status_code != 200:
            raise RuntimeError(response.text)


class IndexDeploymentClient(ANNClient):
    """Encapsulates a subset of control plane APIs that manage ANN endpoints and deployments."""

    def __init__(self, project_id, project_number, region):
        super().__init__(project_id, project_number, region)

    def create_endpoint(self, display_name, vpc_name):
        """Creates an ANN endpoint."""

        api_url = f'{self.ann_parent}/indexEndpoints'
        network_name = f'projects/{self.project_number}/global/networks/{vpc_name}'

        request_body = {
            'display_name': display_name,
            'network': network_name
        }

        response = self.authed_session.post(api_url, data=json.dumps(request_body))
        if response.status_code != 200:
            raise RuntimeError(response.text)
        operation_id = response.json()['name'].split('/')[-1]

        return operation_id

    def list_endpoints(self, display_name=None):
        """Lists all ANN endpoints with a given display name or all endpoints
        in the project if the display_name is not provided."""

        if display_name:
            api_url = f'{self.ann_parent}/indexEndpoints?filter=display_name="{display_name}"'
        else:
            api_url = f'{self.ann_parent}/indexEndpoints'

        response = self.authed_session.get(api_url).json()

        return response['indexEndpoints'] if response else []

    def delete_endpoint(self, endpoint_id):
        """Deletes an ANN endpoint."""

        api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}'
        response = self.authed_session.delete(api_url)
        if response.status_code != 200:
            raise RuntimeError(response.text)

        return response.json()

    def create_deployment(self, display_name, deployment_id, endpoint_id, index_id):
        """Deploys an ANN index to an endpoint."""

        api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}:deployIndex'
        index_name = f'projects/{self.project_number}/locations/{self.region}/indexes/{index_id}'

        request_body = {
            'deployed_index': {
                'id': deployment_id,
                'index': index_name,
                'display_name': display_name
            }
        }

        response = self.authed_session.post(api_url, data=json.dumps(request_body))
        if response.status_code != 200:
            raise RuntimeError(response.text)
        operation_id = response.json()['name'].split('/')[-1]

        return operation_id

    def get_deployment_grpc_ip(self, endpoint_id, deployment_id):
        """Returns a private IP address for a gRPC interface to an Index deployment."""

        api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}'

        response = self.authed_session.get(api_url)
        if response.status_code != 200:
            raise RuntimeError(response.text)

        endpoint_ip = None
        if 'deployedIndexes' in response.json().keys():
            for deployment in response.json()['deployedIndexes']:
                if deployment['id'] == deployment_id:
                    endpoint_ip = deployment['privateEndpoints']['matchGrpcAddress']

        return endpoint_ip

    def delete_deployment(self, endpoint_id, deployment_id):
        """Undeploys an index from an endpoint."""

        api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}:undeployIndex'
        request_body = {
            'deployed_index_id': deployment_id
        }

        response = self.authed_session.post(api_url, data=json.dumps(request_body))
        if response.status_code != 200:
            raise RuntimeError(response.text)

        return response
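As a quick smoke test, the wrapper can be exercised directly from the notebook. A minimal sketch, assuming the constants defined earlier in this notebook; the displayName and name fields are assumptions based on the AI Platform resource format:

```python
# Ad-hoc check of the REST wrapper (not part of the pipeline)
from ann_service import IndexClient

index_client = IndexClient(PROJECT_ID, PROJECT_NUMBER, REGION)
for index in index_client.list_indexes():
    # 'displayName' and 'name' are assumed resource fields
    print(index.get('displayName'), index.get('name'))
```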
retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/analytics-componentized-patterns
apache-2.0
Create Compute PMI component

This component encapsulates a call to the BigQuery stored procedure that calculates item co-occurrence. Refer to the preceding notebooks for more details about the item co-occurrence calculations. The component tracks the output item_cooc table created by the stored procedure using the TFX (simple) Dataset artifact.
%%writefile compute_pmi.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""BigQuery compute PMI component."""

import logging

from google.cloud import bigquery

import tfx
import tensorflow as tf

from tfx.dsl.component.experimental.decorators import component
from tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter
from tfx.types.experimental.simple_artifacts import Dataset as BQDataset


@component
def compute_pmi(
        project_id: Parameter[str],
        bq_dataset: Parameter[str],
        min_item_frequency: Parameter[int],
        max_group_size: Parameter[int],
        item_cooc: OutputArtifact[BQDataset]):

    stored_proc = f'{bq_dataset}.sp_ComputePMI'
    query = f'''
        DECLARE min_item_frequency INT64;
        DECLARE max_group_size INT64;

        SET min_item_frequency = {min_item_frequency};
        SET max_group_size = {max_group_size};

        CALL {stored_proc}(min_item_frequency, max_group_size);
    '''
    result_table = 'item_cooc'

    logging.info(f'Starting computing PMI...')

    client = bigquery.Client(project=project_id)
    query_job = client.query(query)
    query_job.result()  # Wait for the job to complete

    logging.info(f'Items PMI computation completed. Output in {bq_dataset}.{result_table}.')

    # Write the location of the output table to metadata.
    item_cooc.set_string_custom_property('table_name',
                                         f'{project_id}:{bq_dataset}.{result_table}')
retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/analytics-componentized-patterns
apache-2.0
Create Train Item Matching Model component

This component encapsulates a call to the BigQuery stored procedure that trains the BQML Matrix Factorization model. Refer to the preceding notebooks for more details about model training. The component tracks the output item_matching_model BQML model created by the stored procedure using the TFX (simple) Model artifact.
%%writefile train_item_matching.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""BigQuery train item matching model component."""

import logging

from google.cloud import bigquery

import tfx
import tensorflow as tf

from tfx.dsl.component.experimental.decorators import component
from tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter
from tfx.types.experimental.simple_artifacts import Dataset as BQDataset
from tfx.types.standard_artifacts import Model as BQModel


@component
def train_item_matching_model(
        project_id: Parameter[str],
        bq_dataset: Parameter[str],
        dimensions: Parameter[int],
        item_cooc: InputArtifact[BQDataset],
        bq_model: OutputArtifact[BQModel]):

    item_cooc_table = item_cooc.get_string_custom_property('table_name')
    stored_proc = f'{bq_dataset}.sp_TrainItemMatchingModel'
    query = f'''
        DECLARE dimensions INT64 DEFAULT {dimensions};
        CALL {stored_proc}(dimensions);
    '''
    model_name = 'item_matching_model'

    logging.info(f'Using item co-occurrence table: {item_cooc_table}')
    logging.info(f'Starting training of the model...')

    client = bigquery.Client(project=project_id)
    query_job = client.query(query)
    query_job.result()

    logging.info(f'Model training completed. Output in {bq_dataset}.{model_name}.')

    # Write the location of the model to metadata.
    bq_model.set_string_custom_property('model_name',
                                        f'{project_id}:{bq_dataset}.{model_name}')
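Downstream, these components are chained by passing output artifacts as inputs, which is how the item_cooc table produced by compute_pmi reaches this component. A minimal sketch of that wiring as it might appear in the pipeline definition (the dataset name and hyperparameter values are placeholders):

```python
# Hypothetical wiring inside a TFX pipeline definition (placeholder values)
from compute_pmi import compute_pmi
from train_item_matching import train_item_matching_model

pmi_computer = compute_pmi(
    project_id=PROJECT_ID,
    bq_dataset='recommendations',   # assumed BigQuery dataset name
    min_item_frequency=15,
    max_group_size=100)

matrix_factorizer = train_item_matching_model(
    project_id=PROJECT_ID,
    bq_dataset='recommendations',   # assumed BigQuery dataset name
    dimensions=50,
    item_cooc=pmi_computer.outputs['item_cooc'])
```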
retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/analytics-componentized-patterns
apache-2.0
Create Extract Embeddings component

This component encapsulates a call to the BigQuery stored procedure that extracts embeddings from the model into a staging table. Refer to the preceding notebooks for more details about embeddings extraction. The component tracks the output item_embeddings table created by the stored procedure using the TFX (simple) Dataset artifact.
%%writefile extract_embeddings.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Extracts embeddings to a BQ table."""

import logging

from google.cloud import bigquery

import tfx
import tensorflow as tf

from tfx.dsl.component.experimental.decorators import component
from tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter
from tfx.types.experimental.simple_artifacts import Dataset as BQDataset
from tfx.types.standard_artifacts import Model as BQModel


@component
def extract_embeddings(
        project_id: Parameter[str],
        bq_dataset: Parameter[str],
        bq_model: InputArtifact[BQModel],
        item_embeddings: OutputArtifact[BQDataset]):

    embedding_model_name = bq_model.get_string_custom_property('model_name')
    stored_proc = f'{bq_dataset}.sp_ExractEmbeddings'
    query = f'''
        CALL {stored_proc}();
    '''
    embeddings_table = 'item_embeddings'

    logging.info(f'Extracting item embeddings from: {embedding_model_name}')

    client = bigquery.Client(project=project_id)
    query_job = client.query(query)
    query_job.result()  # Wait for the job to complete

    logging.info(f'Embeddings extraction completed. Output in {bq_dataset}.{embeddings_table}')

    # Write the location of the output table to metadata.
    item_embeddings.set_string_custom_property('table_name',
                                               f'{project_id}:{bq_dataset}.{embeddings_table}')
retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/analytics-componentized-patterns
apache-2.0
Create Export Embeddings component This component encapsulates a BigQuery table extraction job that extracts the item_embeddings table to a GCS location as files in the JSONL format. The format of the extracted files is compatible with the ingestion schema for the ANN Service. The component tracks the output files location in the TFX (simple) Dataset artifact.
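Each exported line is a single JSON object. Based on the ANN Service ingestion schema mentioned above, a line is expected to carry an item identifier together with its embedding vector, roughly like the sketch below (the field names and values are illustrative; the actual columns come from the item_embeddings table produced earlier):

```python
# Illustrative shape of one exported JSONL line (placeholder values)
{"id": "item_123", "embedding": [0.0123, -0.0456, 0.0789]}
```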
%%writefile export_embeddings.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Exports embeddings from a BQ table to a GCS location."""

import logging

from google.cloud import bigquery

import tfx
import tensorflow as tf

from tfx.dsl.component.experimental.decorators import component
from tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter
from tfx.types.experimental.simple_artifacts import Dataset

BQDataset = Dataset


@component
def export_embeddings(
        project_id: Parameter[str],
        gcs_location: Parameter[str],
        item_embeddings_bq: InputArtifact[BQDataset],
        item_embeddings_gcs: OutputArtifact[Dataset]):

    filename_pattern = 'embedding-*.json'
    gcs_location = gcs_location.rstrip('/')
    destination_uri = f'{gcs_location}/{filename_pattern}'

    _, table_name = item_embeddings_bq.get_string_custom_property('table_name').split(':')

    logging.info(f'Exporting item embeddings from: {table_name}')

    bq_dataset, table_id = table_name.split('.')
    client = bigquery.Client(project=project_id)
    dataset_ref = bigquery.DatasetReference(project_id, bq_dataset)
    table_ref = dataset_ref.table(table_id)
    job_config = bigquery.job.ExtractJobConfig()
    job_config.destination_format = bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON

    extract_job = client.extract_table(
        table_ref,
        destination_uris=destination_uri,
        job_config=job_config
    )
    extract_job.result()  # Wait for results

    logging.info(f'Embeddings export completed. Output in {gcs_location}')

    # Write the location of the embeddings to metadata.
    item_embeddings_gcs.uri = gcs_location
retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/analytics-componentized-patterns
apache-2.0
Create ANN index component

This component encapsulates the calls to the ANN Service that create an ANN index. The component tracks the created index in the TFX custom ANNIndex artifact.
%%writefile create_index.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Creates an ANN index."""

import logging

import google.auth
import numpy as np
import tfx
import tensorflow as tf

from google.cloud import bigquery
from tfx.dsl.component.experimental.decorators import component
from tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter
from tfx.types.experimental.simple_artifacts import Dataset

from ann_service import IndexClient
from ann_types import ANNIndex

NUM_NEIGHBOURS = 10
MAX_LEAVES_TO_SEARCH = 200
METRIC = 'DOT_PRODUCT_DISTANCE'
FEATURE_NORM_TYPE = 'UNIT_L2_NORM'
CHILD_NODE_COUNT = 1000
APPROXIMATE_NEIGHBORS_COUNT = 50


@component
def create_index(
        project_id: Parameter[str],
        project_number: Parameter[str],
        region: Parameter[str],
        display_name: Parameter[str],
        dimensions: Parameter[int],
        item_embeddings: InputArtifact[Dataset],
        ann_index: OutputArtifact[ANNIndex]):

    index_client = IndexClient(project_id, project_number, region)

    logging.info('Creating index:')
    logging.info(f'    Index display name: {display_name}')
    logging.info(f'    Embeddings location: {item_embeddings.uri}')

    index_description = display_name
    index_metadata = {
        'contents_delta_uri': item_embeddings.uri,
        'config': {
            'dimensions': dimensions,
            'approximate_neighbors_count': APPROXIMATE_NEIGHBORS_COUNT,
            'distance_measure_type': METRIC,
            'feature_norm_type': FEATURE_NORM_TYPE,
            'tree_ah_config': {
                'child_node_count': CHILD_NODE_COUNT,
                'max_leaves_to_search': MAX_LEAVES_TO_SEARCH
            }
        }
    }

    operation_id = index_client.create_index(display_name,
                                             index_description,
                                             index_metadata)
    response = index_client.wait_for_completion(operation_id, 'Waiting for ANN index', 45)
    index_name = response['name']

    logging.info('Index {} created.'.format(index_name))

    # Write the index name to metadata.
    ann_index.set_string_custom_property('index_name', index_name)
    ann_index.set_string_custom_property('index_display_name', display_name)
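In the pipeline definition this component consumes the GCS location produced by export_embeddings. A minimal sketch of how it might be instantiated, assuming embeddings_exporter is the instance of the export_embeddings component (the display name and dimensionality are placeholders):

```python
# Hypothetical instantiation inside the pipeline definition (placeholder values)
from create_index import create_index

index_constructor = create_index(
    project_id=PROJECT_ID,
    project_number=PROJECT_NUMBER,
    region=REGION,
    display_name='item_embeddings_ann_index',  # placeholder display name
    dimensions=50,                             # must match the BQML model's embedding size
    item_embeddings=embeddings_exporter.outputs['item_embeddings_gcs'])
```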
retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/analytics-componentized-patterns
apache-2.0