**WARNING**: it is not OK to extrapolate the validity of the model outside of the range of values where we have observed data. For example, there is no reason to believe the model's predictions about ELV for 200 or 2000 hours of stats training:
result.predict({'hours':[200]})
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
Set up your connection

The next cell contains code to check if you already have a sascfg_personal.py file in your current conda environment. If you do not, one is created for you. Next, [choose your access method](https://sassoftware.github.io/saspy/install.html#choosing-an-access-method) and then read through the configuration properties in sascfg_personal.py.
# Setup for the configuration file - for running inside of a conda environment
import getpass
import os
from pathlib import Path
from shutil import copyfile

saspyPfad = f"C:\\Users\\{getpass.getuser()}\\.conda\\envs\\{os.environ['CONDA_DEFAULT_ENV']}\\Lib\\site-packages\\saspy\\"
saspycfg_personal = Path(f'{saspyPfad}sascfg_personal.py')

if saspycfg_personal.is_file():
    print('All setup and ready to go')
else:
    copyfile(f'{saspyPfad}sascfg.py', f'{saspyPfad}sascfg_personal.py')
    print('The configuration file was created for you, please setup your connection method')
    print(f'Find sascfg_personal.py here: {saspyPfad}')
All setup and ready to go
Apache-2.0
SAS_contrib/Ask_the_Expert_Germany_2021.ipynb
mp675/saspy-examples
Configuration

prod = {
    'iomhost': 'rfnk01-0068.exnet.sas.com',               <-- SAS Host Name
    'iomport': 8591,                                      <-- SAS Workspace Server Port
    'class_id': '440196d4-90f0-11d0-9f41-00a024bb830c',   <-- static, if the value is wrong use proc iomoperate
    'provider': 'sas.iomprovider',                        <-- static
    'encoding': 'windows-1252'                            <-- Python encoding for SAS session encoding
}
# If no configuration name is specified, you get a list of the configured ones
# sas = saspy.SASsession(cfgname='prod')
sas = saspy.SASsession()
Please enter the name of the SAS Config you wish to run. Available Configs are: ['prod', 'dev'] prod
Username: sasdemo
Password: ········
SAS Connection established. Workspace UniqueIdentifier is 5A182D7A-E928-4CA9-8EC4-9BE60ECB2A79
Apache-2.0
SAS_contrib/Ask_the_Expert_Germany_2021.ipynb
mp675/saspy-examples
Explore some interactions with SAS

Getting a feeling for what SASPy can do.
# Let's take a quick look at all the different methods and variables provided by the SASSession object
dir(sas)

# Get a list of all tables inside of the library sashelp
table_df = sas.list_tables(libref='sashelp', results='pandas')

# Search for a table containing a capital C in its name
table_df[table_df['MEMNAME'].str.contains('C')]

# If teach_me_sas is true, instead of executing the code we get the generated code returned
sas.teach_me_SAS(True)
sas.list_tables(libref='sashelp', results='pandas')

# Let's turn it off again to actually run the code
sas.teach_me_SAS(False)

# Create a sasdata object, based on the table cars in the sashelp library
cars = sas.sasdata('cars', 'sashelp')

# Get information about the columns in the table
cars.columnInfo()

# Creating a simple heat map
cars.heatmap('Horsepower', 'EngineSize')

# Clean up for this section
del cars, table_df
_____no_output_____
Apache-2.0
SAS_contrib/Ask_the_Expert_Germany_2021.ipynb
mp675/saspy-examples
Reading in data from local disc with Pandas and uploading it to SAS

1. First we are going to read in a local csv file
2. Create a copy of the base data file in SAS
3. Append the local data to the data stored in SAS and sort it

The Opel data set:

Make,Model,Type,Origin,DriveTrain,MSRP,Invoice,EngineSize,Cylinders,Horsepower,MPG_City,MPG_Highway,Weight,Wheelbase,Length
Opel,Astra Edition,Sedan,Europe,Rear,28495,26155,3,6,22.5,16,23,4023,110,180
Opel,Astra Design & Tech,Sedan,Europe,Rear,30795,28245,4.4,8,32.5,16,22,4824,111,184
Opel,Astra Elegance,Sedan,Europe,Rear,37995,34800,2.5,6,18.4,20,29,3219,107,176
Opel,Astra Ultimate,Sedan,Europe,Rear,42795,38245,2.5,6,18.4,20,29,3197,107,177
Opel,Astra Business Edition,Sedan,Europe,Rear,28495,24800,2.5,6,18.4,19,27,3560,107,177
Opel,Astra Elegance,Sedan,Europe,Rear,30245,27745,2.5,6,18.4,19,27,3461,107,176
# Read a local csv file with pandas and take a look
opel = pd.read_csv('cars_opel.csv')
opel.describe()

# Looks like the horsepower isn't right, let's fix that
opel.loc[:, 'Horsepower'] *= 10
opel.describe()

# Create a working copy of the cars data set
sas.submitLOG('''data work.cars; set sashelp.cars; run;''')

# Append the pandas dataframe to the working copy of the cars data set in SAS
cars = sas.sasdata('cars', 'work')

# The pandas data frame is appended to the SAS data set
cars.append(opel)
cars.tail()

# Sort the data set in SAS to restore the old order
cars.sort('make model type')
cars.tail()

# Confirm that Opel has been added
cars.bar('Make')
_____no_output_____
Apache-2.0
SAS_contrib/Ask_the_Expert_Germany_2021.ipynb
mp675/saspy-examples
Reading in data from SAS and manipulating it with Pandas
# Short form is sd2df()
df = sas.sasdata2dataframe('cars', 'sashelp', dsopts={'where': 'make="BMW"'})
type(df)
_____no_output_____
Apache-2.0
SAS_contrib/Ask_the_Expert_Germany_2021.ipynb
mp675/saspy-examples
Now that the data set is available as a Pandas DataFrame, you can use it in, e.g., a scikit-learn pipeline.
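A minimal sketch of such a pipeline. It assumes the BMW subset keeps the numeric `sashelp.cars` columns shown earlier in this notebook (e.g. `Horsepower`, `EngineSize`, `MSRP`); the feature and target choice here is only illustrative:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Assumed columns from sashelp.cars; adjust to whatever the DataFrame actually contains
X = df[['Horsepower', 'EngineSize']]
y = df['MSRP']

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ('scale', StandardScaler()),    # standardize the features
    ('model', LinearRegression()),  # simple baseline regressor
])
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))   # R^2 on the held-out split
```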
df
_____no_output_____
Apache-2.0
SAS_contrib/Ask_the_Expert_Germany_2021.ipynb
mp675/saspy-examples
Creating a model

The data can be found [here](https://www.kaggle.com/gsr9099/best-model-for-credit-card-approval)
# Read two local csv files
df_applications = pd.read_csv('application_record.csv')
df_credit = pd.read_csv('credit_record.csv')

# Get a feel for the data
print(df_applications.columns)
print(df_applications.head(5))
df_applications.describe()

# Join the two data sets together
df_application_credit = df_applications.join(df_credit, lsuffix='_applications', rsuffix='_credit')
print(df_application_credit.head())
df_application_credit.columns

# Upload the data to the SAS server
# Here just a small sample, as the data set is quite large and the data is pre-loaded on the SAS server
sas.df2sd(df_application_credit[:10], table='application_credit_sample', libref='saspy')

# Create a training data set and test data set in SAS
application_credit_sas = sas.sasdata('application_credit', 'saspy')
application_credit_part = application_credit_sas.partition(fraction=.7, var='status')
application_credit_part.info()

# Creating a SAS/STAT object
stat = sas.sasstat()
dir(stat)

# Target
target = 'status'

# Class Variables
var_class = ['FLAG_OWN_CAR', 'FLAG_OWN_REALTY', 'OCCUPATION_TYPE', 'STATUS']
_____no_output_____
Apache-2.0
SAS_contrib/Ask_the_Expert_Germany_2021.ipynb
mp675/saspy-examples
The HPSPLIT procedure is a high-performance procedure that builds tree-based statistical models for classification and regression. The procedure produces classification trees, which model a categorical response, and regression trees, which model a continuous response. Both types of trees are referred to as decision trees because the model is expressed as a series of if-then statements - [documentation](https://support.sas.com/documentation/onlinedoc/stat/141/hpsplit.pdf)
hpsplit_model = stat.hpsplit(
    data=application_credit_part,
    cls=var_class,
    model="status(event='N')= FLAG_OWN_CAR FLAG_OWN_REALTY OCCUPATION_TYPE MONTHS_BALANCE AMT_INCOME_TOTAL",
    code='trescore.sas',
    procopts='assignmissing=similar',
    out='work.dt_score',
    id="ID",
    partition="rolevar=_partind_(TRAIN='1' VALIDATE='0');"
)
dir(hpsplit_model)

hpsplit_model.ROCPLOT
hpsplit_model.varimportance

sas.set_results('HTML')
hpsplit_model.wholetreeplot
_____no_output_____
Apache-2.0
SAS_contrib/Ask_the_Expert_Germany_2021.ipynb
mp675/saspy-examples
VacationPy

----

Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
import json

# Import API key
from config import g_key
_____no_output_____
ADSL
VacationPy/VacationPy.ipynb
ineal12/python-api-challenge
Store Part I results into DataFrame
* Load the csv exported in Part I to a DataFrame
# read in weather data
weather_data = pd.read_csv('../cities.csv')
weather_data.head()
_____no_output_____
ADSL
VacationPy/VacationPy.ipynb
ineal12/python-api-challenge
Humidity Heatmap
* Configure gmaps.
* Use the Lat and Lng as locations and Humidity as the weight.
* Add Heatmap layer to map.
# Filter columns to be used in weather dataframe
cols = ["City", "Cloudiness", "Country", "Date", "Humidity", "Lat", "Lng", "Temp", "Wind Speed"]
weather_data = weather_data[cols]

# configure gmaps
gmaps.configure(api_key=g_key)

# create coordinates
locations = weather_data[["Lat", "Lng"]].astype(float)
humidity = weather_data["Humidity"].astype(float)

fig = gmaps.figure()

# create heatmap to display humidity across globe
heat_layer = gmaps.heatmap_layer(locations, weights=humidity,
                                 dissipating=False, max_intensity=100,
                                 point_radius=1)

# add heatmap layer
fig.add_layer(heat_layer)

# display heatmap
fig
_____no_output_____
ADSL
VacationPy/VacationPy.ipynb
ineal12/python-api-challenge
Create new DataFrame fitting weather criteria
* Narrow down the cities to fit weather conditions.
* Drop any rows with null values.
# Narrow the cities down to those with ideal weather conditions
weather_data = weather_data[weather_data["Temp"].between(70, 80, inclusive=True)]
weather_data = weather_data[weather_data["Temp"] > 70]
weather_data = weather_data[weather_data["Wind Speed"] < 10]
weather_data = weather_data[weather_data["Cloudiness"] == 0]
weather_data.head()
_____no_output_____
ADSL
VacationPy/VacationPy.ipynb
ineal12/python-api-challenge
Hotel Map
* Store into variable named `hotel_df`.
* Add a "Hotel Name" column to the DataFrame.
* Set parameters to search for hotels within 5000 meters.
* Hit the Google Places API for each city's coordinates.
* Store the first Hotel result into the DataFrame.
* Plot markers on top of the heatmap.
hotel_df = weather_data
hotel_df["Hotel Name"] = ''
hotel_df

params = {
    "types": "lodging",
    "radius": 5000,
    "key": g_key
}

# Use the lat/lng we recovered to identify nearby hotels
for index, row in hotel_df.iterrows():
    # get lat, lng from df
    lat = row["Lat"]
    lng = row["Lng"]

    # change location each iteration while leaving original params in place
    params["location"] = f"{lat},{lng}"

    # query the Places nearby search with our lat/lng
    base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

    # make request and print url
    name_address = requests.get(base_url, params=params).json()
    print(json.dumps(name_address, indent=4, sort_keys=True))

    try:
        hotel_df.loc[index, "Hotel Name"] = name_address["results"][0]["name"]
    except (KeyError, IndexError):
        print("Missing field/result... skipping.")

hotel_df

# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]

# Add marker layer on top of heat map
marker_layer = gmaps.marker_layer(locations, info_box_content=hotel_info)

# Display Map
fig.add_layer(marker_layer)
fig
_____no_output_____
ADSL
VacationPy/VacationPy.ipynb
ineal12/python-api-challenge
Support Vector Machine (SVM) Tutorial

Follow from: [link](https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47)

- SVM can be used for both regression and classification problems.
- The goal of SVM models is to find a hyperplane in an N-dimensional space that distinctly classifies the data points.
- The hyperplane must be the one with the maximum margin.
- Support vectors are data points that are closer to the hyperplane and influence the position and orientation of the hyperplane.
- In SVM, if the output of the model is greater than or equal to 1 the point is assigned to one class, and if it is less than or equal to -1 it is assigned to the other class; the targets are encoded as $[-1, 1]$.
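As a point of reference for the NumPy implementation that follows, the soft-margin objective being minimized can be written (without a bias term, to match the code, and with regularization strength $\lambda$, which the code sets to $1/\text{epoch}$) as:

\begin{equation}
\min_{w}\; \lambda \lVert w \rVert^{2} + \frac{1}{N}\sum_{i=1}^{N} \max\bigl(0,\; 1 - y_i \,(w \cdot x_i)\bigr)
\end{equation}

Points whose margin satisfies $y_i (w \cdot x_i) \ge 1$ contribute only the regularization gradient $2\lambda w$; margin-violating points also contribute $-y_i x_i$, which is exactly the per-sample update performed in the training loop below.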
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
%matplotlib inline
plt.style.use('seaborn')

df = pd.read_csv('data/Iris.csv')
df.head()

df = df.drop(['Id'], axis=1)
df.head()

target = df['Species']
s = list(set(target))
rows = list(range(100, 150))

# Since the Iris dataset has three classes, we remove the third class.
# This then results in a binary classification problem
df = df.drop(df.index[rows])

x, y = df['SepalLengthCm'], df['PetalLengthCm']
setosa_x, setosa_y = x[:50], y[:50]
versicolor_x, versicolor_y = x[50:], y[50:]

plt.figure(figsize=(8, 6))
plt.scatter(setosa_x, setosa_y, marker='.', color='green')
plt.scatter(versicolor_x, versicolor_y, marker='*', color='red')
plt.show()

df = df.drop(['SepalWidthCm', 'PetalWidthCm'], axis=1)

Y = []
target = df['Species']
for val in target:
    if val == 'Iris-setosa':
        Y.append(-1)
    else:
        Y.append(1)

df = df.drop(['Species'], axis=1)
X = df.values.tolist()

# Shuffle and split the data
X, Y = shuffle(X, Y)
x_train, x_test, y_train, y_test = train_test_split(X, Y, train_size=0.9)

x_train = np.array(x_train)
y_train = np.array(y_train).reshape(90, 1)
x_test = np.array(x_test)
y_test = np.array(y_test).reshape(10, 1)
_____no_output_____
MIT
SVM.ipynb
bbrighttaer/data_science_nbs
SVM implementation with Numpy
train_f1 = x_train[:, 0].reshape(90, 1)
train_f2 = x_train[:, 1].reshape(90, 1)

w1, w2 = np.zeros((90, 1)), np.zeros((90, 1))

epochs = 1
alpha = 1e-4

while epochs < 10000:
    y = w1 * train_f1 + w2 * train_f2
    prod = y * y_train
    count = 0
    for val in prod:
        if val >= 1:
            cost = 0
            w1 = w1 - alpha * (2 * 1/epochs * w1)
            w2 = w2 - alpha * (2 * 1/epochs * w2)
        else:
            cost = 1 - val
            w1 = w1 + alpha * (train_f1[count] * y_train[count] - 2 * 1/epochs * w1)
            w2 = w2 + alpha * (train_f2[count] * y_train[count] - 2 * 1/epochs * w2)
        count += 1
    epochs += 1
_____no_output_____
MIT
SVM.ipynb
bbrighttaer/data_science_nbs
Evaluation
index = list(range(10, 90))
w1 = np.delete(w1, index).reshape(10, 1)
w2 = np.delete(w2, index).reshape(10, 1)

## Extract the test data features
test_f1 = x_test[:, 0].reshape(10, 1)
test_f2 = x_test[:, 1].reshape(10, 1)

## Predict
y_pred = w1 * test_f1 + w2 * test_f2
predictions = []
for val in y_pred:
    if val > 1:
        predictions.append(1)
    else:
        predictions.append(-1)

print(accuracy_score(y_test, predictions))
0.9
MIT
SVM.ipynb
bbrighttaer/data_science_nbs
SVM via sklearn
from sklearn.svm import SVC

clf = SVC(kernel='linear')
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
print(accuracy_score(y_test, y_pred))
1.0
MIT
SVM.ipynb
bbrighttaer/data_science_nbs
This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com). Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_tutorial/).

Supervised Learning In-Depth: Random Forests

Previously we saw a powerful discriminative classifier, **Support Vector Machines**. Here we'll take a look at motivating another powerful algorithm. This one is a *non-parametric* algorithm called **Random Forests**.
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
Motivating Random Forests: Decision Trees

Random forests are an example of an *ensemble learner* built on decision trees. For this reason we'll start by discussing decision trees themselves.

Decision trees are extremely intuitive ways to classify or label objects: you simply ask a series of questions designed to zero in on the classification:
import fig_code
fig_code.plot_example_decision_tree()
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
The binary splitting makes this extremely efficient. As always, though, the trick is to *ask the right questions*. This is where the algorithmic process comes in: in training a decision tree classifier, the algorithm looks at the features and decides which questions (or "splits") contain the most information.

Creating a Decision Tree

Here's an example of a decision tree classifier in scikit-learn. We'll start by defining some two-dimensional labeled data:
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=300, centers=4,
                  random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
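As an aside on how "the most information" is quantified when choosing a split: below is a minimal sketch that scores candidate thresholds on the first feature of the blob data above using weighted Gini impurity. This is a hand-rolled illustration with made-up helper names, not scikit-learn's internal implementation (which uses an equivalent criterion).

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array: 1 - sum_k p_k^2."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_quality(feature_values, labels, threshold):
    """Weighted Gini impurity of the two groups produced by the split (lower is better)."""
    left = labels[feature_values <= threshold]
    right = labels[feature_values > threshold]
    n = len(labels)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

# Score a few candidate thresholds on the first feature of the blob data created above
for t in np.percentile(X[:, 0], [25, 50, 75]):
    print(f"threshold {t:.2f} -> weighted Gini {split_quality(X[:, 0], y, t):.3f}")
```

A tree grows greedily: at each node it picks the feature and threshold that minimize this kind of impurity score.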
We have some convenience functions in the repository that help
from fig_code import visualize_tree, plot_tree_interactive
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
Now using IPython's ``interact`` (available in IPython 2.0+, and requires a live kernel) we can view the decision tree splits:
plot_tree_interactive(X, y);
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
Notice that at each increase in depth, every node is split in two **except** those nodes which contain only a single class. The result is a very fast **non-parametric** classification, and can be extremely useful in practice.

**Question: Do you see any problems with this?**

Decision Trees and over-fitting

One issue with decision trees is that it is very easy to create trees which **over-fit** the data. That is, they are flexible enough that they can learn the structure of the noise in the data rather than the signal! For example, take a look at two trees built on two subsets of this dataset:
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()

plt.figure()
visualize_tree(clf, X[:200], y[:200], boundaries=False)
plt.figure()
visualize_tree(clf, X[-200:], y[-200:], boundaries=False)
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
The details of the classifications are completely different! That is an indication of **over-fitting**: when you predict the value for a new point, the result is more reflective of the noise in the model rather than the signal.

Ensembles of Estimators: Random Forests

One possible way to address over-fitting is to use an **Ensemble Method**: this is a meta-estimator which essentially averages the results of many individual estimators which over-fit the data. Somewhat surprisingly, the resulting estimates are much more robust and accurate than the individual estimates which make them up!

One of the most common ensemble methods is the **Random Forest**, in which the ensemble is made up of many decision trees which are in some way perturbed. There are volumes of theory and precedent about how to randomize these trees, but as an example, let's imagine an ensemble of estimators fit on subsets of the data. We can get an idea of what these might look like as follows:
def fit_randomized_tree(random_state=0):
    X, y = make_blobs(n_samples=300, centers=4,
                      random_state=0, cluster_std=2.0)
    clf = DecisionTreeClassifier(max_depth=15)

    rng = np.random.RandomState(random_state)
    i = np.arange(len(y))
    rng.shuffle(i)
    visualize_tree(clf, X[i[:250]], y[i[:250]], boundaries=False,
                   xlim=(X[:, 0].min(), X[:, 0].max()),
                   ylim=(X[:, 1].min(), X[:, 1].max()))

from ipywidgets import interact
interact(fit_randomized_tree, random_state=(0, 100));
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
See how the details of the model change as a function of the sample, while the larger characteristics remain the same! The random forest classifier will do something similar to this, but use a combined version of all these trees to arrive at a final answer:
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, random_state=0)
visualize_tree(clf, X, y, boundaries=False);
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
By averaging over 100 randomly perturbed models, we end up with an overall model which is a much better fit to our data!

*(Note: above we randomized the model through sub-sampling... Random Forests use more sophisticated means of randomization, which you can read about in, e.g., the [scikit-learn documentation](http://scikit-learn.org/stable/modules/ensemble.html#forest))*

Quick Example: Moving to Regression

Above we were considering random forests within the context of classification. Random forests can also be made to work in the case of regression (that is, continuous rather than categorical variables). The estimator to use for this is ``sklearn.ensemble.RandomForestRegressor``.

Let's quickly demonstrate how this can be used:
from sklearn.ensemble import RandomForestRegressor

x = 10 * np.random.rand(100)

def model(x, sigma=0.3):
    fast_oscillation = np.sin(5 * x)
    slow_oscillation = np.sin(0.5 * x)
    noise = sigma * np.random.randn(len(x))
    return slow_oscillation + fast_oscillation + noise

y = model(x)
plt.errorbar(x, y, 0.3, fmt='o');

xfit = np.linspace(0, 10, 1000)
yfit = RandomForestRegressor(100).fit(x[:, None], y).predict(xfit[:, None])
ytrue = model(xfit, 0)

plt.errorbar(x, y, 0.3, fmt='o')
plt.plot(xfit, yfit, '-r');
plt.plot(xfit, ytrue, '-k', alpha=0.5);
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
As you can see, the non-parametric random forest model is flexible enough to fit the multi-period data, without us even specifying a multi-period model!

Example: Random Forest for Classifying Digits

We previously saw the **hand-written digits** data. Let's use that here to test the efficacy of the SVM and Random Forest classifiers.
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()

X = digits.data
y = digits.target
print(X.shape)
print(y.shape)
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
To remind us what we're looking at, we'll visualize the first few data points:
# set up the figure
fig = plt.figure(figsize=(6, 6))  # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)

# plot the digits: each image is 8x8 pixels
for i in range(64):
    ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
    ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')

    # label the image with the target value
    ax.text(0, 7, str(digits.target[i]))
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
We can quickly classify the digits using a decision tree as follows:
from sklearn.model_selection import train_test_split
from sklearn import metrics

Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=11)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
We can check the accuracy of this classifier:
metrics.accuracy_score(ypred, ytest)
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
and for good measure, plot the confusion matrix:
plt.imshow(metrics.confusion_matrix(ypred, ytest),
           interpolation='nearest', cmap=plt.cm.binary)
plt.grid(False)
plt.colorbar()
plt.xlabel("predicted label")
plt.ylabel("true label");
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
DininduSenanayake/sklearn_tutorial
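The digits example above only fits a single decision tree. For comparison, a minimal sketch fitting a random forest on the same `Xtrain`/`Xtest` split, so its accuracy can be set against the decision tree's:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics

# Fit a random forest on the same digits split used above
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(Xtrain, ytrain)
ypred_rf = rf.predict(Xtest)
print(metrics.accuracy_score(ytest, ypred_rf))
```

Comparing this score with the decision-tree accuracy above illustrates the benefit of the ensemble.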
Talks markdown generator for academicpages

Takes a TSV of talks with metadata and converts them for use with [academicpages.github.io](https://academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core python code is also in `talks.py`. Run either from the `markdown_generator` folder after replacing `talks.tsv` with one containing your data.

TODO: Make this work with BibTex and other databases, rather than Stuart's non-standard TSV format and citation style.
import pandas as pd
import os
_____no_output_____
MIT
markdown_generator/talks.ipynb
krcalvert/krcalvert.github.io
Data format

The TSV needs to have the following columns: title, type, url_slug, venue, date, location, talk_url, description, with a header at the top. Many of these fields can be blank, but the columns must be in the TSV.

- Fields that cannot be blank: `title`, `url_slug`, `date`. All else can be blank. `type` defaults to "Talk"
- `date` must be formatted as YYYY-MM-DD.
- `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the paper.
  - The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/talks/YYYY-MM-DD-[url_slug]`
  - The combination of `url_slug` and `date` must be unique, as it will be the basis for your filenames

This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).
!type talks.tsv
title	type	url_slug	venue	date	location	talk_url	description
Closing the Loop on Collections Review	Conference presentation	talk-1	North Carolina Serials Conference	2020-03-01	Chapel Hill, NC
Breaking expectations for technical services assessment: outcomes over output	Conference presentation	talk-2	Southeastern Library Assessment Conference	2019-11-01	Atlanta, GA	https://scholarworks.gsu.edu/southeasternlac/2019/2019/1/
You, too, can be a library administrator (and enjoy it).	Conference presentation	talk-3	62nd North Carolina Library Association Biennial Conference	2017-10-01	Winston-Salem, NC	https://www.slideshare.net/KristinCalvert1/you-too-can-be-a-library-administrator-and-enjoy-it
Technical services and public services: collaborative decision making	Conference presentation	talk-4	Role of Professional Librarian in Technical Services Interest Group, American Library Association Midwinter Meeting	2017-01-01	Atlanta, GA	http://libres.uncg.edu/ir/wcu/listing.aspx?id=21773
The weighted allocation formula and the association between academic discipline and research cited by faculty	Conference presentation	talk-5	Seventh Annual Collection Management & Development Research Forum, American Library Association Annual Meeting	2016-06-01	Orlando, FL
From Spreadsheets to SUSHI: Five Years of Assessing Use of E-Resources	Conference presentation	talk-6	Charleston Conference: Issues in Book and Serial Acquisitions	2013-11-01	Charleston, SC.	https://www.slideshare.net/KristinCalvert1/from-spreadsheets-to-sushi
Gone, but not forgotten: an assessment framework for collection reviews	Conference presentation	talk-7	ACRL 2021 Conference	2021-04-15	Virtual
MIT
markdown_generator/talks.ipynb
krcalvert/krcalvert.github.io
Import TSV

Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or `\t`.

I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.
talks = pd.read_csv("talks.tsv", sep="\t", header=0)
talks
_____no_output_____
MIT
markdown_generator/talks.ipynb
krcalvert/krcalvert.github.io
Escape special characters

YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML-encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&apos;"
}

def html_escape(text):
    if type(text) is str:
        return "".join(html_escape_table.get(c, c) for c in text)
    else:
        return "False"
_____no_output_____
MIT
markdown_generator/talks.ipynb
krcalvert/krcalvert.github.io
Creating the markdown files

This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (```md```) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.
loc_dict = {}

for row, item in talks.iterrows():

    md_filename = str(item.date) + "-" + item.url_slug + ".md"
    html_filename = str(item.date) + "-" + item.url_slug
    year = item.date[:4]

    md = "---\ntitle: \"" + item.title + '"\n'
    md += "collection: talks" + "\n"

    if len(str(item.type)) > 3:
        md += 'type: "' + item.type + '"\n'
    else:
        md += 'type: "Talk"\n'

    md += "permalink: /talks/" + html_filename + "\n"

    if len(str(item.venue)) > 3:
        md += 'venue: "' + item.venue + '"\n'

    # note: the date check was previously (incorrectly) gated on item.location
    if len(str(item.date)) > 3:
        md += "date: " + str(item.date) + "\n"

    if len(str(item.location)) > 3:
        md += 'location: "' + str(item.location) + '"\n'

    md += "---\n"

    if len(str(item.talk_url)) > 3:
        md += "\n[More information here](" + item.talk_url + ")\n"

    if len(str(item.description)) > 3:
        md += "\n" + html_escape(item.description) + "\n"

    md_filename = os.path.basename(md_filename)
    # print(md)

    with open("../_talks/" + md_filename, 'w') as f:
        f.write(md)
_____no_output_____
MIT
markdown_generator/talks.ipynb
krcalvert/krcalvert.github.io
These files are in the talks directory, one directory below where we're working from.
!ls ../_talks
!cat ../_talks/2013-03-01-tutorial-1.md
---
title: "Tutorial 1 on Relevant Topic in Your Field"
collection: talks
type: "Tutorial"
permalink: /talks/2013-03-01-tutorial-1
venue: "UC-Berkeley Institute for Testing Science"
date: 2013-03-01
location: "Berkeley CA, USA"
---

[More information here](http://exampleurl.com)

This is a description of your tutorial, note the different field in type. This is a markdown files that can be all markdown-ified like any other post. Yay markdown!
MIT
markdown_generator/talks.ipynb
krcalvert/krcalvert.github.io
How to build an RNA-seq logistic regression classifier with BigQuery ML

Check out other notebooks at our [Community Notebooks Repository](https://github.com/isb-cgc/Community-Notebooks)!

- **Title:** How to build an RNA-seq logistic regression classifier with BigQuery ML
- **Author:** John Phan
- **Created:** 2021-07-19
- **Purpose:** Demonstrate use of BigQuery ML to predict a cancer endpoint using gene expression data.
- **URL:** https://github.com/isb-cgc/Community-Notebooks/blob/master/MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb
- **Note:** This example is based on the work published by [Bosquet et al.](https://molecular-cancer.biomedcentral.com/articles/10.1186/s12943-016-0548-9)

This notebook builds upon the [scikit-learn notebook](https://github.com/isb-cgc/Community-Notebooks/blob/master/MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier.ipynb) and demonstrates how to build a machine learning model using BigQuery ML to predict ovarian cancer treatment outcome. BigQuery is used to create a temporary data table that contains both training and testing data. These datasets are then used to fit and evaluate a Logistic Regression classifier.

Import Dependencies
# GCP libraries
from google.cloud import bigquery
from google.colab import auth
_____no_output_____
Apache-2.0
MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb
rpatil524/Community-Notebooks
Authenticate

Before using BigQuery, we need to get authorization for access to BigQuery and the Google Cloud. For more information see ['Quick Start Guide to ISB-CGC'](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/HowToGetStartedonISB-CGC.html). Alternative authentication methods can be found [here](https://googleapis.dev/python/google-api-core/latest/auth.html).
# if you're using Google Colab, authenticate to gcloud with the following
auth.authenticate_user()

# alternatively, use the gcloud SDK
#!gcloud auth application-default login
_____no_output_____
Apache-2.0
MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb
rpatil524/Community-Notebooks
Parameters

Customize the following parameters based on your notebook, execution environment, or project. BigQuery ML must create and store classification models, so be sure that you have write access to the locations stored in the "bq_dataset" and "bq_project" variables.
# set the google project that will be billed for this notebook's computations
google_project = 'google-project'  ## CHANGE ME

# bq project for storing ML model
bq_project = 'bq-project'  ## CHANGE ME

# bq dataset for storing ML model
bq_dataset = 'scratch'  ## CHANGE ME

# name of temporary table for data
bq_tmp_table = 'tmp_data'

# name of ML model
bq_ml_model = 'tcga_ov_therapy_ml_lr_model'

# in this example, we'll be using the Ovarian cancer TCGA dataset
cancer_type = 'TCGA-OV'

# genes used for prediction model, taken from Bosquet et al.
genes = "'RHOT1','MYO7A','ZBTB10','MATK','ST18','RPS23','GCNT1','DROSHA','NUAK1','CCPG1',\
'PDGFD','KLRAP1','MTAP','RNF13','THBS1','MLX','FAP','TIMP3','PRSS1','SLC7A11',\
'OLFML3','RPS20','MCM5','POLE','STEAP4','LRRC8D','WBP1L','ENTPD5','SYNE1','DPT',\
'COPZ2','TRIO','PDPR'"

# clinical data table
clinical_table = 'isb-cgc-bq.TCGA_versioned.clinical_gdc_2019_06'

# RNA seq data table
rnaseq_table = 'isb-cgc-bq.TCGA.RNAseq_hg38_gdc_current'
_____no_output_____
Apache-2.0
MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb
rpatil524/Community-Notebooks
BigQuery Client

Create the BigQuery client.
# Create a client to access the data within BigQuery
client = bigquery.Client(google_project)
_____no_output_____
Apache-2.0
MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb
rpatil524/Community-Notebooks
Create a Table with a Subset of the Gene Expression Data

Pull RNA-seq gene expression data from the TCGA RNA-seq BigQuery table, join it with clinical labels, and pivot the table so that it can be used with BigQuery ML. In this example, we will label the samples based on therapy outcome. "Complete Remission/Response" will be labeled as "1" while all other therapy outcomes will be labeled as "0". This prepares the data for binary classification.

Prediction modeling with RNA-seq data typically requires a feature selection step to reduce the dimensionality of the data before training a classifier. However, to simplify this example, we will use a pre-identified set of 33 genes (Bosquet et al. identified 34 genes, but PRSS2 and its aliases are not available in the hg38 RNA-seq data).

Creation of a BQ table with only the data of interest reduces the size of the data passed to BQ ML and can significantly reduce the cost of running BQ ML queries. This query also randomly splits the dataset into "training" and "testing" sets using the "FARM_FINGERPRINT" hash function in BigQuery. "FARM_FINGERPRINT" generates an integer from the input string. More information can be found [here](https://cloud.google.com/bigquery/docs/reference/standard-sql/hash_functions).
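To make the split logic concrete before the full query, here is a small standalone illustration of how `FARM_FINGERPRINT` buckets a string into the "training" or "testing" partition with the same 50/50 threshold used below. The literal IDs are made up for the example:

```python
# Standalone illustration of the hash-based split; the IDs are hypothetical.
split_demo = client.query("""
    SELECT
      id,
      MOD(ABS(FARM_FINGERPRINT(id)), 10) AS bucket,
      CASE
        WHEN MOD(ABS(FARM_FINGERPRINT(id)), 10) < 5 THEN 'training'
        ELSE 'testing'
      END AS data_partition
    FROM UNNEST(['TCGA-A', 'TCGA-B', 'TCGA-C', 'TCGA-D']) AS id
""").result().to_dataframe()

print(split_demo)
```

Because the partition is a deterministic function of the case barcode, the same case always lands in the same partition across runs.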
tmp_table_query = client.query(("""
BEGIN
CREATE OR REPLACE TABLE `{bq_project}.{bq_dataset}.{bq_tmp_table}` AS
SELECT * FROM (
    SELECT
        labels.case_barcode as sample,
        labels.data_partition as data_partition,
        labels.response_label AS label,
        ge.gene_name AS gene_name,
        -- Multiple samples may exist per case, take the max value
        MAX(LOG(ge.HTSeq__FPKM_UQ+1)) AS gene_expression
    FROM `{rnaseq_table}` AS ge
    INNER JOIN (
        SELECT * FROM (
            SELECT
                case_barcode,
                primary_therapy_outcome_success,
                CASE
                    -- Complete Response --> label as 1
                    -- All other responses --> label as 0
                    WHEN primary_therapy_outcome_success = 'Complete Remission/Response' THEN 1
                    WHEN (primary_therapy_outcome_success IN (
                        'Partial Remission/Response','Progressive Disease','Stable Disease'
                    )) THEN 0
                END AS response_label,
                CASE
                    WHEN MOD(ABS(FARM_FINGERPRINT(case_barcode)), 10) < 5 THEN 'training'
                    WHEN MOD(ABS(FARM_FINGERPRINT(case_barcode)), 10) >= 5 THEN 'testing'
                END AS data_partition
            FROM `{clinical_table}`
            WHERE project_short_name = '{cancer_type}'
                AND primary_therapy_outcome_success IS NOT NULL
        )
    ) labels
    ON labels.case_barcode = ge.case_barcode
    WHERE gene_name IN ({genes})
    GROUP BY sample, label, data_partition, gene_name
)
PIVOT (
    MAX(gene_expression) FOR gene_name IN ({genes})
);
END;
""").format(
    bq_project=bq_project,
    bq_dataset=bq_dataset,
    bq_tmp_table=bq_tmp_table,
    rnaseq_table=rnaseq_table,
    clinical_table=clinical_table,
    cancer_type=cancer_type,
    genes=genes
)).result()

print(tmp_table_query)
<google.cloud.bigquery.table._EmptyRowIterator object at 0x7f3894001250>
Apache-2.0
MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb
rpatil524/Community-Notebooks
Let's take a look at this subset table. The data has been pivoted such that each of the 33 genes is available as a column that can be "SELECTED" in a query. In addition, the "label" and "data_partition" columns simplify data handling for classifier training and evaluation.
tmp_table_data = client.query(("""
SELECT
    *  -- usually not recommended to use *, but in this case, we want to see all of the 33 genes
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
""").format(
    bq_project=bq_project,
    bq_dataset=bq_dataset,
    bq_tmp_table=bq_tmp_table
)).result().to_dataframe()

print(tmp_table_data.info())
tmp_table_data
<class 'pandas.core.frame.DataFrame'> RangeIndex: 264 entries, 0 to 263 Data columns (total 36 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 sample 264 non-null object 1 data_partition 264 non-null object 2 label 264 non-null int64 3 RHOT1 264 non-null float64 4 MYO7A 264 non-null float64 5 ZBTB10 264 non-null float64 6 MATK 264 non-null float64 7 ST18 264 non-null float64 8 RPS23 264 non-null float64 9 GCNT1 264 non-null float64 10 DROSHA 264 non-null float64 11 NUAK1 264 non-null float64 12 CCPG1 264 non-null float64 13 PDGFD 264 non-null float64 14 KLRAP1 264 non-null float64 15 MTAP 264 non-null float64 16 RNF13 264 non-null float64 17 THBS1 264 non-null float64 18 MLX 264 non-null float64 19 FAP 264 non-null float64 20 TIMP3 264 non-null float64 21 PRSS1 264 non-null float64 22 SLC7A11 264 non-null float64 23 OLFML3 264 non-null float64 24 RPS20 264 non-null float64 25 MCM5 264 non-null float64 26 POLE 264 non-null float64 27 STEAP4 264 non-null float64 28 LRRC8D 264 non-null float64 29 WBP1L 264 non-null float64 30 ENTPD5 264 non-null float64 31 SYNE1 264 non-null float64 32 DPT 264 non-null float64 33 COPZ2 264 non-null float64 34 TRIO 264 non-null float64 35 PDPR 264 non-null float64 dtypes: float64(33), int64(1), object(2) memory usage: 74.4+ KB None
Apache-2.0
MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb
rpatil524/Community-Notebooks
Train the Machine Learning Model

Now we can train a classifier using BigQuery ML with the data stored in the subset table. This model will be stored in the location specified by the "bq_ml_model" variable, and can be reused to predict samples in the future.

We pass three options to the BQ ML model: model_type, auto_class_weights, and input_label_cols. Model_type specifies the classifier model type. In this case, we use "LOGISTIC_REG" to train a logistic regression classifier. Other classifier options are documented [here](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create). Auto_class_weights indicates whether samples should be weighted to balance the classes. For example, if the dataset happens to have more samples labeled as "Complete Response", those samples would be less weighted to ensure that the model is not biased towards predicting those samples. Input_label_cols tells BigQuery that the "label" column should be used to determine each sample's label.

**Warning**: BigQuery ML models can be very time-consuming and expensive to train. Please check your data size before running BigQuery ML commands. Information about BigQuery ML costs can be found [here](https://cloud.google.com/bigquery-ml/pricing).
# create ML model using BigQuery
ml_model_query = client.query(("""
CREATE OR REPLACE MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`
OPTIONS (
    model_type='LOGISTIC_REG',
    auto_class_weights=TRUE,
    input_label_cols=['label']
) AS
SELECT * EXCEPT(sample, data_partition)  -- when training, we only need the labels and feature columns
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
WHERE data_partition = 'training'  -- using training data only
""").format(
    bq_project=bq_project,
    bq_dataset=bq_dataset,
    bq_ml_model=bq_ml_model,
    bq_tmp_table=bq_tmp_table
)).result()

print(ml_model_query)

# now get the model metadata
ml_model = client.get_model('{}.{}.{}'.format(bq_project, bq_dataset, bq_ml_model))
print(ml_model)
<google.cloud.bigquery.table._EmptyRowIterator object at 0x7f3893663810>
Model(reference=ModelReference(project='isb-project-zero', dataset_id='jhp_scratch', project_id='tcga_ov_therapy_ml_lr_model'))
Apache-2.0
MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb
rpatil524/Community-Notebooks
Evaluate the Machine Learning Model

Once the model has been trained and stored, we can evaluate the model's performance using the "testing" dataset from our subset table. Evaluating a BQ ML model is generally less expensive than training.

Use the following query to evaluate the BQ ML model. Note that we're using the "data_partition = 'testing'" clause to ensure that we're only evaluating the model with test samples from the subset table.

BigQuery's ML.EVALUATE function returns several performance metrics: precision, recall, accuracy, f1_score, log_loss, and roc_auc. More details about these performance metrics are available from [Google's ML Crash Course](https://developers.google.com/machine-learning/crash-course/classification/video-lecture). Specific topics can be found at the following URLs: [precision and recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall), [accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy), [ROC and AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc).
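For quick reference, the threshold-based metrics among these have the standard definitions in terms of true/false positives and negatives (TP, FP, TN, FN):

\begin{equation}
\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}, \qquad \text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \text{f1\_score} = \frac{2\,\text{precision}\cdot\text{recall}}{\text{precision} + \text{recall}}
\end{equation}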
ml_eval = client.query(("""
SELECT * FROM ML.EVALUATE (MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`,
    (
        SELECT * EXCEPT(sample, data_partition)
        FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
        WHERE data_partition = 'testing'
    )
)
""").format(
    bq_project=bq_project,
    bq_dataset=bq_dataset,
    bq_ml_model=bq_ml_model,
    bq_tmp_table=bq_tmp_table
)).result().to_dataframe()

# Display the table of evaluation results
ml_eval
_____no_output_____
Apache-2.0
MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb
rpatil524/Community-Notebooks
Predict Outcome for One or More Samples

ML.EVALUATE evaluates a model's performance, but does not produce actual predictions for each sample. In order to do that, we need to use the ML.PREDICT function. The syntax is similar to that of the ML.EVALUATE function and returns "label", "predicted_label", "predicted_label_probs", and all feature columns. Since the feature columns are unchanged from the input dataset, we select only the original label, predicted label, and probabilities for each sample.

Note that the input dataset can include one or more samples, and must include the same set of features as the training dataset.
ml_predict = client.query(("""
SELECT
    label,
    predicted_label,
    predicted_label_probs
FROM ML.PREDICT (MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`,
    (
        SELECT * EXCEPT(sample, data_partition)
        FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
        WHERE data_partition = 'testing'  -- Use the testing dataset
    )
)
""").format(
    bq_project=bq_project,
    bq_dataset=bq_dataset,
    bq_ml_model=bq_ml_model,
    bq_tmp_table=bq_tmp_table
)).result().to_dataframe()

# Display the table of prediction results
ml_predict

# Calculate the accuracy of prediction, which should match the result of ML.EVALUATE
accuracy = 1 - sum(abs(ml_predict['label'] - ml_predict['predicted_label'])) / len(ml_predict)
print('Accuracy: ', accuracy)
Accuracy: 0.6230769230769231
Apache-2.0
MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb
rpatil524/Community-Notebooks
FireCARES ops management notebook

Using this notebook

In order to use this notebook, a single production/test web node will need to be bootstrapped w/ ipython and django-shell-plus python libraries. After bootstrapping is complete and while forwarding a local port to the port that the ipython notebook server will be running on the node, you can open the ipython notebook using the token provided in the SSH session after ipython notebook server start.

Bootstrapping a prod/test node

To bootstrap a specific node for use of this notebook, you'll need to ssh into the node and forward a local port to localhost:8888 on the node.

e.g. `ssh firecares-prod -L 8890:localhost:8888` to forward the local port 8890 to 8888 on the web node, assumes that the "firecares-prod" SSH config is listed w/ the correct webserver IP in your `~/.ssh/config`

- `sudo chown -R firecares: /run/user/1000` as the `ubuntu` user
- `sudo su firecares`
- `workon firecares`
- `pip install -r dev_requirements.txt`
- `python manage.py shell_plus --notebook --no-browser --settings=firecares.settings.local`

At this point, there will be a mention of "The jupyter notebook is running at: http://localhost:8888/?token=XXXX". Copy the URL, but be sure to use the local port that you're forwarding instead for the connection vs the default of 8888 if necessary.

Since the ipython notebook server supports django-shell-plus, all of the FireCARES models will automatically be imported. From here any command that you execute in the notebook will run on the remote web node immediately.

Fire department management

Re-generate performance score for a specific fire department

Useful for when a department's FDID has been corrected. Will do the following:

1. Pull NFIRS counts for the department (cached in FireCARES database)
1. Generate fires heatmap
1. Update department owned census tracts geom
1. Regenerate structure hazard counts in jurisdiction
1. Regenerate population_quartiles materialized view to get safe grades for department
1. Re-run performance score for the department
import csv
import psycopg2
from firecares.tasks import update
from firecares.utils import dictfetchall
from django.db import connections
from django.conf import settings
from django.core.management import call_command
from IPython.display import display
import pandas as pd

fd = {'fdid': '18M04', 'state': 'WA'}
nfirs = connections['nfirs']
department = FireDepartment.objects.filter(**fd).first()
fid = department.id
print 'FireCARES id: %s' % fid
print 'https://firecares.org/departments/%s' % fid

%%time
# Get raw fire incident counts (prior to intersection with )
with nfirs.cursor() as cur:
    cur.execute("""
        select count(1), fdid, state, extract(year from inc_date) as year
        from fireincident
        where fdid=%(fdid)s and state=%(state)s
        group by fdid, state, year
        order by year""", fd)
    fire_years = dictfetchall(cur)

display(fire_years)
print 'Total fires: %s\n' % sum([x['count'] for x in fire_years])

%%time
# Get building fire counts after structure hazard level calculations
sql = update.STRUCTURE_FIRES
print sql

with nfirs.cursor() as cur:
    cur.execute(sql, dict(fd, years=tuple([x['year'] for x in fire_years])))
    fires_by_hazard_level = dictfetchall(cur)

display(fires_by_hazard_level)
print 'Total geocoded fires: %s\n' % sum([x['count'] for x in fires_by_hazard_level])

sql = """
select alarm, a.inc_type, alarms, ff_death, oth_death, ST_X(geom) as x, st_y(geom) as y,
    COALESCE(y.risk_category, 'Unknown') as risk_category
from buildingfires a
LEFT JOIN (
    SELECT state, fdid, inc_date, inc_no, exp_no, x.geom, x.parcel_id, x.risk_category
    FROM (
        SELECT *
        FROM incidentaddress a
        LEFT JOIN parcel_risk_category_local using (parcel_id)
    ) AS x
) AS y
USING (state, fdid, inc_date, inc_no, exp_no)
WHERE a.state = %(state)s and a.fdid = %(fdid)s"""

with nfirs.cursor() as cur:
    cur.execute(sql, fd)
    rows = dictfetchall(cur)

out_name = '{id}-building-fires.csv'.format(id=fid)
full_path = '/tmp/' + out_name

with open(full_path, 'w') as f:
    writer = csv.DictWriter(f, fieldnames=[x.name for x in cur.description])
    writer.writeheader()
    writer.writerows(rows)

# Push building fires to S3
!aws s3 cp $full_path s3://firecares-test/$out_name --acl="public-read"

update.update_nfirs_counts(fid)
update.calculate_department_census_geom(fid)

# Fire counts by hazard level over all years, keep in mind that the performance score model will currently ONLY work
# hazard levels w/
display(pd.DataFrame(fires_by_hazard_level).groupby(['risk_level']).sum()['count'])

update.update_performance_score(fid)
_____no_output_____
MIT
Ops MGMT.ipynb
FireCARES/firecares
Solving vertex cover with a quantum annealer

The problem of vertex cover is, given an undirected graph $G = (V, E)$, colour the smallest number of vertices such that each edge $e \in E$ is connected to a coloured vertex.

This notebook works through the process of creating a random graph, translating it to an optimization problem, and eventually finding the ground state using a quantum annealer.

Graph setup

The first thing we will do is create an instance of the problem, by constructing a small, random undirected graph. We are going to use the `networkx` package, which should already be installed if you are using Anaconda.
import dimod
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np

n_vertices = 5
n_edges = 6

small_graph = nx.gnm_random_graph(n_vertices, n_edges)
nx.draw(small_graph, with_labels=True)
_____no_output_____
MIT
04-annealing-applications/Vertex-Cover.ipynb
a-capra/Intro-QC-TRIUMF
Constructing the Hamiltonian

I showed in class that the objective function for vertex cover looks like this:
\begin{equation} \sum_{(u,v) \in E} (1 - x_u) (1 - x_v) + \gamma \sum_{v \in V} x_v \end{equation}
We want to find an assignment of the $x_u$ of 1 (coloured) or 0 (uncoloured) that _minimizes_ this function. The first sum tries to force us to choose an assignment that makes sure every edge gets attached to a coloured vertex. The second sum is essentially just counting the number of coloured vertices.

**Task**: Expand out the QUBO above to see how you can convert it to a more 'traditional' looking QUBO:
\begin{equation} \sum_{(u,v) \in E} x_u x_v + \sum_{v \in V} (\gamma - \hbox{deg}(x_v)) x_v \end{equation}
where deg($x_v$) indicates the degree of vertex $x_v$ in the graph.
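One way to carry out the expansion in the task above: multiply out each edge term and collect the linear terms, noting that each vertex $v$ appears once for every edge incident to it:

\begin{equation}
\sum_{(u,v) \in E} (1 - x_u)(1 - x_v) = \sum_{(u,v) \in E} \bigl(1 - x_u - x_v + x_u x_v\bigr) = |E| - \sum_{v \in V} \hbox{deg}(x_v)\, x_v + \sum_{(u,v) \in E} x_u x_v .
\end{equation}

Adding the counting term $\gamma \sum_{v \in V} x_v$ and dropping the constant $|E|$ (which does not change which assignment minimizes the objective) gives the second form above.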
γ = 0.8
Q = {x : 1 for x in small_graph.edges()}
r = {x : (γ - small_graph.degree[x]) for x in small_graph.nodes}
_____no_output_____
MIT
04-annealing-applications/Vertex-Cover.ipynb
a-capra/Intro-QC-TRIUMF
Let's convert it to the appropriate data structure, and solve using the exact solver.
bqm = dimod.BinaryQuadraticModel(r, Q, 0, dimod.BINARY)
response = dimod.ExactSolver().sample(bqm)
print(f"Sample energy = {next(response.data(['energy']))[0]}")
_____no_output_____
MIT
04-annealing-applications/Vertex-Cover.ipynb
a-capra/Intro-QC-TRIUMF
Let's print the graph with proper colours included
colour_assignments = next(response.data(['sample']))[0]
colours = ['grey' if colour_assignments[x] == 0 else 'red' for x in range(len(colour_assignments))]
nx.draw(small_graph, with_labels=True, node_color=colours)
_____no_output_____
MIT
04-annealing-applications/Vertex-Cover.ipynb
a-capra/Intro-QC-TRIUMF
Scaling up... That one was easy enough to solve by hand. Let's try a much larger instance...
n_vertices = 20
n_edges = 60

large_graph = nx.gnm_random_graph(n_vertices, n_edges)
nx.draw(large_graph, with_labels=True)

# Create h, J and put it into the exact solver
γ = 0.8
Q = {x : 1 for x in large_graph.edges()}
r = {x : (γ - large_graph.degree[x]) for x in large_graph.nodes}

bqm = dimod.BinaryQuadraticModel(r, Q, 0, dimod.BINARY)
response = dimod.ExactSolver().sample(bqm)
print(f"Sample energy = {next(response.data(['energy']))[0]}")

colour_assignments = next(response.data(['sample']))[0]
colours = ['grey' if colour_assignments[x] == 0 else 'red' for x in range(len(colour_assignments))]
nx.draw(large_graph, with_labels=True, node_color=colours)
print(f"Coloured {list(colour_assignments.values()).count(1)}/{n_vertices} vertices.")
_____no_output_____
MIT
04-annealing-applications/Vertex-Cover.ipynb
a-capra/Intro-QC-TRIUMF
Running on the D-Wave You'll only be able to run the next few cells if you have D-Wave access. We will send the same graph as before to the D-Wave QPU and see what kind of results we get back!
from dwave.system.samplers import DWaveSampler
from dwave.system.composites import EmbeddingComposite

sampler = EmbeddingComposite(DWaveSampler())

ising_conversion = bqm.to_ising()
h, J = ising_conversion[0], ising_conversion[1]
response = sampler.sample_ising(h, J, num_reads=1000)

best_solution = np.sort(response.record, order='energy')[0]
print(f"Sample energy = {best_solution['energy']}")

colour_assignments_qpu = {x : best_solution['sample'][x] for x in range(n_vertices)}
for x in range(n_vertices):
    if colour_assignments_qpu[x] == -1:
        colour_assignments_qpu[x] = 0

colours = ['grey' if colour_assignments_qpu[x] == 0 else 'red' for x in range(len(colour_assignments_qpu))]
nx.draw(large_graph, with_labels=True, node_color=colours)
print(f"Coloured {list(colour_assignments_qpu.values()).count(1)}/{n_vertices} vertices.")

print("Node\tExact\tQPU")
for x in range(n_vertices):
    print(f"{x}\t{colour_assignments[x]}\t{colour_assignments_qpu[x]}")
_____no_output_____
MIT
04-annealing-applications/Vertex-Cover.ipynb
a-capra/Intro-QC-TRIUMF
Here is a scatter plot of all the different energies we got out, against the number of times each solution occurred.
plt.scatter(response.record['energy'], response.record['num_occurrences'])
response.record['num_occurrences']
_____no_output_____
MIT
04-annealing-applications/Vertex-Cover.ipynb
a-capra/Intro-QC-TRIUMF
Notebook Template

This Notebook is stubbed out with some project paths, loading of environment variables, and common package imports to speed up the process of starting a new project.

It is highly recommended you copy and rename this notebook following the naming convention outlined in the readme of naming notebooks with a double number such as `01_first_thing`, and `02_next_thing`. This way the order of notebooks is apparent, and each notebook does not need to be needlessly long, complex, and difficult to follow.
import importlib
import os
from pathlib import Path
import sys

from arcgis.features import GeoAccessor, GeoSeriesAccessor
from arcgis.gis import GIS
from dotenv import load_dotenv, find_dotenv
import pandas as pd

# import arcpy if available
if importlib.util.find_spec("arcpy") is not None:
    import arcpy

# load environment variables from .env
load_dotenv(find_dotenv())

# paths to common data locations - NOTE: to convert any path to a raw string, simply use str(path_instance)
project_parent = Path('./').absolute().parent

data_dir = project_parent/'data'

data_raw = data_dir/'raw'
data_ext = data_dir/'external'
data_int = data_dir/'interim'
data_out = data_dir/'processed'

gdb_raw = data_raw/'raw.gdb'
gdb_int = data_int/'interim.gdb'
gdb_out = data_out/'processed.gdb'

# import the project package from the project package path
sys.path.append(str(project_parent/'src'))
import la_challenge

# load the "autoreload" extension so that code can change, & always reload modules so that as you change code in src, it gets loaded
%load_ext autoreload
%autoreload 2

# create a GIS object instance; if you did not enter any information here, it defaults to anonymous access to ArcGIS Online
gis = GIS(
    url=os.getenv('ESRI_GIS_URL'),
    username=os.getenv('ESRI_GIS_USERNAME'),
    password=os.getenv('ESRI_GIS_PASSWORD')
)

gis
_____no_output_____
Apache-2.0
notebooks/notebook_template.ipynb
knu2xs/la-covid-challenge
Read DC data
fname = "../data/ChungCheonDC/20150101000000.apr"
survey = readReservoirDC(fname)
dobsAppres = survey.dobs

fig, ax = plt.subplots(1, 1, figsize=(10, 2))
dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(survey, ax, dtype='volt', sameratio=False)
cb = dat[2]
cb.set_label("Apparent resistivity (ohm-m)")

geom = np.hstack(dat[3])
dobsDC = dobsAppres * geom

# problem = DC.Problem2D_CC(mesh)
cs = 2.5
npad = 6
hx = [(cs, npad, -1.3), (cs, 160), (cs, npad, 1.3)]
hy = [(cs, npad, -1.3), (cs, 20)]
mesh = Mesh.TensorMesh([hx, hy])
mesh = Mesh.TensorMesh([hx, hy], x0=[-mesh.hx[:6].sum()-0.25, -mesh.hy.sum()])

def from3Dto2Dsurvey(survey):
    srcLists2D = []
    nSrc = len(survey.srcList)
    for iSrc in range(nSrc):
        src = survey.srcList[iSrc]
        locsM = np.c_[src.rxList[0].locs[0][:, 0], np.ones_like(src.rxList[0].locs[0][:, 0])*-0.75]
        locsN = np.c_[src.rxList[0].locs[1][:, 0], np.ones_like(src.rxList[0].locs[1][:, 0])*-0.75]
        rx = DC.Rx.Dipole_ky(locsM, locsN)
        locA = np.r_[src.loc[0][0], -0.75]
        locB = np.r_[src.loc[1][0], -0.75]
        src = DC.Src.Dipole([rx], locA, locB)
        srcLists2D.append(src)
    survey2D = DC.Survey_ky(srcLists2D)
    return survey2D

from SimPEG import (Mesh, Maps, Utils, DataMisfit, Regularization,
                    Optimization, Inversion, InvProblem, Directives)

mapping = Maps.ExpMap(mesh)
survey2D = from3Dto2Dsurvey(survey)
problem = DC.Problem2D_N(mesh, mapping=mapping)
problem.pair(survey2D)
m0 = np.ones(mesh.nC)*np.log(1e-2)

from ipywidgets import interact
nSrc = len(survey2D.srcList)

def foo(isrc):
    figsize(10, 5)
    mesh.plotImage(np.ones(mesh.nC)*np.nan, gridOpts={"color": "k", "alpha": 0.5}, grid=True)
    # isrc=0
    src = survey2D.srcList[isrc]
    plt.plot(src.loc[0][0], src.loc[0][1], 'bo')
    plt.plot(src.loc[1][0], src.loc[1][1], 'ro')
    locsM = src.rxList[0].locs[0]
    locsN = src.rxList[0].locs[1]
    plt.plot(locsM[:, 0], locsM[:, 1], 'ko')
    plt.plot(locsN[:, 0], locsN[:, 1], 'go')
    plt.gca().set_aspect('equal', adjustable='box')

interact(foo, isrc=(0, nSrc-1, 1))

pred = survey2D.dpred(m0)

# data_anal = []
# nSrc = len(survey.srcList)
# for isrc in range(nSrc):
#     src = survey.srcList[isrc]
#     locA = src.loc[0]
#     locB = src.loc[1]
#     locsM = src.rxList[0].locs[0]
#     locsN = src.rxList[0].locs[1]
#     rxloc = [locsM, locsN]
#     a = EM.Analytics.DCAnalyticHalf(locA, rxloc, 1e-3, earth_type="halfspace")
#     b = EM.Analytics.DCAnalyticHalf(locB, rxloc, 1e-3, earth_type="halfspace")
#     data_anal.append(a-b)
# data_anal = np.hstack(data_anal)

survey.dobs = pred
fig, ax = plt.subplots(1, 1, figsize=(10, 2))
dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(survey, ax, dtype='appr', sameratio=False,
                                                     scale="linear", clim=(0, 200))

out = hist(np.log10(abs(dobsDC)), bins=100)

weight = 1./abs(mesh.gridCC[:, 1])**1.5
mesh.plotImage(np.log10(weight))

survey2D.dobs = dobsDC
survey2D.eps = 10**(-2.3)
survey2D.std = 0.02

dmisfit = DataMisfit.l2_DataMisfit(survey2D)
regmap = Maps.IdentityMap(nP=int(mesh.nC))
reg = Regularization.Simple(mesh, mapping=regmap, cell_weights=weight)
opt = Optimization.InexactGaussNewton(maxIter=5)
invProb = InvProblem.BaseInvProblem(dmisfit, reg, opt)

# Create an inversion object
beta = Directives.BetaSchedule(coolingFactor=5, coolingRate=2)
betaest = Directives.BetaEstimate_ByEig(beta0_ratio=1e0)
inv = Inversion.BaseInversion(invProb, directiveList=[beta, betaest])
problem.counter = opt.counter = Utils.Counter()
opt.LSshorten = 0.5
opt.remember('xc')
mopt = inv.run(m0)

xc = opt.recall("xc")

fig, ax = plt.subplots(1, 1, figsize=(10, 1.5))
sigma = mapping*mopt
dat = mesh.plotImage(1./sigma, clim=(10, 150), grid=False, ax=ax, pcolorOpts={"cmap": "jet"})
ax.set_ylim(-50, 0)
ax.set_xlim(-10, 290)

print np.log10(sigma).min(), np.log10(sigma).max()

survey.dobs = invProb.dpred
fig, ax = plt.subplots(1, 1, figsize=(10, 2))
dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(survey, ax, dtype='appr', sameratio=False, clim=(40, 170))

survey.dobs = dobsDC
fig, ax = plt.subplots(1, 1, figsize=(10, 2))
dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(survey, ax, dtype='appr', sameratio=False, clim=(40, 170))

survey.dobs = abs(dmisfit.Wd*(dobsDC-invProb.dpred))
fig, ax = plt.subplots(1, 1, figsize=(10, 2))
dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(survey, ax, dtype='volt', sameratio=False, clim=(0, 2))

# sigma = np.ones(mesh.nC)
modelname = "sigma0101.npy"
np.save(modelname, sigma)
_____no_output_____
MIT
notebook/DCinversion.ipynb
sgkang/DamGeophysics
FloPy Using FloPy to simplify the use of the MT3DMS ```SSM``` package. A multi-component transport demonstration
import os import sys import numpy as np # run installed version of flopy or add local path try: import flopy except: fpth = os.path.abspath(os.path.join('..', '..')) sys.path.append(fpth) import flopy print(sys.version) print('numpy version: {}'.format(np.__version__)) print('flopy version: {}'.format(flopy.__version__))
3.8.10 (default, May 19 2021, 11:01:55) [Clang 10.0.0 ] numpy version: 1.19.2 flopy version: 3.3.4
CC0-1.0
examples/Notebooks/flopy3_multi-component_SSM.ipynb
jdlarsen-UA/flopy
First, we will create a simple model structure
nlay, nrow, ncol = 10, 10, 10 perlen = np.zeros((10), dtype=float) + 10 nper = len(perlen) ibound = np.ones((nlay,nrow,ncol), dtype=int) botm = np.arange(-1,-11,-1) top = 0.
_____no_output_____
CC0-1.0
examples/Notebooks/flopy3_multi-component_SSM.ipynb
jdlarsen-UA/flopy
Create the ```MODFLOW``` packages
model_ws = 'data' modelname = 'ssmex' mf = flopy.modflow.Modflow(modelname, model_ws=model_ws) dis = flopy.modflow.ModflowDis(mf, nlay=nlay, nrow=nrow, ncol=ncol, perlen=perlen, nper=nper, botm=botm, top=top, steady=False) bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=top) lpf = flopy.modflow.ModflowLpf(mf, hk=100, vka=100, ss=0.00001, sy=0.1) oc = flopy.modflow.ModflowOc(mf) pcg = flopy.modflow.ModflowPcg(mf) rch = flopy.modflow.ModflowRch(mf)
_____no_output_____
CC0-1.0
examples/Notebooks/flopy3_multi-component_SSM.ipynb
jdlarsen-UA/flopy
We'll track the cell locations for the ```SSM``` data using the ```MODFLOW``` boundary conditions. Get a dictionary (```dict```) that has the ```SSM``` ```itype``` for each of the boundary types.
itype = flopy.mt3d.Mt3dSsm.itype_dict() print(itype) print(flopy.mt3d.Mt3dSsm.get_default_dtype()) ssm_data = {}
{'CHD': 1, 'BAS6': 1, 'PBC': 1, 'WEL': 2, 'DRN': 3, 'RIV': 4, 'GHB': 5, 'MAS': 15, 'CC': -1} [('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('css', '<f4'), ('itype', '<i8')]
CC0-1.0
examples/Notebooks/flopy3_multi-component_SSM.ipynb
jdlarsen-UA/flopy
Add a general head boundary (```ghb```). The general head boundary head (```bhead```) is 0.1 for the first 5 stress periods, with a component 1 (comp_1) concentration of 1.0 and a component 2 (comp_2) concentration of 100.0. Then ```bhead``` is increased to 0.25, the comp_1 concentration is reduced to 0.5, and the comp_2 concentration is increased to 200.0.
ghb_data = {} print(flopy.modflow.ModflowGhb.get_default_dtype()) ghb_data[0] = [(4, 4, 4, 0.1, 1.5)] ssm_data[0] = [(4, 4, 4, 1.0, itype['GHB'], 1.0, 100.0)] ghb_data[5] = [(4, 4, 4, 0.25, 1.5)] ssm_data[5] = [(4, 4, 4, 0.5, itype['GHB'], 0.5, 200.0)] for k in range(nlay): for i in range(nrow): ghb_data[0].append((k, i, 0, 0.0, 100.0)) ssm_data[0].append((k, i, 0, 0.0, itype['GHB'], 0.0, 0.0)) ghb_data[5] = [(4, 4, 4, 0.25, 1.5)] ssm_data[5] = [(4, 4, 4, 0.5, itype['GHB'], 0.5, 200.0)] for k in range(nlay): for i in range(nrow): ghb_data[5].append((k, i, 0, -0.5, 100.0)) ssm_data[5].append((k, i, 0, 0.0, itype['GHB'], 0.0, 0.0))
[('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('bhead', '<f4'), ('cond', '<f4')]
CC0-1.0
examples/Notebooks/flopy3_multi-component_SSM.ipynb
jdlarsen-UA/flopy
Add an injection ```well```. The injection rate (```flux```) is 10.0 with a comp_1 concentration of 10.0 and a comp_2 concentration of 0.0 for all stress periods. WARNING: since we changed the ```SSM``` data in stress period 6, we need to add the well to the ssm_data for stress period 6.
wel_data = {} print(flopy.modflow.ModflowWel.get_default_dtype()) wel_data[0] = [(0, 4, 8, 10.0)] ssm_data[0].append((0, 4, 8, 10.0, itype['WEL'], 10.0, 0.0)) ssm_data[5].append((0, 4, 8, 10.0, itype['WEL'], 10.0, 0.0))
[('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('flux', '<f4')]
CC0-1.0
examples/Notebooks/flopy3_multi-component_SSM.ipynb
jdlarsen-UA/flopy
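A quick sanity check (my addition, not a cell from the original notebook): only two stress periods carry explicit ```SSM``` entries at this point, so it can be useful to confirm which periods were populated and how many records each holds before the packages are built.
```python
# Hypothetical check, assuming the ssm_data dictionary built above.
for kper in sorted(ssm_data.keys()):
    print("stress period {}: {} SSM records".format(kper, len(ssm_data[kper])))
```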
Add the ```GHB``` and ```WEL``` packages to the ```mf``` ```MODFLOW``` object instance.
ghb = flopy.modflow.ModflowGhb(mf, stress_period_data=ghb_data) wel = flopy.modflow.ModflowWel(mf, stress_period_data=wel_data)
_____no_output_____
CC0-1.0
examples/Notebooks/flopy3_multi-component_SSM.ipynb
jdlarsen-UA/flopy
Create the ```MT3DMS``` packages
mt = flopy.mt3d.Mt3dms(modflowmodel=mf, modelname=modelname, model_ws=model_ws) btn = flopy.mt3d.Mt3dBtn(mt, sconc=0, ncomp=2, sconc2=50.0) adv = flopy.mt3d.Mt3dAdv(mt) ssm = flopy.mt3d.Mt3dSsm(mt, stress_period_data=ssm_data) gcg = flopy.mt3d.Mt3dGcg(mt)
found 'rch' in modflow model, resetting crch to 0.0 SSM: setting crch for component 2 to zero. kwarg name crch2
CC0-1.0
examples/Notebooks/flopy3_multi-component_SSM.ipynb
jdlarsen-UA/flopy
Let's verify that ```stress_period_data``` has the right ```dtype```
print(ssm.stress_period_data.dtype)
[('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('css', '<f4'), ('itype', '<i8'), ('cssm(01)', '<f4'), ('cssm(02)', '<f4')]
CC0-1.0
examples/Notebooks/flopy3_multi-component_SSM.ipynb
jdlarsen-UA/flopy
Create the ```SEAWAT``` packages
swt = flopy.seawat.Seawat(modflowmodel=mf, mt3dmodel=mt, modelname=modelname, namefile_ext='nam_swt', model_ws=model_ws) vdf = flopy.seawat.SeawatVdf(swt, mtdnconc=0, iwtable=0, indense=-1) mf.write_input() mt.write_input() swt.write_input()
_____no_output_____
CC0-1.0
examples/Notebooks/flopy3_multi-component_SSM.ipynb
jdlarsen-UA/flopy
And finally, modify the ```vdf``` package to fix ```indense```.
fname = modelname + '.vdf' f = open(os.path.join(model_ws, fname),'r') lines = f.readlines() f.close() f = open(os.path.join(model_ws, fname),'w') for line in lines: f.write(line) for kper in range(nper): f.write("-1\n") f.close()
_____no_output_____
CC0-1.0
examples/Notebooks/flopy3_multi-component_SSM.ipynb
jdlarsen-UA/flopy
Clean the Project Directory
import glob import os from pathlib import Path import shutil exec(Path('startup.py').read_text()) DEBUG=False VERBOSE=True def clean(d='../', pats=['.ipynb*','__pycache__']): """ Clean the working directory or a directory given by d. """ if DEBUG: print("debugging clean") if VERBOSE: print("running `clean` in `VERBOSE` mode") for p in pats: F = [Path(f) for f in Path(d).rglob(p)] if VERBOSE: print(f"files matching '{p}':") print(F) for f in F: if VERBOSE: print(f"removing {f}") if f.is_dir(): shutil.rmtree(f) else: f.unlink() clean()
running `clean` in `VERBOSE` mode files matching '.ipynb*': [WindowsPath('../etc/.ipynb_checkpoints'), WindowsPath('../gcv/.ipynb_checkpoints'), WindowsPath('../notes/.ipynb_checkpoints')] removing ..\etc\.ipynb_checkpoints removing ..\gcv\.ipynb_checkpoints removing ..\notes\.ipynb_checkpoints files matching '__pycache__': [WindowsPath('../gcv/__pycache__')] removing ..\gcv\__pycache__
MIT
gcv/notes/clean.ipynb
fuzzyklein/gcv-lab
Introduction. In a prior notebook, documents were partitioned by assigning them to the domain with the highest Dice similarity of their term and structure occurrences. The occurrences of terms and structures in each domain are what we refer to as the domain "archetype." Here, we'll assess whether the observed similarity between documents and the archetype is greater than expected by chance. This would indicate that information in the framework generalizes well to individual documents. Load the data
import os import pandas as pd import numpy as np import sys sys.path.append("..") import utilities from ontology import ontology from style import style version = 190325 # Document-term matrix version clf = "lr" # Classifier used to generate the framework suffix = "_" + clf # Suffix for term lists n_iter = 1000 # Iterations for null distribution circuit_counts = range(2, 51) # Range of k values
_____no_output_____
MIT
modularity/mod_kvals_lr.ipynb
ehbeam/neuro-knowledge-engine
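As a small illustration of the distance metric used throughout this notebook (a sketch of my own with made-up vectors, not data from the analysis), the Dice distance between two binary occurrence vectors can be computed directly with SciPy; the corresponding similarity is one minus that distance.
```python
import numpy as np
from scipy.spatial.distance import dice

doc = np.array([1, 0, 1, 1, 0], dtype=bool)        # hypothetical document occurrences
archetype = np.array([1, 1, 1, 0, 0], dtype=bool)  # hypothetical domain archetype
print("Dice distance:  ", dice(doc, archetype))
print("Dice similarity:", 1 - dice(doc, archetype))
```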
Brain activation coordinates
act_bin = utilities.load_coordinates() print("Document N={}, Structure N={}".format( act_bin.shape[0], act_bin.shape[1]))
Document N=18155, Structure N=118
MIT
modularity/mod_kvals_lr.ipynb
ehbeam/neuro-knowledge-engine
Document-term matrix
dtm_bin = utilities.load_doc_term_matrix(version=version, binarize=True) print("Document N={}, Term N={}".format( dtm_bin.shape[0], dtm_bin.shape[1]))
Document N=18155, Term N=4107
MIT
modularity/mod_kvals_lr.ipynb
ehbeam/neuro-knowledge-engine
Document splits
splits = {} # splits["train"] = [int(pmid.strip()) for pmid in open("../data/splits/train.txt")] splits["validation"] = [int(pmid.strip()) for pmid in open("../data/splits/validation.txt")] splits["test"] = [int(pmid.strip()) for pmid in open("../data/splits/test.txt")] for split, split_pmids in splits.items(): print("{:12s} N={}".format(split.title(), len(split_pmids))) pmids = dtm_bin.index.intersection(act_bin.index)
_____no_output_____
MIT
modularity/mod_kvals_lr.ipynb
ehbeam/neuro-knowledge-engine
Document assignments and distances. Indexing by min:max will be faster in subsequent computations
from collections import OrderedDict from scipy.spatial.distance import cdist def load_doc2dom(k, clf="lr"): doc2dom_df = pd.read_csv("../partition/data/doc2dom_k{:02d}_{}.csv".format(k, clf), header=None, index_col=0) doc2dom = {int(pmid): str(dom.values[0]) for pmid, dom in doc2dom_df.iterrows()} return doc2dom def load_dom2docs(k, domains, splits, clf="lr"): doc2dom = load_doc2dom(k, clf=clf) dom2docs = {dom: {split: [] for split, _ in splits.items()} for dom in domains} for doc, dom in doc2dom.items(): for split, split_pmids in splits.items(): if doc in splits[split]: dom2docs[dom][split].append(doc) return dom2docs sorted_pmids, doc_dists, dom_idx = {}, {}, {} for k in circuit_counts: print("Processing k={:02d}".format(k)) sorted_pmids[k], doc_dists[k], dom_idx[k] = {}, {}, {} for split, split_pmids in splits.items(): lists, circuits = ontology.load_ontology(k, path="../ontology/", suffix=suffix) words = sorted(list(set(lists["TOKEN"]))) structures = sorted(list(set(act_bin.columns))) domains = list(OrderedDict.fromkeys(lists["DOMAIN"])) dtm_words = dtm_bin.loc[pmids, words] act_structs = act_bin.loc[pmids, structures] docs = dtm_words.copy() docs[structures] = act_structs.copy() doc2dom = load_doc2dom(k, clf=clf) dom2docs = load_dom2docs(k, domains, splits, clf=clf) ids = [] for dom in domains: ids += [pmid for pmid, sys in doc2dom.items() if sys == dom and pmid in split_pmids] sorted_pmids[k][split] = ids doc_dists[k][split] = pd.DataFrame(cdist(docs.loc[ids], docs.loc[ids], metric="dice"), index=ids, columns=ids) dom_idx[k][split] = {} for dom in domains: dom_idx[k][split][dom] = {} dom_pmids = dom2docs[dom][split] if len(dom_pmids) > 0: dom_idx[k][split][dom]["min"] = sorted_pmids[k][split].index(dom_pmids[0]) dom_idx[k][split][dom]["max"] = sorted_pmids[k][split].index(dom_pmids[-1]) + 1 else: dom_idx[k][split][dom]["min"] = 0 dom_idx[k][split][dom]["max"] = 0
Processing k=02 Processing k=03 Processing k=04 Processing k=05 Processing k=06 Processing k=07 Processing k=08 Processing k=09 Processing k=10 Processing k=11 Processing k=12 Processing k=13 Processing k=14 Processing k=15 Processing k=16 Processing k=17 Processing k=18 Processing k=19 Processing k=20 Processing k=21 Processing k=22 Processing k=23 Processing k=24 Processing k=25 Processing k=26 Processing k=27 Processing k=28 Processing k=29 Processing k=30 Processing k=31 Processing k=32 Processing k=33 Processing k=34 Processing k=35 Processing k=36 Processing k=37 Processing k=38 Processing k=39 Processing k=40 Processing k=41 Processing k=42 Processing k=43 Processing k=44 Processing k=45 Processing k=46 Processing k=47 Processing k=48 Processing k=49 Processing k=50
MIT
modularity/mod_kvals_lr.ipynb
ehbeam/neuro-knowledge-engine
Index by PMID and sort by structure
structures = sorted(list(set(act_bin.columns))) act_structs = act_bin.loc[pmids, structures]
_____no_output_____
MIT
modularity/mod_kvals_lr.ipynb
ehbeam/neuro-knowledge-engine
Compute domain modularity. Observed values: distances internal and external to articles in each domain
dists_int, dists_ext = {}, {} for k in circuit_counts: dists_int[k], dists_ext[k] = {}, {} lists, circuits = ontology.load_ontology(k, path="../ontology/", suffix=suffix) domains = list(OrderedDict.fromkeys(lists["DOMAIN"])) for split, split_pmids in splits.items(): dists_int[k][split], dists_ext[k][split] = {}, {} for dom in domains: dom_min, dom_max = dom_idx[k][split][dom]["min"], dom_idx[k][split][dom]["max"] dom_dists = doc_dists[k][split].values[:,dom_min:dom_max][dom_min:dom_max,:] dists_int[k][split][dom] = dom_dists other_dists_lower = doc_dists[k][split].values[:,dom_min:dom_max][:dom_min,:] other_dists_upper = doc_dists[k][split].values[:,dom_min:dom_max][dom_max:,:] other_dists = np.concatenate((other_dists_lower, other_dists_upper)) dists_ext[k][split][dom] = other_dists
_____no_output_____
MIT
modularity/mod_kvals_lr.ipynb
ehbeam/neuro-knowledge-engine
Domain-averaged ratio of external to internal distances
means = {split: np.empty((len(circuit_counts),)) for split in splits.keys()} for k_i, k in enumerate(circuit_counts): file_obs = "data/kvals/mod_obs_k{:02d}_{}_{}.csv".format(k, clf, split) if not os.path.isfile(file_obs): print("Processing k={:02d}".format(k)) lists, circuits = ontology.load_ontology(k, path="../ontology/", suffix=suffix) domains = list(OrderedDict.fromkeys(lists["DOMAIN"])) dom2docs = load_dom2docs(k, domains, splits, clf=clf) pmid_list, split_list, dom_list, obs_list = [], [], [], [] for split, split_pmids in splits.items(): for dom in domains: n_dom_docs = dists_int[k][split][dom].shape[0] if n_dom_docs > 0: mean_dist_int = np.nanmean(dists_int[k][split][dom], axis=0) mean_dist_ext = np.nanmean(dists_ext[k][split][dom], axis=0) ratio = mean_dist_ext / mean_dist_int ratio[ratio == np.inf] = np.nan pmid_list += dom2docs[dom][split] dom_list += [dom] * len(ratio) split_list += [split] * len(ratio) obs_list += list(ratio) df_obs = pd.DataFrame({"PMID": pmid_list, "SPLIT": split_list, "DOMAIN": dom_list, "OBSERVED": obs_list}) df_obs.to_csv(file_obs, index=None) else: df_obs = pd.read_csv(file_obs) for split, split_pmids in splits.items(): dom_means = [] for dom in set(df_obs["DOMAIN"]): dom_vals = df_obs.loc[(df_obs["SPLIT"] == split) & (df_obs["DOMAIN"] == dom), "OBSERVED"] dom_means.append(np.nanmean(dom_vals)) means[split][k_i] = np.nanmean(dom_means)
Processing k=02 Processing k=03 Processing k=04 Processing k=05 Processing k=06 Processing k=07 Processing k=08 Processing k=09 Processing k=10 Processing k=11 Processing k=12 Processing k=13 Processing k=14
MIT
modularity/mod_kvals_lr.ipynb
ehbeam/neuro-knowledge-engine
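In equation form (my restatement of the statistic computed above, not notation taken from the original notebook): for each document $i$ assigned to domain $d$, let $\bar{D}_i^{ext}$ and $\bar{D}_i^{int}$ be its mean Dice distance to documents outside and inside the domain, respectively. The per-document ratio and the domain-averaged modularity at a given $k$ are then

$$r_i = \frac{\bar{D}_i^{ext}}{\bar{D}_i^{int}}, \qquad \mathrm{mod}(k) = \frac{1}{|\mathcal{D}|} \sum_{d \in \mathcal{D}} \frac{1}{n_d} \sum_{i \in d} r_i,$$

where $\mathcal{D}$ is the set of domains and $n_d$ is the number of documents assigned to domain $d$. Values above 1 indicate that documents sit closer to their own domain than to the rest of the corpus.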
Null distributions
nulls = {split: np.empty((len(circuit_counts),n_iter)) for split in splits.keys()} for split, split_pmids in splits.items(): for k_i, k in enumerate(circuit_counts): file_null = "data/kvals/mod_null_k{:02d}_{}_{}iter.csv".format(k, split, n_iter) if not os.path.isfile(file_null): print("Processing k={:02d}".format(k)) lists, circuits = ontology.load_ontology(k, path="../ontology/", suffix=suffix) domains = list(OrderedDict.fromkeys(lists["DOMAIN"])) n_docs = len(split_pmids) df_null = np.empty((len(domains), n_iter)) for i, dom in enumerate(domains): n_dom_docs = dists_int[k][split][dom].shape[0] if n_dom_docs > 0: dist_int_ext = np.concatenate((dists_int[k][split][dom], dists_ext[k][split][dom])) for n in range(n_iter): null = np.random.choice(range(n_docs), size=n_docs, replace=False) dist_int_ext_null = dist_int_ext[null,:] mean_dist_int = np.nanmean(dist_int_ext_null[:n_dom_docs,:], axis=0) mean_dist_ext = np.nanmean(dist_int_ext_null[n_dom_docs:,:], axis=0) ratio = mean_dist_ext / mean_dist_int ratio[ratio == np.inf] = np.nan df_null[i,n] = np.nanmean(ratio) else: df_null[i,:] = np.nan df_null = pd.DataFrame(df_null, index=domains, columns=range(n_iter)) df_null.to_csv(file_null) else: df_null = pd.read_csv(file_null, index_col=0, header=0) nulls[split][k_i,:] = np.nanmean(df_null, axis=0)
_____no_output_____
MIT
modularity/mod_kvals_lr.ipynb
ehbeam/neuro-knowledge-engine
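One way to use these null distributions (a sketch of my own, not a cell from the original notebook) is to compute an empirical p-value for the observed modularity at a given k, i.e., the fraction of permuted values at least as large as the observed one.
```python
import numpy as np

k_i = 4                       # hypothetical index into circuit_counts (k = 6)
obs = means["test"][k_i]      # observed domain-averaged modularity
null = nulls["test"][k_i, :]  # permutation null at the same k
p_val = (np.sum(null >= obs) + 1) / (len(null) + 1)
print("empirical p-value:", p_val)
```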
Bootstrap distributions
boots = {split: np.empty((len(circuit_counts),n_iter)) for split in splits.keys()} for split, split_pmids in splits.items(): for k_i, k in enumerate(circuit_counts): file_boot = "data/kvals/mod_boot_k{:02d}_{}_{}iter.csv".format(k, split, n_iter) if not os.path.isfile(file_boot): print("Processing k={:02d}".format(k)) lists, circuits = ontology.load_ontology(k, path="../ontology/", suffix=suffix) domains = list(OrderedDict.fromkeys(lists["DOMAIN"])) df_boot = np.empty((len(domains), n_iter)) for i, dom in enumerate(domains): n_dom_docs = dists_int[k][split][dom].shape[0] if n_dom_docs > 0: for n in range(n_iter): boot = np.random.choice(range(n_dom_docs), size=n_dom_docs, replace=True) mean_dist_int = np.nanmean(dists_int[k][split][dom][:,boot], axis=0) mean_dist_ext = np.nanmean(dists_ext[k][split][dom][:,boot], axis=0) ratio = mean_dist_ext / mean_dist_int ratio[ratio == np.inf] = np.nan df_boot[i,n] = np.nanmean(ratio) else: df_boot[i,:] = np.nan df_boot = pd.DataFrame(df_boot, index=domains, columns=range(n_iter)) df_boot.to_csv(file_boot) else: df_boot = pd.read_csv(file_boot, index_col=0, header=0) boots[split][k_i,:] = np.nanmean(df_boot, axis=0)
_____no_output_____
MIT
modularity/mod_kvals_lr.ipynb
ehbeam/neuro-knowledge-engine
Plot results over k
from matplotlib import rcParams %matplotlib inline rcParams["axes.linewidth"] = 1.5 for split in splits.keys(): print(split.upper()) utilities.plot_stats_by_k(means, nulls, boots, circuit_counts, metric="mod", split=split, op_k=6, clf=clf, interval=0.999, ylim=[0.8,1.4], yticks=[0.8, 0.9,1,1.1,1.2,1.3,1.4])
VALIDATION
MIT
modularity/mod_kvals_lr.ipynb
ehbeam/neuro-knowledge-engine
create a support vector classifier and manually set the gamma
from sklearn import svm, metrics clf = svm.SVC(gamma=0.001, C=100.)
_____no_output_____
MIT
Hello, scikit-learn World!.ipynb
InterruptSpeed/mnist-svc
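Rather than hard-coding gamma, it could also be tuned; the sketch below (my addition, not part of the original notebook) uses cross-validated grid search over a few candidate values of gamma and C on the same digits data.
```python
from sklearn import datasets, svm
from sklearn.model_selection import GridSearchCV

digits = datasets.load_digits()
param_grid = {'gamma': [1e-4, 1e-3, 1e-2], 'C': [1, 10, 100]}
search = GridSearchCV(svm.SVC(), param_grid, cv=5)
search.fit(digits.data, digits.target)
print(search.best_params_, search.best_score_)
```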
fit the classifier to the data, using all the images in our dataset except the last one
clf.fit(digits.data[:-1], digits.target[:-1]) svm.SVC(C=100.0, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape='ovr', degree=3, gamma=0.001, kernel='rbf', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False) clf.predict(digits.data[-1:])
_____no_output_____
MIT
Hello, scikit-learn World!.ipynb
InterruptSpeed/mnist-svc
reshape the image data into an 8x8 array prior to rendering it
import matplotlib.pyplot as plt plt.imshow(digits.data[-1:].reshape(8,8), cmap=plt.cm.gray_r) plt.show()
_____no_output_____
MIT
Hello, scikit-learn World!.ipynb
InterruptSpeed/mnist-svc
persist the model using pickle and load it again to ensure it works
import pickle s = pickle.dumps(clf) with open(b"digits.model.obj", "wb") as f: pickle.dump(clf, f) clf2 = pickle.loads(s) clf2.predict(digits.data[0:1]) plt.imshow(digits.data[0:1].reshape(8,8), cmap=plt.cm.gray_r) plt.show()
_____no_output_____
MIT
Hello, scikit-learn World!.ipynb
InterruptSpeed/mnist-svc
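To complete the round trip (a small sketch of my own, not in the original notebook), the classifier written to digits.model.obj above can be read back from disk with pickle.load and used exactly like the original object.
```python
import pickle

with open("digits.model.obj", "rb") as f:  # file written by the cell above
    clf_from_disk = pickle.load(f)
print(clf_from_disk.predict(digits.data[-1:]))  # assumes the `digits` dataset loaded earlier in this notebook
```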
alternatively, use joblib.dump
from sklearn.externals import joblib joblib.dump(clf, 'digits.model.pkl')
_____no_output_____
MIT
Hello, scikit-learn World!.ipynb
InterruptSpeed/mnist-svc
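And the joblib counterpart (again a sketch of my own, not in the original notebook): the file written by joblib.dump can be restored with joblib.load. Note that sklearn.externals.joblib is deprecated in newer scikit-learn releases, where the standalone joblib package is used instead.
```python
from sklearn.externals import joblib  # in newer scikit-learn: `import joblib`

clf_loaded = joblib.load('digits.model.pkl')
print(clf_loaded.predict(digits.data[-1:]))  # assumes `digits` from earlier in the notebook
```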
example from http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html#sphx-glr-auto-examples-classification-plot-digits-classification-py
images_and_labels = list(zip(digits.images, digits.target)) for index, (image, label) in enumerate(images_and_labels[:4]): plt.subplot(2, 4, index + 1) plt.axis('off') plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest') plt.title('Training: %i' % label) # To apply a classifier on this data, we need to flatten the image, to # turn the data in a (samples, feature) matrix: n_samples = len(digits.images) data = digits.images.reshape((n_samples, -1)) # Create a classifier: a support vector classifier classifier = svm.SVC(gamma=0.001) # We learn the digits on the first half of the digits classifier.fit(data[:n_samples // 2], digits.target[:n_samples // 2]) # Now predict the value of the digit on the second half: expected = digits.target[n_samples // 2:] predicted = classifier.predict(data[n_samples // 2:]) print("Classification report for classifier %s:\n%s\n" % (classifier, metrics.classification_report(expected, predicted))) print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted)) images_and_predictions = list(zip(digits.images[n_samples // 2:], predicted)) for index, (image, prediction) in enumerate(images_and_predictions[:4]): plt.subplot(2, 4, index + 5) plt.axis('off') plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest') plt.title('Prediction: %i' % prediction) plt.show()
Classification report for classifier SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape='ovr', degree=3, gamma=0.001, kernel='rbf', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False): precision recall f1-score support 0 1.00 0.99 0.99 88 1 0.99 0.97 0.98 91 2 0.99 0.99 0.99 86 3 0.98 0.87 0.92 91 4 0.99 0.96 0.97 92 5 0.95 0.97 0.96 91 6 0.99 0.99 0.99 91 7 0.96 0.99 0.97 89 8 0.94 1.00 0.97 88 9 0.93 0.98 0.95 92 avg / total 0.97 0.97 0.97 899 Confusion matrix: [[87 0 0 0 1 0 0 0 0 0] [ 0 88 1 0 0 0 0 0 1 1] [ 0 0 85 1 0 0 0 0 0 0] [ 0 0 0 79 0 3 0 4 5 0] [ 0 0 0 0 88 0 0 0 0 4] [ 0 0 0 0 0 88 1 0 0 2] [ 0 1 0 0 0 0 90 0 0 0] [ 0 0 0 0 0 1 0 88 0 0] [ 0 0 0 0 0 0 0 0 88 0] [ 0 0 0 1 0 1 0 0 0 90]]
MIT
Hello, scikit-learn World!.ipynb
InterruptSpeed/mnist-svc
Residual NetworksWelcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by [He et al.](https://arxiv.org/pdf/1512.03385.pdf), allow you to train much deeper networks than were previously practically feasible.**In this assignment, you will:**- Implement the basic building blocks of ResNets. - Put together these building blocks to implement and train a state-of-the-art neural network for image classification. Updates If you were working on the notebook before this update...* The current notebook is version "2a".* You can find your original work saved in the notebook with the previous version name ("v2") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* For testing on an image, replaced `preprocess_input(x)` with `x=x/255.0` to normalize the input image in the same way that the model's training data was normalized.* Refers to "shallower" layers as those layers closer to the input, and "deeper" layers as those closer to the output (Using "shallower" layers instead of "lower" or "earlier").* Added/updated instructions. This assignment will be done in Keras. Before jumping into the problem, let's run the cell below to load the required packages.
import numpy as np from keras import layers from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D from keras.models import Model, load_model from keras.preprocessing import image from keras.utils import layer_utils from keras.utils.data_utils import get_file from keras.applications.imagenet_utils import preprocess_input import pydot from IPython.display import SVG from keras.utils.vis_utils import model_to_dot from keras.utils import plot_model from resnets_utils import * from keras.initializers import glorot_uniform import scipy.misc from matplotlib.pyplot import imshow %matplotlib inline import keras.backend as K K.set_image_data_format('channels_last') K.set_learning_phase(1)
_____no_output_____
MIT
4. Convolutional Neural Networks/Residual Networks v2a.ipynb
MohamedAskar/Deep-Learning-Specialization
1 - The problem of very deep neural networksLast week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.* The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the shallower layers, closer to the input) to very complex features (at the deeper layers, closer to the output). * However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent prohibitively slow. * More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and "explode" to take very large values). * During training, you might therefore see the magnitude (or norm) of the gradient for the shallower layers decrease to zero very rapidly as training proceeds: **Figure 1** : **Vanishing gradient** The speed of learning decreases very rapidly for the shallower layers as the network trains You are now going to solve this problem by building a Residual Network! 2 - Building a Residual NetworkIn ResNets, a "shortcut" or a "skip connection" allows the model to skip layers: **Figure 2** : A ResNet block showing a **skip-connection** The image on the left shows the "main path" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network. We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. (There is also some evidence that the ease of learning an identity function accounts for ResNets' remarkable performance even more so than skip connections helping with vanishing gradients).Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them: the "identity block" and the "convolutional block." 2.1 - The identity blockThe identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps: **Figure 3** : **Identity block.** Skip connection "skips over" 2 layers. The upper path is the "shortcut path." The lower path is the "main path." In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras! In this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection "skips over" 3 hidden layers rather than 2 layers. It looks like this: **Figure 4** : **Identity block.** Skip connection "skips over" 3 layers. 
Here are the individual steps.First component of main path: - The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the seed for the random initialization. - The first BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2a'`.- Then apply the ReLU activation function. This has no name and no hyperparameters. Second component of main path:- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`. Use 0 as the seed for the random initialization. - The second BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2b'`.- Then apply the ReLU activation function. This has no name and no hyperparameters. Third component of main path:- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`. Use 0 as the seed for the random initialization. - The third BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2c'`. - Note that there is **no** ReLU activation function in this component. Final step: - The `X_shortcut` and the output from the 3rd layer `X` are added together.- **Hint**: The syntax will look something like `Add()([var1,var2])`- Then apply the ReLU activation function. This has no name and no hyperparameters. **Exercise**: Implement the ResNet identity block. We have implemented the first component of the main path. Please read this carefully to make sure you understand what it is doing. You should implement the rest. - To implement the Conv2D step: [Conv2D](https://keras.io/layers/convolutional/conv2d)- To implement BatchNorm: [BatchNormalization](https://faroit.github.io/keras-docs/1.2.2/layers/normalization/) (axis: Integer, the axis that should be normalized (typically the 'channels' axis))- For the activation, use: `Activation('relu')(X)`- To add the value passed forward by the shortcut: [Add](https://keras.io/layers/merge/add)
# GRADED FUNCTION: identity_block def identity_block(X, f, filters, stage, block): """ Implementation of the identity block as defined in Figure 4 Arguments: X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev) f -- integer, specifying the shape of the middle CONV's window for the main path filters -- python list of integers, defining the number of filters in the CONV layers of the main path stage -- integer, used to name the layers, depending on their position in the network block -- string/character, used to name the layers, depending on their position in the network Returns: X -- output of the identity block, tensor of shape (n_H, n_W, n_C) """ # defining name basis conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' # Retrieve Filters F1, F2, F3 = filters # Save the input value. You'll need this later to add back to the main path. X_shortcut = X # First component of main path X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X) X = Activation('relu')(X) ### START CODE HERE ### # Second component of main path (≈3 lines) X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X) X = Activation('relu')(X) # Third component of main path (≈2 lines) X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X) # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines) X = Add()([X, X_shortcut]) X = Activation('relu')(X) ### END CODE HERE ### return X tf.reset_default_graph() with tf.Session() as test: np.random.seed(1) A_prev = tf.placeholder("float", [3, 4, 4, 6]) X = np.random.randn(3, 4, 4, 6) A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a') test.run(tf.global_variables_initializer()) out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0}) print("out = " + str(out[0][1][1][0]))
out = [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]
MIT
4. Convolutional Neural Networks/Residual Networks v2a.ipynb
MohamedAskar/Deep-Learning-Specialization
**Expected Output**: **out** [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003] 2.2 - The convolutional blockThe ResNet "convolutional block" is the second block type. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path: **Figure 4** : **Convolutional block** * The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix $W_s$ discussed in lecture.) * For example, to reduce the activation dimensions's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. * The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step. The details of the convolutional block are as follows. First component of main path:- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the `glorot_uniform` seed.- The first BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2a'`.- Then apply the ReLU activation function. This has no name and no hyperparameters. Second component of main path:- The second CONV2D has $F_2$ filters of shape (f,f) and a stride of (1,1). Its padding is "same" and it's name should be `conv_name_base + '2b'`. Use 0 as the `glorot_uniform` seed.- The second BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2b'`.- Then apply the ReLU activation function. This has no name and no hyperparameters. Third component of main path:- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and it's name should be `conv_name_base + '2c'`. Use 0 as the `glorot_uniform` seed.- The third BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component. Shortcut path:- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '1'`. Use 0 as the `glorot_uniform` seed.- The BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '1'`. Final step: - The shortcut and the main path values are added together.- Then apply the ReLU activation function. This has no name and no hyperparameters. **Exercise**: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.- [Conv2D](https://keras.io/layers/convolutional/conv2d)- [BatchNormalization](https://keras.io/layers/normalization/batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))- For the activation, use: `Activation('relu')(X)`- [Add](https://keras.io/layers/merge/add)
# GRADED FUNCTION: convolutional_block def convolutional_block(X, f, filters, stage, block, s = 2): """ Implementation of the convolutional block as defined in Figure 4 Arguments: X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev) f -- integer, specifying the shape of the middle CONV's window for the main path filters -- python list of integers, defining the number of filters in the CONV layers of the main path stage -- integer, used to name the layers, depending on their position in the network block -- string/character, used to name the layers, depending on their position in the network s -- Integer, specifying the stride to be used Returns: X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C) """ # defining name basis conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' # Retrieve Filters F1, F2, F3 = filters # Save the input value X_shortcut = X ##### MAIN PATH ##### # First component of main path X = Conv2D(F1, (1, 1), strides = (s,s), padding='valid' , name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X) X = Activation('relu')(X) ### START CODE HERE ### X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X) X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X) X = Activation('relu')(X) # Third component of main path (≈2 lines) X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X) X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X) ##### SHORTCUT PATH #### (≈2 lines) X_shortcut = Conv2D(filters=F3, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut) X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut) # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines) X = Add()([X, X_shortcut]) X = Activation('relu')(X) ### END CODE HERE ### return X tf.reset_default_graph() with tf.Session() as test: np.random.seed(1) A_prev = tf.placeholder("float", [3, 4, 4, 6]) X = np.random.randn(3, 4, 4, 6) A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a') test.run(tf.global_variables_initializer()) out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0}) print("out = " + str(out[0][1][1][0]))
out = [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]
MIT
4. Convolutional Neural Networks/Residual Networks v2a.ipynb
MohamedAskar/Deep-Learning-Specialization
**Expected Output**: **out** [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603] 3 - Building your first ResNet model (50 layers)You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. "ID BLOCK" in the diagram stands for "Identity block," and "ID BLOCK x3" means you should stack 3 identity blocks together. **Figure 5** : **ResNet-50 model** The details of this ResNet-50 model are:- Zero-padding pads the input with a pad of (3,3)- Stage 1: - The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is "conv1". - BatchNorm is applied to the 'channels' axis of the input. - MaxPooling uses a (3,3) window and a (2,2) stride.- Stage 2: - The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, "s" is 1 and the block is "a". - The 2 identity blocks use three sets of filters of size [64,64,256], "f" is 3 and the blocks are "b" and "c".- Stage 3: - The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a". - The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".- Stage 4: - The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a". - The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".- Stage 5: - The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a". - The 2 identity blocks use three sets of filters of size [512, 512, 2048], "f" is 3 and the blocks are "b" and "c".- The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".- The 'flatten' layer doesn't have any hyperparameters or name.- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be `'fc' + str(classes)`.**Exercise**: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above. You'll need to use this function: - Average pooling [see reference](https://keras.io/layers/pooling/averagepooling2d)Here are some other functions we used in the code below:- Conv2D: [See reference](https://keras.io/layers/convolutional/conv2d)- BatchNorm: [See reference](https://keras.io/layers/normalization/batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))- Zero padding: [See reference](https://keras.io/layers/convolutional/zeropadding2d)- Max pooling: [See reference](https://keras.io/layers/pooling/maxpooling2d)- Fully connected layer: [See reference](https://keras.io/layers/core/dense)- Addition: [See reference](https://keras.io/layers/merge/add)
# GRADED FUNCTION: ResNet50 def ResNet50(input_shape = (64, 64, 3), classes = 6): """ Implementation of the popular ResNet50 the following architecture: CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3 -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER Arguments: input_shape -- shape of the images of the dataset classes -- integer, number of classes Returns: model -- a Model() instance in Keras """ # Define the input as a tensor with shape input_shape X_input = Input(input_shape) # Zero-Padding X = ZeroPadding2D((3, 3))(X_input) # Stage 1 X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = 'bn_conv1')(X) X = Activation('relu')(X) X = MaxPooling2D((3, 3), strides=(2, 2))(X) # Stage 2 X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1) X = identity_block(X, 3, [64, 64, 256], stage=2, block='b') X = identity_block(X, 3, [64, 64, 256], stage=2, block='c') ### START CODE HERE ### # Stage 3 (≈4 lines) X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2) X = identity_block(X, 3, [128, 128, 512], stage=3, block='b') X = identity_block(X, 3, [128, 128, 512], stage=3, block='c') X = identity_block(X, 3, [128, 128, 512], stage=3, block='d') # Stage 4 (≈6 lines) X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2) X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b') X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c') X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d') X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e') X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f') # Stage 5 (≈3 lines) X = X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2) X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b') X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c') # AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)" X = AveragePooling2D(pool_size=(2, 2))(X) ### END CODE HERE ### # output layer X = Flatten()(X) X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X) # Create model model = Model(inputs = X_input, outputs = X, name='ResNet50') return model
_____no_output_____
MIT
4. Convolutional Neural Networks/Residual Networks v2a.ipynb
MohamedAskar/Deep-Learning-Specialization
Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running `model.fit(...)` below.
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
_____no_output_____
MIT
4. Convolutional Neural Networks/Residual Networks v2a.ipynb
MohamedAskar/Deep-Learning-Specialization
As seen in the Keras Tutorial Notebook, prior to training a model, you need to configure the learning process by compiling the model.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
_____no_output_____
MIT
4. Convolutional Neural Networks/Residual Networks v2a.ipynb
MohamedAskar/Deep-Learning-Specialization
The model is now ready to be trained. The only thing you need is a dataset. Let's load the SIGNS Dataset. **Figure 6** : **SIGNS dataset**
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() # Normalize image vectors X_train = X_train_orig/255. X_test = X_test_orig/255. # Convert training and test labels to one hot matrices Y_train = convert_to_one_hot(Y_train_orig, 6).T Y_test = convert_to_one_hot(Y_test_orig, 6).T print ("number of training examples = " + str(X_train.shape[0])) print ("number of test examples = " + str(X_test.shape[0])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape))
number of training examples = 1080 number of test examples = 120 X_train shape: (1080, 64, 64, 3) Y_train shape: (1080, 6) X_test shape: (120, 64, 64, 3) Y_test shape: (120, 6)
MIT
4. Convolutional Neural Networks/Residual Networks v2a.ipynb
MohamedAskar/Deep-Learning-Specialization
Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.
model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
Epoch 1/2
MIT
4. Convolutional Neural Networks/Residual Networks v2a.ipynb
MohamedAskar/Deep-Learning-Specialization
**Expected Output**: ** Epoch 1/2** loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours. ** Epoch 2/2** loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing. Let's see how this model (trained on only two epochs) performs on the test set.
preds = model.evaluate(X_test, Y_test) print ("Loss = " + str(preds[0])) print ("Test Accuracy = " + str(preds[1]))
_____no_output_____
MIT
4. Convolutional Neural Networks/Residual Networks v2a.ipynb
MohamedAskar/Deep-Learning-Specialization
**Expected Output**: **Test Accuracy** between 0.16 and 0.25 For the purpose of this assignment, we've asked you to train the model for just two epochs. You can see that it achieves poor performance. Please go ahead and submit your assignment; to check correctness, the online grader will run your code only for a small number of epochs as well. After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get much better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU. Using a GPU, we've trained our own ResNet50 model's weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model.
model = load_model('ResNet50.h5') preds = model.evaluate(X_test, Y_test) print ("Loss = " + str(preds[0])) print ("Test Accuracy = " + str(preds[1]))
_____no_output_____
MIT
4. Convolutional Neural Networks/Residual Networks v2a.ipynb
MohamedAskar/Deep-Learning-Specialization
ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to achieve state-of-the-art accuracy. Congratulations on finishing this assignment! You've now implemented a state-of-the-art image classification system! 4 - Test on your own image (Optional/Ungraded) If you wish, you can also take a picture of your own hand and see the output of the model. To do this: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go to your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right!
img_path = 'images/my_image.jpg' img = image.load_img(img_path, target_size=(64, 64)) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = x/255.0 print('Input image shape:', x.shape) my_image = scipy.misc.imread(img_path) imshow(my_image) print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ") print(model.predict(x))
_____no_output_____
MIT
4. Convolutional Neural Networks/Residual Networks v2a.ipynb
MohamedAskar/Deep-Learning-Specialization
You can also print a summary of your model by running the following code.
model.summary()
_____no_output_____
MIT
4. Convolutional Neural Networks/Residual Networks v2a.ipynb
MohamedAskar/Deep-Learning-Specialization
Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png".
plot_model(model, to_file='model.png') SVG(model_to_dot(model).create(prog='dot', format='svg'))
_____no_output_____
MIT
4. Convolutional Neural Networks/Residual Networks v2a.ipynb
MohamedAskar/Deep-Learning-Specialization
Project 3 Sandbox-Blue-O, NLP using web scraping to create the dataset Objective: Determine if posts are in the SpaceX Subreddit or the Blue Origin Subreddit. We'll utilize the RESTful API from pushshift.io to scrape subreddit posts from r/blueorigin and r/spacex and see if we can use the Bag-of-words algorithm to predict which posts are from where. Author: Matt Paterson, [email protected]. This notebook is the SANDBOX and should be used to play around. The formal presentation will be in a different notebook.
import requests from bs4 import BeautifulSoup import pandas as pd import lebowski as dude from sklearn.feature_extraction.text import CountVectorizer import re, regex # Establish a connection to the API and search for a specific keyword. Maybe we'll add this function to the # lebowski library? Or maybe make a new and slicker Library called spaceman or something # CREDIT: code below adapted from Riley Dallas Lesson on webscraping # keyword = 'propulsion' # url_boeing = 'https://api.pushshift.io/reddit/search/comment/?q=' + keyword + '&subreddit=boeing' # res = requests.get(url_boeing) # res.status_code # instantiate a Beautiful Soup object for Boeing #boeing = BeautifulSoup(res.content, 'lxml') #boeing.find("body") spacex = dude.create_lexicon('spacex', 5000) blueorigin = dude.create_lexicon('blueorigin', 5000) spacex.head() blueorigin.head() spacex[['subreddit', 'selftext', 'title']].head() # predict the subreddit column blueorigin[['subreddit', 'selftext', 'title']].head() # predict the subreddit column print('Soux City Sarsparilla?') # silly print statement to check progress of long print spacex_comments = dude.create_lexicon('spacex', 5000, post_type='comment') spacex_comments.head() spacex_comments[['subreddit', 'body']].head() # predict the subreddit column blueorigin_comments = dude.create_lexicon('blueorigin', 5000, post_type='comment') blueorigin_comments[['subreddit', 'body']].head() # predict the subreddit column blueorigin_comments.columns
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
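To make the bag-of-words idea above concrete, here is a minimal sketch (my own addition, not part of the sandbox, and it assumes the spacex and blueorigin frames created above with their 'title' and 'subreddit' columns): vectorize the post titles with CountVectorizer and fit a simple classifier to predict the subreddit.
```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical combined frame; assumes the spacex and blueorigin DataFrames built above.
posts = pd.concat([spacex[['title', 'subreddit']], blueorigin[['title', 'subreddit']]])
X_train, X_test, y_train, y_test = train_test_split(
    posts['title'], posts['subreddit'], stratify=posts['subreddit'], random_state=42)

model = make_pipeline(CountVectorizer(stop_words='english'), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print('held-out accuracy:', model.score(X_test, y_test))
```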