Projected Returns

It's now time to check if your trading signal has the potential to become profitable!

We'll start by computing the net returns this portfolio would produce. For simplicity, we'll assume every stock gets an equal dollar amount of investment. This makes it easier to compute a portfolio's returns as the simple arithmetic average of the individual stock returns.

Implement the `portfolio_returns` function to compute the expected portfolio returns. Using `df_long` to indicate which stocks to long and `df_short` to indicate which stocks to short, calculate the returns using `lookahead_returns`. To help with the calculation, we've provided you with `n_stocks`, the number of stocks we're investing in during a single period.
def portfolio_returns(df_long, df_short, lookahead_returns, n_stocks):
    """
    Compute expected returns for the portfolio, assuming equal investment in each long/short stock.

    Parameters
    ----------
    df_long : DataFrame
        Top stocks for each ticker and date marked with a 1
    df_short : DataFrame
        Bottom stocks for each ticker and date marked with a 1
    lookahead_returns : DataFrame
        Lookahead returns for each ticker and date
    n_stocks: int
        The number of stocks chosen for each month

    Returns
    -------
    portfolio_returns : DataFrame
        Expected portfolio returns for each ticker and date
    """
    # Long positions contribute positively and short positions negatively;
    # dividing by n_stocks gives the equal-weighted average return.
    returns = (lookahead_returns * df_long - lookahead_returns * df_short) / n_stocks
    return returns

project_tests.test_portfolio_returns(portfolio_returns)
Tests Passed
Apache-2.0
P1_Trading_with_Momentum/project_notebook.ipynb
hemang-75/AI_for_Trading
View Data

Time to see how the portfolio did.
expected_portfolio_returns = portfolio_returns(df_long, df_short, lookahead_returns, 2*top_bottom_n)
project_helper.plot_returns(expected_portfolio_returns.T.sum(), 'Portfolio Returns')
_____no_output_____
Apache-2.0
P1_Trading_with_Momentum/project_notebook.ipynb
hemang-75/AI_for_Trading
Statistical Tests

Annualized Rate of Return
expected_portfolio_returns_by_date = expected_portfolio_returns.T.sum().dropna()
portfolio_ret_mean = expected_portfolio_returns_by_date.mean()
portfolio_ret_ste = expected_portfolio_returns_by_date.sem()
portfolio_ret_annual_rate = (np.exp(portfolio_ret_mean * 12) - 1) * 100

print("""
Mean:                       {:.6f}
Standard Error:             {:.6f}
Annualized Rate of Return:  {:.2f}%
""".format(portfolio_ret_mean, portfolio_ret_ste, portfolio_ret_annual_rate))
Mean:                       0.106159
Standard Error:             0.071935
Annualized Rate of Return:  257.48%
Apache-2.0
P1_Trading_with_Momentum/project_notebook.ipynb
hemang-75/AI_for_Trading
The annualized rate of return allows you to compare the rate of return from this strategy to other quoted rates of return, which are usually quoted on an annual basis.

T-Test

Our null hypothesis ($H_0$) is that the actual mean return from the signal is zero. We'll perform a one-sample, one-sided t-test on the observed mean return, to see if we can reject $H_0$.

We'll need to first compute the t-statistic, and then find its corresponding p-value. The p-value will indicate the probability of observing a t-statistic equally or more extreme than the one we observed if the null hypothesis were true. A small p-value means that the chance of observing the t-statistic we observed under the null hypothesis is small, and thus casts doubt on the null hypothesis. It's good practice to set a desired level of significance or alpha ($\alpha$) _before_ computing the p-value, and then reject the null hypothesis if $p < \alpha$.

For this project, we'll use $\alpha = 0.05$, since it's a common value to use.

Implement the `analyze_alpha` function to perform a t-test on the sample of portfolio returns. We've imported the `scipy.stats` module for you to perform the t-test.

Note: [`scipy.stats.ttest_1samp`](https://docs.scipy.org/doc/scipy-1.0.0/reference/generated/scipy.stats.ttest_1samp.html) performs a two-sided test, so divide the p-value by 2 to get the one-sided p-value.
from scipy import stats

def analyze_alpha(expected_portfolio_returns_by_date):
    """
    Perform a t-test with the null hypothesis being that the expected mean return is zero.

    Parameters
    ----------
    expected_portfolio_returns_by_date : Pandas Series
        Expected portfolio returns for each date

    Returns
    -------
    t_value
        T-statistic from t-test
    p_value
        Corresponding p-value
    """
    # ttest_1samp returns a two-sided p-value; halve it for the one-sided test.
    t_value, p_value = stats.ttest_1samp(expected_portfolio_returns_by_date, 0)
    return t_value, p_value * 0.5

project_tests.test_analyze_alpha(analyze_alpha)
Tests Passed
Apache-2.0
P1_Trading_with_Momentum/project_notebook.ipynb
hemang-75/AI_for_Trading
View Data

Let's see what values we get with our portfolio. After you run this, make sure to answer the question below.
t_value, p_value = analyze_alpha(expected_portfolio_returns_by_date)
print("""
Alpha analysis:
 t-value:        {:.3f}
 p-value:        {:.6f}
""".format(t_value, p_value))
Alpha analysis:
 t-value:        1.476
 p-value:        0.073339
Apache-2.0
P1_Trading_with_Momentum/project_notebook.ipynb
hemang-75/AI_for_Trading
4445 AARON FIGURA LONG, M/34 5:25:36
import pandas
import requests
import time
from selenium import webdriver
from bs4 import BeautifulSoup

url = "http://results2.xacte.com/#/e/2306/searchable"
response = requests.get(url)
if response.status_code == 200:
    print(response.text)

# https://www.freecodecamp.org/news/how-to-scrape-websites-with-python-and-beautifulsoup-5946935d93fe/
# https://codeburst.io/web-scraping-101-with-python-beautiful-soup-bb617be1f486
soup = BeautifulSoup(response.content, 'html.parser')
print(soup.prettify())

soup.find_all(class_="results-app")
_____no_output_____
MIT
tri-results/how-to-web-scrape.ipynb
kbridge14/how2py
4445  AARON FIGURA  LONG, M/34  MANHATTAN BEACH, CA  5:25:36  5:25:36
# aria-hidden=false when the box with info is closed; true when you open up the box.
# you'll want to set it to true when viewing all the information per individual
"""
<md-backdrop class="md-dialog-backdrop md-opaque ng-scope" style="position: fixed;" aria-hidden="true"></md-backdrop>
"""
soup.find_all(name="md-row")  #, class_="md-select")

stuff = []
for i in range(36, len(soup.findAll('a')) + 1):  # 'a' tags are for links
    one_a_tag = soup.findAll('a')[i]
    link = one_a_tag['href']
    download_url = url + link
    stuff.append(urllib.request.urlretrieve(download_url, './' + link[link.find('/turnstile_') + 1:]))
    time.sleep(1)  # pause the code for a sec
stuff

soup.find_all('head')
soup.script

# https://www.geeksforgeeks.org/implementing-web-scraping-python-beautiful-soup/
soup2 = BeautifulSoup(response.content, 'html5lib')
print(soup2.prettify())
soup.prettify() == soup2.prettify()

# https://pythonprogramming.net/introduction-scraping-parsing-beautiful-soup-tutorial/
import urllib.request
source = urllib.request.urlopen(url).read()
soup3 = BeautifulSoup(source, 'lxml')

# title of the page
print(soup3.title)
# get attributes:
print(soup3.title.name)
# get values:
print(soup3.title.string)
# beginning navigation:
print(soup3.title.parent.name)
# getting specific values:
print(soup3.p)
print(soup3.div)
print(soup3.get_text())

browser = webdriver.Chrome()
browser.get('https://google.com')

# https://sites.google.com/a/chromium.org/chromedriver/downloads for most recent version
# At the time of writing, I downloaded Version 78:
# https://chromedriver.storage.googleapis.com/index.html?path=78.0.3904.70/
# Mac: once downloaded, move the driver from Downloads to /usr/local/bin
# Windows: once downloaded, move somewhere relevant and then add to PATH
driver = webdriver.Chrome()
driver.get(url)

# get web page
driver.get(url)
# execute script to scroll down the page
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);var lenOfPage=document.body.scrollHeight;return lenOfPage;")
# sleep for 30s
time.sleep(30)
# driver.quit()

"""
<!doctype html>
<html id="ng-app" ng-app="results" class="ng-scope">
<body>
<div class="results-app ng-isolate-scope" results-app>
<div class="page" ng-show="eventconfig.schema" aria-hidden="false" style>
<md-content class="xact-contact _md">
<div ui-view="content" class="ng-scope" style>
<div class="xact-search ng-scope layout-column flex" layout="column" flex>
<md-content flex class="_md flex">
<md-table-container ng-show="!loading" aria-hidden="false" class style>
<table md-table md-progress="promise" class="md-table ng-isolate-scope">
<tbody md-body class="md-body">
<tr md-row md-select="entrant" md-select-id="name" md-auto-select ng-repeat="entrant in entrants" ng-click="showEntrantInfo(entrant)" class="md-row ng-scope ng-isolate-scope" role="button" tabindex="0" style>
<td md-cell class="md-cell ng-binding">4445</td>
<td md-cell class="md-cell">
<b class="ng-binding">AARON FIGURA</b>
<br>
<small class="ng-binding">LONG, M/34</small>
</td>
<td md-cell ng-show="show_net" class="md-cell" aria-hidden="false">
<span ng-show="entrant.chiptime" class="ng-binding" aria-hidden="false">5:25:36</span>
</td>
"""

# find elements by xpath
#results = driver.find_elements_by_xpath("//*[@id='componentsContainer']//*[contains(@id,'listingsContainer')]//*[@class='product active']//*[@class='title productTitle']")
#results = driver.find_elements_by_xpath("//[@b class='ng-binding']")
results = driver.find_elements_by_xpath("//div[contains(@class, 'xact-search ng-scope layout-column flex')]")
print('Number of results', len(results))

results = driver.find_elements_by_xpath("//div[contains(@class, 'xact-search ng-scope layout-column flex')]")
print('Number of results', len(results))

browser.quit()
driver.quit()
_____no_output_____
MIT
tri-results/how-to-web-scrape.ipynb
kbridge14/how2py
Train PhyDNet

We will predict:
- `n_in`: 5 images
- `n_out`: 5 images
- `n_obj`: up to 3 objects
Path.cwd()
DATA_PATH = Path.cwd()/'data'

ds = MovingMNIST(DATA_PATH, n_in=5, n_out=5, n_obj=[1,2], th=None)
train_tl = TfmdLists(range(120), ImageTupleTransform(ds))
valid_tl = TfmdLists(range(120), ImageTupleTransform(ds))

# i=0
# fat_tensor = torch.stack([torch.cat(train_tl[i][0], 0) for i in range(100)])
# m,s = fat_tensor.mean(), fat_tensor.std()

dls = DataLoaders.from_dsets(train_tl, valid_tl, bs=64,  #).cuda()
                             after_batch=[Normalize.from_stats(*mnist_stats)]).cuda()

mse_loss = StackLoss(MSELossFlat(axis=1))
metrics = []
_____no_output_____
Apache-2.0
04_train_phydnet.ipynb
shawnwang-tech/moving_mnist
Left: Input, Right: Target
dls.show_batch()
b = dls.one_batch()
explode_types(b)
_____no_output_____
Apache-2.0
04_train_phydnet.ipynb
shawnwang-tech/moving_mnist
PhyDNet
phycell = PhyCell(input_shape=(16,16), input_dim=64, F_hidden_dims=[49], n_layers=1, kernel_size=(7,7))
convlstm = ConvLSTM(input_shape=(16,16), input_dim=64, hidden_dims=[128,128,64], n_layers=3, kernel_size=(3,3))
encoder = EncoderRNN(phycell, convlstm)
model = StackUnstack(PhyDNet(encoder, sigmoid=False, moment=True), dim=1).cuda()
_____no_output_____
Apache-2.0
04_train_phydnet.ipynb
shawnwang-tech/moving_mnist
A handy callback to add the loss computed inside the model to the target loss
#export
class PHyCallback(Callback):
    def after_pred(self):
        # the model returns (prediction, physical loss); split them here
        self.learn.pred, self.loss_phy = self.pred
    def after_loss(self):
        self.learn.loss += self.loss_phy

learn = Learner(dls, model, loss_func=mse_loss, metrics=metrics,
                cbs=[TeacherForcing(10), PHyCallback()], opt_func=ranger)
learn.lr_find()
learn.fit_flat_cos(25, 3e-3)

p, t = learn.get_preds(1)
len(p), p[0].shape

def show_res(t, idx, argmax=False):
    if argmax:
        im_seq = ImageSeq.create([t[i][idx].argmax(0).unsqueeze(0) for i in range(5)], TensorMask)
    else:
        im_seq = ImageSeq.create([t[i][idx] for i in range(5)])
    im_seq.show(figsize=(8,4));

k = random.randint(0, 99)
show_res(t, k)
show_res(p, k)
learn.save('phydnet')
_____no_output_____
Apache-2.0
04_train_phydnet.ipynb
shawnwang-tech/moving_mnist
Project led by Nikolas Papastavrou
Code developed by Varun Bopardikar
Data Analysis conducted by Selina Ho, Hana Ahmed
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn import metrics
from datetime import datetime

from sklearn.naive_bayes import GaussianNB
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
_____no_output_____
MIT
dataAnalysis/ETCClassifier.ipynb
mminamina/311-data
Load Data
def gsev(val):
    """
    Return 1 if a number is greater than 7, otherwise 0.
    """
    if val <= 7:
        return 0
    else:
        return 1

df = pd.read_csv('../../fservice.csv')
df['Just Date'] = df['Just Date'].apply(lambda x: datetime.strptime(x, '%Y-%m-%d'))
df['Seven'] = df['ElapsedDays'].apply(gsev, 0)
/Users/varunbopardikar/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3057: DtypeWarning: Columns (10,33) have mixed types. Specify dtype option on import or set low_memory=False.
  interactivity=interactivity, compiler=compiler, result=result)
MIT
dataAnalysis/ETCClassifier.ipynb
mminamina/311-data
Parameters
c = ['Anonymous', 'AssignTo', 'RequestType', 'RequestSource', 'CD', 'Direction',
     'ActionTaken', 'APC', 'AddressVerified']
d = ['Latitude', 'Longitude']
_____no_output_____
MIT
dataAnalysis/ETCClassifier.ipynb
mminamina/311-data
Feature Cleaning
# Put desired columns into dataframe, drop nulls.
dfn = df.filter(items = c + d + ['ElapsedDays'] + ['Seven'])
dfn = dfn.dropna()

# Separate data into explanatory and response variables
XCAT = dfn.filter(items = c).values
XNUM = dfn.filter(items = d).values
y = dfn['ElapsedDays'] <= 7

# Encode categorical data and merge with numerical data
labelencoder_X = LabelEncoder()
for num in range(len(c)):
    XCAT[:, num] = labelencoder_X.fit_transform(XCAT[:, num])

onehotencoder = OneHotEncoder()
XCAT = onehotencoder.fit_transform(XCAT).toarray()
X = np.concatenate((XCAT, XNUM), axis=1)
/Users/varunbopardikar/anaconda3/lib/python3.7/site-packages/sklearn/preprocessing/_encoders.py:415: FutureWarning: The handling of integer data will change in version 0.22. Currently, the categories are determined based on the range [0, max(values)], while in the future they will be determined based on the unique values. If you want the future behaviour and silence this warning, you can specify "categories='auto'". In case you used a LabelEncoder before this OneHotEncoder to convert the categories to integers, then you can now use the OneHotEncoder directly.
  warnings.warn(msg, FutureWarning)
MIT
dataAnalysis/ETCClassifier.ipynb
mminamina/311-data
Algorithms and Hyperparameters
## Used Random Forest in final model
gnb = GaussianNB()
dc = tree.DecisionTreeClassifier(criterion='entropy', max_depth=20)
rf = RandomForestClassifier(n_estimators=50, max_depth=20)
lr = LogisticRegression()
_____no_output_____
MIT
dataAnalysis/ETCClassifier.ipynb
mminamina/311-data
Validation Set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=0)

# Train model
classifier = rf
classifier.fit(X_train, y_train)

# Test model
y_vpred = classifier.predict(X_val)

# Print accuracy function results
print("Accuracy:", metrics.accuracy_score(y_val, y_vpred))
print("Precision, Recall, F1Score:", metrics.precision_recall_fscore_support(y_val, y_vpred, average='binary'))
Accuracy: 0.9385983549336814
Precision, Recall, F1Score: (0.946896616482519, 0.9893259382317161, 0.9676463908853341, None)
MIT
dataAnalysis/ETCClassifier.ipynb
mminamina/311-data
Test Set
# Test model on the held-out test set
y_tpred = classifier.predict(X_test)

# Print accuracy function results
print("Accuracy:", metrics.accuracy_score(y_test, y_tpred))
print("Precision, Recall, F1Score:", metrics.precision_recall_fscore_support(y_test, y_tpred, average='binary'))
Accuracy: 0.9387186223709323
Precision, Recall, F1Score: (0.9468199376863904, 0.9895874917412928, 0.9677314319565967, None)
MIT
dataAnalysis/ETCClassifier.ipynb
mminamina/311-data
Deep Learning : Simple DNN to Classify Images, and application of TensorBoard.dev
# Importing the necessary libraries
import tensorflow as tf
import keras
import tensorflow.keras.datasets.fashion_mnist as data
import numpy as np
from time import time
import matplotlib.pyplot as plt
_____no_output_____
MIT
Image_Classification_with_TensorBoard_Application.ipynb
shivtejshete/Computer-Vision
1. Loading Data
# Assigning the raw data from the Keras dataset - Fashion MNIST
raw_data = data

# Loading the dataset into training and validation datasets
(train_image, train_label), (test_image, test_label) = raw_data.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step
MIT
Image_Classification_with_TensorBoard_Application.ipynb
shivtejshete/Computer-Vision
2. Data Inspection
# Checking the input volume shape
print("Total Training Images           :{}".format(train_image.shape[0]))
print("Training Images Shape (ht,wd)   :{} X {}".format(train_image.shape[1], train_image.shape[2]))
print("Total Testing Images            :{}".format(test_image.shape[0]))
print("Testing Images Shape (ht,wd)    :{} X {}".format(test_image.shape[1], test_image.shape[2]))
Total Training Images           :60000
Training Images Shape (ht,wd)   :28 X 28
Total Testing Images            :10000
Testing Images Shape (ht,wd)    :28 X 28
MIT
Image_Classification_with_TensorBoard_Application.ipynb
shivtejshete/Computer-Vision
3. Rescaling Data
# Rescaling the images for better training of the neural network
train_image = train_image / 255.0
test_image = test_image / 255.0

# Existing image classes from Fashion MNIST - in original order
class_labels = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
                'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
_____no_output_____
MIT
Image_Classification_with_TensorBoard_Application.ipynb
shivtejshete/Computer-Vision
4. Sample Images Visualization
# Visualizing some of the training images
fig, ax = plt.subplots(3, 3, figsize=(10,10))
for i, img in enumerate(ax.flatten()):
    img.pcolor(train_image[i])
    img.set_title(class_labels[train_label[i]])
plt.tight_layout()
_____no_output_____
MIT
Image_Classification_with_TensorBoard_Application.ipynb
shivtejshete/Computer-Vision
5. Building the Model Architecture
# Defining a very simple deep neural network, with softmax as the activation
# function of the top layer for multi-class classification
model = keras.Sequential()
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(256, activation='relu', use_bias=True))
model.add(keras.layers.Dropout(rate=.2))
model.add(keras.layers.Dense(64, activation='relu', use_bias=True))
model.add(keras.layers.Dropout(rate=.2))
model.add(keras.layers.Dense(10, activation='softmax'))
_____no_output_____
MIT
Image_Classification_with_TensorBoard_Application.ipynb
shivtejshete/Computer-Vision
6. Defining TensorBoard for Training visualization
# Creating a TensorBoard callback to be passed while training the model
tensorboard = keras.callbacks.TensorBoard(log_dir='.../logs',
                                          histogram_freq=1,
                                          batch_size=1000,
                                          write_grads=True,
                                          write_images=True)
/usr/local/lib/python3.6/dist-packages/keras/callbacks/tensorboard_v2.py:92: UserWarning: The TensorBoard callback `batch_size` argument (for histogram computation) is deprecated with TensorFlow 2.0. It will be ignored.
  warnings.warn('The TensorBoard callback `batch_size` argument '
/usr/local/lib/python3.6/dist-packages/keras/callbacks/tensorboard_v2.py:97: UserWarning: The TensorBoard callback does not support gradients display when using TensorFlow 2.0. The `write_grads` argument is ignored.
  warnings.warn('The TensorBoard callback does not support '
MIT
Image_Classification_with_TensorBoard_Application.ipynb
shivtejshete/Computer-Vision
7. Model Training
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Fitting the model with the TensorBoard object as a callback
model.fit(train_image, train_label,
          batch_size=1000,
          epochs=24,
          validation_data=(test_image, test_label),
          callbacks=[tensorboard])

model.summary()
Model: "sequential_9" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= flatten_8 (Flatten) (None, 784) 0 _________________________________________________________________ dense_20 (Dense) (None, 256) 200960 _________________________________________________________________ dropout_7 (Dropout) (None, 256) 0 _________________________________________________________________ dense_21 (Dense) (None, 64) 16448 _________________________________________________________________ dropout_8 (Dropout) (None, 64) 0 _________________________________________________________________ dense_22 (Dense) (None, 10) 650 ================================================================= Total params: 218,058 Trainable params: 218,058 Non-trainable params: 0 _________________________________________________________________
MIT
Image_Classification_with_TensorBoard_Application.ipynb
shivtejshete/Computer-Vision
8. Uploading the logs to TensorBoard.dev
# Checking out the TensorBoard dashboard to analyze training and validation
# performance with other statistics during the training of the model
%reload_ext tensorboard
!tensorboard dev upload --logdir '.../logs' --name "Deep Learning : Tensorboard" --description "Modeling a very simple Image Classifier based on Fashion MNIST dataset "
_____no_output_____
MIT
Image_Classification_with_TensorBoard_Application.ipynb
shivtejshete/Computer-Vision
Live Link : https://tensorboard.dev/experiment/u6hGU2LaQqKn1b1udgL1RA/

9. Making a Sample Prediction
# Selection of an image
sample = test_image[6]
plt.imshow(sample)
plt.xlabel(class_labels[test_label[6]])
plt.title(test_label[6])
plt.tight_layout()

# Prediction using the trained model
results = model.predict(test_image[6].reshape(1, 28, 28))
plt.bar(np.arange(0, 10), results[0], tick_label=class_labels)
plt.xticks(rotation=45)
plt.tight_layout()
_____no_output_____
MIT
Image_Classification_with_TensorBoard_Application.ipynb
shivtejshete/Computer-Vision
Dimensional Mechanics Coding Challenge

Problem Statement

"You are given a dictionary (dictionary.txt), containing a list of words, one per line. Imagine you have seven tiles. Each tile is either blank or contains a single lowercase letter (a-z).

Please list all the words from the dictionary that can be produced by using some or all of the seven tiles, in any order. A blank tile is a wildcard, and can be used in place of any letter.

1. Find all of the words that can be formed if you don't have to deal with blank tiles.
2. Find all of the words that can be formed, including those where blank tiles are used as wildcards.
3. Please bear in mind you will need to process several hundred 7-tile sets with the same dictionary."

Solution

Consider the 7 tiles: each tile can either be blank or contain a single lowercase letter from a to z, and we have the option of using some or all of the given 7 tiles. Therefore, each tile can be filled in 27 ways (a-z or blank), and the word that we generate can be a 1, 2, 3, 4, 5, 6, or 7 letter word. The sample space for all the possible tile combinations as per the above requirements has 10,862,674,479 (27^1 + 27^2 + 27^3 + 27^4 + 27^5 + 27^6 + 27^7) words. However, since we only have to worry about the tile combinations (of characters a-z or blank) that form words matching the ones in the dictionary, our new sample space is all the words in the given dictionary.

Since we can only form words that are 7 letters or shorter, we eliminate all words that have more than 7 letters from the given dictionary. This is also the set of all words that can be formed if you don't have to deal with blank tiles. In this code, 'valid_words' stores this list. You can view this list of words in the 'wordlist.csv' file.

To find all of the words that can be formed, including those where blank tiles are used as wildcards, we have to enumerate all the possible words that are formed if the blank is replaced by any letter. For example, the word 'b-girl' represents the following set of combinations:

['bagirl', 'bbgirl', 'bcgirl', 'bdgirl', 'begirl', 'bfgirl', 'bggirl', 'bhgirl', 'bigirl', 'bjgirl', 'bkgirl', 'blgirl', 'bmgirl', 'bngirl', 'bogirl', 'bpgirl', 'bqgirl', 'brgirl', 'bsgirl', 'btgirl', 'bugirl', 'bvgirl', 'bwgirl', 'bxgirl', 'bygirl', 'bzgirl']

The function replace() executes this idea, and we apply this function to all the words that contain a blank in the list 'valid_words' ('valid_words' houses all the words of interest from the given dictionary). The combinations of words that the blank wildcard represents are stored in the 'new_dictionary' list. The 'new_dictionary' list is added to the 'valid_words' list to form all of the words that can be formed, including those where blank tiles are used as wildcards. This combined list is new_dictionary_final. You can view this combined list of words in the 'wordlistforwildcards.csv' file.

Big O Analysis

1. Section 1 of the code has a for loop that iterates over the list 'dictionary' and thus O(n) time complexity for n iterations of the loop.
2. Section 2 of the code has a for loop nested within another for loop and thus O(n x n) = O(n^2) time complexity, where n is the number of iterations of the for loop.
3. Section 3 of the code has a for loop that iterates over the list 'valid_words' and thus O(n) time complexity for n iterations of the loop.

The overall complexity is thus O(n + n^2 + n^2); since n << n^2, the complexity can be approximated as O(n^2).
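Before the solution code, here is a quick sanity check of the sample-space count quoted above (a minimal illustrative snippet; the variable `total` is not part of the challenge code):

total = sum(27**k for k in range(1, 8))  # 27 choices (a-z or blank) per tile, for word lengths 1..7
print(total)  # 10862674479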
# section 1
import csv
import pandas as pd

f = open("dictionary.txt", "r")
text_file = f.read()
dictionary = text_file.split("\n")

# 'valid_words' stores this list. You can view this list of words in the 'wordlist.csv' file present
# in the root directory (read instructions on how to access 'wordlist.csv')
valid_words = []
for i in dictionary:
    if len(i) <= 7:
        valid_words.append(i)

print("The number of 7 letter words in given dictionary")
print(len(valid_words))
print("This step has a for loop that iterates over the list 'dictionary' and thus a O(n) time complexity for n iterations of the loop")

# read instructions on how to access 'wordlist.csv'
df = pd.DataFrame(valid_words, columns=["Possible Matches"])
df.to_csv('wordlist.csv', index=False)

# section 2
def replace(word, alphabets=["a","b","c","d","e","f","g","h","i","j","k","l","m",
                             "n","o","p","q","r","s","t","u","v","w","x","y","z"]):
    word = list(word)
    expanded = []
    copy = word[:]  # take an independent copy so the original word is not mutated
    for i, j in enumerate(word):
        if j == "-":
            for k in alphabets:
                copy[i] = k
                wildcard_option = ''.join(copy)
                expanded.append(wildcard_option)
    return expanded

print("The replace function illustrated:")
word_expanded = replace('b-girl')
print(word_expanded)
print("The function has a for loop nested within another for loop and thus a O(n^2) time complexity, where n is the number of iterations of the for loop")

# section 3
new_dictionary = []
count = 0
# The combinations of words that the blank wildcard represents are stored in the 'new_dictionary' list.
for l in valid_words:
    if "-" in l:
        count = count + 1
        word_expanded = replace(l)
        new_dictionary = new_dictionary + word_expanded

print("This step has a for loop that iterates over the list 'valid_words' and thus a O(n) time complexity for n iterations of the loop")
print("The number of wildcard words in the above subset")
print(count)
print("The number of possible representations the wildcard words correspond to in the above subset")
print(len(new_dictionary))

# The 'new_dictionary' list is added to the 'valid_words' list to form all of the words that can be formed,
# including those where blank tiles are used as wildcards. This combined list is new_dictionary_final.
# You can view this combined list of words in the 'wordlistforwildcards.csv' file present in the root directory
new_dictionary_final = new_dictionary + valid_words
print("The number of words that can be formed, including those where blank tiles are used as wildcards")
print(len(new_dictionary_final))

# read instructions on how to access 'wordlistforwildcards.csv'
df = pd.DataFrame(new_dictionary_final, columns=["Possible Matches"])
df.to_csv('wordlistforwildcards.csv', index=False)
The number of words that can be formed, including those where blank tiles are used as wildcards 29455
MIT
DataWrangling.ipynb
niharikabalachandra/Data-Wrangling-Example
Video 1 - Linear regression with swyft
import numpy as np
import pylab as plt
from scipy.linalg import inv
from scipy import stats
_____no_output_____
MIT
notebooks/Video 1 - Linear regression.ipynb
undark-lab/swyft
Linear regression for a second order polynomial

$$y(x) = v_0 + v_1\cdot x + v_2 \cdot x^2$$

$$d_i \sim \mathcal{N}(y(x_i), \sigma = 0.05)\;, \quad \text{with}\quad x_i = 0,\; 0.1,\; 0.2, \;\dots,\; 1.0$$
# Model and reference parameters
N = 11
x = np.linspace(0, 1, N)
T = np.array([x**0, x**1, x**2]).T
v_true = np.array([-0.2, 0., 0.2])

# Mock data
SIGMA = 0.05
np.random.seed(42)
DATA = T.dot(v_true) + np.random.randn(N)*SIGMA

# Linear regression
v_lr = inv(T.T.dot(T)).dot(T.T.dot(DATA))
y_lr = T.dot(v_lr)

# Fisher estimation of errors
I = np.array([[(T[:,i]*T[:,j]).sum()/SIGMA**2 for i in range(3)] for j in range(3)])
Sigma = inv(I)
v_fisher_err = np.diag(Sigma)**0.5

# Plot
plt.plot(x, DATA, ls='', marker='x', label='Data')
plt.plot(x, T.dot(v_true), 'r:', label='Ground truth')
plt.plot(x, y_lr, 'k', label='Linear regression')
plt.legend()
plt.xlabel("x")
plt.ylabel('y');

for i in range(3):
    print("v_%i = %.3f +- %.3f (%.3f)"%(i, v_lr[i], v_fisher_err[i], v_true[i]))
v_0 = -0.188 +- 0.038 (-0.200)
v_1 = 0.098 +- 0.177 (0.000)
v_2 = 0.079 +- 0.171 (0.200)
MIT
notebooks/Video 1 - Linear regression.ipynb
undark-lab/swyft
SWYFT!
import swyft
import torch  # needed below for torch.device

def model(v):
    y = T.dot(v)
    return dict(y=y)

sim = swyft.Simulator(model, ['v0', 'v1', 'v2'], dict(y=(11,)))

def noise(sim, v):
    d = sim['y'] + np.random.randn(11)*SIGMA
    return dict(d=d)

store = swyft.Store.memory_store(sim)
prior = swyft.Prior(lambda u: u*2 - 1, 3)  # Uniform(-1, 1)
store.add(20000, prior)
store.simulate()

dataset = swyft.Dataset(20000, prior, store, simhook=noise)
post = swyft.Posteriors(dataset)

%%time
marginals = [0, 1, 2]
post.add(marginals, device=torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
post.train(marginals)

%%time
obs = dict(d=DATA)
samples = post.sample(1000000, obs)

fig, diag = swyft.plot_1d(samples, [0, 1, 2], bins=50, figsize=(15,4))
for i in range(3):
    x = np.linspace(-1, 1, 100)
    fig.axes[i].plot(x, stats.norm.pdf(x, v_lr[i], v_fisher_err[i]))

swyft.plot_corner(samples, [0, 1, 2])

%%time
marginals = [(0, 1), (0, 2)]
post.add(marginals, device=torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
post.train(marginals)

samples = post.sample(1000000, obs)
swyft.plot_corner(samples, [0, 1, 2]);
_____no_output_____
MIT
notebooks/Video 1 - Linear regression.ipynb
undark-lab/swyft
Discrete Choice Models

Fair's Affair data

A survey of women only was conducted in 1974 by *Redbook* asking about extramarital affairs.
%matplotlib inline
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import logit

print(sm.datasets.fair.SOURCE)
print(sm.datasets.fair.NOTE)

dta = sm.datasets.fair.load_pandas().data
dta['affair'] = (dta['affairs'] > 0).astype(float)
print(dta.head(10))
print(dta.describe())

affair_mod = logit("affair ~ occupation + educ + occupation_husb"
                   "+ rate_marriage + age + yrs_married + children"
                   " + religious", dta).fit()
print(affair_mod.summary())
                           Logit Regression Results                           
==============================================================================
Dep. Variable:                 affair   No. Observations:                 6366
Model:                          Logit   Df Residuals:                     6357
Method:                           MLE   Df Model:                            8
Date:                Tue, 24 Dec 2019   Pseudo R-squ.:                  0.1327
Time:                        14:49:03   Log-Likelihood:                -3471.5
converged:                       True   LL-Null:                       -4002.5
Covariance Type:            nonrobust   LLR p-value:                5.807e-224
===================================================================================
                      coef    std err          z      P>|z|      [0.025      0.975]
-----------------------------------------------------------------------------------
Intercept           3.7257      0.299     12.470      0.000       3.140       4.311
occupation          0.1602      0.034      4.717      0.000       0.094       0.227
educ               -0.0392      0.015     -2.533      0.011      -0.070      -0.009
occupation_husb     0.0124      0.023      0.541      0.589      -0.033       0.057
rate_marriage      -0.7161      0.031    -22.784      0.000      -0.778      -0.655
age                -0.0605      0.010     -5.885      0.000      -0.081      -0.040
yrs_married         0.1100      0.011     10.054      0.000       0.089       0.131
children           -0.0042      0.032     -0.134      0.893      -0.066       0.058
religious          -0.3752      0.035    -10.792      0.000      -0.443      -0.307
===================================================================================
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
How well are we predicting?
affair_mod.pred_table()
_____no_output_____
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
The coefficients of the discrete choice model do not tell us much. What we're after is marginal effects.
mfx = affair_mod.get_margeff()
print(mfx.summary())

respondent1000 = dta.iloc[1000]
print(respondent1000)

resp = dict(zip(range(1, 9),
                respondent1000[["occupation", "educ", "occupation_husb",
                                "rate_marriage", "age", "yrs_married",
                                "children", "religious"]].tolist()))
resp.update({0: 1})
print(resp)

mfx = affair_mod.get_margeff(atexog=resp)
print(mfx.summary())
        Logit Marginal Effects       
=====================================
Dep. Variable:                 affair
Method:                          dydx
At:                           overall
===================================================================================
                      dy/dx    std err          z      P>|z|      [0.025      0.975]
-----------------------------------------------------------------------------------
occupation          0.0400      0.008      4.711      0.000       0.023       0.057
educ               -0.0098      0.004     -2.537      0.011      -0.017      -0.002
occupation_husb     0.0031      0.006      0.541      0.589      -0.008       0.014
rate_marriage      -0.1788      0.008    -22.743      0.000      -0.194      -0.163
age                -0.0151      0.003     -5.928      0.000      -0.020      -0.010
yrs_married         0.0275      0.003     10.256      0.000       0.022       0.033
children           -0.0011      0.008     -0.134      0.893      -0.017       0.014
religious          -0.0937      0.009    -10.722      0.000      -0.111      -0.077
===================================================================================
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
`predict` expects a `DataFrame` since `patsy` is used to select columns.
respondent1000 = dta.iloc[[1000]]
affair_mod.predict(respondent1000)

affair_mod.fittedvalues[1000]
affair_mod.model.cdf(affair_mod.fittedvalues[1000])
_____no_output_____
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
The "correct" model here is likely the Tobit model. We have an work in progress branch "tobit-model" on github, if anyone is interested in censored regression models. Exercise: Logit vs Probit
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
support = np.linspace(-6, 6, 1000)
ax.plot(support, stats.logistic.cdf(support), 'r-', label='Logistic')
ax.plot(support, stats.norm.cdf(support), label='Probit')
ax.legend();

fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
support = np.linspace(-6, 6, 1000)
ax.plot(support, stats.logistic.pdf(support), 'r-', label='Logistic')
ax.plot(support, stats.norm.pdf(support), label='Probit')
ax.legend();
_____no_output_____
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
Compare the estimates of the Logit Fair model above to a Probit model. Does the prediction table look better? Much difference in marginal effects?

Generalized Linear Model Example
print(sm.datasets.star98.SOURCE)
print(sm.datasets.star98.DESCRLONG)
print(sm.datasets.star98.NOTE)

dta = sm.datasets.star98.load_pandas().data
print(dta.columns)
print(dta[['NABOVE', 'NBELOW', 'LOWINC', 'PERASIAN', 'PERBLACK', 'PERHISP', 'PERMINTE']].head(10))
print(dta[['AVYRSEXP', 'AVSALK', 'PERSPENK', 'PTRATIO', 'PCTAF', 'PCTCHRT', 'PCTYRRND']].head(10))

formula = 'NABOVE + NBELOW ~ LOWINC + PERASIAN + PERBLACK + PERHISP + PCTCHRT '
formula += '+ PCTYRRND + PERMINTE*AVYRSEXP*AVSALK + PERSPENK*PTRATIO*PCTAF'
_____no_output_____
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
Aside: Binomial distribution

Toss a six-sided die 5 times, what's the probability of exactly 2 fours?
stats.binom(5, 1./6).pmf(2)

from scipy.special import comb
comb(5, 2) * (1/6.)**2 * (5/6.)**3

from statsmodels.formula.api import glm
glm_mod = glm(formula, dta, family=sm.families.Binomial()).fit()
print(glm_mod.summary())
                 Generalized Linear Model Regression Results                  
================================================================================
Dep. Variable:     ['NABOVE', 'NBELOW']   No. Observations:                  303
Model:                              GLM   Df Residuals:                      282
Model Family:                  Binomial   Df Model:                           20
Link Function:                    logit   Scale:                          1.0000
Method:                            IRLS   Log-Likelihood:                -2998.6
Date:                  Tue, 24 Dec 2019   Deviance:                       4078.8
Time:                          14:50:24   Pearson chi2:                 4.05e+03
No. Iterations:                       5   Covariance Type:             nonrobust
============================================================================================
                               coef    std err          z      P>|z|      [0.025      0.975]
--------------------------------------------------------------------------------------------
Intercept                    2.9589      1.547      1.913      0.056      -0.073       5.990
LOWINC                      -0.0168      0.000    -38.749      0.000      -0.018      -0.016
PERASIAN                     0.0099      0.001     16.505      0.000       0.009       0.011
PERBLACK                    -0.0187      0.001    -25.182      0.000      -0.020      -0.017
PERHISP                     -0.0142      0.000    -32.818      0.000      -0.015      -0.013
PCTCHRT                      0.0049      0.001      3.921      0.000       0.002       0.007
PCTYRRND                    -0.0036      0.000    -15.878      0.000      -0.004      -0.003
PERMINTE                     0.2545      0.030      8.498      0.000       0.196       0.313
AVYRSEXP                     0.2407      0.057      4.212      0.000       0.129       0.353
PERMINTE:AVYRSEXP           -0.0141      0.002     -7.391      0.000      -0.018      -0.010
AVSALK                       0.0804      0.014      5.775      0.000       0.053       0.108
PERMINTE:AVSALK             -0.0040      0.000     -8.450      0.000      -0.005      -0.003
AVYRSEXP:AVSALK             -0.0039      0.001     -4.059      0.000      -0.006      -0.002
PERMINTE:AVYRSEXP:AVSALK     0.0002   2.99e-05      7.428      0.000       0.000       0.000
PERSPENK                    -1.9522      0.317     -6.162      0.000      -2.573      -1.331
PTRATIO                     -0.3341      0.061     -5.453      0.000      -0.454      -0.214
PERSPENK:PTRATIO             0.0917      0.015      6.321      0.000       0.063       0.120
PCTAF                       -0.1690      0.033     -5.169      0.000      -0.233      -0.105
PERSPENK:PCTAF               0.0490      0.007      6.574      0.000       0.034       0.064
PTRATIO:PCTAF                0.0080      0.001      5.362      0.000       0.005       0.011
PERSPENK:PTRATIO:PCTAF      -0.0022      0.000     -6.445      0.000      -0.003      -0.002
============================================================================================
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
The number of trials
glm_mod.model.data.orig_endog.sum(1)
glm_mod.fittedvalues * glm_mod.model.data.orig_endog.sum(1)
_____no_output_____
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
First differences: We hold all explanatory variables constant at their means and manipulate the percentage of low income households to assess its impact on the response variables:
exog = glm_mod.model.data.orig_exog  # get the dataframe

means25 = exog.mean()
print(means25)
means25['LOWINC'] = exog['LOWINC'].quantile(.25)
print(means25)

means75 = exog.mean()
means75['LOWINC'] = exog['LOWINC'].quantile(.75)
print(means75)
Intercept                       1.000000
LOWINC                         55.460075
PERASIAN                        5.896335
PERBLACK                        5.636808
PERHISP                        34.398080
PCTCHRT                         1.175909
PCTYRRND                       11.611905
PERMINTE                       14.694747
AVYRSEXP                       14.253875
PERMINTE:AVYRSEXP             209.018700
AVSALK                         58.640258
PERMINTE:AVSALK               879.979883
AVYRSEXP:AVSALK               839.718173
PERMINTE:AVYRSEXP:AVSALK    12585.266464
PERSPENK                        4.320310
PTRATIO                        22.464250
PERSPENK:PTRATIO               96.295756
PCTAF                          33.630593
PERSPENK:PCTAF                147.235740
PTRATIO:PCTAF                 747.445536
PERSPENK:PTRATIO:PCTAF       3243.607568
dtype: float64
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
Again, `predict` expects a `DataFrame` since `patsy` is used to select columns.
resp25 = glm_mod.predict(pd.DataFrame(means25).T)
resp75 = glm_mod.predict(pd.DataFrame(means75).T)
diff = resp75 - resp25
_____no_output_____
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
The interquartile first difference for the percentage of low income households in a school district is:
print("%2.4f%%" % (diff[0]*100)) nobs = glm_mod.nobs y = glm_mod.model.endog yhat = glm_mod.mu from statsmodels.graphics.api import abline_plot fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111, ylabel='Observed Values', xlabel='Fitted Values') ax.scatter(yhat, y) y_vs_yhat = sm.OLS(y, sm.add_constant(yhat, prepend=True)).fit() fig = abline_plot(model_results=y_vs_yhat, ax=ax)
_____no_output_____
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
Plot fitted values vs Pearson residuals

Pearson residuals are defined to be

$$\frac{(y - \mu)}{\sqrt{\text{var}(\mu)}}$$

where var is typically determined by the family. E.g., binomial variance is $np(1 - p)$
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111, title='Residual Dependence Plot', xlabel='Fitted Values',
                     ylabel='Pearson Residuals')
ax.scatter(yhat, stats.zscore(glm_mod.resid_pearson))
ax.axis('tight')
ax.plot([0.0, 1.0], [0.0, 0.0], 'k-');
_____no_output_____
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
Histogram of standardized deviance residuals with Kernel Density Estimate overlaid

The definition of the deviance residuals depends on the family. For the Binomial distribution this is

$$r_{dev} = \text{sign}\left(Y-\mu\right)\sqrt{2n\left(Y\log\frac{Y}{\mu}+(1-Y)\log\frac{1-Y}{1-\mu}\right)}$$

They can be used to detect ill-fitting covariates
resid = glm_mod.resid_deviance
resid_std = stats.zscore(resid)
kde_resid = sm.nonparametric.KDEUnivariate(resid_std)
kde_resid.fit()

fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111, title="Standardized Deviance Residuals")
ax.hist(resid_std, bins=25, density=True);
ax.plot(kde_resid.support, kde_resid.density, 'r');
_____no_output_____
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
QQ-plot of deviance residuals
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
fig = sm.graphics.qqplot(resid, line='r', ax=ax)
_____no_output_____
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
Import Libraries
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix

import keras
from keras.initializers import glorot_uniform
from keras.models import Sequential
from keras.layers import Dense

#!pip show tensorflow
_____no_output_____
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
Background

_Credit default_ can be defined as the failure to repay a debt, including interest or principal, on a loan or security by the due date. This can cause losses for lenders, so preventive measures are a must, and early detection of potential defaults can be one of them. This case study can be categorized as a binary classification problem.

An Artificial Neural Network (ANN) is one model for classification problems, with the ability to capture both the linear and the non-linear trends in the data, so that it can give predictions for new data (from the same distribution).

In this Jupyter notebook, we will try an ANN model to classify _credit default customers_, and hope that it can reach 95% accuracy.

Data Understanding

The data used in this task is a public dataset from UCI Machine Learning entitled "Default of Credit Card Clients Dataset", containing information on default payments, demographic factors, credit data, history of payment, and bill statements of credit card clients in Taiwan from April 2005 to September 2005. This dataset contains 30,000 data observations with 25 variables consisting of 1 ID, 23 predictor variables, and 1 response variable as the default payment next month.

Here are some samples of the data.
df = pd.read_csv('credit_cards_dataset.csv')
df.head()
_____no_output_____
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
The description of each column/variable can be seen below:

- ID: ID of each client
- LIMIT_BAL: Amount of given credit in NT dollars (includes individual and family/supplementary credit)
- SEX: Gender (1=male, 2=female)
- EDUCATION: (1=graduate school, 2=university, 3=high school, 4=others, 5=unknown, 6=unknown)
- MARRIAGE: Marital status (1=married, 2=single, 3=others)
- AGE: Age in years
- PAY_0: Repayment status in September, 2005 (-1=pay duly, 1=payment delay for one month, 2=payment delay for two months, … 8=payment delay for eight months, 9=payment delay for nine months and above)
- PAY_2: Repayment status in August, 2005 (scale same as above)
- PAY_3: Repayment status in July, 2005 (scale same as above)
- PAY_4: Repayment status in June, 2005 (scale same as above)
- PAY_5: Repayment status in May, 2005 (scale same as above)
- PAY_6: Repayment status in April, 2005 (scale same as above)
- BILL_AMT1: Amount of bill statement in September, 2005 (NT dollar)
- BILL_AMT2: Amount of bill statement in August, 2005 (NT dollar)
- BILL_AMT3: Amount of bill statement in July, 2005 (NT dollar)
- BILL_AMT4: Amount of bill statement in June, 2005 (NT dollar)
- BILL_AMT5: Amount of bill statement in May, 2005 (NT dollar)
- BILL_AMT6: Amount of bill statement in April, 2005 (NT dollar)
- PAY_AMT1: Amount of previous payment in September, 2005 (NT dollar)
- PAY_AMT2: Amount of previous payment in August, 2005 (NT dollar)
- PAY_AMT3: Amount of previous payment in July, 2005 (NT dollar)
- PAY_AMT4: Amount of previous payment in June, 2005 (NT dollar)
- PAY_AMT5: Amount of previous payment in May, 2005 (NT dollar)
- PAY_AMT6: Amount of previous payment in April, 2005 (NT dollar)
- default.payment.next.month: Default payment (1=yes, 0=no)

Data Exploratory

As we can see from the description of each column/variable, these are all numerical data, so the data summary is based on basic statistics such as mean, median, minimum and maximum, which are detailed below.
df.describe()
_____no_output_____
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
Next, we want to see the correlation between all of the features and the label in the dataset using the Pearson correlation, which is built on the covariance formula below.

$$\text{Covariance } (S_{xy}) =\frac{\sum(x_{i}-\bar{x})(y_{i}-\bar{y})}{n-1}$$

The plot below shows the correlation between all features (predictor variables) and the label.
# Using Pearson Correlation
plt.figure(figsize=(14,14))
cor = df.iloc[:,1:].corr()
x = cor[['default.payment.next.month']]
sns.heatmap(x, annot=True, cmap=plt.cm.Reds)
plt.show()
_____no_output_____
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
As we can see in the plot above, the repayment statuses of customers (PAY_0 - PAY_6) have higher correlations with the label (default.payment.next.month) compared to the other features.

Data Preparation

Data Cleansing

Before implementing the ANN to predict the "credit default customer", we have to check the data, whether it needs cleaning or not.
df.isnull().sum()
_____no_output_____
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
After checking the summary of missing values in the dataset, the result shows that the data has no missing values, so the data is ready for the next stage.

Splitting Data into Training and Test Data

In this stage, the clean data will be split into 2 categories: train data and test data. The train data will be used to train the ANN model, and the test data will be used to check whether the trained model generalizes well to future data. Here, 70% of the data will be used as train data and the rest as test data (`test_size=0.3` in the code below).

Before splitting, the dataset will be grouped into 2 variables: the data from the 2nd to the 24th column as the predictor features (the first column, ID, is not included as a predictor), grouped as X, and the data from the 25th column (the label), named y.
X = df.iloc[:, 1:24].values
y = df.iloc[:, 24].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
_____no_output_____
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
Data Standardization

After splitting the data, the numeric data will be standardized by scaling the data to have mean of 0 and variance of 1.

$$X_{stand} = \frac{X - \mu}{\sigma}$$
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
_____no_output_____
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
Modelling

In the modeling phase, we create an ANN model with 6 hidden layers (with 50, 60, 40, 30, 20, and 10 neurons respectively) using the _relu_ activation function, and 1 output layer with 1 neuron using the _sigmoid_ activation function. Furthermore, we choose the 'Adam' optimizer to optimize the parameters of the created model.
hl = 6                           # number of hidden layers
nohl = [50, 60, 40, 30, 20, 10]  # number of neurons in each hidden layer

classifier = Sequential()

# Hidden layers
for i in range(hl):
    if i == 0:
        classifier.add(Dense(units=nohl[i], input_dim=X_train.shape[1],
                             kernel_initializer='uniform', activation='relu'))
    else:
        classifier.add(Dense(units=nohl[i], kernel_initializer=glorot_uniform(seed=0),
                             activation='relu'))

# Output layer
classifier.add(Dense(units=1, kernel_initializer=glorot_uniform(seed=0), activation='sigmoid'))

classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
_____no_output_____
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
Below is a summary of the created ANN model architecture, with the parameter counts per layer.
classifier.summary()
Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_1 (Dense) (None, 50) 1200 _________________________________________________________________ dense_2 (Dense) (None, 60) 3060 _________________________________________________________________ dense_3 (Dense) (None, 40) 2440 _________________________________________________________________ dense_4 (Dense) (None, 30) 1230 _________________________________________________________________ dense_5 (Dense) (None, 20) 620 _________________________________________________________________ dense_6 (Dense) (None, 10) 210 _________________________________________________________________ dense_7 (Dense) (None, 1) 11 ================================================================= Total params: 8,771 Trainable params: 8,771 Non-trainable params: 0 _________________________________________________________________
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
After creating the model architecture, we train the model for a set number of epochs with a given batch size.
classifier.fit(X_train, y_train, epochs=50, batch_size=512)
Epoch 1/50
21000/21000 [==============================] - 1s 50us/step - loss: 0.5517 - accuracy: 0.7773
Epoch 2/50
21000/21000 [==============================] - 0s 11us/step - loss: 0.4703 - accuracy: 0.7833
Epoch 3/50
21000/21000 [==============================] - 0s 10us/step - loss: 0.4506 - accuracy: 0.8121
Epoch 4/50
21000/21000 [==============================] - 0s 10us/step - loss: 0.4441 - accuracy: 0.8132
Epoch 5/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4386 - accuracy: 0.8149
Epoch 6/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4305 - accuracy: 0.8203
Epoch 7/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4298 - accuracy: 0.8207
Epoch 8/50
21000/21000 [==============================] - 0s 11us/step - loss: 0.4286 - accuracy: 0.8214
Epoch 9/50
21000/21000 [==============================] - 0s 11us/step - loss: 0.4262 - accuracy: 0.8207
Epoch 10/50
21000/21000 [==============================] - 0s 10us/step - loss: 0.4253 - accuracy: 0.8220
Epoch 11/50
21000/21000 [==============================] - 0s 10us/step - loss: 0.4253 - accuracy: 0.8227
Epoch 12/50
21000/21000 [==============================] - 0s 10us/step - loss: 0.4293 - accuracy: 0.8202
Epoch 13/50
21000/21000 [==============================] - 0s 10us/step - loss: 0.4229 - accuracy: 0.8228
Epoch 14/50
21000/21000 [==============================] - 0s 10us/step - loss: 0.4235 - accuracy: 0.8220
Epoch 15/50
21000/21000 [==============================] - 0s 10us/step - loss: 0.4207 - accuracy: 0.8222
Epoch 16/50
21000/21000 [==============================] - 0s 9us/step - loss: 0.4230 - accuracy: 0.8200
Epoch 17/50
21000/21000 [==============================] - 0s 11us/step - loss: 0.4189 - accuracy: 0.8229
Epoch 18/50
21000/21000 [==============================] - 0s 9us/step - loss: 0.4182 - accuracy: 0.8242
Epoch 19/50
21000/21000 [==============================] - 0s 10us/step - loss: 0.4191 - accuracy: 0.8226
Epoch 20/50
21000/21000 [==============================] - 0s 10us/step - loss: 0.4215 - accuracy: 0.8234
Epoch 21/50
21000/21000 [==============================] - 0s 10us/step - loss: 0.4278 - accuracy: 0.8227
Epoch 22/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4159 - accuracy: 0.8240
Epoch 23/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4157 - accuracy: 0.8249
Epoch 24/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4146 - accuracy: 0.8255
Epoch 25/50
21000/21000 [==============================] - 0s 11us/step - loss: 0.4155 - accuracy: 0.8233
Epoch 26/50
21000/21000 [==============================] - 0s 11us/step - loss: 0.4182 - accuracy: 0.8245
Epoch 27/50
21000/21000 [==============================] - 0s 11us/step - loss: 0.4140 - accuracy: 0.8261
Epoch 28/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4131 - accuracy: 0.8264
Epoch 29/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4119 - accuracy: 0.8251
Epoch 30/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4125 - accuracy: 0.8242
Epoch 31/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4126 - accuracy: 0.8258
Epoch 32/50
21000/21000 [==============================] - 0s 11us/step - loss: 0.4122 - accuracy: 0.8249
Epoch 33/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4123 - accuracy: 0.8246
Epoch 34/50
21000/21000 [==============================] - 0s 13us/step - loss: 0.4110 - accuracy: 0.8255
Epoch 35/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4101 - accuracy: 0.8255
Epoch 36/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4079 - accuracy: 0.8267
Epoch 37/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4123 - accuracy: 0.8263
Epoch 38/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4068 - accuracy: 0.8275
Epoch 39/50
21000/21000 [==============================] - 0s 10us/step - loss: 0.4064 - accuracy: 0.8274
Epoch 40/50
21000/21000 [==============================] - 0s 9us/step - loss: 0.4080 - accuracy: 0.8276
Epoch 41/50
21000/21000 [==============================] - 0s 11us/step - loss: 0.4128 - accuracy: 0.8270
Epoch 42/50
21000/21000 [==============================] - 0s 11us/step - loss: 0.4097 - accuracy: 0.8286
Epoch 43/50
21000/21000 [==============================] - 0s 10us/step - loss: 0.4053 - accuracy: 0.8278
Epoch 44/50
21000/21000 [==============================] - 0s 11us/step - loss: 0.4051 - accuracy: 0.8291
Epoch 45/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4025 - accuracy: 0.8291
Epoch 46/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4076 - accuracy: 0.8282
Epoch 47/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4013 - accuracy: 0.8302
Epoch 48/50
21000/21000 [==============================] - 0s 11us/step - loss: 0.4110 - accuracy: 0.8256
Epoch 49/50
21000/21000 [==============================] - 0s 11us/step - loss: 0.4025 - accuracy: 0.8296
Epoch 50/50
21000/21000 [==============================] - 0s 12us/step - loss: 0.4019 - accuracy: 0.8290
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
EvaluationIn this classification problem, we evaluate the model by checking how many of its predictions are correct, using a 50% probability threshold. The results can be summarized in a confusion matrix.Here is the confusion matrix produced by the ANN model's predictions on the test set:
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)

conf_matr = confusion_matrix(y_test, y_pred)
# sklearn convention: rows are true labels, columns are predictions.
# Here label 0 ("customer pays the credit") is treated as the positive class,
# so conf_matr = [[TP, FN], [FP, TN]].
TP = conf_matr[0, 0]; FN = conf_matr[0, 1]
FP = conf_matr[1, 0]; TN = conf_matr[1, 1]

print('Confusion Matrix : ')
print(conf_matr)
print()
print('True Positive (TP) : ', TP)
print('False Positive (FP) : ', FP)
print('True Negative (TN) : ', TN)
print('False Negative (FN) : ', FN)
Confusion Matrix : 
[[6695  345]
 [1332  628]]

True Positive (TP) :  6695
False Positive (FP) :  1332
True Negative (TN) :  628
False Negative (FN) :  345
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
in which
- True Positive (TP) means the model predicted the customer will pay the credit, and the prediction is correct.
- False Positive (FP) means the model predicted the customer will pay the credit, but the prediction is incorrect.
- True Negative (TN) means the model predicted the customer will not pay the credit, and the prediction is correct.
- False Negative (FN) means the model predicted the customer will not pay the credit, but the prediction is incorrect.

Based on the results above, we can start the evaluation using 3 different metrics: accuracy, recall, and precision. AccuracyAccuracy measures how many predictions are correct relative to the total number of samples, and is calculated by the following formula.$$Accuracy = \frac{TP+TN}{TP+TN+FP+FN}$$
acc = (TP + TN) / (TP + TN + FP + FN)
print('By this metric, ' + str(round(acc * 100)) + '% of the predictions are correct.')
_____no_output_____
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
PrecisionThis metric only considers how many of the positive predictions are actually correct, and is calculated by the formula below. $$Precision = \frac{TP}{TP+FP}$$
pre = TP / (TP + FP)
print('By calculating the precision, ' + str(round(pre * 100)) + '% of the customers predicted to pay actually do pay the credit.')
_____no_output_____
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
Hurricane Ike Maximum Water LevelsCompute the maximum water level during Hurricane Ike on a 9 million node triangular mesh storm surge model. Plot the results with Datashader.
import xarray as xr import numpy as np import pandas as pd import fsspec import warnings warnings.filterwarnings("ignore") from dask.distributed import Client, progress, performance_report from dask_kubernetes import KubeCluster
_____no_output_____
BSD-3-Clause
hurricane_ike_water_levels.ipynb
ocefpaf/hurricane-ike-water-levels
Start a dask cluster to crunch the data
cluster = KubeCluster() cluster.scale(15); cluster import dask; print(dask.__version__)
_____no_output_____
BSD-3-Clause
hurricane_ike_water_levels.ipynb
ocefpaf/hurricane-ike-water-levels
For demos, I often click in this cell and do "Cell=>Run All Above", then wait until the workers appear. This can take several minutes (up to 6!) for instances to spin up and Docker containers to be downloaded. Then I shut down the notebook and run again from the beginning, and the workers will fire up quickly because the instances have not spun down yet.
#cluster.adapt(maximum=15); client = Client(cluster)
_____no_output_____
BSD-3-Clause
hurricane_ike_water_levels.ipynb
ocefpaf/hurricane-ike-water-levels
Read the data using the cloud-friendly zarr data format
ds = xr.open_zarr(fsspec.get_mapper('s3://pangeo-data-uswest2/esip/adcirc/ike', anon=False, requester_pays=True)) #ds = xr.open_zarr(fsspec.get_mapper('gcs://pangeo-data/rsignell/adcirc_test01')) ds['zeta']
_____no_output_____
BSD-3-Clause
hurricane_ike_water_levels.ipynb
ocefpaf/hurricane-ike-water-levels
How many GB of sea surface height data do we have?
ds['zeta'].nbytes/1.e9
_____no_output_____
BSD-3-Clause
hurricane_ike_water_levels.ipynb
ocefpaf/hurricane-ike-water-levels
Take the maximum over the time dimension and compute the result on the workers. This is the computationally intensive step.
%%time with performance_report(filename="dask-zarr-report.html"): max_var = ds['zeta'].max(dim='time').compute()
_____no_output_____
BSD-3-Clause
hurricane_ike_water_levels.ipynb
ocefpaf/hurricane-ike-water-levels
Visualize data on mesh using HoloViz.org tools
import numpy as np
import holoviews as hv
import geoviews as gv
import cartopy.crs as ccrs
import hvplot.xarray
# note: holoviews.operation.datashader is the module we actually use below;
# the redundant `import datashader as dshade` (which this alias shadowed) is removed
import holoviews.operation.datashader as dshade

dshade.datashade.precompute = True

hv.extension('bokeh')

v = np.vstack((ds['x'], ds['y'], max_var)).T
verts = pd.DataFrame(v, columns=['x', 'y', 'vmax'])

points = gv.operation.project_points(gv.Points(verts, vdims=['vmax']))
tris = pd.DataFrame(ds['element'].values.astype('int') - 1, columns=['v0', 'v1', 'v2'])

tiles = gv.tile_sources.OSM
value = 'max water level'
label = '{} (m)'.format(value)
trimesh = gv.TriMesh((tris, points), label=label)
mesh = dshade.rasterize(trimesh).opts(
    cmap='rainbow', colorbar=True, width=600, height=400)

tiles * mesh
_____no_output_____
BSD-3-Clause
hurricane_ike_water_levels.ipynb
ocefpaf/hurricane-ike-water-levels
Extract a time series at a specified lon, lat location Because Xarray does not yet understand that `x` and `y` are coordinate variables on this triangular mesh, we create our own simple function to find the closest point. If we had a lot of these, we could use a fancier tree-based lookup (a sketch follows), but the brute-force version below is fine for a single station.
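A hedged sketch of that tree-based alternative, for the case of many query points — assuming `scipy` is available in the environment (the brute-force version below does not need it):

```python
# alternative for many query points: build a k-d tree once, then query in ~O(log n) each
from scipy.spatial import cKDTree

def nearxy_tree(x, y, xi, yi):
    # stack the mesh node coordinates into an (n, 2) array and index them once
    tree = cKDTree(np.column_stack([x, y]))
    _, ind = tree.query(np.column_stack([xi, yi]))
    return ind
```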
# find the indices of the points in (x,y) closest to the points in (xi,yi) def nearxy(x,y,xi,yi): ind = np.ones(len(xi),dtype=int) for i in range(len(xi)): dist = np.sqrt((x-xi[i])**2+(y-yi[i])**2) ind[i] = dist.argmin() return ind #just offshore of Galveston lat = 29.2329856 lon = -95.1535041 ind = nearxy(ds['x'].values,ds['y'].values,[lon], [lat]) ds['zeta'][:,ind].hvplot(x='time', grid=True)
_____no_output_____
BSD-3-Clause
hurricane_ike_water_levels.ipynb
ocefpaf/hurricane-ike-water-levels
Hello, PyTorch![img](https://pytorch.org/tutorials/_static/pytorch-logo-dark.svg)__This notebook__ will teach you to use PyTorch's low-level core. If you're running this notebook outside the course environment, you can install it [here](https://pytorch.org).__PyTorch feels__ different from tensorflow/theano on almost every level. TensorFlow makes your code live in two "worlds" simultaneously: symbolic graphs and actual tensors. First you declare a symbolic "recipe" of how to get from inputs to outputs, then feed it with actual minibatches of data. In PyTorch, __there's only one world__: all tensors have a numeric value.You compute outputs on the fly without pre-declaring anything. The code looks exactly like pure NumPy with one exception: PyTorch computes gradients for you. And can run stuff on GPU. And has a number of pre-implemented building blocks for your neural nets. [And a few more things.](https://medium.com/towards-data-science/pytorch-vs-tensorflow-spotting-the-difference-25c75777377b)And now we finally shut up and let PyTorch do the talking.
import sys, os if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'): !wget -q https://raw.githubusercontent.com/yandexdataschool/deep_vision_and_graphics/fall21/week01-pytorch_intro/notmnist.py !touch .setup_complete import numpy as np import torch print(torch.__version__) # numpy world x = np.arange(16).reshape(4, 4) print("X:\n%s\n" % x) print("X.shape: %s\n" % (x.shape,)) print("add 5:\n%s\n" % (x + 5)) print("X*X^T:\n%s\n" % np.dot(x, x.T)) print("mean over rows:\n%s\n" % (x.mean(axis=-1))) print("cumsum of cols:\n%s\n" % (np.cumsum(x, axis=0))) # PyTorch world x = np.arange(16).reshape(4, 4) x = torch.tensor(x, dtype=torch.float32) # or torch.arange(0, 16).view(4, 4) print("X:\n%s" % x) print("X.shape: %s\n" % (x.shape,)) print("add 5:\n%s" % (x + 5)) print("X*X^T:\n%s" % torch.matmul(x, x.transpose(1, 0))) # short: x.mm(x.t()) print("mean over rows:\n%s" % torch.mean(x, dim=-1)) print("cumsum of cols:\n%s" % torch.cumsum(x, dim=0))
X: tensor([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [12., 13., 14., 15.]]) X.shape: torch.Size([4, 4]) add 5: tensor([[ 5., 6., 7., 8.], [ 9., 10., 11., 12.], [13., 14., 15., 16.], [17., 18., 19., 20.]]) X*X^T: tensor([[ 14., 38., 62., 86.], [ 38., 126., 214., 302.], [ 62., 214., 366., 518.], [ 86., 302., 518., 734.]]) mean over rows: tensor([ 1.5000, 5.5000, 9.5000, 13.5000]) cumsum of cols: tensor([[ 0., 1., 2., 3.], [ 4., 6., 8., 10.], [12., 15., 18., 21.], [24., 28., 32., 36.]])
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
NumPy and PyTorchAs you can notice, PyTorch allows you to hack stuff much the same way you did with NumPy. No graph declaration, no placeholders, no sessions. This means that you can _see the numeric value of any tensor at any moment of time_. Debugging such code can be done by printing tensors or using any debug tool you want (e.g. [PyCharm debugger](https://www.jetbrains.com/help/pycharm/part-1-debugging-python-code.html) or [gdb](https://wiki.python.org/moin/DebuggingWithGdb)).You could also notice a few new method names and a different API. So no, there's no compatibility with NumPy [yet](https://github.com/pytorch/pytorch/issues/2228) and yes, you'll have to memorize all the names again. Get excited!![img](http://i0.kym-cdn.com/entries/icons/original/000/017/886/download.jpg)For example,
* If something takes a list/tuple of axes in NumPy, you can expect it to take `*args` in PyTorch
  * `x.reshape([1,2,8]) -> x.view(1,2,8)`
* You should swap `axis` for `dim` in operations like `mean` or `cumsum`
  * `x.sum(axis=-1) -> x.sum(dim=-1)`
* Most mathematical operations are the same, but types and shaping are different
  * `x.astype('int64') -> x.type(torch.LongTensor)`

To help you acclimatize, there's a [table](https://github.com/torch/torch7/wiki/Torch-for-NumPy-users) covering most new things. There's also a neat [documentation page](http://pytorch.org/docs/master/).Finally, if you're stuck with a technical problem, we recommend searching [PyTorch forums](https://discuss.pytorch.org/). Or just googling, which usually works just as efficiently. If you feel like giving up, remember two things: __GPU__ and __free gradients__. Besides, you can always jump back to NumPy with `x.numpy()`. Warmup: trigonometric knotwork_inspired by [this post](https://www.quora.com/What-are-the-most-interesting-equation-plots)_There are some simple mathematical functions with cool plots. For one, consider this:$$ x(t) = t - 1.5 * cos(15 t) $$$$ y(t) = t - 1.5 * sin(16 t) $$
import matplotlib.pyplot as plt
%matplotlib inline

t = torch.linspace(-10, 10, steps=10000)

# compute x(t) and y(t) as defined above
x = t - 1.5 * torch.cos(15 * t)
y = t - 1.5 * torch.sin(16 * t)

plt.plot(x.numpy(), y.numpy())
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
If you're done early, try adjusting the formula and seeing how it affects the function.

---

Automatic gradientsAny self-respecting DL framework must do your backprop for you. Torch handles this with the `autograd` module.The general pipeline looks like this:
* When creating a tensor, you mark it as `requires_grad`:
  * `torch.zeros(5, requires_grad=True)`
  * `torch.tensor(np.arange(5), dtype=torch.float32, requires_grad=True)`
* Define some differentiable `loss = arbitrary_function(a)`
* Call `loss.backward()`
* Gradients are now available as ```a.grad```

Below is the pipeline at its smallest, and then __a real example:__ fitting a linear regression on Boston house prices.
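A minimal sketch of the whole autograd round-trip on a single scalar:

```python
# d(a**2 + 3*a)/da = 2*a + 3, which is 7 at a = 2
a = torch.tensor(2.0, requires_grad=True)
loss = a ** 2 + 3 * a
loss.backward()
print(a.grad)  # tensor(7.)
```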
from sklearn.datasets import load_boston
boston = load_boston()
plt.scatter(boston.data[:, -1], boston.target)

NLR_DEGREE = 3
LR = 1e-2

w = torch.rand(NLR_DEGREE + 1, requires_grad=True)

# polynomial features 1, x, x^2, x^3 -- the x^0 column plays the role of the bias term
x = torch.tensor(boston.data[:, -1] / 10, dtype=torch.float32)
x = x.unsqueeze(-1) ** torch.arange(NLR_DEGREE + 1)
y = torch.tensor(boston.target, dtype=torch.float32)

y_pred = x @ w  # w is 1-D, so no transpose is needed
loss = torch.mean((y_pred - y)**2)

# propagate gradients
loss.backward()
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
The gradients are now stored in `.grad` of those variables that require them.
print("dL/dw = \n", w.grad) # print("dL/db = \n", b.grad)
dL/dw = tensor([ -43.1492, -43.7833, -60.1212, -103.8557])
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
If you compute gradient from multiple losses, the gradients will add up at variables, therefore it's useful to __zero the gradients__ between iteratons.
from IPython.display import clear_output for i in range(int(1e5)): y_pred = x @ w.T # + b loss = torch.mean((y_pred - y)**2) loss.backward() w.data -= LR * w.grad.data # b.data -= LR * b.grad.data # zero gradients w.grad.data.zero_() # b.grad.data.zero_() # the rest of code is just bells and whistles with torch.no_grad(): if (i + 1) % int(1e4) == 0: clear_output(True) plt.scatter(x[:, 1].numpy(), y.numpy()) plt.scatter(x[:, 1].numpy(), y_pred.numpy(), color='orange', linewidth=5) plt.show() print("loss = ", loss.numpy()) if loss.numpy() < 0.5: print("Done!") break
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
__Bonus quest__: try implementing some nonlinear regression. You can try quadratic features or some trigonometry, or a simple neural network (one possible starting point is sketched below). The only difference is that now you have more variables and a more complicated `y_pred`. High-level PyTorchSo far we've been dealing with the low-level PyTorch API. While it's absolutely vital for any custom losses or layers, building large neural nets in it is a bit clumsy.Luckily, there's also a high-level PyTorch interface with pre-defined layers, activations and training algorithms. We'll cover them as we go through a simple image recognition problem: classifying letters into __"A"__ vs __"B"__.
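Before we move on — if you'd like a concrete starting point for the bonus quest above, here is one possible sketch (not the reference solution): a tiny one-hidden-layer net trained with the same manual loop, reusing `boston` and `y` from the regression cells above.

```python
# one possible bonus-quest sketch: a tiny hand-rolled neural net
x_raw = torch.tensor(boston.data[:, -1:] / 10, dtype=torch.float32)  # shape [n, 1]
w1 = torch.randn(1, 8, requires_grad=True)
w2 = torch.randn(8, 1, requires_grad=True)

for _ in range(1000):
    y_pred = torch.tanh(x_raw @ w1) @ w2        # hidden tanh layer, then linear readout
    loss = torch.mean((y_pred[:, 0] - y) ** 2)
    loss.backward()
    for p in (w1, w2):
        p.data -= 1e-2 * p.grad.data            # plain gradient descent step
        p.grad.data.zero_()                     # don't let gradients accumulate
```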
from notmnist import load_notmnist X_train, y_train, X_test, y_test = load_notmnist(letters='AB') X_train, X_test = X_train.reshape([-1, 784]), X_test.reshape([-1, 784]) print("Train size = %i, test_size = %i" % (len(X_train), len(X_test))) for i in [0, 1]: plt.subplot(1, 2, i + 1) plt.imshow(X_train[i].reshape([28, 28])) plt.title(str(y_train[i]))
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
Let's start with layers. The main abstraction here is __`torch.nn.Module`__:
from torch import nn import torch.nn.functional as F print(nn.Module.__doc__)
Base class for all neural network modules. Your models should also subclass this class. Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:: import torch.nn as nn import torch.nn.functional as F class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5) self.conv2 = nn.Conv2d(20, 20, 5) def forward(self, x): x = F.relu(self.conv1(x)) return F.relu(self.conv2(x)) Submodules assigned in this way will be registered, and will have their parameters converted too when you call :meth:`to`, etc. :ivar training: Boolean represents whether this module is in training or evaluation mode. :vartype training: bool
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
There's a vast library of popular layers and architectures already built for ya'.This is a binary classification problem, so we'll train __Logistic Regression__.$$P(y_i | X_i) = \sigma(W \cdot X_i + b) ={ 1 \over {1+e^{- [W \cdot X_i + b]}} }$$
# create a network that stacks layers on top of each other
model = nn.Sequential()

# add first "dense" layer with 784 input units and 1 output unit.
model.add_module('l1', nn.Linear(784, 1))

# add sigmoid activation to turn the output into a probability
# note: layer names must be unique
model.add_module('l2', nn.Sigmoid())

print("Weight shapes:", [w.shape for w in model.parameters()])

# create dummy data with 3 samples and 784 features
x = torch.tensor(X_train[:3], dtype=torch.float32)
y = torch.tensor(y_train[:3], dtype=torch.float32)

# compute outputs given inputs, both are variables
y_predicted = model(x)[:, 0]

y_predicted  # display what we've got
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
Let's now define a loss function for our model.The natural choice is to use binary crossentropy (aka logloss, negative llh):$$ L = {1 \over N} \underset{X_i,y_i} \sum - [ y_i \cdot log P(y_i=1 | X_i) + (1-y_i) \cdot log (1-P(y_i=1 | X_i)) ]$$
crossentropy_lambda = lambda input, target: -(target * torch.log(input) + (1 - target) * torch.log(1 - input)) crossentropy = crossentropy_lambda(y_predicted, y) loss = crossentropy.mean() assert tuple(crossentropy.size()) == ( 3,), "Crossentropy must be a vector with element per sample" assert tuple(loss.size()) == tuple( ), "Loss must be scalar. Did you forget the mean/sum?" assert loss.data.numpy() > 0, "Crossentropy must non-negative, zero only for perfect prediction" assert loss.data.numpy() <= np.log( 3), "Loss is too large even for untrained model. Please double-check it."
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
__Note:__ you can also find many such functions in `torch.nn.functional`, just type __`F.`__. __Torch optimizers__When we trained Linear Regression above, we had to manually `.zero_()` gradients on both our variables. Imagine that code for a 50-layer network.Again, to keep it from getting dirty, there's `torch.optim` module with pre-implemented algorithms:
opt = torch.optim.RMSprop(model.parameters(), lr=0.01)

# here's how it's used:
opt.zero_grad()   # clear gradients
loss.backward()   # add new gradients
opt.step()        # change weights

# dispose of old variables to avoid bugs later
# (note: the original also deleted `y_pred`, which was never defined in this scope)
del x, y, y_predicted, loss
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
Putting it all together
# create network again just in case model = nn.Sequential() model.add_module('first', nn.Linear(784, 1)) model.add_module('second', nn.Sigmoid()) opt = torch.optim.Adam(model.parameters(), lr=1e-3) history = [] for i in range(100): # sample 256 random images ix = np.random.randint(0, len(X_train), 256) x_batch = torch.tensor(X_train[ix], dtype=torch.float32) y_batch = torch.tensor(y_train[ix], dtype=torch.float32) # predict probabilities y_predicted = model(x_batch).squeeze() assert y_predicted.dim( ) == 1, "did you forget to select first column with [:, 0]" # compute loss, just like before loss = crossentropy_lambda(y_predicted, y_batch).mean() # compute gradients loss.backward() # Adam step opt.step() # clear gradients opt.zero_grad() history.append(loss.data.numpy()) if i % 10 == 0: print("step #%i | mean loss = %.3f" % (i, np.mean(history[-10:])))
step #0 | mean loss = 0.682 step #10 | mean loss = 0.371 step #20 | mean loss = 0.233 step #30 | mean loss = 0.168 step #40 | mean loss = 0.143 step #50 | mean loss = 0.126 step #60 | mean loss = 0.121 step #70 | mean loss = 0.114 step #80 | mean loss = 0.108 step #90 | mean loss = 0.109
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
__Debugging tips:__
* Make sure your model predicts probabilities correctly. Just print them and see what's inside.
* Don't forget the _minus_ sign in the loss function! It's a mistake 99% of people make at some point.
* Make sure you zero out gradients after each step. Seriously:)
* In general, PyTorch's error messages are quite helpful; read 'em before you google 'em.
* If you see nan/inf, print what happens at each iteration to find out where exactly it occurs.
* If the loss goes down and then turns nan midway through, try a smaller learning rate. (Our current loss formula is unstable — see the stabilization sketch below.)

EvaluationLet's see how our model performs on test data
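One aside before the evaluation, expanding the stability tip above — two hedged ways to keep the hand-written crossentropy from producing nan when a predicted probability hits exactly 0 or 1:

```python
# option 1: clamp probabilities away from 0 and 1 before taking logs
eps = 1e-7
stable_crossentropy = lambda p, t: -(t * torch.log(p.clamp(eps, 1 - eps))
                                     + (1 - t) * torch.log((1 - p).clamp(eps, 1 - eps)))

# option 2: use the built-in loss, which handles this for you;
# F.binary_cross_entropy(y_predicted, y_batch) matches our formula, and dropping the
# Sigmoid layer in favor of F.binary_cross_entropy_with_logits is the most stable route
```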
# use your model to predict classes (0 or 1) for all test samples with torch.no_grad(): predicted_y_test = (model(torch.from_numpy(X_test)) > 0.5).squeeze().numpy() assert isinstance(predicted_y_test, np.ndarray), "please return np array, not %s" % type( predicted_y_test) assert predicted_y_test.shape == y_test.shape, "please predict one class for each test sample" assert np.in1d(predicted_y_test, y_test).all(), "please predict class indexes" accuracy = np.mean(predicted_y_test == y_test) print("Test accuracy: %.5f" % accuracy) assert accuracy > 0.95, "try training longer"
Test accuracy: 0.96051
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
More about PyTorch:* Using torch on GPU and multi-GPU - [link](http://pytorch.org/docs/master/notes/cuda.html)* More tutorials on PyTorch - [link](http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html)* PyTorch examples - a repo that implements many cool DL models in PyTorch - [link](https://github.com/pytorch/examples)* Practical PyTorch - a repo that implements some... other cool DL models... yes, in PyTorch - [link](https://github.com/spro/practical-pytorch)* And some more - [link](https://www.reddit.com/r/pytorch/comments/6z0yeo/pytorch_and_pytorch_tricks_for_kaggle/)--- Homework tasksThere will be three tasks worth 2, 3 and 5 points respectively. If you get stuck with no progress, try switching to the next task and returning later. Task I (2 points) - tensormancy![img](https://media.giphy.com/media/3o751UMCYtSrRAFRFC/giphy.gif)When dealing with more complex stuff like neural network, it's best if you use tensors the way samurai uses his sword. __1.1 The Cannabola__ [(_disclaimer_)](https://gist.githubusercontent.com/justheuristic/e2c1fa28ca02670cabc42cacf3902796/raw/fd3d935cef63a01b85ed2790b5c11c370245cbd7/stddisclaimer.h)Let's write another function, this time in polar coordinates:$$\rho(\theta) = (1 + 0.9 \cdot cos (8 \cdot \theta) ) \cdot (1 + 0.1 \cdot cos(24 \cdot \theta)) \cdot (0.9 + 0.05 \cdot cos(200 \cdot \theta)) \cdot (1 + sin(\theta))$$Then convert it into cartesian coordinates ([howto](http://www.mathsisfun.com/polar-cartesian-coordinates.html)) and plot the results.Use torch tensors only: no lists, loops, numpy arrays, etc.
theta = torch.linspace(- np.pi, np.pi, steps=1000) # compute rho(theta) as per formula above rho = (1 + 0.9 * torch.cos(8 * theta)) * (1 + 0.1 * torch.cos(24 * theta)) * (0.9 + 0.05 * torch.cos(200 * theta)) * (1 + torch.sin(theta)) # Now convert polar (rho, theta) pairs into cartesian (x,y) to plot them. x = rho * torch.cos(theta) y = rho * torch.sin(theta) plt.figure(figsize=[6, 6]) plt.fill(x.numpy(), y.numpy(), color='green') plt.grid()
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
Task II: The Game of Life (3 points)Now it's time for you to make something more challenging. We'll implement Conway's [Game of Life](http://web.stanford.edu/~cdebs/GameOfLife/) in _pure PyTorch_. While this is still a toy task, implementing game of life this way has one cool benefit: __you'll be able to run it on GPU!__ Indeed, what could be a better use of your GPU than simulating Game of Life on 1M/1M grids?![img](https://cdn.tutsplus.com/gamedev/authors/legacy/Stephane%20Beniak/2012/09/11/Preview_Image.png)If you've skipped the URL above out of sloth, here's the Game of Life:* You have a 2D grid of cells, where each cell is "alive"(1) or "dead"(0)* Any living cell that has 2 or 3 neighbors survives, else it dies [0,1 or 4+ neighbors]* Any cell with exactly 3 neighbors becomes alive (if it was dead)For this task, you are given a reference NumPy implementation that you must convert to PyTorch._[NumPy code inspired by: https://github.com/rougier/numpy-100]___Note:__ You can find convolution in `torch.nn.functional.conv2d(Z,filters)`. Note that it has a different input format.__Note 2:__ From the mathematical standpoint, PyTorch convolution is actually cross-correlation. Those two are very similar operations. More info: [video tutorial](https://www.youtube.com/watch?v=C3EEy8adxvc), [scipy functions review](http://programmerz.ru/questions/26903/2d-convolution-in-python-similar-to-matlabs-conv2-question), [stack overflow source](https://stackoverflow.com/questions/31139977/comparing-matlabs-conv2-with-scipys-convolve2d).
from scipy.signal import correlate2d def np_update(Z): # Count neighbours with convolution filters = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]]) N = correlate2d(Z, filters, mode='same') # Apply rules birth = (N == 3) & (Z == 0) survive = ((N == 2) | (N == 3)) & (Z == 1) Z[:] = birth | survive return Z def torch_update(Z): """ Implement an update function that does to Z exactly the same as np_update. :param Z: torch.FloatTensor of shape [height,width] containing 0s(dead) an 1s(alive) :returns: torch.FloatTensor Z after updates. You can opt to create a new tensor or change Z inplace. """ filters = torch.FloatTensor([[1, 1, 1], [1, 0, 1], [1, 1, 1]]) N = F.conv2d(Z.view(1, 1, *Z.shape), filters.view(1, 1, *filters.shape), padding=1).squeeze() birth = (N == 3) & (Z == 0) survive = ((N == 2) | (N == 3)) & (Z == 1) Z[:] = birth | survive return Z # initial frame Z_numpy = np.random.choice([0, 1], p=(0.5, 0.5), size=(100, 100)) Z = torch.from_numpy(Z_numpy).type(torch.FloatTensor) # your debug polygon :) Z_new = torch_update(Z.clone()) # tests Z_reference = np_update(Z_numpy.copy()) assert np.all(Z_new.numpy() == Z_reference), \ "your PyTorch implementation doesn't match np_update. Look into Z and np_update(ZZ) to investigate." print("Well done!") %matplotlib notebook plt.ion() # initialize game field Z = np.random.choice([0, 1], size=(100, 100)) Z = torch.from_numpy(Z).type(torch.FloatTensor) fig = plt.figure() ax = fig.add_subplot(111) fig.show() for _ in range(100): # update Z = torch_update(Z) # re-draw image ax.clear() ax.imshow(Z.numpy(), cmap='gray') fig.canvas.draw() # Some fun setups for your amusement # parallel stripes Z = np.arange(100) % 2 + np.zeros([100, 100]) # with a small imperfection Z[48:52, 50] = 1 Z = torch.from_numpy(Z).type(torch.FloatTensor) fig = plt.figure() ax = fig.add_subplot(111) fig.show() for _ in range(100): Z = torch_update(Z) ax.clear() ax.imshow(Z.numpy(), cmap='gray') fig.canvas.draw()
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
More fun with Game of Life: [video](https://www.youtube.com/watch?v=C2vgICfQawE) Task III: Going deeper (5 points)Your ultimate task for this week is to build your first neural network [almost] from scratch and pure PyTorch.This time you will solve the same digit recognition problem, but at a larger scale* 10 different letters* 20k samplesWe want you to build a network that reaches at least 80% accuracy and has at least 2 linear layers in it. Naturally, it should be nonlinear to beat logistic regression.With 10 classes you will need to use __Softmax__ at the top instead of sigmoid and train using __categorical crossentropy__ (see [here](http://wiki.fast.ai/index.php/Log_Loss)). Write your own loss or use `torch.nn.functional.nll_loss`. Just make sure you understand what it accepts as input.Note that you are not required to build 152-layer monsters here. A 2-layer (one hidden, one output) neural network should already give you an edge over logistic regression.__[bonus kudos]__If you've already beaten logistic regression with a two-layer net, but enthusiasm still ain't gone, you can try improving the test accuracy even further! It should be possible to reach 90% without convnets.__SPOILERS!__At the end of the notebook you will find a few tips and frequent errors. If you feel confident enough, just start coding right away and get there ~~if~~ once you need to untangle yourself.
from notmnist import load_notmnist X_train, y_train, X_test, y_test = load_notmnist(letters='ABCDEFGHIJ') X_train, X_test = X_train.reshape([-1, 784]), X_test.reshape([-1, 784]) %matplotlib inline plt.figure(figsize=[12, 4]) for i in range(20): plt.subplot(2, 10, i+1) plt.imshow(X_train[i].reshape([28, 28])) plt.title(str(y_train[i])) from tqdm.notebook import tqdm from torch.utils.data import TensorDataset, DataLoader INPUT_SHAPE = 784 EPOCHS = 15 DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu' model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.BatchNorm1d(256), nn.ReLU(), nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10), nn.Softmax(dim=-1)) model.to(DEVICE) train_dataset = TensorDataset(torch.from_numpy(X_train), torch.from_numpy(y_train)) test_dataset = TensorDataset(torch.from_numpy(X_test), torch.from_numpy(y_test)) train_dataloader = DataLoader(train_dataset, batch_size=16, shuffle=True, num_workers=2, prefetch_factor=2) test_dataloader = DataLoader(test_dataset, batch_size=16, shuffle=False, num_workers=2, prefetch_factor=2) loss = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters()) def train_for_epoch(): def measure_accuracy(y_pred, y): return (y_pred == y).sum() / len(y) train_outputs = [] train_gt = [] test_outputs = [] test_gt = [] for x, y in train_dataloader: x = x.to(DEVICE) y = y.to(DEVICE) optimizer.zero_grad() y_pred = model(x) loss_value = loss(y_pred, y) loss_value.backward() optimizer.step() train_outputs.extend(y_pred.detach().argmax(dim=-1).cpu().tolist()) train_gt.extend(y.tolist()) train_accuracy = measure_accuracy(np.array(train_outputs), train_gt) print(f'Train accuracy: {train_accuracy:.3f}') with torch.no_grad(): for x, y in test_dataloader: y_pred = model(x.to(DEVICE)) test_outputs.extend(y_pred.detach().argmax(dim=-1).cpu().tolist()) test_gt.extend(y.tolist()) test_accuracy = measure_accuracy(np.array(test_outputs), test_gt) print(f'Test accuracy: {test_accuracy:.3f}') for ep in tqdm(range(EPOCHS)): train_for_epoch()
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
Kili Tutorial: Importing medical data into a frame project In this tutorial, we will show you how to import dicom data into a [Frame Kili project](https://cloud.kili-technology.com/docs/video-interfaces/multi-frames-classification/docsNav). Such projects allow you to annotate volumes of image data.The data we use comes from [The Cancer Genome Atlas Lung Adenocarcinoma (TCGA-LUAD) data collection](https://wiki.cancerimagingarchive.net/display/Public/TCGA-LUAD). We selected 3 scans out of this dataset. Downloading data Let's first import the scans. We host these files in a .zip on GDrive.
import os import subprocess import tqdm if 'recipes' in os.getcwd(): os.chdir('..') os.makedirs(os.path.expanduser('~/Downloads'), exist_ok=True)
_____no_output_____
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
We will use a small package to help download the file hosted on Google Drive
%%bash pip install gdown gdown https://drive.google.com/uc?id=1q3qswXthFh3xMtAAnePph6vav3N7UtOF -O ~/Downloads/TCGA-LUAD.zip !apt-get install unzip !unzip -o ~/Downloads/TCGA-LUAD.zip -d ~/Downloads/ > /dev/null
_____no_output_____
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
Reading data We can then read the dicom files with [pydicom](https://pydicom.github.io/pydicom/stable/).
ASSET_ROOT = os.path.expanduser('~/Downloads/TCGA-LUAD') sorted_files = {} asset_number = 0 for root, dirs, files in os.walk(ASSET_ROOT): if len(files) > 0: file_paths = list(map(lambda path: os.path.join(root, path), files)) sorted_files[f'asset-{asset_number+1}'] = sorted(file_paths, key=lambda path: int(path.split('/')[-1].split('-')[1].split('.')[0])) asset_number += 1
_____no_output_____
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
Let's see what is inside the dataset:
!pip install Pillow pydicom from PIL import Image import pydicom def read_dcm_image(path): dicom = pydicom.dcmread(path) image = dicom.pixel_array # Currently, Kili does not support windowing in the application. # This will soon change, but until then we advise you to reduce the range to 256 values. image = (image - image.min()) / (image.max() - image.min()) * 256 return Image.fromarray(image).convert('RGB') for asset_key in sorted_files.keys(): print(asset_key) im = read_dcm_image(sorted_files[asset_key][20]) im.save(f'./recipes/img/frame_dicom_data_{asset_key}.png')
Requirement already satisfied: Pillow in /opt/anaconda3/lib/python3.7/site-packages (8.4.0) Requirement already satisfied: pydicom in /opt/anaconda3/lib/python3.7/site-packages (2.0.0) asset-1 asset-2 asset-3
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
![asset-1](./img/frame_dicom_data_asset-1.png) ![asset-2](./img/frame_dicom_data_asset-2.png) ![asset-3](./img/frame_dicom_data_asset-3.png) Extracting and serving images For each of the dicom `.dcm` files, let's extract its content (image) and save it into a `.jpeg` image.
sorted_images = {} for asset_key, files in sorted_files.items(): images = [] for file in tqdm.tqdm(files): print(file) im = read_dcm_image(file) im_file = file.replace('.dcm', '.jpeg') im.save(im_file, format='JPEG') images.append(im_file) sorted_images[asset_key] = images
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 201/201 [00:02<00:00, 85.82it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 227/227 [00:02<00:00, 105.77it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 329/329 [00:02<00:00, 112.38it/s]
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
We now have extracted jpeg images processable by Kili. Creating the project We can now import those assets into a FRAME project! Let's begin by creating a project.
## You can also directly create the interface on the application. interface = { "jobRendererWidth": 0.17, "jobs": { "JOB_0": { "mlTask": "OBJECT_DETECTION", "tools": [ "semantic" ], "instruction": "Segment the right class", "required": 1, "isChild": False, "content": { "categories": { "BONE": { "name": "Bone", "children": [], "color": "#0755FF" }, "LUNG": { "name": "Lung", "children": [], "color": "#EEBA00" }, "TISSUE_0": { "name": "Tissue", "children": [], "color": "#941100" } }, "input": "radio" } } } } ## Authentication from kili.client import Kili api_key = os.getenv('KILI_USER_API_KEY') api_endpoint = os.getenv('KILI_API_ENDPOINT') # If you use Kili SaaS, use the url 'https://cloud.kili-technology.com/api/label/v2/graphql' kili = Kili(api_key=api_key, api_endpoint=api_endpoint) ## Project creation project = kili.create_project( description='Demo FRAME project', input_type='FRAME', json_interface=interface, title='Lungs from TCGA-LUAD' ) project_id = project['id']
/Users/maximeduval/Documents/kili-playground/kili/authentication.py:97: UserWarning: Kili Playground version should match with Kili API version. Please install version: "pip install kili==2.100.0" warnings.warn(message, UserWarning)
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
Importing images Finally, let's import the volumes using `appendManyToDataset` (see [link](https://staging.cloud.kili-technology.com/docs/python-graphql-api/python-api/append_many_to_dataset)). The key argument is `json_content_array`, which is a list of lists of strings: each element is the list of URLs or paths pointing to the images of one volume, as in the toy example below.
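To make the expected structure concrete, here is a toy `json_content_array` for two volumes of three frames each (the URLs are made up for illustration):

```python
# hypothetical illustration only: one inner list of frame URLs/paths per volume
json_content_array = [
    ['https://example.com/vol1/frame-1.jpeg',
     'https://example.com/vol1/frame-2.jpeg',
     'https://example.com/vol1/frame-3.jpeg'],
    ['https://example.com/vol2/frame-1.jpeg',
     'https://example.com/vol2/frame-2.jpeg',
     'https://example.com/vol2/frame-3.jpeg'],
]
```

Let's host these images locally to demonstrate how we would do it with cloud URLs, for example: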
subprocess.Popen(f'python -m http.server 8001 --directory {ASSET_ROOT}', shell=True, stdin=None, stdout=None, stderr=None, close_fds=True) ROOT_URL = 'http://localhost:8001/' def files_to_urls(files): return list(map(lambda file: ROOT_URL + file.split('TCGA-LUAD')[1], files)) kili.append_many_to_dataset( project_id=project_id, external_id_array=list(sorted_images.keys()), json_content_array=list(map(files_to_urls, sorted_images.values())) )
_____no_output_____
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
Or, as mentioned, you can simply provide the paths to your images and call the function as below:
kili.append_many_to_dataset( project_id=project_id, external_id_array=list(map(lambda key: f'local-path-{key}',sorted_images.keys())), json_content_array=list(sorted_images.values()) )
_____no_output_____
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
Back to the interface We can see our assets were imported...
ds_size = kili.count_assets(project_id=project_id) print(ds_size) assert ds_size == 6
6
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
PEST setupThis notebook reads in the existing MF6 model built using modflow-setup with the script `../scripts/setup_model.py`. This notebook makes extensive use of the `PstFrom` functionality in `pyemu` to set up multipliers on parameters. There are a few custom parameterization steps as well. Observations are also defined and assigned initial values and weights based on preliminary assumptions about error.
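A quick conceptual note before we start: `PstFrom` sets up *multiplier* parameters, meaning PEST never edits the model input files directly. The original arrays are preserved, and at run time each input is rebuilt as the original values times the estimated multipliers — roughly like this sketch (hypothetical arrays; not pyemu's actual runtime code):

```python
import numpy as np

# conceptual sketch of multiplier-style parameterization (hypothetical values)
original_k = np.full((3, 3), 10.0)     # preserved copy of the original model input
multiplier = np.full((3, 3), 1.5)      # the values PEST actually estimates
model_input = original_k * multiplier  # what MODFLOW reads each forward run
```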
# imports used throughout this notebook
import os
import shutil
import glob
import json

import numpy as np
import matplotlib.pyplot as plt

import flopy as fp
import pyemu

pyemu.__version__
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
define locations and other global variables
sim_ws = '../neversink_mf6/' # folder containing the MODFLOW6 files template_ws = '../run_data' # folder to create and write the PEST setup to noptmax0_dir = '../noptmax0_testing/' # folder in which to write noptmax=0 test run version of PST file
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
kill the `original` folder (a relic from the mfsetup process)
if os.path.exists(os.path.join(sim_ws, 'original')):
    shutil.rmtree(os.path.join(sim_ws, 'original'))

run_MF6 = True  # option to run MF6 to generate output; not needed if it has already been run in sim_ws

cdir = os.getcwd()

# optionally run MF6 to generate model output
if run_MF6:
    os.chdir(sim_ws)
    os.system('mf6')
    os.chdir(cdir)
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
create land surface observations we will need at the endThese will be used as inequality observations (less than) to enforce that heads should not exceed the model top. The option for spatial frequency is set below.
irch_file = f'{sim_ws}/irch.dat' # file with the highest active layer identified id3_file = f'{sim_ws}/idomain_003.dat' # deepest layer idomain - gives the maximum lateral footprint top_file = f'{sim_ws}/top.dat' # the model top top = np.loadtxt(top_file) top[top<-8000] = np.nan plt.imshow(top) plt.colorbar() id3 = np.loadtxt(id3_file, dtype=int) plt.imshow(id3) irch = np.loadtxt(irch_file, dtype=int) irch -= 1 # note that this is 1-based, not 0-based because it's a MF6 file plt.imshow(irch) plt.colorbar() # set frequency for land surface observations lateralls, in model cells lsobs_every_n_cells = 50 # make a grid of cells spaced at the spacing suggested above nrow, ncol = id3.shape j = list(range(ncol))[0:ncol:lsobs_every_n_cells] i = list(range(nrow))[0:nrow:lsobs_every_n_cells] J,I = np.meshgrid(j,i) points = list(zip(I.ravel(),J.ravel())) # now keep only those that are in active cells (using ibound of layer4 as the basis) and drop a few others keep_points = [(irch[i,j],i,j) for i,j in points if id3[i,j]==1] drop_points = [(0, 150, 50),(3, 150, 100),(3, 100, 50)] keep_points = [i for i in keep_points if i not in drop_points]
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
make list of indices
with open(os.path.join(sim_ws,'land_surf_obs-indices.csv'), 'w') as ofp: ofp.write('k,i,j,obsname\n') [ofp.write('{0},{1},{2},land_surf_obs_{1}_{2}\n'.format(*i)) for i in keep_points]
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
make an observations file
with open(os.path.join(sim_ws,'land_surf_obs-observations.csv'), 'w') as ofp: ofp.write('obsname,obsval\n') [ofp.write('land_surf_obs_{1}_{2},{3}\n'.format(*i, top[i[1],i[2]])) for i in keep_points]
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
Start setting up the `PstFrom` object to create PEST inputs load up the simulation
sim = fp.mf6.MFSimulation.load(sim_ws=sim_ws) m = sim.get_model() # manually create a spatial reference object from the grid.json metadata # this file created by modflow-setup grid_data = json.load(open(os.path.join(sim_ws,'neversink_grid.json'))) sr_model = pyemu.helpers.SpatialReference(delr=grid_data['delr'], delc=grid_data['delc'], rotation= grid_data['angrot'], epsg = grid_data['epsg'], xul = grid_data['xul'], yul = grid_data['yul'], units='meters', lenuni=grid_data['lenuni']) # create the PstFrom object pf = pyemu.utils.PstFrom(original_d=sim_ws, new_d=template_ws, remove_existing=True, longnames=True, spatial_reference=sr_model, zero_based=False)
2021-03-26 16:08:56.834646 starting: opening PstFrom.log for logging 2021-03-26 16:08:56.835640 starting PstFrom process 2021-03-26 16:08:56.868554 starting: setting up dirs 2021-03-26 16:08:56.869551 starting: removing existing new_d '..\run_data' 2021-03-26 16:08:57.058048 finished: removing existing new_d '..\run_data' took: 0:00:00.188497 2021-03-26 16:08:57.058048 starting: copying original_d '..\neversink_mf6' to new_d '..\run_data' 2021-03-26 16:08:58.243337 finished: copying original_d '..\neversink_mf6' to new_d '..\run_data' took: 0:00:01.185289 2021-03-26 16:08:58.245331 finished: setting up dirs took: 0:00:01.376777
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
we will parameterize:
- pilot points for k, k33, r
- zones for k, k33, r
- constant for R
- sfr conductance by reach
- well pumping
- CHDs

parameterize list-directed well and chd packages
list_tags = {'wel_':[.8,1.2], 'chd_':[.8,1.2]} for tag,bnd in list_tags.items(): lb,ub = bnd filename = os.path.basename(glob.glob(os.path.join(template_ws, '*{}*'.format(tag)))[0]) pf.add_parameters(filenames=filename, par_type = 'grid', upper_bound=ub, lower_bound=lb, par_name_base=tag, index_cols=[0,1,2], use_cols=[3],pargp=tag[:-1],alt_inst_str='', comment_char='#')
2021-03-26 16:08:58.270602 starting: adding grid type multiplier style parameters for file(s) ['wel_000.dat'] 2021-03-26 16:08:58.271600 starting: loading list ..\run_data\wel_000.dat 2021-03-26 16:08:58.272597 starting: reading list ..\run_data\wel_000.dat 2021-03-26 16:08:58.279579 finished: reading list ..\run_data\wel_000.dat took: 0:00:00.006982 2021-03-26 16:08:58.279579 loaded list '..\run_data\wel_000.dat' of shape (34, 5) 2021-03-26 16:08:58.284565 finished: loading list ..\run_data\wel_000.dat took: 0:00:00.012965 2021-03-26 16:08:58.285562 starting: writing list-based template file '..\run_data\wel__0_grid.csv.tpl' 2021-03-26 16:08:58.336024 finished: writing list-based template file '..\run_data\wel__0_grid.csv.tpl' took: 0:00:00.050462 2021-03-26 16:08:58.369854 finished: adding grid type multiplier style parameters for file(s) ['wel_000.dat'] took: 0:00:00.099252 2021-03-26 16:08:58.371847 starting: adding grid type multiplier style parameters for file(s) ['chd_000.dat'] 2021-03-26 16:08:58.371847 starting: loading list ..\run_data\chd_000.dat 2021-03-26 16:08:58.372844 starting: reading list ..\run_data\chd_000.dat 2021-03-26 16:08:58.376835 finished: reading list ..\run_data\chd_000.dat took: 0:00:00.003991 2021-03-26 16:08:58.377831 loaded list '..\run_data\chd_000.dat' of shape (176, 4) 2021-03-26 16:08:58.381821 finished: loading list ..\run_data\chd_000.dat took: 0:00:00.009974 2021-03-26 16:08:58.382818 starting: writing list-based template file '..\run_data\chd__0_grid.csv.tpl' 2021-03-26 16:08:58.418722 finished: writing list-based template file '..\run_data\chd__0_grid.csv.tpl' took: 0:00:00.035904 2021-03-26 16:08:58.441661 finished: adding grid type multiplier style parameters for file(s) ['chd_000.dat'] took: 0:00:00.069814
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
now set up pilot points
k_ub = 152 # ultimate upper bound on K # set up pilot points pp_tags = {'k':[.01,10.,k_ub], 'k33':[.01,10.,k_ub/10]}
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
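The excerpt ends here; in the full workflow, `pp_tags` presumably feeds `pf.add_parameters` calls with `par_type='pilotpoints'`, roughly along these lines (a hedged sketch — the file-name pattern, pilot-point spacing, and group names are assumptions, not the notebook's actual values):

```python
# hedged sketch of pilot-point setup driven by pp_tags (details are assumptions)
for tag, (lb, ub, ultimate_ub) in pp_tags.items():
    arr_files = [os.path.basename(f)
                 for f in glob.glob(os.path.join(template_ws, f'{tag}_*.dat'))]
    pf.add_parameters(filenames=arr_files,
                      par_type='pilotpoints',  # pyemu's pilot-point array parameterization
                      par_name_base=tag,
                      pargp=f'{tag}_pp',
                      upper_bound=ub,
                      lower_bound=lb,
                      ult_ubound=ultimate_ub,  # ultimate bound on the multiplied model input
                      pp_space=25)             # pilot-point spacing in model cells (assumed)
```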