Training the model
max_epochs = 100
vec_size = 50
alpha = 0.025

# Distributed memory model
model = Doc2Vec(vector_size=vec_size, alpha=alpha, min_alpha=0.00025, min_count=1, dm=1, workers=8)

# Initialize the model
model.build_vocab(documents)

# Train the model
for _ in tqdm_notebook(range(max_epochs)):
    model.train(documents, total_examples=model.corpus_count, epochs=model.epochs,)
    # Decaying learning rate
    model.alpha -= 0.0002
    # fix the learning rate, no decay
    model.min_alpha = model.alpha

# Save the model
model.save("article.d2v")
_____no_output_____
MIT
TOI/Doc2Vec.ipynb
aashish-jain/Social-unrest-prediction
Generate the document dictionary that can be used to access a document by tag
# Generate the document dictionary that can be used to access a document by tag
document_dic = {}
for doc, tag in documents:
    document_dic[tag[0]] = doc
_____no_output_____
MIT
TOI/Doc2Vec.ipynb
aashish-jain/Social-unrest-prediction
Test the model for Random Data from ACLED after Jan-1-2019
article = "On July 15, a long protest march by farmers, from Mandsaur in Madhya Pradesh to New Delhi, demanding loan waiver and fair price for their produce, reached Jaipur." article = ' '.join(generate_document_vocabulary(article))
_____no_output_____
MIT
TOI/Doc2Vec.ipynb
aashish-jain/Social-unrest-prediction
Find the closest Document
# Reference : https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/doc2vec-lee.ipynb

# find the vector for the preprocessed test article
vec = model.infer_vector(article.split())

# find 10 closest documents
sims = model.docvecs.most_similar([vec])

# Print the documents
for doc_tag, score in sims:
    print("Document has score :", score, "\nContent :", document_dic[doc_tag], "\n\n")
_____no_output_____
MIT
TOI/Doc2Vec.ipynb
aashish-jain/Social-unrest-prediction
https://github.com/pysal/mgwr/pull/56
import sys
sys.path.append("C:/Users/msachde1/Downloads/Research/Development/mgwr")

import warnings
warnings.filterwarnings("ignore")

import pandas as pd
import numpy as np

from mgwr.gwr import GWR
from spglm.family import Gaussian, Binomial, Poisson
from mgwr.gwr import MGWR
from mgwr.sel_bw import Sel_BW

import multiprocessing as mp
pool = mp.Pool()

from scipy import linalg
import numpy.linalg as la
from scipy import sparse as sp
from scipy.sparse import linalg as spla
from spreg.utils import spdot, spmultiply
from scipy import special

import libpysal as ps
import seaborn as sns
import matplotlib.pyplot as plt

from copy import deepcopy
import copy
from collections import namedtuple
import spglm
_____no_output_____
MIT-0
Notebooks/Binomial_MGWR_approaches_tried.ipynb
mehak-sachdeva/MGWR_book
Fundamental equation

By simple algebraic manipulation, the probability that Y=1 is:

\begin{align}p = 1 / (1 + \exp(-\beta_k x_{k,i}))\end{align}

Approaches tried:

1. Changing XB to `1 / (1 + np.exp (-1*np.sum(np.multiply(X,params),axis=1)))` - these are the predicted probabilities in (0,1).
2. Changing XB as above and writing a function to create temp_y as a binary variable using the condition `1 if BXi > 0 else 0`.
3. Derived manipulations to temp_y as in IWLS for logistic regression:
   `v = np.sum(np.multiply(X,params),axis=1)`
   `mu = 1/(1+(np.exp(-v)))`
   `z = v + (1/(mu * (1-mu)) * (y-mu))` -- this becomes the temp_y.
   A simple linear regression can then be run with z as the temporary dependent variable.
4. Taken from the GAM logistic model literature: `y=exp(b0+b1*x1+...+bm*xm)/{1+exp(b0+b1*x1+...+bm*xm)}`. Applying the logistic link function to the probability p (ranging between 0 and 1): `p' = log {p/(1-p)}`. By applying the logistic link function, we can now rewrite the model as `p' = b0 + b1*X1 + ... + bm*Xm`. Finally, we substitute the simple single-parameter additive terms to derive the generalized additive logistic model: `p' = b0 + f1(X1) + ... + fm(Xm)` (http://www.statsoft.com/textbook/generalized-additive-modelsgam)

This is the current approach in the latest commit: `XB = 1 / (1 + np.exp (-1*(np.multiply(X,params))))`. XB is now the probability and is treated as normally distributed; MGWR (Gaussian) is run on this as the dependent variable for the partial models.

Data: Clearwater data - downloaded from link: https://sgsup.asu.edu/sparc/multiscale-gwr
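Before loading the data, here is a minimal numpy sketch of the IWLS-style working response described in approach 3 above. The arrays `X`, `params` and `y` below are randomly generated stand-ins, not the notebook's data; in the notebook they come from the GWR fit.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(239, 1))           # design matrix (stand-in)
params = rng.normal(size=(239, 1))      # local coefficients (stand-in)
y = rng.integers(0, 2, size=(239, 1))   # binary response (stand-in)

v = np.sum(np.multiply(X, params), axis=1).reshape(-1, 1)   # linear predictor
mu = 1 / (1 + np.exp(-v))                                   # predicted probability
z = v + (1 / (mu * (1 - mu))) * (y - mu)                    # working response ("temp_y")
```

The Clearwater data referenced above is loaded next.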
data_p = pd.read_csv("C:/Users/msachde1/Downloads/logistic_mgwr_data/landslides.csv")
data_p.head()
_____no_output_____
MIT-0
Notebooks/Binomial_MGWR_approaches_tried.ipynb
mehak-sachdeva/MGWR_book
Helper functions - hardcoded here for simplicity in the notebook workflow. Please note: a separate bw_func_b will not be required once the changes are made in the repository.
kernel='bisquare' fixed=False spherical=False search_method='golden_section' criterion='AICc' interval=None tol=1e-06 max_iter=500 X_glob=[] def gwr_func(y, X, bw,family=Gaussian(),offset=None): return GWR(coords, y, X, bw, family,offset,kernel=kernel, fixed=fixed, constant=False, spherical=spherical, hat_matrix=False).fit( lite=True, pool=pool) def gwr_func_g(y, X, bw): return GWR(coords, y, X, bw, family=Gaussian(),offset=None,kernel=kernel, fixed=fixed, constant=False, spherical=spherical, hat_matrix=False).fit( lite=True, pool=pool) def bw_func_b(coords,y, X): selector = Sel_BW(coords,y, X,family=Binomial(),offset=None, X_glob=[], kernel=kernel, fixed=fixed, constant=False, spherical=spherical) return selector def bw_func_p(coords,y, X): selector = Sel_BW(coords,y, X,family=Poisson(),offset=off, X_glob=[], kernel=kernel, fixed=fixed, constant=False, spherical=spherical) return selector def bw_func(coords,y,X): selector = Sel_BW(coords,y,X,X_glob=[], kernel=kernel, fixed=fixed, constant=False, spherical=spherical) return selector def sel_func(bw_func, bw_min=None, bw_max=None): return bw_func.search( search_method=search_method, criterion=criterion, bw_min=bw_min, bw_max=bw_max, interval=interval, tol=tol, max_iter=max_iter, pool=pool, verbose=False)
_____no_output_____
MIT-0
Notebooks/Binomial_MGWR_approaches_tried.ipynb
mehak-sachdeva/MGWR_book
GWR Binomial model with independent variable, x = slope
coords = list(zip(data_p['X'],data_p['Y'])) y = np.array(data_p['Landslid']).reshape((-1,1)) elev = np.array(data_p['Elev']).reshape((-1,1)) slope = np.array(data_p['Slope']).reshape((-1,1)) SinAspct = np.array(data_p['SinAspct']).reshape(-1,1) CosAspct = np.array(data_p['CosAspct']).reshape(-1,1) X = np.hstack([elev,slope,SinAspct,CosAspct]) x = SinAspct X_std = (X-X.mean(axis=0))/X.std(axis=0) x_std = (x-x.mean(axis=0))/x.std(axis=0) y_std = (y-y.mean(axis=0))/y.std(axis=0) bw_gwbr=Sel_BW(coords,y,x_std,family=Binomial(),constant=False).search() gwbr_model=GWR(coords,y,x_std,bw=bw_gwbr,family=Binomial(),constant=False).fit() bw_gwbr predy = 1/(1+np.exp(-1*np.sum(gwbr_model.X * gwbr_model.params, axis=1).reshape(-1, 1))) sns.distplot(predy) (predy==gwbr_model.predy).all() sns.distplot(gwbr_model.y)
_____no_output_____
MIT-0
Notebooks/Binomial_MGWR_approaches_tried.ipynb
mehak-sachdeva/MGWR_book
Multi_bw changes
def multi_bw(init,coords,y, X, n, k, family=Gaussian(),offset=None, tol=1e-06, max_iter=20, multi_bw_min=[None], multi_bw_max=[None],rss_score=True,bws_same_times=3, verbose=True): if multi_bw_min==[None]: multi_bw_min = multi_bw_min*X.shape[1] if multi_bw_max==[None]: multi_bw_max = multi_bw_max*X.shape[1] if isinstance(family,spglm.family.Poisson): bw = sel_func(bw_func_p(coords,y,X)) optim_model=gwr_func(y,X,bw,family=Poisson(),offset=offset) err = optim_model.resid_response.reshape((-1, 1)) param = optim_model.params #This change for the Poisson model follows from equation (1) above XB = offset*np.exp(np.multiply(param, X)) elif isinstance(family,spglm.family.Binomial): bw = sel_func(bw_func_b(coords,y,X)) optim_model=gwr_func(y,X,bw,family=Binomial()) err = optim_model.resid_response.reshape((-1, 1)) param = optim_model.params XB = 1/(1+np.exp(-1*np.multiply(optim_model.params,X))) print("first family: "+str(optim_model.family)) else: bw=sel_func(bw_func(coords,y,X)) optim_model=gwr_func(y,X,bw) err = optim_model.resid_response.reshape((-1, 1)) param = optim_model.params XB = np.multiply(param, X) bw_gwr = bw XB=XB if rss_score: rss = np.sum((err)**2) iters = 0 scores = [] delta = 1e6 BWs = [] bw_stable_counter = np.ones(k) bws = np.empty(k) try: from tqdm.auto import tqdm #if they have it, let users have a progress bar except ImportError: def tqdm(x, desc=''): #otherwise, just passthrough the range return x for iters in tqdm(range(1, max_iter + 1), desc='Backfitting'): new_XB = np.zeros_like(X) neww_XB = np.zeros_like(X) params = np.zeros_like(X) for j in range(k): temp_y = XB[:, j].reshape((-1, 1)) temp_y = temp_y + err temp_X = X[:, j].reshape((-1, 1)) #The step below will not be necessary once the bw_func is changed in the repo to accept family and offset as attributes if isinstance(family,spglm.family.Poisson): bw_class = bw_func_p(coords,temp_y, temp_X) else: bw_class = bw_func(coords,temp_y, temp_X) print(bw_class.family) if np.all(bw_stable_counter == bws_same_times): #If in backfitting, all bws not changing in bws_same_times (default 3) iterations bw = bws[j] else: bw = sel_func(bw_class, multi_bw_min[j], multi_bw_max[j]) if bw == bws[j]: bw_stable_counter[j] += 1 else: bw_stable_counter = np.ones(k) optim_model = gwr_func_g(temp_y, temp_X, bw) print(optim_model.family) err = optim_model.resid_response.reshape((-1, 1)) param = optim_model.params.reshape((-1, )) new_XB[:,j]=optim_model.predy.reshape(-1) params[:, j] = param bws[j] = bw num = np.sum((new_XB - XB)**2) / n print("num = "+str(num)) den = np.sum(np.sum(new_XB, axis=1)**2) score = (num / den)**0.5 print(score) XB = new_XB if rss_score: print("here") predy = 1/(1+np.exp(-1*np.sum(X * params, axis=1).reshape(-1, 1))) new_rss = np.sum((y - predy)**2) score = np.abs((new_rss - rss) / new_rss) rss = new_rss scores.append(deepcopy(score)) delta = score print(delta) BWs.append(deepcopy(bws)) if verbose: print("Current iteration:", iters, ",SOC:", np.round(score, 7)) print("Bandwidths:", ', '.join([str(bw) for bw in bws])) if delta < tol: break print("iters = "+str(iters)) opt_bws = BWs[-1] print("opt_bws = "+str(opt_bws)) print(bw_gwr) return (opt_bws, np.array(BWs), np.array(scores), params, err, bw_gwr) mgwbr = multi_bw(init=None,coords=coords,y=y, X=x_std, n=239, k=x.shape[1], family=Binomial()) param = mgwbr[3] predy = 1/(1+np.exp(-1*np.sum(x_std * param, axis=1).reshape(-1, 1))) sns.distplot(predy)
_____no_output_____
MIT-0
Notebooks/Binomial_MGWR_approaches_tried.ipynb
mehak-sachdeva/MGWR_book
Transfer Learning

In this notebook, you'll learn how to use pre-trained networks to solve challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html). ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks using an architecture built from convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU).

Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy.

With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now.
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
_____no_output_____
MIT
deep_learning/new-intro-to-pytorch/Part 8 - Transfer Learning (Solution).ipynb
willcanniford/python-learning
Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`.
data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=64)
_____no_output_____
MIT
deep_learning/new-intro-to-pytorch/Part 8 - Transfer Learning (Solution).ipynb
willcanniford/python-learning
We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.html#id5). Let's print out the model architecture so we can see what's going on.
model = models.densenet121(pretrained=True)
model
_____no_output_____
MIT
deep_learning/new-intro-to-pytorch/Part 8 - Transfer Learning (Solution).ipynb
willcanniford/python-learning
This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer `(classifier): Linear(in_features=1024, out_features=1000)`. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own. In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers.
# Freeze parameters so we don't backprop through them
for param in model.parameters():
    param.requires_grad = False

from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(1024, 500)),
    ('relu', nn.ReLU()),
    ('fc2', nn.Linear(500, 2)),
    ('output', nn.LogSoftmax(dim=1))
]))

model.classifier = classifier
_____no_output_____
MIT
deep_learning/new-intro-to-pytorch/Part 8 - Transfer Learning (Solution).ipynb
willcanniford/python-learning
With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time.PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backwards passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using `model.to('cuda')`. You can move them back from the GPU with `model.to('cpu')` which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU.
import time for device in ['cpu', 'cuda']: criterion = nn.NLLLoss() # Only train the classifier parameters, feature parameters are frozen optimizer = optim.Adam(model.classifier.parameters(), lr=0.001) model.to(device) for ii, (inputs, labels) in enumerate(trainloader): # Move input and label tensors to the GPU inputs, labels = inputs.to(device), labels.to(device) start = time.time() outputs = model.forward(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() if ii==3: break print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds")
_____no_output_____
MIT
deep_learning/new-intro-to-pytorch/Part 8 - Transfer Learning (Solution).ipynb
willcanniford/python-learning
You can write device agnostic code which will automatically use CUDA if it's enabled like so:

```python
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

...

# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
```

From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily.

>**Exercise:** Train a pretrained model to classify the cat and dog images. Continue with the DenseNet model, or try ResNet, it's also a good model to try out first. Make sure you are only training the classifier and the parameters for the features part are frozen.
# Use GPU if it's available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = models.densenet121(pretrained=True) # Freeze parameters so we don't backprop through them for param in model.parameters(): param.requires_grad = False model.classifier = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Dropout(0.2), nn.Linear(256, 2), nn.LogSoftmax(dim=1)) criterion = nn.NLLLoss() # Only train the classifier parameters, feature parameters are frozen optimizer = optim.Adam(model.classifier.parameters(), lr=0.003) model.to(device); epochs = 1 steps = 0 running_loss = 0 print_every = 5 for epoch in range(epochs): for inputs, labels in trainloader: steps += 1 # Move input and label tensors to the default device inputs, labels = inputs.to(device), labels.to(device) optimizer.zero_grad() logps = model.forward(inputs) loss = criterion(logps, labels) loss.backward() optimizer.step() running_loss += loss.item() if steps % print_every == 0: test_loss = 0 accuracy = 0 model.eval() with torch.no_grad(): for inputs, labels in testloader: inputs, labels = inputs.to(device), labels.to(device) logps = model.forward(inputs) batch_loss = criterion(logps, labels) test_loss += batch_loss.item() # Calculate accuracy ps = torch.exp(logps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)).item() print(f"Epoch {epoch+1}/{epochs}.. " f"Train loss: {running_loss/print_every:.3f}.. " f"Test loss: {test_loss/len(testloader):.3f}.. " f"Test accuracy: {accuracy/len(testloader):.3f}") running_loss = 0 model.train()
_____no_output_____
MIT
deep_learning/new-intro-to-pytorch/Part 8 - Transfer Learning (Solution).ipynb
willcanniford/python-learning
PNW Focal Species Sanity Check

We'll use GBIF to select a species with many occurrences in the PNW and assess the impact of MHWs on that species.
plankton = pd.read_csv("../data/Phytoplankton_temperature_growth_rate_dataset_2016_01_29/traits_derived_2016_01_29.csv", engine='python') plankton = plankton[(plankton.minqual == "good") & (plankton.maxqual == "good") & (plankton.curvequal == "good")] plankton = plankton[plankton.habitat == 'marine'] print(len(plankton)) print(len(set(list(zip(plankton.genus, plankton.species))))) plankton.genus.unique() plankton['mu.c.opt.list'].plot(kind='hist') plankton[plankton['mu.c.opt.list'].between(13, 16)] sample_loc = 178 plankton.columns
_____no_output_____
MIT
analysis/distribution-analysis.ipynb
HuckleyLab/phyto-mhw
GBIF Occurrences

*For locating PNW species...*
genera = plankton.genus.dropna().unique() genera genera = [pygbif.species.name_lookup(q=g, rank='genus') for g in genera] genera = [g for g in genera if len(g['results']) > 0] genera = [max(g['results'], key=lambda x: x['numDescendants']) for g in genera] [g['key'] for g in genera] genera_counts = [pygbif.occurrences.count(taxonKey=g['key']) for g in genera] genera_counts occs = [] for genus, genus_count in zip(genera, genera_counts): print(genus['genus'], genus_count) if genus_count > 5: occs.append(pygbif.occurrences.search( taxonKey=genus['key'], decimalLatitude=f"{PNW_LAT.start},{PNW_LAT.stop}", decimalLongitude=f"{PNW_LON.start},{PNW_LON.stop}" )) else: occs.append([]) occs = [o for o in occs if type(o) == dict] max_genus = max(occs, key=lambda x: x['count'])['results'][0] max_genus['genus'] max_genus_name = "Skeletonema"
_____no_output_____
MIT
analysis/distribution-analysis.ipynb
HuckleyLab/phyto-mhw
TPC From Genus
def tpc(T, a, b, z, w): ''' https://science.sciencemag.org/content/sci/suppl/2012/10/25/science.1224836.DC1/Thomas.SM.pdf ''' return a * np.exp(b*T) * (1 - ((T - z)/(w / 2))**2) def plot_tpc(sample, ax=None): T = np.arange(sample['mu.c.opt.list'] - (sample['mu.wlist'] / 2), sample['mu.c.opt.list'] + (sample['mu.wlist'] / 2), 0.1) perf = tpc(T, sample['mu.alist'], sample['mu.blist'], sample['mu.c.opt.list'], sample['mu.wlist']) try: plotTitle = "{} {} [{}]".format(sample.genus, sample.species, sample['isolate.code']) except AttributeError as e: plotTitle = "" if ax: ax.plot(T, perf) ax.set_title(plotTitle) ax.set_xlabel("$T$") sns.despine(ax=ax) else: plt.plot(T, perf) plt.title(plotTitle) plt.xlabel("$T$ [${^\circ}C$]") sns.despine() this_genus = plankton[plankton.genus == max_genus_name] this_genus
_____no_output_____
MIT
analysis/distribution-analysis.ipynb
HuckleyLab/phyto-mhw
**NOTE** Overrode above analysis and chose a single species
sample_species = this_genus.sample(1).iloc[0]
plot_tpc(sample_species)
_____no_output_____
MIT
analysis/distribution-analysis.ipynb
HuckleyLab/phyto-mhw
Enter MHW and SST
mhw_time = slice('2014-01-01', '2015-05-31') mhws = xr.open_dataset("../mhw_pipeline/pnw_mhw_intensity.nc").rename({ '__xarray_dataarray_variable__': 'mhw_intensity' }) mhws mhw_recent = mhws.sel(time=mhw_time) mhw_count = mhw_recent.median(dim='time').mhw_intensity plt.figure(figsize=(10,10)) ax = plt.axes(projection=ccrs.Mercator().GOOGLE) plot = mhw_count.plot(norm=colors.LogNorm(vmin=1, vmax=mhw_count.max()), cmap='PuBu_r', ax=ax) mhw_count.lon.max() plt.figure(figsize=(10,10)) ax = plt.axes(projection=ccrs.Mercator().GOOGLE) mhw_count.plot.contourf(cmap='Spectral', ax=ax, transform=ccrs.PlateCarree()) ax.gridlines(alpha=0.7) ax.add_feature(cf.COASTLINE, facecolor='black')
_____no_output_____
MIT
analysis/distribution-analysis.ipynb
HuckleyLab/phyto-mhw
SST
fs = gcsfs.GCSFileSystem(project=GCP_PROJECT_ID, token="/home/jovyan/gc-pangeo.json") oisst = xr.open_zarr(fs.get_mapper(OISST_GCP)) oisst = oisst.assign_coords(lon=(((oisst.lon + 180) % 360) - 180)).sortby('lon') PNW_LAT = slice(mhw_count.lat.min(), mhw_count.lat.max()) PNW_LON = slice(mhw_count.lon.min(), mhw_count.lon.max()) oisst_pnw = oisst.sel(lat = PNW_LAT, lon = PNW_LON).persist() recent_sst = oisst_pnw.sel(time=mhw_time) plt.figure(figsize=(10,10)) ax = plt.axes(projection=ccrs.Mercator().GOOGLE) recent_sst.sst.max(dim='time',).plot( ax=ax, transform=ccrs.PlateCarree()) ax.add_feature(cf.COASTLINE) plt.figure(figsize=(10,10)) ax = plt.axes(projection=ccrs.Mercator().GOOGLE) recent_sst.sst.sel(time=mhw_time.stop).plot.contourf( ax=ax, transform=ccrs.PlateCarree(), levels=10) ax.add_feature(cf.COASTLINE) dates_for_plot = [pd.to_datetime(mhw_time.start) + timedelta(n) for n in range(len(recent_sst.time))] mhws.sel(time=mhw_time.start).mhw.plot.contourf(cmap='binary',vmax=1, levels=3) fig = plt.figure() def anim_func(i): fig.clf() print(i) ax = plt.axes(projection=ccrs.Mercator().GOOGLE) mhws.sel(time=i).mhw.plot.contourf(cmap='binary', ax=ax, transform=ccrs.PlateCarree(), vmax=1, levels=3) anim = FuncAnimation(fig, anim_func, frames=dates_for_plot, interval=100) anim.save("test.gif")
MovieWriter ffmpeg unavailable; trying to use <class 'matplotlib.animation.PillowWriter'> instead.
MIT
analysis/distribution-analysis.ipynb
HuckleyLab/phyto-mhw
Compute Performance Detriment
def perf_det(T, T_opt, tpc, axis=1): return tpc(T_opt) - tpc(T) def tsm(T, T_opt, axis): return T - T_opt def plot_det(s, ax): this_tpc = partial(tpc, a=s['mu.alist'], b=s['mu.blist'], z=s['mu.c.opt.list'], w=s['mu.wlist']) T = np.arange(s['mu.g.opt.list'] - (s['mu.wlist'] / 2), s['mu.c.opt.list'] + (s['mu.wlist'] / 2), 0.1) perf = this_tpc(T) max_perf = this_tpc(s['mu.g.opt.list']) randomT = np.random.choice(T, size=1) perf_T = this_tpc(randomT) det_T = perf_det(randomT, s['mu.g.opt.list'], this_tpc) plt.plot(T, perf) plt.vlines(randomT,max_perf, max_perf - det_T) plt.axhline(max_perf) mean_species=sample_species this_det = partial( perf_det, T_opt = mean_species['mu.g.opt.list'], tpc = partial(tpc, a=mean_species['mu.alist'], b=mean_species['mu.blist'], z=mean_species['mu.c.opt.list'], w=mean_species['mu.wlist']) ) this_tsm = partial( tsm, T_opt = mean_species['mu.g.opt.list'] ) ans = recent_sst.sst.reduce(this_det, dim='time').compute() tsm_ans = recent_sst.sst.reduce(this_tsm, dim='time').compute() plt.figure(figsize=(10,10)) ax = plt.axes(projection=ccrs.Mercator().GOOGLE) ans.sel(time=slice('2014-01-01', '2014-05-28')).sum(dim='time').plot.contourf(ax=ax, cmap='viridis', vmin=-10, vmax=10) ax.gridlines(alpha=0.7) ax.add_feature(cf.COASTLINE, facecolor='black', zorder=10) plt.show() plt.figure(figsize=(10,10)) ax = plt.axes(projection=ccrs.Mercator().GOOGLE) tsm_ans.mean(dim='time').plot.contourf(ax=ax, cmap='viridis', vmin=-1, vmax=1) ax.gridlines(alpha=0.7) ax.add_feature(cf.COASTLINE, facecolor='black') plt.figure(figsize=(10,10)) ax = plt.axes(projection=ccrs.Mercator().GOOGLE) ans.sel(time=slice('2014-10-25', '2014-12-31')).std(dim='time').plot.contourf(ax=ax) ax.gridlines(alpha=0.7) ax.add_feature(cf.COASTLINE, facecolor='black') example_mhw = slice('2014-01-01', '2014-5-31') ans_cumsum = ans.sel(time=example_mhw).cumsum(dim='time') mhw_cumsum = mhws.sel(time=example_mhw).mhw.cumsum(dim='time') def daterange(start_date, end_date): for n in range(int ((pd.to_datetime(end_date) - pd.to_datetime(start_date)).days)): yield pd.to_datetime(start_date) + timedelta(n) fig = plt.figure(figsize=(10,10)) def anim_func(i): print(i) fig.clf() ax = fig.subplots(nrows =1, ncols = 2, subplot_kw={"projection" : ccrs.PlateCarree()}) mhws.mhw.sel(time=i).plot(cmap='Spectral', ax=ax[0], transform=ccrs.PlateCarree(), vmax=1) ans_norm = colors.DivergingNorm(vmin=-1, vcenter=0, vmax=1) ans.sel(time=i).plot(cmap='bwr', ax=ax[1], transform=ccrs.PlateCarree(), norm=ans_norm) ax[0].add_feature(cf.COASTLINE, facecolor='black') ax[1].add_feature(cf.COASTLINE, facecolor='black') frames = list(daterange(example_mhw.start, example_mhw.stop)) anim = FuncAnimation(fig, anim_func, frames=frames, interval=100, save_count=len(frames)) anim.save("test.gif")
_____no_output_____
MIT
analysis/distribution-analysis.ipynb
HuckleyLab/phyto-mhw
Data Analysis

Research Question: Examine the top 3 countries who have competed at the Olympics, and examine their performance over the past 120 years. For each top-3 country, these aspects should be examined in order to answer the question:

- Which years did this country win the most medals?
  - Most gold medals?
- Which years did this country win the least amount of medals?
- When did this country first appear in the Olympics?
- How many Olympics has this country competed in?
- How many athletes of that country have competed?
  - How many each year?
  - How many male or female athletes?
- How many athletes of that country have won medals?
  - How many won each year?
  - How many male or female athletes have won medals?
  - How many gold, silver, and bronze medals have been won by this country?
- How many athletes of this country have competed during the summer Olympics compared to the winter Olympics?
  - How does this compare yearly?
- How many medals were won by athletes of this country during the summer Olympics compared to the winter Olympics?
  - How does this compare yearly?
- What are the ages of the oldest and youngest athletes of this country to compete, and what is the average age of competing athletes of this country?
- What are the ages of the oldest and youngest athletes of this country who have won medals, and what is the average age of athletes who have won medals for this country?
- What is the average age of competing athletes per year of this country?
- What is the average age of athletes who have won medals for this country per year?
- What are the top 5 most competed sports for this country?
- What are the top 5 most successful sports for this country?

Starting with some basic imports, as well as the dataset import and a simple query, we can determine the 3 most successful countries at the Olympics.
import pandas as pd
import numpy as np
from matplotlib import pyplot
import matplotlib.pyplot as plt
import seaborn as sns

data = pd.read_csv('../data/raw/olympic_dataset.csv', low_memory=False, encoding = 'utf-8')

top3Countries = (data.dropna(subset=['Medal'])).value_counts(subset=['NOC'])
top3Countries.head(3)
_____no_output_____
MIT
analysis/.ipynb_checkpoints/Task_5-checkpoint_LOCAL_1236.ipynb
data301-2020-winter2/course-project-group_1001
Based on this, we can see that the United States leads the dataset with the most medals, with the Soviet Union coming in second, and Germany in third. Therefore, our data analysis will focus on these three countries.

However, the Soviet Union and Germany have both undergone many historical changes over the past 120 years, which could affect their medal counts. So, to keep the comparison with the United States both fair and accurate, previous and modern versions of these two countries' Olympic teams will be considered as well. For the Soviet Union, this means including modern Russian teams, teams that belonged to the Empire of Russia, as well as the Unified Team. For Germany, this means including the Empire of Germany, East Germany, West Germany, and the Saar Protectorate, alongside its modern form (one way to group these team codes is sketched below). Now that we are considering all modern and historical versions of the countries mentioned, we can start with the analysis.

United States (Funmi)
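As one way to implement the grouping described above, here is a hedged pandas sketch. The NOC codes are assumptions based on standard IOC codes (URS/EUN/RUS for the Soviet-era and Russian teams, GER/GDR/FRG/SAA for the German teams) and should be verified against `data['NOC'].unique()` before being relied on.

```python
# Group the historical team codes discussed above (codes are assumptions - verify first).
country_groups = {
    'United States': ['USA'],
    'Soviet Union / Russia': ['URS', 'EUN', 'RUS'],
    'Germany': ['GER', 'GDR', 'FRG', 'SAA'],
}

for label, codes in country_groups.items():
    group_medals = data[data['NOC'].isin(codes)].dropna(subset=['Medal'])
    print(label, ':', len(group_medals), 'medal entries')
```

The detailed per-country analysis starts with the United States below.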
us =(data["NOC"] == "USA") athletes=data[(us)] print("Number of events particpated in by American athletes: "+str(athletes.shape[0])) athletesUnique=data[us].drop_duplicates(subset=['Name']) print("Number of American athletes who have competed at the olympics: "+str(athletesUnique.shape[0])) maleAthletesUnique=data[us&(data['Sex'] == 'M')].drop_duplicates(subset=['Name']) print("Number of male American athletes who have competed in the olympics: "+str(maleAthletesUnique.shape[0])) femaleAthletesUnique=data[us&(data['Sex'] == 'F')].drop_duplicates(subset=['Name']) print("Number of female American athletes who have competed in the olympics: "+str(femaleAthletesUnique.shape[0])+"\n") medals=athletes.dropna(subset=["Medal"]) print("Number medals won by American athletes: "+str(medals.shape[0])) maleAthletesMedals=data[us&(data['Sex'] == 'M')].dropna(subset=["Medal"]) print("Number of medals won by male American athletes: "+str(maleAthletesMedals.shape[0])) femaleAthletesMedals=data[us&(data['Sex'] == 'F')].dropna(subset=["Medal"]) print("Number of medals won by female American athletes: "+str(femaleAthletesMedals.shape[0])+"\n") AthletesMedalsUnique=medals.drop_duplicates(subset=['Name']) print("Number of American athletes who won medals: "+str(AthletesMedalsUnique.shape[0])) maleAthletesMedalsUnique=maleAthletesMedals.drop_duplicates(subset=['Name']) print("Number of male American athletes who won medals: "+str(maleAthletesMedalsUnique.shape[0])) femaleAthletesMedalsUnique=femaleAthletesMedals.drop_duplicates(subset=['Name']) print("Number of female American athletes who won medals: "+str(femaleAthletesMedalsUnique.shape[0])+"\n") mtype=medals.value_counts(subset=['Medal']) print("Types of medals won by American athletes") print(mtype) top5c=athletes.value_counts(['Sport']) print("top 5 most competed sports by American athletes:") print(top5c.iloc[:5]) print("\ntop top 5 most successful sports for American athletes:") top5m=medals.value_counts(['Sport']) print(top5m.iloc[:5]) cstats=athletesUnique.groupby('Sex').agg({'Age': ['mean', 'min', 'max']}) print("\nAverage, minimum, and maximum ages of American athletes who have competed at the olympics:") print(cstats) mstats=medals.groupby('Sex').agg({'Age': ['mean', 'min', 'max']}) print("\nAverage, minimum, and maximum ages of American athletes who have won medals at the olympics:") print(mstats) sns.lineplot(data=athletesUnique, x="Year", y='Age') plt.title('Average Age of American Olympic Athletes per Year') sns.set(rc={'figure.figsize':(15,10)}) sns.lineplot(data=medals, x="Year", y='Age') plt.title("Average age of American medal winning athletes per year") sns.countplot(data=medals, x="Year", hue="Medal", palette='rocket') plt.title("Medals won by American olympic athletes per year") sns.countplot(data=athletesUnique, x="Year") plt.title("American olympic athletes per year") sns.countplot(data=AthletesMedalsUnique, x="Year") plt.title("Americans that won that medals per year") sns.countplot(data=athletesUnique, x="Year", hue="Season", palette='dark') plt.title("American olympic athletes per year during summer vs winter olympics") sns.countplot(data=medals, x="Year", hue="Season", palette='dark') plt.title("American medal winning athletes per year during winter vs summer olympics")
_____no_output_____
MIT
analysis/.ipynb_checkpoints/Task_5-checkpoint_LOCAL_1236.ipynb
data301-2020-winter2/course-project-group_1001
Other Graphs to Note
sns.countplot(data=medals, x="Medal", hue="Sex", palette='pastel') plt.title("Types of medals won by male and female American athletes") sns.countplot(data=medals, x="Year", hue="Sex", palette='pastel') plt.title("Medals won by American olympic athletes per year compared to sex")
_____no_output_____
MIT
analysis/.ipynb_checkpoints/Task_5-checkpoint_LOCAL_1236.ipynb
data301-2020-winter2/course-project-group_1001
K26 - Heated Wall Interface at 90°. Variable fluid densities, but with very small variations. Also no heat capacity, i.e. infinitely fast heat conduction. The height of the domain is reduced. Implicit RK timestepping is used to allow for an adaptation of the maximum step size. Also, only the "start" of the process is simulated, i.e. tE = 1.
#r "..\..\..\src\L4-application\BoSSSpad\bin\Release\net5.0\BoSSSpad.dll" using System; using System.Collections.Generic; using System.Linq; using ilPSP; using ilPSP.Utils; using BoSSS.Platform; using BoSSS.Foundation; using BoSSS.Foundation.XDG; using BoSSS.Foundation.Grid; using BoSSS.Foundation.Grid.Classic; using BoSSS.Foundation.IO; using BoSSS.Solution; using BoSSS.Solution.Control; using BoSSS.Solution.GridImport; using BoSSS.Solution.Statistic; using BoSSS.Solution.Utils; using BoSSS.Solution.AdvancedSolvers; using BoSSS.Solution.Gnuplot; using BoSSS.Application.BoSSSpad; using BoSSS.Application.XNSE_Solver; using static BoSSS.Application.BoSSSpad.BoSSSshell; Init();
_____no_output_____
Apache-2.0
examples/XNSFE_Solver/HeatedWall_VariableDensity/HeatedWall90DegVariableDensity.ipynb
FDYdarmstadt/BoSSS
Set up workflow management, batch processor and database
ExecutionQueues static var myBatch = BoSSSshell.GetDefaultQueue(); static var myDb = myBatch.CreateOrOpenCompatibleDatabase("XNSFE_HeatedWall"); myDb.Path BoSSSshell.WorkflowMgm.Init($"HeatedWall_VariableDensity");
Project name is set to 'HeatedWall_VariableDensity'.
Apache-2.0
examples/XNSFE_Solver/HeatedWall_VariableDensity/HeatedWall90DegVariableDensity.ipynb
FDYdarmstadt/BoSSS
Set up simulation controls
using BoSSS.Application.XNSFE_Solver; int[] hRes = {16, 24, 32, 48}; int[] pDeg = {2}; double[] Q = {0.2}; double[] dRho_B = {0.0, -1e-3, -5e-3, -1e-2, -5e-2, -1e-1}; List<XNSFE_Control> Controls = new List<XNSFE_Control>(); foreach(int h in hRes){ foreach(int p in pDeg){ foreach(double q in Q){ foreach(double dR in dRho_B){ var ctrl = new XNSFE_Control(); ctrl.Paramstudy_CaseIdentification.Add(new Tuple<string, object>("HeatFlux", q)); ctrl.DbPath = null; ctrl.SessionName = $"HeatedWall_VariableDensity_res:{h}_p:{p}_dR:{dR}"; ctrl.ProjectName = $"HeatedWall_VariableDensity"; ctrl.SetDatabase(myDb); ctrl.savetodb = true; ctrl.FieldOptions.Add("VelocityX", new FieldOpts() { Degree = p, SaveToDB = FieldOpts.SaveToDBOpt.TRUE }); ctrl.FieldOptions.Add("VelocityY", new FieldOpts() { Degree = p, SaveToDB = FieldOpts.SaveToDBOpt.TRUE }); ctrl.FieldOptions.Add("GravityX#A", new FieldOpts() { SaveToDB = FieldOpts.SaveToDBOpt.TRUE }); ctrl.FieldOptions.Add("GravityY#A", new FieldOpts() { SaveToDB = FieldOpts.SaveToDBOpt.TRUE }); ctrl.FieldOptions.Add("GravityX#B", new FieldOpts() { SaveToDB = FieldOpts.SaveToDBOpt.TRUE }); ctrl.FieldOptions.Add("GravityY#B", new FieldOpts() { SaveToDB = FieldOpts.SaveToDBOpt.TRUE }); ctrl.FieldOptions.Add("Pressure", new FieldOpts() { Degree = p - 1, SaveToDB = FieldOpts.SaveToDBOpt.TRUE }); ctrl.FieldOptions.Add("PhiDG", new FieldOpts() { SaveToDB = FieldOpts.SaveToDBOpt.TRUE }); ctrl.FieldOptions.Add("Phi", new FieldOpts() { Degree = Math.Max(p, 2), SaveToDB = FieldOpts.SaveToDBOpt.TRUE }); ctrl.FieldOptions.Add("Temperature", new FieldOpts() { Degree = p, SaveToDB = FieldOpts.SaveToDBOpt.TRUE }); #region grid double L = 5.0; int kelemR = h; string[] Bndy = new string[] { "Inner", "NavierSlip_linear_ConstantHeatFlux_right", "pressure_outlet_ZeroGradient_top", "freeslip_ZeroGradient_left", "pressure_outlet_ZeroGradient_bottom"}; ctrl.GridFunc = delegate () { double[] Xnodes = GenericBlas.Linspace(-L, 0, kelemR + 1); double[] Ynodes = GenericBlas.Linspace(0, L, kelemR + 1); var grd = Grid2D.Cartesian2DGrid(Xnodes, Ynodes); for(byte i= 1; i < Bndy.Count(); i++) { grd.EdgeTagNames.Add(i, Bndy[i]); } grd.DefineEdgeTags(delegate (double[] X) { byte et = 0; if(Math.Abs(X[0] - Xnodes.Last()) < 1e-8) return 1; if(Math.Abs(X[0] - Xnodes.First()) < 1e-8) return 3; if(Math.Abs(X[1] - Ynodes.Last()) < 1e-8) return 2; if(Math.Abs(X[1] - Ynodes.First()) < 1e-8) return 4; return et; }); return grd; }; #endregion #region material ctrl.PhysicalParameters = new BoSSS.Solution.XNSECommon.PhysicalParameters() { rho_A = 1.0, // 958.0 rho_B = 1.0 + dR, // 0.59, mu_A = 1, //2.82 * 1e-4, mu_B = 0.001, //1.23 * 1e-6, Sigma = 1.0, betaS_A = 1000, // sliplength is mu/beta betaS_B = 1000, }; ctrl.ThermalParameters = new BoSSS.Solution.XheatCommon.ThermalParameters() { rho_A = 1.0, // 958.0 rho_B = 1.0 + dR, //0.59, k_A = 1.0, // 0.6 k_B = 1.0, // 0.026, c_A = 0.0, c_B = 0.0, hVap = 1,//2.257 * 1e6, T_sat = 0.0 // 373.0 }; ctrl.PhysicalParameters.IncludeConvection = true; ctrl.ThermalParameters.IncludeConvection = true; ctrl.PhysicalParameters.Material = false; #endregion #region Initial Condition - Exact Solution // solution for massflux and velocity at level set double y0 = 0.2 * L; // inital values double g = 4; ctrl.AddInitialValue("Phi", $"(X, t) => -{y0} + X[1]", true); ctrl.AddInitialValue("Temperature#A", $"(X, t) => {ctrl.ThermalParameters.T_sat}", true); ctrl.AddInitialValue("Temperature#B", $"(X, t) => {ctrl.ThermalParameters.T_sat}", true); ctrl.AddInitialValue("GravityY#A", $"(X, 
t) => -{g}", true); #endregion #region Boundary Conditions double v = 1.0; ctrl.AddBoundaryValue(Bndy[1], "HeatFluxX#A", $"(X, t) => {q}", true); ctrl.AddBoundaryValue(Bndy[1], "VelocityY#A", $"(X, t) => {v}", true); ctrl.AddBoundaryValue(Bndy[1], "VelocityY#B", $"(X, t) => {v}", true); ctrl.AddBoundaryValue(Bndy[3]); ctrl.AddBoundaryValue(Bndy[2]); ctrl.AddBoundaryValue(Bndy[4], "Pressure#A", $"(X, t) => {y0} * {ctrl.PhysicalParameters.rho_A} * {g}", true); #endregion #region AMR // No AMR int level = 0; ctrl.AdaptiveMeshRefinement = level > 0; ctrl.activeAMRlevelIndicators.Add(new BoSSS.Solution.LevelSetTools.SolverWithLevelSetUpdater.AMRonNarrowband() { maxRefinementLevel = level }); ctrl.AMR_startUpSweeps = level; #endregion #region Timestepping ctrl.AdvancedDiscretizationOptions.SST_isotropicMode = BoSSS.Solution.XNSECommon.SurfaceStressTensor_IsotropicMode.LaplaceBeltrami_ContactLine; ctrl.Option_LevelSetEvolution = BoSSS.Solution.LevelSetTools.LevelSetEvolution.FastMarching; ctrl.Timestepper_LevelSetHandling = BoSSS.Solution.XdgTimestepping.LevelSetHandling.LieSplitting; ctrl.NonLinearSolver.SolverCode = NonLinearSolverCode.Newton; ctrl.NonLinearSolver.Globalization = BoSSS.Solution.AdvancedSolvers.Newton.GlobalizationOption.Dogleg; ctrl.NonLinearSolver.ConvergenceCriterion = 1e-8; ctrl.NonLinearSolver.MaxSolverIterations = 10; ctrl.SkipSolveAndEvaluateResidual = false; ctrl.TimeSteppingScheme = BoSSS.Solution.XdgTimestepping.TimeSteppingScheme.RK_ImplicitEuler; ctrl.TimesteppingMode = BoSSS.Solution.Control.AppControl._TimesteppingMode.Transient; ctrl.dtFixed = 0.01; ctrl.Endtime = 1.0; ctrl.NoOfTimesteps = int.MaxValue; // timesteps can be adapted, simulate until endtime is reached #endregion ctrl.PostprocessingModules.Add(new BoSSS.Application.XNSFE_Solver.PhysicalBasedTestcases.MassfluxLogging() { LogPeriod = 1 }); ctrl.PostprocessingModules.Add(new BoSSS.Application.XNSFE_Solver.PhysicalBasedTestcases.MovingContactLineLogging() { LogPeriod = 1 }); Controls.Add(ctrl); } } } } Controls.Count
_____no_output_____
Apache-2.0
examples/XNSFE_Solver/HeatedWall_VariableDensity/HeatedWall90DegVariableDensity.ipynb
FDYdarmstadt/BoSSS
Start simulations on the batch processor
foreach(var C in Controls) {
    Type solver = typeof(BoSSS.Application.XNSFE_Solver.XNSFE<XNSFE_Control>);
    string jobName = C.SessionName;
    var oneJob = new Job(jobName, solver);
    oneJob.NumberOfMPIProcs = 1;
    oneJob.SetControlObject(C);
    oneJob.Activate(myBatch, true);
}
_____no_output_____
Apache-2.0
examples/XNSFE_Solver/HeatedWall_VariableDensity/HeatedWall90DegVariableDensity.ipynb
FDYdarmstadt/BoSSS
Hyperexponential Case

Throughout this document, the following packages are required:
import numpy as np
import scipy
import math
from scipy.stats import binom, erlang, poisson
from scipy.optimize import minimize
from functools import lru_cache
_____no_output_____
MIT
.ipynb_checkpoints/Hyperexponential Case-checkpoint.ipynb
Roshanmahes/Appointment-Scheduling
Plot Phase-Type Fit
from ipywidgets import interact, interactive, fixed, interact_manual import ipywidgets as widgets import matplotlib.pyplot as plt def SCV_to_params(SCV): # weighted Erlang case if SCV <= 1: K = math.floor(1/SCV) p = ((K + 1) * SCV - math.sqrt((K + 1) * (1 - K * SCV))) / (SCV + 1) mu = K + (1 - p) * (K + 1) return K, p, mu # hyperexponential case else: p = 0.5 * (1 + np.sqrt((SCV - 1) / (SCV + 1))) mu = 1 # 1 / mean mu1 = 2 * p * mu mu2 = 2 * (1 - p) * mu return p, mu1, mu2 # for i in range(81): # SCV = 1 + 0.1 * i # print(round(SCV,2),SCV_to_params(SCV)) def density_WE(x, K, p, mu): return p * erlang.pdf(x, K, scale=1/mu) + (1 - p) * erlang.pdf(x, K+1, scale=1/mu) def density_HE(x, p, mu1, mu2): return p * mu1 * np.exp(-mu1 * x) + (1 - p) * mu2 * np.exp(-mu2 * x) x = np.linspace(0,4,1001) def plot_f(SCV=1): if SCV <= 1: K, p, mu = SCV_to_params(SCV) f_x = density_WE(x, K, p, mu) title = f'SCV = {SCV}\n p = {p:.2f}, $K$ = {K}, $\mu$ = {mu:.2f}' else: p, mu1, mu2 = SCV_to_params(SCV) f_x = density_HE(x, p, mu1, mu2) title = f'SCV = {SCV}\n p = {p:.2f}, $\mu_1$ = {mu1:.2f}, $\mu_2$ = {mu2:.2f}' plt.plot(x,f_x) plt.title(title) plt.xlabel('$x$') plt.ylabel('density') plt.ylim(0,2) interact(plot_f, SCV=(0.01,2,0.01));
_____no_output_____
MIT
.ipynb_checkpoints/Hyperexponential Case-checkpoint.ipynb
Roshanmahes/Appointment-Scheduling
The recursion of the dynamic program is given as follows. For $i=1,\dots,n-1$, $k=1,\dots,i$, and $m\in\mathbb{N}_0$,\begin{align*}\xi_i(k,m) &= \inf_{t\in \mathbb{N}_0}\Big(\omega \bar{f}^{\circ}_{k,m\Delta}(t\Delta) + (1-\omega)\bar{h}^{\circ}_{k,m\Delta} +\sum_{\ell=2}^{k}\sum_{j=0}^{t}\bar{q}_{k\ell,mj}(t)\xi_{i+1}(\ell,j) +P^{\downarrow}_{k,m\Delta}(t\Delta)\xi_{i+1}(1,0) +P^{\uparrow}_{k,m\Delta}(t\Delta)\xi_{i+1}(k+1,m+t)\Big),\end{align*}whereas, for $k=1,\dots,n$ and $m\in \mathbb{N}_0$,\begin{align*}\xi_n(k,m) = (1-\omega)\bar{h}^{\circ}_{k,m\Delta}.\end{align*} We will implement this dynamic program step by step. First, we implement all functions in the equation above.Our formulas rely heavily on the survival function $\mathbb{P}(B>t)$ and $\gamma_z(t) = \mathbb{P}(Z_t = z\mid B>t)$:
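Before building those pieces, here is a hedged sketch of how the backward recursion above could be evaluated once all building blocks are in place. Everything in it is an assumption for illustration: `q_bar(k, l, m, j, t)` stands for the kernel $\bar{q}_{k\ell,mj}(t)$ (still a TODO later in this notebook), `n`, `omega`, `Delta` and `t_max` are placeholder parameters, and the infimum over $t\in\mathbb{N}_0$ is truncated to a finite grid.

```python
import math
from functools import lru_cache

# Placeholder parameters (assumptions, not taken from the notebook).
n, omega, Delta, t_max = 5, 0.5, 0.1, 200

@lru_cache(maxsize=None)
def xi(i, k, m):
    """Value function xi_i(k, m) of the recursion above (sketch only)."""
    if i == n:                                   # terminal stage
        return (1 - omega) * h_circ(k, m * Delta)
    best = math.inf
    for t in range(t_max + 1):                   # truncated search over t
        cost = omega * f_circ(k, m * Delta, t * Delta) + (1 - omega) * h_circ(k, m * Delta)
        # transitions to the intermediate states (q_bar is the kernel that is still a TODO)
        cost += sum(q_bar(k, l, m, j, t) * xi(i + 1, l, j)
                    for l in range(2, k + 1) for j in range(t + 1))
        cost += P_down(k, m * Delta, t * Delta) * xi(i + 1, 1, 0)        # system empties
        cost += P_up(k, m * Delta, t * Delta) * xi(i + 1, k + 1, m + t)  # current client still in service
        best = min(best, cost)
    return best
```

The required building blocks (`B_sf`, `gamma`, `f_circ`, `h_circ`, `P_down`, `P_up`) are implemented in the cells that follow.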
@lru_cache(maxsize=128)
def B_sf(t):
    """The survival function P(B > t)."""
    return p * np.exp(-mu1 * t) + (1 - p) * np.exp(-mu2 * t)

@lru_cache(maxsize=128)
def gamma(z, t):
    """Computes P(Z_t = z | B > t)."""
    gamma_circ = B_sf(t)
    if z == 1:
        return p * np.exp(-mu1 * t) / gamma_circ
    elif z == 2:
        return (1 - p) * np.exp(-mu2 * t) / gamma_circ
_____no_output_____
MIT
.ipynb_checkpoints/Hyperexponential Case-checkpoint.ipynb
Roshanmahes/Appointment-Scheduling
Next, we implement $\bar{f}^{\circ}_{k,u}(t)$, which depends on $\bar{f}_{k,z}(t)$:
@lru_cache(maxsize=128)
def f_bar(k, z, t):
    if z == 1:
        return sum([binom.pmf(m, k-1, p) * sigma(t, m+1, k-1-m) for m in range(k)])
    elif z == 2:
        return sum([binom.pmf(m, k-1, p) * sigma(t, m, k-m) for m in range(k)])

@lru_cache(maxsize=128)
def f_circ(k, u, t):
    return gamma(1, u) * f_bar(k, 1, t) + gamma(2, u) * f_bar(k, 2, t)
_____no_output_____
MIT
.ipynb_checkpoints/Hyperexponential Case-checkpoint.ipynb
Roshanmahes/Appointment-Scheduling
Here, we need to evaluate the object $\sigma_{t}[m,k]$, which depends on $\rho_{t}[m,k]$:
@lru_cache(maxsize=512)
def sigma(t, m, k):
    return (t - k / mu2) * erlang.cdf(t, m, scale=1/mu1) - (m / mu1) * erlang.cdf(t, m+1, scale=1/mu1) + \
        (mu1 / mu2) * sum([(k-i) * rho_t(t, m-1, i) for i in range(k)])

@lru_cache(maxsize=512)
def rho_t(t, m, k):
    if not k:
        return np.exp(-mu2 * t) * (mu1 ** m) / ((mu1 - mu2) ** (m + 1)) * erlang.cdf(t, m+1, scale=1/(mu1 - mu2))
    elif not m:
        return np.exp(-mu1 * t) * (mu2 ** k) / ((mu1 - mu2) ** (k + 1)) * erlang.cdf(t, k+1, scale=1/(mu1 - mu2))
    else:
        return (mu1 * rho(t,a,m-1,k) - mu2 * rho(t,a,m,k-1)) / (mu1 - mu2)

@lru_cache(maxsize=512)
def rho(t, a, m, k):
    if not k:
        return np.exp(-mu2 * t) * (mu1 ** m) / ((mu1 - mu2) ** (m + 1)) * erlang.cdf(a, m+1, scale=1/(mu1 - mu2))
    elif not m:
        return np.exp(-mu1 * t) * (mu2 ** k) / ((mu1 - mu2) ** (k + 1)) * \
            (erlang.cdf(t, k+1, scale=1/(mu1 - mu2)) - erlang.cdf(t-a, k+1, scale=1/(mu1 - mu2)))
    else:
        return (mu1 * rho(t,a,m-1,k) - mu2 * rho(t,a,m,k-1) - r(t,a,m,k)) / (mu1 - mu2)

@lru_cache(maxsize=512)
def r(t, s, m, k):
    return poisson.pmf(m, mu1*s) * poisson.pmf(k, t-s)
_____no_output_____
MIT
.ipynb_checkpoints/Hyperexponential Case-checkpoint.ipynb
Roshanmahes/Appointment-Scheduling
We do the same for $\bar{h}^{\circ}_{k,u}(t)$, which only depends on $\bar{h}_{k,z}$:
@lru_cache(maxsize=128)
def h_bar(k, z):
    if k == 1:
        return 0
    elif z <= K:
        return ((k - 1) * (K + 1 - p) + 1 - z) / mu
    elif z == K + 1:
        return ((k - 2) * (K + 1 - p) + 1) / mu

@lru_cache(maxsize=128)
def h_circ(k, u):
    return sum([gamma(z, u) * h_bar(k, z) for z in range(1, K+2)])
_____no_output_____
MIT
.ipynb_checkpoints/Hyperexponential Case-checkpoint.ipynb
Roshanmahes/Appointment-Scheduling
The next objective is to implement $\bar{q}_{k\ell,mj}(t)$. This function depends on $q_{k\ell,z,v}(t)$, which depends on $\psi_{vt}[k,\ell]$: TODO
# TODO poisson.pmf(3,0)
_____no_output_____
MIT
.ipynb_checkpoints/Hyperexponential Case-checkpoint.ipynb
Roshanmahes/Appointment-Scheduling
Finally, we implement the remaining transition probabilities $P^{\uparrow}_{k,u}(t)$ and $P^{\downarrow}_{k,u}(t)$:
# @lru_cache(maxsize=128)
def P_up(k, u, t):
    """Computes P(N_t- = k | N_0 = k, B_0 = u)."""
    return B_sf(u + t) / B_sf(u)

@lru_cache(maxsize=128)
def P_down(k, u, t):
    """Computes P(N_t- = 0 | N_0 = k, B_0 = u)."""
    return sum([binom.pmf(m, k, p) * Psi(t, m, k-m) for m in range(k+1)])

@lru_cache(maxsize=128)
def Psi(t, m, k):
    return erlang.cdf(t, m, scale=1/mu1) - mu1 * sum([rho_t(t, m-1, i) for i in range(k)])

erlang.cdf(0, 1, 1)
_____no_output_____
MIT
.ipynb_checkpoints/Hyperexponential Case-checkpoint.ipynb
Roshanmahes/Appointment-Scheduling
Reading and Writing files

So far, we have typed all of our "data" into the code of our software (e.g. the names of the students, and their ages). Most of the time, this kind of data is stored in files. We need to read (and write) files so that we can create and use permanent copies of these data (and exchange these data with other people or software).

The function we will use to open a file is called (surprise!) "open". open takes two arguments:

1. the path to the file
2. "how" to open the file (for read? for write? for "append"? for read and write?)

it looks like this:

    myfile = open("/path/to/file.csv", "r")

which opens file.csv for "read". I have already created a file called students.csv that we can open now:
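Before we do, a quick illustration of that second argument (the mode). The file names other than students.csv are placeholders, shown only so you can see the common mode strings side by side:

```python
f_read = open("students.csv", "r")    # "r"  - read only, pointer starts at the beginning
# open("some_new_file.txt", "w")      # "w"  - write: creates the file, or destroys its old content
# open("some_new_file.txt", "a")      # "a"  - append: writes go to the end of the file
# open("students.csv", "r+")          # "r+" - read and write
f_read.close()
```

With that in mind, let's open students.csv for read: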
studentfile = open("students.csv", "r") print(studentfile)
_____no_output_____
CC-BY-4.0
Lesson 6 - File access in Python3.ipynb
mjmtnez/Accelerated_Intro_to_CompBio_Part_2
Does the output of that print statement surprise you? What it tells you is that 'studentfile' is a Python "object" (again, we will discuss objects in more detail later, but you will start to see how they work now...)

studentfile is an object of type "TextIOWrapper" (from the "io" Python library, which is automatically installed in all Python distributions). It knows what its filename is, it knows that it is open for "read", and it also has guessed the "encoding" of the file (UTF-8 is a kind of text encoding that allows extended text characters like German umlauts, Greek alpha, beta, etc. This is a good default for us!)

Reading information from a file

Surprise! The most basic method used to read information is.... 'read'! This reads **the entire file**:

    print(studentfile.read())
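As a small aside, the pieces of information in that earlier print can also be read individually; these are standard attributes of Python file objects:

```python
print(studentfile.name)       # the filename
print(studentfile.mode)       # 'r', because we opened it for read
print(studentfile.encoding)   # the guessed text encoding, e.g. 'UTF-8'
```

And here is read() in action: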
print(studentfile.read())
_____no_output_____
CC-BY-4.0
Lesson 6 - File access in Python3.ipynb
mjmtnez/Accelerated_Intro_to_CompBio_Part_2
Now we need to talk about a feature of file input/output, called a "pointer". The pointer is the position where the code "is" in the file - is it at the beginning? Is it at the end? Is it at line 5? Where is the pointer now? Let's try the same command again!
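(A quick aside, as a sketch: Python file objects have a standard `tell()` method that reports the current position, so we can check where the pointer is directly.)

```python
print(studentfile.tell())   # the current position of the pointer; a large number means it is at the end
```

Now, the same read command again: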
print(studentfile.read())
_____no_output_____
CC-BY-4.0
Lesson 6 - File access in Python3.ipynb
mjmtnez/Accelerated_Intro_to_CompBio_Part_2
Nothing at all! That's because the pointer is at the end of the file - when we call file.read() it tries to read starting from the end of the file... and of course, there is nothing there. To reset back to the beginning, we will use the "seek" function, and set it to position '0':
studentfile.seek(0)
print(studentfile.read())
_____no_output_____
CC-BY-4.0
Lesson 6 - File access in Python3.ipynb
mjmtnez/Accelerated_Intro_to_CompBio_Part_2
More refined file access - line-by-line

Most of the time, you do not want to read the entire file into memory (tell me why this can be very very bad!.... please)

MOST of the time, a file will have one "record" per line. e.g. our CSV file has the "name,age" for one student per line. We want to read those lines one-at-a-time and do something useful with each record.

The method we want to use is called "readlines()":

    print(studentfile.readlines())
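(A minimal sketch, not used in the rest of this lesson: the file object is itself iterable, one line at a time, which avoids loading the whole file into memory - exactly the problem mentioned above.)

```python
studentfile.seek(0)
for line in studentfile:                 # streams the file line by line
    print("streamed record:", line)
```

For this lesson, though, we will stick with readlines():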
studentfile.seek(0)  # set it back to the beginning again for this lesson...
print(studentfile.readlines())
_____no_output_____
CC-BY-4.0
Lesson 6 - File access in Python3.ipynb
mjmtnez/Accelerated_Intro_to_CompBio_Part_2
You will see that this returns a list, which means we can use it in a FOR loop...
studentfile.seek(0)  # set it back to the beginning again for this lesson...
for line in studentfile.readlines():
    print("the current record is", line)
_____no_output_____
CC-BY-4.0
Lesson 6 - File access in Python3.ipynb
mjmtnez/Accelerated_Intro_to_CompBio_Part_2
We're getting closer to what we want! We have each record as a string in the format "Mark,50". What we want is to separate the "Mark" and the "50" so that we could put them into separate variables (e.g. *name* and *age*). There is a ***correct*** way to do this, but you already know one way to solve this problem! In the box below, use regular expressions to capture the name and the age into the variables *name* and *age*!

    #!/usr/bin/python3
    import re   # this brings the python regular expression object into your program

    studentfile.seek(0)  # set it back to the beginning again for this lesson...
    for line in studentfile.readlines():
        print("the current record is", line)
        matchObj = re.search( r'(\w+),(\d+)', line)   # match the index letter, then CAPTURE the rest of the sentence
        if matchObj:
            name = matchObj.group(1)
            age = matchObj.group(2)
            print("Name: ", name, " Age: ", age)
        else:
            print ("No match!!")
# put your amazing solution here!
_____no_output_____
CC-BY-4.0
Lesson 6 - File access in Python3.ipynb
mjmtnez/Accelerated_Intro_to_CompBio_Part_2
OK, so now you have solved the problem using regular expressions, however... the solution isn't very "abstract". In another case, you might have a more complex record:

    Mark,50,190cm,95kg,163483,113mmhg,29mg/ml

Your regular expression would start to get ugly! What is the one thing that is constant in this CSV file? (in fact, the name of the file-type tells you!)

In cases like this, there is a method called "split", which will take a string and split it based on whatever separator you give it. In this case, the comma.

    for line in studentfile.readlines():
        print("the current record is", line)
        name, age = line.split(',')
studentfile.seek(0)  # set it back to the beginning again for this lesson...
for line in studentfile.readlines():
    print("the current record is", line)
    name, age = line.split(',')
    print("Name:", name, " Age:", age)
_____no_output_____
CC-BY-4.0
Lesson 6 - File access in Python3.ipynb
mjmtnez/Accelerated_Intro_to_CompBio_Part_2
Much better! But... *Still not quite right!!* What are all of those blank lines? We didn't ask for blank lines...

Remember just a few minutes ago we looked at the output from readlines():

    studentfile.seek(0)  # set it back to the beginning again for this lesson...
    print(studentfile.readlines())
    ==> ['Mark,50\n', 'Alejandro,25\n', 'Julia,26\n', 'Denise,23\n', 'Josef,21\n']

Those blank lines are because of the \n (newline) character at the end of every line. What is happening is that the print statements above ACTUALLY look like this:

    the current record is Alejandro,25\n
    Name: Alejandro  Age: 25\n     <----- the value of the age variable after the split is '25\n'

Can we discard this newline? Sure! The method *rstrip()* will strip all whitespace (including newlines) from the end (right-hand end --> **r**strip() ) of the line:
studentfile.seek(0)  # set it back to the beginning again for this lesson...
for line in studentfile.readlines():
    line = line.rstrip()
    print("the current record is", line)
    name, age = line.split(',')
    print("Name:", name, " Age:", age)
_____no_output_____
CC-BY-4.0
Lesson 6 - File access in Python3.ipynb
mjmtnez/Accelerated_Intro_to_CompBio_Part_2
When you have finished with an open file, it is a very good idea to close it!

    studentfile.close()  # it's a good idea to close a file once you are finished with it! We are...
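(A related pattern worth knowing, shown here as a sketch and not used in this lesson: the `with` statement closes the file for you automatically, even if an error happens in the middle.)

```python
with open("students.csv", "r") as autoclosed:
    for line in autoclosed:
        print(line)
# at this point the file is already closed
```

Here we will close it explicitly: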
studentfile.close() # it's a good idea to close a file once you are finished with it! We are...
_____no_output_____
CC-BY-4.0
Lesson 6 - File access in Python3.ipynb
mjmtnez/Accelerated_Intro_to_CompBio_Part_2
Writing to a file

Writing to a file is very straightforward. Use the same "open" command that you have already learned, but using the "w" flag ("open for **w**rite"), then write information to that open file using the *write()* method.

Python will help you by creating the file if it doesn't exist. For example, the box below will create a file named "OLDERstudents.csv" if that file doesn't exist. ***IF IT DOES EXIST, IT WILL BE DESTROYED!!!!! YOU CANNOT GET THE CONTENT BACK!!!! BE CAREFUL!!!***

The file pointer is set to the beginning of the file.

Here is how easy it is:
olderstudents = open("OLDERstudents.csv", "w") olderstudents.write("hello, I am writing stuff to a file!\nThis is very cool!") # the write function, using \n (newline) olderstudents.close() checkcontent = open("OLDERstudents.csv", "r") print(checkcontent.read()) # print the content of the file checkcontent.close()
_____no_output_____
CC-BY-4.0
Lesson 6 - File access in Python3.ipynb
mjmtnez/Accelerated_Intro_to_CompBio_Part_2
Now you!

* create the file OLDERstudents.csv
* using the data from the original students.csv, make everyone 5 years older
* write the new older student data to the OLDERstudents.csv file, in an identical format (Mark,55....)
* do this again, but this time, create a "header line" (Student Name, Student Age)

      Student Name, Student Age
      Mark,55
      Alejandro,30
      ...
      ...

* do this again, but instead of creating a CSV (comma-separated value) file, create a TSV (tab-separated value)
  * call it ***OLDERstudents.tsv***
  * You need to know: the symbol for TAB is \t
  * these are the two most common structured text-file formats
  * both of these can be imported into software like MS Excel
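One possible sketch of a solution is below (file names and header text follow the exercise above); try writing your own version in the next cell before peeking.

```python
original = open("students.csv", "r")
older_csv = open("OLDERstudents.csv", "w")
older_tsv = open("OLDERstudents.tsv", "w")

older_csv.write("Student Name,Student Age\n")
older_tsv.write("Student Name\tStudent Age\n")

for line in original.readlines():
    name, age = line.rstrip().split(',')
    older_age = int(age) + 5                      # everyone is 5 years older
    older_csv.write(name + "," + str(older_age) + "\n")
    older_tsv.write(name + "\t" + str(older_age) + "\n")

original.close()
older_csv.close()
older_tsv.close()
```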
# put your amazing code here!
_____no_output_____
CC-BY-4.0
Lesson 6 - File access in Python3.ipynb
mjmtnez/Accelerated_Intro_to_CompBio_Part_2
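One possible sketch for the exercise above, assuming students.csv from the earlier lessons is in the working directory (the file and column names are the ones the exercise itself suggests, so this is only a hedged illustration, not the official solution):
students = open("students.csv", "r")
older = open("OLDERstudents.tsv", "w")
older.write("Student Name\tStudent Age\n")              # header line, tab-separated
for line in students.readlines():
    name, age = line.rstrip().split(',')                 # strip the newline, then split the CSV record
    older.write(name + "\t" + str(int(age) + 5) + "\n")  # everyone is 5 years older
students.close()
older.close()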
SymPy Symbolic ComputationFree, Open Source, Python- solve equations - simplify expressions- compute derivatives, integrals, limits- work with matrices, - plotting & printing- code gen - physics - statitics - combinatorics- number theory - geometry - logic---- Modules[SymPy Core](http://docs.sympy.org/latest/modules/core.html) - [Combinatorics](http://docs.sympy.org/latest/modules/combinatorics/index.html) - [Number Theory](http://docs.sympy.org/latest/modules/ntheory.html) - [Basic Cryptography](http://docs.sympy.org/latest/modules/crypto.html) - [Concrete Maths](http://docs.sympy.org/latest/modules/concrete.html) - [Numerical Evaluation](http://docs.sympy.org/latest/modules/evalf.html) - [Code Gen](http://docs.sympy.org/latest/modules/codegen.html) - [Numeric Computation](http://docs.sympy.org/latest/modules/numeric-computation.html) - [Functions](http://docs.sympy.org/latest/modules/functions/index.html) - [Geometry](http://docs.sympy.org/latest/modules/geometry/index.html) - [Holonomic Functions](http://docs.sympy.org/latest/modules/holonomic/index.html) - [Symbolic Integrals](http://docs.sympy.org/latest/modules/integrals/integrals.html) - [Numeric Integrals](http://docs.sympy.org/latest/modules/integrals/integrals.htmlnumeric-integrals) - [Lie Algebra](http://docs.sympy.org/latest/modules/liealgebras/index.html) - [Logic](http://docs.sympy.org/latest/modules/logic.html) - [Matricies](http://docs.sympy.org/latest/modules/matrices/index.html) - [Polynomials](http://docs.sympy.org/latest/modules/polys/index.html) - [Printing](http://docs.sympy.org/latest/modules/printing.html) - [Plotting](http://docs.sympy.org/latest/modules/plotting.html) - [Pyglet Plotting](http://docs.sympy.org/latest/modules/plotting.htmlmodule-sympy.plotting.pygletplot) - [Assumptions](http://docs.sympy.org/latest/modules/assumptions/index.html) - [Term Rewriting](http://docs.sympy.org/latest/modules/rewriting.html) - [Series Module](http://docs.sympy.org/latest/modules/series/index.html) - [Sets](http://docs.sympy.org/latest/modules/sets.html) - [Symplify](http://docs.sympy.org/latest/modules/simplify/simplify.html) - [Hypergeometrtic](http://docs.sympy.org/latest/modules/simplify/hyperexpand.html) - [Stats](http://docs.sympy.org/latest/modules/stats.html) - [ODE](http://docs.sympy.org/latest/modules/solvers/ode.html) - [PDE](http://docs.sympy.org/latest/modules/solvers/pde.html) - [Solvers](http://docs.sympy.org/latest/modules/solvers/solvers.html) - [Diophantine](http://docs.sympy.org/latest/modules/solvers/diophantine.html) - [Inequality Solvers](http://docs.sympy.org/latest/modules/solvers/inequalities.html) - [Solveset](http://docs.sympy.org/latest/modules/solvers/solveset.html) - [Tensor](http://docs.sympy.org/latest/modules/tensor/index.html) - [Utilities](http://docs.sympy.org/latest/modules/utilities/index.html) - [Parsing Input](http://docs.sympy.org/latest/modules/parsing.html) - [Calculus](http://docs.sympy.org/latest/modules/calculus/index.html) - [Physics](http://docs.sympy.org/latest/modules/physics/index.html) - [Categrory Theory](http://docs.sympy.org/latest/modules/categories.html) - [Differential Geometry](http://docs.sympy.org/latest/modules/diffgeom.html) - [Vector](http://docs.sympy.org/latest/modules/vector/index.html) ---- Simple Expressions
import sympy from sympy import * # SymPy's top-level API (symbols, sin, cos, Rational, ...) is used throughout this notebook # declare variables first x, y = symbols('x y') # Declare expression expr = x + 3*y # Print expressions print("expr =", expr) print("expr + 1 =", expr + 1) print("expr - x =", expr - x) # auto-simplify print("x * expr =", x * expr)
expr = x + 3*y expr + 1 = x + 3*y + 1 expr - x = 3*y x * expr = x*(x + 3*y)
MIT
notebooks/python-data-science/sympy/sympy.ipynb
sparkboom/my_jupyter_notes
---- Substitution
x = symbols('x') expr = x + 1 print(expr) display(expr) # Evaluate expression at a point print("expr(2)=", expr.subs(x, 2)) # Replace sub expression with another sub expression # 1. For expressions with symmetry x, y, z = symbols('x y z') # z is needed for the multi-substitution example below expr2 = x ** y expr2 = expr2.subs(y, x**y) expr2 = expr2.subs(y, x**x) display(expr2) # 2. Controlled simplification expr3 = sin(2*x) + cos(2*x) print("expr3") display(expr3) print(" ") print("expand_trig(expr3)") display(expand_trig(expr3)) print(" ") print("use this to only expand sin(2*x) if desired") print("expr3.subs(sin(2*x), 2*sin(x)*cos(x))") display(expr3.subs(sin(2*x), 2*sin(x)*cos(x))) # multi-substitute expr4 = x**3 + 4*x*y - z args = [(x,2), (y,4), (z,0)] expr5 = expr4.subs(args) display(expr4) print("args = ", args) display(expr5) expr6 = x**4 - 4*x**3 + 4 * x ** 2 - 2 * x + 3 args = [(x**i, y**i) for i in range(5) if i%2 == 0] display(expr6) print(args) display(expr6.subs(args))
_____no_output_____
MIT
notebooks/python-data-science/sympy/sympy.ipynb
sparkboom/my_jupyter_notes
---- Equality & Equivalence
# do not use == between symbols and variables, will return false x = symbols('x') x+1==4 # Create a symbolic equality expression expr2 = Eq(x+1, 4) print(expr2) display(expr2) print("if x=3, then", expr2.subs(x,3)) # two equivalent formulas expr3 = (x + 1)**2 # we use pythons ** exponentiation (instead of ^) expr4 = x**2 + 2*x + 1 eq34 = Eq(expr3, expr4) print("expr3") display(expr3) print(" ≡ expr4") display(expr4) print("") print("(expr3 == expr4) => ", expr3 == expr4) print("(these are equivalent, but not the same symbolically)") print("") print("Equal by negating, simplifying and comparing to 0") print("expr3 - expr4 => ", expr3 - expr4) print("simplify(expr3-expr4)==0=> ", simplify(expr3 - expr4)==0 ) print("") print("Equals (test by evaluating 2 random points)") print("expr3.equals(expr4) => ", expr3.equals(expr4))
expr3
MIT
notebooks/python-data-science/sympy/sympy.ipynb
sparkboom/my_jupyter_notes
---- SymPy Types & Casting
print( "1 =", type(1) ) print( "1.0 =", type(1.0) ) print( "Integer(1) =", type(Integer(1)) ) print( "Integer(1)/Integer(3) =", type(Integer(1)/Integer(3)) ) print( "Rational(0.5) =", type(Rational(0.5)) ) print( "Rational(1/3) =", type(Rational(1,3)) ) # string to SymPy sympify("x**2 + 3*x - 1/2")
_____no_output_____
MIT
notebooks/python-data-science/sympy/sympy.ipynb
sparkboom/my_jupyter_notes
---- Evaluating Expressions
# evaluate as float using .evalf(), and N display( sqrt(8) ) display( sqrt(8).evalf() ) display( sympy.N(sqrt(8)) ) # evaluate as float to nearest n decimals display(sympy.pi) display(sympy.pi.evalf(100))
_____no_output_____
MIT
notebooks/python-data-science/sympy/sympy.ipynb
sparkboom/my_jupyter_notes
---- SymPy Types Number Class[Number](http://docs.sympy.org/latest/modules/core.htmlnumber) - [Float](http://docs.sympy.org/latest/modules/core.htmlfloat) - [Rational](http://docs.sympy.org/latest/modules/core.htmlrational) - [Integer](http://docs.sympy.org/latest/modules/core.htmlinteger) - [RealNumber](http://docs.sympy.org/latest/modules/core.htmlrealnumber) Numbers[Zero](http://docs.sympy.org/latest/modules/core.htmlzero) - [One](http://docs.sympy.org/latest/modules/core.htmlone) - [Negative One](http://docs.sympy.org/latest/modules/core.htmlnegativeone) - [Half](http://docs.sympy.org/latest/modules/core.htmlhalf) - [NaN](http://docs.sympy.org/latest/modules/core.htmlnan) - [Infinity](http://docs.sympy.org/latest/modules/core.htmlinfinity) - [Negative Infinity](http://docs.sympy.org/latest/modules/core.htmlnegativeinfinity) - [Complex Infinity](http://docs.sympy.org/latest/modules/core.htmlcomplexinfinity) Constants[E (Transcedental Constant)](http://docs.sympy.org/latest/modules/core.htmlexp1) - [I (Imaginary Unit)](http://docs.sympy.org/latest/modules/core.htmlimaginaryunit) - [Pi](http://docs.sympy.org/latest/modules/core.htmlpi) - [EulerGamma (Euler-Mascheroni constant)](http://docs.sympy.org/latest/modules/core.htmleulergamma) - [Catalan (Catalan's Constant)](http://docs.sympy.org/latest/modules/core.htmlcatalan) - [Golden Ratio](http://docs.sympy.org/latest/modules/core.htmlgoldenratio) Rational Numbers
# Rational Numbers expr_rational = Rational(1)/3 print("expr_rational") display( type(expr_rational) ) display( expr_rational ) eval_rational = expr_rational.evalf() print("eval_rational") display( type(eval_rational) ) display( eval_rational ) neval_rational = N(expr_rational) print("neval_rational") display( type(neval_rational) ) display( neval_rational )
expr_rational
MIT
notebooks/python-data-science/sympy/sympy.ipynb
sparkboom/my_jupyter_notes
 Complex Numbers
# Complex Numbers supported. expr_cplx = 2.0 + 2*sympy.I print("expr_cplx") display( type(expr_cplx) ) display( expr_cplx ) print("expr_cplx.evalf()") display( type(expr_cplx.evalf()) ) display( expr_cplx.evalf() ) print("float() - errors") print(" ") # this errors complex cannot be converted to float #display( float(sym_cplx) ) print("complex() - evaluated to complex number") display( complex(expr_cplx) ) display( type(complex(expr_cplx)) ) # Partial Evaluation if cannot be evaluated as float display( (sympy.pi*x**2 + x/3).evalf(2) ) # use substitution in evalf expr = cos(2*x) expr.evalf(subs={x:2.4}) # sometimes there are round-offs smaller than the desired precision one = cos(1)**2 + sin(1)**2 display( (one-1).evalf() ) # chop=True can remove these errors display( (one-1).evalf(chop=True) ) import sys 'gmpy2' in sys.modules.keys()
_____no_output_____
MIT
notebooks/python-data-science/sympy/sympy.ipynb
sparkboom/my_jupyter_notes
This is statregistration.py, extended to handle multiple .nlp files. Edited from Tobias' script: it calculates shifts from CPcorrected and creates a shift-corrected stack. The calculation implements the drift correction algorithm described in section 4 of the paper. https://doi.org/10.1016/j.ultramic.2019.112913 https://github.com/TAdeJong/LEEM-analysis/blob/master/2%20-%20Driftcorrection.ipynb It uses cross correlation of all pairs of images, after applying digital smoothing and edge-detection filters, to align Low Energy Electron Microscopy images with each other. When applied correctly, this allows for sub-pixel accurate image registration. Config parameters: SAVEFIG: boolean, whether to save the figures. stride: a stride larger than 1 takes only every stride-th image of the total dataset; this decreases computation time by a factor of stride**2, but decreases accuracy. blocksize: dE is the blocksize used by dask, the number of images computed at once. fftsize: the size of the image region for which the drift correction is calculated. startI: starting frame for which the drift correction is calculated. endI: ending frame for which the drift correction is calculated. sigma: the Gaussian width over which the images are smoothed. Napari was added to choose a rectangular patch on which to perform the drift correction.
import numpy as np import matplotlib.pyplot as plt import dask.array as da from dask.distributed import Client, LocalCluster import scipy.ndimage as ndi import os import time import napari %gui qt from pyL5.lib.analysis.container import Container from pyL5.analysis.CorrectChannelPlate.CorrectChannelPlate import CorrectChannelPlate import pyL5.lib.analysis.Registration as Reg cluster = LocalCluster(n_workers=1, threads_per_worker=6) client = Client(cluster) client def plot_masking(DX_DY, W_n, coords, dx, dy, shifts, min_normed_weight, sigma): """Plot W, DX and DY to pick a value for W_{min} (Step 7 of algorithm)""" extent = [startI, endI, endI, startI] fig, axs = plt.subplots(1, 4, figsize=(12, 3), constrained_layout=True) im = {} im[0] = axs[0].imshow(DX_DY[0], cmap='seismic', extent=extent, interpolation='none') im[1] = axs[1].imshow(DX_DY[1], cmap='seismic', extent=extent, interpolation='none') im[2] = axs[2].imshow(W_n - np.diag(np.diag(W_n)), cmap='inferno', extent=extent, clim=(0.0, None), interpolation='none') axs[3].plot(coords, dx, 'x', label='dx') axs[3].plot(coords, dy, 'x', label='dy') axs[3].plot(shifts[:, 0], color='C0') axs[3].plot(shifts[:, 1], color='C1') axs[3].set_xlabel('frames') axs[3].set_ylabel('shift (pixels)') axs[3].set_box_aspect(1) axs[3].legend() axs[0].set_ylabel('$j$') fig.colorbar(im[0], ax=axs[:2], shrink=0.82, fraction=0.1) axs[0].contourf(W_n, [0, min_normed_weight], colors='black', alpha=0.6, extent=extent, origin='upper') axs[1].contourf(W_n, [0, min_normed_weight], colors='black', alpha=0.6, extent=extent, origin='upper') CF = axs[2].contourf(W_n, [0, min_normed_weight], colors='white', alpha=0.2, extent=extent, origin='upper') cbar = fig.colorbar(im[2], ax=axs[2], shrink=0.82, fraction=0.1) cbar.ax.fill_between([0, 1], 0, min_normed_weight, color='white', alpha=0.2) axs[0].set_title('$DX_{ij}$') axs[1].set_title('$DY_{ij}$') axs[2].set_title('$W_{ij}$') plt.show() return min_normed_weight folder = 'D:\\20220210-36-CuGrKalbac-old\\growth' #folder = 'D:\\20211130-27-Si111SbPLD\\PLD2_100mJ_400C\\growthIVs' names = [f.name for f in os.scandir(folder) if f.is_file() and f.name[-4:] == ".nlp"] for name in names: script = CorrectChannelPlate(os.path.join(folder, name)) script.start() conts = [Container(os.path.join(folder,f)) for f in names] #original = da.stack([cont.getStack().getDaskArray() for cont in conts]) #original = da.stack([cont.getStack('CPcorrected').getDaskArray() for cont in conts]) original = da.image.imread(os.path.join(folder+'\driftcorrected01\*')) m = 1 subfolder = 'driftcorrected%02d' %m + 'it2' #original = original[:,m] original # config SAVEFIG = True stride = 1 dE = 20 fftsize = 256 startI, endI = 0, -1 Eslice = slice(startI,endI,stride) sigma = 10 min_norm = 0.4 #minimum
_____no_output_____
MIT
StatRegistration_multiNLP.ipynb
thagusta/LEEM-analysis-1
Step 0: choose areaChoose the (rectangular) area on which to perform drift correction.
center = [dim//2 for dim in original.shape[1:]] extent = (center[0]-fftsize, center[0]+fftsize, center[1]-fftsize, center[1]+fftsize) extent viewer = napari.view_image(np.swapaxes(original, -1, -2), name='original') # create the square in napari center = np.array(original.shape[1:]) // 2 square = np.array([[center[1]+fftsize, center[0]+fftsize], [center[1]-fftsize, center[0]+fftsize], [center[1]-fftsize, center[0]-fftsize], [center[1]+fftsize, center[0]-fftsize] ]) shapes_layer = viewer.add_shapes(square, shape_type='polygon', edge_width=2, edge_color='white') shapes_layer._fixed_aspect = True # Keep it square # load the outer coordinates of napari coords = np.flip(np.array(shapes_layer.data).astype(int)[0]) extent = np.min(coords[:,0]), np.max(coords[:,0]), np.min(coords[:,1]), np.max(coords[:,1]) #xmin, xmax, ymin, ymax fftsize = max(extent[1]-extent[0], extent[3]-extent[2]) //2 #This is basically for print('The extent in x,y is:', extent, 'pixels, which makes the largest side/2', fftsize, 'pixels.') viewer.close()
The extent in x,y is: (214, 726, 646, 1158) pixels, which makes the largest side/2 256 pixels.
MIT
StatRegistration_multiNLP.ipynb
thagusta/LEEM-analysis-1
Now starting the steps of the algorithm
def crop_and_filter_extent(images, extent, sigma=11, mode='nearest'): """Crop images to extent chosen and apply the filters. Cropping is initially with a margin of sigma, to prevent edge effects of the filters. extent = minx,maxx,miny,maxy of ROI""" result = images[:, extent[0]-sigma:extent[1]+sigma, extent[2]-sigma:extent[3]+sigma] result = result.map_blocks(filter_block, dtype=np.float64, sigma=sigma, mode=mode) if sigma > 0: result = result[:, sigma:-sigma, sigma:-sigma] return result # Step 1 to 3 of the algorithm as described in section 4 of the paper. sobel = crop_and_filter_extent(original[Eslice, ...].rechunk({0: dE}), extent, sigma=sigma) sobel = (sobel - sobel.mean(axis=(1, 2), keepdims=True)) # .persist() # Step 4 of the algorithm as described in paper. Corr = Reg.dask_cross_corr(sobel) # Step 5 of the algorithm weights, argmax = Reg.max_and_argmax(Corr) # Do actual computations t = time.monotonic() W, DX_DY = Reg.calculate_halfmatrices(weights, argmax, fftsize=fftsize) print(time.monotonic() - t, ' seconds') # Step 6 of the algorithm w_diag = np.atleast_2d(np.diag(W)) W_n = W / np.sqrt(w_diag.T*w_diag) # Step 7 of the algorithm nr = np.arange(W.shape[0])*stride + startI coords2, weightmatrix, DX, DY, row_mask = Reg.threshold_and_mask(min_norm, W, DX_DY, nr) # Step 8 of the algorithm: reduce the shift matrix to two vectors of absolute shifts dx, dy = Reg.calc_shift_vectors(DX, DY, weightmatrix) # Interpolate the shifts for all values not in coords shifts = np.stack(Reg.interp_shifts(coords2, [dx, dy], n=original.shape[0]), axis=1) neededMargins = np.ceil(shifts.max(axis=0)).astype(int) plot_masking(DX_DY, W_n, coords2, dx, dy, shifts, min_norm, sigma) print("shiftshape", shifts.shape) shifts = da.from_array(shifts, chunks=(dE, -1)) # Step 9, the actual shifting of the original images # Inferring output dtype is not supported in dask yet, so we need original.dtype here. @da.as_gufunc(signature="(i,j),(2)->(i,j)", output_dtypes=original.dtype, vectorize=True) def shift_images(image, shift): """Shift `image` by `shift` pixels.""" return ndi.shift(image, shift=shift, order=1) padded = da.pad(original.rechunk({0: dE}), ((0, 0), (0, neededMargins[0]), (0, neededMargins[1]) ), mode='constant' ) corrected = shift_images(padded.rechunk({1: -1, 2: -1}), shifts) corrected # Optional crop of images TODO # Save as png with dask from pyL5.lib.analysis.stack import da_imsave os.makedirs(os.path.join(folder, subfolder), exist_ok=True) da_imsave(os.path.join(folder, subfolder, 'image{:04d}.png'),corrected, compute=True) from pyL5.solidsnakephysics.helperFunctions import save_movie save_movie(corrected,os.path.join(folder, subfolder))
Saving stack as .png Now saving movie stack.mp4
MIT
StatRegistration_multiNLP.ipynb
thagusta/LEEM-analysis-1
Create a Class To create a class, use the keyword class:
class car: x = 5 print(car)
<class '__main__.car'>
MIT
Ananthakrishnan_Python_Class&Objects.ipynb
VarunBhaaskar/Open-contributions
Object Creation
class car: x = 5 company="Maruti" p1 = car() print(p1.x) print(p1.company)
5 Maruti
MIT
Ananthakrishnan_Python_Class&Objects.ipynb
VarunBhaaskar/Open-contributions
Object Creation & The self Parameter
class Jeep: # A simple class # attribute company = "Maruti" location = "India" # A sample method def test(self): print("I'm from", self.company) print("I'm made in ", self.location) # Object instantiation jimny = Jeep() # Accessing class attributes # and method through objects print(jimny.company) jimny.test()
Maruti I'm from Maruti I'm made in India
MIT
Ananthakrishnan_Python_Class&Objects.ipynb
VarunBhaaskar/Open-contributions
The self Parameter It does not have to be named *self*; you can call it whatever you like, but it has to be the first parameter of any function in the class:
class Car: def __init__(sampleobject, name, price): sampleobject.name = name sampleobject.price = price def myfunc(abc): print("Hello , my car is " + abc.name) p1 = Car("BMW", 1) p1.myfunc()
Hello , my car is BMW
MIT
Ananthakrishnan_Python_Class&Objects.ipynb
VarunBhaaskar/Open-contributions
The __init__() Function Note: The __init__() function is called automatically every time the class is being used to create a new object.
class Car: def __init__(self, company): self.company = company p1 = Car("Maruti" ) print(p1.company)
Maruti
MIT
Ananthakrishnan_Python_Class&Objects.ipynb
VarunBhaaskar/Open-contributions
Modify Object Properties
class Car: def __init__(self, model, rate): self.model = model self.rate = rate def myfunc(self): print("Hello my name is " + self.model) p1 = Car("Swift", 500000) p1.myfunc() print(p1.model) print("Previous rate of my model was " , p1.rate) p1.rate = 400000 print("New rate is" , p1.rate)
Hello my name is Swift Swift Previous rate of my model was 500000 New rate is 400000
MIT
Ananthakrishnan_Python_Class&Objects.ipynb
VarunBhaaskar/Open-contributions
Delete Object Properties You can delete properties on objects by using the del keyword; note that accessing the deleted property afterwards, as the cell below does, raises an AttributeError:
class Car: def __init__(self, model, rate): self.model = model self.rate = rate def myfunc(self): print("Hello my name is " + self.model) p1 = Car("Swift",500000 ) del p1.rate print(p1.rate)
_____no_output_____
MIT
Ananthakrishnan_Python_Class&Objects.ipynb
VarunBhaaskar/Open-contributions
Delete Object You can delete objects by using the del keyword; any later reference to the deleted object, as the cell below shows, raises a NameError:
class Car: def __init__(self, model, rate): self.model = model self.rate = rate def myfunc(self): print("Hello my name is " + self.model) p1 = Car("Swift",500000 ) del p1 print(p1.rate)
_____no_output_____
MIT
Ananthakrishnan_Python_Class&Objects.ipynb
VarunBhaaskar/Open-contributions
The pass Statement
class Person: pass class Car: # Class Variable fav_car = 'Verna' # The init method or constructor def __init__(self, model, color): # Instance Variable self.model = model self.color = color # Objects of the Car class Moderna = Car("swift", "brown") Buziga = Car("wagonR", "black") print('Moderna details:') print('Moderna is a', Moderna.fav_car) print('Model: ', Moderna.model) print('Color: ', Moderna.color) print('\nBuziga details:') print('Buziga is a', Buziga.fav_car) print('Model: ', Buziga.model) print('Color: ', Buziga.color) # Class variables can be accessed using class # name also print("\nAccessing class variable using class name") print(Car.fav_car)
Moderna details: Moderna is a Verna Model: swift Color: brown Buziga details: Buziga is a Verna Model: wagonR Color: black Accessing class variable using class name Verna
MIT
Ananthakrishnan_Python_Class&Objects.ipynb
VarunBhaaskar/Open-contributions
Lambda School Data Science - Survival Analysis![My normal approach is useless here, too.](https://imgs.xkcd.com/comics/probability.png)https://xkcd.com/881/The aim of survival analysis is to analyze the effect of different risk factors and use them to predict the duration of time between one event ("birth") and another ("death"). LectureSurvival analysis was first developed by actuaries and medical professionals to predict (as its name implies) how long individuals would survive. However, it has expanded into include many different applications.* it is referred to as **reliability analysis** in engineering* it can be referred to more generally as **time-to-event analysis**In the general sense, it can be thought of as a way to model anything with a finite duration - retention, churn, completion, etc. The culmination of this duration may have a "good" or "bad" (or "neutral") connotation, depending on the situation. However old habits die hard, so most often it is called survival analysis and the following definitions are still commonly used:* birth: the event that marks the beginning of the time period for observation* death: the event of interest, which then marks the end of the observation period for an individual Examples* Customer churn * birth event: customer subscribes to a service * death event: customer leaves the service* Employee retention * birth event: employee is hired * death event: employee quits* Engineering, part reliability * birth event: part is put in use * death event: part fails* Program completion * birth event: student begins PhD program * death event: student earns PhD* Response time * birth event: 911 call is made * death event: police arrive* Lambda School * birth event: student graduates LambdaSchool * death event: student gets a job! Take a moment and try to come up with your own specific example or two. So... if all we're predicting here is a length of time between two events, why can't we just use regular old Linear Regression?Well... if you have all the data, go for it. In some situations it may be reasonably effective. But, data for survival times are often highly skewed and, more importantly, we don't always get a chance to observe the "death" event. The current time or other factors interfere with our ability to observe the time of the event of interest. These observations are said to be _censored_.Additionally, the occurrence or non-occurrence of an event is binary - so, while the time is continuous, the event itself is in some ways similar to a binary event in logistic regression. Censorship in DataSuppose a new cancer treatment is developed. Researchers select 50 individuals for the study to undergo treatment and participate in post-treatment obsesrvation. Birth Event = Participant begins trial Death Event = Participant dies due to cancer or complications of cancerDuring the study:1. Some participants die during the course of the study--triggering their death event 2. Some participants drop out or the researchers otherwise lose contact with them. The researchers have their data up intil the time they dropped out, but they don't have a death event to record3. Some participants are still be alive at the end of the observation period. So again, researchers have their data up until some point, but there is no death event to recordWe only know the interval between the "birth" event and the "death" event for participants in category 1. All others we only know that they survived _up to_ a certain point. 
Dealing with Censored DataWithout survival analysis, we could deal with censored data in two ways:* We could just treat the end of the observation period as the time of the death event* (Even worse) We could drop the censored data using the rationale that we have "incomplete data" for those observationsBut... both of these will underestimate survival rates for the purpose of the study. We **know** that all those individuals "survived" the "death event" past a certain point.Luckily, in the 1980s a pair of smarty pants named David (main author Cox and coauthor Oakes) did the hard math work to make it possible to incorporate additional features as predictive measures to survival time probabilities. (Fun fact, the one named Cox also came up with logistic regression with non-David coauthor, Joyce Snell.) lifelinesIt wasn't until 2014 that some other smart people made an implementation of survival analysis in Python called lifelines. It is built over Pandas and follows the same conventions for usage as scikit-learn._Additional note: scikit pushed out a survival analysis implementation last year (2018) named scikit-survival that is imported by the name `sksurv`. It's super new so it may/may not have a bunch of bugs... but if you're interested you can check it out in the future. (For comparison, scikit originally came out in 2007 and Pandas came out in 2008)._
!pip install lifelines import numpy as np import pandas as pd import matplotlib.pyplot as plt import lifelines # lifelines comes with some datasets to get you started playing around with it. # Most of the datasets are cleaned-up versions of real datasets. Here we will # use their Leukemia dataset comparing 2 different treatments taken from # http://web1.sph.emory.edu/dkleinb/allDatasets/surv2datasets/anderson.dat from lifelines.datasets import load_leukemia leukemia = load_leukemia() leukemia.head()
_____no_output_____
MIT
module2-survival-analysis/LS_DS_232_Survival_Analysis.ipynb
macscheffer/DS-Unit-2-Sprint-3-Advanced-Regression
You can use any Pandas DataFrame with lifelines. The only requirement is that the DataFrame includes features that describe:* a duration of time for the observation* a binary column regarding censorship (`1` if the death event was observed, `0` if the death event was not observed). Sometimes, you will have to engineer these features. How might you go about that? What information would you need?
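One hedged way to engineer those two columns when you only have raw timestamps (the column names below are hypothetical and not part of the leukemia dataset):
import pandas as pd
# hypothetical raw data: an observation start date and an optional event date
raw = pd.DataFrame({"start": pd.to_datetime(["2018-01-01", "2018-02-15"]),
                    "event_date": pd.to_datetime(["2018-06-01", None])})   # NaT = event never observed
study_end = pd.Timestamp("2019-01-01")
raw["observed"] = raw["event_date"].notnull().astype(int)                  # 1 = death event seen, 0 = censored
raw["duration"] = (raw["event_date"].fillna(study_end) - raw["start"]).dt.days
print(raw)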
leukemia.info() leukemia.describe() time = leukemia.t.values event = leukemia.status.values ax = lifelines.plotting.plot_lifetimes(time, event_observed=event) ax.set_xlim(0, 40) ax.grid(axis='x') ax.set_xlabel("Time in Months") ax.set_title("Lifelines for Survival of Leukemia Patients"); plt.plot();
_____no_output_____
MIT
module2-survival-analysis/LS_DS_232_Survival_Analysis.ipynb
macscheffer/DS-Unit-2-Sprint-3-Advanced-Regression
Kaplan-Meier survival estimate The Kaplan-Meier method estimates survival probability from observed survival times. It results in a step function that changes value only at the time of each event, and confidence intervals can be computed for the survival probabilities. The KM survival curve, a plot of the KM survival probability against time, provides a useful summary of the data. It can be used to estimate measures such as median survival time. It CANNOT account for risk factors and is NOT regression. It is *non-parametric* (it does not assume any parametric form for the survival distribution). However, it is a good way to visualize a survival dataset, and it can be useful for comparing the effects of a single categorical variable.
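For reference, the estimator fit below is the usual product-limit formula, where $d_i$ is the number of death events at time $t_i$ and $n_i$ is the number of subjects still at risk just before $t_i$:
$$\hat{S}(t) = \prod_{t_i \le t}\left(1 - \frac{d_i}{n_i}\right)$$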
kmf = lifelines.KaplanMeierFitter() kmf.fit(time, event_observed=event) !pip install -U matplotlib # Colab has matplotlib 2.2.3, we need >3 kmf.survival_function_.plot() plt.title('Survival Function Leukemia Patients'); print(f'Median Survival: {kmf.median_} months after treatment') kmf.survival_function_.plot.line() ax = plt.subplot(111) treatment = (leukemia["Rx"] == 1) kmf.fit(time[treatment], event_observed=event[treatment], label="Treatment 1") kmf.plot(ax=ax) print(f'Median survival time with Treatment 1: {kmf.median_} months') kmf.fit(time[~treatment], event_observed=event[~treatment], label="Treatment 0") kmf.plot(ax=ax) print(f'Median survival time with Treatment 0: {kmf.median_} months') plt.ylim(0, 1); plt.title("Survival Times for Leukemia Treatments");
Median survival time with Treatment 1: 8.0 months Median survival time with Treatment 0: 23.0 months
MIT
module2-survival-analysis/LS_DS_232_Survival_Analysis.ipynb
macscheffer/DS-Unit-2-Sprint-3-Advanced-Regression
Cox Proportional Hazards Model -- Survival Regression It assumes the ratio of death event risks (hazards) of two groups remains about the same over time. This ratio is called the hazard ratio or the relative risk. All Cox regression requires is the assumption that the ratio of hazards is constant over time across groups. *The good news* — we don’t need to know anything about the overall shape of risk/hazard over time. *The bad news* — the proportionality assumption can be restrictive.
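Written out, the model assumes the hazard for a subject with covariates $x_1, \dots, x_p$ factors into a shared baseline hazard and a covariate term, which is where the proportionality comes from:
$$h(t \mid x) = h_0(t)\,\exp(\beta_1 x_1 + \dots + \beta_p x_p)$$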
# Using Cox Proportional Hazards model cph = lifelines.CoxPHFitter() cph.fit(leukemia, 't', event_col='status') cph.print_summary()
<lifelines.CoxPHFitter: fitted with 42 observations, 12 censored> duration col = 't' event col = 'status' number of subjects = 42 number of events = 30 log-likelihood = -69.59 time fit was run = 2019-01-20 19:09:48 UTC --- coef exp(coef) se(coef) z p log(p) lower 0.95 upper 0.95 sex 0.31 1.37 0.45 0.69 0.49 -0.72 -0.58 1.21 logWBC 1.68 5.38 0.34 5.00 <0.005 -14.36 1.02 2.34 *** Rx 1.50 4.50 0.46 3.26 <0.005 -6.79 0.60 2.41 * --- Signif. codes: 0 '***' 0.0001 '**' 0.001 '*' 0.01 '.' 0.05 ' ' 1 Concordance = 0.85 Likelihood ratio test = 47.19 on 3 df, log(p)=-21.87
MIT
module2-survival-analysis/LS_DS_232_Survival_Analysis.ipynb
macscheffer/DS-Unit-2-Sprint-3-Advanced-Regression
Interpreting the Results`coef`: usually denoted with $b$, the coefficient`exp(coef)`: $e^{b}$, equals the estimate of the hazard ratio. Here, we can say that participants who received treatment 1 had ~4.5 times the hazard risk (risk of death) compared to those who received treatment 2. And for every unit the `logWBC` increased, the hazard risk increased >5 times.`se(coef)`: standard error of the coefficient (used for calculating z-score and therefore p-value)`z`: z-score $\frac{b}{se(b)}$`p`: p-value. derived from z-score. describes statistical significance. more specifically, it is the likelihood that the variable has no effect on the outcome`log(p)`: natural logarithm of p-value... used to more easily see differences in significance`lower/upper 0.95`: confidence levels for the coefficients. in this case, we can confidently say that the coefficient for `logWBC` is somewhere _between_ 1.02 and 2.34.`Signif. codes`: easily, visually identify significant variables! The more stars, the more solid (simply based on p-value). Here `logWBC` is highly significant, `Rx` is significant, and `sex` has no statistical significance`Concordance`: a measure of predictive power for classification problems (here looking at the `status` column. a value from 0 to 1 where values above 0.6 are considered good fits (the higher the better)`Likelihood ratio (LR) test`: this is a measure of how likely it is that the coefficients are not zero, and can compare the goodness of fit of a model versus an alternative null model. Is often actually calculated as a logarithm, resulting in the log-likelihood ratio statistic and allowing the distribution of the test statistic to be approximated with [Wilks' theorem](https://en.wikipedia.org/wiki/Wilks%27_theorem).
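A small sketch of how `exp(coef)` and its confidence bounds follow from the coefficient and standard error; the numbers are simply re-typed from the Rx row of the summary above:
import numpy as np
coef, se = 1.50, 0.46                                    # Rx row of the printed summary
hazard_ratio = np.exp(coef)                              # roughly 4.5x the hazard for treatment 1
ci_low, ci_high = np.exp(coef - 1.96*se), np.exp(coef + 1.96*se)
print(hazard_ratio, ci_low, ci_high)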
cph.plot_covariate_groups(covariate='logWBC', groups=np.arange(1.5,5,.5)) cph.plot_covariate_groups(covariate='sex', groups=[0,1])
_____no_output_____
MIT
module2-survival-analysis/LS_DS_232_Survival_Analysis.ipynb
macscheffer/DS-Unit-2-Sprint-3-Advanced-Regression
Remember how the Cox model assumes the ratio of death events between groups remains constant over time? Well, we can check for that.
cph.check_assumptions(leukemia) # We can see that the sex variable is not very useful by plotting the coefficients cph.plot() # Let's do what the check_assumptions function suggested cph = lifelines.CoxPHFitter() cph.fit(leukemia, 't', event_col='status', strata=['sex']) cph.print_summary() cph.baseline_cumulative_hazard_.shape
<lifelines.CoxPHFitter: fitted with 42 observations, 12 censored> duration col = 't' event col = 'status' strata = ['sex'] number of subjects = 42 number of events = 30 log-likelihood = -55.73 time fit was run = 2019-01-20 19:09:51 UTC --- coef exp(coef) se(coef) z p log(p) lower 0.95 upper 0.95 logWBC 1.45 4.28 0.34 4.23 <0.005 -10.64 0.78 2.13 *** Rx 1.00 2.71 0.47 2.11 0.04 -3.35 0.07 1.93 . --- Signif. codes: 0 '***' 0.0001 '**' 0.001 '*' 0.01 '.' 0.05 ' ' 1 Concordance = 0.85 Likelihood ratio test = 74.90 on 2 df, log(p)=-37.45
MIT
module2-survival-analysis/LS_DS_232_Survival_Analysis.ipynb
macscheffer/DS-Unit-2-Sprint-3-Advanced-Regression
Notice that this regression has `Likelihood ratio test = 74.90 on 2 df, log(p)=-37.45`, while the one that included `sex` had `Likelihood ratio test = 47.19 on 3 df, log(p)=-21.87`. The LRT is higher and log(p) is lower, meaning this is likely a better fitting model.
cph.plot() cph.compute_residuals(leukemia, kind='score') cph.predict_cumulative_hazard(leukemia[:5]) surv_func = cph.predict_survival_function(leukemia[:5]) exp_lifetime = cph.predict_expectation(leukemia[:5]) plt.plot(surv_func) exp_lifetime # lifelines comes with some datasets to get you started playing around with it # The Rossi dataset originally comes from Rossi et al. (1980), # and is used as an example in Allison (1995). # The data pertain to 432 convicts who were released from Maryland state prisons # in the 1970s and who were followed up for one year after release. Half the # released convicts were assigned at random to an experimental treatment in # which they were given financial aid; half did not receive aid. from lifelines.datasets import load_rossi recidivism = load_rossi() recidivism.head() # Looking at the Rossi dataset, how long do you think the study lasted? # All features are coded with numerical values, but which features do you think # are actually categorical? recidivism.info() recidivism.describe()
_____no_output_____
MIT
module2-survival-analysis/LS_DS_232_Survival_Analysis.ipynb
macscheffer/DS-Unit-2-Sprint-3-Advanced-Regression
These are the "lifelines" of the study participants as they attempt to avoid recidivism
recidivism_sample = recidivism.sample(n=25) duration = recidivism_sample.week.values arrested = recidivism_sample.arrest.values ax = lifelines.plotting.plot_lifetimes(duration, event_observed=arrested) ax.set_xlim(0, 78) ax.grid(axis='x') ax.vlines(52, 0, 25, lw=2, linestyles='--') ax.set_xlabel("Time in Weeks") ax.set_title("Recidivism Rates"); plt.plot(); kmf = lifelines.KaplanMeierFitter() duration = recidivism.week arrested = recidivism.arrest kmf.fit(duration, arrested) kmf.survival_function_.plot() plt.title('Survival Curve:\nRecidivism of Recently Released Prisoners'); kmf.plot() plt.title('Survival Function of Recidivism Data'); print(f'Median time before recidivism: {kmf.median_} weeks') kmf_w_aid = lifelines.KaplanMeierFitter() kmf_no_aid = lifelines.KaplanMeierFitter() ax = plt.subplot(111) w_aid = (recidivism['fin']==1) t = np.linspace(0, 70, 71) kmf_w_aid.fit(duration[w_aid], event_observed=arrested[w_aid], timeline=t, label="Received Financial Aid") ax = kmf_w_aid.plot(ax=ax) #print("Median survival time of democratic:", kmf.median_) kmf_no_aid.fit(duration[~w_aid], event_observed=arrested[~w_aid], timeline=t, label="No Financial Aid") ax = kmf_no_aid.plot(ax=ax) #print("Median survival time of non-democratic:", kmf.median_) plt.ylim(.5,1) plt.title("Recidivism for Participants Who Received Financial Aid \vs. Those Who Did Not"); naf = lifelines.NelsonAalenFitter() naf.fit(duration, arrested) print(naf.cumulative_hazard_.head()) naf.plot() naf_w_aid = lifelines.NelsonAalenFitter() naf_no_aid = lifelines.NelsonAalenFitter() naf_w_aid.fit(duration[w_aid], event_observed=arrested[w_aid], timeline=t, label="Received Financial Aid") ax = naf_w_aid.plot(loc=slice(0, 50)) naf_no_aid.fit(duration[~w_aid], event_observed=arrested[~w_aid], timeline=t, label="No Financial Aid") ax = naf_no_aid.plot(ax=ax, loc=slice(0, 50)) plt.title("Recidivism Cumulative Hazard\nfor Participants Who Received Financial Aid \nvs. Those Who Did Not"); plt.show() cph = lifelines.CoxPHFitter() cph.fit(recidivism, duration_col='week', event_col='arrest', show_progress=True) cph.print_summary() cph.plot() cph.plot_covariate_groups('fin', [0, 1]) cph.plot_covariate_groups('prio', [0, 5, 10, 15]) r = cph.compute_residuals(recidivism, 'martingale') r.head() cph = lifelines.CoxPHFitter() cph.fit(recidivism, duration_col='week', event_col='arrest', show_progress=True) cph.print_summary() cph.plot(); cph.plot_covariate_groups('prio', [0, 5, 10, 15]); cph.check_assumptions(recidivism)
<lifelines.StatisticalResult> test_name = proportional_hazard_test null_distribution = chi squared degrees_of_freedom = 1 --- test_statistic p log(p) age identity 12.06 <0.005 -7.57 ** km 11.03 <0.005 -7.02 ** log 13.07 <0.005 -8.11 ** rank 11.09 <0.005 -7.05 ** fin identity 0.06 0.81 -0.21 km 0.02 0.89 -0.12 log 0.49 0.48 -0.73 rank 0.02 0.90 -0.11 mar identity 0.75 0.39 -0.95 km 0.60 0.44 -0.82 log 1.03 0.31 -1.17 rank 0.67 0.41 -0.88 paro identity 0.12 0.73 -0.32 km 0.12 0.73 -0.31 log 0.01 0.92 -0.09 rank 0.14 0.71 -0.34 prio identity 0.01 0.92 -0.09 km 0.02 0.88 -0.13 log 0.38 0.54 -0.62 rank 0.02 0.88 -0.13 race identity 1.49 0.22 -1.50 km 1.44 0.23 -1.47 log 1.03 0.31 -1.17 rank 1.46 0.23 -1.48 wexp identity 6.93 0.01 -4.77 * km 7.48 0.01 -5.07 * log 5.54 0.02 -3.99 . rank 7.18 0.01 -4.91 * --- Signif. codes: 0 '***' 0.0001 '**' 0.001 '*' 0.01 '.' 0.05 ' ' 1 1. Variable 'age' failed the non-proportional test, p=0.0003. Advice: try binning the variable 'age' using pd.cut, and then specify it in `strata_col=['age']` in the call in `.fit`. See more documentation here: https://lifelines.readthedocs.io/en/latest/jupyter_notebooks/Proportional%20hazard%20assumption.html Advice: try adding an interaction term with your time variable. See more documentation here: https://lifelines.readthedocs.io/en/latest/jupyter_notebooks/Proportional%20hazard%20assumption.html 2. Variable 'wexp' failed the non-proportional test, p=0.0063. Advice: with so few unique values (only 2), you can try `strata_col=['wexp']` in the call in `.fit`. See documentation here: https://lifelines.readthedocs.io/en/latest/jupyter_notebooks/Proportional%20hazard%20assumption.html
MIT
module2-survival-analysis/LS_DS_232_Survival_Analysis.ipynb
macscheffer/DS-Unit-2-Sprint-3-Advanced-Regression
The Intuition - Hazard and Survival Functions Hazard Function - the dangerous bathtubThe hazard function represents the *instantaneous* likelihood of failure. It can be treated as a PDF (probability density function), and with real-world data comes in three typical shapes.![Different hazard functions](https://upload.wikimedia.org/wikipedia/commons/2/25/Compsyseng17_04.jpg)Increasing and decreasing failure rate are fairly intuitive - the "bathtub" shaped is perhaps the most surprising, but actually models many real-world situations. In fact, life expectancy in general, and most threats to it, assume this shape.What the "bathtub" means is that - threats are highest at youth (e.g. infant mortality), but then decrease and stabilize at maturity, only to eventually re-emerge in old age. Many diseases primarily threaten children and elderly, and middle aged people are also more robust to physical trauma.The "bathtub" is also suitable for many non-human situations - often with reliability analysis, mechanical parts either fail early (due to manufacturing defects), or they survive and have a relatively long lifetime to eventually fail out of age and use. Survival Function (aka reliability function) - it's just a (backwards) CDFSince the hazard function can be treated as a probability density function, it makes sense to think about the corresponding cumulative distribution function (CDF). But because we're modeling time to failure, it's actually more interesting to look at the CDF backwards - this is called the complementary cumulative distribution function.In survival analysis there's a special name for it - the survival function - and it gives the probability that the object being studied will survive beyond a given time.![4 survival functions](https://upload.wikimedia.org/wikipedia/commons/e/e0/Four_survival_functions.svg)As you can see they all start at 1 for time 0 - at the beginning, all things are alive. Then they all move down over time to eventually approach and converge to 0. The different shapes reflect the average/expected retention of a population subject to this function over time, and as such this is a particularly useful visualization when modeling overall retention/churn situations. Ways to estimate/model survival analysis - terms to be aware ofKey Components Necessary for these models - duration, and whether observation is censored.- Kaplan Meier Estimator- Nelson-Aalen Estimator- Proportional Hazards (Cox Model, integrates covariates)- Additive Hazards Model (Aalen's Additive Model, when covariates are time-dependent)As with most statistics, these are all refinements of the general principles, with the math to back them up. Software packages will tend to select reasonable defaults, and allow you to use parameters to tune or select things. The math for these gets varied and deep - but feel free to [dive in](https://en.wikipedia.org/wiki/Survival_analysis) if you're curious! Live! Let's try modeling heart attack survivalhttps://archive.ics.uci.edu/ml/datasets/echocardiogram
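The link between the two pictures can be written in one line: the survival function is the accumulated hazard pushed through an exponential, so a bathtub-shaped hazard produces the early drop, long plateau, and late decline seen in survival curves:
$$S(t) = \exp\left(-\int_0^t h(u)\,du\right)$$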
# TODO - Live! (As time permits)
_____no_output_____
MIT
module2-survival-analysis/LS_DS_232_Survival_Analysis.ipynb
macscheffer/DS-Unit-2-Sprint-3-Advanced-Regression
Assignment - Customer ChurnTreselle Systems, a data consulting service, [analyzed customer churn data using logistic regression](http://www.treselle.com/blog/customer-churn-logistic-regression-with-r/). For simply modeling whether or not a customer left this can work, but if we want to model the actual tenure of a customer, survival analysis is more appropriate.The "tenure" feature represents the duration that a given customer has been with them, and "churn" represents whether or not that customer left (i.e. the "event", from a survival analysis perspective). So, any situation where churn is "no" means that a customer is still active, and so from a survival analysis perspective the observation is censored (we have their tenure up to now, but we don't know their *true* duration until event).Your assignment is to [use their data](https://github.com/treselle-systems/customer_churn_analysis) to fit a survival model, and answer the following questions:- What features best model customer churn?- What would you characterize as the "warning signs" that a customer may discontinue service?- What actions would you recommend to this business to try to improve their customer retention?Please create at least *3* plots or visualizations to support your findings, and in general write your summary/results targeting an "interested layperson" (e.g. your hypothetical business manager) as your audience.This means that, as is often the case in data science, there isn't a single objective right answer - your goal is to *support* your answer, whatever it is, with data and reasoning.Good luck!
import pandas as pd # Loading the data to get you started df = pd.read_csv( 'https://raw.githubusercontent.com/treselle-systems/' 'customer_churn_analysis/master/WA_Fn-UseC_-Telco-Customer-Churn.csv') df.head() df.info() # A lot of these are "object" - some may need to be fixed... pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) df.head() df.Churn = df.Churn.map({'No': 0, 'Yes':1}) df.Churn.mean() df.pivot_table(index='PaperlessBilling', values='Churn') df.pivot_table(index='Contract', values='Churn') df.pivot_table(index='gender', values='Churn') df.head() df.columns categorical = ['gender', 'SeniorCitizen', 'Partner', 'Dependents', 'PhoneService', 'MultipleLines', 'InternetService', 'OnlineSecurity', 'OnlineBackup', 'DeviceProtection', 'TechSupport', 'StreamingTV', 'StreamingMovies', 'Contract', 'PaperlessBilling', 'PaymentMethod'] for col in categorical: df = pd.get_dummies(df, columns=[col]) df.shape df.head() from sklearn.linear_model.logistic import LogisticRegression from sklearn.model_selection import train_test_split X = df.drop(['Churn', 'customerID','TotalCharges'],axis=1) y = df.Churn X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=42) lr = LogisticRegression() lr.fit(X_train, y_train) lr.score(X_test, y_test)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. FutureWarning)
MIT
module2-survival-analysis/LS_DS_232_Survival_Analysis.ipynb
macscheffer/DS-Unit-2-Sprint-3-Advanced-Regression
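A hedged sketch of the survival-analysis route the assignment asks for, reusing `tenure` as the duration and `Churn` as the event on the dummified dataframe built above; treat it as a starting point rather than the reference solution:
import lifelines
# assumes `df` is the dataframe from the cells above, with Churn already mapped to 0/1
surv = df.drop(columns=['customerID', 'TotalCharges'])     # TotalCharges is still a string column here
cph = lifelines.CoxPHFitter()
cph.fit(surv, duration_col='tenure', event_col='Churn')    # may warn: the one-hot columns are collinear (no drop_first)
cph.print_summary()                                        # large exp(coef) values flag churn "warning signs"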
$$p(word|C_k)$$
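Spelled out, the quantity the helper functions below estimate is the Laplace-smoothed class-conditional likelihood, where $n_k(word)$ counts the training tweets of class $C_k$ containing the word, $n_k$ is the size of class $C_k$'s text collection, and $|V|$ is the vocabulary size (this reading of the symbols is inferred from the code, so treat it as an assumption):
$$p(word \mid C_k) = \frac{n_k(word) + 1}{n_k + |V|}$$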
def nk(word,given_tag,data): if word not in voc1: return 0 else: word_in_sen=[(word in dc['s{}'.format(i)]) and (tag[i]==given_tag) for i in range(len(data))] return sum(word_in_sen) nk('find',4,twitter) def n(given_tag): if given_tag==0: return len(text_0) elif given_tag==4: return len(text_4) else: return 0 def pr(w,given_tag,data): n_k=nk(w,given_tag,data) n_n=n(given_tag) prob=(n_k+1)/(n_n+len(voc1)) return prob pr('@',0,twitter) def sentiment(new,data): tkn=word_tokenize(new) p1=p(0) q1=p(4) for w in tkn: p1=p1*pr(w,0,data) q1=q1*pr(w,4,data) if p1>q1: return "Sentence is Negative" else: return "Sentence is Positive" sentiment('I love nature',twitter) sentiment('trust',twitter)
_____no_output_____
MIT
Twitter Sentiment Analysis/Untitled2.ipynb
mmaleki/Deep-Learning-with-Tensorfow
MAT281 Applications of Mathematics in Engineering Module 04 Lab Class 06: Machine Learning Projects Instructions* Fill in your personal information (name and USM Rol) in the next cell.* The scale is 0 to 4, integer values only.* You must push your changes to your personal course repository.* As a backup, you must send a .zip file in the format `mXX_cYY_lab_apellido_nombre.zip` (lastname_firstname) to [email protected]; it must contain everything needed for every cell to run correctly, whether data, images, scripts, etc.* Grading will consider: - Solutions - Code - That Binder is properly configured. - Pressing `Kernel -> Restart Kernel and Run All Cells` must run every cell without errors.* __The deadline is the end of this class.__ __Name__: Cristóbal Loyola__Rol__: 201510008-K GapMinder
import pandas as pd import altair as alt import numpy as np from vega_datasets import data alt.themes.enable('opaque') %matplotlib inline gapminder = data.gapminder_health_income() gapminder.head()
_____no_output_____
MIT
m04_machine_learning/m04_c06_ml_workflow/m04_c06_lab.ipynb
Crhlb/mat281_repositorio
1. Exploratory analysis (1 pt) At a minimum, run a `describe` on the dataframe and produce a suitable visualization, a _scatter matrix_ of the numeric values.
desc=gapminder.describe() print(desc) pd.plotting.scatter_matrix(gapminder, alpha=0.3, figsize=(9,9), diagonal='kde')
_____no_output_____
MIT
m04_machine_learning/m04_c06_ml_workflow/m04_c06_lab.ipynb
Crhlb/mat281_repositorio
2. Preprocessing (1 pt) Apply scaling to the data before running our clustering algorithm. To do so, define the variable `X_raw` as a `numpy.array` with the values of the `gapminder` dataframe in the _income_, _health_ and _population_ columns. Then define the variable `X` as the scaled version of `X_raw`.
from sklearn.preprocessing import StandardScaler X_raw = gapminder.drop('country', axis=1).values X = StandardScaler().fit_transform(X_raw)
_____no_output_____
MIT
m04_machine_learning/m04_c06_ml_workflow/m04_c06_lab.ipynb
Crhlb/mat281_repositorio
3. Clustering (1 pt)
from sklearn.cluster import KMeans
_____no_output_____
MIT
m04_machine_learning/m04_c06_ml_workflow/m04_c06_lab.ipynb
Crhlb/mat281_repositorio
Define a `KMeans` _estimator_ with `k=3` and `random_state=42`, then fit it with `X` and add the resulting _labels_ to a new column of the `gapminder` dataframe called `cluster`. Finally, make the same plot as at the beginning, but colored by the clusters obtained.
k = 3 kmeans = KMeans(k, random_state=42) kmeans.fit(X) clusters = kmeans.labels_ gapminder['cluster']=clusters ##gapminder.head() colors_palette = {0: "orange", 1: "green", 2: "violet"} colors = [colors_palette[c] for c in list(gapminder['cluster'])] pd.plotting.scatter_matrix(gapminder, alpha=0.3, figsize=(9,9), c=colors, diagonal='kde')
_____no_output_____
MIT
m04_machine_learning/m04_c06_ml_workflow/m04_c06_lab.ipynb
Crhlb/mat281_repositorio
4. Elbow rule (1 pt) __How do we choose the best number of _clusters_?__ In this exercise we used a number of clusters equal to 3. The fit of the model always improves as the number of clusters increases, but that does not mean the number of clusters is appropriate. In fact, if we have to fit $n$ points, taking $n$ clusters clearly produces a perfect fit, but it would not reveal whether real groupings exist in the data. When the number of clusters is not known a priori, the [elbow rule](https://jarroba.com/seleccion-del-numero-optimo-clusters/) is used: it states that the most appropriate number is the one where the decreasing sum of distances from each point to its cluster "changes slope" as a function of the number of clusters. The code below covers the case of clustering on the standardized data, read directly from a specially prepared file.
elbow = pd.Series(name="inertia").rename_axis(index="k") for k in range(1, 10): kmeans = KMeans(n_clusters=k, random_state=42).fit(X) elbow.loc[k] = kmeans.inertia_ # Inertia: Sum of distances of samples to their closest cluster center elbow = elbow.reset_index() alt.Chart(elbow).mark_line(point=True).encode( x="k:O", y="inertia:Q" ).properties( height=600, width=800 )
_____no_output_____
MIT
m04_machine_learning/m04_c06_ml_workflow/m04_c06_lab.ipynb
Crhlb/mat281_repositorio
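An optional, hypothetical sketch for reading the elbow off programmatically instead of by eye; it is only a heuristic (largest second difference of the inertia curve) and assumes the `elbow` dataframe from the cell above:
import numpy as np
inertia = elbow.sort_values("k")["inertia"].to_numpy()
second_diff = np.diff(inertia, n=2)                      # curvature proxy along the inertia curve
best_k = int(elbow["k"].iloc[np.argmax(second_diff) + 1])
print("suggested number of clusters:", best_k)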
03 ImageClustering using PCA, Kmeans, DBSCAN![Assets/dbscan_graph.png](Assets/dbscan_graph.png) Learning Objectives* Explore and interpret the image dataset* Apply Intel® Extension for Scikit-learn* patches to Principal Components Analysis (PCA), Kmeans,and DBSCAN algorithms and target GPU Library Dependencies: - **pip install pillow** - **pip install seaborn** - also requires these libraries if they are not already installed: **matplotlib, numpy, pandas, sklearn** Sections- _Code:_ [Read Images](Define-image-manipulation-and-Reading-functions)- _Code:_ [Submit batch_clustering_Streamlined.py as a batch job](Submit-batch_clustering_Streamlined.py-as-a-batch-job)- _Code:_ [Read the results of the dictionary after GPU computation](Read-the-results-of-the-dictionary-after-GPU-computation)- _Code:_ [Plot Kmeans using GPU results](Plot-Kmeans)- _Code:_ [Plot DBSCAN using GPU results](Plot-DBSCAN)
from __future__ import print_function data_path = ['data'] # Notebook time start from datetime import datetime start_time = datetime.now() current_time = start_time.strftime("%H:%M:%S") print("Current Time =", current_time)
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
Define image manipulation and Reading functions
from lab.Read_Transform_Images import *
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
Actually read the images- [Back to Sections](Back_to_Sections)
resultsDict = {} #resultsDict = Read_Transform_Images(resultsDict,imagesFilenameList = imagesFilenameList) resultsDict = Read_Transform_Images(resultsDict) #resultsDict.keys()
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
Display ImageGrid Random Sampling This should give an idea of how closely or differently the various images appear. Notice that some of the collared lizard images have a much different white balance, and this will affect the clustering. For this dataset the images are clustered based on their similarity in RGB color space only.
img_arr = [] ncols = 8 imageGrid=(ncols,3) for pil in random.sample(resultsDict['list_PIL_Images'], imageGrid[0]*imageGrid[1]) : img_arr.append(np.array(pil)) #displayImageGrid(img_arr, imageGrid=imageGrid) displayImageGrid2(img_arr, ncols=ncols)
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
Review Python file prior to submission Set SYCL Device Context- [Back to Sections](Back-to-Sections)Paste this code in cell below and run it (twice) once to load, once to execute the cell:%load batch_clustering_Streamlined.py
%load batch_clustering_Streamlined.py
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
Submit batch_clustering_Streamlined.py- [Back to Sections](Back_to_Sections)batch_clustering_Streamlined.py executed with a Python* command inside a shell script - run_clustering_streamlined.sh.run_clustering_streamlined.sh is submitted as a batch job to a node which has GPUs
#!python batch_kmeans.py # works on Windows ! chmod 755 q; chmod 755 run_clustering_streamlined.sh; if [ -x "$(command -v qsub)" ]; then ./q run_clustering_streamlined.sh; else ./run_clustering_streamlined.sh; fi
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
Read Results of the Dictionary After GPU Computation- [Back to Sections](Back_to_Sections)
# read results from json file in results folder resultsDict = read_results_json() # get list_PIL_Images from Read_Transform_Images resultsDict = Read_Transform_Images(resultsDict)
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
Plot Kmeans Clusters Plot a histogram of the using GPU results- [Back to Sections](Back_to_Sections)
resultsDict.keys() #resultsDict = Compute_kmeans_db_histogram_labels(resultsDict, knee = 6, gpu_available = gpu_available) #knee = 5 counts = np.asarray(resultsDict['counts']) bins = np.asarray(resultsDict['bins']) plt.xlabel("Weight") plt.ylabel("Probability") plt.title("Histogram with Probability Plot}") slice = min(counts.shape[0], bins.shape[0]) plt.bar(bins[:slice],counts[:slice]) plt.grid() plt.show()
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
Print Kmeans Related Data as a Sanity Check
print(resultsDict['bins']) print(resultsDict['counts']) resultsDict.keys()
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
Display Similar Images Visually compare images which have been clustered together by the algorithm
clusterRank = 2 d = {i:cts for i, cts in enumerate(resultsDict['counts'])} sorted_d = sorted(d.items(), key=operator.itemgetter(1), reverse=True) id = sorted_d[clusterRank][0] indexCluster = np.where(np.asarray(resultsDict['km_labels']) == id )[0].tolist() img_arr = [] for idx in indexCluster: img_arr.append(np.array((resultsDict['list_PIL_Images'][idx]))) img_arr = np.asarray(img_arr) displayImageGrid(img_arr)
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
Plot Seaborn Kmeans Clusters Indicates the numbers of images that are close in color space
%matplotlib inline
n_components = 4
columns = ['PC{:0d}'.format(c) for c in range(n_components)]
data = pd.DataFrame(np.asarray(resultsDict['PCA_fit_transform'])[:, :n_components], columns=columns)
#k_means = resultsDict['model']
data['cluster'] = resultsDict['km_labels']
data.head()
# similarlyNamedImages = [9,6,6,8,6,4,8,3]
# print('number of similarly named images: ', similarlyNamedImages)
columns.append('cluster')
plt.figure(figsize=(8,8))
sns.set_context('notebook');
g = sns.pairplot(data[columns], hue="cluster", palette="Paired", diag_kws=dict(hue=None));
g.fig.suptitle("KMEANS pairplot");
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
Find the DBSCAN EPS parameter Density-Based Spatial Clustering of Applications with Noise (DBSCAN) finds core samples of high density and expands clusters from them. It works well for data that contains clusters of similar density. EPS: the "epsilon" value in sklearn is the maximum distance between two samples for one to be considered as in the neighborhood of the other. To get at least a first value to start from, we use kNN to find distances commonly occurring in the dataset: points closer together than this threshold distance will be considered as lying within the same cluster. This means we should look for long flat plateaus and read the y coordinate off the kNN plot to get a starting value for EPS. Different datasets can have wildly different sweet spots for EPS; some require EPS values of .001, while others work best with values of several thousand. We use this trick to get into the approximate neighborhood of the right EPS. A hedged sketch of automating this elbow estimate follows the kNN cell below.
from sklearn.neighbors import NearestNeighbors
PCA_images = resultsDict['PCA_fit_transform']
neighbors = NearestNeighbors(n_neighbors=2)
#X = StandardScaler().fit_transform(PCA_images)
neighbors_fit = neighbors.fit(PCA_images)
distances, indices = neighbors_fit.kneighbors(PCA_images)
distances = np.sort(distances, axis=0)
plt.xlabel('number of images')
plt.ylabel('distances')
plt.title('KNN Distances Plot')
plt.plot(distances[:,1])
plt.grid()
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
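Reading the plateau off the plot by eye works, but the elbow can also be estimated programmatically. The sketch below is an optional assumption on top of the course material: it uses the third-party kneed package (not imported elsewhere in this notebook) on the sorted 2-NN distances computed in the cell above.

from kneed import KneeLocator
import numpy as np

knn_dist = distances[:, 1]                  # sorted 2-NN distances from the cell above
x = np.arange(len(knn_dist))
kl = KneeLocator(x, knn_dist, curve="convex", direction="increasing")
print('suggested starting EPS:', kl.knee_y)  # compare with the value read off the plot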
Use DBSCAN to find clusters We will use the initial estimate from the KNN elbow above as an initial trial for the DBSCAN EPS value. In the plot above, there is a plateau in the y values somewhere near 350, indicating that a cluster distance (aka EPS) might work well near this value. We used this value in the batch_clustering_Streamlined.py file when computing DBSCAN; a hedged sketch of such a DBSCAN call follows the cluster-ranking cell below. **EPS:** Two points are neighbors if the distance between the two points is below this threshold. **n:** The minimum number of neighbors a given point should have in order to be classified as a core point. The point itself is included in the minimum number of samples. Below: Sort DBSCAN Results by Cluster Size
#%%write_and_run lab/compute_DBSCANClusterRank.py
# NOTE: this helper relies on db, counts, and bins being computed elsewhere
# (by the batch script / results dictionary); it is not self-contained as written.
def compute_DBSCANClusterRank(n, EPS):
    d = {index-1: int(cnt) for index, cnt in enumerate(counts)}
    sorted_d = sorted(d.items(), key=operator.itemgetter(1), reverse=True)
    for i in range(0, len(d)):
        idx = sorted_d[i][0]
        print('cluster = ', idx, ' occurs', int(sorted_d[i][1]), ' times')
    return db, counts, bins, sorted_d

n_components = 4
columns = ['PC{:0d}'.format(c) for c in range(n_components)]
data = pd.DataFrame(np.asarray(resultsDict['PCA_fit_transform'])[:, :n_components], columns=columns)
columns.append('cluster')
data['cluster'] = resultsDict['db_labels']
data.head()
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
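For reference, here is a hedged sketch of the kind of DBSCAN call that batch_clustering_Streamlined.py performs on the GPU. It is an illustration, not the file's actual code: eps=350 comes from the kNN plateau discussed above, and min_samples=4 is an assumed value for this dataset.

from sklearn.cluster import DBSCAN
import numpy as np

X = np.asarray(resultsDict['PCA_fit_transform'])
db = DBSCAN(eps=350, min_samples=4).fit(X)    # eps from the kNN plateau; min_samples is an assumption
labels = db.labels_                           # label -1 marks noise / outlier images
bins, counts = np.unique(labels, return_counts=True)
print(dict(zip(bins.tolist(), counts.tolist())))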
DBSCAN Cluster Plot Plot the DBSCAN clusters computed on the GPU- [Back to Sections](Back_to_Sections) To indicate the numbers of images in each cluster, color each point by its membership in a cluster
%matplotlib inline
sns.set_context('notebook');
g = sns.pairplot(data[columns], hue="cluster", palette="Paired", diag_kws=dict(hue=None));
g.fig.suptitle("DBSCAN pairplot");
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials
Print Filenames of Outliers
print('Outlier/problematic images are: \n', [resultsDict['imagesFilenameList'][f] for f in list(data[data['cluster'] == -1].index)] )
_____no_output_____
MIT
03_scikit-learn-intelex_Image_Cluster/03_ImageClustering.ipynb
IntelSoftware/scikit-learn_essentials