2.1 - TensorFlow implementation

For this assignment you will execute two implementations, one in TensorFlow and one in PyTorch.

Train and test datasets

**Note:**
* In the TensorFlow implementation, you will have to set the data format type to tensors, which may create ragged tensors (tensors of different lengths).
* You will have to convert the ragged tensors to normal tensors using the `to_tensor()` method, which pads the tensors and sets the dimensions to `[None, tokenizer.model_max_length]` so you can feed different size tensors into your model based on the batch size.
import tensorflow as tf

columns_to_return = ['input_ids', 'attention_mask', 'start_positions', 'end_positions']
train_ds.set_format(type='tf', columns=columns_to_return)

train_features = {x: train_ds[x].to_tensor(default_value=0, shape=[None, tokenizer.model_max_length])
                  for x in ['input_ids', 'attention_mask']}
train_labels = {"start_positions": tf.reshape(train_ds['start_positions'], shape=[-1, 1]),
                'end_positions': tf.reshape(train_ds['end_positions'], shape=[-1, 1])}

train_tfdataset = tf.data.Dataset.from_tensor_slices((train_features, train_labels)).batch(8)
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
Training

It is finally time to start training your model!
* Create a custom training function using [tf.GradientTape()](https://www.tensorflow.org/api_docs/python/tf/GradientTape)
* Target two loss functions, one for the start index and one for the end index.
* `tf.GradientTape()` records the operations performed during forward prop for automatic differentiation during backprop.
EPOCHS = 3

loss_fn1 = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_fn2 = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)

losses = []
for epoch in range(EPOCHS):
    print("Starting epoch: %d" % epoch)
    for step, (x_batch_train, y_batch_train) in enumerate(train_tfdataset):
        with tf.GradientTape() as tape:
            answer_start_scores, answer_end_scores = model(x_batch_train)
            loss_start = loss_fn1(y_batch_train['start_positions'], answer_start_scores)
            loss_end = loss_fn2(y_batch_train['end_positions'], answer_end_scores)
            loss = 0.5 * (loss_start + loss_end)
            losses.append(loss)
        grads = tape.gradient(loss, model.trainable_weights)
        opt.apply_gradients(zip(grads, model.trainable_weights))
        if step % 20 == 0:
            print("Training loss (for one batch) at step %d: %.4f" % (step, float(loss_start)))
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
Take a look at your losses and try playing around with some of the hyperparameters for better results!
from matplotlib.pyplot import plot

plot(losses)
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
You have successfully trained your model to help automatically answer questions! Try asking it a question about a story.
question, text = 'What is south of the bedroom?', 'The hallway is south of the garden. The garden is south of the bedroom.'

input_dict = tokenizer(text, question, return_tensors='tf')
outputs = model(input_dict)
start_logits = outputs[0]
end_logits = outputs[1]

all_tokens = tokenizer.convert_ids_to_tokens(input_dict["input_ids"].numpy()[0])
answer = ' '.join(all_tokens[tf.math.argmax(start_logits, 1)[0] : tf.math.argmax(end_logits, 1)[0] + 1])
print(question, answer.capitalize())
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
Congratulations! You just implemented your first QA model in TensorFlow.

2.2 PyTorch implementation

[PyTorch](https://pytorch.org/) is an open source machine learning framework developed by Facebook's AI Research lab that can be used for computer vision and natural language processing. As you can imagine, it is quite compatible with the bAbI dataset.

Train and test dataset

Go ahead and try creating a train and test dataset by importing PyTorch.
from torch.utils.data import DataLoader

columns_to_return = ['input_ids', 'attention_mask', 'start_positions', 'end_positions']
train_ds.set_format(type='pt', columns=columns_to_return)
test_ds.set_format(type='pt', columns=columns_to_return)
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
For the accuracy metrics in the PyTorch implementation, you will change things up a bit and use the [F1 score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) for the start and end indices over the entire test dataset as the evaluation metrics.
from sklearn.metrics import f1_score

def compute_metrics(pred):
    start_labels = pred.label_ids[0]
    start_preds = pred.predictions[0].argmax(-1)
    end_labels = pred.label_ids[1]
    end_preds = pred.predictions[1].argmax(-1)

    f1_start = f1_score(start_labels, start_preds, average='macro')
    f1_end = f1_score(end_labels, end_preds, average='macro')

    return {
        'f1_start': f1_start,
        'f1_end': f1_end,
    }
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
Training

Now it is time to load a pre-trained model.

**Note:** You will be using DistilBERT (`DistilBertForQuestionAnswering`) instead of TFDistilBERT for the PyTorch implementation.
del model  # We delete the tensorflow model to avoid memory issues

from transformers import DistilBertForQuestionAnswering

pytorch_model = DistilBertForQuestionAnswering.from_pretrained("model/pytorch")
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
Instead of a custom training loop, you will use the [🤗 Trainer](https://huggingface.co/transformers/main_classes/trainer.html), which contains a basic training loop and is fairly easy to implement in PyTorch.
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir='results',            # output directory
    overwrite_output_dir=True,
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=8,   # batch size per device during training
    per_device_eval_batch_size=8,    # batch size for evaluation
    warmup_steps=20,                 # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir=None,                # directory for storing logs
    logging_steps=50
)

trainer = Trainer(
    model=pytorch_model,             # the instantiated 🤗 Transformers model to be trained
    args=training_args,              # training arguments, defined above
    train_dataset=train_ds,          # training dataset
    eval_dataset=test_ds,            # evaluation dataset
    compute_metrics=compute_metrics
)

trainer.train()

trainer.evaluate(test_ds)
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
Now it is time to ask your PyTorch model a question! * Before testing your model with a question, you can tell PyTorch to send your model and inputs to the GPU if your machine has one, or the CPU if it does not. * You can then proceed to tokenize your input and create PyTorch tensors and send them to your device. * The rest of the pipeline is relatively similar to the one you implemented for TensorFlow.
import torch

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
pytorch_model.to(device)

question, text = 'What is east of the hallway?', 'The kitchen is east of the hallway. The garden is south of the bedroom.'

input_dict = tokenizer(text, question, return_tensors='pt')
input_ids = input_dict['input_ids'].to(device)
attention_mask = input_dict['attention_mask'].to(device)

outputs = pytorch_model(input_ids, attention_mask=attention_mask)
start_logits = outputs[0]
end_logits = outputs[1]

all_tokens = tokenizer.convert_ids_to_tokens(input_dict["input_ids"].numpy()[0])
answer = ' '.join(all_tokens[torch.argmax(start_logits, 1)[0] : torch.argmax(end_logits, 1)[0] + 1])
print(question, answer.capitalize())
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
Congratulations! You've completed this notebook, and can now implement Transformer models for QA tasks!

You are now able to:
* Perform extractive Question Answering
* Fine-tune a pre-trained transformer model to a custom dataset
* Implement a QA model in TensorFlow and PyTorch

What you should remember:
- Transformer models are often trained with tokenizers that split words into subwords.
- Before processing, it is important that you align the start and end indices with the tokens associated with the target answer word.
- PyTorch is a relatively lightweight and easy-to-use framework that can make rapid prototyping easier, while TensorFlow has advantages in scaling and is more widely used in production.
- `tf.GradientTape` allows you to build custom training loops in TensorFlow.
- The 🤗 `Trainer` API (used here with PyTorch) gives you a basic training loop that is compatible with 🤗 models and datasets.
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
Lambda School Data Science Module 142

Sampling, Confidence Intervals, and Hypothesis Testing

Prepare - examine other available hypothesis tests

If you had to pick a single hypothesis test for your toolbox, the t-test would probably be the best choice - but the good news is you don't have to pick just one! Here are some of the others to be aware of:
import numpy as np
from scipy.stats import chisquare  # One-way chi square test

# Chi square can take any crosstab/table and test the independence of rows/cols
# The null hypothesis is that the rows/cols are independent -> low chi square
# The alternative is that there is a dependence -> high chi square
# Be aware! Chi square does *not* tell you direction/causation

ind_obs = np.array([[1, 1], [2, 2]]).T
print(ind_obs)
print(chisquare(ind_obs, axis=None))

dep_obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T
print(dep_obs)
print(chisquare(dep_obs, axis=None))

# Distribution tests:
# We often assume that something is normal, but it can be important to *check*
# For example, later on with predictive modeling, a typical assumption is that
# residuals (prediction errors) are normal - checking is a good diagnostic
from scipy.stats import normaltest

# Poisson models arrival times and is related to the binomial (coinflip)
sample = np.random.poisson(5, 1000)
print(normaltest(sample))  # Pretty clearly not normal

# Kruskal-Wallis H-test - compare the median rank between 2+ groups
# Can be applied to ranking decisions/outcomes/recommendations
# The underlying math comes from chi-square distribution, and is best for n>5
from scipy.stats import kruskal

x1 = [1, 3, 5, 7, 9]
y1 = [2, 4, 6, 8, 10]
print(kruskal(x1, y1))  # x1 is a little better, but not "significantly" so

x2 = [1, 1, 1]
y2 = [2, 2, 2]
z = [2, 2]  # Hey, a third group, and of different size!
print(kruskal(x2, y2, z))  # x clearly dominates
KruskalResult(statistic=0.2727272727272734, pvalue=0.6015081344405895)
KruskalResult(statistic=7.0, pvalue=0.0301973834223185)
MIT
module2-sampling-confidence-intervals-and-hypothesis-testing/LS_DS_142_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb
extrajp2014/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments
And there are many more! `scipy.stats` is fairly comprehensive, though there are even more available if you delve into the extended world of statistics packages. As tests get increasingly obscure and specialized, the importance of knowing them by heart becomes small - but being able to look them up and figure them out when they *are* relevant is still important.

Live Lecture - let's explore some more of scipy.stats
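As a quick warm-up for exploring further, here is one more pair of tests from `scipy.stats` applied to made-up data (the sample values below are purely illustrative, not part of the lecture material):

import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

# Two made-up samples, e.g. scores from two groups
np.random.seed(42)
group_a = np.random.normal(loc=50, scale=5, size=30)
group_b = np.random.normal(loc=53, scale=5, size=30)

# Independent-samples t-test (assumes roughly normal data)
print(ttest_ind(group_a, group_b))

# Mann-Whitney U test - a nonparametric alternative that only uses ranks
print(mannwhitneyu(group_a, group_b))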
# Taking requests! Come to lecture with a topic or problem and we'll try it.
_____no_output_____
MIT
module2-sampling-confidence-intervals-and-hypothesis-testing/LS_DS_142_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb
extrajp2014/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments
Assignment - Build a confidence interval

A confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.

52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. "95% confidence" means a p-value $\leq 1 - 0.95 = 0.05$.

In this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.

But providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying "fail to reject the null hypothesis" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.

How is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is "if we were to repeat this experiment many times, we would expect the interval constructed this way to contain the true value ~95% of the time."

For a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/- 2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard errors (the standard deviation of the sampling distribution).

Different confidence levels (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.

Your assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):

1. Generate and numerically represent a confidence interval
2. Graphically (with a plot) represent the confidence interval
3. Interpret the confidence interval - what does it tell you about the data and its distribution?

Stretch goals:

1. Write a summary of your findings, mixing prose and math/code/results. *Note* - yes, this is by definition a political topic. It is challenging but important to keep your writing voice *neutral* and stick to the facts of the data. Data science often involves considering controversial issues, so it's important to be sensitive about them (especially if you want to publish).
2. Apply the techniques you learned today to your project data or other data of your choice, and write/discuss your findings here.
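A minimal sketch of how such an interval can be computed for a one-dimensional sample (the toy 0/1 data below is an illustrative stand-in, not the congressional voting data):

import numpy as np
from scipy import stats

def confidence_interval(data, confidence=0.95):
    """Return (mean, lower, upper) for a 1-D sample using the t distribution."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    mean = data.mean()
    # Standard error of the mean
    stderr = data.std(ddof=1) / np.sqrt(n)
    # t multiplier for the requested confidence level
    margin = stderr * stats.t.ppf((1 + confidence) / 2.0, n - 1)
    return mean, mean - margin, mean + margin

# Toy example: 1 = prefers tacos, 0 = prefers burritos
sample = np.random.binomial(1, 0.52, size=500)
print(confidence_interval(sample))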
# TODO - your code!
_____no_output_____
MIT
module2-sampling-confidence-intervals-and-hypothesis-testing/LS_DS_142_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb
extrajp2014/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments
Visit the NASA Mars news site
# Visit the Mars news site
url = 'https://redplanetscience.com/'
browser.visit(url)

# Optional delay for loading the page--wait_time
browser.is_element_present_by_css('div.list_text', wait_time=1)

# Convert the browser html to a soup object
html = browser.html
news_soup = soup(html, 'html.parser')

# display the current title content
slide_elem = news_soup.select_one('div.list_text')
slide_elem

# Use the parent element to find the first a tag and save it as `news_title`
news_title = news_soup.find('div', class_='content_title').text
print(news_title)

# Use the parent element to find the paragraph text
news_parent = news_soup.find('div', class_='article_teaser_body').text
print(news_parent)
NASA-JPL's coverage of the Mars InSight landing earns one of the two wins, making this the NASA center's second Emmy.
ADSL
Instructions/.ipynb_checkpoints/Mission_to_Mars-Starter-checkpoint.ipynb
jcus/web-scraping-challenge
JPL Space Images Featured Image
# Visit URL
url = 'https://spaceimages-mars.com'
browser.visit(url)

# Find and click the full image button
image_url = browser.find_by_tag('button')[1]
image_url.click()

# Parse the resulting html with soup
# get HTML object
html = browser.html
# use beautiful soup parser on HTML
image_soup = soup(html, 'html.parser')
image_soup
print(image_soup.prettify())

# find the relative image url ##thumbimg
img_url_rel = image_soup.find('img', class_='fancybox-image').get('src')
img_url_rel

# Use the base url to create an absolute url
img_url = f'https://spaceimages-mars.com/{img_url_rel}'
img_url
_____no_output_____
ADSL
Instructions/.ipynb_checkpoints/Mission_to_Mars-Starter-checkpoint.ipynb
jcus/web-scraping-challenge
Mars Facts
# Use `pd.read_html` to pull the data from the Mars-Earth Comparison section
# hint use index 0 to find the table
df = pd.read_html('https://galaxyfacts-mars.com/')[0]
df.head()

df.to_html()

df

df.to_html()
_____no_output_____
ADSL
Instructions/.ipynb_checkpoints/Mission_to_Mars-Starter-checkpoint.ipynb
jcus/web-scraping-challenge
Hemispheres
url = 'https://marshemispheres.com/'
browser.visit(url)

# Create a list to hold the images and titles.
hemisphere_image_urls = []

# Get a list of all of the hemispheres
links = browser.find_by_css('a.product-item img')

# Next, loop through those links, click the link, find the sample anchor, return the href
for i in range(len(links)):
    # We have to find the elements on each loop to avoid a stale element exception
    # Next, we find the Sample image anchor tag and extract the href
    # Get Hemisphere title
    # Append hemisphere object to list
    # Finally, we navigate backwards
    browser.back()

hemisphere_image_urls

browser.quit()
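The loop body above is intentionally left as an exercise in this starter notebook. One possible completion is sketched below; the CSS selectors ('a.product-item img', the 'Sample' link text, and 'h2.title') are assumptions about the page layout, not confirmed from the source.

# Possible completion of the scraping loop (selectors are assumptions)
for i in range(len(links)):
    hemisphere = {}

    # Re-find and click the hemisphere thumbnail to avoid a stale element exception
    browser.find_by_css('a.product-item img')[i].click()

    # Find the Sample image anchor tag and extract the href
    sample_elem = browser.links.find_by_text('Sample').first
    hemisphere['img_url'] = sample_elem['href']

    # Get the hemisphere title
    hemisphere['title'] = browser.find_by_css('h2.title').text

    # Append the hemisphere object to the list
    hemisphere_image_urls.append(hemisphere)

    # Navigate backwards
    browser.back()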
_____no_output_____
ADSL
Instructions/.ipynb_checkpoints/Mission_to_Mars-Starter-checkpoint.ipynb
jcus/web-scraping-challenge
Wind statistics

Fig. 3 from:

> B. Moore-Maley and S. E. Allen: Wind-driven upwelling and surface nutrient delivery in a semi-enclosed coastal sea, Ocean Sci., 2022.

Description:

Hourly wind observations and HRDPS results for the 2015-2019 period at Sentry Shoal, Sisters Islet, Halibut Bank and Sand Heads.

***
import numpy as np
import xarray as xr
import requests
from pandas import read_csv
from datetime import datetime
from io import BytesIO
from xml.etree import cElementTree as ElementTree
from matplotlib import pyplot as plt
from windrose import WindroseAxes
from tqdm.notebook import tqdm

%matplotlib inline
plt.rcParams['font.size'] = 12
_____no_output_____
Apache-2.0
notebooks/windstatistics.ipynb
SalishSeaCast/SoG_upwelling_EOF_paper
*** Functions for loading data
def load_HRDPS(HRDPS, j, i, timerange): """Load HRDPS model results from salishsea.eos.ubc.ca/erddap """ # Extract velocities from ERDDAP and calculate wspd, wdir tslc = slice(*timerange) u, v = [HRDPS.sel(time=tslc)[var][:, j, i].values for var in ('u_wind', 'v_wind')] time = HRDPS.sel(time=tslc).time.values.astype('datetime64[s]').astype(datetime) wspd = np.sqrt(u**2 + v**2) wdir = np.rad2deg(np.arctan2(v, u)) # Transform wdir to degrees FROM, CW from N wdir = 270 - wdir wdir[wdir < 0] = wdir[wdir < 0] + 360 return time, wspd, wdir def load_EC(ID, year, month): """Load EC met station data from climate.weather.gc.ca """ # Submit query url = 'http://climate.weather.gc.ca/climate_data/bulk_data_e.html' query = { 'timeframe': 1, 'stationID': ID, 'format': 'xml', 'time': 'UTC', 'Year': year, 'Month': month, 'Day': 1, } response = requests.get(url, params=query) tree = ElementTree.parse(BytesIO(response.content)) root = tree.getroot() # Extract data time, wspd, wdir, values = [], [], [], {} for record in root.findall('stationdata'): for var in ('windspd', 'winddir'): try: values[var] = float(record.find(var).text) except: break else: dateargs = [int(record.get(field)) for field in ('year', 'month', 'day', 'hour')] time.append(datetime(*dateargs)) wspd.append(values['windspd']) wdir.append(values['winddir']) # Convert lists to arrays time = np.array(time) wspd = np.array(wspd) / 3.6 # km/h to m/s wdir = np.array(wdir) * 10 # tens of degrees to degrees return time, wspd, wdir def load_DFO(ID): """Load DFO wave buoy data from www.meds-sdmm.dfo-mpo.gc.ca """ # Extract data from csv link fn = f'https://www.meds-sdmm.dfo-mpo.gc.ca/alphapro/wave/waveshare/csvData/c{ID}_csv.zip' df = read_csv(fn, index_col='DATE', parse_dates=True) time = df.index.values.astype('datetime64[s]').astype(datetime) wspd, wdir = [df[var].values for var in ('WSPD', 'WDIR')] return time, wspd, wdir
_____no_output_____
Apache-2.0
notebooks/windstatistics.ipynb
SalishSeaCast/SoG_upwelling_EOF_paper
*** Load data

Definitions
# Station attributes
stations = {
    'Sentry Shoal' : {'ID': 46131, 'ji': (183, 107)},
    'Sisters Islet': {'ID': 6813,  'ji': (160, 120)},
    'Halibut Bank' : {'ID': 46146, 'ji': (149, 141)},
    'Sand Heads'   : {'ID': 6831,  'ji': (135, 151)},
}

# Assign HRDPS as netcdf object from erddap
HRDPS = xr.open_dataset('https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSaSurfaceAtmosphereFieldsV1')
_____no_output_____
Apache-2.0
notebooks/windstatistics.ipynb
SalishSeaCast/SoG_upwelling_EOF_paper
Load data (~15 minutes)
# Initialize lists keys, variables = ('obs', 'HRDPS'), ('time', 'wspd', 'wdir') data = {station: {key: {var: [] for var in variables} for key in keys} for station in stations} # Load DFO data and truncate to the 2015-2019 range timerange = [datetime(2015, 1, 1), datetime(2020, 1, 1)] for station in ['Halibut Bank', 'Sentry Shoal']: time, wspd, wdir = load_DFO(stations[station]['ID']) index = np.array([timerange[0] <= t < timerange[1] for t in time]) for var, values in zip(variables, [time, wspd, wdir]): data[station]['obs'][var] = values[index] # Loop through years for year in tqdm(range(2015, 2020)): # Time range timerange = [datetime(year, 1, 1), datetime(year, 12, 31, 23, 59)] # Loop through stations for station in stations: # Load HRDPS from erddap (whole year) time, wspd, wdir = load_HRDPS(HRDPS, *stations[station]['ji'], timerange) for var, values in zip(variables, [time, wspd, wdir]): data[station]['HRDPS'][var].append(values) # Load EC data (month by month) if station in ['Sand Heads', 'Sisters Islet']: for month in range(1, 13): time, wspd, wdir = load_EC(stations[station]['ID'], year, month) for var, values in zip(variables, [time, wspd, wdir]): data[station]['obs'][var].append(values) # Concatenate for station in stations: for key in keys: for var in variables: data[station][key][var] = np.hstack(data[station][key][var])
_____no_output_____
Apache-2.0
notebooks/windstatistics.ipynb
SalishSeaCast/SoG_upwelling_EOF_paper
*** Plot windroses
# Make figure subplot_kw, gridspec_kw = {'axes_class': WindroseAxes}, {'wspace': 0.15, 'hspace': 0.15} fig, axs = plt.subplots(4, 4, figsize=(12, 14), subplot_kw=subplot_kw, gridspec_kw=gridspec_kw) # Loop through stations and seasons keylist, seasonlist = np.meshgrid(keys, ['Oct-Mar', 'Apr-Sep']) for row, key, season in zip(axs, keylist.ravel(), seasonlist.ravel()): for ax, station in zip(row, stations): # Plot wind data tindex = np.array([3 < t.month < 10 for t in data[station][key]['time']]) if season == 'Oct-Mar': tindex = ~tindex wspd, wdir = [data[station][key][var][tindex] for var in ('wspd', 'wdir')] ax.bar( wdir, wspd, bins=range(0, 11, 2), normed=True, nsector=18, opening=0.8, edgecolor='k', cmap=plt.get_cmap('YlGn'), ) # Formatting axis ax.set_ylim([0, 30]) ax.yaxis.set_ticks([10, 20, 30]) ax.yaxis.set_ticklabels('') ax.xaxis.set_ticklabels('') if key == 'obs': ax.set_title(station, fontsize=12, y=1.1) if station == 'Sentry Shoal': ax.xaxis.set_ticklabels(['E', 'NE', 'N', 'NW', 'W', 'SW', 'S', 'SE']) ax.yaxis.set_ticklabels([0.1, 0.2, 0.3]) ax.text(2.05, 1.25, season, transform=ax.transAxes, fontdict={'weight': 'bold'}) else: pos = ax.get_position() ax.set_position([pos.x0, pos.y0+0.02, pos.width, pos.height]) if station == 'Sentry Shoal': ax.text(-0.2, 0.15, key, transform=ax.transAxes, fontdict={'style': 'italic', 'weight': 'bold'}) # Add legend and panel labels # (manually get legend handles, since WindroseAxes.bar returns None) handles, labels = ax.get_children()[:6], ['0-2', '2-4', '4-6', '6-8', '8-10', '> 10'] fig.legend(handles=handles, labels=labels, title='m/s', ncol=6, frameon=False, loc=8, bbox_to_anchor=(0.5, 0.1)) for k, ax in enumerate(axs.ravel()): ax.text(0, 1, f'({chr(97+k)})', transform=ax.transAxes)
_____no_output_____
Apache-2.0
notebooks/windstatistics.ipynb
SalishSeaCast/SoG_upwelling_EOF_paper
Getting an Overview of Regular 3D Data

In this notebook, we're going to talk a little bit about how you might get an overview of regularized 3D data, specifically using matplotlib.

In a subsequent notebook we'll address the next few steps, specifically how you might use tools like ipyvolume and yt.

To start with, let's generate some fake data! (Now, I say 'fake,' but that's a bit pejorative, isn't it? Data is data! Ours is just synthetic.)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import scipy.special
_____no_output_____
MIT
Session12/Day3/NotebookIII_part2_overview_regular_3d.ipynb
lmwalkowicz/LSSTC-DSFP-Sessions
We'll use the scipy [spherical harmonics](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.sph_harm.html) function to make some data, but first we need a reference coordinate system. We'll start with $x, y, z$ and then transform them into spherical coordinates.**Note**: we'll be using the convention that $\theta \in [0, \pi]$ and $\phi \in[0,2\pi)$, which is reverse from what SciPy expects. So if you compare to the docstring for sph_harm, keep that in mind. Feel free to switch the definitions if you like!
N = 64

x = np.mgrid[-1.0:1.0:N*1j][:, None, None]
y = np.mgrid[-1.0:1.0:N*1j][None, :, None]
z = np.mgrid[-1.0:1.0:N*1j][None, None, :]

r = np.sqrt(x*x + y*y + z*z)
theta = np.arctan2(np.sqrt(x*x + y*y), z)
phi = np.arctan2(y, x)

np.abs(x - r * np.sin(theta)*np.cos(phi)).max()
np.abs(y - r * np.sin(theta)*np.sin(phi)).max()
np.abs(z - r * np.cos(theta)).max()

data = {}
for n in [1, 4]:
    for m in range(n + 1):
        data[f"sph_n{n}_m{m}"] = np.absolute(scipy.special.sph_harm(m, n, phi, theta))
_____no_output_____
MIT
Session12/Day3/NotebookIII_part2_overview_regular_3d.ipynb
lmwalkowicz/LSSTC-DSFP-Sessions
Now we have some data! And, we can use matplotlib to visualize it in *reduced* form. Let's try this out:
plt.imshow(data["sph_n4_m4"][:,:,N//4], norm=LogNorm()) plt.colorbar() phi.min(), phi.max() plt.imshow(data["sph_n1_m0"].max(axis=0), norm=LogNorm()) plt.colorbar()
_____no_output_____
MIT
Session12/Day3/NotebookIII_part2_overview_regular_3d.ipynb
lmwalkowicz/LSSTC-DSFP-Sessions
This is getting a bit cumbersome, though! Let's try using the [`ipywidgets`](https://ipywidgets.readthedocs.org) library to speed this up just a bit.We're going to use the `ipywidgets.interact` decorator around our function to add some inputs. This is a pretty powerful decorator, as it sets up new widgets based on the info that you feed it, and then re-executes the function every time those inputs change.
import ipywidgets

@ipywidgets.interact(dataset = list(sorted(data.keys())), slice_position = (0, N, 1))
def make_plots(dataset, slice_position):
    plt.imshow(data[dataset][slice_position, :, :], norm=LogNorm())
    plt.colorbar()
_____no_output_____
MIT
Session12/Day3/NotebookIII_part2_overview_regular_3d.ipynb
lmwalkowicz/LSSTC-DSFP-Sessions
We still have some artifacts here we want to get rid of; let's see if we can restrict our colorbar a bit.
print(min(_.min() for _ in data.values()), max(_.max() for _ in data.values()))
_____no_output_____
MIT
Session12/Day3/NotebookIII_part2_overview_regular_3d.ipynb
lmwalkowicz/LSSTC-DSFP-Sessions
Typically in these cases, the more interesting values are the ones at the top -- the bottom are usually falling off rather quickly to zero. So let's set our maximum, and then drop 5 orders of magnitude for the minimum. I'm changing the colorbar's "extend" value to reflect this.
@ipywidgets.interact(dataset = list(sorted(data.keys())), slice_position = (0, N, 1))
def make_plots(dataset, slice_position):
    plt.imshow(data[dataset][slice_position, :, :], norm=LogNorm(vmin=1e-5, vmax=1.0))
    plt.colorbar(extend = 'min')
_____no_output_____
MIT
Session12/Day3/NotebookIII_part2_overview_regular_3d.ipynb
lmwalkowicz/LSSTC-DSFP-Sessions
We're going to do one more thing for getting an overview, and then we'll see if we can do some other, cooler things with it using plotly.We're going to change our `slice_position` to be in units of actual coordinates, instead of integers, and we'll add on a multiplot so we can see all three at once.
@ipywidgets.interact(dataset = list(sorted(data.keys())), x = (-1.0, 1.0, 2.0/N), y = (-1.0, 1.0, 2.0/N), z = (-1.0, 1.0, 2.0/N)) def make_plots(dataset, x, y, z): xi, yi, zi = (int(_*N + 1.0) for _ in (x, y, z)) fig, axes = plt.subplots(nrows=2, ncols=2, dpi = 200) datax = data[dataset][xi,:,:] datay = data[dataset][:,yi,:] dataz = data[dataset][:,:,zi] vmax = max(_.max() for _ in (datax, datay, dataz)) vmin = max( min(_.min() for _ in (datax, datay, dataz)), vmax / 1e5) imx = axes[0][0].imshow(datax, norm=LogNorm(vmin=vmin, vmax=vmax), extent = [-1.0, 1.0, -1.0, 1.0]) imy = axes[0][1].imshow(datay, norm=LogNorm(vmin=vmin, vmax=vmax), extent = [-1.0, 1.0, -1.0, 1.0]) imz = axes[1][0].imshow(dataz, norm=LogNorm(vmin=vmin, vmax=vmax), extent = [-1.0, 1.0, -1.0, 1.0]) fig.delaxes(axes[1][1]) fig.colorbar(imx, ax=axes, extend = 'min', fraction = 0.1) import plotly.graph_objects as go plt.hist(data["sph_n4_m3"].flatten()) iso_data=go.Isosurface( x=(x * np.ones((N,N,N))).flatten(), y=(y * np.ones((N,N,N))).flatten(), z=(z * np.ones((N,N,N))).flatten(), value=data["sph_n4_m3"].flatten(), isomin=0, isomax=data["sph_n4_m3"].max(), surface_count=5, # number of isosurfaces, 2 by default: only min and max colorbar_nticks=5, # colorbar ticks correspond to isosurface values caps=dict(x_show=False, y_show=False)) fig = go.Figure(data = iso_data) fig
_____no_output_____
MIT
Session12/Day3/NotebookIII_part2_overview_regular_3d.ipynb
lmwalkowicz/LSSTC-DSFP-Sessions
One thing I've run into with plotly while making this notebook has been that in many cases, the 3D plots strain a bit under large data sizes. This is to be expected, and is completely understandable! One of the really nice things about regular mesh data like this is that you can usually cut it down quite effectively with slices. Unfortunately, what I have found -- and I may have done something completely wrong! -- is that plotly sometimes appears to almost work, and then doesn't quite make it when I throw too much data at it. I've found that it seems to work best in the neighborhood of $64^3$ zones, maybe a bit more.

Other Summary Techniques

There are, of course, other ways you can take a look at a set of values! Given a regular mesh, it's straightforward with numpy to apply any of the reduction operations along one of the axes. For instance, you might take the min, the max, the sum, the mean and so forth. If we do this with our spherical harmonics data:
plt.imshow(data["sph_n4_m3"].sum(axis=0), extent=[-1.0, 1.0, -1.0, 1.0])
_____no_output_____
MIT
Session12/Day3/NotebookIII_part2_overview_regular_3d.ipynb
lmwalkowicz/LSSTC-DSFP-Sessions
* Creating one graph per group
machine_number = len(data.index.values)
print(machine_number)

color = ['indianred', 'darkolivegreen', 'steelblue', 'saddlebrown']
init_list = [0, 15, 30, 45]

for i in range(1, 5):  # 1 to 4
    target = 15 * i
    if init_list[i-1] < machine_number:
        fig, axs = plt.subplots(figsize=[15, 10])
        x = machines[init_list[i-1]:target]
        y = data["PORCENTAJE"].iloc[init_list[i-1]:target]
        bars = axs.bar(x, y, color=color[i-1], alpha=1, linewidth=0.2)
        axs.set_xlabel('Máquinas')
        axs.set_ylabel('% de Scrap')
        axs.set_title('Porcentaje de scrap por máquina')
        axs.bar_label(bars, label_type="edge", fmt='%0.1f')
        plt.savefig('scrap per machine_' + str(init_list[i-1]))
    else:
        continue
_____no_output_____
MIT
notebooks/5_scrap_per_machine.ipynb
SanTaroZ/production-_analysis
Read the parsed and split data:
import os

execfile("../script/utils.py")
eventsPath = os.environ["YAHOO_DATA"]

splitedRdd = sc.textFile(eventsPath + "/splitedData")
splitedRdd = splitedRdd.map(parseContextData2)

a = splitedRdd.take(1)
len(a[0][1][0]) + len(a[0][1][1])  # 80% training and 20% test data already separated

a

number = 5
splitedRdd.filter(lambda row: len(row[1][1]) >= number).count()
_____no_output_____
BSD-2-Clause
notebooks/ParseEventData.ipynb
rubattino/apprecsys
Part 6: Build an Encrypted, Decentralized Database

In the last section (Part 5), we learned about the basic tools PySyft supports for encrypted computation. In this section, we're going to give one example of how to use those tools to build an encrypted, decentralized database.

Encrypted

The database will be encrypted because BOTH the values in the database will be encrypted AND all queries to the database will be encrypted.

Decentralized

The database will be decentralized because, using SMPC, all values will be "shared" amongst a variety of owners, meaning that all owners must agree to allow a query to be performed. It has no central "owner".

The Schema:

While we could construct a variety of database types, for this first tutorial we're going to focus on a simple key-value store, where both the keys and values are strings.

Authors:
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
import syft as sy

hook = sy.TorchHook()

bob = sy.VirtualWorker(id="bob")
alice = sy.VirtualWorker(id="alice")
bill = sy.VirtualWorker(id="bill")
_____no_output_____
Apache-2.0
examples/tutorials/Part 6 - Build an Encrypted, Decentralized Database.ipynb
andreas-hjortgaard/PySyft
Section 1: Constructing a Key SystemIn this section, we're going to show how to use the equality operation to build a simple key system. The only tricky part about this is that we need to choose the datatype we want to use for keys. The most common usecase is probably strings, so that's what we're going to use here.Now, one thing you'll notice about our SMPC techniques, they all use exclusively numbers. Thus, we now have an issue. We need to decide how to encode our strings into numbers so that we can query them efficiently as "keys". The fastest way would be to map every possible key to a unique hash (integer) and then key based on that. Let's use that approach.
# Note that sy.mpc.securenn.field is the max value that we can encode using SMPC by default
# This is, however, somewhat configurable in the system.
def string2key(input_str):
    return sy.LongTensor([hash(input_str) % sy.mpc.securenn.field])

string2key("hello")

string2key("world")
_____no_output_____
Apache-2.0
examples/tutorials/Part 6 - Build an Encrypted, Decentralized Database.ipynb
andreas-hjortgaard/PySyft
Section 2: Constructing a Value Storage SystemNow, we are able to convert our string "keys" to integers which we can use for our database, but now we need to figure out how to encode the values in our database using numbers as well. For this, we're going to simply encode each string as a list of numbers like so.
import string

string.punctuation

import string

char2int = {}
int2char = {}
for i, c in enumerate(' ' + string.ascii_letters + '0123456789' + string.punctuation):
    char2int[c] = i
    int2char[i] = c

def string2values(input_str):
    values = list()
    for char in input_str:
        values.append(char2int[char])
    return sy.LongTensor(values)

def values2string(input_values):
    s = ""
    for v in input_values:
        s += int2char[int(v)]
    return s

vs = string2values("hello world")
vs

values2string(vs)
_____no_output_____
Apache-2.0
examples/tutorials/Part 6 - Build an Encrypted, Decentralized Database.ipynb
andreas-hjortgaard/PySyft
Section 3: Creating the Tensor Based Key-Value StoreNow for our next operation, we want to write some logic which will allow us to query this database using ONLY addition, multiplication, and comparison operations. For this we will use a simple strategy. The database will be a list of integer keys and a list of integer arrays (values).
keys = list()
values = list()
_____no_output_____
Apache-2.0
examples/tutorials/Part 6 - Build an Encrypted, Decentralized Database.ipynb
andreas-hjortgaard/PySyft
To add a value to the database, we'll just add its key and value to the lists.
def add_entry(string_key, string_value):
    keys.append(string2key(string_key))
    values.append(string2values(string_value))

add_entry("Bob", "(123) 456-7890")
add_entry("Bill", "(234) 567-8901")
add_entry("Sue", "(345) 678-9012")

keys

values
_____no_output_____
Apache-2.0
examples/tutorials/Part 6 - Build an Encrypted, Decentralized Database.ipynb
andreas-hjortgaard/PySyft
Section 4: Querying the Key->Value Store

Our query will proceed in four steps:
- 1) Check for equality between the query key and every key in the database - returning a 1 or 0 for each row. We'll call each row's result its "key_match" integer.
- 2) Multiply each row's "key_match" integer by all the values in its corresponding row. This will zero out all rows in the database which don't have matching keys.
- 3) Sum all the masked rows in the database together.
- 4) Return the result.
# this is our query
query = "Bob"

# convert our query to a hash
qhash = string2key(query)
qhash[0]

# see if our query matches any key
key_match = list()
for key in keys:
    key_match.append((key == qhash).long())
key_match

# Multiply each row's value by its corresponding keymatch
value_match = list()
for i, value in enumerate(values):
    value_match.append(key_match[i].expand(value.shape) * value)

# sum the values together
final_value = value_match[0]
for v in value_match[1:]:
    final_value = final_value + v

# Decypher final value
values2string(final_value)
_____no_output_____
Apache-2.0
examples/tutorials/Part 6 - Build an Encrypted, Decentralized Database.ipynb
andreas-hjortgaard/PySyft
Section 5: Putting It TogetherHere's what this logic looks like when put together in a simple database class.
import string char2int = {} int2char = {} for i, c in enumerate(' ' + string.ascii_letters + '0123456789' + string.punctuation): char2int[c] = i int2char[i] = c def string2key(input_str): return sy.LongTensor([hash(input_str) % sy.mpc.securenn.field]) def string2values(input_str): values = list() for char in input_str: values.append(char2int[char]) return sy.LongTensor(values) def values2string(input_values): s = "" for v in input_values: s += int2char[int(v)] return s class TensorDB: def __init__(self): self.keys = list() self.values = list() def add_entry(self, string_key, string_value): self.keys.append(string2key(string_key)) self.values.append(string2values(string_value)) def query(self, str_query): # hash the query string qhash = string2key(str_query) # see if our query matches any key key_match = list() for key in self.keys: key_match.append((key == qhash).long()) # Multiply each row's value by its corresponding keymatch value_match = list() for i,value in enumerate(self.values): value_match.append(key_match[i].expand(value.shape) * value) # sum the values together final_value = value_match[0] for v in value_match[1:]: final_value = final_value + v # Decypher final value return values2string(final_value) db = TensorDB() db.add_entry("Bob","(123) 456-7890") db.add_entry("Bill", "(234) 567-8901") db.add_entry("Sue","(345) 678-9012") db.query("hey") db.query("Bob") db.query("Bill") db.query("Sue")
_____no_output_____
Apache-2.0
examples/tutorials/Part 6 - Build an Encrypted, Decentralized Database.ipynb
andreas-hjortgaard/PySyft
Section 6: Building an Encrypted, Decentralized DatabaseNow, the interesting thing here is that we have not used a single operation other than addition, multiplication, and comparison (equality). Thus, we can trivially create an encrypted database by simply encrypting all of our keys and values!
import string char2int = {} int2char = {} for i, c in enumerate(' ' + string.ascii_letters + '0123456789' + string.punctuation): char2int[c] = i int2char[i] = c def string2key(input_str): return sy.LongTensor([(hash(input_str)+1234) % int(sy.mpc.securenn.field)]) def string2values(input_str): values = list() for char in input_str: values.append(char2int[char]) return sy.LongTensor(values) def values2string(input_values): s = "" for v in input_values: if(int(v) in int2char): s += int2char[int(v)] else: s += "." return s class DecentralizedDB: def __init__(self, *owners): self.owners = owners self.keys = list() self.values = list() def add_entry(self, string_key, string_value): key = string2key(string_key).share(*self.owners) value = string2values(string_value).share(*self.owners) self.keys.append(key) self.values.append(value) def query(self, str_query): # hash the query string qhash = sy.LongTensor([string2key(str_query)]) qhash = qhash.share(*self.owners) # see if our query matches any key key_match = list() for key in self.keys: key_match.append((key == qhash)) # Multiply each row's value by its corresponding keymatch value_match = list() for i, value in enumerate(self.values): shape = list(value.get_shape()) km = key_match[i] expanded_key = km.expand(1,shape[0])[0] value_match.append(expanded_key * value) # sum the values together final_value = value_match[0] for v in value_match[1:]: final_value = final_value + v result = values2string(final_value.get()) # there is a certain element of randomness # which can cause the database to return empty # so if this happens, just try again if(list(set(result))[0] == '.'): return self.query(str_query) # Decypher final value return result db = DecentralizedDB(bob, alice) db.add_entry("Bob","(123) 456-7890") db.add_entry("Bill", "(234) 567-8901") db.add_entry("Sam","(345) 678-9012") db.query("Bob") db.query("Bill") db.query("Sam")
_____no_output_____
Apache-2.0
examples/tutorials/Part 6 - Build an Encrypted, Decentralized Database.ipynb
andreas-hjortgaard/PySyft
Success!!!

And there you have it! We now have a key-value store capable of storing arbitrary strings and values in an encrypted, decentralized state such that even the queries are also private/encrypted.

Section 7: Increasing Performance

Strategy 1: One-hot Encoded Keys

As it turns out, comparisons (like ==) can be very expensive to compute, which makes the query take a long time. Thus, we also have another option. We can encode our strings using one-hot encodings. This allows us to exclusively use multiplication for our database query, like so.

Strategy 2: Fixed Length Values

By using fixed length values, we can encode the whole database as a single tensor, which lets us use the underlying hardware to work a bit faster.
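To see why one-hot keys (Strategy 1) let equality be computed with multiplication alone, here is a tiny plain-NumPy illustration; NumPy is used only for this demo, while the actual implementation below works on PySyft tensors:

import numpy as np

def one_hot_np(index, length):
    v = np.zeros(length, dtype=int)
    v[index] = 1
    return v

a = one_hot_np(3, 10)
b = one_hot_np(3, 10)
c = one_hot_np(7, 10)

# The elementwise product summed (a dot product) is 1 iff the two
# one-hot vectors encode the same index - no comparison operator needed
print((a * b).sum())  # 1 -> equal
print((a * c).sum())  # 0 -> not equal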
import string char2int = {} int2char = {} for i, c in enumerate(' ' + string.ascii_lowercase + '0123456789' + string.punctuation): char2int[c] = i int2char[i] = c def one_hot(index, length): vect = sy.zeros(length).long() vect[index] = 1 return vect def string2one_hot_matrix(str_input, max_len=8): # truncate strings longer than max_len str_input = str_input[:max_len].lower() # pad strings shorter than max_len if(len(str_input) < max_len): str_input = str_input + "." * (max_len - len(str_input)) char_vectors = list() for char in str_input: char_vectors.append(one_hot(char2int[char],len(int2char)).unsqueeze(0)) return sy.cat(char_vectors,dim=0) def string2values(str_input, max_len=128): # truncate strings longer than max_len str_input = str_input[:max_len].lower() # pad strings shorter than max_len if(len(str_input) < max_len): str_input = str_input + "." * (max_len - len(str_input)) values = list() for char in str_input: values.append(char2int[char]) return sy.LongTensor(values) one_hots = string2one_hot_matrix("hey") class DecentralizedDB: def __init__(self, *owners, max_key_len=8, max_value_len=256): self.max_key_len = max_key_len self.max_value_len = max_value_len self.owners = owners self.keys = list() self.values = list() def add_entry(self, string_key, string_value): key = string2one_hot_matrix(string_key, self.max_key_len).share(*self.owners) value = string2values(string_value, self.max_value_len).share(*self.owners) self.keys.append(key) self.values.append(value) def query(self,query_str): query = string2one_hot_matrix(query_str, self.max_key_len).send(*self.owners) # see if our query matches any key # note: this is the slowest part of the program # it could probably be greatly faster with minimal improvements key_match = list() for key in self.keys: vect = (key * query).sum(1) x = vect[0] for i in range(vect.get_shape()[0]): x = x * vect[i] key_match.append(x) # Multiply each row's value by its corresponding keymatch value_match = list() for i, value in enumerate(self.values): shape = list(value.get_shape()) km = key_match[i] expanded_key = km.expand(1,shape[0])[0] value_match.append(expanded_key * value) # NOTE: everything before this line could (in theory) happen in full parallel # on different threads. # sum the values together final_value = value_match[0] for v in value_match[1:]: final_value = final_value + v result = values2string(final_value.get()) return result.replace(".","") db = DecentralizedDB(bob, alice, bill, max_key_len=3) db.add_entry("Bob","(123) 456 7890") db.add_entry("Bill", "(234) 567 8901") db.add_entry("Sam","(345) 678 9012") db.add_entry("Key","really big json value") db.query("Bob") db.query("Bill") db.query("Sam") db.query("Not a Person") db.query("Key")
_____no_output_____
Apache-2.0
examples/tutorials/Part 6 - Build an Encrypted, Decentralized Database.ipynb
andreas-hjortgaard/PySyft
Success!!

And there we have it - a marginally more performant version. We could further improve performance by running the query on all the rows in parallel, but we'll leave that for someone else to work on :).

Note: we can add as many owners to the database as we want! (Although the more owners you have, the slower queries will be.)
import syft as sy

hook = sy.TorchHook()

bob = sy.VirtualWorker(id="bob")
alice = sy.VirtualWorker(id="alice")
bill = sy.VirtualWorker(id="bill")
sue = sy.VirtualWorker(id="sue")
tara = sy.VirtualWorker(id="tara")

db = DecentralizedDB(bob, alice, bill, sue, tara, max_key_len=3)
db.add_entry("Bob", "(123) 456 7890")
db.add_entry("Bill", "(234) 567 8901")
db.add_entry("Sam", "(345) 678 9012")
db.add_entry("Key", "really big json value")

db.query("Bob")
_____no_output_____
Apache-2.0
examples/tutorials/Part 6 - Build an Encrypted, Decentralized Database.ipynb
andreas-hjortgaard/PySyft
_by Max Schröder$^{1,2}$ and Frank Krüger$^1$_

$^1$ Institute of Communications Engineering, University of Rostock, Rostock
$^2$ University Library, University of Rostock, Rostock

**Abstract**: This introduction to the Python programming language is based on [this NLP course program](https://github.com/stefanluedtke/NLP-Exercises) as well as [this tutorial on audio signal processing](https://github.com/spatialaudio/selected-topics-in-audio-signal-processing-exercises).

Basics

Below is a Python program containing a lot of the operations you will typically need: assignments, arithmetics, logical operators, printing, comments. As you see, Python is quite easy to read.

**Task 1:** Figure out the meaning of each line yourself.
x = 34 - 23   # A comment.
y = "Hello"   # Another one.
z = 3.45
if z == 3.45 or y == "Hello":
    x = x + 1
    y = y + " World"
print(x)
print(y)
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
Indentation

Python handles blocks in a different way than other programming languages you might know, like Java or C: The first line with less indentation is outside of the block, the first line with more indentation starts a nested block. A colon often starts a new block. For example, in the code below, the fourth line is always executed, because it is not part of the block:
if 17 < 16:
    print("executed conditionally")
    print("also conditionally")
print("always executed, because not part of the block above")
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
Reference Semantics

Assignments behave as you might know from Java: For atomic data types, assignments work "by value", for all other data types (e.g. lists), assignments work "by reference": If we manipulate an object, this influences all references.
a = 17
b = a          # assign the *value* of a to b
a = 12
print(b)       # still 17, because assignment by value

x = [1, 2, 3]  # this is what lists look like
y = x          # assign a reference to the list to y
x.append(4)    # manipulate the list by adding a value
print(y)       # y also changed, because of assignment by reference
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
Lists

Lists are written in square brackets, as you have seen above. Lists can contain values of mixed types. List indices start with 0, as you can see here:
li = [17,"Hello",4.1,"Bar",5,6] li[2]
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
You can also use negative indices, which means that we start counting from the right:
li[-2]
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
You can also select subsets of lists ("slicing"), like this:
li[-4:-2]
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
Note that slicing returns a copy of the sub-list, as the quick check below shows.

Some more list operators

Here are some more operators you might find useful.
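First, a quick check of that copy behaviour (plain Python, using the list defined above):

li = [17, "Hello", 4.1, "Bar", 5, 6]
sub = li[1:3]   # slicing creates a new list
sub[0] = "Bye"  # modifying the copy...
print(li)       # ...does not change the original: [17, 'Hello', 4.1, 'Bar', 5, 6]
print(sub)      # ['Bye', 4.1]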
# Boolean test whether a value is in a list: the in operator
t = [1, 2, 3, 4]
2 in t

# Concatenate lists: the + operator
a = [1, 2, 3, 4]
b = [5, 6, 7]
c = a + b
c

# Repeat a list n times: the * operator
a = [1, 2, 3]
3 * a

# Append an element to a list
a = [1, 2, 3]
a.append(4)

# Index of first occurrence
a.index(2)

# Number of occurrences
a = [1, 2, 3, 2, 1, 2]
a.count(2)

# Remove first occurrence
a.remove(2)

# Reverse the list
a.reverse()

# Sort the list
a.sort()
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
Dictionaries: A mapping type

Dictionaries are known as maps in other languages: They store a mapping between a set of keys and a set of values. Below is an example on how to use dictionaries:
# Create a new dictionary
d = {'user': 'bozo', 'pswd': 1234}

# Access the values via a key
d['user']
d['pswd']

# Add key-value pairs
d['id'] = 17

# List of keys
d.keys()

# List of values
d.values()
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
Functions

Functions in Python work as you would expect: Arguments to functions are passed by assignment, that means passed arguments are assigned to local names. Assignments to arguments cannot affect the caller, but changing mutable arguments might. Here is an example of defining and calling a function:
def myfun(x, y):
    print("The function is executed.")
    y[0] = 8  # This changes the list that y points to
    return(y[1] + x)

mylist = [1, 2, 3]
result = myfun(17, mylist)
print("Function returned: ", result)
print("List is now: ", mylist)
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
Optional Arguments

We can define defaults for arguments that do not need to be passed:
def func(a, b, c=10, d=100):
    print(a, b, c, d)

func(1, 2)
func(1, 2, 3, 4)
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
Some more facts about functions:
* All functions in Python have a return value; functions without a return statement have the special return value None.
* There is no function overloading in Python.
* Functions can be used as any other data type: They can be arguments to functions, return values of functions, assigned to variables, etc. This means Python is a functional programming language, and we can do many of the things you know and love from Haskell, like higher-order functions!

Control of Flow

We have already seen if-branches above. For- and while-loops also work exactly as you would expect. Here are just some examples:
x = 3
while x < 10:
    if x > 7:
        x += 2
        continue
    x = x + 1
    print("Still in the loop.")
    if x == 8:
        break
print("Outside of the loop.")

for x in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]:
    if x > 7:
        x += 2
        continue
    x = x + 1
    print("Still in the loop.")
    if x == 8:
        break
print("Outside of the loop.")
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
**Task 2:** Implement a function that tests whether a given number is prime.
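One possible answer (a straightforward trial-division sketch; the function name `is_prime` matches the one used in the test loop below):

def is_prime(n):
    """Return True if n is a prime number, False otherwise."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True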
for i in range(-1, 15):
    print('%i is %s' % (i, is_prime(i)))
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
List Comprehensions

There is a special syntax for list comprehensions (which you might know from Haskell).
# List of all multiples of 3 that are <100:
evens = [x for x in range(3, 100) if x % 3 == 0]
print(evens)
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
**Task 3:** Use a list comprehension to make a list of all primes < 1000 (one possible answer is sketched below).

Importing Modules/Packages

In order to work with numeric arrays, we import the [NumPy](http://www.numpy.org) package. NumPy is a very popular Python package that allows you to work more easily with numeric arrays. It is the basis for much of what you will see in this course.
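A possible answer to Task 3, reusing the `is_prime` sketch from above:

primes = [n for n in range(2, 1000) if is_prime(n)]
print(len(primes))  # 168 primes below 1000
print(primes[:10])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]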
import numpy as np
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
Now we can use all NumPy functions (by prefixing "`np.`").
np.zeros(10000)
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
Tab Completion

**Task 4:** Type "`np.ze`" (without the quotes) and then hit the *Tab* key ...

Getting Help

If you want to know details about the usage of `np.zeros()` and all its supported arguments, have a look at its help text. Just append a question mark to the function name (without parentheses!):
np.arange?
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
A help window should open in the lower part of the browser window.This window can be closed by hitting the *q* key (like "quit").You can also get help for the whole NumPy package:
np?
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
You can get help for any object by appending (or prepending) a question mark to the name of the object.Let's check what the help system can tell us about our variable `a`:
a?
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
Useful Jupyter Notebook Commands

In addition to general Python programming, there are some very useful Jupyter Notebook specific commands starting with '%'.

Let's first look at the variables we have already defined, including their type and current value, with the following command:
%whos
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
Other commands for timing Python expressions might also be helpful, e.g. `%time` and `%timeit`. Use `%%time` in the first line of a cell to time the entire cell, and `%time` in front of a statement to time a single line.
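A quick example of these magics in action (the expressions being timed are arbitrary):

# Time a single statement once
%time sum(range(1_000_000))

# Time the same statement repeatedly for a more stable estimate
%timeit sum(range(1_000_000))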
%time?
_____no_output_____
CC-BY-4.0
02 Python Introduction.ipynb
m6121/Jupyter-Workshop
Query and explore data included in WALIS

This notebook contains scripts that allow querying and extracting data from the "World Atlas of Last Interglacial Shorelines" (WALIS) database. The notebook calls scripts contained in the /scripts folder. After downloading the database (internet connection required), field headers are renamed, and field values are substituted, following 1:n or n:n relationships. The tables composing the database are then saved in CSV, XLSX (multi-sheet), and geoJSON formats. The notebook also contains some plotting functions.

Dependencies and packages

This notebook calls various scripts that are included in the \scripts folder. The following is a list of the Python libraries needed to run this notebook.
#Main packages import pandas as pd import pandas.io.sql as psql import geopandas import pygeos import numpy as np import mysql.connector from datetime import date import xlsxwriter as writer import math from scipy import optimize from scipy import stats #Plots import seaborn as sns import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable #Jupyter data display import tqdm from tqdm.notebook import tqdm_notebook from IPython.display import * import ipywidgets as widgets from ipywidgets import * #Geographic from shapely.geometry import Point from shapely.geometry import box import cartopy as ccrs import cartopy.feature as cfeature #System import os import glob import shutil #pandas options for debugging pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) #Set a date string for exported file names date=date.today() dt_string = date.strftime("_%d_%m_%Y") # Ignore warnings import warnings warnings.simplefilter(action='ignore', category=FutureWarning) warnings.filterwarnings('ignore')
_____no_output_____
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Import databaseConnect to the online MySQL database containing WALIS data and download data into a series of pandas data frames.
## Connect to the WALIS database server %run -i scripts/connection.py ## Import data tables and show progress bar with tqdm_notebook(total=len(SQLtables),desc='Importing tables from WALIS') as pbar: for i in range(len(SQLtables)): query = "SELECT * FROM {}".format(SQLtables[i]) walis_dict[i] = psql.read_sql(query, con=db) query2 = "SHOW FULL COLUMNS FROM {}".format(SQLtables[i]) walis_cols[i] = psql.read_sql(query2, con=db) pbar.update(1) %run -i scripts/create_outfolder.py
_____no_output_____
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Query the databaseNow, the data is ready to be queried according to user input. There are two ways to extract data of interest from WALIS. Run either one and proceed. 1. [Select by author](Query-option-1---Select-by-author) 2. [Select by geographic coordinates](Query-option-2---Select-by-geographic-extent) Query option 1 - Select by authorThis option compiles data from multiple users who collaborated to create regional datasets for the WALIS Special Issue in ESSD. Select "WALIS Admin" in the dropdown menu if you want to extract the entire database.**NOTE: If you want to change users, just re-run this cell and select a different set of values**
%run -i scripts/select_user.py multiUsr
_____no_output_____
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Once the selection is done, run the following cell to query the database and extract only the data inserted by the selected user(s).
%run -i scripts/multi_author_query.py
Extracting values for: WALIS Admin The database you are exporting contains: 4006 RSL datapoints from stratigraphy 463 RSL datapoints from single corals 76 RSL datapoints from single speleothems 30 RSL indicators 19 Elevation measurement techniques 11 Geographic positioning techniques 28 Sea level datums 2717 U-Series ages (including RSL datapoints from corals and speleothems) 583 Amino Acid Racemization samples 213 Electron Spin Resonance ages 597 Luminescence ages 120 Chronostratigraphic constraints 160 Other age constraints 2130 References
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Query option 2 - Select by geographic extentThis option allows the download of data by geographic extent, defined as maximum-minimum bounds on Latitude and Longitude. Use this website to quickly find bounding coordinates: http://bboxfinder.com.
# bounding box coordinates in decimal degrees (x=Lon, y=Lat) xmin=-69.292145 xmax=-68.616486 ymin=12.009771 ymax=12.435235 # Curacao: -69.292145,12.009771,-68.616486,12.435235 #2.103882,39.219487,3.630981,39.993956 # From the dictionary in connection.py, extract the dataframes %run -i scripts/geoextent_query.py
Extracting values for the coordinates you specified The database you are exporting contains: 11 RSL datapoints from stratigraphy 15 RSL datapoints from single corals 0 RSL datapoints from single speleothems 2 RSL indicators 3 Elevation measurement techniques 3 Geographic positioning techniques 4 Sea level datums 30 U-Series ages 0 Amino Acid Racemization samples 24 Electron Spin Resonance ages 0 Luminescence ages 0 Chronostratigraphic constraints 0 Other age constraints 11 References
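The actual selection is handled by scripts/geoextent_query.py; purely as an illustration of the idea, a bounding-box filter on a hypothetical pandas dataframe `df` with `Latitude` and `Longitude` columns in decimal degrees (names assumed, not the exact WALIS headers) could look like:
# keep only rows whose coordinates fall inside the bounding box defined above
in_bbox = df['Longitude'].between(xmin, xmax) & df['Latitude'].between(ymin, ymax)
df_selection = df[in_bbox]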
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Substitute data codes The following code joins the data tables, substituting numerical or comma-separated codes with the corresponding text values.**WARNING - MODIFICATIONS TO THE ORIGINAL DATA**The following adjustments to the data are made:1. If there is an age in ka but the uncertainty field is empty, the age uncertainty is set to 30%. 2. If the "timing constraint" is missing, the "MIS limit" is taken. If still empty, it is set to "Equal to".
%run -i scripts/substitutions.py %run -i scripts/make_summary.py
We are substituting values in your dataframes.... querying by user Putting nice names to the database columns.... Done!! making summary table.... Done!
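Purely as an illustration of those two adjustments (the column names 'Age (ka)', 'Age uncertainty', 'Timing constraint', and 'MIS limit' are placeholders rather than the exact WALIS field names, and the 30% is interpreted here as a fraction of the age):
# 1. missing age uncertainty -> 30% of the age
missing_unc = df['Age uncertainty'].isnull() & df['Age (ka)'].notnull()
df.loc[missing_unc, 'Age uncertainty'] = 0.3 * df.loc[missing_unc, 'Age (ka)']

# 2. missing timing constraint -> fall back to the MIS limit, then to "Equal to"
df['Timing constraint'] = (df['Timing constraint']
                           .fillna(df['MIS limit'])
                           .fillna('Equal to'))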
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Write outputThe following scripts save the data in XLSX, CSV, and geoJSON formats (the geoJSON files are for use in GIS software).
%run -i scripts/write_spreadsheets.py %run -i scripts/write_geojson.py print ('Done!')
Your file will be created in /Users/alessiorovere/Dropbox/Mac/Documents/GitHub/WALIS/Code/Output/Data/ Done!
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Explore queried data through graphsThe following scripts produce a series of images representing different aspects of the data included in the database. Each graph is saved in the "Output/Images" folder in SVG format. The following graphs can be plotted:1. [Monthly data insertion/update](Monthly-data-insertion/update)2. [References by year of publication](References-by-year-of-publication)3. [Elevation errors](Elevation-errors)4. [Sea level index points](Sea-level-index-points)5. [Elevation and positioning histograms](Elevation-and-positioning-histograms)6. [Quality plots](Quality-plots)7. [Maps](Maps)8. [Radiometric ages distribution](Radiometric-ages-distribution) Monthly data insertion/updateThis graph explores the timeline of data insertion or update in WALIS since its inception. Peaks in this graph correspond to data updated in bulk by the admin.
%run -i scripts/Database_contributions.py
_____no_output_____
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
References by year of publicationThis graph shows the year of publication of the manuscripts included in the WALIS "References" table. Note that these might not all be used in further data compilations.
References_query=References_query[References_query['Year'] != 0] #to eliminate works that are marked as "in prep" from the graph %run -i scripts/References_hist.py
_____no_output_____
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Elevation errorsThese two graphs show the measured elevation errors (plotted as Kernel Density Estimates) reported for sea-level data within WALIS. These include "RSL from stratigraphy" data points and single corals or speleothems indicating former RSL positions. The difference between the two plots lies in the treatment of outliers. Points with elevation uncertainties higher than 3.5 times the median absolute deviation are excluded from the graph on the left, while all points are considered in the graph on the right. The outlier exclusion is based on this reference:>Boris Iglewicz and David Hoaglin (1993), "Volume 16: How to Detect and Handle Outliers", The ASQC Basic References in Quality Control: Statistical Techniques, Edward F. Mykytka, Ph.D., Editor.And was derived from this link: https://stackoverflow.com/questions/11882393/matplotlib-disregard-outliers-when-plotting
%run -i scripts/Elevation_error.py
_____no_output_____
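A minimal, stand-alone sketch of that outlier rule (a modified z-score based on the median absolute deviation, following Iglewicz & Hoaglin), independent of the plotting script and assuming a non-zero MAD:
import numpy as np

def not_outlier(values, thresh=3.5):
    # boolean mask of points whose modified z-score stays below the threshold
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median))          # median absolute deviation
    modified_z = 0.6745 * (values - median) / mad     # Iglewicz & Hoaglin (1993)
    return np.abs(modified_z) < thresh

# e.g. keep only the non-outlying elevation errors before plotting the left-hand KDE
# filtered_errors = errors[not_outlier(errors)]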
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Sea level index points This graph shows the frequency of sea-level indicators within the query, grouped by indicator type.
%run -i scripts/SL_Ind_Hist.py
_____no_output_____
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Elevation and positioning histogramsThese graphs show the distributions of the elevation metadata (Elevation measurement technique and sea-level datum) used to describe sea-level datapoints in WALIS.
%run -i scripts/Vrt_meas_hist.py %run -i scripts/SL_datum_hist.py
_____no_output_____
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Quality plotsThe RSL datapoints from stratigraphy contain two "data quality" fields, one for age and one for RSL information. Database compilers scored each site following standard guidelines (as per database documentation). This plot shows these quality scores plotted against each other. As the quality scores of one area can be better appreciated by comparison with other areas, tools to compare two nations or two regions are given. Overall quality of selected area
%run -i scripts/Quality_plot.py
_____no_output_____
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Compare two nations
%run -i scripts/select_nation_quality.py box %run -i scripts/Quality_nations.py
_____no_output_____
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Compare two regions
%run -i scripts/select_region_quality.py box %run -i scripts/Quality_regions.py
_____no_output_____
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
MapsIn this section, the data is organized in a series of maps. Some styling choices are available.
%run -i scripts/select_map_options.py %run -i scripts/Static_maps.py
_____no_output_____
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Global map of RSL datapoints. The following cell works only if the previous one is run choosing "RSL Datapoints" as Map Choice.
%run -i scripts/global_maps.py
_____no_output_____
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Radiometric ages distributionThe code below plots the distribution of radiometric ages within the query. The data is run through a Monte-Carlo sampling of the Gaussian distribution of each radiometric age, and kernel density estimate (KDE) plots are derived.
#Insert age limits to be plotted min_age=0 max_age=150 %run -i scripts/age_kde.py
_____no_output_____
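Conceptually, the script does something along these lines (a simplified sketch; `ages` and `age_errors` are assumed arrays of mean ages and 1-sigma uncertainties in ka, which is not necessarily the exact variable naming used in scripts/age_kde.py):
# draw many samples from each age's Gaussian, pool them, and plot a KDE
n_draws = 1000
samples = np.concatenate([
    np.random.normal(age, err, n_draws)
    for age, err in zip(ages, age_errors)
])
samples = samples[(samples >= min_age) & (samples <= max_age)]  # clip to the plotted window
sns.kdeplot(samples)
plt.xlabel('Age (ka)')
plt.show()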
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Create ZIP archiveCreate a ZIP archive of the entire "Output" folder.
shutil.make_archive('Output', 'zip', Output_path)
_____no_output_____
MIT
Code/Query_and_Explore_data.ipynb
Alerovere/WALIS
Predicting Heart DiseaseThis dataset contains 76 features, but all published experiments refer to using a subset of 14 of them. The "goal" feature refers to the presence of heart disease in the patient. It is integer valued from 0 (no presence) to 4, distinguishing degrees of presence (values 1, 2, 3, 4) from absence (value 0). It is therefore a multiclass classification problem.*For our example, we will use several more features than the traditional 14.*Feature info (attributes used): 1. feature 3 (age) - Age in years2. feature 4 (sex) - male or female3. feature 9 (cp) - chest pain type (typical angina, atypical angina, non-anginal, asymptomatic)4. feature 10 (trestbps) - resting blood pressure (mm Hg)5. feature 12 (chol) - cholesterol (mg/dl)6. feature 14 (cigperday) - number of cigarettes per day7. feature 16 (fbs) - fasting blood sugar > 120 mg/dl (1 = true; 0 = false) 8. feature 18 (famhist) - family history of heart disease (1 = true; 0 = false)9. feature 19 (restecg) - resting electrocardiographic results (normal; st-t = having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV); vent = showing probable or definite left ventricular hypertrophy by Estes' criteria)10. feature 32 (thalach) - maximum heart rate achieved11. feature 38 (exang) - exercise induced angina (1 = yes; 0 = no)12. feature 40 (oldpeak) - ST depression induced by exercise relative to rest13. feature 41 (slope) - the slope of the peak exercise ST segment (upsloping, flat, downsloping)14. feature 44 (ca) - number of major vessels (0-3) colored by fluoroscopy15. feature 51 (thal) - normal, fixed defect, or reversable defect16. feature 58 (target) (the predicted attribute) - 0: < 50% diameter narrowing - 1+: > 50% diameter narrowing Our focus in using this dataset will be exploring pre-processing methods more thoroughly. More details can be found at [the UCI repository](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). AcknowledgmentsThe authors of the dataset have requested that any use of the data include the names of the principal investigator responsible for the data collection at each institution. They would be: 1. Hungarian Institute of Cardiology. Budapest: Andras Janosi, M.D. 2. University Hospital, Zurich, Switzerland: William Steinbrunn, M.D. 3. University Hospital, Basel, Switzerland: Matthias Pfisterer, M.D. 4. V.A. Medical Center, Long Beach and Cleveland Clinic Foundation: Robert Detrano, M.D., Ph.D. Loading the data from CSVWe can read the data directly from the CSV located in the [data/](data/) directory. The [raw data](data/heart-disease-raw.csv) was pre-processed to re-name categorical features where they are otherwise ordinal variables. This allows us to walk through an entire pre-processing pipeline.
import pandas as pd import numpy as np from functions import cls as packt_classes # read the raw csv X = pd.read_csv('data/heart-disease-2.csv', header=None) # rename the columns cols = ['age', 'sex', 'cp', 'trestbps', 'chol', 'cigperday', 'fbs', 'famhist', 'restecg', 'thalach', 'exang', 'oldpeak', 'slope', 'ca', 'thal', 'target'] X.columns = cols y = X.pop('target') # don't want target in the X matrix X.head()
_____no_output_____
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Pre-split: any major imbalance?If there are any categorical features with rare factor levels that need to be considered before splitting, we'll find out here.
def examine_cats(frame): for catcol in frame.columns[frame.dtypes == 'object'].tolist(): print(catcol) print(frame[catcol].value_counts()) print("") examine_cats(X)
sex male 206 female 97 Name: sex, dtype: int64 cp asymptomatic 144 non-anginal 86 atypical anginal 50 typical anginal 23 Name: cp, dtype: int64 restecg normal 151 vent 148 st-t 4 Name: restecg, dtype: int64 slope upsloping 142 flat 140 downsloping 21 Name: slope, dtype: int64 thal normal 166 reversable 117 fixed 18 Name: thal, dtype: int64
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Perform train/test splitRemember, we always need to split! We will also stratify on the '`restecg`' variable since it's the most likely to be poorly split.
from sklearn.model_selection import train_test_split seed = 42 X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=seed, stratify=X['restecg']) print("Train size: %i" % X_train.shape[0]) print("Test size: %i" % X_test.shape[0]) X_train.head() examine_cats(X_train)
sex male 153 female 74 Name: sex, dtype: int64 cp asymptomatic 105 non-anginal 66 atypical anginal 37 typical anginal 19 Name: cp, dtype: int64 restecg normal 113 vent 111 st-t 3 Name: restecg, dtype: int64 slope flat 110 upsloping 102 downsloping 15 Name: slope, dtype: int64 thal normal 121 reversable 92 fixed 12 Name: thal, dtype: int64
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Custom TransformersThere are several custom transformers that will be useful for this data:- Custom one-hot encoding that drops one level to avoid the [dummy variable trap](http://www.algosome.com/articles/dummy-variable-trap-regression.html)- Model-based imputation of continuous variables, since mean/median imputation is rudimentary Custom base classWe'll start with a custom base class that requires the input to be a pandas DataFrame. This base class will provide super methods for validating the input type as well as the presence of any prescribed columns.
from sklearn.base import BaseEstimator, TransformerMixin from sklearn.utils.validation import check_is_fitted class CustomPandasTransformer(BaseEstimator, TransformerMixin): def _validate_input(self, X): if not isinstance(X, pd.DataFrame): raise TypeError("X must be a DataFrame, but got type=%s" % type(X)) return X @staticmethod def _validate_columns(X, cols): scols = set(X.columns) # set for O(1) lookup if not all(c in scols for c in cols): raise ValueError("all columns must be present in X")
_____no_output_____
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Explanation of LabelEncoder
from sklearn.preprocessing import LabelEncoder labels = ['banana', 'apple', 'orange', 'apple', 'orange'] le = LabelEncoder() le.fit(labels) le.transform(labels)
_____no_output_____
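The learned mapping is alphabetical and can be inverted, which is worth keeping in mind when reading label-encoded columns later on (a small check using the encoder fitted above):
print(le.classes_)                       # ['apple' 'banana' 'orange']
print(le.inverse_transform([1, 0, 2]))   # back to ['banana' 'apple' 'orange']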
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
One-hot encode categorical dataIt is probably (hopefully) obvious why we need to handle data that is in string format. There is not much we can do numerically with data that resembles the following: [flat, upsloping, downsloping, ..., flat, flat, downsloping] There is a natural procedure to coerce string data into numbers: map each unique string to a unique integer level (0, 1, 2, ...). This is, in fact, exactly what the sklearn [`LabelEncoder`](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html) does. However, this is not sufficient for modeling purposes, since most algorithms will treat this as [ordinal data](https://en.wikipedia.org/wiki/Ordinal_data), where in many cases it is not. Imagine you fit a regression on data you've label-encoded, and one feature (type of chest pain, for instance) is now: [0, 2, 3, ..., 1, 0] You might get coefficients back that make no sense since "asymptomatic" or "non-anginal", etc., are not inherently numerically greater or less than one another. Therefore, we [*one-hot encode*](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) our categorical data into a numerical representation. Now we have dummy variables: one binary feature for each variable/factor-level combination.
from sklearn.preprocessing import OneHotEncoder, LabelEncoder class DummyEncoder(CustomPandasTransformer): """A custom one-hot encoding class that handles previously unseen levels and automatically drops one level from each categorical feature to avoid the dummy variable trap. Parameters ---------- columns : list The list of columns that should be dummied sep : str or unicode, optional (default='_') The string separator between the categorical feature name and the level name. drop_one_level : bool, optional (default=True) Whether to drop one level for each categorical variable. This helps avoid the dummy variable trap. tmp_nan_rep : str or unicode, optional (default="N/A") Each categorical variable adds a level for missing values so test data that is missing data will not break the encoder """ def __init__(self, columns, sep='_', drop_one_level=True, tmp_nan_rep='N/A'): self.columns = columns self.sep = sep self.drop_one_level = drop_one_level self.tmp_nan_rep = tmp_nan_rep def fit(self, X, y=None): # validate the input, and get a copy of it X = self._validate_input(X).copy() # load class attributes into local scope tmp_nan = self.tmp_nan_rep # validate all the columns present cols = self.columns self._validate_columns(X, cols) # begin fit # for each column, fit a label encoder lab_encoders = {} for col in cols: vec = [tmp_nan if pd.isnull(v) else v for v in X[col].tolist()] # if the tmp_nan value is not present in vec, make sure it is # so the transform won't break down svec = list(set(vec)) if tmp_nan not in svec: svec.append(tmp_nan) le = LabelEncoder() lab_encoders[col] = le.fit(svec) # transform the column, re-assign X[col] = le.transform(vec) # fit a single OHE on the transformed columns - but we need to ensure # the N/A tmp_nan vals make it into the OHE or it will break down later. 
# this is a hack - add a row of all transformed nan levels ohe_set = X[cols] ohe_nan_row = {c: lab_encoders[c].transform([tmp_nan])[0] for c in cols} ohe_set = ohe_set.append(ohe_nan_row, ignore_index=True) ohe = OneHotEncoder(sparse=False).fit(ohe_set) # assign fit params self.ohe_ = ohe self.le_ = lab_encoders self.cols_ = cols return self def transform(self, X): check_is_fitted(self, 'ohe_') X = self._validate_input(X).copy() # fit params that we need ohe = self.ohe_ lenc = self.le_ cols = self.cols_ tmp_nan = self.tmp_nan_rep sep = self.sep drop = self.drop_one_level # validate the cols and the new X self._validate_columns(X, cols) col_order = [] drops = [] for col in cols: # get the vec from X, transform its nans if present vec = [tmp_nan if pd.isnull(v) else v for v in X[col].tolist()] le = lenc[col] vec_trans = le.transform(vec) # str -> int X[col] = vec_trans # get the column names (levels) so we can predict the # order of the output cols le_clz = le.classes_.tolist() classes = ["%s%s%s" % (col, sep, clz) for clz in le_clz] col_order.extend(classes) # if we want to drop one, just drop the last if drop and len(le_clz) > 1: drops.append(classes[-1]) # now we can get the transformed OHE ohe_trans = pd.DataFrame.from_records(data=ohe.transform(X[cols]), columns=col_order) # set the index to be equal to X's for a smooth concat ohe_trans.index = X.index # if we're dropping one level, do so now if drops: ohe_trans = ohe_trans.drop(drops, axis=1) # drop the original columns from X X = X.drop(cols, axis=1) # concat the new columns X = pd.concat([X, ohe_trans], axis=1) return X de = DummyEncoder(columns=['sex', 'cp', 'restecg', 'slope', 'thal']) de.fit(X_train) X_train_dummied = de.transform(X_train) X_train_dummied.head()
_____no_output_____
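For comparison, the drop-one-level idea is the same trick as pandas' built-in `get_dummies` with `drop_first=True`, shown here on a single column purely as an illustration (unlike the custom class above, it does not remember the fitted levels, so unseen categories at transform time would produce mismatched columns):
pd.get_dummies(X_train[['slope']], drop_first=True).head()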
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
ImputationWe can either use the built-in scikit-learn `Imputer`, which imputes with a simple statistic such as the mean or median, or we can build a model. Statistic-based imputation
from sklearn.preprocessing import Imputer imputer = Imputer(strategy='median') imputer.fit(X_train_dummied) imputer.transform(X_train_dummied)[:5]
_____no_output_____
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Model-based imputationAs discussed in the iris notebook, there are many pitfalls to using the mean or median for imputation. When our data is too large to examine all features graphically, we often cannot discern whether all features are normally distributed (a prerequisite for imputing with the mean). If we want to get more sophisticated, we can use an approach for imputation that is based on a model; we will use a [`BaggingRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingRegressor.html) (since we are filling in NaN continuous variables only at this point). Note that there are other common approaches for this, like KNN imputation, but nearest-neighbors models require your data to be scaled, which we're trying to avoid. Beware: Sometimes missing data is informative. For instance, failure to report `cigperday` could reflect a bias on the part of the patient, who may not want to receive judgment or a lecture, or it could indicate 0.
from sklearn.ensemble import BaggingRegressor from sklearn.externals import six class BaggedRegressorImputer(CustomPandasTransformer): """Fit bagged regressor models for each of the impute columns in order to impute the missing values. Parameters ---------- impute_cols : list The columns to impute base_estimator : object or None, optional (default=None) The base estimator to fit on random subsets of the dataset. If None, then the base estimator is a decision tree. n_estimators : int, optional (default=10) The number of base estimators in the ensemble. max_samples : int or float, optional (default=1.0) The number of samples to draw from X to train each base estimator. - If int, then draw `max_samples` samples. - If float, then draw `max_samples * X.shape[0]` samples. max_features : int or float, optional (default=1.0) The number of features to draw from X to train each base estimator. - If int, then draw `max_features` features. - If float, then draw `max_features * X.shape[1]` features. bootstrap : boolean, optional (default=True) Whether samples are drawn with replacement. bootstrap_features : boolean, optional (default=False) Whether features are drawn with replacement. n_jobs : int, optional (default=1) The number of jobs to run in parallel for both `fit` and `predict`. If -1, then the number of jobs is set to the number of cores. random_state : int, RandomState instance or None, optional (default=None) If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by `np.random`. verbose : int, optional (default=0) Controls the verbosity of the building process. """ def __init__(self, impute_cols, base_estimator=None, n_estimators=10, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, n_jobs=1, random_state=None, verbose=0): self.impute_cols = impute_cols self.base_estimator = base_estimator self.n_estimators = n_estimators self.max_samples = max_samples self.max_features = max_features self.bootstrap = bootstrap self.bootstrap_features = bootstrap_features self.n_jobs = n_jobs self.random_state = random_state self.verbose = verbose def fit(self, X, y=None): # validate that the input is a dataframe X = self._validate_input(X) # don't need a copy this time # validate the columns exist in the dataframe cols = self.impute_cols self._validate_columns(X, cols) # this dictionary will hold the models regressors = {} # this dictionary maps the impute column name(s) to the vecs targets = {c: X[c] for c in cols} # drop off the columns we'll be imputing as targets X = X.drop(cols, axis=1) # these should all be filled in (no NaN) # iterate the column names and the target columns for k, target in six.iteritems(targets): # split X row-wise into train/test where test is the missing # rows in the target test_mask = pd.isnull(target) train = X.loc[~test_mask] train_y = target[~test_mask] # fit the regressor regressors[k] = BaggingRegressor( base_estimator=self.base_estimator, n_estimators=self.n_estimators, max_samples=self.max_samples, max_features=self.max_features, bootstrap=self.bootstrap, bootstrap_features=self.bootstrap_features, n_jobs=self.n_jobs, random_state=self.random_state, verbose=self.verbose, oob_score=False, warm_start=False).fit(train, train_y) # assign fit params self.regressors_ = regressors return self def transform(self, X): check_is_fitted(self, 'regressors_') X = self._validate_input(X).copy() # need a copy 
cols = self.impute_cols self._validate_columns(X, cols) # fill in the missing models = self.regressors_ for k, model in six.iteritems(models): target = X[k] # split X row-wise into train/test where test is the missing # rows in the target test_mask = pd.isnull(target) # if there's nothing missing in the test set for this feature, skip if test_mask.sum() == 0: continue test = X.loc[test_mask].drop(cols, axis=1) # drop impute cols # generate predictions preds = model.predict(test) # impute! X.loc[test_mask, k] = preds return X bagged_imputer = BaggedRegressorImputer(impute_cols=['cigperday', 'ca'], random_state=seed) bagged_imputer.fit(X_train_dummied) # save the masks so we can look at them afterwards ca_nan_mask = pd.isnull(X_train_dummied.ca) cpd_nan_mask = pd.isnull(X_train_dummied.cigperday) # impute X_train_imputed = bagged_imputer.transform(X_train_dummied) X_train_imputed.head() X_train_imputed[ca_nan_mask].ca X_train_imputed[cpd_nan_mask].cigperday X_train_imputed.isnull().sum().sum()
_____no_output_____
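If we suspected the missingness itself carried signal (as with the `cigperday` caveat above), one option — not applied in this notebook — would be to keep an explicit indicator column alongside the imputed values, e.g. this hypothetical sketch reusing the mask saved before imputation:
# flag rows whose cigarette count was originally missing
X_with_flag = X_train_imputed.assign(cigperday_was_missing=cpd_nan_mask.astype(int))
X_with_flag['cigperday_was_missing'].sum()   # number of imputed rows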
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Feature selection/dimensionality reductionOftentimes, when there is very high-dimensional data (100s or 1000s of features), it's useful to apply feature selection or dimensionality reduction techniques to create simpler models that can be understood by analysts. A common one is [principal components analysis](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html), but one of its drawbacks is diminished model clarity.
from sklearn.decomposition import PCA from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(X_train_imputed) # fit PCA, get explained variance of ALL features pca_all = PCA(n_components=None) pca_all.fit(scaler.transform(X_train_imputed)) explained_var = np.cumsum(pca_all.explained_variance_ratio_) explained_var from matplotlib import pyplot as plt %matplotlib inline x_axis = np.arange(X_train_imputed.shape[1]) + 1 plt.plot(x_axis, explained_var) # At which point to cut off? minexp = np.where(explained_var > 0.9)[0][0] plt.axvline(x=minexp, linestyle='dashed', color='red', alpha=0.5) plt.xticks(x_axis) plt.show() print("Cumulative explained variance at %i components: %.5f" % (minexp, explained_var[minexp]))
_____no_output_____
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
At 15 (of 25) features, we finally explain >90% cumulative variance in our components. This is not a significant enough feature reduction to warrant use of PCA, so we'll skip it. Setup our CV
from sklearn.model_selection import StratifiedKFold # set up our CV cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=seed)
_____no_output_____
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Examine folds
folds = cv.split(X_train, y_train) for i, fold in enumerate(folds): tr, te = fold print("Fold %i:" % i) print("Training sample indices:\n%r" % tr) print("Testing sample indices:\n%r" % te) print("\n")
Fold 0: Training sample indices: array([ 1, 3, 4, 5, 6, 8, 9, 10, 12, 13, 15, 17, 18, 21, 23, 24, 25, 26, 27, 28, 30, 31, 32, 34, 35, 36, 37, 38, 39, 41, 42, 43, 44, 45, 47, 49, 51, 53, 54, 55, 56, 58, 59, 62, 65, 66, 69, 70, 71, 73, 74, 75, 79, 81, 82, 85, 87, 89, 90, 93, 94, 95, 96, 97, 99, 100, 102, 103, 105, 106, 107, 108, 110, 111, 114, 115, 116, 117, 118, 119, 121, 123, 125, 127, 129, 131, 132, 134, 135, 136, 139, 140, 143, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 156, 157, 158, 160, 162, 163, 166, 169, 170, 171, 172, 173, 174, 177, 180, 181, 183, 184, 185, 186, 187, 188, 189, 190, 191, 193, 194, 198, 199, 200, 201, 203, 204, 205, 207, 209, 210, 211, 213, 215, 216, 218, 219, 220, 221, 224, 226]) Testing sample indices: array([ 0, 2, 7, 11, 14, 16, 19, 20, 22, 29, 33, 40, 46, 48, 50, 52, 57, 60, 61, 63, 64, 67, 68, 72, 76, 77, 78, 80, 83, 84, 86, 88, 91, 92, 98, 101, 104, 109, 112, 113, 120, 122, 124, 126, 128, 130, 133, 137, 138, 141, 142, 144, 155, 159, 161, 164, 165, 167, 168, 175, 176, 178, 179, 182, 192, 195, 196, 197, 202, 206, 208, 212, 214, 217, 222, 223, 225]) Fold 1: Training sample indices: array([ 0, 1, 2, 3, 7, 11, 14, 16, 19, 20, 22, 23, 26, 29, 30, 33, 35, 36, 39, 40, 43, 46, 47, 48, 49, 50, 52, 53, 54, 57, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 72, 76, 77, 78, 79, 80, 81, 83, 84, 86, 87, 88, 91, 92, 93, 96, 98, 100, 101, 102, 103, 104, 108, 109, 111, 112, 113, 114, 115, 117, 118, 119, 120, 121, 122, 123, 124, 126, 128, 130, 131, 132, 133, 134, 135, 137, 138, 139, 140, 141, 142, 143, 144, 145, 147, 153, 155, 156, 159, 161, 162, 163, 164, 165, 166, 167, 168, 171, 175, 176, 177, 178, 179, 180, 181, 182, 183, 185, 186, 189, 190, 191, 192, 194, 195, 196, 197, 198, 200, 201, 202, 206, 208, 209, 210, 211, 212, 213, 214, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226]) Testing sample indices: array([ 4, 5, 6, 8, 9, 10, 12, 13, 15, 17, 18, 21, 24, 25, 27, 28, 31, 32, 34, 37, 38, 41, 42, 44, 45, 51, 55, 56, 58, 71, 73, 74, 75, 82, 85, 89, 90, 94, 95, 97, 99, 105, 106, 107, 110, 116, 125, 127, 129, 136, 146, 148, 149, 150, 151, 152, 154, 157, 158, 160, 169, 170, 172, 173, 174, 184, 187, 188, 193, 199, 203, 204, 205, 207, 215, 216]) Fold 2: Training sample indices: array([ 0, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24, 25, 27, 28, 29, 31, 32, 33, 34, 37, 38, 40, 41, 42, 44, 45, 46, 48, 50, 51, 52, 55, 56, 57, 58, 60, 61, 63, 64, 67, 68, 71, 72, 73, 74, 75, 76, 77, 78, 80, 82, 83, 84, 85, 86, 88, 89, 90, 91, 92, 94, 95, 97, 98, 99, 101, 104, 105, 106, 107, 109, 110, 112, 113, 116, 120, 122, 124, 125, 126, 127, 128, 129, 130, 133, 136, 137, 138, 141, 142, 144, 146, 148, 149, 150, 151, 152, 154, 155, 157, 158, 159, 160, 161, 164, 165, 167, 168, 169, 170, 172, 173, 174, 175, 176, 178, 179, 182, 184, 187, 188, 192, 193, 195, 196, 197, 199, 202, 203, 204, 205, 206, 207, 208, 212, 214, 215, 216, 217, 222, 223, 225]) Testing sample indices: array([ 1, 3, 23, 26, 30, 35, 36, 39, 43, 47, 49, 53, 54, 59, 62, 65, 66, 69, 70, 79, 81, 87, 93, 96, 100, 102, 103, 108, 111, 114, 115, 117, 118, 119, 121, 123, 131, 132, 134, 135, 139, 140, 143, 145, 147, 153, 156, 162, 163, 166, 171, 177, 180, 181, 183, 185, 186, 189, 190, 191, 194, 198, 200, 201, 209, 210, 211, 213, 218, 219, 220, 221, 224, 226])
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Baseline several modelsWe will build three models with default parameters and look at how the cross-validation scores perform across folds, then we'll select the two better models to take into the model tuning stage. __NOTE__: we could theoretically go straight to tuning all three models to select the best, but it is often not feasible to run grid searches for every model you want to try.
from sklearn.pipeline import Pipeline import numpy as np # these are the pre-processing stages stages = [ ('dummy', packt_classes.DummyEncoder(columns=['sex', 'cp', 'restecg', 'slope', 'thal'])), ('impute', packt_classes.BaggedRegressorImputer(impute_cols=['cigperday', 'ca'], random_state=seed)) ] # we'll add a new estimator onto the end of the pre-processing stages def build_pipeline(pipe_stages, estimator, est_name='clf'): # copy the stages pipe_stages = [stage for stage in pipe_stages] pipe_stages.append((est_name, estimator)) # return the pipe return Pipeline(pipe_stages) # report how the model did def cv_report(cv_scores): mean = np.average(cv_scores) std = np.std(cv_scores) print("CV scores: %r" % cv_scores) print("Average CV score: %.4f" % mean) print("CV score standard deviation: %.4f" % std) from sklearn.model_selection import cross_val_score from sklearn.linear_model import LogisticRegression # fit a Logistic regression lgr_pipe = build_pipeline(stages, LogisticRegression(random_state=seed)) cv_report(cross_val_score(lgr_pipe, X=X_train, y=y_train, scoring='neg_log_loss', cv=cv)) from sklearn.ensemble import GradientBoostingClassifier # fit a GBM gbm_pipe = build_pipeline(stages, GradientBoostingClassifier(n_estimators=25, max_depth=3, random_state=seed)) cv_report(cross_val_score(gbm_pipe, X=X_train, y=y_train, scoring='neg_log_loss', cv=cv)) from sklearn.ensemble import RandomForestClassifier # fit a RF rf_pipe = build_pipeline(stages, RandomForestClassifier(n_estimators=25, random_state=seed)) cv_report(cross_val_score(rf_pipe, X=X_train, y=y_train, scoring='neg_log_loss', cv=cv))
CV scores: array([-1.09616462, -2.26438127, -1.94406386]) Average CV score: -1.7682 CV score standard deviation: 0.4929
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Initial thoughts* Our GBM and logistic regression perform similarly* Random forest did not perform very well and showed high variability across training folds* Let's move forward with LR & GBM Tuning hyper-paramsNow that we've baselined several models, let's choose a couple of the better-performing models to tune.
from scipy.stats import randint, uniform from sklearn.model_selection import RandomizedSearchCV gbm_pipe = Pipeline([ ('dummy', packt_classes.DummyEncoder(columns=['sex', 'cp', 'restecg', 'slope', 'thal'])), ('impute', packt_classes.BaggedRegressorImputer(impute_cols=['cigperday', 'ca'], random_state=seed)), ('clf', GradientBoostingClassifier(random_state=seed)) ]) # define the hyper-params hyper_params = { 'impute__n_estimators': randint(10, 50), 'impute__max_samples': uniform(0.75, 0.125), 'impute__max_features': uniform(0.75, 0.125), 'clf__n_estimators': randint(50, 400), 'clf__max_depth': [1, 3, 4, 5, 7], 'clf__learning_rate': uniform(0.05, 0.1), 'clf__min_samples_split': [2, 4, 5, 10], 'clf__min_samples_leaf': [1, 2, 5] } # define the search gbm_search = RandomizedSearchCV(gbm_pipe, param_distributions=hyper_params, random_state=seed, cv=cv, n_iter=100, n_jobs=-1, verbose=1, scoring='neg_log_loss', return_train_score=False) gbm_search.fit(X_train, y_train) lgr_pipe = Pipeline([ ('dummy', packt_classes.DummyEncoder(columns=['sex', 'cp', 'restecg', 'slope', 'thal'])), ('impute', packt_classes.BaggedRegressorImputer(impute_cols=['cigperday', 'ca'], random_state=seed)), ('clf', LogisticRegression(random_state=seed)) ]) # define the hyper-params hyper_params = { 'impute__n_estimators': randint(10, 50), 'impute__max_samples': uniform(0.75, 0.125), 'impute__max_features': uniform(0.75, 0.125), 'clf__penalty': ['l1', 'l2'], 'clf__C': uniform(0.5, 0.125), 'clf__max_iter': randint(100, 500) } # define the search lgr_search = RandomizedSearchCV(lgr_pipe, param_distributions=hyper_params, random_state=seed, cv=cv, n_iter=100, n_jobs=-1, verbose=1, scoring='neg_log_loss', return_train_score=False) lgr_search.fit(X_train, y_train)
Fitting 3 folds for each of 100 candidates, totalling 300 fits
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Examine the resultsRight away we can tell that the logistic regression model was *much* faster than the gradient boosting model. However, does the extra time spent fitting end up giving us a performance boost? Let's introduce our test set to the optimized models and select the one that performs better. We are using [__log loss__](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html) as a scoring metric. See [this answer](https://stats.stackexchange.com/questions/208443/intuitive-explanation-of-logloss) for a full intuitive explanation of log loss, but note that lower (closer to zero) is better. There is no maximum to log loss, and typically, the more classes you have, the higher it will be. First the CV scores
from sklearn.utils import gen_batches def grid_report(search, n_splits, key='mean_test_score'): res = search.cv_results_ arr = res[key] slices = gen_batches(arr.shape[0], n_splits) return pd.Series({ '%s_MEAN' % key: arr.mean(), '%s_STD' % key: arr.std(), # the std of fold scores for each set of hyper-params, # averaged over all sets of params '%s_STD_OVER_FOLDS' % key: np.asarray([ arr[slc].std() for slc in slices ]).mean()}) pd.DataFrame.from_records([grid_report(gbm_search, cv.get_n_splits()), grid_report(lgr_search, cv.get_n_splits())], index=["GBM", "Log. Reg"]).T
_____no_output_____
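As a quick sanity check of the metric itself: log loss is the negative mean log-probability assigned to the true class, so confident-and-correct predictions score near zero while confident-and-wrong predictions are punished heavily. A tiny hand-built example (values chosen arbitrarily):
from sklearn.metrics import log_loss

y_true = [1, 0, 1]
confident_right = [[0.1, 0.9], [0.9, 0.1], [0.2, 0.8]]
confident_wrong = [[0.9, 0.1], [0.1, 0.9], [0.8, 0.2]]

print(log_loss(y_true, confident_right))  # ~0.14, close to zero
print(log_loss(y_true, confident_wrong))  # ~2.07, heavily penalized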
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
If the CV scores meet business requirements, move on to model selection
from sklearn.metrics import log_loss gbm_preds = gbm_search.predict_proba(X_test) lgr_preds = lgr_search.predict_proba(X_test) print("GBM test LOSS: %.5f" % log_loss(y_true=y_test, y_pred=gbm_preds)) print("Logistic regression test LOSS: %.5f" % log_loss(y_true=y_test, y_pred=lgr_preds))
GBM test LOSS: 0.96101 Logistic regression test LOSS: 0.97445
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Note that in log loss, greater is WORSE. Therefore, the logistic regression was out-performed by the GBM. If the greater time to fit is not an issue for you, then this would be the better model to select. Likewise, you may favor model transparency over the extra few decimal points of accuracy, in which case the logistic regression might be favorable. Variable importanceMost times, it's not enough to build a good model. Most executives will want to know *why* something works. Moreover, in regulated industries like banking or insurance, knowing why a model is working is incredibly important for defending models to a regulatory board. One of the methods commonly used for observing variable importance for non-linear methods (like our gradient boosting model) is to break the model into piecewise linear functions and measure how the model performs against each variable. This is called a "partial dependency plot." Raw feature importancesWe can get the raw feature importances from the estimator itself, and match them up with the transformed column names:
# feed data through the pipe stages to get the transformed feature names X_trans = X_train for step in gbm_search.best_estimator_.steps[:-1]: X_trans = step[1].transform(X_trans) transformed_feature_names = X_trans.columns transformed_feature_names best_gbm = gbm_search.best_estimator_.steps[-1][1] importances = best_gbm.feature_importances_ importances feature_importances = sorted(zip(np.arange(len(transformed_feature_names)), transformed_feature_names, importances), key=(lambda ici: ici[2]), reverse=True) feature_importances
_____no_output_____
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Partial dependencyIn the following section, we'll break our GBM into piecewise linear functions to gauge how different variables impact the target, and create [partial dependency plots](http://scikit-learn.org/stable/auto_examples/ensemble/plot_partial_dependence.html)
from sklearn.ensemble.partial_dependence import plot_partial_dependence from sklearn.ensemble.partial_dependence import partial_dependence def plot_partial(est, which_features, X, names, label): fig, axs = plot_partial_dependence(est, X, which_features, feature_names=names, n_jobs=3, grid_resolution=50, label=label) fig.suptitle('Partial dependence of %i features\n' 'on heart disease' % (len(which_features))) plt.subplots_adjust(top=0.8) # tight_layout causes overlap with suptitle plot_partial(est=best_gbm, X=X_trans, which_features=[2, 8, 9, 0, 6, (2, 9)], names=transformed_feature_names, label=1)
_____no_output_____
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Post-processingSuppose our board of surgeons only cares if the prediction is class "3" with a probability of >=0.3. In this segment we'll write and test a piece of code that we'll use as post-processing in our Flask API.
def is_certain_class(predictions, cls=3, proba=0.3): # find the row arg maxes (ones that are predicted 'cls') argmaxes = predictions.argmax(axis=1) # get the probas for the cls of interest probas = predictions[:, cls] # boolean mask that becomes our prediction vector return ((argmaxes == cls) & (probas >= proba)).astype(int)
_____no_output_____
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
This means we'll need to use "`predict_proba`" rather than "`predict`":
P = lgr_search.predict_proba(X_test) P[:5] is_certain_class(P)
_____no_output_____
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn