Dataset columns:
* markdown — string (lengths 0 to 1.02M)
* code — string (lengths 0 to 832k)
* output — string (lengths 0 to 1.02M)
* license — string (lengths 3 to 36)
* path — string (lengths 6 to 265)
* repo_name — string (lengths 6 to 127)
QUEUE USING LinkedList
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None


class Queue:
    def __init__(self):
        self.head = None
        self.tail = None
        self.num_elements = 0

    def enqueue(self, value):
        new_node = Node(value)
        if self.head is None:
            self.head = new_node
            self.tail = self.head
        else:
            self.tail.next = new_node   # add data to the next attribute of the tail (i.e. the end of the queue)
            self.tail = self.tail.next  # shift the tail (i.e., the back of the queue)
        self.num_elements += 1

    def dequeue(self):
        if self.is_empty():
            return None
        value = self.head.value
        self.head = self.head.next
        self.num_elements -= 1
        return value

    def size(self):
        return self.num_elements

    def is_empty(self):
        return self.num_elements == 0


# Setup
q = Queue()
q.enqueue(1)
q.enqueue(2)
q.enqueue(3)

# Test size
print("Pass" if (q.size() == 3) else "Fail")

# Test dequeue
print("Pass" if (q.dequeue() == 1) else "Fail")

# Test enqueue
q.enqueue(4)
print("Pass" if (q.dequeue() == 2) else "Fail")
print("Pass" if (q.dequeue() == 3) else "Fail")
print("Pass" if (q.dequeue() == 4) else "Fail")
q.enqueue(5)
print("Pass" if (q.size() == 1) else "Fail")
Pass Pass Pass Pass Pass Pass
MIT
QueueLL.ipynb
souravgopal25/Data-Structure-Algorithm-Nanodegree
Vary Rmax and EFF while holding x_HI and z constant. x_HI error: 1e-2%
R_BUBBLE_MAXES = np.linspace(30, 0.225, 9)
HII_EFF_FACTORS = np.array(
    [19.04625, 19.511249999999997, 20.23875, 21.085, 22.655000000000012,
     25.779375, 32.056640625, 56.6734375, 5291.5]
)
redshifts = np.array([6] * len(R_BUBBLE_MAXES))
total_neutral_fractions = np.array([0.19999881, 0.19998097, 0.20000417, 0.20001106, 0.19998624,
                                    0.20001978, 0.19999591, 0.19998911, 0.19998213])

color = 'w'
percent = 0.475
mfp_maxRs = np.zeros(len(mfp_size_probabilities))

fig = plt.figure(dpi=500, facecolor='#404040')
ax = fig.gca()
for spine in ax.spines.values():  # figure color
    spine.set_edgecolor(color)

for i in np.array([0, 5, 6, 7]):
    mfp_maxRs[i] = mfp_neutral_region_size[np.argmax(mfp_size_probabilities[i])]
    plt.plot(
        mfp_neutral_region_size[:int(percent*bin_num_mfp)],
        mfp_size_probabilities[i][:int(percent*bin_num_mfp)],
        '-',
        label=f'$R_{{max}}={R_BUBBLE_MAXES[i]:.2f}, \zeta={HII_EFF_FACTORS[i]:.2f}, \
peak={mfp_maxRs[i]:.2f}Mpc$'
    )

plt.legend(fancybox=True, framealpha=0)
plt.tick_params(color=color, labelcolor=color)
plt.xlabel('$R$ (Mpc)', color=color)
plt.ylabel('$R\mathrm{d}P/\mathrm{d}R$', color=color)
plt.title(f'Mean Free Path method', color=color)
# plt.rcParams['font.size'] = font
# plt.yscale('log')
plt.show()

color = 'white'
percent = 0.475
mfp_maxRs = np.zeros(len(mfp_size_probabilities))
plt.rcParams['figure.figsize'] = [10, 6]

for i, mfp_size_probability in enumerate(mfp_size_probabilities):
    mfp_maxRs[i] = mfp_neutral_region_size[np.argmax(mfp_size_probability)]
    plt.plot(
        mfp_neutral_region_size[:int(percent*bin_num_mfp)],
        mfp_size_probability[:int(percent*bin_num_mfp)],
        '-',
        label=f'Rmax={R_BUBBLE_MAXES[i]:.2f}, EFF={HII_EFF_FACTORS[i]:.2f}, \
x_HI={total_neutral_fractions[i]*100:.1f}%, \
maxR={mfp_maxRs[i]:.2f}'
    )

plt.legend(prop={'size': 15}, fancybox=True, framealpha=0)
plt.tick_params(color=color, labelcolor=color)
plt.xlabel('$R$ (Mpc)', color=color)
plt.ylabel('$R\mathrm{d}P/\mathrm{d}R$', color=color)
plt.title(f'Our Boxes, MFP method: Vary: Rmax, EFF, constant: x_HI, z={redshifts[0]} ({iteration_mfp:.0e} iterations)', color=color)
# plt.rcParams['font.size'] = 18
# plt.yscale('log')
_____no_output_____
BSD-3-Clause
MFPHistogramRmaxEFF.ipynb
tommychinwj/HI_characterization
Challenge 1 - Getting started

Welcome to the 2020 NextWave Data Science Challenge! Thank you for taking part in the private challenge and helping us test and improve the student experience. We have prepared a small initial dataset for you to start flexing your data science muscles. We are hoping you will be able to open and view some data, create a basic solution to the problem, and submit your results via the EY Data Science platform.

Registering for the challenge and getting the data

Prior to running this notebook, make sure you have:

* **Created a profile** on the [EY Data Science Platform](http://datascience.cognistreamer.com/)
* **Registered** for the "NextWave Bushfire Challenge Phase 1 - Detect fire edges in airborne image" on the Platform
* **Downloaded and extracted** the "Challenge1_v1.zip" file under "Additional data" from the Challenge page on the Platform
* **Uploaded** the contents of the .zip file into your jupyter environment, in the "03_EY_challenge1" folder.

Your folder structure should look like the below. To check you have done this correctly, execute the code cell below and compare the contents of your current working directory (where this notebook is executing from) to the image above. You should see:

* `/home/jovyan/03_EY_challenge1`, showing you are working in the "03_EY_challenge1" folder.
* `['.ipynb_checkpoints', 'EY_Challenge1_Getting_started.ipynb', 'input_linescan', 'test.csv', 'tgt_mask', 'train.csv', 'world']`, showing the contents of the folder.
import os

print(os.getcwd())
print(os.listdir())
/home/jovyan/test_nb_4/03_EY_challenge1_v1 ['.ipynb_checkpoints', 'EY_Challenge1_Getting_started_v1.ipynb', 'input_linescan', 'sample_submission.csv', 'test_v1.csv', 'tgt_mask', 'tmp.tif', 'train_v1.csv', 'world']
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
A quick word on the data

The initial dataset is organised into the following folder structure:

* input_linescan: these are images of fires taken from a plane. They are simple .jpg files, not georeferenced in space.
* tgt_mask: these are masks which align to the linescan images. They have been manually drawn based on the linescan images.
* world: these are "world" files, strings of numbers used for georeferencing the linescan and mask files. They put the .jpg files 'in context' with respect to a Coordinate Reference System (CRS).

There are 25 linescan images with associated world files, but only 20 masks. Your task is to use the 20 linescan/mask pairs to train a model or process which can produce a mask for the remaining 5 linescans with no mask. A quick sanity check of these counts is sketched below.
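The following snippet is an optional sanity check of those counts. The folder names are taken from the description above, and the expected numbers (25/20/25) are assumptions based on that description rather than output produced by this notebook.

```python
import os

# Count the files in each folder described above (paths assumed as per the challenge layout).
n_linescans = len([f for f in os.listdir('input_linescan') if f.lower().endswith('.jpg')])
n_masks = len([f for f in os.listdir('tgt_mask') if f.lower().endswith('.jpg')])
n_world = len([f for f in os.listdir('world') if f.lower().endswith('.bqw')])

print(n_linescans, n_masks, n_world)  # expected: 25 linescans, 20 masks, 25 world files
```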
%matplotlib inline

import os
import numpy as np
import pandas as pd
import datacube
import rasterio
import matplotlib.pyplot as plt

from skimage import io
from skimage import data
from skimage import color
from skimage import morphology
from skimage import segmentation
from skimage import measure

from affine import Affine
from rasterio.plot import show, show_hist
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
Import input variable: aerial linescan images
file_stem = 'JORDAN 244 P1_201901291522_MGA94_55'
raster_filename = 'input_linescan/' + file_stem + '.jpg'
world_filename = 'world/' + file_stem + '.bqw'

src = rasterio.open(raster_filename, mode='r+')
src.read()
Dataset has no geotransform set. The identity matrix may be returned.
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
Contextualise raster data in space by providing a Coordinate Reference System (CRS) and transform function
show(src.read())
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
Note that this raster data is just a table of values; it is not pinned to a particular location, or "georeferenced". For this we need a CRS and an affine transformation function.

1. CRS: 'epsg:28355' is a useful CRS for Australia, otherwise known as GDA94 / MGA zone 55. https://epsg.io/28355
2. Affine transformation function: we also need a transformation function to describe how to transform our raster data into the relevant CRS. This includes the location, scale and rotation of our raster data. These values can be found in the world files (ending in '.bqw') of the same name. https://en.wikipedia.org/wiki/World_file
a, d, b, e, c, f = np.loadtxt(world_filename)  # order depends on convention
transform = Affine(a, b, c, d, e, f)
crs = rasterio.crs.CRS({"init": "epsg:28355"})  # "epsg:4326" WGS 84, or whatever CRS you know the image is in

src.transform = transform
src.crs = crs

show(src.read(), transform=src.transform)
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
Note that the coordinates of the image are now shown in the 'epsg:28355' CRS. This means our data is no longer just an image, but an observation of a particular location.

Compare against the target mask

Each linescan/mask pair shares the same transform (found in the world file of the same name), so we can reuse the transform defined above to view the target for this particular linescan.
mask_filename = 'tgt_mask/' + file_stem + '.jpg'

tgt = rasterio.open(mask_filename, mode='r+')
tgt.transform = transform
tgt.crs = crs

show(tgt.read(), transform=tgt.transform)
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
Plotting the linescan and the mask together allows us to take a look at how they compare.
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
show(src.read(), transform=src.transform, ax=ax)
show(tgt.read(), transform=src.transform, ax=ax, alpha=0.5)
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
Understanding the linescan files
src.read().shape
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
We can see that there are three channels in the raster image file: red, green and blue. If we show these individually, we can see the image is similar for all three channels.
r = src.read(1)
g = src.read(2)
b = src.read(3)

fig, (axr, axg, axb) = plt.subplots(1, 3, figsize=(21, 7))
show(r, ax=axr, cmap='Reds', title='red channel', transform=src.transform)
show(g, ax=axg, cmap='Greens', title='green channel', transform=src.transform)
show(b, ax=axb, cmap='Blues', title='blue channel', transform=src.transform)
plt.show()
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
A histogram of each channel shows that the distribution of the values of the three channels is also similar, with a slightly higher red channel count at the high end of the distribution. The dynamic range of each channel is 8 bits, so the values vary between 0 and 255, with 0 meaning the camera sensor received no light and 255 meaning the camera received the maximum amount of light that can be recorded.
show_hist(
    src.read(), bins=50, lw=0.0, stacked=False, alpha=0.3,
    histtype='stepfilled', title="Histogram by channel")
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
For the red channel, the histogram shows two clusters of values, below which the data is mostly noise, and above which the signal is clearer.

Preprocess raster

Before we can extract meaningful information from the raster image, we need to clean up noise in the image and make the signal clearer. Based on the histogram above, we could suggest a threshold in the red channel of 100 to mask the data and remove the noise.
threshold = 100

r[r < threshold] = 0
g[g < threshold] = 0
b[b < threshold] = 0

fig, (axr, axg, axb) = plt.subplots(1, 3, figsize=(21, 7))
show(r, ax=axr, cmap='Reds', title='red channel')
show(g, ax=axg, cmap='Greens', title='green channel')
show(b, ax=axb, cmap='Blues', title='blue channel')

for ax in (axr, axg, axb):
    ax.set_axis_off()

plt.show()
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
A number of further cleansing operations have been applied below. You can experiment with different strategies including machine learning and feature engineering to find an optimal process.
r = src.read(1)
threshold = 100
r[r < threshold] = 0

lum = color.rgb2gray(r)
mask1 = morphology.remove_small_objects(lum, 50)
mask2 = morphology.remove_small_holes(mask1, 5)
mask3 = morphology.opening(mask2, morphology.disk(3))

fig, ax_arr = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(20, 10))
ax1, ax2, ax3, ax4 = ax_arr.ravel()

ax1.imshow(r, cmap='Reds')
ax1.set_title("Thresholded image - red channel greater than " + str(threshold))

ax2.imshow(mask1, cmap="gray")
ax2.set_title("Mask1 - small objects removed")

ax3.imshow(mask2, cmap="gray")
ax3.set_title("Mask2 - small holes removed")

ax4.imshow(mask3, cmap="gray")
ax4.set_title("Mask3 - disk + opening")

for ax in ax_arr.ravel():
    ax.set_axis_off()

plt.tight_layout()
plt.show()
The behavior of rgb2gray will change in scikit-image 0.19. Currently, rgb2gray allows 2D grayscale image to be passed as inputs and leaves them unmodified as outputs. Starting from version 0.19, 2D arrays will be treated as 1D images with 3 channels. Any labeled images will be returned as a boolean array. Did you mean to use a boolean array?
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
At this point, our mask is just a table of values; it is not georeferenced. We can write this array into a temporary rasterio dataset so that we can query it in context with the source linescan image.
# convert boolean mask to integers
mask = mask3.astype(np.uint8)

# create a temporary dataset for storing the array
temp = rasterio.open(
    'tmp.tif',
    mode='w+',
    driver='GTiff',
    height=mask.shape[0],
    width=mask.shape[1],
    count=1,
    dtype=mask.dtype,
    crs=src.crs,
    transform=src.transform)

# copy the array into the opened dataset
temp.write(mask, 1)

show(temp.read(), transform=src.transform)
temp.close()
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
Now, the mask is georeferenced. To understand more about the rasterio.open() function, uncomment and run the cell below.
# help(rasterio.open)
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
Once we are happy with our preprocessing steps, we can create a function that each image can be passed to directly.
def get_mask(img, thresh):
    r = img.read(1)
    r[r < thresh] = 0
    lum = color.rgb2gray(r)
    mask1 = morphology.remove_small_objects(lum, 50)
    mask2 = morphology.remove_small_holes(mask1, 5)
    mask3 = morphology.opening(mask2, morphology.disk(3))
    mask3[mask3 > 0] = 255
    return mask3.astype(np.uint8)


mask = get_mask(src, 90)
show(mask, transform=src.transform, cmap='binary_r')
The behavior of rgb2gray will change in scikit-image 0.19. Currently, rgb2gray allows 2D grayscale image to be passed as inputs and leaves them unmodified as outputs. Starting from version 0.19, 2D arrays will be treated as 1D images with 3 channels. Any labeled images will be returned as a boolean array. Did you mean to use a boolean array?
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
Making a submission

For the five linescans where there is no mask provided, you must first create a mask, and then return True or False for a specific set of coordinates, where True indicates that coordinate is on fire, and False indicates it is not. The "test.csv" file provides a list of 1000 coordinates that are required to be classified for each of these five linescans. For this part of the challenge, you can ignore the dateTimeLocal column as we are not working with timestamps yet. Note that the coordinates are denoted in the CRS mentioned above, epsg:28355.
test = pd.read_csv('test_v1.csv', index_col='id')
test.head()
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
The index method allows you to pass in a set of x and y coordinates and return the row and column of a rasterio dataset which is georeferenced in space. We can then index the dataset using this row and col to return the value at that address.
# get the red band of the dataset only
red = src.read(1)

# get the coordinates of the centre of the dataset
x, y = (src.bounds.left + src.width // 2, src.bounds.top - src.height // 2)

# get the row and column indices that correspond to the centre of the dataset
row, col = src.index(x, y)

# get the value at that address
red[row, col]
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
Now we will iterate over the test set of linescan images, and iterate over the test coordinates required in each image, filling the 'onFire' column of the 'test' dataframe with the results of the masking process we have developed.
fnames = test.stem.unique()
fnames

for file_stem in fnames:
    # open the raster file and georeference with the corresponding world file
    raster_filename = 'input_linescan/' + file_stem + '.jpg'
    world_filename = 'world/' + file_stem + '.bqw'
    src = rasterio.open(raster_filename, mode='r+')
    a, d, b, e, c, f = np.loadtxt(world_filename)  # order depends on convention
    transform = Affine(a, b, c, d, e, f)
    crs = rasterio.crs.CRS({"init": "epsg:28355"})  # "epsg:4326" WGS 84, or whatever CRS you know the image is in
    src.transform = transform
    src.crs = crs

    # create a mask using the process we developed earlier. For this example, provide the same threshold for all linescans
    mask = get_mask(src, 100)

    # create a temporary dataset for storing the array
    temp = rasterio.open(
        'tmp.tif',
        mode='w+',
        driver='GTiff',
        height=mask.shape[0],
        width=mask.shape[1],
        count=1,
        dtype=mask.dtype,
        crs=src.crs,
        transform=src.transform)

    # copy the array into the opened dataset
    temp.write(mask, 1)

    # iterate over the coordinates that are required for testing in the current linescan file
    for idx, ob in test.loc[test.stem == file_stem].iterrows():
        row, col = temp.index(ob.x, ob.y)
        result = temp.read(1)[row, col]
        test.loc[(test.stem == file_stem) & (test.x == ob.x) & (test.y == ob.y), 'target'] = result

    temp.close()

test.to_csv('sample_submission.csv', columns=['target'])
test.head()
_____no_output_____
MIT
notebooks/03_EY_challenge1_v1/EY_Challenge1_Getting_started_v1.ipynb
aogeodh/aogeodh-cube-in-a-box
training the seq2seq model
batch_index_check_training_loss = 100
batch_index_check_validation_loss = ((len(training_questions)) // batch_size // 2) - 1
total_training_loss_error = 0
list_validation_loss_error = []
early_stopping_check = 0
early_stopping_stop = 1000
checkpoint = "chatbot_weights.ckpt"
session.run(tf.global_variables_initializer())

for epoch in range(1, epochs + 1):
    for batch_index, (padded_questions_in_batch, padded_answers_in_batch) in enumerate(split_into_batches(training_questions, training_answers, batch_size)):
        starting_time = time.time()
        _, batch_training_loss_error = session.run([optimizer_gradient_clipping, loss_error],
                                                   {inputs: padded_questions_in_batch,
                                                    targets: padded_answers_in_batch,
                                                    lr: learning_rate,
                                                    sequence_length: padded_answers_in_batch.shape[1],
                                                    keep_prob: keep_probability})
        total_training_loss_error += batch_training_loss_error
        ending_time = time.time()
        batch_time = ending_time - starting_time

        if batch_index % batch_index_check_training_loss == 0:
            print('Epoch: {:>3}/{}, Batch: {:>4}/{}, Training Loss Error: {:>6.3f}, Training Time on 100 Batches: {:d} seconds'
                  .format(epoch, epochs, batch_index,
                          len(training_questions) // batch_size,
                          total_training_loss_error / batch_index_check_training_loss,
                          int(batch_time * batch_index_check_training_loss)))
            total_training_loss_error = 0

        if batch_index % batch_index_check_validation_loss == 0 and batch_index > 0:
            total_validation_loss_error = 0  # reset the validation loss accumulator
            starting_time = time.time()
            for batch_index_validation, (padded_questions_in_batch, padded_answers_in_batch) in enumerate(split_into_batches(validation_questions, validation_answers, batch_size)):
                batch_validation_loss_error = session.run(loss_error,
                                                          {inputs: padded_questions_in_batch,
                                                           targets: padded_answers_in_batch,
                                                           lr: learning_rate,
                                                           sequence_length: padded_answers_in_batch.shape[1],
                                                           keep_prob: 1})
                total_validation_loss_error += batch_validation_loss_error
            ending_time = time.time()
            batch_time = ending_time - starting_time
            average_validation_loss_error = total_validation_loss_error / (len(validation_questions) / batch_size)
            print('Validation Loss Error: {:>6.3f}, Batch Validation Time: {:d} seconds'.format(average_validation_loss_error,
                                                                                                int(batch_time)))
            learning_rate *= learning_rate_decay
            if learning_rate < min_learning_rate:
                learning_rate = min_learning_rate
            list_validation_loss_error.append(average_validation_loss_error)
            if average_validation_loss_error <= min(list_validation_loss_error):
                print('I speak better now!!')
                early_stopping_check = 0
                saver = tf.train.Saver()
                saver.save(session, checkpoint)
            else:
                print('Sorry I do not speak better, I need to practice more.')
                early_stopping_check += 1
                if early_stopping_check == early_stopping_stop:
                    break
    if early_stopping_check == early_stopping_stop:
        print('My apologies, I cannot speak better anymore. This is the best I can do!.')
        break

print('Game Over')
_____no_output_____
MIT
Training.py.ipynb
sudoberlin/chatbot
Visualizing and Comparing LIS Output

```{figure} ./images/nasa-lis-combined-logos.png
---
width: 300px
---
```

LIS Output Primer

LIS writes model state variables to disk at a frequency selected by the user (e.g., 6-hourly, daily, monthly). The LIS output we will be exploring was originally generated as *daily* NetCDF files, meaning one NetCDF was written per simulated day. We have converted these NetCDF files into a [Zarr](https://zarr.readthedocs.io/en/stable/) store for improved performance in the cloud.

Import Libraries
# interface to Amazon S3 filesystem
import s3fs

# interact with n-d arrays
import numpy as np
import xarray as xr

# interact with tabular data (incl. spatial)
import pandas as pd
import geopandas as gpd

# interactive plots
import holoviews as hv
import geoviews as gv
import hvplot.pandas
import hvplot.xarray

# used to find nearest grid cell to a given location
from scipy.spatial import distance

# set bokeh as the holoviews plotting backend
hv.extension('bokeh')
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Load the LIS Output

The `xarray` library makes working with labelled n-dimensional arrays easy and efficient. If you've used the `pandas` library, it should feel quite familiar.

Here we load the LIS output into an `xarray.Dataset` object:
# create S3 filesystem object
s3 = s3fs.S3FileSystem(anon=False)

# define the name of our S3 bucket
bucket_name = 'eis-dh-hydro/SNOWEX-HACKWEEK'

# define path to store on S3
lis_output_s3_path = f's3://{bucket_name}/DA_SNODAS/SURFACEMODEL/LIS_HIST.d01.zarr/'

# create key-value mapper for S3 object (required to read data stored on S3)
lis_output_mapper = s3.get_mapper(lis_output_s3_path)

# open the dataset
lis_output_ds = xr.open_zarr(lis_output_mapper, consolidated=True)

# drop some unneeded variables
lis_output_ds = lis_output_ds.drop_vars(['_history', '_eis_source_path'])
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
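As an aside, consolidating daily NetCDF output into a Zarr store like the one loaded above can be done with `xarray` itself. The sketch below is illustrative only: the file pattern, chunking, and output path are assumptions, not the actual conversion used for this dataset.

```python
import glob
import xarray as xr

# Illustrative only: combine hypothetical daily NetCDF files into one Zarr store.
daily_files = sorted(glob.glob('LIS_HIST_*.nc'))           # assumed file pattern
ds = xr.open_mfdataset(daily_files, combine='by_coords')   # concatenate along time
ds.chunk({'time': 30}).to_zarr('LIS_HIST.d01.zarr', consolidated=True)
```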
Explore the Data

Display an interactive widget for inspecting the dataset by running a cell containing the variable name. Expand the dropdown menus and click on the document and database icons to inspect the variables and attributes.
lis_output_ds
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Accessing Attributes

Dataset attributes (metadata) are accessible via the `attrs` attribute:
lis_output_ds.attrs
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Accessing Variables

Variables can be accessed using either **dot notation** or **square bracket notation**:
# dot notation
lis_output_ds.SnowDepth_tavg

# square bracket notation
lis_output_ds['SnowDepth_tavg']
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Which syntax should I use?

While both syntaxes perform the same function, the square-bracket syntax is useful when interacting with a dataset programmatically. For example, we can define a variable `varname` that stores the name of the variable in the dataset we want to access and then use that with the square-brackets notation:
varname = 'SnowDepth_tavg'

lis_output_ds[varname]
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
The dot notation syntax will not work this way because `xarray` tries to find a variable in the dataset named `varname` instead of the value of the `varname` variable. When `xarray` can't find this variable, it throws an error:
# uncomment and run the code below to see the error
# varname = 'SnowDepth_tavg'
# lis_output_ds.varname
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Dimensions and Coordinate Variables

The dimensions and coordinate variable fields put the "*labelled*" in "labelled n-dimensional arrays":

* **Dimensions:** labels for each dimension in the dataset (e.g., `time`)
* **Coordinates:** labels for indexing along dimensions (e.g., `'2019-01-01'`)

We can use these labels to select, slice, and aggregate the dataset.

Selecting/Subsetting

`xarray` provides two methods for selecting or subsetting along coordinate variables:

* index selection: `ds.isel(time=0)`
* value selection: `ds.sel(time='2019-01-01')`

For example, we can select the first timestep from our dataset using index selection by passing the dimension name as a keyword argument:
# remember: python indexes start at 0
lis_output_ds.isel(time=0)
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Or we can use value selection to select based on the coordinate(s) (think "labels") of a given dimension:
lis_output_ds.sel(time='2018-01-01')
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
The `.sel()` approach also allows the use of shortcuts in some cases. For example, here we select all timesteps in the month of January 2018:
lis_output_ds.sel(time='2018-01')
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Select a custom range of dates using Python's built-in `slice()` function:
lis_output_ds.sel(time=slice('2018-01-01', '2018-01-15'))
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Latitude and Longitude

You may have noticed that latitude (`lat`) and longitude (`lon`) are listed as data variables, not coordinate variables. This dataset would be easier to work with if `lat` and `lon` were coordinate variables and dimensions. Here we define a helper function that reads the spatial information from the dataset attributes, generates arrays containing the `lat` and `lon` values, and appends them to the dataset:
def add_latlon_coords(dataset: xr.Dataset) -> xr.Dataset:
    """Adds lat/lon as dimensions and coordinates to an xarray.Dataset object."""

    # get attributes from dataset
    attrs = dataset.attrs

    # get x, y resolutions
    dx = round(float(attrs['DX']), 3)
    dy = round(float(attrs['DY']), 3)

    # get grid cells in x, y dimensions
    ew_len = len(dataset['east_west'])
    ns_len = len(dataset['north_south'])

    # get lower-left lat and lon
    ll_lat = round(float(attrs['SOUTH_WEST_CORNER_LAT']), 3)
    ll_lon = round(float(attrs['SOUTH_WEST_CORNER_LON']), 3)

    # calculate upper-right lat and lon
    ur_lat = ll_lat + (dy * ns_len)
    ur_lon = ll_lon + (dx * ew_len)

    # define the new coordinates
    coords = {
        # create arrays containing the lat/lon at each gridcell
        'lat': np.linspace(ll_lat, ur_lat, ns_len, dtype=np.float32, endpoint=False),
        'lon': np.linspace(ll_lon, ur_lon, ew_len, dtype=np.float32, endpoint=False)
    }

    lon_attrs = dataset.lon.attrs
    lat_attrs = dataset.lat.attrs

    # rename the original lat and lon variables
    dataset = dataset.rename({'lon': 'orig_lon', 'lat': 'orig_lat'})

    # rename the grid dimensions to lat and lon
    dataset = dataset.rename({'north_south': 'lat', 'east_west': 'lon'})

    # assign the coords above as coordinates
    dataset = dataset.assign_coords(coords)

    dataset.lon.attrs = lon_attrs
    dataset.lat.attrs = lat_attrs

    return dataset
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Now that the function is defined, let's use it to append `lat` and `lon` coordinates to the LIS output:
lis_output_ds = add_latlon_coords(lis_output_ds)
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Inspect the dataset:
lis_output_ds
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Now `lat` and `lon` are listed as coordinate variables and have replaced the `north_south` and `east_west` dimensions. This will make it easier to spatially subset the dataset!

Basic Spatial Subsetting

We can use the `slice()` function we used above on the `lat` and `lon` dimensions to select data between a range of latitudes and longitudes:
lis_output_ds.sel(lat=slice(37, 41), lon=slice(-110, -101))
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Notice how the sizes of the `lat` and `lon` dimensions have decreased.

Subset Across Multiple Dimensions

Select snow depth for water year 2018 (October 2017 through September 2018) within a range of lat/lon:
# define a range of dates to select
wy_2018_slice = slice('2017-10-01', '2018-09-30')
lat_slice = slice(37, 41)
lon_slice = slice(-109, -102)

# select the snow depth and subset to wy_2018_slice
snd_CO_wy2018_ds = lis_output_ds['SnowDepth_tavg'].sel(time=wy_2018_slice, lat=lat_slice, lon=lon_slice)

# inspect resulting dataset
snd_CO_wy2018_ds
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Plotting

We've imported two plotting libraries:

* `matplotlib`: static plots
* `hvplot`: interactive plots

We can make a quick `matplotlib`-based plot for the subsetted data using the `.plot()` function supplied by `xarray.Dataset` objects. For this example, we'll select one day and plot it:
# simple matplotlib plot
snd_CO_wy2018_ds.sel(time='2018-01-01').plot()
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Similarly we can make an interactive plot using the `hvplot` accessor and specifying a `quadmesh` plot type:
# hvplot based map
snd_CO_20180101_plot = snd_CO_wy2018_ds.sel(time='2018-01-01').hvplot.quadmesh(
    geo=True, rasterize=True, project=True,
    xlabel='lon', ylabel='lat', cmap='viridis', tiles='EsriImagery')

snd_CO_20180101_plot
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Pan, zoom, and scroll around the map. Hover over the LIS data to see the data values. If we try to plot more than one time-step `hvplot` will also provide a time-slider we can use to scrub back and forth in time:
snd_CO_wy2018_ds.sel(time='2018-01').hvplot.quadmesh(
    geo=True, rasterize=True, project=True,
    xlabel='lon', ylabel='lat', cmap='viridis', tiles='EsriImagery')
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
From here on out we will stick with `hvplot` for plotting.

Timeseries Plots

We can generate a timeseries for a given grid cell by selecting and calling the plot function:
# define point to take timeseries (note: must be present in coordinates of dataset)
ts_lon, ts_lat = (-105.65, 40.35)

# plot timeseries (hvplot knows how to plot based on dataset's dimensionality!)
snd_CO_wy2018_ds.sel(lat=ts_lat, lon=ts_lon).hvplot(title=f'Snow Depth Timeseries @ Lon: {ts_lon}, Lat: {ts_lat}',
                                                    xlabel='Date', ylabel='Snow Depth (m)') + \
    snd_CO_20180101_plot * gv.Points([(ts_lon, ts_lat)]).opts(size=10, color='red')
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
In the next section we'll learn how to create a timeseries over a broader area.

Aggregation

We can perform aggregation operations on the dataset such as `min()`, `max()`, `mean()`, and `sum()` by specifying the dimensions along which to perform the calculation. For example, we can calculate the mean and maximum snow depth at each grid cell over water year 2018 as follows:
# calculate the mean at each grid cell over the time dimension
mean_snd_CO_wy2018_ds = snd_CO_wy2018_ds.mean(dim='time')
max_snd_CO_wy2018_ds = snd_CO_wy2018_ds.max(dim='time')

# plot the mean and max snow depth
mean_snd_CO_wy2018_ds.hvplot.quadmesh(geo=True, rasterize=True, project=True,
                                      xlabel='lon', ylabel='lat', cmap='viridis',
                                      tiles='EsriImagery', title='Mean Snow Depth - WY2018') + \
    max_snd_CO_wy2018_ds.hvplot.quadmesh(geo=True, rasterize=True, project=True,
                                         xlabel='lon', ylabel='lat', cmap='viridis',
                                         tiles='EsriImagery', title='Max Snow Depth - WY2018')
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Area Average
# take area-averaged mean at each timestep
mean_snd_CO_wy2018_ds = snd_CO_wy2018_ds.mean(['lat', 'lon'])

# inspect the dataset
mean_snd_CO_wy2018_ds

# plot timeseries (hvplot knows how to plot based on dataset's dimensionality!)
mean_snd_CO_wy2018_ds.hvplot(title='Mean LIS Snow Depth for Colorado', xlabel='Date', ylabel='Snow Depth (m)')
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Comparing LIS Output

Now that we're familiar with the LIS output, let's compare it to two other datasets: SNODAS (raster) and SNOTEL (point).

LIS (raster) vs. SNODAS (raster)

First, we'll load the SNODAS dataset which we also have hosted on S3 as a Zarr store:
# load SNODAS dataset
# snodas depth
key = "SNODAS/snodas_snowdepth_20161001_20200930.zarr"
snodas_depth_ds = xr.open_zarr(s3.get_mapper(f"{bucket_name}/{key}"), consolidated=True)

# apply scale factor to convert to meters (0.001 per SNODAS user guide)
snodas_depth_ds = snodas_depth_ds * 0.001
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Next we define a helper function to extract the (lon, lat) of the nearest grid cell to a given point:
def nearest_grid(ds, pt):
    """
    Returns the nearest lon and lat to pt in a given Dataset (ds).

    pt : input point, tuple (longitude, latitude)
    output: lon, lat
    """
    if all(coord in list(ds.coords) for coord in ['lat', 'lon']):
        df_loc = ds[['lon', 'lat']].to_dataframe().reset_index()
    else:
        df_loc = ds[['orig_lon', 'orig_lat']].isel(time=0).to_dataframe().reset_index()

    loc_valid = df_loc.dropna()
    pts = loc_valid[['lon', 'lat']].to_numpy()
    idx = distance.cdist([pt], pts).argmin()

    return loc_valid['lon'].iloc[idx], loc_valid['lat'].iloc[idx]
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
The next cell will look pretty similar to what we did earlier to plot a timeseries of a single point in the LIS data. The general steps are:

* Extract the coordinates of the SNODAS grid cell nearest to our LIS grid cell (`ts_lon` and `ts_lat` from earlier)
* Subset the SNODAS and LIS data to the grid cells and date ranges of interest
* Create the plots!
# get lon, lat of snodas grid cell nearest to the LIS coordinates we used earlier
snodas_ts_lon, snodas_ts_lat = nearest_grid(snodas_depth_ds, (ts_lon, ts_lat))

# define a date range to plot (shorter = quicker for demo)
start_date, end_date = ('2018-01-01', '2018-03-01')
plot_daterange = slice(start_date, end_date)

# select SNODAS grid cell and subset to plot_daterange
snodas_snd_subset_ds = snodas_depth_ds.sel(lon=snodas_ts_lon, lat=snodas_ts_lat, time=plot_daterange)

# select LIS grid cell and subset to plot_daterange
lis_snd_subset_ds = lis_output_ds['SnowDepth_tavg'].sel(lat=ts_lat, lon=ts_lon, time=plot_daterange)

# create SNODAS snow depth plot
snodas_snd_plot = snodas_snd_subset_ds.hvplot(label='SNODAS')

# create LIS snow depth plot
lis_snd_plot = lis_snd_subset_ds.hvplot(label='LIS')

# create SNODAS vs LIS snow depth plot
lis_vs_snodas_snd_plot = (lis_snd_plot * snodas_snd_plot)

# display the plot
lis_vs_snodas_snd_plot.opts(title=f'Snow Depth @ Lon: {ts_lon}, Lat: {ts_lat}',
                            legend_position='right',
                            xlabel='Date',
                            ylabel='Snow Depth (m)')
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
LIS (raster) vs. SNODAS (raster) vs. SNOTEL (point)

Now let's add SNOTEL point data to our plot. First, we're going to define some helper functions to load the SNOTEL data:
# load csv containing metadata for SNOTEL sites in a given state (e.g., 'colorado')
def load_site(state):
    # define the path to the file
    key = f"SNOTEL/snotel_{state}.csv"

    # load the csv into a pandas DataFrame
    df = pd.read_csv(s3.open(f's3://{bucket_name}/{key}', mode='r'))

    return df


# load SNOTEL data for a specific site
def load_snotel_txt(state, var):
    # define the path to the file
    key = f"SNOTEL/snotel_{state}{var}_20162020.txt"

    # determine how many lines to skip in the file (they start with #)
    fh = s3.open(f"{bucket_name}/{key}")
    lines = fh.readlines()
    skips = sum(1 for ln in lines if ln.decode('ascii').startswith('#'))

    # load the data into a pandas DataFrame
    df = pd.read_csv(s3.open(f"s3://{bucket_name}/{key}"), skiprows=skips)

    # convert the Date column from strings to datetime objects
    df['Date'] = pd.to_datetime(df['Date'])

    return df
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
For the purposes of this tutorial let's load the SNOTEL data for sites in Colorado. We'll pick one site to plot in a few cells.
# load SNOTEL snow depth for Colorado into a dictionary
snotel_depth = {'CO': load_snotel_txt('CO', 'depth')}
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
We'll need another helper function to load the depth data:
# get snotel depth
def get_depth(state, site, start_date, end_date):
    # grab the depth for the given state (e.g., CO)
    df = snotel_depth[state]

    # define a date range mask
    mask = (df['Date'] >= start_date) & (df['Date'] <= end_date)

    # use mask to subset between time range
    df = df.loc[mask]

    # extract timeseries for the given site
    return pd.concat([df.Date, df.filter(like=site)], axis=1).set_index('Date')
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Load the site metadata for Colorado:
co_sites = load_site('colorado')

# peek at the first 5 rows
co_sites.head()
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
The point we've been using so far in the tutorial actually corresponds to the coordinates for the Bear Lake SNOTEL site! Let's extract the site data for that point:
# get the depth data by passing the site name to the get_depth() function
bear_lake_snd_df = get_depth('CO', 'Bear Lake (322)', start_date, end_date)

# convert from cm to m
bear_lake_snd_df = bear_lake_snd_df / 100
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
Now we're ready to plot:
# create SNOTEL plot
bear_lake_plot = bear_lake_snd_df.hvplot(label='SNOTEL')

# combine the SNOTEL plot with the LIS vs SNODAS plot
(bear_lake_plot * lis_vs_snodas_snd_plot).opts(title=f'Snow Depth @ Lon: {ts_lon}, Lat: {ts_lat}', legend_position='right')
_____no_output_____
MIT
book/tutorials/lis/1_exploring_lis_output.ipynb
zachghiaccio/website
``` python
!python train.py --img 160 --batch 4 --epochs 5 --data ./data/train_imgs_sliced_160.yaml --cfg ./models/yolov5s.yaml --weights ''
```

``` python
!python train.py --img 640 --batch 10 --epochs 100 --data ./data/train_imgs_sliced_640_val.yaml --cfg ./models/yolov5_tile.yaml --weights ''
```

``` python
%run train.py --img 320 --batch 5 --epochs 5 --data ./data/train_imgs_sliced_320.yaml --cfg ./models/yolov5s.yaml --weights ''
```

``` python
%run train.py --img 320 --batch 5 --epochs 60 --data ./data/train_imgs_sliced_320_val.yaml --cfg ./models/yolov5st.yaml --weights ./runs/train/exp40/weights/last.pt
```
%run train.py --img 320 --batch 5 --epochs 60 --data ./data/train_imgs_sliced_320_val.yaml --weights ./runs/train/exp40/weights/last.pt

%run train.py --img 320 --batch 5 --epochs 60 --data ./data/train_imgs_sliced_320_val.yaml --cfg ./models/yolov5s_se.yaml --weights ''

from utils.plots import plot_results
plot_results(save_dir='./runs/train/exp21')
_____no_output_____
MIT
code/03_run_yolo_train.ipynb
ccjaread/tianchi_tile_defect_detection
bert4sentiment - an easy implementation of BERT with Hugging Face for sentiment analysis

Let's build a Sentiment Classifier using the amazing Transformers library by Hugging Face!

Load the bert4sentiment environment. From the project folder, type

`conda env create -f configuration.yml`

This will create a conda _bert4sentiment_ environment. Then type

`conda activate bert4sentiment`

and run the notebook:

`jupyter notebook`
%reload_ext watermark
%watermark -v -p numpy,pandas,torch,transformers

#@title Setup & Config
import transformers
from transformers import BertModel, BertTokenizer, AdamW, get_linear_schedule_with_warmup
import torch

import numpy as np
import pandas as pd
import seaborn as sns
from pylab import rcParams
import matplotlib.pyplot as plt
from matplotlib import rc
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from collections import defaultdict
from textwrap import wrap

from torch import nn, optim
from torch.utils.data import Dataset, DataLoader
import torch.nn.functional as F

%matplotlib inline
%config InlineBackend.figure_format='retina'

sns.set(style='whitegrid', palette='muted', font_scale=1.2)

HAPPY_COLORS_PALETTE = ["#01BEFE", "#FFDD00", "#FF7D00", "#FF006D", "#ADFF02", "#8F00FF"]
sns.set_palette(sns.color_palette(HAPPY_COLORS_PALETTE))

rcParams['figure.figsize'] = 12, 8

RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
torch.manual_seed(RANDOM_SEED)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Data Exploration

We'll load the Google Play app reviews dataset that we've put together in the previous part:
df = pd.read_csv("reviews.csv")
df.head()

df.shape
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
We have about 16k examples. Let's check for missing values:
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 15746 entries, 0 to 15745
Data columns (total 11 columns):
 #   Column                Non-Null Count  Dtype
---  ------                --------------  -----
 0   userName              15746 non-null  object
 1   userImage             15746 non-null  object
 2   content               15746 non-null  object
 3   score                 15746 non-null  int64
 4   thumbsUpCount         15746 non-null  int64
 5   reviewCreatedVersion  13533 non-null  object
 6   at                    15746 non-null  object
 7   replyContent          7367 non-null   object
 8   repliedAt             7367 non-null   object
 9   sortOrder             15746 non-null  object
 10  appId                 15746 non-null  object
dtypes: int64(2), object(9)
memory usage: 1.3+ MB
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Great, no missing values in the score and review texts! Do we have class imbalance?
sns.countplot(x=df.score)
plt.xlabel('review score');
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
That's hugely imbalanced, but it's okay. We're going to convert the dataset into negative, neutral and positive sentiment:
def to_sentiment(rating):
    rating = int(rating)
    if rating <= 2:
        return 0
    elif rating == 3:
        return 1
    else:
        return 2


df['sentiment'] = df.score.apply(to_sentiment)

class_names = ['negative', 'neutral', 'positive']

ax = sns.countplot(x=df.sentiment)
plt.xlabel('review sentiment')
ax.set_xticklabels(class_names);
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
The balance was (mostly) restored.

Data Preprocessing

We have to prepare the data for the Transformers, which means we need to:

- Add special tokens to separate sentences and do classification
- Pass sequences of constant length (introduce padding)
- Create an array of 0s (pad token) and 1s (real token) called the *attention mask*
PRE_TRAINED_MODEL_NAME = 'bert-base-cased'
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Let's load a pre-trained [BertTokenizer](https://huggingface.co/transformers/model_doc/bert.html#berttokenizer):
tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME)
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
We'll use this text to understand the tokenization process:
sample_txt = 'When was I last outside? I am stuck at home for 2 weeks.'
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Some basic operations can convert the text to tokens and tokens to unique integers (ids):
tokens = tokenizer.tokenize(sample_txt)
token_ids = tokenizer.convert_tokens_to_ids(tokens)

print(f' Sentence: {sample_txt}')
print(f'   Tokens: {tokens}')
print(f'Token IDs: {token_ids}')
Sentence: When was I last outside? I am stuck at home for 2 weeks. Tokens: ['When', 'was', 'I', 'last', 'outside', '?', 'I', 'am', 'stuck', 'at', 'home', 'for', '2', 'weeks', '.'] Token IDs: [1332, 1108, 146, 1314, 1796, 136, 146, 1821, 5342, 1120, 1313, 1111, 123, 2277, 119]
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Special Tokens

`[SEP]` - marker for the end of a sentence
tokenizer.sep_token, tokenizer.sep_token_id
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
`[CLS]` - we must add this token to the start of each sentence, so BERT knows we're doing classification
tokenizer.cls_token, tokenizer.cls_token_id
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
There is also a special token for padding:
tokenizer.pad_token, tokenizer.pad_token_id
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
BERT understands tokens that were in the training set. Everything else can be encoded using the `[UNK]` (unknown) token:
tokenizer.unk_token, tokenizer.unk_token_id
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
All of that work can be done using the [`encode_plus()`](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.encode_plus) method:
encoding = tokenizer.encode_plus(
    sample_txt,
    max_length=32,
    add_special_tokens=True,  # Add '[CLS]' and '[SEP]'
    return_token_type_ids=False,
    pad_to_max_length=True,
    return_attention_mask=True,
    return_tensors='pt',  # Return PyTorch tensors
    truncation=True,
)

encoding.keys()
/home/test/anaconda3/envs/bert4sentiment/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2126: FutureWarning: The `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert). warnings.warn(
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
The token ids are now stored in a Tensor and padded to a length of 32:
print(len(encoding['input_ids'][0]))
encoding['input_ids'][0]
32
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
The attention mask has the same length:
print(len(encoding['attention_mask'][0]))
encoding['attention_mask']
32
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
We can invert the tokenization to have a look at the special tokens:
tokenizer.convert_ids_to_tokens(encoding['input_ids'][0])
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Choosing Sequence Length

BERT works with fixed-length sequences. We'll use a simple strategy to choose the max length. Let's store the token length of each review:
token_lens = []

for txt in df.content:
    tokens = tokenizer.encode(txt, max_length=512)
    token_lens.append(len(tokens))
Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
and plot the distribution:
sns.histplot(x=token_lens)
plt.xlim([0, 256]);
plt.xlabel('Token count');
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Most of the reviews seem to contain less than 128 tokens, but we'll be on the safe side and choose a maximum length of 160.
MAX_LEN = 160
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
We have all building blocks required to create a PyTorch dataset. Let's do it:
class GPReviewDataset(Dataset):

    def __init__(self, reviews, targets, tokenizer, max_len):
        self.reviews = reviews
        self.targets = targets
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        return len(self.reviews)

    def __getitem__(self, item):
        review = str(self.reviews[item])
        target = self.targets[item]

        encoding = self.tokenizer.encode_plus(
            review,
            add_special_tokens=True,
            max_length=self.max_len,
            return_token_type_ids=False,
            pad_to_max_length=True,
            return_attention_mask=True,
            return_tensors='pt',
        )

        return {
            'review_text': review,
            'input_ids': encoding['input_ids'].flatten(),
            'attention_mask': encoding['attention_mask'].flatten(),
            'targets': torch.tensor(target, dtype=torch.long)
        }
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
The tokenizer is doing most of the heavy lifting for us. We also return the review texts, so it'll be easier to evaluate the predictions from our model. Let's split the data:
df_train, df_test = train_test_split(df, test_size=0.1, random_state=RANDOM_SEED)
df_val, df_test = train_test_split(df_test, test_size=0.5, random_state=RANDOM_SEED)

df_train.shape, df_val.shape, df_test.shape
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
We also need to create a couple of data loaders. Here's a helper function to do it:
def create_data_loader(df, tokenizer, max_len, batch_size):
    ds = GPReviewDataset(
        reviews=df.content.to_numpy(),
        targets=df.sentiment.to_numpy(),
        tokenizer=tokenizer,
        max_len=max_len
    )

    return DataLoader(
        ds,
        batch_size=batch_size,
        num_workers=4
    )


BATCH_SIZE = 16

train_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE)
val_data_loader = create_data_loader(df_val, tokenizer, MAX_LEN, BATCH_SIZE)
test_data_loader = create_data_loader(df_test, tokenizer, MAX_LEN, BATCH_SIZE)
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Let's have a look at an example batch from our training data loader:
data = next(iter(train_data_loader))
data.keys()

print(data['input_ids'].shape)
print(data['attention_mask'].shape)
print(data['targets'].shape)
torch.Size([16, 160]) torch.Size([16, 160]) torch.Size([16])
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Sentiment Classification with BERT and Hugging Face

We'll use the basic [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel) and build our sentiment classifier on top of it. Let's load the model:
bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME, return_dict = False)
Some weights of the model checkpoint at bert-base-cased were not used when initializing BertModel: ['cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight'] - This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
And try to use it on the encoding of our sample text:
last_hidden_state, pooled_output = bert_model(
    input_ids=encoding['input_ids'],
    attention_mask=encoding['attention_mask'],
    return_dict=False
)
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
The `last_hidden_state` is a sequence of hidden states of the last layer of the model. Obtaining the `pooled_output` is done by applying the [BertPooler](https://github.com/huggingface/transformers/blob/edf0582c0be87b60f94f41c659ea779876efc7be/src/transformers/modeling_bert.py#L426) on `last_hidden_state`:
last_hidden_state.shape
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
We have the hidden state for each of our 32 tokens (the length of our example sequence). But why 768? This is the number of hidden units in the feedforward-networks. We can verify that by checking the config:
bert_model.config.hidden_size
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
You can think of the `pooled_output` as a summary of the content, according to BERT. Albeit, you might try and do better. Let's look at the shape of the output:
pooled_output.shape
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
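As a quick check of the earlier claim that `pooled_output` comes from the pooler (a dense layer followed by tanh applied to the `[CLS]` hidden state): the Hugging Face `BertModel` exposes this module as `bert_model.pooler`, so we can recompute the pooled output ourselves. This is an illustrative aside rather than part of the original notebook.

```python
# Recompute the pooled output from the last hidden state using the model's own pooler.
recomputed_pooled_output = bert_model.pooler(last_hidden_state)

# The pooler is just Linear + Tanh on the [CLS] token, so the two should match.
print(torch.allclose(recomputed_pooled_output, pooled_output))  # expected: True
```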
We can use all of this knowledge to create a classifier that uses the BERT model:
class SentimentClassifier(nn.Module):

    def __init__(self, n_classes):
        super(SentimentClassifier, self).__init__()
        self.bert = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)
        self.drop = nn.Dropout(p=0.3)
        self.out = nn.Linear(self.bert.config.hidden_size, n_classes)

    def forward(self, input_ids, attention_mask):
        bertOutput = self.bert(
            input_ids=input_ids,
            attention_mask=attention_mask
        )
        output = self.drop(bertOutput['pooler_output'])
        return self.out(output)
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Our classifier delegates most of the heavy lifting to the BertModel. We use a dropout layer for some regularization and a fully-connected layer for our output. Note that we're returning the raw output of the last layer since that is required for the cross-entropy loss function in PyTorch to work.This should work like any other PyTorch model. Let's create an instance and move it to the GPU:
model = SentimentClassifier(len(class_names))
model = model.to(device)
Some weights of the model checkpoint at bert-base-cased were not used when initializing BertModel: ['cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight'] - This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
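A brief aside on the point above about returning raw logits: PyTorch's `nn.CrossEntropyLoss` applies log-softmax internally, so the classifier must not apply softmax itself. A minimal, self-contained illustration (not part of the original notebook):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[2.0, -1.0, 0.5]])  # raw (unnormalised) scores for one example
target = torch.tensor([0])                 # true class index

# CrossEntropyLoss on raw logits...
loss = nn.CrossEntropyLoss()(logits, target)

# ...equals the negative log-softmax probability of the true class.
manual = -F.log_softmax(logits, dim=1)[0, target].item()
print(loss.item(), manual)  # the two values match
```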
We'll move the example batch of our training data to the GPU:
input_ids = data['input_ids'].to(device)
attention_mask = data['attention_mask'].to(device)

print(input_ids.shape)        # batch size x seq length
print(attention_mask.shape)   # batch size x seq length
torch.Size([16, 160]) torch.Size([16, 160])
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
To get the predicted probabilities from our trained model, we'll apply the softmax function to the outputs:
F.softmax(model(input_ids, attention_mask), dim=1)
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Training

We'll use the [AdamW](https://huggingface.co/transformers/main_classes/optimizer_schedules.html#adamw) optimizer provided by Hugging Face, which corrects weight decay.
EPOCHS = 100

optimizer = AdamW(model.parameters(), lr=2e-5, correct_bias=False)
total_steps = len(train_data_loader) * EPOCHS

scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,
    num_training_steps=total_steps
)

loss_fn = nn.CrossEntropyLoss().to(device)
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
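A quick aside on the loss, tying back to the earlier remark about raw logits: `nn.CrossEntropyLoss` expects unnormalized logits and applies log-softmax internally, which is why `SentimentClassifier` does not apply softmax itself. A self-contained sketch with made-up numbers (not part of the original notebook):

# Illustration only: cross-entropy on raw logits equals the negative
# log-probability of the true class after log-softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[2.0, -1.0, 0.5]])  # made-up logits for one example, 3 classes
target = torch.tensor([0])                 # true class index

ce = nn.CrossEntropyLoss()(logits, target)
manual = -F.log_softmax(logits, dim=1)[0, 0]  # negative log-probability of class 0
print(ce.item(), manual.item())               # the two values match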
How do we come up with all the hyperparameters? The BERT authors have some recommendations for fine-tuning:

- Batch size: 16, 32
- Learning rate (Adam): 5e-5, 3e-5, 2e-5
- Number of epochs: 2, 3, 4

We're going to ignore the number-of-epochs recommendation but stick with the rest. Note that increasing the batch size reduces the training time significantly, but tends to give you lower accuracy. Let's continue by writing a helper function that trains the model for one epoch:
def train_epoch(
  model,
  data_loader,
  loss_fn,
  optimizer,
  device,
  scheduler,
  n_examples
):
  model = model.train()

  losses = []
  correct_predictions = 0

  for d in data_loader:
    input_ids = d["input_ids"].to(device)
    attention_mask = d["attention_mask"].to(device)
    targets = d["targets"].to(device)

    outputs = model(
      input_ids=input_ids,
      attention_mask=attention_mask
    )

    _, preds = torch.max(outputs, dim=1)
    loss = loss_fn(outputs, targets)

    correct_predictions += torch.sum(preds == targets)
    losses.append(loss.item())

    loss.backward()
    # clip gradients to avoid exploding gradients
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    scheduler.step()  # the scheduler steps once per batch
    optimizer.zero_grad()

  return correct_predictions.double() / n_examples, np.mean(losses)
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Training the model should look familiar, except for two things. The scheduler gets called every time a batch is fed to the model, and we avoid exploding gradients by clipping the model's gradients with [clip_grad_norm_](https://pytorch.org/docs/stable/nn.html#clip-grad-norm). Let's write another helper that evaluates the model on a given data loader:
def eval_model(model, data_loader, loss_fn, device, n_examples):
  model = model.eval()

  losses = []
  correct_predictions = 0

  with torch.no_grad():
    for d in data_loader:
      input_ids = d["input_ids"].to(device)
      attention_mask = d["attention_mask"].to(device)
      targets = d["targets"].to(device)

      outputs = model(
        input_ids=input_ids,
        attention_mask=attention_mask
      )

      _, preds = torch.max(outputs, dim=1)
      loss = loss_fn(outputs, targets)

      correct_predictions += torch.sum(preds == targets)
      losses.append(loss.item())

  return correct_predictions.double() / n_examples, np.mean(losses)
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Using those two, we can write our training loop. We'll also store the training history:
%%time

history = defaultdict(list)
best_accuracy = 0

for epoch in range(EPOCHS):

  print(f'Epoch {epoch + 1}/{EPOCHS}')
  print('-' * 10)

  train_acc, train_loss = train_epoch(
    model,
    train_data_loader,
    loss_fn,
    optimizer,
    device,
    scheduler,
    len(df_train)
  )

  print(f'Train loss {train_loss} accuracy {train_acc}')

  val_acc, val_loss = eval_model(
    model,
    val_data_loader,
    loss_fn,
    device,
    len(df_val)
  )

  print(f'Val loss {val_loss} accuracy {val_acc}')
  print()

  history['train_acc'].append(train_acc)
  history['train_loss'].append(train_loss)
  history['val_acc'].append(val_acc)
  history['val_loss'].append(val_loss)

  if val_acc > best_accuracy:
    torch.save(model.state_dict(), 'best_model_state.bin')
    best_accuracy = val_acc
Epoch 1/100 ----------
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Note that we're storing the state of the best model, indicated by the highest validation accuracy. Whoo, this took some time! We can look at the training vs validation accuracy:
plt.plot(history['train_acc'], label='train accuracy')
plt.plot(history['val_acc'], label='validation accuracy')

plt.title('Training history')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend()
plt.ylim([0, 1]);
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
The training accuracy starts to approach 100% after 10 epochs or so. You might try to fine-tune the hyperparameters a bit more, but this will be good enough for us. If you'd rather skip training, the commented-out cell below shows how to download a saved checkpoint and load it into a fresh classifier.
# !gdown --id 1V8itWtowCYnb2Bc9KlK9SxGff9WwmogA

#model = SentimentClassifier(len(class_names))
#model.load_state_dict(torch.load('best_model_state.bin'))
#model = model.to(device)
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Evaluation

So how good is our model at predicting sentiment? Let's start by calculating the accuracy on the test data:
test_acc, _ = eval_model(
  model,
  test_data_loader,
  loss_fn,
  device,
  len(df_test)
)

test_acc.item()
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
The accuracy is about 1% lower on the test set, so our model seems to generalize well. We'll define a helper function to get the predictions from our model:
def get_predictions(model, data_loader):
  model = model.eval()

  review_texts = []
  predictions = []
  prediction_probs = []
  real_values = []

  with torch.no_grad():
    for d in data_loader:
      texts = d["review_text"]
      input_ids = d["input_ids"].to(device)
      attention_mask = d["attention_mask"].to(device)
      targets = d["targets"].to(device)

      outputs = model(
        input_ids=input_ids,
        attention_mask=attention_mask
      )

      _, preds = torch.max(outputs, dim=1)
      probs = F.softmax(outputs, dim=1)

      review_texts.extend(texts)
      predictions.extend(preds)
      prediction_probs.extend(probs)
      real_values.extend(targets)

  predictions = torch.stack(predictions).cpu()
  prediction_probs = torch.stack(prediction_probs).cpu()
  real_values = torch.stack(real_values).cpu()

  return review_texts, predictions, prediction_probs, real_values
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
This is similar to the evaluation function, except that we're storing the text of the reviews and the predicted probabilities (by applying the softmax on the model outputs):
y_review_texts, y_pred, y_pred_probs, y_test = get_predictions(
  model,
  test_data_loader
)
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Let's have a look at the classification report:
print(classification_report(y_test, y_pred, target_names=class_names))
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
It looks like neutral (3-star) reviews are really hard to classify. And I can tell you from experience, after reading many reviews, that they genuinely are ambiguous. We'll continue with the confusion matrix:
def show_confusion_matrix(confusion_matrix):
  hmap = sns.heatmap(confusion_matrix, annot=True, fmt="d", cmap="Blues")
  hmap.yaxis.set_ticklabels(hmap.yaxis.get_ticklabels(), rotation=0, ha='right')
  hmap.xaxis.set_ticklabels(hmap.xaxis.get_ticklabels(), rotation=30, ha='right')
  plt.ylabel('True sentiment')
  plt.xlabel('Predicted sentiment');

cm = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(cm, index=class_names, columns=class_names)
show_confusion_matrix(df_cm)
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
This confirms that our model has difficulty classifying neutral reviews: it mistakes them for negative and positive at roughly equal frequency. That's a good overview of the model's performance, but let's have a look at an example from our test data:
idx = 2

review_text = y_review_texts[idx]
true_sentiment = y_test[idx]

print("\n".join(wrap(review_text)))
print()
print(f'True sentiment: {class_names[true_sentiment]}')
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Now we can look at the model's confidence for each sentiment class:
pred_df = pd.DataFrame({
  'class_names': class_names,
  'values': y_pred_probs[idx].tolist()  # converting tensor to numbers
})

sns.barplot(x='values', y='class_names', data=pred_df, orient='h')
plt.ylabel('sentiment')
plt.xlabel('probability')
plt.xlim([0, 1]);
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
Predicting on Raw Text

Let's use our model to predict the sentiment of some raw text:
review_text = "I love completing my todos! Best app ever!!!"
_____no_output_____
MIT
bert4sentiment_pytorch.ipynb
nluninja/bert4sentiment_pytorch
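The cells that typically follow encode this text and run it through the fine-tuned model. Here is a hedged sketch of those steps, assuming the `tokenizer` object and a `MAX_LEN` constant were defined earlier in the notebook (the exact names and tokenizer arguments are assumptions, not taken from the original cells):

# Sketch: encode the raw review and classify it with the trained model.
# Assumes tokenizer, MAX_LEN, model, device and class_names exist from earlier cells.
encoded_review = tokenizer.encode_plus(
  review_text,
  max_length=MAX_LEN,
  add_special_tokens=True,
  return_token_type_ids=False,
  padding='max_length',
  truncation=True,
  return_attention_mask=True,
  return_tensors='pt',
)

input_ids = encoded_review['input_ids'].to(device)
attention_mask = encoded_review['attention_mask'].to(device)

with torch.no_grad():
  output = model(input_ids, attention_mask)
  _, prediction = torch.max(output, dim=1)

print(f'Review text: {review_text}')
print(f'Sentiment  : {class_names[prediction]}')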