Dataset columns: markdown, code, output, license, path, repo_name
# Lab 2: Data Structures ~ Advanced Applications

Wow, look at you! Congratulations on making it to the second part of the lab!

These assignments are *absolutely not required*! Even if you're here, you shouldn't try to solve all of the problems in this file. Our suggestion is that you should skim through these problems to find the ones that are most interesting to you.

## Pretty Pascal

This is a variation on the Pascal question from Part 1. Given a number `n`, print out the first `n` rows of Pascal's triangle, *centering* each line. You should use the `generate_pascal_row` function you wrote previously (copy it over from the other notebook). The Pascal's triangle with 1 row just contains the number `1`.

To center a string in Python, you can use string format specifiers. `'{:^10}'.format(var)` will produce a string of length 10 or `len(var)` (whichever is longer), with `str(var)` centered.

```python
'{:^10}'.format('CS41') # => '   CS41   '
```

You can even specify an optional `fillchar` to fill with characters other than spaces!

For example, for `n = 10`:

```python
print_pascal_triangle(10)
         1
        1 1
       1 2 1
      1 3 3 1
     1 4 6 4 1
    1 5 10 10 5 1
   1 6 15 20 15 6 1
  1 7 21 35 35 21 7 1
 1 8 28 56 70 56 28 8 1
1 9 36 84 126 126 84 36 9 1
```
def generate_pascal_row(row):
    """Generate the next row of Pascal's triangle."""
    if not row:
        return [1]
    row1, row2 = row + [0], [0] + row
    return list(map(sum, zip(row1, row2)))

def print_pascal_triangle(n):
    """Print the first n rows of Pascal's triangle."""
    total_spaces = n + n - 1
    prev_row = []
    for i in range(1, n + 1):
        prev_row = generate_pascal_row(prev_row)
        print_row = ' '.join(map(str, prev_row))
        space_either_side = (total_spaces - (i + i - 1)) // 2
        print_row = ' ' * space_either_side + print_row + ' ' * space_either_side
        print(print_row)

print_pascal_triangle(3)
print_pascal_triangle(10)
  1
 1 1
1 2 1
         1
        1 1
       1 2 1
      1 3 3 1
     1 4 6 4 1
    1 5 10 10 5 1
   1 6 15 20 15 6 1
  1 7 21 35 35 21 7 1
 1 8 28 56 70 56 28 8 1
1 9 36 84 126 126 84 36 9 1
BSD-2-Clause-FreeBSD
notebooks/lab-2/data-structures-part-2.ipynb
samuelcheang0419/python-labs
## Special Phrases

For the next few problems, just like cyclone phrases, we'll describe a criterion that makes a word or phrase special.

Let's load up the dictionary file again. Remember, if you are using macOS or Linux, you should have a dictionary file available at `/usr/share/dict/words`, and we've mirrored the file at `https://stanfordpython.com/res/misc/words`, so you can download the dictionary from there.

Copy (or rewrite) `load_english` to load the words from this file.
# If you downloaded words from the course website,
# change me to the path to the downloaded file.
DICTIONARY_FILE = '/usr/share/dict/words'

def load_english():
    """Load and return a collection of english words from a file."""
    pass

english = load_english()
print(len(english))
_____no_output_____
BSD-2-Clause-FreeBSD
notebooks/lab-2/data-structures-part-2.ipynb
samuelcheang0419/python-labs
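A minimal sketch of one way the `load_english` stub above could be filled in, assuming the dictionary file has one word per line (this is just one possible solution, not the official one):

```python
def load_english(path=DICTIONARY_FILE):
    """Load and return a set of lowercase English words from a file."""
    with open(path) as f:
        # A set gives O(1) membership checks for the word-lookup problems below.
        return {line.strip().lower() for line in f if line.strip()}
```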
## Triad Phrases

Triad words are English words for which the two smaller strings you make by extracting alternating letters both form valid words.

For example:

![Triad Phrases](http://i.imgur.com/jGEXJWi.png)

Write a function to determine whether an entire phrase passed into a function is made of triad words. You can assume that all words are made of only alphabetic characters, and are separated by whitespace. We will consider the empty string to be an invalid English word.

```python
is_triad_phrase("learned theorem") => True
is_triad_phrase("studied theories") => False
is_triad_phrase("wooded agrarians") => True
is_triad_phrase("forrested farmers") => False
is_triad_phrase("schooled oriole") => True
is_triad_phrase("educated small bird") => False
is_triad_phrase("a") => False
is_triad_phrase("") => False
```

Generate a list of all triad words. How many are there? We found 2770 distinct triad words (case-insensitive).
def is_triad_word(word, english):
    """Return whether a word is a triad word."""
    pass

def is_triad_phrase(phrase, english):
    """Return whether a phrase is composed of only triad words."""
    pass
_____no_output_____
BSD-2-Clause-FreeBSD
notebooks/lab-2/data-structures-part-2.ipynb
samuelcheang0419/python-labs
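One possible way to fill in these stubs, assuming `english` is a set of lowercase words as sketched earlier (a sketch, not the official solution):

```python
def is_triad_word(word, english):
    """Return whether a word is a triad word."""
    if not word:
        return False
    evens, odds = word[::2], word[1::2]  # alternating letters
    return evens in english and odds in english

def is_triad_phrase(phrase, english):
    """Return whether a phrase is composed of only triad words."""
    words = phrase.split()
    return bool(words) and all(is_triad_word(w.lower(), english) for w in words)

# Counting distinct triad words would then look something like:
# triad_words = {w for w in english if is_triad_word(w, english)}
# print(len(triad_words))
```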
## Surpassing Phrases (challenge)

Surpassing words are English words for which the gap between each adjacent pair of letters strictly increases. These gaps are computed without "wrapping around" from Z to A.

For example:

![Surpassing Phrases](http://i.imgur.com/XKiCnUc.png)

Write a function to determine whether an entire phrase passed into a function is made of surpassing words. You can assume that all words are made of only alphabetic characters, and are separated by whitespace. We will consider the empty string and a 1-character string to be valid surpassing phrases.

```python
is_surpassing_phrase("superb subway") => True
is_surpassing_phrase("excellent train") => False
is_surpassing_phrase("porky hogs") => True
is_surpassing_phrase("plump pigs") => False
is_surpassing_phrase("turnip fields") => True
is_surpassing_phrase("root vegetable lands") => False
is_surpassing_phrase("a") => True
is_surpassing_phrase("") => True
```

We've provided a `character_gap` function that returns the gap between two characters. To understand how it works, you should first learn about the Python functions `ord` (one-character string to integer ordinal) and `chr` (integer ordinal to one-character string). For example:

```python
ord('a') => 97
chr(97) => 'a'
```

So, in order to find the gap between `G` and `E`, we compute `abs(ord('G') - ord('E'))`, where `abs` returns the absolute value of its argument.

Generate a list of all surpassing words. How many are there? We found 1931 distinct surpassing words.
def character_gap(ch1, ch2):
    """Return the absolute gap between two characters."""
    return abs(ord(ch1) - ord(ch2))

def is_surpassing_word(word):
    """Return whether a word is surpassing."""
    pass

def is_surpassing_phrase(phrase):
    """Return whether a phrase is composed of only surpassing words."""
    pass
_____no_output_____
BSD-2-Clause-FreeBSD
notebooks/lab-2/data-structures-part-2.ipynb
samuelcheang0419/python-labs
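A minimal sketch of one possible solution, using the provided `character_gap` and treating words of length 0, 1, or 2 as trivially surpassing (a sketch, not the official answer):

```python
def is_surpassing_word(word):
    """Return whether the gaps between adjacent letters strictly increase."""
    gaps = [character_gap(a, b) for a, b in zip(word, word[1:])]
    return all(g1 < g2 for g1, g2 in zip(gaps, gaps[1:]))

def is_surpassing_phrase(phrase):
    """Return whether every word in the phrase is surpassing (empty phrase => True)."""
    return all(is_surpassing_word(w) for w in phrase.split())
```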
## Triangle Words

The nth term of the sequence of triangle numbers is given by $1 + 2 + ... + n = \frac{n(n+1)}{2}$. For example, the first ten triangle numbers are: `1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...`

By converting each letter in a word to a number corresponding to its alphabetical position (`A=1`, `B=2`, etc.) and adding these values we form a word value. For example, the word value for SKY is `19 + 11 + 25 = 55` and 55 is a triangle number. If the word value is a triangle number then we shall call the word a triangle word.

Generate a list of all triangle words. How many are there? As a sanity check, we found 16303 distinct triangle words.

*Hint: you can use `ord(ch)` to get the integer ASCII value of a character. You can also use a dictionary to accomplish this!*
def is_triangle_word(word):
    """Return whether a word is a triangle word."""
    pass

print(is_triangle_word("SKY")) # => True
_____no_output_____
BSD-2-Clause-FreeBSD
notebooks/lab-2/data-structures-part-2.ipynb
samuelcheang0419/python-labs
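One possible implementation sketch, assuming alphabetic input: a value $v$ is triangular exactly when $8v + 1$ is a perfect square, since $v = n(n+1)/2$ implies $8v + 1 = (2n+1)^2$.

```python
def is_triangle_word(word):
    """Return whether a word's letter-value sum is a triangle number."""
    value = sum(ord(ch) - ord('A') + 1 for ch in word.upper())
    root = int((8 * value + 1) ** 0.5)
    return root * root == 8 * value + 1  # perfect-square test
```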
## Polygon Collision

Given two polygons in the form of lists of 2-tuples, determine whether the two polygons intersect.

Formally, a polygon is represented by a list of (x, y) tuples, where each (x, y) tuple is a vertex of the polygon. Edges are assumed to be between adjacent vertices in the list, and the last vertex is connected to the first. For example, the unit square would be represented by

```
square = [(0,0), (0,1), (1,1), (1,0)]
```

You can assume that the polygon described by the provided list of tuples is not self-intersecting, but do not assume that it is convex.

**Note: this is a *hard* problem. Quite hard.**
# Compare each edge of poly1 with each edge of poly2.
# How do two segments intersect? Define line1 by (x1a, y1a) and (x1b, y1b), and line2 by (x2a, y2a) and (x2b, y2b).
# They intersect when the endpoints of each segment lie on opposite sides of the line through the other segment.
def polygon_collision(poly1, poly2):
    pass

unit_square = [(0,0), (0,1), (1,1), (1,0)]
triangle = [(0,0), (0.5,2), (1,0)]

print(polygon_collision(unit_square, triangle)) # => True
_____no_output_____
BSD-2-Clause-FreeBSD
notebooks/lab-2/data-structures-part-2.ipynb
samuelcheang0419/python-labs
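A sketch of one way to approach this (not the official solution; the helper names `_orientation`, `_segments_intersect`, and `_point_in_polygon` are mine): test every pair of edges for a crossing, and fall back to a point-in-polygon test to catch the case where one polygon is entirely inside the other. Degenerate "just touching" cases are ignored here.

```python
def _orientation(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    val = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (val > 0) - (val < 0)

def _segments_intersect(p1, p2, p3, p4):
    """Whether segment p1-p2 crosses segment p3-p4 (general position)."""
    d1, d2 = _orientation(p3, p4, p1), _orientation(p3, p4, p2)
    d3, d4 = _orientation(p1, p2, p3), _orientation(p1, p2, p4)
    return d1 != d2 and d3 != d4

def _point_in_polygon(point, poly):
    """Ray-casting test: is point strictly inside poly?"""
    x, y = point
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_collision(poly1, poly2):
    """Return whether two polygons intersect (sketch)."""
    edges1 = [(poly1[i], poly1[(i + 1) % len(poly1)]) for i in range(len(poly1))]
    edges2 = [(poly2[i], poly2[(i + 1) % len(poly2)]) for i in range(len(poly2))]
    for a1, a2 in edges1:
        for b1, b2 in edges2:
            if _segments_intersect(a1, a2, b1, b2):
                return True
    # No edges cross: one polygon may still contain the other entirely.
    return _point_in_polygon(poly1[0], poly2) or _point_in_polygon(poly2[0], poly1)
```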
## Comprehensions

We haven't talked about data comprehensions yet, but if you're interested, you can read about them [here](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) and then tackle the problems below.

### Read

Predict the output of each of the following list comprehensions. After you have written down your hypothesis, run the code cell to see if you were correct. If you were incorrect, discuss with a partner why Python returns what it does.

```Python
[x for x in [1, 2, 3, 4]]
[n - 2 for n in range(10)]
[k % 10 for k in range(41) if k % 3 == 0]
[s.lower() for s in ['PythOn', 'iS', 'cOoL'] if s[0] < s[-1]]

# Something is fishy here. Can you spot it?
arr = [[3,2,1], ['a','b','c'], [('do',), ['re'], 'mi']]
print([el.append(el[0] * 4) for el in arr])  # What is printed?
print(arr)  # What is the content of `arr` at this point?

[letter for letter in "pYthON" if letter.isupper()]
{len(w) for w in ["its", "the", "remix", "to", "ignition"]}
```
# Predict the output of the following comprehensions. Does the output match what you expect?
print([x for x in [1, 2, 3, 4]])  # [1, 2, 3, 4]
print([n - 2 for n in range(10)])  # -2, -1 ... 7
print([k % 10 for k in range(41) if k % 3 == 0])  # 0, 3, ... , 9

# 'P' < 'n'
print([s.lower() for s in ['PythOn', 'iS', 'cOoL'] if s[0] < s[-1]])  # ['python']

# Something is fishy here. Can you spot it?
arr = [[3,2,1], ['a','b','c'], [('do',), ['re'], 'mi']]
print([el.append(el[0] * 4) for el in arr])  # What is printed? None, None, None
print(arr)  # What is the content of `arr` at this point?
# [[3, 2, 1, 12], ['a', 'b', 'c', 'aaaa'], [('do',), ['re'], 'mi', ('do', 'do', 'do', 'do')]]

print([letter for letter in "pYthON" if letter.isupper()])  # ['Y', 'O', 'N']
print({len(w) for w in ["its", "the", "remix", "to", "ignition"]})  # {3, 5, 2, 8}
{8, 2, 3, 5}
BSD-2-Clause-FreeBSD
notebooks/lab-2/data-structures-part-2.ipynb
samuelcheang0419/python-labs
### Write

Write comprehensions to transform the input data structure into the output data structure:

```python
[0, 1, 2, 3] -> [1, 3, 5, 7]  # Double and add one
['apple', 'orange', 'pear'] -> ['A', 'O', 'P']  # Capitalize first letter
['apple', 'orange', 'pear'] -> ['apple', 'pear']  # Contains a 'p'
["TA_parth", "student_poohbear", "TA_michael", "TA_guido", "student_htiek"] -> ["parth", "michael", "guido"]
['apple', 'orange', 'pear'] -> [('apple', 5), ('orange', 6), ('pear', 4)]
['apple', 'orange', 'pear'] -> {'apple': 5, 'orange': 6, 'pear': 4}
```
nums = [0, 1, 2, 3]
fruits = ['apple', 'orange', 'pear']
people = ["TA_parth", "student_poohbear", "TA_michael", "TA_guido", "student_htiek"]

# Add your comprehensions here!
print([2 * n + 1 for n in nums])
print([c[0].upper() for c in fruits])
print([w for w in fruits if 'p' in w])
print('-'*20)
print([name[3:] for name in people if name[:3] == 'TA_'])
print([(fruit, len(fruit)) for fruit in fruits])
print('-'*20)
print({fruit: len(fruit) for fruit in fruits})
[1, 3, 5, 7]
['A', 'O', 'P']
['apple', 'pear']
--------------------
['parth', 'michael', 'guido']
[('apple', 5), ('orange', 6), ('pear', 4)]
--------------------
{'apple': 5, 'orange': 6, 'pear': 4}
BSD-2-Clause-FreeBSD
notebooks/lab-2/data-structures-part-2.ipynb
samuelcheang0419/python-labs
# import the necessary packages
import numpy as np
import imutils
import cv2

def align_images(image, template, maxFeatures=500, keepPercent=0.2, debug=False):
    # convert both the input image and template to grayscale
    imageGray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    templateGray = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)

    # use ORB to detect keypoints and extract (binary) local
    # invariant features
    orb = cv2.ORB_create(maxFeatures)
    (kpsA, descsA) = orb.detectAndCompute(imageGray, None)
    (kpsB, descsB) = orb.detectAndCompute(templateGray, None)

    # match the features
    method = cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING
    matcher = cv2.DescriptorMatcher_create(method)
    matches = matcher.match(descsA, descsB, None)

    # sort the matches by their distance (the smaller the distance,
    # the "more similar" the features are)
    matches = sorted(matches, key=lambda x: x.distance)

    # keep only the top matches
    keep = int(len(matches) * keepPercent)
    matches = matches[:keep]

    # check to see if we should visualize the matched keypoints
    if debug:
        matchedVis = cv2.drawMatches(image, kpsA, template, kpsB, matches, None)
        matchedVis = imutils.resize(matchedVis, width=1000)
        cv2.imshow("Matched Keypoints", matchedVis)
        cv2.waitKey(0)

    # allocate memory for the keypoints (x, y)-coordinates from the
    # top matches -- we'll use these coordinates to compute our
    # homography matrix
    ptsA = np.zeros((len(matches), 2), dtype="float")
    ptsB = np.zeros((len(matches), 2), dtype="float")

    # loop over the top matches
    for (i, m) in enumerate(matches):
        # indicate that the two keypoints in the respective images
        # map to each other
        ptsA[i] = kpsA[m.queryIdx].pt
        ptsB[i] = kpsB[m.trainIdx].pt

    # compute the homography matrix between the two sets of matched
    # points
    (H, mask) = cv2.findHomography(ptsA, ptsB, method=cv2.RANSAC)

    # use the homography matrix to align the images
    (h, w) = template.shape[:2]
    aligned = cv2.warpPerspective(image, H, (w, h))

    # return the aligned image
    return aligned


# import the necessary packages
from pyimagesearch.alignment import align_images
import numpy as np
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to input image that we'll align to template")
ap.add_argument("-t", "--template", required=True,
    help="path to input template image")
args = vars(ap.parse_args())

# load the input image and template from disk
print("[INFO] loading images...")
image = cv2.imread(args["image"])
template = cv2.imread(args["template"])

# align the images
print("[INFO] aligning images...")
aligned = align_images(image, template, debug=True)

# resize both the aligned and template images so we can easily
# visualize them on our screen
aligned = imutils.resize(aligned, width=700)
template = imutils.resize(template, width=700)

# our first output visualization of the image alignment will be a
# side-by-side comparison of the output aligned image and the
# template
stacked = np.hstack([aligned, template])

# our second image alignment visualization will be *overlaying* the
# aligned image on the template, that way we can obtain an idea of
# how good our image alignment is
overlay = template.copy()
output = aligned.copy()
cv2.addWeighted(overlay, 0.5, output, 0.5, 0, output)

# show the two output image alignment visualizations
cv2.imshow("Image Alignment Stacked", stacked)
cv2.imshow("Image Alignment Overlay", output)
cv2.waitKey(0)
_____no_output_____
BSD-3-Clause
ImageAlign.ipynb
emilswan/stockstats
## Install Earth Engine API

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.

The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
# %%capture
# !pip install earthengine-api
# !pip install geehydro
_____no_output_____
MIT
Datasets/Vectors/us_census_tracts.ipynb
dmendelo/earthengine-py-notebooks
Import libraries
import ee
import folium
import geehydro
_____no_output_____
MIT
Datasets/Vectors/us_census_tracts.ipynb
dmendelo/earthengine-py-notebooks
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
# ee.Authenticate()
ee.Initialize()
_____no_output_____
MIT
Datasets/Vectors/us_census_tracts.ipynb
dmendelo/earthengine-py-notebooks
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
_____no_output_____
MIT
Datasets/Vectors/us_census_tracts.ipynb
dmendelo/earthengine-py-notebooks
Add Earth Engine Python script
dataset = ee.FeatureCollection('TIGER/2010/Tracts_DP1')
visParams = {
    'min': 0,
    'max': 4000,
    'opacity': 0.8,
}

# Turn the strings into numbers
dataset = dataset.map(lambda f: f.set('shape_area', ee.Number.parse(f.get('dp0010001'))))

# Map.setCenter(-103.882, 43.036, 8)
image = ee.Image().float().paint(dataset, 'dp0010001')

Map.addLayer(image, visParams, 'TIGER/2010/Tracts_DP1')
# Map.addLayer(dataset, {}, 'for Inspector', False)
_____no_output_____
MIT
Datasets/Vectors/us_census_tracts.ipynb
dmendelo/earthengine-py-notebooks
Display Earth Engine data layers
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
_____no_output_____
MIT
Datasets/Vectors/us_census_tracts.ipynb
dmendelo/earthengine-py-notebooks
This notebook functionizes the 'Array to ASPA' conversion. The goal is to convert any input dictionary to a usable ASPA for analysis.

IMPORTANT: During the visualisation of the images, each cmap is scaled per individual image depending on its contents. Therefore the images array has to be saved and used... Saving the PNGs will give faulty results.

TODO: Split up all features into e.g. 4 scales so they can be scaled and distinguished better? But also reserve space for 'cloud' models.

## Imports
import numpy as np
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from tqdm import tqdm
from keijzer_exogan import *

%matplotlib inline
%config InlineBackend.print_figure_kwargs={'facecolor' : "w"}  # Make sure the axis background of plots is white, this is usefull for the black theme in JupyterLab
sns.set()
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
## Load chunk

`X[0]` is a dict from a regular chunk; `X[0][0]` is a dict from a .npy selection.
dir_ = '/datb/16011015/ExoGAN_data//'

X = np.load(dir_+'selection/last_chunks_25_percent.npy')
X = X.flatten()

np.random.seed(23)  # Set seed for the np.random functions

# Shuffle X along the first axis to make the order of simulations random
np.random.shuffle(X)  # note that X = np.rand.... isn't required

len(X)

X[0].keys()

# scale the data
def scale_param(X, X_min, X_max):
    """
    Formule source: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html
    In this case 1 is max, 0 is min
    """
    std = (X-X_min) / (X_max - X_min)
    return std*(1 - 0)+0

x = X[0]
cmap = 'gray'

"""
Transforms the input dictionary (in the format of ExoGAN), to the ASPA format.

TODO:
- devide each parameter in bins and scale the data per bin (to hopefully increase the contrast in the data)
- make sure to leave space for cloud model information (max2, min2 is currently double info from max1, min1)
"""
spectrum = x['data']['spectrum']

if len(spectrum) != 515:
    print('Spectrum length != 515. breaking script')
    #break

"""
Scale the spectrum
"""
spectrum = spectrum.reshape(-1, 1)  # convert 1D array to 2D cause standardscaler requires it
scaler = MinMaxScaler(feature_range=(0,1)).fit(spectrum)

std = np.std(spectrum)
min_ = spectrum.min()
max_ = spectrum.max()

spectrum = scaler.transform(spectrum)

# fill spectrum to have a size of 529, to then reshape to 23x23
spectrum = np.append(spectrum, [0 for _ in range(14)])  # fill array to size 529 with zeroes
spectrum = spectrum.reshape(23, 23)  # building block one

# Also scale min_ max_ from the spectrum
min_ = scale_param(min_, 6.5e-3, 2.6e-2)
max_ = scale_param(max_, 6.5e-3, 2.6e-2)

"""
Add the different building blocks to each other
"""
max1 = np.full((12,6), max_)  # create array of shape 12,6 (height, width) with the max_ value
min1 = np.full((11,6), min_)

max1min1 = np.concatenate((max1, min1), axis=0)  # Add min1 below max1 (axis=0)
image = np.concatenate((spectrum, max1min1), axis=1)  # Add max1min1 to the right of spectrum (axis=1)

"""
Get all parameters and scale them
"""
# Get the param values
ch4 = x['param']['ch4_mixratio']
co2 = x['param']['co2_mixratio']
co = x['param']['co_mixratio']
h2o = x['param']['h2o_mixratio']
mass = x['param']['planet_mass']
radius = x['param']['planet_radius']
temp = x['param']['temperature_profile']

# Scale params (parm, min_value, max_value) where min/max should be the
ch4 = scale_param(ch4, 1e-8, 1e-1)
co2 = scale_param(co2, 1e-8, 1e-1)
co = scale_param(co, 1e-8, 1e-1)
h2o = scale_param(h2o, 1e-8, 1e-1)
mass = scale_param(mass, 1.5e27, 3.8e27)
radius = scale_param(radius, 5.6e7, 1.0e8)
temp = scale_param(temp, 1e3, 2e3)

# Create the building blocks
co2 = np.full((23,1), co2)
co = np.full((23,1), co)
ch4 = np.full((23,1), ch4)
mass = np.full((1,23), mass)
radius = np.full((1,23), radius)
temp = np.full((1,23), temp)
h2o = np.full((9,9), h2o)
max2 = np.full((6,12), max_)  # create array of shape 12,7 (height, width) with the max_ value
min2 = np.full((6,11), min_)

"""
Put building blocks together
"""
image = np.concatenate((image, co2), axis=1)
image = np.concatenate((image, co), axis=1)
image = np.concatenate((image, ch4), axis=1)

sub_image = np.concatenate((max2, min2), axis=1)
sub_image = np.concatenate((sub_image, mass), axis=0)
sub_image = np.concatenate((sub_image, radius), axis=0)
sub_image = np.concatenate((sub_image, temp), axis=0)
sub_image = np.concatenate((sub_image, h2o), axis=1)

image = np.concatenate((image, sub_image), axis=0)

plt.imshow(image, cmap='gray')
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
## New ASPA

Load data, combine $(R_p/R_s)^2$ with the wavelength.
i = np.random.randint(0, len(X))
x = X[i]  # select a dict from X

wavelengths = pd.read_csv(dir_+'wnw_grid.txt', header=None).values
spectrum = x['data']['spectrum']
spectrum = np.expand_dims(spectrum, axis=1)  # change shape from (515,) to (515,1)

params = x['param']

for param in params:
    if 'mixratio' in param:
        params[param] = np.log(np.abs(params[param]))  # transform mixratio's because they are generated on logarithmic scale

params
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Normalize params
# Min max values from training set, in the same order as params above: planet mass, temp, .... co mixratio.
min_values = [1.518e26, 1e3, -18.42, 5.593e7, -18.42, -18.42, -18.42]
max_values = [3.796e27, 2e3, -2.303, 1.049e8, -2.306, -2.306, -2.306]

for i, param in enumerate(params):
    params[param] = scale_param(params[param], min_values[i], max_values[i])

params

wavelengths.shape, spectrum.shape

data = np.concatenate([wavelengths, spectrum], axis=1)
data = pd.DataFrame(data)
data.columns = ['x', 'y']  # x is wavelength, y is (R_p / R_s)^2
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
## Original ExoGAN simulation

From 0.3 to 50 micron.
plt.figure(figsize=(10,5))
plt.plot(data.x, data.y, '.-', color='r')
plt.xlabel(r'Wavelength [µm]')
plt.ylabel(r'$(R_P / R_S)^2$')
plt.xscale('log')

len(data)
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Select 0.3 to 16 micron
data = data[(data.x >= 0.3) & (data.x <= 16)]  # select data between 0.3 and 16 micron

plt.figure(figsize=(10,5))
plt.plot(data.x, data.y, '.-', color='r')
plt.xlabel(r'Wavelength [µm]')
plt.ylabel(r'$(R_P / R_S)^2$')
#plt.xscale('log')
plt.xlim((2, 16))

len(data)
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
## Important!

Notice how $(R_p/R_s)^2$ by index goes from a high to a low wavelength. Apart from that, I'm assuming the spatial difference between peaks is due to plotting against the index instead of the wavelength. The spectrum (below) will remain unchanged and is encoded this way into an ASPA; the wavelength values from above therefore have to be used to transform the ASPA back into $(R_p/R_s)^2$ against wavelength.
#spectrum = np.flipud(data.y)

plt.figure(figsize=(10,5))
plt.plot(data.y, '.-', color='r')
plt.xlabel(r'Index')
plt.ylabel(r'$(R_P / R_S)^2$')
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Split the spectrum in bins
# Could loop this, but right now this is more visual
bin1 = data[data.x <= 0.8]
bin2 = data[(data.x > 0.8) & (data.x <= 1.3)]  # select data between 0.8 and 1.3 micron
bin3 = data[(data.x > 1.3) & (data.x <= 2)]
bin4 = data[(data.x > 2) & (data.x <= 4)]
bin5 = data[(data.x > 4) & (data.x <= 6)]
bin6 = data[(data.x > 6) & (data.x <= 10)]
bin7 = data[(data.x > 10) & (data.x <= 14)]
bin8 = data[data.x > 14]

bin1.head()
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Bins against wavelength
""" Visualize the bins """ bins = [bin8, bin7, bin6, bin5, bin4, bin3, bin2, bin1] plt.figure(figsize=(10,5)) for b in bins: plt.plot(b.iloc[:,0], b.iloc[:,1], '.-') plt.xlabel(r'Wavelength [µm]') plt.ylabel(r'$(R_P / R_S)^2$') #plt.xlim((0.3, 9))
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
## Bins against index

Notice how bin1 (0-2 micron) has way more datapoints than bin8 (14-16 micron).
plt.figure(figsize=(10,5))
for b in bins:
    plt.plot(b.iloc[:,1], '.-')

plt.xlabel(r'Index [-]')
plt.ylabel(r'$(R_P / R_S)^2$')
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Normalize the spectrum in bins
scalers = [MinMaxScaler(feature_range=(0,1)).fit(b) for b in bins]  # list of 8 scalers for the 8 bins

mins = [b.iloc[:,1].min() for b in bins]  # .iloc[:,1] selects the R/R (y) only
maxs = [b.iloc[:,1].max() for b in bins]
stds = [b.iloc[:,1].std() for b in bins]

bins_scaled = []
for i, b in enumerate(bins):
    bins_scaled.append(scalers[i].transform(b))

plt.figure(figsize=(10,5))
for i, b in enumerate(bins_scaled):
    plt.plot(b[:, 0], b[:,1], '.-', label=i)
plt.legend()

np.concatenate(bins_scaled, axis=0).shape
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Scaled spectrum in bins
spectrum_scaled = np.concatenate(bins_scaled, axis=0)
spectrum_scaled = spectrum_scaled[:,1]

plt.plot(spectrum_scaled, '.-')

len(spectrum_scaled)
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Start creating the ASPA
import math

aspa = np.zeros((32,32))

row_length = 25  # amount of pixels used per row
n_rows = math.ceil(len(spectrum_scaled) / row_length)  # amount of rows the spectrum needs in the aspa, so for 415 data points, 415/32=12.96 -> 13 rows
print('Using %s rows' % n_rows)

for i in range(n_rows):
    start = i*row_length
    stop = start+row_length
    spec = spectrum_scaled[start:stop]

    if len(spec) != row_length:
        n_missing_points = row_length-len(spec)
        spec = np.append(spec, [0 for _ in range(n_missing_points)])  # for last row, if length != 32, fill remaining with 0's
        print('Filled row with %s points' % n_missing_points)

    aspa[i, :row_length] = spec

plt.imshow(aspa, cmap='gray')
Using 16 rows
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Fill in the 7 ExoGAN params
params

for i, param in enumerate(params):
    aspa[:16, 25+i:32+i] = params[param]

plt.imshow(aspa, cmap='gray')
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
## Fill in the min, max, std values for the bins

TODO: Normalize these properly.
mins, maxs, stds

for i in range(len(mins)):
    min_ = scale_param(mins[i], 0.005, 0.03)
    max_ = scale_param(maxs[i], 0.005, 0.03)
    std_ = scale_param(stds[i], 1e-7, 1e-4)

    aspa[16:17, i*4:i*4+4] = min_
    aspa[17:18, i*4:i*4+4] = std_
    aspa[18:19, i*4:i*4+4] = max_
    print(min_, max_, std_)

plt.imshow(aspa, cmap='gray')
0.18304209407699484 0.18501794207820269 0.12168214443077108
0.1795876155164152 0.18350703182382339 0.3251502352023359
0.17866701946658978 0.1806349121948885 0.1224142192152229
0.17703549815458045 0.18609035470782206 0.6717569588276018
0.17673002557975623 0.18317877235056546 0.4599526393679086
0.17562721815244248 0.17967088755857144 0.27905825425832625
0.17332491781668713 0.1769439181886055 0.27431291578560724
0.17239153039711982 0.17512444145796277 0.1951118129395312
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Fill in unused space with noise
for i in range(13):
    noise = np.random.rand(32)  # random noise betweem 0 and 1 for each row
    aspa[19+i:20+i*1, :] = noise

plt.imshow(aspa, cmap='gray')
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Functionize ASPA v2
def ASPA_v2(x, wavelengths):
    spectrum = x['data']['spectrum']
    spectrum = np.expand_dims(spectrum, axis=1)  # change shape from (515,) to (515,1)

    params = x['param']

    for param in params:
        if 'mixratio' in param:
            params[param] = np.log(np.abs(params[param]))  # transform mixratio's because they are generated on logarithmic scale

    """
    Normalize params
    """
    # Min max values from training set, in the same order as params above: planet mass, temp, .... co mixratio.
    min_values = [1.518400e+27, 1.000000e+03, -1.842068e+01, 5.592880e+07, -1.842068e+01, -1.842068e+01, -1.842068e+01]
    max_values = [3.796000e+27, 2.000000e+03, -2.302585e+00, 1.048665e+08, -2.302585e+00, -2.302585e+00, -2.302585e+00]

    for i, param in enumerate(params):
        params[param] = scale_param(params[param], min_values[i], max_values[i])
        #print('%s: %s' % (param, params[param]))
        #print('-'*5)

    """
    Select bins
    """
    data = np.concatenate([wavelengths, spectrum], axis=1)
    data = pd.DataFrame(data)
    data.columns = ['x', 'y']  # x is wavelength, y is (R_p / R_s)^2

    # Could loop this, but right now this is more visual
    bin1 = data[data.x <= 0.8]
    bin2 = data[(data.x > 0.8) & (data.x <= 1.3)]
    bin3 = data[(data.x > 1.3) & (data.x <= 2)]
    bin4 = data[(data.x > 2) & (data.x <= 4)]
    bin5 = data[(data.x > 4) & (data.x <= 6)]
    bin6 = data[(data.x > 6) & (data.x <= 10)]
    bin7 = data[(data.x > 10) & (data.x <= 14)]
    bin8 = data[data.x > 14]

    bins = [bin8, bin7, bin6, bin5, bin4, bin3, bin2, bin1]

    """
    Normalize bins
    """
    scalers = [MinMaxScaler(feature_range=(0,1)).fit(b) for b in bins]  # list of 8 scalers for the 8 bins

    mins = [b.iloc[:,1].min() for b in bins]  # .iloc[:,1] selects the R/R (y) only
    maxs = [b.iloc[:,1].max() for b in bins]
    stds = [b.iloc[:,1].std() for b in bins]
    #print(min(mins), max(maxs))

    bins_scaled = []
    for i, b in enumerate(bins):
        bins_scaled.append(scalers[i].transform(b))

    spectrum_scaled = np.concatenate(bins_scaled, axis=0)
    spectrum_scaled = spectrum_scaled[:,1]

    """
    Create the ASPA
    """
    """Spectrum"""
    aspa = np.zeros((32,32))

    row_length = 25  # amount of pixels used per row
    n_rows = math.ceil(len(spectrum_scaled) / row_length)  # amount of rows the spectrum needs in the aspa, so for 415 data points, 415/32=12.96 -> 13 rows
    #print('Using %s rows' % n_rows)

    for i in range(n_rows):
        start = i*row_length
        stop = start+row_length
        spec = spectrum_scaled[start:stop]

        if len(spec) != row_length:
            n_missing_points = row_length-len(spec)
            spec = np.append(spec, [0 for _ in range(n_missing_points)])  # for last row, if length != 32, fill remaining with 0's
            #print('Filled row with %s points' % n_missing_points)

        aspa[i, :row_length] = spec

    """ExoGAN params"""
    for i, param in enumerate(params):
        aspa[:16, 25+i:26+i] = params[param]

    """min max std values for spectrum bins"""
    for i in range(len(mins)):
        min_ = scale_param(mins[i], 0.005, 0.03)
        max_ = scale_param(maxs[i], 0.005, 0.03)
        std_ = scale_param(stds[i], 9e-6, 2e-4)

        aspa[16:17, i*4:i*4+4] = min_
        aspa[17:18, i*4:i*4+4] = std_
        aspa[18:19, i*4:i*4+4] = max_

    """Fill unused space with noice"""
    for i in range(13):
        noise = np.random.rand(32)  # random noise betweem 0 and 1 for each row
        aspa[19+i:20+i*1, :] = noise

    return aspa
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Test ASPA v2 function
## Load data
i = np.random.randint(0, len(X))
dict_ = X[i]  # select a dict from X
wavelengths = pd.read_csv(dir_+'wnw_grid.txt', header=None).values

dict_['param']

aspa = ASPA_v2(dict_, wavelengths)
plt.imshow(aspa, cmap='gray')

np.random.shuffle(X)

plt.figure(figsize=(10,20))
for i in tqdm(range(8*4)):
    image = ASPA_v2(X[i], wavelengths)

    plt.subplot(8, 4, i+1)
    plt.imshow(image, cmap='gray', vmin=0, vmax=1.2)
plt.tight_layout()
100%|██████████| 32/32 [00:06<00:00, 2.92it/s]
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Creating images from all simulations in the chunk
images = []
for i in tqdm(range(len(X))):
    image = ASPA_v2(X[i], wavelengths)
    image = image.reshape(1, 32, 32)  # [images, channel, width, height]
    images.append(image)

images = np.array(images)
images.shape
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Saving this array to disk
%%time
np.save(dir_+'selection/last_chunks_25_percent_images.npy', images)
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Test loading and visualization
print('DONE')
print("DONE")
print("DONE")

images = np.load('/datb/16011015/ExoGAN_data/selection/first_chunks_25_percent_images.npy')
images.shape

plt.imshow(images[0,0,:,:])

plt.figure(figsize=(10,20))
for i in range(8*4):
    plt.subplot(8, 4, i+1)
    plt.imshow(images[i,0,:,:], cmap='gnuplot2')
plt.tight_layout()
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Randomly mask pixels from the encoded spectrum
image = images[0, 0, :, :]
plt.imshow(image)

# image[:23, :23] is the encoded spectrum.
t = image.copy()
print(t.shape)

#t[:23, :23] = 0
plt.imshow(t)
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Random uniform dropout
t = image.copy()

dropout = 0.9

for i in range(24):  # loop over rows
    for j in range(24):  # loop over cols
        a = np.random.random()  # random uniform dist 0 - 1
        if a < dropout:
            t[i-1:i, j-1:j] = 0
        else:
            pass

plt.figure(figsize=(10,10))
plt.imshow(t)

# image[:23, :23] is the encoded spectrum.
t = image.copy()

#t[:23, :23] = 0
plt.imshow(t)

t.shape
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
Range dropout
# TODO: Mask everything but the visible spectrum

def mask_image(image, visible_length, random_visible_spectrum=True):
    """
    Masks everything in an input image, apart from the start to visible_length.

    start = start wavelength/index value of the visible (non masked) spectrum
    visible_length = length of the visible spectrum (in pixels)

    output: masked_image
    """
    image_masked = image.copy()

    spectrum_length = 23*23  # length of spectrum in ASPA
    start_max = spectrum_length - visible_length  # maximum value start can have to still be able to show spectrum of length visible_length
    start = np.random.randint(0, start_max)  # start index to mask before the visible (not masked) spectrum / sequence
    stop = start + visible_length  # stop index of unmasked sequence

    spectrum = image_masked[:23, :23].flatten()  # flatten the spectrum
    spectrum[:start] = 0
    spectrum[stop:] = 0
    spectrum = spectrum.reshape(23, 23)

    #t[:, :] = 0
    image_masked[:23, :23] = spectrum
    image_masked[:, 29:] = 0  # right side params
    image_masked[29:, :] = 0  # bottom params
    image_masked[23:, 23:] = 0  # h2o

    image_masked = image_masked.reshape(1, 32, 32)  # add the channel dimension back
    return image_masked

image = images[0, 0, :, :].copy()
visible_length = 46  # length of the visible (not to mask) spectrum

image_masked = mask_image(image, visible_length)
plt.imshow(image_masked[0, :, :])
_____no_output_____
MIT
notebooks/old notebooks/dict to ASPA v2.ipynb
deKeijzer/SRON-DCGAN
# Python - Writing Your First Python Code!

Welcome! This notebook will teach you the basics of the Python programming language. Although the information presented here is quite basic, it is an important foundation that will help you read and write Python code. By the end of this notebook, you'll know the basics of Python, including how to write basic commands, understand some basic types, and how to perform simple operations on them.

## Table of Contents

- Say "Hello" to the world in Python
  - What version of Python are we using?
  - Writing comments in Python
  - Errors in Python
  - Does Python know about your error before it runs your code?
  - Exercise: Your First Program
- Types of objects in Python
  - Integers
  - Floats
  - Converting from one object type to a different object type
  - Boolean data type
  - Exercise: Types
- Expressions and Variables
  - Expressions
  - Exercise: Expressions
  - Variables
  - Exercise: Expression and Variables in Python

Estimated time needed: 25 min

## Say "Hello" to the world in Python

When learning a new programming language, it is customary to start with an "hello world" example. As simple as it is, this one line of code will ensure that we know how to print a string in output and how to execute code within cells in a notebook.

[Tip]: To execute the Python code in the code cell below, click on the cell to select it and press Shift + Enter.
# Try your first Python output
print("Hello, Python!")
Hello, Python!
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
After executing the cell above, you should see that Python prints Hello, Python!. Congratulations on running your first Python code!

[Tip:] print() is a function. You passed the string 'Hello, Python!' as an argument to instruct Python on what to print.

## What version of Python are we using?

There are two popular versions of the Python programming language in use today: Python 2 and Python 3. The Python community has decided to move on from Python 2 to Python 3, and many popular libraries have announced that they will no longer support Python 2.

Since Python 3 is the future, in this course we will be using it exclusively. How do we know that our notebook is executed by a Python 3 runtime? We can look in the top-right hand corner of this notebook and see "Python 3". We can also ask Python directly and obtain a detailed answer. Try executing the following code:
# Check version runing on Jupyter notebook
from platform import python_version
print(python_version())

# Check version inside your Python program
import sys
print(sys.version)
3.7.10
3.7.10 (default, Feb 26 2021, 18:47:35) 
[GCC 7.3.0]
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
[Tip:] sys is a built-in module that contains many system-specific parameters and functions, including the Python version in use. Before using it, we must explicitly import it.

## Writing comments in Python

In addition to writing code, note that it's always a good idea to add comments to your code. It will help others understand what you were trying to accomplish (the reason why you wrote a given snippet of code). Not only does this help other people understand your code, it can also serve as a reminder to you when you come back to it weeks or months later.

To write comments in Python, use the number symbol # before writing your comment. When you run your code, Python will ignore everything past the # on a given line.
# Practice on writing comments
print('Hello, Python!')  # This line prints a string
# print('Hi')
Hello, Python!
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
After executing the cell above, you should notice that This line prints a string did not appear in the output, because it was a comment (and thus ignored by Python). The second line was also not executed because print('Hi') was preceded by the number sign (#) as well! Since this isn't an explanatory comment from the programmer, but an actual line of code, we might say that the programmer commented out that second line of code.

## Errors in Python

Everyone makes mistakes. For many types of mistakes, Python will tell you that you have made a mistake by giving you an error message. It is important to read error messages carefully to really understand where you made a mistake and how you may go about correcting it.

For example, if you spell print as frint, Python will display an error message. Give it a try:
# Print string as error message
frint("Hello, Python!")
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
The error message tells you:

- where the error occurred (more useful in large notebook cells or scripts), and
- what kind of error it was (NameError)

Here, Python attempted to run the function frint, but could not determine what frint is since it's not a built-in function and it has not been previously defined by us either.

You'll notice that if we make a different type of mistake, by forgetting to close the string, we'll obtain a different error (i.e., a SyntaxError). Try it below:
# Try to see build in error message
print("Hello, Python!)
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
## Does Python know about your error before it runs your code?

Python is what is called an interpreted language. Compiled languages examine your entire program at compile time, and are able to warn you about a whole class of errors prior to execution. In contrast, Python interprets your script line by line as it executes it. Python will stop executing the entire program when it encounters an error (unless the error is expected and handled by the programmer, a more advanced subject that we'll cover later on in this course).

Try to run the code in the cell below and see what happens:
# Print string and error to see the running order
print("This will be printed")
frint("This will cause an error")
print("This will NOT be printed")
This will be printed
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
## Exercise: Your First Program

Generations of programmers have started their coding careers by simply printing "Hello, world!". You will be following in their footsteps.

In the code cell below, use the print() function to print out the phrase: Hello, world!
# Write your code below and press Shift+Enter to execute
print("Hello World!")
Hello World!
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
Double-click __here__ for the solution.

<!-- Your answer is below:
print("Hello, world!")
-->

Now, let's enhance your code with a comment. In the code cell below, print out the phrase: Hello, world! and comment it with the phrase Print the traditional hello world, all in one line of code.
# Write your code below and press Shift+Enter to execute
# print the traditional Hello World
print("Hello World!")
Hello World!
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
Double-click __here__ for the solution.

<!-- Your answer is below:
print("Hello, world!") # Print the traditional hello world
-->

## Types of objects in Python

Python is an object-oriented language. There are many different types of objects in Python. Let's start with the most common object types: strings, integers and floats. Anytime you write words (text) in Python, you're using character strings (strings for short). The most common numbers, on the other hand, are integers (e.g. -1, 0, 100) and floats, which represent real numbers (e.g. 3.14, -42.0).

The following code cells contain some examples.
# Integer
11

# Float
2.14

# String
"Hello, Python 101!"
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
You can get Python to tell you the type of an expression by using the built-in type() function. You'll notice that Python refers to integers as int, floats as float, and character strings as str.
# Type of 12
type(12)

# Type of 2.14
type(2.14)

# Type of "Hello, Python 101!"
type("Hello, Python 101!")
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
In the code cell below, use the type() function to check the object type of 12.0.
# Write your code below. Don't forget to press Shift+Enter to execute the cell
type(12.0)
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
Double-click __here__ for the solution.

<!-- Your answer is below:
type(12.0)
-->

## Integers

Here are some examples of integers. Integers can be negative or positive numbers. We can verify this is the case by using, you guessed it, the type() function:
# Print the type of -1
type(-1)

# Print the type of 4
type(4)

# Print the type of 0
type(0)
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
## Floats

Floats represent real numbers; they are a superset of integer numbers but also include "numbers with decimals". There are some limitations when it comes to machines representing real numbers, but floating point numbers are a good representation in most cases.

You can learn more about the specifics of floats for your runtime environment by checking the value of sys.float_info. This will also tell you what's the largest and smallest number that can be represented with them.

Once again, we can test some examples with the type() function:
# Print the type of 1.0
type(1.0)  # Notice that 1 is an int, and 1.0 is a float

# Print the type of 0.5
type(0.5)

# Print the type of 0.56
type(0.56)

# System settings about float type
sys.float_info
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
## Converting from one object type to a different object type

You can change the type of the object in Python; this is called typecasting. For example, you can convert an integer into a float (e.g. 2 to 2.0).

Let's try it:
# Verify that this is an integer
type(2)
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
### Converting integers to floats

Let's cast integer 2 to float:
# Convert 2 to a float
float(2)

# Convert integer 2 to a float and check its type
type(float(2))
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
When we convert an integer into a float, we don't really change the value (i.e., the significand) of the number. However, if we cast a float into an integer, we could potentially lose some information. For example, if we cast the float 1.1 to integer we will get 1 and lose the decimal information (i.e., 0.1):
# Casting 1.1 to integer will result in loss of information
int(1.1)
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
### Converting from strings to integers or floats

Sometimes, we can have a string that contains a number within it. If this is the case, we can cast that string that represents a number into an integer using int():
# Convert a string into an integer
int('1')
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
But if you try to do so with a string that is not a perfect match for a number, you'll get an error. Try the following:
# Convert a string into an integer with error
int('1 or 2 people')
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
You can also convert strings containing floating point numbers into float objects:
# Convert the string "1.2" into a float
float('1.2')
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
[Tip:] Note that strings can be represented with single quotes ('1.2') or double quotes ("1.2"), but you can't mix both (e.g., "1.2').

### Converting numbers to strings

If we can convert strings to numbers, it is only natural to assume that we can convert numbers to strings, right?
# Convert an integer to a string
str(1)
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
And there is no reason why we shouldn't be able to make floats into strings as well:
# Convert a float to a string
str(1.2)
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
## Boolean data type

Boolean is another important type in Python. An object of type Boolean can take on one of two values: True or False:
# Value true
True
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
Notice that the value True has an uppercase "T". The same is true for False (i.e. you must use the uppercase "F").
# Value false
False
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
When you ask Python to display the type of a boolean object it will show bool which stands for boolean:
# Type of True
type(True)

# Type of False
type(False)
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
We can cast boolean objects to other data types. If we cast a boolean with a value of True to an integer or float we will get a one. If we cast a boolean with a value of False to an integer or float we will get a zero. Similarly, if we cast a 1 to a Boolean, you get a True. And if we cast a 0 to a Boolean we will get a False. Let's give it a try:
# Convert True to int
int(True)

# Convert 1 to boolean
bool(1)

# Convert 0 to boolean
bool(0)

# Convert True to float
float(True)
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
## Exercise: Types

What is the data type of the result of: 6 / 2?
# Write your code below. Don't forget to press Shift+Enter to execute the cell
type(6/2)
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
Double-click __here__ for the solution.

<!-- Your answer is below:
type(6/2) # float
-->

What is the type of the result of: 6 // 2? (Note the double slash //.)
# Write your code below. Don't forget to press Shift+Enter to execute the cell
type(6//2)
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
Double-click __here__ for the solution.

<!-- Your answer is below:
type(6//2) # int, as the double slashes stand for integer division
-->

## Expression and Variables

### Expressions

Expressions in Python can include operations among compatible types (e.g., integers and floats). For example, basic arithmetic operations like adding multiple numbers:
# Addition operation expression
43 + 60 + 16 + 41
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
We can perform subtraction operations using the minus operator. In this case the result is a negative number:
# Subtraction operation expression
50 - 60
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
We can do multiplication using an asterisk:
# Multiplication operation expression
5 * 5
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
We can also perform division with the forward slash:
# Division operation expression
25 / 5

# Division operation expression
25 / 6
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
As seen in the quiz above, we can use the double slash for integer division, where the result is rounded down to a whole number (the fractional part is discarded):
# Integer division operation expression
25 // 5

# Integer division operation expression
25 // 6
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
### Exercise: Expression

Let's write an expression that calculates how many hours there are in 160 minutes:
# Write your code below. Don't forget to press Shift+Enter to execute the cell
160 / 60
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
Double-click __here__ for the solution.

<!-- Your answer is below:
160/60
# Or
160//60
-->

Python follows well accepted mathematical conventions when evaluating mathematical expressions. In the following example, Python adds 30 to the result of the multiplication (i.e., 120).
# Mathematical expression
30 + 2 * 60
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
And just like mathematics, expressions enclosed in parentheses have priority. So the following multiplies 32 by 60.
# Mathematical expression
(30 + 2) * 60
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
### Variables

Just like with most programming languages, we can store values in variables, so we can use them later on. For example:
# Store value into variable
x = 43 + 60 + 16 + 41
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
To see the value of x in a Notebook, we can simply place it on the last line of a cell:
# Print out the value in variable
x
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
We can also perform operations on x and save the result to a new variable:
# Use another variable to store the result of the operation between variable and value
y = x / 60
y
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
If we save a value to an existing variable, the new value will overwrite the previous value:
# Overwrite variable with new value
x = x / 60
x
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
It's a good practice to use meaningful variable names, so you and others can read the code and understand it more easily:
# Name the variables meaningfully
total_min = 43 + 42 + 57  # Total length of albums in minutes
total_min

# Name the variables meaningfully
total_hours = total_min / 60  # Total length of albums in hours
total_hours
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
In the cells above we added the lengths of three albums in minutes and stored the total in total_min. We then divided it by 60 to calculate the total length total_hours in hours. You can also do it all at once in a single expression, as long as you use parentheses to add the albums' lengths before you divide, as shown below.
# Complicate expression
total_hours = (43 + 42 + 57) / 60  # Total hours in a single expression
total_hours
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
If you'd rather have total hours as an integer, you can of course replace the floating point division with integer division (i.e., //).

### Exercise: Expression and Variables in Python

What is the value of x where x = 3 + 2 * 2?
# Write your code below. Don't forget to press Shift+Enter to execute the cell
x = 3 + 2 * 2
x
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
Double-click __here__ for the solution.

<!-- Your answer is below:
7
-->

What is the value of y where y = (3 + 2) * 2?
# Write your code below. Don't forget to press Shift+Enter to execute the cell
y = (3 + 2) * 2
y
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
Double-click __here__ for the solution.

<!-- Your answer is below:
10
-->

What is the value of z where z = x + y?
# Write your code below. Don't forget to press Shift+Enter to execute the cell
z = x + y
z
_____no_output_____
MIT
1.1-Types.ipynb
mohamedsuhaib/Python_Study
from google.colab import drive
drive.mount('/content/drive')

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

work_path = "/content/drive/My Drive/Colab Notebooks"

free_x = pd.read_csv(f"{work_path}/data/free_x.csv")
free_y = pd.read_csv(f"{work_path}/data/free_y.csv")
step_x = pd.read_csv(f"{work_path}/data/step_x.csv")
step_y = pd.read_csv(f"{work_path}/data/step_y.csv")
ms1a_x = pd.read_csv(f"{work_path}/data/ms1a_x.csv")
ms1a_y = pd.read_csv(f"{work_path}/data/ms1a_y.csv")
ms2a_x = pd.read_csv(f"{work_path}/data/ms2a_x.csv")
ms2a_y = pd.read_csv(f"{work_path}/data/ms2a_y.csv")
ms3a_x = pd.read_csv(f"{work_path}/data/ms3a_x.csv")
ms3a_y = pd.read_csv(f"{work_path}/data/ms3a_y.csv")

label_x = list(free_x.columns)
label_y = list(free_y.columns)
label_corr_x = label_x[1:7]
label_corr_y = label_y[1:]

df_free = free_x[label_corr_x].join(free_y[label_corr_y])
df_step = step_x[label_corr_x].join(step_y[label_corr_y])
df_ms1a = ms1a_x[label_corr_x].join(ms1a_y[label_corr_y])
df_ms2a = ms2a_x[label_corr_x].join(ms2a_y[label_corr_y])
df_ms3a = ms3a_x[label_corr_x].join(ms3a_y[label_corr_y])

corr_free = df_free.corr()[label_corr_x][6:]
corr_step = df_step.corr()[label_corr_x][6:]
corr_ms1a = df_ms1a.corr()[label_corr_x][6:]
corr_ms2a = df_ms2a.corr()[label_corr_x][6:]
corr_ms3a = df_ms3a.corr()[label_corr_x][6:]

fig, ax = plt.subplots(3, 2, figsize=(12,8), dpi=200)
fig.subplots_adjust(hspace=0.35, wspace=0.12)

corr_free.abs().plot.bar(ax=ax[0,0], rot=0, legend=False, title="Free Response")
corr_step.abs().plot.bar(ax=ax[0,1], rot=0, legend=False, title="Step Response")
corr_ms1a.abs().plot.bar(ax=ax[1,0], rot=0, legend=False, title="M-Series1 Response")
corr_ms2a.abs().plot.bar(ax=ax[1,1], rot=0, legend=False, title="M-Series2 Response")
corr_ms3a.abs().plot.bar(ax=ax[2,0], rot=0, legend=False, title="M-Series3 Response")
ax[1,1].legend(bbox_to_anchor=(1.2, 1.2), loc='center', fontsize=14)

for i in range(3):
    for j in range(2):
        ax[i,j].hlines([0.7], -1, 5, color="r", linestyle=":")
_____no_output_____
MIT
check_correlation.ipynb
heros-lab/colaboratory
# Planning

## Challenge

As a data scientist at a hotel chain, I'm trying to find out what customers are happy and unhappy with, based on reviews. I'd like to know the topics in each review and a score for the topic.

## Approach

- Use standard NLP techniques (tokenization, TF-IDF, etc.) to process the reviews
- Use LDA to identify topics in the reviews for each hotel (a rough sketch of this step is shown after the imports below)
    - Learn the topics from whole reviews
    - For each hotel, combine all of the reviews into a metareview
    - Use the fit LDA model to score the appropriateness of each topic for this hotel
    - Also across all hotels
- Look at topics coming up in happy vs. unhappy reviews for each hotel

## Results

## Takeaways
import logging
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import plotly.express as px
import plotly.io as pio
import pyLDAvis  # Has a warning on import
import pyLDAvis.sklearn
import pyLDAvis.gensim
import seaborn as sns

from gensim.corpora.dictionary import Dictionary
from gensim.models import LdaMulticore, Phrases, TfidfModel  # Has a warning on import
from gensim.parsing.preprocessing import STOPWORDS
from IPython.display import display
from nltk.corpus import stopwords  # Has a warning on import
from nltk.stem import WordNetLemmatizer, SnowballStemmer
from nltk.tokenize import RegexpTokenizer
from pprint import pprint
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

lemmatizer = WordNetLemmatizer()
stemmer = SnowballStemmer("english")
regex_tokenizer = RegexpTokenizer(r'\w+')
vader_analyzer = SentimentIntensityAnalyzer()

# Plot settings
sns.set(style="whitegrid", font_scale=1.10)
pio.templates.default = "plotly_white"

# Set random number seed for reproducibility
np.random.seed(48)

# Set logging level for gensim
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)

data_dir = '~/devel/insight-data-challenges/07-happy-hotel/data'
output_dir = '~/devel/insight-data-challenges/07-happy-hotel/output'
_____no_output_____
MIT
07-happy-hotel/python/happy_hotel.ipynb
leslem/insight-data-challenges
Read in and clean the data Before reading in all of the files I downloaded from the GDrive, I used `diff` to compare the files because they looked like they might be duplicates.
```
diff hotel_happy_reviews\ -\ hotel_happy_reviews.csv hotel_happy_reviews\ -\ hotel_happy_reviews.csv.csv
diff hotel_happy_reviews\ -\ hotel_happy_reviews.csv hotel_happy_reviews(1)\ -\ hotel_happy_reviews.csv
```
This indicated that three of the files were exact duplicates, leaving me with one file of happy reviews and one file of not happy reviews.
```
hotel_happy_reviews - hotel_happy_reviews.csv
hotel_not_happy_reviews - hotel_not_happy_reviews.csv.csv
```
happy_reviews = pd.read_csv(
    os.path.join(os.path.expanduser(data_dir), 'hotel_happy_reviews - hotel_happy_reviews.csv'),
)
display(happy_reviews.info())
display(happy_reviews)

# Name this bad_reviews so it's easier to distinguish
bad_reviews = pd.read_csv(
    os.path.join(os.path.expanduser(data_dir), 'hotel_not_happy_reviews - hotel_not_happy_reviews.csv.csv'),
)
display(bad_reviews.info())
display(bad_reviews)
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 26521 entries, 0 to 26520
Data columns (total 4 columns):
 #   Column       Non-Null Count  Dtype
---  ------       --------------  -----
 0   User_ID      26521 non-null  object
 1   Description  26521 non-null  object
 2   Is_Response  26521 non-null  object
 3   hotel_ID     26521 non-null  int64
dtypes: int64(1), object(3)
memory usage: 828.9+ KB
MIT
07-happy-hotel/python/happy_hotel.ipynb
leslem/insight-data-challenges
Check that the two dfs are formatted the same
assert happy_reviews.columns.to_list() == bad_reviews.columns.to_list()
assert happy_reviews.dtypes.to_list() == bad_reviews.dtypes.to_list()
_____no_output_____
MIT
07-happy-hotel/python/happy_hotel.ipynb
leslem/insight-data-challenges
Look at the data in detail
display(happy_reviews['hotel_ID'].value_counts())
display(happy_reviews['User_ID'].describe())
display(bad_reviews['hotel_ID'].value_counts())
display(bad_reviews['User_ID'].describe())
_____no_output_____
MIT
07-happy-hotel/python/happy_hotel.ipynb
leslem/insight-data-challenges
Process review text Tokenize Split the reviews up into individual words
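Before looking at the full `tokenize` function below, here is a small hedged demo of what the stemming and lemmatizing steps do; the sample sentence is made up and the printed output is not asserted:

```python
# Made-up sentence, used only to inspect the preprocessing behaviour.
sample = "The rooms were spotless and the staff was amazingly helpful"
tokens = regex_tokenizer.tokenize(sample.lower())
print([stemmer.stem(lemmatizer.lemmatize(t, pos='v')) for t in tokens])
```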
def tokenize(review):
    '''Split review string into tokens; remove stop words.

    Returns: list of strings, one for each word in the review
    '''
    s = review.lower()  # Make lowercase
    s = regex_tokenizer.tokenize(s)  # Split into words and remove punctuation.
    s = [t for t in s if not t.isnumeric()]  # Remove numbers but not words containing numbers.
    s = [t for t in s if len(t) > 2]  # Remove 1- and 2-character tokens.
    # I found that the lemmatizer didn't work very well here - it needs a little more tuning to be useful.
    # For example, "was" and "has" were lemmatized to "wa" and "ha", which was counterproductive.
    s = [stemmer.stem(lemmatizer.lemmatize(t, pos='v')) for t in s]  # Stem and lemmatize verbs
    s = [t for t in s if t not in STOPWORDS]  # Remove stop words
    return s


happy_tokens = happy_reviews['Description'].apply(tokenize)
bad_tokens = bad_reviews['Description'].apply(tokenize)
display(happy_tokens.head())
display(bad_tokens.head())

all_tokens = happy_tokens.append(bad_tokens, ignore_index=True)
_____no_output_____
MIT
07-happy-hotel/python/happy_hotel.ipynb
leslem/insight-data-challenges
Find bigrams and trigrams Identify word pairs and triplets that occur more often than a given count threshold across all reviews.
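As a hedged toy illustration of how gensim's `Phrases` joins frequent pairs with an underscore (the notebook's own code below uses `min_count=20`; the tiny thresholds here are only so the toy corpus produces a bigram at all):

```python
# Toy corpus, not real review data.
toy_corpus = [["front", "desk", "staff"],
              ["front", "desk", "clerk"],
              ["friendly", "front", "desk"]]
toy_bigrammer = Phrases(toy_corpus, min_count=1, threshold=1)
print(toy_bigrammer[["front", "desk", "staff"]])  # e.g. ['front_desk', 'staff']
```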
# Add bigrams to single tokens
bigrammer = Phrases(all_tokens, min_count=20)
trigrammer = Phrases(bigrammer[all_tokens], min_count=20)

# For bigrams and trigrams meeting the min and threshold, add them to the token lists.
for idx in range(len(all_tokens)):
    all_tokens.iloc[idx].extend(
        [token for token in trigrammer[all_tokens.iloc[idx]]
         if '_' in token])  # Bigrams and trigrams are joined by underscores
2020-04-09 15:31:27,829 : INFO : collecting all words and their counts
MIT
07-happy-hotel/python/happy_hotel.ipynb
leslem/insight-data-challenges
Remove rare and common tokens, and limit vocabulary Drop tokens that appear in fewer than 30 reviews or in more than half of all reviews, and cap the vocabulary at the 20,000 most frequent remaining tokens.
dictionary = Dictionary(all_tokens)
dictionary.filter_extremes(no_below=30, no_above=0.5, keep_n=20000)

# Look at the top 100 and bottom 100 tokens
temp = dictionary[0]  # Initialize the dict
token_counts = pd.DataFrame(np.array(
    [[token_id, dictionary.id2token[token_id], dictionary.cfs[token_id]]
     for token_id in dictionary.keys()
     if token_id in dictionary.cfs.keys() and token_id in dictionary.id2token.keys()]
), columns=['id', 'token', 'count'])
token_counts['count'] = token_counts['count'].astype('int')
token_counts['count'].describe()
token_counts = token_counts.sort_values('count')

plt.rcParams.update({'figure.figsize': (5, 3.5), 'figure.dpi': 200})
token_counts['count'].head(5000).hist(bins=100)
plt.suptitle("Counts for 5,000 least frequent included words")
plt.show()
display(token_counts.head(50))

plt.rcParams.update({'figure.figsize': (5, 3.5), 'figure.dpi': 200})
token_counts['count'].tail(1000).hist(bins=100)
plt.suptitle("Counts for 1,000 most frequent included words")
plt.show()
display(token_counts.tail(50))

# Replace the split data with the data updated with phrases
display(happy_tokens.shape, bad_tokens.shape)
happy_tokens = all_tokens.iloc[:len(happy_tokens)].copy().reset_index(drop=True)
bad_tokens = all_tokens.iloc[len(happy_tokens):].copy().reset_index(drop=True)
display(happy_tokens.shape, bad_tokens.shape)
_____no_output_____
MIT
07-happy-hotel/python/happy_hotel.ipynb
leslem/insight-data-challenges
Look at two examples before and after preprocessing
happy_idx = np.random.randint(1, len(happy_tokens))
bad_idx = np.random.randint(1, len(bad_tokens))

print('HAPPY before:')
display(happy_reviews['Description'].iloc[happy_idx])
print('HAPPY after:')
display(happy_tokens.iloc[happy_idx])

print('NOT HAPPY before:')
display(bad_reviews['Description'].iloc[bad_idx])
print('NOT HAPPY after:')
display(bad_tokens.iloc[bad_idx])
HAPPY before:
MIT
07-happy-hotel/python/happy_hotel.ipynb
leslem/insight-data-challenges
Vectorize with Bag of Words and TF-IDF
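Once the cell below has built `bow_corpus`, `tfidf_model` and `tfidf_corpus`, a quick hedged spot-check could map one review's largest TF-IDF weights back to readable tokens; document 0 is an arbitrary choice and the output is illustrative only:

```python
first_doc = tfidf_model[bow_corpus[0]]  # list of (token_id, weight) pairs
top_terms = sorted(first_doc, key=lambda pair: -pair[1])[:10]
print([(dictionary[token_id], round(weight, 3)) for token_id, weight in top_terms])
```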
bow_corpus = [dictionary.doc2bow(review) for review in all_tokens]
tfidf_model = TfidfModel(bow_corpus)
tfidf_corpus = tfidf_model[bow_corpus]

print('Number of unique tokens: {}'.format(len(dictionary)))
print('Number of documents: {}'.format(len(bow_corpus)))
len(tfidf_corpus)
2020-04-09 15:32:02,623 : INFO : collecting document frequencies
MIT
07-happy-hotel/python/happy_hotel.ipynb
leslem/insight-data-challenges
LDA topic modeling
# Fit a single version of the LDA model.
num_topics = 10
chunksize = 5000
passes = 4
iterations = 200
eval_every = 1  # Evaluate convergence at the end
id2word = dictionary.id2token

lda_model = LdaMulticore(
    corpus=tfidf_corpus,
    id2word=id2word,
    chunksize=chunksize,
    alpha='symmetric',
    eta='auto',
    iterations=iterations,
    num_topics=num_topics,
    passes=passes,
    eval_every=eval_every,
    workers=4  # Use all four cores
)

top_topics = lda_model.top_topics(tfidf_corpus)
pprint(top_topics)
2020-04-09 15:32:02,990 : INFO : using symmetric alpha at 0.1
MIT
07-happy-hotel/python/happy_hotel.ipynb
leslem/insight-data-challenges
Gensim calculates the [intrinsic coherence score](http://qpleple.com/topic-coherence-to-evaluate-topic-models/) for each topic. By averaging across all of the topics in the model you can get an average coherence score. Coherence is a measure of the strength of the association between words in a topic cluster. It is supposed to be an objective way to evaluate the quality of the topic clusters. Higher scores are better.
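As a hedged complement to the intrinsic scores reported by `top_topics`, gensim's `CoherenceModel` can compute the `c_v` coherence from the original token lists; this is an optional extra check, not part of the original notebook:

```python
from gensim.models import CoherenceModel

coherence_cv = CoherenceModel(model=lda_model, texts=all_tokens.tolist(),
                              dictionary=dictionary, coherence='c_v')
print(coherence_cv.get_coherence())  # higher is better, typically between 0 and 1
```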
# Average topic coherence is the sum of topic coherences of all topics, divided by the number of topics.
avg_topic_coherence = sum([t[1] for t in top_topics]) / num_topics
print('Average topic coherence: %.4f.' % avg_topic_coherence)
Average topic coherence: -1.2838.
MIT
07-happy-hotel/python/happy_hotel.ipynb
leslem/insight-data-challenges
References: - https://radimrehurek.com/gensim/auto_examples/tutorials/run_lda.html#sphx-glr-auto-examples-tutorials-run-lda-py - https://towardsdatascience.com/topic-modeling-and-latent-dirichlet-allocation-in-python-9bf156893c24
# This code is used to run the .py script from beginning to end in the python interpreter
# with open('python/happy_hotel.py', 'r') as f:
#     exec(f.read())
# plt.close('all')
_____no_output_____
MIT
07-happy-hotel/python/happy_hotel.ipynb
leslem/insight-data-challenges
Recommendations via Dimensionality Reduction All the content discovery approaches we have explored in previous notebooks can be used to do content recommendations. Here we explore yet another approach, but instead of considering a single article as input, we look at situations where we know that a user has read a set of articles and is looking for recommendations on what to read next. Since we have already extracted the authors, orgs and keywords for each article, we can now construct a bipartite graph between authors and articles, orgs and articles, and keywords and articles, which gives us the basis for a recommender.
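The "what to read next" step itself is not shown in this excerpt. Here is a hedged sketch of how it could work with the dense document matrix `W` produced by the NMF model later in this notebook; `read_doc_ids` is a hypothetical list of article ids the user has already read, not a variable from the original code:

```python
from sklearn.metrics.pairwise import cosine_similarity

def recommend_next(read_doc_ids, W, top_n=10):
    """Sketch: rank unread articles by similarity to the user's 'taste' profile."""
    # Average the taste vectors of the articles the user has read.
    user_profile = W[read_doc_ids].mean(axis=0, keepdims=True)
    # Rank all articles by cosine similarity to that profile.
    sims = cosine_similarity(user_profile, W).ravel()
    ranked = np.argsort(-sims)
    already_read = set(read_doc_ids)
    return [int(doc_id) for doc_id in ranked if int(doc_id) not in already_read][:top_n]
```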
from sklearn.decomposition import NMF

import joblib
import json
import numpy as np
import os
import requests
import urllib

DATA_DIR = "../data"
MODEL_DIR = "../models"
SOLR_URL = "http://localhost:8983/solr/nips2index"

FEATURES_DUMP_FILE = os.path.join(DATA_DIR, "comb-features.tsv")
NMF_MODEL_FILE = os.path.join(MODEL_DIR, "recommender-nmf.pkl")
PAPERS_METADATA = os.path.join(DATA_DIR, "papers_metadata.tsv")
_____no_output_____
Apache-2.0
notebooks/19-content-recommender.ipynb
sujitpal/content-engineering-tutorial
Extract features from index
query_string = "*:*"
field_list = "id,keywords,authors,orgs"
cursor_mark = "*"
num_docs, num_keywords = 0, 0
doc_keyword_pairs = []
fdump = open(FEATURES_DUMP_FILE, "w")
all_keywords, all_authors, all_orgs = set(), set(), set()
while True:
    if num_docs % 1000 == 0:
        print("{:d} documents ({:d} keywords, {:d} authors, {:d} orgs) retrieved"
              .format(num_docs, len(all_keywords), len(all_authors), len(all_orgs)))
    payload = {
        "q": query_string,
        "fl": field_list,
        "sort": "id asc",
        "rows": 100,
        "cursorMark": cursor_mark
    }
    params = urllib.parse.urlencode(payload, quote_via=urllib.parse.quote_plus)
    search_url = SOLR_URL + "/select?" + params
    resp = requests.get(search_url)
    resp_json = json.loads(resp.text)
    docs = resp_json["response"]["docs"]
    docs_retrieved = 0
    for doc in docs:
        doc_id = int(doc["id"])
        keywords, authors, orgs = ["NA"], ["NA"], ["NA"]
        if "keywords" in doc.keys():
            keywords = doc["keywords"]
            all_keywords.update(keywords)
        if "authors" in doc.keys():
            authors = doc["authors"]
            all_authors.update(authors)
        if "orgs" in doc.keys():
            orgs = doc["orgs"]
            all_orgs.update(orgs)
        fdump.write("{:d}\t{:s}\t{:s}\t{:s}\n"
                    .format(doc_id, "|".join(keywords), "|".join(authors), "|".join(orgs)))
        num_docs += 1
        docs_retrieved += 1
    if docs_retrieved == 0:
        break
    # for next batch of ${rows} rows
    cursor_mark = resp_json["nextCursorMark"]

print("{:d} documents ({:d} keywords, {:d} authors, {:d} orgs) retrieved, COMPLETE"
      .format(num_docs, len(all_keywords), len(all_authors), len(all_orgs)))
fdump.close()
0 documents (0 keywords, 0 authors, 0 orgs) retrieved
1000 documents (1628 keywords, 1347 authors, 159 orgs) retrieved
2000 documents (1756 keywords, 2601 authors, 214 orgs) retrieved
3000 documents (1814 keywords, 3948 authors, 269 orgs) retrieved
4000 documents (1833 keywords, 5210 authors, 311 orgs) retrieved
5000 documents (1842 keywords, 6537 authors, 350 orgs) retrieved
6000 documents (1847 keywords, 7983 authors, 385 orgs) retrieved
7000 documents (1847 keywords, 9517 authors, 420 orgs) retrieved
7238 documents (1847 keywords, 9719 authors, 426 orgs) retrieved, COMPLETE
Apache-2.0
notebooks/19-content-recommender.ipynb
sujitpal/content-engineering-tutorial
Build sparse feature vector for documentsThe feature vector for each document will consist of a sparse vector of size 11992 (1847+9719+426). An entry is 1 if the item occurs in the document, 0 otherwise.
def build_lookup_table(item_set):
    item2idx = {}
    for idx, item in enumerate(item_set):
        item2idx[item] = idx
    return item2idx

keyword2idx = build_lookup_table(all_keywords)
author2idx = build_lookup_table(all_authors)
org2idx = build_lookup_table(all_orgs)
print(len(keyword2idx), len(author2idx), len(org2idx))

def build_feature_vector(items, item2idx):
    vec = np.zeros((len(item2idx)))
    if items == "NA":
        return vec
    for item in items.split("|"):
        idx = item2idx[item]
        vec[idx] = 1
    return vec

Xk = np.zeros((num_docs, len(keyword2idx)))
Xa = np.zeros((num_docs, len(author2idx)))
Xo = np.zeros((num_docs, len(org2idx)))
fdump = open(FEATURES_DUMP_FILE, "r")
for line in fdump:
    doc_id, keywords, authors, orgs = line.strip().split("\t")
    doc_id = int(doc_id)
    Xk[doc_id] = build_feature_vector(keywords, keyword2idx)
    Xa[doc_id] = build_feature_vector(authors, author2idx)
    Xo[doc_id] = build_feature_vector(orgs, org2idx)
fdump.close()

X = np.concatenate((Xk, Xa, Xo), axis=1)
print(X.shape)
print(X)
(7238, 11992)
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]]
Apache-2.0
notebooks/19-content-recommender.ipynb
sujitpal/content-engineering-tutorial
Reduce dimensionality We reduce the sparse feature vector to a lower-dimensional dense vector, which effectively maps the original vector to a new "taste" vector space. Topic modeling has the same effect. We will use non-negative matrix factorization. The idea here is to factorize the input matrix X into two smaller matrices W and H which can be multiplied back together with minimal reconstruction error; the training phase tries to minimize that reconstruction error. $$X \approx WH$$ The W matrix can be used as a reduced, denser proxy for X.
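Once the cell below has produced `W` and `H`, a hedged sanity check could estimate the relative reconstruction error on a random sample of rows; the 500-row sample size is arbitrary and any numbers it prints are illustrative, not results from the original analysis:

```python
# Relative Frobenius-norm error on a random sample of rows (to limit memory use).
sample = np.random.choice(X.shape[0], size=500, replace=False)
rel_err = (np.linalg.norm(X[sample] - W[sample] @ H) /
           np.linalg.norm(X[sample]))
print("Relative reconstruction error on sample: {:.3f}".format(rel_err))
# sklearn also stores its own training reconstruction error on the fitted model.
print(model.reconstruction_err_)
```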
if os.path.exists(NMF_MODEL_FILE):
    print("model already generated, loading")
    model = joblib.load(NMF_MODEL_FILE)
    W = model.transform(X)
    H = model.components_
else:
    model = NMF(n_components=150, init='random', solver="cd",
                verbose=True, random_state=42)
    W = model.fit_transform(X)
    H = model.components_
    joblib.dump(model, NMF_MODEL_FILE)
print(W.shape, H.shape)
model already generated, loading
violation: 1.0
violation: 0.2411207712867099
violation: 0.0225518954481444
violation: 0.00395945567371017
violation: 0.0004979448419219516
violation: 8.176770536033433e-05
Converged at iteration 6
(7238, 150) (150, 11992)
Apache-2.0
notebooks/19-content-recommender.ipynb
sujitpal/content-engineering-tutorial